# The Secret Life of Half-Life

Guest Post by Willis Eschenbach [see update at the end of the head post]

I first got introduced to the idea of “half-life” in the 1950s because the topic of the day was nuclear fallout. We practiced hiding under our school desks if the bomb went off, and talked about how long you’d have to stay underground to be safe, and somehow in all of that we never pondered exactly how a cheap sheet metal and plywood desk would ward off an atomic bomb … a simpler time indeed. But I digress. Half-life, as many people know, is how long it takes for a given starting amount of some radioactive substance to decay until only half of the starting amount remains. For example, the half-life of radioactive caesium-137 is about thirty years. This means if you start with a gram of radioactive caesium, in thirty years you’ll only have half a gram. In thirty more years you’ll have a quarter of a gram. And in thirty years after that, there will only be an eighth of a gram of caesium remaining, and so on, ad infinitum.

This is a physical example of a common type of natural decay called “exponential decay”. The hallmark of exponential decay is that every time period, the decay is a certain percentage of what remains at that time. Exponential decay also describes what happens when a system which is at some kind of equilibrium is disturbed from that equilibrium. The system doesn’t return to equilibrium all at once. Instead, each year it moves a certain percentage of the remaining distance to equilibrium. Figure 1 shows the exponential decay after a single disturbance at time zero, as the disturbance is slowly decaying back to the pre-pulse value.

Figure 1. An example of a hypothetical exponential decay of a system at equilibrium from a single pulse of amplitude 1 at time zero. Each year it moves a certain percentage of the distance to the equilibrium value. The “half-life” and the time constant “tau” are two different ways of measuring the same thing, which is the decay rate. Half-life is the time to decay to half the original value. The time constant “tau” is the time to decay to 37% of the original value. Tau is also known as the “e-folding time”.
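The relationship between half-life and tau in the caption can be sketched in a few lines. (My own code for this post is in R; this Python fragment is purely an illustration, with an assumed tau of ten years rather than any value used in the figures.)

```python
import math

# A single pulse decays as value(t) = pulse * exp(-t / tau),
# and the half-life relates to tau by: half_life = tau * ln(2).
def decay_curve(pulse, tau, years):
    return [pulse * math.exp(-t / tau) for t in range(years + 1)]

tau = 10.0                        # assumed time constant, in years (illustrative)
half_life = tau * math.log(2)     # ≈ 6.93 years for tau = 10

curve = decay_curve(1.0, tau, 50)
print(round(half_life, 2))        # 6.93
print(round(curve[10], 3))        # 0.368 -- at t = tau, ~37% of the pulse remains
```

Note that at t = tau the remaining fraction is exp(-1) ≈ 37%, which is exactly the “e-folding time” definition in the caption.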

Note that the driving impulse in Figure 1 is a single unit pulse, and in response we see a steady decay back to equilibrium. That is to say, the shape of the driving impulse is very different from the shape of the response.

Let’s consider a slightly more complex case. This is where we have an additional pulse of 1.0 units each succeeding year. That case is shown in Figure 2.

Figure 2. An example of a hypothetical exponential decay from constant annual pulses of amplitude 1. The pulses start at time zero and continue indefinitely.

Now, this is interesting. In the beginning, the exponential decay is not all that large, because the disturbance isn’t that large. But when we add an additional identical pulse each year, the disturbance grows.

But when the disturbance grows, the size of the annual decay grows as well. As a result, eventually the disturbance levels off. After a while, although we’re adding a one-unit pulse per year, the loss due to exponential decay is also one pulse per year, so there is no further increase.

The impulse in Figure 2 is a steady addition of 1 unit per year, while the response rises and then levels off. So once again, the shape of the driving impulse is very different from the shape of the exponentially decayed response.
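That leveling-off is easy to reproduce numerically. Here’s a minimal sketch, assuming a one-unit pulse per year and an illustrative tau of ten years (again, not the values behind the figures):

```python
import math

tau = 10.0
survive = math.exp(-1.0 / tau)   # fraction of the disturbance surviving each year

stock = 0.0
for year in range(200):
    stock = stock * survive + 1.0   # decay what remains, then add this year's 1-unit pulse

# The plateau is where the annual decay exactly offsets the annual 1-unit pulse:
plateau = 1.0 / (1.0 - survive)
print(round(stock, 2), round(plateau, 2))   # both ≈ 10.51
```

After a couple of centuries the running stock is indistinguishable from the plateau value, just as Figure 2 shows.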

With that as prologue, we can look at the relationship between fossil fuel emissions and the resulting increase in airborne CO2. It is generally accepted that the injection of a pulse of e.g. volcanic gases into the planetary atmosphere is followed by an exponential decay of the temporarily increased volcanic gas levels back to some pre-existing equilibrium. We know that this exponential decay of an injected gas pulse is a real phenomenon, because if that decay didn’t happen, we’d all be choked to death from accumulated volcanic gases.

Knowing this, we can use an exponential decay analysis of the fossil fuel emissions data to estimate the CO2 levels that would result from those same emissions. Figure 3 shows theoretical and observed increases in various CO2 levels.

Figure 3. Theoretical and observed CO2 changes, in parts per million by volume (ppmv). The theoretical total CO2 from emissions (blue line) is what we’d have if there were no exponential decay and all emissions remained airborne. The red line is the observed change in airborne CO2. The amount that is sequestered by various CO2 sinks (violet) is calculated as the total amount put into the air (blue line) minus the observed amount remaining in the air (red line). The black line is the expected change in airborne CO2, calculated as the exponential decay of the total CO2 injected into the atmosphere. The calculation used best-fit values of 59 years as the time constant (tau) and 283 ppmv as the pre-industrial equilibrium level.

The first thing to notice is that the total amount of CO2 from fossil fuel emissions is much larger than the amount that remains in the atmosphere. The clear inference of this is that various natural sequestration processes have absorbed some but not all of the fossil fuel emissions. Also, the percentage of emissions that are naturally sequestered has remained constant since 1959. About 42% of the amount that is emitted is “sequestered”, that is to say removed from the atmosphere by natural carbon sinks.

Next, as you can see, using an exponential decay analysis gives us an extremely good fit between the theoretical and the observed increase in atmospheric CO2. In fact, the fit is so good that most of the time you can’t even see the red line (observed CO2) under the black line (calculated CO2).

Before I move on, please note that the amount remaining in the atmosphere is not a function of the annual emissions. Instead, it is a function of the total emissions, i.e. it is a function of the running sum of the annual emissions starting at t=0 (blue line).
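The black line in Figure 3 can be sketched as a running calculation: each year the excess over the equilibrium level decays with tau = 59 years, and that year’s emissions (converted to ppmv) are added on top. The emissions series below is a made-up ramp purely for illustration, not the real emissions record (which is in the data folder at the end of the post):

```python
import math

TAU = 59.0            # best-fit time constant from Figure 3, years
EQ = 283.0            # best-fit pre-industrial equilibrium, ppmv
GTC_PER_PPMV = 2.13   # 2.13 GtC emitted = 1 ppmv of atmospheric CO2

def modelled_co2(annual_emissions_gtc):
    f = math.exp(-1.0 / TAU)
    co2 = EQ
    out = []
    for e in annual_emissions_gtc:
        # decay the excess over equilibrium, then add this year's emissions
        co2 = EQ + (co2 - EQ) * f + e / GTC_PER_PPMV
        out.append(co2)
    return out

emissions = [2.5 + 0.12 * i for i in range(55)]   # hypothetical GtC/yr ramp
modelled = modelled_co2(emissions)
no_decay = EQ + sum(emissions) / GTC_PER_PPMV     # the "blue line": all emissions stay airborne
print(round(modelled[-1], 1), round(no_decay, 1))
```

As in Figure 3, the modelled value ends up well below the no-decay total, with the gap being what the sinks have sequestered.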

Now, I got into all of this because against my better judgment I started to watch Dr. Salby’s video that was discussed on WUWT here. The very first argument that Dr. Salby makes involves the following two graphs:

Figure 4. Dr. Salby’s first figure, showing the annual global emissions of carbon in gigatonnes per year.

Figure 5. Dr. Salby’s second figure, showing the observed level of CO2 at Mauna Loa.

Note that according to his numbers the trend in emissions increased after 2002, but the CO2 trend is identical before and after 2002. Dr. Salby thinks this difference is very important.

At approximately four minutes into the video, Dr. Salby comments on this difference with heavy sarcasm, saying:

The growth of fossil fuel emission increased by a factor of 300% … the growth of CO2 didn’t blink. How could this be? Say it ain’t so!

OK, I’ll step up to the plate and say it. It ain’t so, at least it’s not the way Dr. Salby thinks it is, for a few reasons.

First, note that he is comparing the wrong things. Observed CO2 is NOT a function of annual CO2 emissions. It is a function of total emissions, as discussed above and shown in Figure 3. The total amount remaining in the atmosphere at any time is a function of the total amount emitted up to that time. It is NOT a function of the individual annual emissions. So we would not expect the two graphs to have the same shape or the same trends.

Next, we can verify that he is looking at the wrong things by comparing the units used in the two graphics. Consider Figure 4, which has units of gigatonnes of carbon per year. Gigatonnes of carbon (GtC) emitted, and changes in airborne CO2 (parts per million by volume, “ppmv”), are related by the conversion factor of:

2.13 Gigatonnes carbon emitted = 1 ppmv CO2

This means that the units in Figure 4 can be converted from gigatonnes C per year to ppmv per year by simply dividing them by 2.13. So Figure 4 shows ppmv per year. But the units in Figure 5 are NOT the ppmv per year used in Figure 4. Instead, Figure 5 uses simple ppmv. Dr. Salby is not comparing like with like. He’s comparing ppmv of CO2 per year to plain old ppmv of CO2, and that is a meaningless comparison.
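The dimensional point is worth making concrete. A one-line conversion makes it obvious (the 10 GtC figure below is just a round illustrative number):

```python
GTC_PER_PPMV = 2.13

def gtc_per_year_to_ppmv_per_year(gtc_per_year):
    """Convert an emission RATE (GtC/yr) to an atmospheric growth RATE (ppmv/yr)."""
    return gtc_per_year / GTC_PER_PPMV

print(round(gtc_per_year_to_ppmv_per_year(10.0), 2))   # 4.69 ppmv/yr -- a rate, not a level
```

Dividing by 2.13 changes the unit of mass, but not the “per year”: Figure 4 remains a rate, while Figure 5 is a level.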

He is looking at apples and oranges, and he waxes sarcastic about how other scientists haven’t paid attention to the fact that the two fruits are different … they are different because there is no reason to expect that apples and oranges would be the same. In fact, as Figure 3 shows, the observed CO2 has tracked the total human emissions very, very accurately. In particular, it shows that we should not expect the kind of large trend change in observed CO2 around the year 2000 that Dr. Salby expects, even though such a trend change exists in the annual emissions data. Instead, the change shows up as a gradual increase in the trend of the observed (and calculated) CO2 … and the observations are extremely well matched by the calculated values.

The final thing that’s wrong with his charts is that he’s looking at different time periods in his trend comparisons. For the emissions, he’s calculated the trends 1990-2002, and compared that to 2002-2013. But regarding the CO2 levels, he’s calculated the trends over entirely different periods, 1995-2002 and 2002-2014. Bad scientist, no cookies. You can’t pick two different periods to compare like that.

In summary? Well, the summary is short … Dr. Salby appears to not understand the relationship between fossil fuel carbon emissions and CO2.

That would be bad enough, but from there it just gets worse. Starting at about 31 minutes into the video Dr. Salby makes much of the fact that the 14C (“carbon-14”) isotope produced by the atomic bomb tests decayed exponentially (agreeing with what I discussed above) with a fairly short time constant tau of about nine years.

Figure 6. Dr. Salby demonstrates that airborne residence time constant tau for CO2 is around 8.6 years. “NTBT” is the Nuclear Test Ban Treaty.

Regarding this graph, Dr. Salby says that it is a result of exponential decay. He goes on to say that “Exponential decay means that the decay of CO2 is proportional to the abundance of CO2,” and I can only agree.

So far so good … but then Dr. Salby does something astounding. He graphs the 14C airborne residence time data up on the same graph as the “Bern Model” of CO2 pulse decay, says that they both show “Absorption of CO2”, and claims that the 14C isotope data definitively shows that the Bern model is wrong …

Figure 7. Dr. Salby’s figure showing both the “Bern Model” of the decay of a pulse of CO2 (violet line), along with the same data shown in Figure 6 for the airborne residence time of CO2 (blue line, green data points).

To reiterate, Dr. Salby says that the 14C bomb test (blue line identified as “Real World”) clearly shows that the Bern Model is wrong (violet line identified as “Model World”).

But as before, in Figure 7 Dr. Salby is again comparing apples and oranges. The 14C bomb test data (blue line) shows how long an individual CO2 molecule stays in the air. Note that this is a steady-state process, with individual CO2 molecules constantly being emitted from somewhere, staying airborne in the atmosphere with a time constant tau of around 8 years, and then being re-absorbed somewhere else in the carbon cycle. This is called the “airborne residence time” of CO2. It is the time an average CO2 molecule stays aloft before being re-absorbed.

But the airborne residence time (blue line) is very, very different from what the Bern Model (violet line) is estimating. The Bern Model is estimating how long it takes an entire pulse of additional CO2 to decay back to equilibrium concentration levels. This is NOT how long a CO2 molecule stays aloft. Instead, the Bern Model is estimating how long the increased atmospheric concentration from a pulse of injected CO2 takes to decay back to pre-pulse conditions. Let me summarize:

Airborne residence time (bomb test data): how long an individual CO2 molecule stays in the air.

Pulse decay time (Bern Model):  how long the increased atmospheric concentration from a pulse of injected CO2 takes to decay back to pre-pulse conditions.

So again Dr. Salby is conflating two very different measurements—airborne residence time on the one hand (blue line), and CO2 post-pulse concentration decay time on the other hand (violet line). It is meaningless to display them on the same graph. The 14C bomb test data neither supports nor falsifies the Bern Model. The 14C data says nothing about the Bern Model, because they are measuring entirely different things.
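The distinction can be illustrated with a toy steady-state carbon cycle. The airborne stock and gross exchange flux below are round illustrative numbers (they happen to imply a residence time near eight years), and the 59-year pulse tau is the Figure 3 best fit:

```python
import math

STOCK = 800.0   # airborne carbon, GtC -- round illustrative number
GROSS = 100.0   # gross annual exchange with oceans/biosphere, GtC/yr -- illustrative

# 1) Residence time: labelled (14C-like) molecules get swapped out each year,
#    even though total CO2 never changes. Implied tau ≈ STOCK / GROSS = 8 years.
labelled = 1.0
for year in range(30):
    labelled *= 1.0 - GROSS / STOCK

# 2) Pulse decay: an EXCESS of total CO2 relaxes at the much slower rate set
#    by NET uptake (tau = 59 years, as fitted in Figure 3).
excess = math.exp(-30.0 / 59.0)

print(round(labelled, 3), round(excess, 3))   # 0.018 vs 0.601
```

After thirty years almost every labelled molecule has been exchanged, yet more than half of the concentration excess is still there … which is why the two curves cannot be compared on the same graph.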

I was going to force myself to watch more of the video of his talk. But when I got that far into Dr. Salby’s video, I simply couldn’t continue. His opening move is to compare ppmv per year to plain ppmv, and get all snarky about how he’s the only one noticing that they are different. He follows that up by not knowing the difference between airborne residence time and pulse decay time.

Sorry, but after all of that good fun I’m not much interested in his other claims. Sadly, Dr. Salby has proven to me that regarding this particular subject he doesn’t understand what he’s talking about. I do know he wrote a text on Atmospheric Physics, so he’s nobody’s fool … but in this case he’s way over his head.

Best regards to each of you on this fine spring evening,

w.

For Clarity: If you disagree with something, please quote the exact words you disagree with. That will allow everyone to understand the exact nature of your disagreement.

Math Note: The theoretical total CO2 from emissions is calculated using the relationship 1 ppmv = 2.13 gigatonnes of carbon emitted.

Also, we only have observational data on CO2 concentrations since 1959. This means that the time constant calculated in Figure 3 is by no means definitive. It also means that the data is too short to reliably distinguish between e.g. the Bern Model (a fat-tailed exponential decay) and the simple single exponential decay model I used in Figure 3.
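For what it’s worth, the best-fit tau in Figure 3 can be found by a simple grid search that minimizes the squared misfit between modelled and observed CO2. (My actual R code in the folder below may differ; here the “observed” series is synthetic, built with tau = 59, so the fit should recover that value.)

```python
import math

def model(emissions_ppmv, tau, eq=283.0):
    """Run the simple single-exponential-decay model over an emissions series (ppmv/yr)."""
    f = math.exp(-1.0 / tau)
    x, out = eq, []
    for e in emissions_ppmv:
        x = eq + (x - eq) * f + e
        out.append(x)
    return out

emissions = [1.0 + 0.05 * i for i in range(56)]   # hypothetical ppmv/yr ramp
observed = model(emissions, tau=59.0)             # synthetic "observations"

def sse(tau):
    return sum((m - o) ** 2 for m, o in zip(model(emissions, tau), observed))

best_tau = min(range(20, 120), key=sse)
print(best_tau)   # 59
```

With only ~55 years of data, the misfit surface around the minimum is quite flat, which is another way of seeing why the fitted tau is by no means definitive.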

Data and Code: I’ve put the R code and functions, the NOAA Monthly CO2 data (.CSV), and the annual fossil fuel carbon emissions data (.TXT) in a small zipped folder entitled “Salby Analysis Folder” (20 kb)

[Update]: Some commenters have said that I should have looked at an alternate measure. They said instead of looking at atmospheric CO2 versus the cumulative sum of annual emissions, I should show annual change in atmospheric CO2 versus annual emissions. We are nothing if not a full service website, so here is that Figure.

As you can see, this shows that it is a noisy system. Despite that, however, there is a reasonably good and strongly statistically significant correlation between emissions and the change in atmospheric CO2. I note also that this method gives about the same numbers for the airborne fraction that I got from my analysis upthread.

w.

## 559 thoughts on “The Secret Life of Half-Life”

1. joelobryan says:

CO2 is a red herring. A means to an end for Agenda 21. Nothing more. The proof is in the alarmists’ refusals to acknowledge any of the benefits of higher than pre-industrial 280 ppm level. Seizing control of western economies through energy utilization IS the holy grail to control and re-distribution of wealth.
The environmental impacts of higher CO2 are overplayed to achieve that end.

• Old Goat says:

That’s the whole climate change scam in a nutshell.

Nothing to do with CO2, at all.

• Daniel Kuhn says:

lol, more conspiracy theories at WUWT…..

so much for science

• meltemian says:

Agreed!

• +1

(Although I read somewhere that CO2 is negatively impacting red herring.)

• Robert Schuman says:

Sorry Max! You read the graph wrong. CO2 has caused Red Herring to spawn at 4X the normal rate.

• Max
They have entered their data upside down.
Haven’t we been here before?

• richard verney says:

There is a concept called the half-life of facts; this suggests that about 50% of matters that we take as ‘fact’ today will, in 10 years’ time, be shown not to be a true and actual FACT.

It is probable that the climate warmists’ claim that CO2 is a significant driver of global temperatures, which at least as far as the MSM, ‘consensus’ scientists and politicians are concerned is taken as being ‘fact’, will be one of these ‘facts’ which in 10 years’ time will be shown not to be an actual FACT.

• G. Karst says:

That is probably true in a free and open society. However, if the Greens and environmentalists are able to control both the media and the scientific machinery… then in 10 years… so-called facts become superfacts which are maintained until they have no useful purpose. We see something like that happening now despite our best efforts to counter it. Very dangerous indeed. GK

• wayne says:

I very much agree Joel. After nearly two years studying other planetary atmospheres in depth it is clear there is nothing there in co2.

2. “For example, the half-life of radioactive caesium is about seventy days.”

That should be: For example, IF the half-life of radioactive caesium WERE about seventy days, then …….

• SMC says:

The biological half life (how long it stays in the body) of Cs-137 is about 70 days.

3. xyzzy11 says:

Yep – well it depends on which isotope too! Caesium-135 has a half-life of about 2 million years, caesium-137 is 30 years, and caesium-134 is 2 years, but most other isotopes have half-lives measured in seconds to days.

• Willis Eschenbach says:

Thanks, Dennis and xyzzy11, fixed. Moving too fast and depending on my memory. Such is life.

w.

• george e. smith says:

So Willis, If a 14CO2 molecule is removed from the atmosphere, that presumably is the consequence of some process. For example, it might have dissolved in the water of some droplet, that then rained out.
Now that marked molecule might re-enter the atmosphere; perhaps I drank the rain drop and later exhaled the 14CO2 back into the atmosphere.

So some of the “decayed” excess 14CO2 can get recycled back to the atmosphere.

Presumably, the observed abundance reflects any recirculation of marked molecules

??

I tend to think of CO2 (and H2O) as PERMANENT components of the atmosphere, and whether a particular molecule get replaced by another particular molecule, is somewhat irrelevant.

G

At the north pole, where grows no trees, some 18-20ppmm of excess CO2 (three times the ML amount) gets removed in as little as five months.

So the decay rate would eliminate an excess of 120ppmm (above 280ppmm) in 30 months, suggesting a decay time constant of 2 1/2 years for whatever processes occur at the north pole.

• Willis Eschenbach says:

george e. smith April 20, 2015 at 1:28 pm

So Willis, If a 14CO2 molecule is removed from the atmosphere, that presumably is the consequence of some process. For example, it might have dissolved in the water of some droplet, that then rained out.
Now that marked molecule might re-enter the atmosphere; perhaps I drank the rain drop and later exhaled the 14CO2 back into the atmosphere.

So some of the “decayed” excess 14CO2 can get recycled back to the atmosphere.

Presumably, the observed abundance reflects any recirculation of marked molecules

Thanks, George, interesting question. First, yes, the observed abundance is a measured value, so perforce it must include everything.

However, I doubt very much if the recirculation is at all significant. Once an atom of 14C leaves the atmosphere, it enters the very, very much larger reservoir of carbon circulating around the carbon cycle. This immediately dilutes it by orders of magnitude.

In addition, the various atoms will be recirculated at different times. This dilutes them in time, as some of them will not reappear for decades, centuries, or longer.

As a result of this large temporal and spatial dilution, it seems to me that there wouldn’t be any significant recirculation. And indeed, this is borne out by the fact that the 14C numbers decayed all the way back to the pre-bomb pulse values.

Regards,

w.

• Billy Liar says:
• Well that’s interesting. Thanks. It certainly warrants confirmation, one way or another.

4. Kasuha says:

I think that text about figure 7 needs a little more explanation.

Nuclear tests did not measurably increase the amount of CO2 in the atmosphere; they only altered its isotopic composition (assuming all the 14C released in those tests ended up in CO2, which I’m not quite sure about). Therefore its decay line follows 14C diffusion between the atmosphere and other carbon storages (biosphere, soil, surface sea waters, deep sea waters, …) in near-balanced conditions, and while atmospheric CO2 concentrations remain nearly constant.

Bern model estimates decay of doubling of CO2 in atmosphere. That’s not simple overturn, that’s when there suddenly appears large disproportion in balance between saturation of individual carbon storages and even the overturn rate changes significantly.

A 14C pulse in such conditions would show even faster decay.

5. KTM says:

It seems that your Figure 3 could be reproduced using any given year as a starting point.

To properly analyze the problem put forward by Dr. Salby, it seems that you should re-draw Figure 3 two times, first showing the curve over his specified 1990–2001 period, then again using his 2002–2014 period. According to you they should be equal, since the observed rate of increase in CO2 from Mauna Loa has been identical over both periods. Perhaps you could overlay them for us, like you did for your black and red curves in Figure 3, so we can see for ourselves that the human emissions predict there should be an identical rate of rise over both periods.

6. Nylo says:

First, note that he is comparing the wrong things. Observed CO2 is NOT a function of annual CO2 emissions. It is a function of total emissions, as discussed above and shown in Figure 3. The total amount remaining in the atmosphere at any time is a function of the total amount emitted up to that time. It is NOT a function of the individual annual emissions. So we would not expect the two graphs to have the same shape or the same trends.

Hi Willis,

I agree that “we would not expect the two graphs to have the same shape or the same trends”. As I didn’t watch Salby’s video, I don’t know if that’s what he claims that should be happening, I will assume yes. If so, he is wrong. However, once one assumes that the increase in CO2 concentration is totally our fault, and IMO it is (as your figure 3 shows, Nature is actually working to try to counter it rather than adding more), then a significant increase of how much CO2 we emit should be followed by a significant increase in how much CO2 concentration rises. If we were, in 2013, emitting to the atmosphere the equivalent of 11*0.14=1.54 ppmv of CO2 more per year than we were in 2002, even if Nature partially counters that increase, we should be seeing an increase of the speed at which CO2 increases, of probably not as much as 1.54ppmv/year, but still SOME increase. Half of it? 40% of it? I don’t know. But we would NOT expect it to remain the same that it was in 2002. And that’s significant, in my opinion.

Do you, perhaps, think that the rate should be what is observed? Just by eyeballing, even your model of CO2 decay seems to not expect that. In the later part, it goes from below the red line to above the red line in the end, so your model seems to think that we should have seen a higher increase of CO2 concentration in the last 10 years or so.

Kind regards,

• Mike Jonas says:

Nylo – The curious thing is that although the ocean is absorbing about half the amount of CO2 that mankind is emitting, that does not mean that mankind is responsible for all of the atmospheric increase. If mankind had emitted no CO2, and if the temperature had done exactly what the temperature has done (gone up a bit) then atmospheric CO2 would actually have increased a bit through emission from the ocean.

• Richard111 says:

I’m not sure about your last sentence, Mike. If the oceans had warmed slightly, wouldn’t the absorption of CO2 have decreased?

• Alex says:

Richard111

Partial pressure increase = greater solubility. Temperature increase = lower solubility. Balancing act.

• Daniel Kuhn says:

Without our CO2 emissions temps would not have gone up in the late 20th century.

Also the oceans would still be mostly sinks ….

• Daniel Kuhn says:

“then atmospheric CO2 would actually have increased a bit through emission from the ocean.”

no

• Mike Jonas says:

Richard111 – (As Alex said) A warmer ocean holds less CO2 so releases it into the atmosphere (or absorbs less from the atmosphere).

• MIke,

Yes, but the equilibrium between oceans and atmosphere only changes some 8 ppmv/°C. That means the ~0.8°C increase since the LIA is good for about a 6 ppmv increase in the atmosphere. That is all.
The rest of the 110 ppmv increase is from humans, who emitted some 200 ppmv over the past 160 years…

• Mike Jonas says:

If mankind had emitted no CO2, and if the temperature had done exactly what the temperature has done (gone up a bit) then atmospheric CO2 would actually have increased a bit through emission from the ocean.

I agree. Global T has fluctuated slightly due to natural forcing, not because of CO2 (not saying that CO2 has no effect, but any warming from CO2 is just too small to measure). But all the available evidence shows that ∆CO2 follows ∆T, not vice-versa.

And pay no attention to anyone who asserts that “Without our CO2 emissions temps would not have gone up in the late 20th century.”

That is no more than a religious Belief. Without evidence showing cause and effect, statements like that belong on Hotwhopper, not on a science site.

• Willis Eschenbach says:

Mike Jonas April 20, 2015 at 12:28 am

Nylo – The curious thing is that although the ocean is absorbing about half the amount of CO2 that mankind is emitting, that does not mean that mankind is responsible for all of the atmospheric increase. If mankind had emitted no CO2, and if the temperature had done exactly what the temperature has done (gone up a bit) then atmospheric CO2 would actually have increased a bit through emission from the ocean.

Mike, you are right about what happens … but the effect is quite small. As the ocean warms it outgasses, increasing the atmospheric CO2.

However, both theoretical and observational studies put the size of this effect at something around 15 ppmv per degree C of temperature rise. This puts the ocean thermal contribution of the 20th century (a warming of about 0.6°C) at about 10 ppmv of CO2. This 10 ppmv is far too small to explain the total CO2 increase of about 100 ppmv during that same time.

w.

• Catherine Ronconi says:

Daniel Kuhn
April 20, 2015 at 1:27 am

There is not a shred of evidence supporting your baseless assertion that without man-made CO2 the late 20th century would not have warmed.

The warming from c. 1977-96 can be accounted for without considering the rise in CO2 levels, the effect of which, if any, is negligible. The world warmed from the end of the LIA in the mid-19th century naturally in cycles related to oceanic oscillations. The early 20th century warming from the late ‘teens to late ’40s was followed by the cooling from then until the late ’70s, so the next cycle due was warming. The late 20th century warming looks virtually identical to the early 20th century warming cycle.

CO2 alarmism fails to reject the null hypothesis, ie that the late 20th century warming was entirely or predominantly from the same natural causes that produced all previous such warming cycles within the Holocene and prior interglacials.

• Steven Mosher says:

DB

“But all the available evidence shows that ∆CO2 follows ∆T, not vice-versa.”

This is wrong. In the first place, increases in CO2 both precede and follow increases in temperature.
AGW theory tells us this will be so, and the data show that. In fact the “lag” was predicted before it was ascertained.

Finally there is this paper.

Here is a hint…

“We have known for a while that the Earth has historically had higher levels of greenhouse gases during warm periods than during ice ages. However, it had so-far remained impossible to discern cause and effect from the analysis of information (in encapsulated gas bubbles) contained in ice cores.

An international team of researchers led by Egbert van Nes from Wageningen University (Netherlands) now used a novel mathematical insight developed to have a fresh look at the data. The analysis reveals that the glacial cycles experienced by the planet over the past 400,000 years are governed by strong internal feedbacks in the Earth system. Slight variations in the Earth orbit known as Milankovitch cycles, functioned merely as a subtle pacemaker for the process. In addition to the well understood effect of greenhouse gases on the Earth temperature, the researchers could now confirm directly from the ice-core data that the global temperature has a profound effect on atmospheric greenhouse gas concentrations. This means that as the Earth temperature rises, the positive feedback in the system results in additional warming.

“A fundamental insight by George Sugihara from the USA on how one can use observed dynamics in time series to infer causality caused a big splash in the field,” explains Egbert van Nes. “It immediately made us wonder whether it could be used to solve the enigma of the iconic correlated temperature and gas history of the Earth.”

Indeed this riddle has proven hard to solve. A slight lead of Antarctic temperature over CO2 variations has been argued to point to temperature as a driver of CO2 changes. However, more recent studies cast doubt on the existence of a significant time-lag between CO2 and temperature.

“It can be highly misleading to use simple correlation to infer causality in complex systems,” says George Sugihara from Scripps Institution of Oceanography (USA). “Correlations can come and go as mirages, and cause and effect can go both ways as in kind of chicken and egg problem, and this requires a fundamentally different way to look at the data.”

As direct evidence from data has been hard to achieve, Earth system models are used as a less direct alternative to quantify causality in the climate system. However, although the effects greenhouse gases on Earth’s temperature are relatively well understood, estimating the actual strength of this effect is challenging, because it involves a plethora of mechanisms that are difficult to quantify and sometimes oppose each other.

“Our new results confirm the prediction of positive feedback from the climate models,” says Tim Lenton, a climate researcher from the University of Exeter (UK). “The big difference is that now we have independent data based evidence.”

http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate2568.html

The following Movie will show you how it works

http://simplex.ucsd.edu/Movie_Sall.mov

and one of the guys. not your typical “ivory tower” kind of scientist
http://en.wikipedia.org/wiki/George_Sugihara

• Mike Jonas says:

Ferdinand and Willis – I was addressing the statement “once one assumes that the increase in CO2 concentration is totally our fault, and IMO it is“. But you are both right, the effect is modest, and I should have said so.

• Daniel Kuhn says:

Catherine Ronconi

nope. you are wrong. very wrong. but hey, when you think you can explain the late 20th century warming, publish your evidence in the scientific literature and see if it can convince the experts on this topic.

• D. Kuhn says:

when you think you can explain the late 20th cenutry warming…

It is not the job of scientific skeptics to explain, but rather, to debunk. We have done an excellent job of debunking the failed notion that CO2 is the control knob of the climate.

==================

Steven Mosher,

“AGW Theory”…

NOT.

The AGW conjecture cannot make accurate, consistent predictions. So it cannot be a ‘theory’.

• Steven Mosher says:

DB

I note that you have run away from the debate on “lag” and the science of causality in dynamical systems.
Again, WRT the “lag”

1. It was predicted by the theory
2. It was discovered thus confirming the theory.

Evidence comes in three varieties: evidence can CONFIRM a theory (not prove it), evidence can DISCONFIRM a theory (theories are not falsified, they are modified), and evidence can be unrelated to a theory. When we predict that the world will warm and it does, that is confirmation. When we predict the world will warm by 2C and it warms by 1C, that is still confirmation, and also an indication that improvement is possible.

“The AGW conjecture cannot make accurate, consistent predictions. So it cannot be a ‘theory’.”

Wrong.
The theory has made successful predictions since its inception.
For example, in the 1930s Guy Callendar predicted that if we increase CO2, the temperature would go up.
He was correct: the temperature did go up.
The accuracy of the predictions is also pretty damn good considering the complexity of the system.
Warming predictions run about 2C per century and we see something consistently less than that. Let’s put it this way: if the warming were only 1C per century it would STILL be a good prediction, good enough to base policy on. Faced with the question “how much will it warm?” we have these two answers:

A) A best estimate of 2C, but it is likely high.
B) Skeptics who blather that we can’t know

No decision maker is going to listen to a person who says that they can’t know. It’s Tuesday morning, my weekly forecast is due and it’s running as I type. It will be wrong, sometimes 15% high, sometimes 20%, and sometimes 40% high. Nevertheless, we take action based on the forecast and the model because it works better than shrugging your shoulders. A few critics try to point out errors that everyone knows about, as if they were accomplishing something by merely being critical. Doubt doesn’t win. Critics who can’t improve a model are never listened to. They make a lot of noise, but they have no power.

You can’t do science by merely criticizing. If you don’t work to improve understanding, if you avoid the debate as skeptics do by shrugging their shoulders, then no one will listen to you.

• Steven Mosher, April 20, 2015 at 11:57 am

Steven, I have my reservations about the CO2 lead-lag in ice cores.

It is rather difficult to see any lead or lag during a deglaciation, as over the ~5000 years of warming, at least 4000 years of the CH4 and CO2 rise overlap with the temperature rise.

During the onset of a new ice age, things are quite clear: CO2 drops thousands of years later than temperature and CH4, which are synchronous.
That is not a matter of problems with the gas age – ice age difference, since CH4 is measured in the gas phase just as CO2 is.

Here for the previous interglacial:

where temperature is at a new minimum and ice sheets at a new maximum before CO2 starts to drop…

Moreover, according to prof. Lindzen, the energy needed to melt the ice sheets over the period of the warming is about 200 W/m2 continuous. The extra supply by increasing CO2 levels is less than 2 W/m2…

• Crispin in Waterloo says:

Nylo

When reading the article I fully expected Willis to show what the emission and absorption rate would have to be to get a straight line increase. I thought that is where the argument was headed. All the elements are there.

This sort of addresses some comments below which suggest people are having trouble adding the two curves together in their minds: the increasing decay rate as the total disturbance in concentration continues to go up, and the fixed disturbance followed by a percentage decay.

To get a straight-line increase in concentration, the emission rate could not be constant; it would have to be enough to cover absorption plus the amount of the increase. As the total went up, the amount needed to cover absorption would increase, plus any more generating another increase.
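[Editor’s sketch, not from Crispin’s comment: in a one-box model where the sink removes an assumed fixed fraction k of the disturbance above equilibrium each year, a straight-line rise in concentration indeed requires emissions that grow every year, covering the sink plus the fixed increment.]

```python
# Sketch: emissions needed for a linear concentration rise when the sink
# removes fraction k of the excess above equilibrium each year.
# All numbers below are assumed for illustration.

def emissions_for_linear_rise(c_eq, rate, k, years):
    """Yearly emissions (ppmv/yr) needed for concentration to climb by
    `rate` ppmv/yr when the sink removes fraction `k` of the excess over c_eq."""
    c = c_eq
    needed = []
    for _ in range(years):
        sink = k * (c - c_eq)        # absorption grows with the disturbance
        needed.append(rate + sink)   # must cover the sink plus the increment
        c += rate
    return needed

e = emissions_for_linear_rise(c_eq=280.0, rate=2.0, k=0.02, years=5)
# e rises year over year: 2.0, 2.04, 2.08, ...
```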

On a minor point about the gigaton to ppm thing:

W: “The theoretical total CO2 from emissions is calculated using the relationship 1 ppmv = 2.13 gigatonnes of carbon emitted.”

By my calculated guess, “carbon” here does not mean “carbon dioxide”. It seems to be the approximate mole mass of air divided by the mole mass of carbon: 25.6/12 ~2.13. Because the terms carbon and carbon dioxide are used interchangeably in the alarmosphere, always take the time to find out which is being used. The conversion number for CO2 would be 25.6/44 ~0.58. If I am wrong someone please correct it.

• Crispin,

GtC as carbon is a mass; ppmv CO2 in the atmosphere is a ratio: the volume of CO2 in the total volume of air. Including the ratio of the molecular masses, that gives ~2.13 GtC/ppmv.
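[Editor’s note: the ~2.13 figure can be sanity-checked from first principles. The numbers below are assumed round values for the atmosphere’s mass and the molar masses, not figures from the comment.]

```python
# Rough check of the ~2.13 GtC per ppmv conversion (assumed round numbers).
M_ATM = 5.137e18   # mass of the atmosphere, kg (assumed)
M_AIR = 28.97      # mean molar mass of air, g/mol
M_C   = 12.011     # molar mass of carbon, g/mol

moles_air = M_ATM * 1000.0 / M_AIR            # total moles of air
gtc_per_ppmv = moles_air * 1e-6 * M_C / 1e15  # 1 ppmv of CO2, as grams of C -> Gt
print(round(gtc_per_ppmv, 2))  # ~2.13
```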

The near-linear increase in the rate of change of emissions, of the increase in CO2, and of the sink rate was caused by the slightly quadratic increase of total emissions and of the amount in the atmosphere over time:

7. asybot says:

Just a thought, and I am no scientist. But to me, one of the main reasons that high-atmospheric nuclear testing, and eventually all testing, in the late 50’s and 60’s (including especially the hydrogen and neutron bomb tests) was stopped had nothing to do with radiation. I believe the reason they stopped is that those political powers realized that the EMP impacts of those tests were having a serious effect on all of their unprotected electronics, on the battlefield and at home, for both sides, and so nobody would “win” (so back to the horse and buggy, and the proliferation of the arms race in conventional weaponry). But gee, I could be wrong..

• GregK says:

Nobody would have won whatever they did.

Einstein, I believe, said that he didn’t know what World War 3 would be fought with but he knew what weapons would be used in World War 4…….sticks and stones

• asybot says:

Thanks GregK I had forgotten about that statement, I wonder who has the bigger sticks these days though
ISIS?

8. ghl says:

Willis, you have confused me.
“It is the time an average CO2 molecule stays aloft before being re-absorbed.”
What property of the molecule is averaged?

• Alex says:

It’s based on probabilities. Billions upon billions of molecules: some may be absorbed almost immediately, some may take many years. It probably follows a bell-shaped curve, and the centre of the curve would be the average time.

• Owen in GA says:

The really bad thing is that it is probably more like a Boltzmann distribution. The problem is those are hard to deal with, so we forget the long tail and pretend it is a normal bell curve to make the statistics work out nicely. Most of the time it works out close enough for instrument resolution, so we go with it.
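[Editor’s sketch: under the simplest assumption, a constant per-year probability of absorption for every airborne molecule, the waiting times come out geometric (a discrete exponential), strongly skewed rather than bell-shaped, with the mean equal to the residence time. The probability 0.2/yr below is assumed for illustration.]

```python
import random

# Sketch: every airborne molecule has the same assumed probability p of being
# absorbed in any given year. Waiting times are then geometric (a discrete
# exponential): heavily skewed, with mean 1/p years.
random.seed(42)
p = 0.2  # assumed per-year absorption probability

def years_aloft():
    t = 1
    while random.random() > p:  # survived another year aloft
        t += 1
    return t

times = [years_aloft() for _ in range(100_000)]
mean_time = sum(times) / len(times)  # close to 1/p = 5 years
```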

• Willis Eschenbach says:

ghl April 19, 2015 at 11:00 pm

Willis, you have confused me.

“It is the time an average CO2 molecule stays aloft before being re-absorbed.”

What property of the molecule is averaged?

Thanks, ghl. We’re averaging the length of time that a molecule of CO2 stays airborne—the time between being emitted somewhere into the atmosphere, and then later being re-absorbed into some other part of the carbon cycle (plants, the ocean, etc.).

w.

• george e. smith says:

Willis, I think (sometimes dangerous) that what you intend is ” the MEAN time that a TYPICAL CO2 molecule stays aloft is blah blah blah ..”

And I too would doubt that it is bell shaped.

I tend to stay away from “average” unless I mean a strict mathematical average, which of course implies a given data set of exactly known numbers.

Well I shun averages anyway; they are too late to do anything about.

g

9. Mike Jonas says:

re the half-life of atmospheric CO2 : The main sink of atmospheric CO2 is the ocean. The sink rate is proportional to the difference in CO2 partial pressure between atmosphere and ocean. This pressure difference has a half-life which is, by my calcs, around 13 years. IOW all other things being equal, the pressure difference halves in about 13 years.

But – and it is quite a big “But” – that doesn’t mean that half of that extra atmospheric CO2 goes into the ocean in 13 years, because the absorption of CO2 pushes up the CO2 partial pressure in the ocean (and that’s complicated by chemical reactions in the ocean that reduce it). The end effect so far has been that around half of the total amount of man-made CO2 in the last few decades has gone into the ocean, and is likely to continue to do so for quite a long time yet (statements by alarmists that the ocean is getting “saturated” appear to be false).

But in the long term, the atmospheric CO2 concentration will end up higher than it otherwise would be, because of the increased CO2 partial pressure in the ocean. How long will that take to go down? I don’t know. But with a new glacial coming in the next few thousand years, I sincerely hope that the extra ocean CO2 partial pressure lasts a lot longer than that, because that extra atmospheric CO2 is really going to be needed to keep the world’s plant life going.
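[Editor’s sketch: taking Mike’s 13-year half-life as given, the equivalent time constant and the remaining fraction after t years follow from standard exponential decay; only the 13-year figure is his, the rest is arithmetic.]

```python
import math

# Arithmetic behind a 13-year half-life for the pressure difference.
half_life = 13.0                   # years (Mike's figure, taken as given)
tau = half_life / math.log(2)      # e-folding time, ~18.8 years

def remaining(t_years):
    """Fraction of the initial pressure difference left after t years."""
    return 0.5 ** (t_years / half_life)

# remaining(13) -> 0.5 by construction, remaining(26) -> 0.25
```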

• Mike Jonas says:
But with a new glacial coming in the next few thousand years, I sincerely hope that the extra ocean CO2 partial pressure lasts a lot longer than that, because that extra atmospheric CO2 is really going to be needed to keep the world’s plant life going.

HI Mike,
I agree with your comment in general, and have posted similar comments for several years.
However, I have been severely time-constrained in recent years so have done no detailed work.
Have you run the numbers?
Does the great “low-CO2” extinction event occur in the next Ice Age, or the one after that, or…?

Regards, Allan

• kim says:

The sun and the biome conspire to almost irreversibly sequester carbon. If man did not exist, it would be useful to invent him.
===========================

• kim says:

Fortunately, once man’s pitiful little aliquot of injected carbon is exhausted, and atmospheric CO2 resumes its inevitable downward progress, the progress will be slow enough for plants to evolve, as they have already done.
=========================

• Mike Jonas says:

Hi Allan – Sorry, no I haven’t run the numbers. I am not sure whether my formulae are valid all the way down into a full glacial (eg, sea ice area is important, and a modest error there could grow errors exponentially over time). So if I think that medical advances can keep me alive long enough to observe it directly, I’ll consider moving to the tropics and watch events from there. [NB. That “if” is as in formal logic p->q]

• Alex says:

No problem. Produce more cement.

• Owen in GA says:

Or start mining and burning clathrates

• https://wattsupwiththat.com/2015/03/14/matt-ridley-fossil-fuels-will-save-the-world-really/#comment-1883937

[excerpt]

I have no time to run the numbers, but I do not think we have millions of years left for carbon-based life on Earth.

Over time, CO2 is ~permanently sequestered in carbonate rocks, so concentrations get lower and lower. During an Ice Age, atmospheric CO2 concentrations drop to very low levels due to solution in cold oceans, etc. Below a certain atmospheric CO2 concentration, terrestrial photosynthesis slows and shuts down. I suppose life in the oceans can carry on but terrestrial life is done.

So when will this happen – in the next Ice Age a few thousand years hence, or the one after that ~100,000 years later, or the one after that?

In geologic time, we are talking the blink of an eye before terrestrial life on Earth ceases due to CO2 starvation.
________________________

I wrote the following on this subject, posted on Icecap.us some months ago:

On Climate Science, Global Cooling, Ice Ages and Geo-Engineering:
[excerpt]

Furthermore, increased atmospheric CO2, from whatever cause, is clearly beneficial to humanity and the environment. Earth’s atmosphere is clearly CO2-deficient, and CO2 continues to decline over geological time. In fact, atmospheric CO2 at this time is too low, dangerously low for the longer-term survival of carbon-based life on Earth.

More Ice Ages, which are inevitable unless geo-engineering can prevent them, will cause atmospheric CO2 concentrations on Earth to decline to the point where photosynthesis slows and ultimately ceases. This would devastate the descendants of most current [terrestrial] life on Earth, which is carbon-based and to which, I suggest, we have a significant moral obligation.

Atmospheric and dissolved oceanic CO2 is the feedstock for all carbon-based life on Earth. More CO2 is better. Within reasonable limits, a lot more CO2 is a lot better.

As a devoted fan of carbon-based life on Earth, I feel it is my duty to advocate on our behalf. To be clear, I am not prejudiced against non-carbon-based life forms, but I really do not know any of them well enough to form an opinion. They could be very nice. :-)

Best, Allan

• Mike M. says:

Allan MacRae,

You wrote: “Over time, CO2 is ~permanently sequestered in carbonate rocks, so concentrations get lower and lower.”
That is not correct. Carbonate rocks get subducted into the mantle, where they decompose under the high temperature. The CO2 is eventually returned to the atmosphere via volcanoes. Otherwise, life would have disappeared a very long time ago.

Atmospheric CO2 has gone up and down over time, within a moderately constrained range (something like 180 to 1500 ppmv). So there are negative feedbacks. Less CO2 in the atmosphere means less uptake, whether by plants, weathering rocks, or into the ocean. Also, lower temperature, which slows weathering. More CO2 has the opposite effect. So we need not fear a low CO2 extinction.

• Jan Christoffersen says:

Mike M,

Minor amounts of carbonate rocks are subducted with oceanic plates into the Earth’s crust because there is very little carbonate in deep oceanic sediments (CO2 dissolves at great ocean depths). Most carbonate deposits are located in stable continental basins, for example the huge Great Basin of central-western North America, a shallow-sea environment that has sequestered countless trillions of tons of CO2 in carbonate rocks for well over 400 million years.

• Peter says:

Yes, Mike, you are right, and you must also count sequestration on dry land, most of which is stable for hundreds of millions of years.
So there must exist a longer cycle in which carbonate rocks from the continental shelf and continents are recycled and changed back to CO2 in the atmosphere.
I would bet on methane and oil in this. After some time, most of the limestone on the continents and continental shelf changes to methane and oil and then seeps back to the surface, as it is lighter than rock, replenishing CO2 in the atmosphere.

• Catherine Ronconi says:

Mike M.
April 20, 2015 at 7:43 am

For the Cenozoic and Mesozoic Eras and the Carboniferous and Permian Periods, you’re right about the upper limit on CO2 level, but not for the first 184 million years of the Paleozoic Era, nor for most if not all of the billions of years of the Pre-Cambrian Eons. In the Cambrian, Ordovician, Silurian and Devonian Periods, atmospheric CO2 levels were in the several thousands of parts per million, and even (much) higher during the Pre-Cambrian.

• Mike – I am not an expert in this area, but I suggest your subduction argument fails due to the magnitudes of the CO2 sinks (large) vs. sources (small).

If I am wrong, why do we have areally huge, thick stable beds of carbonates all over the planet?

You seem to be saying the sinks and sources are in equilibrium, and I suggest they are not.

I suppose one should run the numbers (if possible) but I do not have the time.

Regards, Allan

• Kim says:
Fortunately, once man’s pitiful little aliquot of injected carbon is exhausted, and atmospheric CO2 resumes its inevitable downward progress, the progress will be slow enough for plants to evolve, as they have already done.

I say:
Kim, I presume you are referring to C3 vs. C4 plants etc.
I repeat, I have not run the numbers, but I suggest this extinction event could happen in a few thousand years, or a few hundred thousand years – the blink of an eye in geologic time.
Do you really think plants will have time to adapt?

Alex and Owen:
That’s a lot of cement.
What to do with it? Pave the planet to keep the dust down?
That’s quite a treadmill we will be on. :-)

10. Mike McMillan says:

I’m having difficulty understanding why the residence time of a pulse of CO2 should be different from the residence time of its individual molecules. The carbon involved is brand new carbon, created from nitrogen, and it is a measurable pulse.

Further explanation is needed.

• Alex says:

Carbon is element 6 and nitrogen is element 7. It’s not very likely for one to change into the other outside some complex nuclear process.

• Owen in GA says:

They are talking about those produced by a nuclear explosion. The bomb releases a very large number of neutrons, which thermalize in the atmosphere. When one strikes a 14N nucleus, it spits out a proton to form 14C. The 14C then oxidizes in the atmosphere to form CO2. The reaction cross-section of the 14N(n,p)14C reaction is about 2 barns, which is fairly large. You just have to have a large source of thermal neutrons to make lots of it, and a nuclear bomb is just such a source.

• Suppose you have a market stall with high cash turnover. You make a big sale: someone pays you $1000 in $10 notes. At the end of the day, you may not have many of those notes left, but you still have the benefit of the sale.

• Kasuha says:

With the 14C experiment, you mark all the money owned by people in a selected city and study how long it takes until they get rid of all of it, replacing it with unmarked money.
With the Bern model, you double the amount of money owned by the people in a selected city and study how long it takes until they return to being equally rich as the rest of the nation.

• kim says:

Hmmmm. It seems to me the Bern model is unphysical, neglecting all the negative feedbacks recruited in a more gradual rise in CO2. And therein lies all the difference.
===========

• You would have to take into account that the ocean is venting some 90 GtC/y in the form of CO2 into the atmosphere, without much 14C, while it sucks up 92 GtC/y from the atmosphere with the 14C. That way the carbon cycle rinses out 14C rather quickly from the atmosphere.

http://en.wikipedia.org/wiki/Carbon_cycle

Note also that this is a process with some fractionation: the heavier (14)CO2 has more affinity for water, so it goes into the water more easily than it comes out, compared to normal (12)CO2.

• Mike M. says:

Mike McMillan,

“I’m having difficulty understanding why the residence time of a pulse of CO2 should be different from the residence time of its individual molecules. The carbon involved is brand new carbon, created from nitrogen, and it is a measurable pulse.

Further explanation is needed.”

With radioactive decay, the process is one way: K-40 decays to give Ar-40, but the reverse does not happen. So there is never an equilibrium.

But CO2 taken up by the oceans and plants is not one way. The plants die, decay, and return the CO2 to the atmosphere. CO2 molecules pass from the atmosphere into the ocean and vice versa.

So let D be the rate at which CO2 dissolves in the ocean and let E be the rate at which CO2 evaporates from the ocean. The residence time of individual molecules is C/D, where C is the amount of CO2 in the atmosphere. After a pulse of CO2 into the atmosphere, the rate at which the atmospheric concentration decreases is D-E. So the time constant for the decrease is C/(D-E); that is much larger, i.e., slower.

For C14, there is essentially none in the ocean, so the time constant you measure is the shorter one, C/D.
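[Editor’s sketch of Mike M.’s two time constants, with assumed round numbers (C = 800 GtC airborne, gross uptake D = 100 GtC/yr, gross return E = 97 GtC/yr; these are illustrative values, not his figures).]

```python
import math

# Two time constants from the comment above, with assumed round numbers.
C = 800.0   # GtC in the atmosphere (assumed)
D = 100.0   # gross uptake by ocean and plants, GtC/yr (assumed)
E = 97.0    # gross return flux to the atmosphere, GtC/yr (assumed)

residence_time  = C / D        # an individual molecule's average stay: 8 yr
adjustment_time = C / (D - E)  # decay of a bulk pulse: ~267 yr

# After 20 years, a labelled tracer with no return flux (like bomb 14C) is
# mostly gone, while the bulk excess has barely decayed:
tracer_left = math.exp(-20 / residence_time)   # ~8% left
pulse_left  = math.exp(-20 / adjustment_time)  # ~93% left
```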

• gareth says:

The carbon-14 half-life is 5730 years, much longer than either time constant that we are discussing. Therefore, even uncorrected for radioactive decay, 14C concentration can validly be used to measure the sequestration time constant.
I think Willis is wrong to say that there is a difference between an individual molecule and the bulk gas. The 14C is merely a marker, mixed in with the bulk, and thus will behave no differently from the bulk. Otherwise he is implying that the sequestration time constant is a function of the magnitude of the impulse.

• Gareth,

The problem is the time delay between sink and source for the deep oceans: what goes into the deep has the isotopic composition of today (with some fractionation), but what comes out of the deep oceans has the composition of ~1000 years ago, which was less than half of what goes in today. That makes the decay rate of a 14CO2 peak at least a factor of 4 faster than that of a 12/13CO2 peak…

• That makes no sense – the real airborne fraction is decreasing and is now only about 40%. The 2002 problem is actually a decreasing AF since about 2002 (increasing emissions and flat atmospheric growth).

• edimbukvarevic,

There is no law that says the airborne fraction should remain the same; natural sinks are highly variable, as they depend on temperature (El Niño), drought, sunlight (Pinatubo) and the total CO2 pressure in the atmosphere. If you look at the period 1987-1995, there was a similar drop, even with increasing temperature, while the current period shows a flat temperature…

11. Dodgy Geezer says:

…somehow in all of that we never pondered exactly how a cheap sheet metal and plywood desk would ward off an atomic bomb…

Purely as an aside, sitting under a desk would be expected to minimise injury from flying glass, and burns from a heat pulse passing through the window.

There’s not a lot you can easily do to protect someone sitting in the open inside the radius of a nuclear fireball. But if a nuclear attack on a city is underway, the bombs will be airbursts, relying on the blast and heat pulse to spread destruction very widely. Heat pulses will cause fires many miles from ground zero, and one of the easy, cheap things you can do to reduce destruction, if you have time, is to paint your windows white (since many people have some white paint handy). The heat pulse comes through a window before the blast (while the window is still intact), and would start thousands of small fires inside exposed homes.

This simple action could save hundreds of thousands of lives in a nuclear exchange. So it was suggested in Civil Defence pamphlets of the 1950s. And the Ban the Bomb activists mocked it so successfully that it was taken out – thus ensuring far greater carnage. It was about then, in my formative years, that I began to adopt the cynicism which has marked my adult ones….

• Purely as an aside, sitting under a desk would be expected to minimize injury from flying glass, and burns from a heat pulse passing through the window.

Just last year, sitting under a desk would have saved many people from flying-glass injuries in the overhead blast of the Siberian meteorite. People went to the windows to “see”, and thus were blinded.

• meltemian says:

“Cold War kids were hard to kill,
under their desks in an air-raid drill”
….. Billy Joel

• PiperPaul says:

…sitting under a desk would be expected to minimise injury…

Yeah. There’s more than a little ignorant self-flattery (sound familiar?) involved with people scoffing at and mocking “duck and cover”.

• BFL says:

Unless you lived anywhere near a Minuteman silo complex, as the Russians had two 20-25 megaton warheads targeting each control center for these. The yields were large because the targeted silos were “hardened” and underground, and the Russian missiles weren’t very accurate at the time. Now note that these high-yield bombs were dirt-diggers designed to reach the American silo control centers, so try imagining what a ground burst of 25 megatons would do, not only to the surrounding (large) area but to those areas/states downwind of the huge amount of radioactive fallout generated. Not pretty, to say the least. About 400 of these silos still exist, and though the Russian missiles are now more accurate and therefore somewhat smaller (0.8 Mt), a ground burst of this size would still create a lot of havoc. Some of the Russian launch complexes are also in the Ukraine.
http://www.huffingtonpost.com/2012/09/20/minuteman-missiles-hidden-silos-america_n_1897913.html
http://en.wikipedia.org/wiki/R-36_%28missile%29

• milodonharlani says:

This book excerpt in the HuffPo is absurd:

“Those are 450 ICBMs still capable of reaching targets around the world as quickly as you could have a pizza delivered to your door. This represents countless megatons of thermonuclear material— enough to turn the world into what journalist Jonathan Schell once warned would be a “republic of insects and grass.””

The megatons are in fact easily counted. Two hundred of the Minuteman III missiles are being or have been fitted with single W87 warheads, yielding 300 kilotons each. That totals 60 megatons. The other 250 retain their old, triple-MIRVed W78 warheads with up to 350 Kt, for a maximum of 262.5 Mt. The grand total then is under 322.5 Mt.
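[Editor’s check: the totals above add up as stated, using the yields given in the comment.]

```python
# Checking the warhead arithmetic, with the yields stated above (in Mt).
w87_mt   = 200 * 0.300      # 200 single W87s at 300 kt -> 60 Mt
w78_mt   = 250 * 3 * 0.350  # 250 triple-MIRVed W78s at up to 350 kt -> 262.5 Mt
total_mt = w87_mt + w78_mt  # 322.5 Mt grand total
warheads = 200 + 250 * 3    # 950 warheads
```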

It is preposterous to assert that 950 warheads each yielding 300 to 350 Kt could turn the world into “a republic of insects and grass”. In the first place, they’d be used against enemy military targets, since they were designed to attack Soviet ICBM silos. But even if they were used in a homicidal, optimum burst height attack against the largest cities in the world, they couldn’t kill all humans (probably fewer than a billion), let alone everything except insects and grass.

• milodonharlani says:

The order of magnitude range of megadeaths in such an improbable attack would be 100 to 1000, most likely in the low hundreds.

12. Who cares what is causing the CO2 amount to rise? It doesn’t do squat to alter temperature so it’s irrelevant

• kim says:

Well, if it does, it alters it to the beneficial side, so it’s all good. Relax, and roll with the punches, er, uh, adapt.
================

13. Evan HIghlander says:

Just a question… not so much about radioactive decay, but thinking about exponential “decay” as part of a resonant cycle: would/could the disturbance amplitude INCREASE again, and then too cyclically grow exponentially to some maximum? What would cause that to happen, and what would be the parameters of such a cyclical event?

• Alex says:

Energy tends to dissipate. You might have several cycles running in a system, and the amplitudes may coincide to give a ‘bump’, but the overall signal would diminish, unless you are continuing to feed energy into the system.

• Willis Eschenbach says:

Thanks, Alex. In this case, we’re not measuring energy. We’re measuring carbon and the carbon cycle, which doesn’t “tend to dissipate”.

Regards,

w.

14. kim says:

Simple. The elephant is enlarging, outgassing, and net sinking. We need more biologists touching the elephant.
========================

• Alex says:

EeeeW!

• kim says:

Yeah, sure, I’m as blind as everyone else.
===============

15. Chris Hanley says:

I have no opinion concerning Dr Salby’s ideas because I’m not a scientist nor a related professional; my only interest is as a taxpayer and forced consumer of idiotically expensive wind-generated power.
Here’s an oddity, though, which I’ve never seen explained: according to Law Dome ice core proxies, CO2 started to rise …

… long before human CO2 emissions from fossil fuels kick-off just after WW2:

• “long before human CO2 emissions from fossil fuels kick-off just after WW2”
That is analysed here. The initial rise was due to the forest clearing that came with European colonisation.

• kim says:

Or rice, or sumpin’. Nick’s not sure.
============

• kim says:

But he’ll confidently assert.
====================

• Phlogiston says:

So what caused the fall of CO2 between the Cambrian and the Carboniferous? Presumably palaeo-tourism by our time-travelling descendants. (It has to be humans, at least we’re agreed on that, I mean, what else could it be??)

• Crispin in Waterloo says:

What you are saying, Nick, is that the CO2 rise chart is not accurate. You are saying that the AG CO2 output by mankind started much earlier and was much more massive than all the chart makers claim. The implication is that men with hand axes can change the composition of the atmosphere, and therefore the planetary climate.

Nonsense.

• Chris Hanley says:

“The initial rise was due to the forest clearing that came with European colonisation …” Nick Stokes 1:19 am.
====================================
Lord knows how Dr Houghton (an ecologist with interests in the role that terrestrial ecosystems play in climate change and the global carbon cycle) came up with global land-use data back to 1850.
Emeritus Professor Michael Williams at Oxford, while recognising the historical uncertainties of forest clearing, burning etc., has dismissed the notion of pristine forests and pastures prior to European colonisation:
“Whether in Europe, the Americas, Africa, or Asia, the record is clear — the axe, together with dibble-and-hoe cultivation, and later the light plough, often integrated with pastoral activity in Old World situations, reduced the extent of the forest. Fire was particularly destructive in this process. It was not pristine wilderness in which the indigenous inhabitants were either incapable or unwilling to change anything. Everywhere, it was a far more altered world and forest than has been thought up to now”:

• “The implication is that men with hand axes can change the composition of the atmosphere”

Men with hand axes changed the landscape, in US, Australia, Canada. We know that. You can calculate the carbon implications. Houghton and others have done that.

• DEEBEE says:

Nick, why can’t your analysis be wrong?

• Phlogiston says:

Over the course of the Pleistocene, the continent of Africa, at least, has alternated between dominance of forest and of grassland, due to climate change (the natural, real kind) linked to glacial phases. This resulted in selection for adaptability and the evolution of humans. How was this related to CO2, and why was that bad or good?

• Curious George says:

Any data regarding forest CO2 uptake vs corn field CO2 uptake vs rice field?

• Carbon Indians theory:
Most Indians died from diseases after Columbus discovered America,
and thus their practice of burning the prairie ended, diminishing CO2
and starting the Little Ice Age.

• Owen in GA says:

Nice theory… not sure of the timings… but if there were a missing /sarc, it could be totally approved dogma.

• Dodgy Geezer says:

… CO2 started to rise …… long before human CO2 emissions from fossil fuels kick-off just after WW2:

Yes. About 1750. When the Industrial Revolution started. In England…

16. kim says:

I’d also challenge Willis’s 42%. It’s more like 55% and is not constant, but very slightly increasing. Hard to explain, if I’m right, and I’m not sure.
=================

• kim says:

Recruitment of negative feedbacks, probably biological.
==========

• kim says:

Hmmmm. I may have been thinking backwards about this. Nonetheless, I maintain that the percent of new emissions that is sequestered is not a constant, but slightly increasing, even in the face of rising emissions. If I’m wrong, and it is a constant, the increasing sequestration still needs a ready explanation.
=============

There are two different numbers: the CO2 increase is about 42% of total CO2 added (including land-use change), or 55% of fossil-fuel emissions alone.

17. richardscourtney says:

Friends

Absence of correlation indicates absence of direct causation
but
correlation does not indicate causation.

Hence, the agreement of the red and black lines in Willis’ Figure 3 only indicates a possibility that the recent rise in atmospheric CO2 concentration is caused by anthropogenic (i.e. man-made) CO2 emissions overloading the ability of the carbon cycle to sequester all the CO2 emissions both natural and anthropogenic.

At issue is whether that possibility is true: e.g. the IPCC says it is and Salby says it is not. Available data is not sufficient to resolve this issue although there are people on both ‘sides’ of the issue who select data to support their claims.

However, the recently launched OCO-2 satellite promises to provide data capable of resolving the issue in less than a year from now.

It is interesting to note that the preliminary NASA satellite data seems to refute the IPCC CO2 sink and source model supported by Willis in his above article.

I commend this essay on WUWT where Ronald D Voisin considers “Three scenarios for the future of NASA’s Orbiting Carbon Observatory”.

In particular, I ask people to peruse the illustration at the link which shows ‘Average carbon dioxide concentration Oct 1 to Nov 11, 2014 from OCO-2′. The very low CO2 concentration over highly industrialised northern Europe and especially the UK contradicts the ‘overload hypothesis’ used e.g. by the IPCC to assert that anthropogenic CO2 emissions are overloading CO2 sinks and, thus, causing the recent rise in atmospheric CO2 concentration. The low concentration in that region indicates that over the short time of the illustration the natural and anthropogenic CO2 emissions were all being sequestered local to their sites of emission. Clearly, this finding directly contradicts the ‘overload hypothesis’: however, it is for a very short time period and, therefore, the finding may be misleading.

When at least an entire year has been monitored by OCO-2 then it will be possible to observe if the anthropogenic CO2 emissions are or are not overloading the ability of the CO2 sinks to absorb them. We only have to wait a year for OCO-2 to indicate if the IPCC or Salby or neither of them is right. And there is no good reason to pretend anybody now knows which of them is right.

Richard

• richardscourtney says:

Friends:

I failed to provide the link to Voisin’s article. Sorry.

Richard

• kim says:

This is key. We have poor understanding of the carbon cycle, and as understanding improves, the paradigm will shift. Just how, I dunno.
=================

• kim says:

One thing is for sure, that is that a higher atmospheric CO2 level is good for carbon based life forms. Now we just have to convince ourselves that we are carbon based life forms, and the rest is easy.
==============================

• Chris Hanley says:

Ronald Voisin’s droll comment about NASA’s ad hoc hypothesis explaining an inconvenient OCO-1 image viz.: “Australian industrial activity may have pushed it’s CO2 output upwind into the lush forests of Malaysia” caused much LOLing.

• Mike Jonas says:

I commented on the original as follows:
Mike Jonas December 29, 2014 at 4:06 pm
Ronald D Voisin’s analysis is incorrect. If you look at the graph of CO2 concentration against time, you will see that the seasonal variation dominates in the short term. Only over a period of several years is the growth in CO2 concentration apparent. So it is not at all surprising that short term local factors dominate the CO2 pattern at any one point in time. This does not in any way disprove that fossil fuel usage has been the major driver of CO2 concentration over the last few decades.

• richardscourtney says:

Mike Jonas

I wrote

Available data is not sufficient to resolve this issue although there are people on both ‘sides’ of the issue who select data to support their claims.

and

When at least an entire year has been monitored by OCO-2 then it will be possible to observe if the anthropogenic CO2 emissions are or are not overloading the ability of the CO2 sinks to absorb them. We only have to wait a year for OCO-2 to indicate if the IPCC or Salby or neither of them is right. And there is no good reason to pretend anybody now knows which of them is right.

Ronald D Voisin’s analysis is incorrect. If you look at the graph of CO2 concentration against time, you will see that the seasonal variation dominates in the short term. Only over a period of several years is the growth in CO2 concentration apparent. So it is not at all surprising that short term local factors dominate the CO2 pattern at any one point in time. This does not in any way disprove that fossil fuel usage has been the major driver of CO2 concentration over the last few decades.

I respond because your reply demonstrates the truth of my assertions that “there are people on both ‘sides’ of the issue who select data to support their claims” and “We only have to wait a year for OCO-2 to indicate if the IPCC or Salby or neither of them is right. And there is no good reason to pretend anybody now knows which of them is right.”

You clearly are unwilling to wait the remaining months, and those who claim “fossil fuel usage has been the major driver of CO2 concentration over the last few decades” have a responsibility to demonstrate that their claim is right: others only have a duty to demand that they do demonstrate it.

Your arm-waving about different time-scales is twaddle in the context of your argument (it is very relevant to what I think is actually happening but that, too, is not relevant).

Willis provides the Mauna Loa data as his Figure 5 above. There is no indication that the sinks are overloading.

It is not true that as you claim, “Only over a period of several years is the growth in CO2 concentration apparent”. NO! The “growth in CO2 concentration” is the residual of the seasonal variation each year, and it is “apparent” each year. Importantly, that seasonal variation which provides each annual rise indicates that the sinks are NOT overloaded.

The seasonal variation is a ‘saw tooth’ consisting of rapid linear increase followed by rapid linear decrease in each year. The residual of the seasonal increase occurs because the ‘down slope’ is shorter than the ‘up slope’. Importantly, the rate of decrease does not reduce before the reversal which it would if the sinks were being filled: clearly the sinks do NOT fill.

I can also argue this the other way. But the most cogent argument is that the sinks don’t fill. Available data cannot resolve the matter but a year of OCO-2 data probably will.

Richard

• Mike says:

re richardscourtney April 20, 2015 at 2:07 am

Allan MacRae, Ole Humlum and others have shown that short-term d/dt CO2 is proportional to SST.

There are several depths of ocean sinks each with their own time constant, upwelling and downwelling currents in Indian ocean, tropics and poles; land, biosphere sinks and sources. It’s massively complex which is why Willis’ single exponential fit does not tell us that much.

OCO2 may well help see exactly where this is happening and thus clarify sinks and sources and establish that there is a strong temperature dependency. I’m not sure that one or even several years’ data will be enough to work out the centennial component of all this.

• richardscourtney says:

Mike

It seems I may not have been adequately clear.

You say to me and I very strongly agree

OCO2 may well help see exactly where this is happening and thus clarify sinks and sources and establish that there is a strong temperature dependency. I’m not sure that one or even several years’ data will be enough to work out the centennial component of all this.

However, I wrote

When at least an entire year has been monitored by OCO-2 then it will be possible to observe if the anthropogenic CO2 emissions are or are not overloading the ability of the CO2 sinks to absorb them. We only have to wait a year for OCO-2 to indicate if the IPCC or Salby or neither of them is right. And there is no good reason to pretend anybody now knows which of them is right.

Seeing that the sinks are or are not being overloaded is not the same as determining “the centennial component of all this”.

Please note that Mike Jonas has provided an example of the common assertion that the claim of anthropogenic emissions overloading the carbon cycle has to be disproved. That assertion is often made although it is an example of superstition (those who make a claim need to justify it, while others only have a duty to shout “Prove it!”). However, it seems likely that OCO-2 data will disprove it.

Richard

• Mike says:

Thanks Richard,

I think we will see that there is a strong temperature dependency short term but that has already been shown. OCO2 will give us some good regional detail.

You are probably right that just one year may be enough to establish that sinks are not being saturated. Though that can probably be seen also from existing data like the annual cycle plot I posted above.

It’s also interesting to look at rate of change at MLO with longer filter.

• richardscourtney says:

Mike

It seems that you and I have entered a state of mutual strong agreement.

Yes, the Mauna Loa data does provide the suggestion you say. My point is that several things make suggestions but – at present – nothing provides definitive evidence, and accumulation of OCO-2 data for a complete year promises to indicate or refute the assertion that anthropogenic CO2 emissions are overloading the carbon sinks to cause the recent rise in atmospheric CO2.

I am willing to wait for that data although I have more reason than most to not want to wait that long.

Richard

• Mike says:

Yes Richard, it seems we agree on much of this.

Whether sinks are saturating is not really a yes/no question. It should perhaps be: to what extent, if any, is the rate of uptake of atmospheric CO2 being slowed by partial saturation?

The whole question of saturation seems to relate to the idea of the Revelle buffer: complex interdependencies of various ion concentrations in the surface layer of the oceans. This is the king-pin of the absorption argument, and without it there is little question that the oceans have an immense capacity to absorb CO2.

What do you think we will be able to get from OCO2 (Ohhh-CO2!) data that will constrain the idea of the Revelle buffer, which seems still to be largely a speculative hypothesis?

• Mike Jonas says:

Richard – I am actually arguing much the same case as you. When I say “it is not at all surprising that” I am arguing that it is too soon to jump to conclusions from the short term (<1yr) data and that there is an alternative possible explanation. I argue that we need to wait for several years of data before we can draw particular conclusions. You argue similarly but nominate one year. I argue for several years, because with an annual average increase of around 2ppm, a single year’s data isn’t going to be conclusive. When I say “does not in any way disprove” I mean “does not disprove”, not “does prove”.
The most curious part of your criticism of me is the bit about the sinks not overloading. I emphatically did not say that the sinks are overloading, for a very simple reason: I have done a lot of calculations on ocean and atmospheric CO2, and there is no evidence that I can find that the ocean is becoming saturated; on all measures, the rate of absorption of CO2 by the ocean is either stable or accelerating slightly.

• richardscourtney says:

Mike

What do you think we will be able to get from OCO2 (Ohhh-CO2!) data that will constrain the idea of the Revelle buffer, which seems still to be largely a speculative hypothesis?

Please note that my answer is not an evasion. When the data exists then people will be able to discuss what it indicates but, until then, discussion of what the data will or will not indicate is pointless.

“I don’t know” is a scientific statement that has been much neglected in the ‘climate debate’.

Richard

• Phlogiston says:

Richard Courtney is right – the OCO and corresponding Japanese satellite data expose 99% of “science” on CO2 sources as witchcraft with no factual basis or relevance.

Voisin is right. The Sahara desert emits more CO2 than Manhattan. Suck on that.

• richardscourtney says:

Mike Jonas

Sincere thanks for the clarification.

I apologise if I misunderstood you and – I hope – your clarification will correct any misunderstanding I provided for onlookers.

For clarity to assert any similarity and/or difference between our views, I address your saying to me

I argue that we need to wait for several years of data before we can draw particular conclusions. You argue similarly but nominate one year. I argue for several years, because with an annual average increase of around 2ppm, a single year’s data isn’t going to be conclusive.

My point is about the specific issue of whether the recent rise in atmospheric CO2 concentration is caused by anthropogenic CO2 emissions overloading the ability of the sinks to sequester all the total CO2 emission (i.e. both anthropogenic and natural).

If that overloading is causing the rise then all regions of major anthropogenic CO2 emissions (i.e. major industrial regions) must have an above average atmospheric CO2 concentration over a year if the rise occurs in that year.

As my first post in this sub-thread explained

When at least an entire year has been monitored by OCO-2 then it will be possible to observe if the anthropogenic CO2 emissions are or are not overloading the ability of the CO2 sinks to absorb them. We only have to wait a year for OCO-2 to indicate if the IPCC or Salby or neither of them is right. And there is no good reason to pretend anybody now knows which of them is right.

The important point is the ability of sinks near to the anthropogenic emissions to sequester the total of emissions (both natural and anthropogenic) near to the anthropogenic emissions.

Richard

• george e. smith says:

Richard, the Mauna Loa CO2 data has only about one third of the cyclic variation that occurs at the north pole, where the amplitude is more like 18-20 ppm. And the up slope is for about seven months, and the downslope for five months.

So the down slope at the NP is able to expunge 120 ppm excess CO2 (over 280 ppm) in about 30 months, so the decay time constant is about 2.5 years for whatever goes on at the NP.
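For onlookers, the arithmetic in this comment can be sketched quickly. The 18-20 ppm amplitude, five-month downslope and 120 ppm excess are the commenter's own figures, not measurements I can vouch for; the sketch simply treats the initial drawdown rate as excess divided by the time constant:

```python
# Back-of-envelope check of the comment's arithmetic (all inputs are the
# comment's own assumptions, not vetted measurements).
seasonal_drawdown_ppm = 19      # ~18-20 ppm seasonal swing at the North Pole
downslope_months = 5            # length of the drawdown phase
excess_ppm = 120                # CO2 above the assumed 280 ppm baseline

rate_ppm_per_month = seasonal_drawdown_ppm / downslope_months
months_to_expunge = excess_ppm / rate_ppm_per_month   # linear extrapolation
tau_years = months_to_expunge / 12                    # implied time constant

print(f"drawdown rate: {rate_ppm_per_month:.1f} ppm/month")
print(f"months to remove {excess_ppm} ppm at that rate: {months_to_expunge:.0f}")
print(f"implied time constant: {tau_years:.1f} years")
```

Run as-is, this reproduces the "about 30 months" and "~2.5 years" in the comment.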

My guess it is the fresh water sea ice melting, to become CO2 deprived ocean water, which then rapidly sucks up (or down) its Henry’s Law amount of atmospheric CO2.

When the refreeze starts in the fall, the CO2 segregation coefficient between the liquid/solid phases expels the CO2 from the ice into the sea water from which it is then expelled to the atmosphere, because the cold ocean water is already at its Henry’s Law saturation level.

Just my opinion.
g

• Crispin in Waterloo says:

Thank you George E

I have been saying it for some time and the math doesn’t look too wrong to countenance it. There is lake ice and snow cover to consider as well. Ice and snow have basically zero CO2 in them. Not too surprisingly they appear in winter and disappear in summer (mostly). I am not so sure about Edmonton…

:)

• richardscourtney says:

george e. smith

Thank you for your “opinion” that – as Crispin in Waterloo says – fits the data better than the ‘overload hypothesis’ promoted by e.g. the IPCC.

I iterate my point that we need data to assess the possible “opinions” and I have high hopes that OCO-2 will provide needed data.

Richard

• richard verney says:

Richard

Obviously, we need to see data from a full year (i.e., all 4 seasons) before reaching preliminary conclusions (although we already know from the Japanese Satellite Data that the IPCC assumptions were questionable).

I understood that NASA would be releasing the data every 3 months. There ought to have been a March update (being 3 months after the December 2014 release). That update is about 1 month overdue.

Does NASA not like what the Nov 12 to Feb 11th data plot/photo/scan shows?

• Manfred says:

NASA successfully launched its first spacecraft dedicated to studying atmospheric carbon dioxide at 2:56 a.m. PDT (5:56 a.m. EDT) on Wednesday, July 2, 2014.

richardscourtney, that year you refer to is nearly up.
So, do we trust the data? Will the raw data require pre-Paris adjustment?

• richardscourtney says:

Manfred

NASA successfully launched its first spacecraft dedicated to studying atmospheric carbon dioxide at 2:56 a.m. PDT (5:56 a.m. EDT) on Wednesday, July 2, 2014.

richardscourtney, that year you refer to is nearly up.
So, do we trust the data? Will the raw data require pre-Paris adjustment?

Only NASA can provide proper answers to your questions. I offer my response.

As Ferdinand Engelbeen says here

Ferdinand Engelbeen
April 20, 2015 at 11:06 am

The new data are available at:
http://disc.sci.gsfc.nasa.gov/datareleases/First_CO2_data_from_OCO-2

That URL provides this link to URLs to actual data files.

Those files presently run from 2 January 2015 to 9 March 2015. The preliminary data released in graphical form were for the previous October.

My opinions on the above facts are:
1.
The satellite was launched in early July 2014 but the earliest data is for October 2014 which seems a long time (i.e. 3 months) for establishing and calibrating the satellite for its purpose.
2.
It is strange that the data is not collated in graphical form for each month as they were for October 2014: the software to produce the graphs clearly exists (or did when the October data was collated).
3.
It is strange that the data was obtained for October 2014 but the provided data file does not commence until 2 January 2015.
4.
If OCO-2 data for October 2014 to October 2015 were available then there would be a continuous 12-month OCO-2 data set prior to the IPCC Paris Meeting.
5.
If OCO-2 data is only available for after 2 January 2015 there would not be a continuous 12-month OCO-2 data set prior to the IPCC Paris Meeting.
6.
I anticipate that there will not be a continuous 12-month OCO-2 data set prior to the IPCC Paris Meeting because embarrassing facts need to be avoided at the time of the Meeting.

Richard

• kim says:

Is that one of those things, rsc, they call a falsifiable prediction?
====================

• richardscourtney says:

kim

Yes.

Richard

18. Mike says:

Nice informative article as always Willis. However, a few things are not correct.

The 14C bomb test data (blue line) shows how long an individual CO2 molecule stays in the air. Note that this is a steady-state process, with individual CO2 molecules constantly being emitted from somewhere, staying airborne in the atmosphere with a time constant tau of around 8 years, and then being re-absorbed somewhere else in the carbon cycle.

This argument is often brought up by warmists and is incorrect.

CO2 absorption is a two-way process. Both oceanic and organic land-based sinks also re-emit; the annual in and out flow is almost two orders of magnitude bigger than the averaged annual change. This is discussed in Gösta Pettersson’s papers. (Something Salby also points out.)

This means that C14 molecules get emitted from ocean and organic decay so the C14 curve does NOT just reflect uptake of individual molecules.

Your description above stops at an individual molecule being absorbed by a sink and thus implicitly assumes it stays there. If that were the case you would be right, but it is not the case.
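Mike's distinction between molecules swapping places and mass declining can be illustrated with a deliberately crude two-box sketch. All numbers are round illustrative assumptions; in particular the sketch assumes the carbon returning from the sinks is entirely unlabeled, i.e. it ignores the partial return of labeled carbon that Mike describes, which would slow the labeled curve somewhat:

```python
# Toy two-box sketch: a labeled (14C-like) pulse decays at the fast
# gross-exchange rate, while the CO2 mass excess decays only at the much
# slower net sink rate. Round illustrative numbers, not measurements.
atm = 800.0       # GtC in the atmosphere
gross = 150.0     # GtC/yr exchanged both ways with land and ocean
net_sink = 0.02   # fraction of the mass excess removed per year (~50 yr tau)

label = 1.0       # labeled fraction of atmospheric CO2 at t = 0 (a pulse)
excess = 100.0    # GtC of excess mass at t = 0

dt = 0.1
steps = 200       # 20 years at dt = 0.1
for _ in range(steps):
    label -= label * (gross / atm) * dt   # molecules swap out quickly
    excess -= excess * net_sink * dt      # the mass excess decays slowly

print(f"after 20 yr: labeled fraction = {label:.3f}, mass excess = {excess:.1f} GtC")
```

After twenty model years the labeled fraction is nearly gone while two thirds of the mass excess remains, which is the residence-time vs decay-time distinction at issue in this sub-thread.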

I don’t think Salby is rigorous and I don’t like his style. I remain unconvinced by his presentation and want to see a proper paper, not a slide presentation, but I don’t think the temperature vs CO2 question has been properly addressed. In that he is correct.

I would draw attention to the deviations in your Figure 3; these are close to the bit Salby highlights and probably do indicate a temperature dependency.

That looks small, so it appears it may be negligible, but the differences in temperature were also small, so it may still be a significant part of the centennial scale rise.

This needs looking at properly. I don’t think Salby is doing that but he is raising valid questions.

The reason that the Bern model is so long is that it has an additional 189 y time constant. That is a key part of the alarmist hype where CO2 emissions will be around for centuries and continue to produce warming even if we sign up to a legally binding agreement to go back to the stone age later this year, in Paris.

As you point out, we cannot even work out the short time constant accurately, so 189 y is a joke.

Don’t be too quick to dismiss the temp–CO2 question because you don’t like Salby’s work. This is not a simple question, so don’t imagine that fitting one exponential to a slightly curved line necessarily establishes the underlying cause(s).

• “so it may still be a significant part of the centennial scale rise.”

Well, here is the plot of CO2/temp over the past 400,000 years. Temperatures go from warmer than present to about 8°C colder. CO2 varies between 180 and 280 ppm. It’s now at 400 ppm. We haven’t been emerging from an ice age in the past century.

• Mike Jonas says:

Nick Stokes – No that’s not a valid statement, because the two sets of data (the 450k-yr and the current CO2) are at very different resolutions. A change such as the recent CO2 change would likely not show at all in the Vostok data.

• Mike says:

Thanks, NIck.

Last time I looked at the Vostok ice core data, I seem to recall the gaps between the data points were large enough to miss the entire Christian era, never mind 25 years of alarming warming or 18 years of not so warming. Neither is the flip between two different quasi-stable climate states very informative to the current situation in an interglacial.

If you look at the last interglacial in that graph you will see that the CO2 variations are fairly flat, whilst the temp data is in a heavy downward slide. Clearly one single linear relationship is not an appropriate way to explain the relationship.

This analysis shows 8 ppm/year/kelvin for inter-annual variation and 4 ppm/year/kelvin as the inter-decadal ratio.

Also the large swing around the 1998 El Niño gives a similar result of around 9 ppmv/K/a, i.e. inter-annual scale (linked above).

As even a trivial single exponential relaxation would show, the rate of change will diminish with the period length.

From this we can get a ball-park estimation of centennial change of order 2 ppmv/K/a, i.e. 200 ppmv/K/century, based on the apparent halving of the rate of change with each order of magnitude.

Since temps have been steadily rising for the last century that gives an estimation of the magnitude of the temperature driven change.

0.7 K × 200 ppmv/K/century = 140 ppmv
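Mike's ball-park scaling can be written out explicitly. Every number here is his own assumption, including the rule that the sensitivity halves for each order of magnitude of time scale; the sketch merely checks his arithmetic, not the physics:

```python
# Reproducing the comment's ball-park scaling. All inputs are the
# comment's assumptions: 8 ppmv/K/yr at the 1-yr scale, halving for each
# order of magnitude of time scale, and 0.7 K of warming over a century.
sens_interannual = 8.0            # ppmv/K/yr at ~1-yr scale
timescales = [1, 10, 100]         # years
sens = {t: sens_interannual / 2 ** i for i, t in enumerate(timescales)}

warming = 0.7                     # K over the last century (comment's figure)
centennial_rise = warming * sens[100] * 100   # ppmv over 100 years

print(sens)
print(f"implied temperature-driven rise: {centennial_rise:.0f} ppmv")
```

With these assumptions the implied rise is the 140 ppmv stated in the comment, which is why the commenter says the estimate "merits a more rigorous analysis" rather than being a result.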

This at least merits a more rigorous analysis. It’s a shame Salby did not do one.

• William R says:

What this graph tells me is that obviously every 100,000 years, men with axes come and clear land, thus increasing atmospheric CO2, just like in the late 19th century, correct?

• Catherine Ronconi says:

Nick,

Sorry, but you’re wrong. Stomata data show that during the Eemian, ambient CO2 was about 330 ppmv or more. Presumably levels were even higher during yet warmer interglacials, although I haven’t read studies on them.

Antarctic ice cores are not a good basis for reconstructing past CO2 levels.

• “A change such as the recent CO2 change would likely not show at all in the Vostok data.”

What it does show is that huge variations in temperature were associated with quite modest CO2 variation. Something like 10 ppm per degree. The data doesn’t rule out brief spikes, but that would not alter that magnitude of dependence.

To put it the other way around, if the recent <1° rise really has caused a 120 ppm rise in CO2, or large part thereof, what would an ice age do?

• Mike,

Henry’s law for the solubility gives a change of CO2 for a temperature change of around 8 ppmv/K, not ppmv/K/unit-of-time.

Whatever the time needed to reach a new equilibrium: seconds by spraying seawater in a closed air cylinder, to hundreds of years for the whole earth, the same equilibrium is reached: static as well as dynamic.
The time component only shows the speed with which the new equilibrium will be reached: that depends on the partial pressure difference between seawater and atmosphere and the exchange speed (stirring, waves, …) and decreases with decreasing difference, reaching zero when the equilibrium is reached…
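Engelbeen's description of an uptake speed proportional to the remaining pressure difference is exactly the exponential relaxation of Figure 1 in the head post. A minimal sketch, where the 8 ppmv/K sensitivity is his figure and the exchange rate k is a purely illustrative assumption:

```python
# Sketch of the relaxation described above: approach speed is
# proportional to the remaining pressure difference, so the approach to
# the Henry's-law equilibrium is exponential regardless of where the
# equilibrium sits. The 8 ppmv/K figure is the comment's; k is invented.
warming = 1.0          # K of sea surface warming
sensitivity = 8.0      # ppmv of equilibrium CO2 shift per K (per the comment)
k = 0.5                # illustrative exchange rate, 1/yr (an assumption)

equilibrium_shift = warming * sensitivity   # new equilibrium is 8 ppmv higher
gap = equilibrium_shift                      # remaining distance to equilibrium
dt = 0.01
t = 0.0
while gap > 0.01 * equilibrium_shift:        # until 99% equilibrated
    gap -= k * gap * dt                      # speed proportional to remaining gap
    t += dt

print(f"~99% of the {equilibrium_shift:.0f} ppmv shift realised after {t:.1f} yr")
```

The size of the equilibrium shift is set by the sensitivity alone; k only sets how long it takes, which is Engelbeen's point about static and dynamic cases reaching the same equilibrium.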

Further, temperatures did cool 1946-1975 and are flat 2000-now but CO2 simply did go up all the time with increasing speed…

• Catherine,

Stomata data are proxies whose largest problem is that they grow on land with a huge variability and bias of local CO2 levels. The bias can be accounted for by calibrating the stomata data over the last century against direct measurements and… ice core data. But there is no guarantee that the local bias didn’t change over the previous centuries due to (huge) changes in land use in the main wind direction.
On the other side, ice core data resolution gets worse the further you go back in time, but a worse resolution doesn’t change the average over the period of the resolution.
Thus if the average stomata data differ from the ice core data over the full resolution period, the stomata data are certainly wrong…

• Javier says:

Ferdinand, we have three proxy records of past CO2 levels and a computer model. Two of the proxies and the computer model agree that past hundred thousand years CO2 levels are significantly higher than the third proxy, yet we discard the two proxies on unreliability grounds and ignore the model to trust the outlier.

The two proxies that agree are stomata and Greenland ice cores, and the computer model Geocarb III.

The outlier is Antarctic ice cores.

Looks to me like the wrong approach to the problem to assume that the outlier is the correct one.

• Javier,

Sorry for the late reply, it gets difficult to track all the replies…

– Ice cores are direct measurements of ancient CO2 levels, not proxies, be it smoothed over 10 to 600 years, depending on the snow accumulation rate at the origin of the core.

– The Geocarb model is a very rough model based on proxies and simply can’t be used for the past century as its resolution is multi-millennial.

– Greenland ice cores are unreliable for CO2 measurements as, besides sea salt (carbonate) deposits, volcanic eruptions from nearby Iceland frequently deposit highly acidic volcanic dust. That gives in-situ CO2 formation, and more during the – now abandoned – wet CO2 measurements (melting all ice and evacuating CO2 under vacuum).

– Stomata are proxies, with all the problems that proxies have. In this case confounding factors like drought, nutrients and, above all, unknown changes in local bias…

Thus I prefer the real thing above the surrogate, be it that the higher resolution and higher (local) variability of stomata are interesting to further investigate.

• Mike,

The 14C decay rate is a special case: it is partly residence time and partly decay rate: You have a residence time if all molecules simply swap place and the total mass remains the same:
Atmospheric CO2 / throughput = 800 GtC / 150 GtC/year = 5.3 years.

The total CO2 decay rate is how much CO2 mass disappears into sinks to no return under the extra CO2 pressure in the atmosphere:
Increase in the atmosphere / net sink rate = 110 ppmv / 2.15 ppmv/year = over 50 years.
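The two divisions above, spelled out; the GtC and ppmv figures are the comment's own, and the point is only that the two ratios measure different things:

```python
# Engelbeen's two time scales, computed exactly as stated in the comment.
residence_time = 800 / 150      # atmospheric CO2 (GtC) / gross throughput (GtC/yr)
adjustment_time = 110 / 2.15    # increase (ppmv) / net sink rate (ppmv/yr)

print(f"residence time ~ {residence_time:.1f} years")       # molecule swap time
print(f"adjustment (decay) time ~ {adjustment_time:.0f} years")  # mass removal time
```

The roughly tenfold gap between the two numbers is the whole substance of the residence-time vs decay-time dispute in this thread.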

For 14C, the ocean surface and the bulk of vegetation were simply swapping 14C with the atmosphere and more or less in equilibrium before the 1963 stop of above-ground nuclear tests.
The difference is in the deep oceans: what goes into the deep oceans is the 14C level of today, but what comes out is the level of ~1000 years ago. For the 1960 situation, with the pre-bomb peak around 50% of the bomb spike, that gives for the extra CO2 of that time:
– 41 GtC CO2 out (99% 12CO2, 1% 13CO2 + 100% of the bomb spike).
– 40 GtC CO2 in (99% 12CO2, 1+% 13CO2 + 45% of the bomb spike).

While about 97.5% of all 12CO2 returns and thus 2.5% adds to the decay rate, about 98% 13CO2 returns, as that was not diluted by low-13C fossil fuels and finally 97.5*45% of 14CO2 returns… That makes that the 14CO2 decay rate is longer than the residence time, but also a lot faster than the decay rate for 12/13CO2…

Thus Pettersson and Salby both firmly underestimate the decay rate of the bulk of CO2…

Where the Bern model goes wrong is that it assumes a rapid saturation of the deep oceans, for which there is no sign. Originally it was built for 3000 and 5000 GtC, which indeed would give a much higher permanent level in the atmosphere, but the current ~400 GtC since the start of the industrial revolution is good for a residual 3 ppmv increase in the atmosphere after full equilibrium with the deep oceans…

19. Mike says:

For the emissions, he’s calculated the trends 1990-2002, and compared that to 2002-2013. But regarding the CO2 levels, he’s calculated the trends over entirely different periods, 1995-2002 and 2002-2014. Bad scientist, no cookies.

It’s sloppy, but these are not “entirely different” periods; you’ve upped his sloppy and gone for factually incorrect.

Since one of the two quantities being discussed is the integral of the other, and the rise is only slightly curved, we should see this relationship; it is not really “apples and oranges”.

Dr. Salby is not comparing like with like. He’s comparing ppmv of CO2 per year to plain old ppmv of CO2, and that is a meaningless comparison.

… neither is it “meaningless”. It is not correctly done but it is pointing to a change that merits being looked at correctly.

I’m not defending Salby; I wish he’d do it properly, but that is no reason to throw out and ignore temperature dependency. Several authors have pointed out that short-term change in CO2, d/dt (CO2), is a function of temperature. No one seems to have satisfactorily determined how much of the inter-decadal to centennial change is also temperature related.

Like most things it is not a binary, black or white answer. Some part of the long term change is certainly due to temperature rise over the last 300 years. If the oceans were not warmer, they would certainly have absorbed a greater proportion of human emissions. The question is, how much?

The trouble with Bern model and any uniquely exponential model is that it TOTALLY ignores temperature dependency , which is clearly seen in short term change. Finding fault with Salby’s work does not address the question.

If there is a significant long-term rise due to temperature, this could easily be falsely contributing to the long-term 189 y time constant in the Bern model, which is at the heart of the global warming alarmism.

If CO2 is absorbed in 20-30 y, as a time constant of circa 9 years would suggest, the alarmist predictions suddenly lose their alarming magnitude.

• Mike, the variability in rate of change of CO2 is studied by several others, here in the speech of Pieter Tans for 50 years of Mauna Loa data, from slide 11 on:
http://esrl.noaa.gov/gmd/co2conference/pdfs/tans.pdf

It is the short-term influence of temperature and drought on (mainly tropical) vegetation.
But the longer term increase >3 years is NOT caused by vegetation: vegetation is a net sink for CO2 over the years.

As the (very) long term influence of temperature on CO2 levels is around 8 ppmv/K the warming since 1960 is good for 5 ppmv of the 80 ppmv increase since that year…

20. Stocky says:

The problem I have is the proposition that CO2 has held steady at circa 280ppmv for millennia. Most things in the natural world fluctuate, indeed CO2 fluctuates widely from place to place, day to day and month to month. It therefore seems incredible to believe it should not change from 280ppmv by nature alone.

The ice core methodology on which this theory is based must be flawed. I suspect that as CO2 diffuses and gets compressed as the firn compacts, it does not give an accurate picture of annual CO2 trends. It certainly does not match real world measurements taken in the 19th and 20th century.

• Stocky,

The best resolution ice cores are better than a decade and span the last 150 years. Direct measurements taken over the oceans in that period are around the ice core CO2 levels and there is a 20 year overlap (1960-1980) between the ice cores and direct measurements at the South Pole within +/- 1.2 ppmv.

The current natural fluctuations as measured worldwide are +/- 8 ppmv maximum seasonal and +/- 1 ppmv over 1-3 years around the trend. That is not measurable in the ice cores, but hardly of interest as the variability levels out after 2-3 years.

21. Leo Smth says:

What the discrepancy between C14 and total CO2 half lives shows is that CO2 is being absorbed and re-emitted by (presumably) the biosphere. And that what is re-emitted is ‘old’ CO2 that has been there a long time, like before atomic tests.

This is of course consistent with organic decay and breakdown and indeed plant eating organisms etc etc.

What this raises is the possible mechanisms by which this occurs, and how these relate to climate change.

There is after all a huge amount of evidence that rising temperatures increase the amount of CO2 emitted by the biosphere.

It may turn out that in fact the climate change alarmists have it all back to front: human emission of CO2 is largely irrelevant, since total atmospheric concentrations may be driven instead by the rise in temperatures 50-100 years ago…

..which were caused by something else entirely…

• Mike says:

There is a dilution effect at each interchange. This will be much greater in the ocean, where there is more mixing. Leaf decay is mainly of leaf growth from the same or the previous year. Wood decay will introduce a longer lag. From memory, the oceanic annual cycle is about twice the CO2 mass of the land-based cycle.

This needs to be analysed as a rate reaction. See Gosta Pettersson's work for a more detailed chemical-engineering explanation of rate reactions. This also gives a main time constant of the order of ten years.

• Joe Born says:

You may want to consider the questions I posed in this post and the answers given by Ferdinand Engelbeen in the ensuing thread.

• Leo Smth says:

Yeah – seem to be on parallel tracks there.

Add a re-emission lag and it gets complex. Add non linearity and all bets are off..

22. johnmarshall says:

Half life is the loss of half the radioactivity over a period not half the mass.

Half life length shows how active/dangerous a particular item is, a very short half life indicates a very dangerous substance, a very long half life indicates no real danger, 96Ru has a half life of 3.1×10^13 years so not dangerous from an ionising radiation standpoint. (though how that half life was calculated I have no idea)

• Mike says:

John, the activity falls by half because half the mass of the radio-isotope no longer exists; it's the same thing.

How dangerous something is depends upon the activity ( becquerels ) and nature of the radiation ( alpha being the most destructive if ingested ).

If you have 300 becquerels of a long half-life isotope it is just as dangerous as 300 becquerels of a short half-life isotope. However, it will remain so for millennia.

So your idea that “a very long half life indicates no real danger” seems rather confused.

• Owen in GA says:

For a very long half-life item to have the same activity as a very short half-life item, you would need a proportionally larger number of the long half-life items. As an example, to get the same disintegrations per second from something that has a 1000-second half-life as from something with a 1-second half-life, you have to have 1000 times as much material to start with.
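Owen's proportionality follows from A = λN with λ = ln 2 / t½: for equal activity, the number of atoms must scale with the half-life. A minimal sketch (the 300 Bq and the half-lives are just the illustrative numbers from this thread):

```python
import math

def atoms_for_activity(activity_bq, half_life_s):
    """Number of atoms N needed so that A = lambda * N, with lambda = ln(2) / t_half."""
    decay_const = math.log(2) / half_life_s
    return activity_bq / decay_const

# Same 300 Bq from a 1 s half-life and a 1000 s half-life isotope:
n_short = atoms_for_activity(300, 1.0)
n_long = atoms_for_activity(300, 1000.0)
print(n_long / n_short)  # the ratio of amounts matches the ratio of half-lives
```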

• george e. smith says:

Mike when you say “half of the mass of the isotope no longer exists,” that isn’t strictly true.

Virtually ALL of the mass still exists but now it is something else, either a different element or some small particle, such as neutron, proton, alpha or whatever.

Mass is not destroyed in radioactive decay (well, not much is; just binding energies etc.).

• Half life is the loss of half the radioactivity over a period not half the mass.

Well, more accurately, a half-life is the time to lose half of whatever is being counted. For example, if I have a contaminant-filled room (of soot or paint particles or perfume – regardless of good, bad, or indifferent particles), the half-life is the time for half of them to be absorbed, to combine, or (if radioactive) to decay.

Half life length shows how active/dangerous a particular item is, a very short half life indicates a very dangerous substance, a very long half life indicates no real danger

But a short half-life also means that after a short period of time, the radiation will no longer be present. One of the most troublesome radioactive substances around a reactor is cobalt-60, with an inconvenient half-life of 5-6 years: long enough to always be around, short enough to decay rapidly.

(though how that half life was calculated I have no idea)

Error bars. And trust in the people making the measurement.

• Owen in GA says:

You also have to have a very large sample of very long half life items to try to shrink those error bars somewhat. If you get enough of it in one place you can get a few disintegrations per day. If you monitor that for a few 1000 days and get a consistent count over the entire period, you can then extrapolate the half life out to millions of years with fairly good confidence. Smaller error bars = longer observation time and larger source size. This is all assuming that the apparatus continues to operate through the period.

• Owen in GA says:

BFL,

This is true, but when you set up the right conditions to start a large neutron concentration, you can forget about half life and move into the realm of nuclear fission reactions – two very different regimes. Fission will just pretty much happen when you have a large supply of low energy neutrons lying around – it(235U+n) has a very large reaction cross section for low energy n.

• SMC says:

Be careful not to confuse Activity, measured in Becquerels (Bq) or Curies (Ci), with Dose Rate, measured in Grays (Gy) or REM (Roentgen Equivalent Man).

23. jonesingforozone says:

Thank you, Willis.

The 14C bomb pulse dispersed quickly through the atmosphere and into the ocean due to the nearly zero partial pressure with respect to the tagged 14C.

This would be true of any gas that one could tag. A pulse of the tagged mixture will have a half-life of ~8.6 years when released into an untagged environment, according to the 14C bomb test.

The Bern Graph, on the other hand, takes for granted a 280ppm floor, as though rock formation would cease.

24. Mike says:

I’m not sure that Salby’s fit to C14 is that accurate (he does not show his residual, and I don’t think it would be very good), though accuracy is not essential to his point.

This model seems to give a very close fit, though Pettersson says it should not be fitted directly to the C14 ratio anyway.

It is interesting that the ratio of the magnitudes of the short and long exponentials in this model is exactly the same as Salby’s 8.64-year time constant. Mathematical coincidence?
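For anyone comparing the numbers in this thread, note that a "time constant" (tau, the e-folding time from the head post) and a "half-life" differ by a factor of ln 2: t½ = τ·ln 2. A trivial check on the two constants being discussed here:

```python
import math

def half_life_from_tau(tau):
    """t_half = tau * ln(2); tau is the e-folding ("time constant") value."""
    return tau * math.log(2)

print(round(half_life_from_tau(8.64), 2))   # -> 5.99 (Salby's time constant as a half-life)
print(round(half_life_from_tau(59.59), 1))  # -> 41.3 (Willis's fitted tau as a half-life)
```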

25. Joe Born says:

Did anyone catch the source of the “tau=59.59470829” in Mr. Eschenbach’s code?

• Mike says:

I understood it was his own fit but I’m not sure he said explicitly what was fitted to what.

• Joe Born says:

Thanks. Yes, the high number of decimal places suggests that the computer spit it out of some operation, but it would be nice to see what that operation was.

• Mike says:

The trouble with this is that emissions have not had a constant rate of growth since the pre-industrial period.

It is always possible to fit an exponential to a slightly curved segment of line, like cumulative CO2, but with that length of data and very light curvature the uncertainty will be large. Also, the pre-industrial level is uncertain and will change the fit.

The problem is that the integrated cumulative sum removes most of the detail that may enable us to analyse the system. That is why the rate of change stuff I linked above may be more informative.

• Willis Eschenbach says:

Joe Born April 20, 2015 at 3:24 am

Did anyone catch the source of the "tau=59.59470829" in Mr. Eschenbach’s code?

Thanks, Joe, good question. For reasons of laziness, I did that in a separate spreadsheet. I used tau and the pre-industrial concentration as the variables. I started with the initial value of atmospheric CO2 in 1959. Succeeding values were calculated as

CO2[ t ] = P + (CO2[ t-1 ] + E[ t ] – P) * alpha

where the subscript “t” is time, CO2 is atmospheric CO2 concentration, P is the pre-industrial value, E is annual emission (in ppmv), and alpha is exp(-1/tau).

I used Excel’s “Solver” function to optimize the values of P and tau in order to minimize the sum of the squared residuals. That gave me ~ 57 years for tau and ~ 283 ppmv for P, the pre-industrial CO2 concentration. Since there were no constraints on the fitting process, I consider the fact that the best fit for P (283 ppmv) is quite close to the generally assumed pre-industrial value of 275 ppmv to be a confirmation of the method that I am using.
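The iteration and fit described above can be sketched as follows; the emissions series here is a placeholder, and scipy's least-squares stands in for Excel's Solver (an assumption for illustration, not the actual spreadsheet):

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, co2_0, emissions):
    """Iterate CO2[t] = P + (CO2[t-1] + E[t] - P) * alpha, with alpha = exp(-1/tau)."""
    P, tau = params
    alpha = np.exp(-1.0 / tau)
    levels = [co2_0]
    for e in emissions:
        levels.append(P + (levels[-1] + e - P) * alpha)
    return np.array(levels[1:])

# Illustrative stand-ins for observed CO2 and annual emissions (ppmv), NOT real data:
emissions = np.linspace(1.0, 4.0, 50)
observed = model([283.0, 57.0], 316.0, emissions)

# Analogue of Excel's Solver: minimize the sum of squared residuals over (P, tau).
fit = least_squares(lambda p: model(p, 316.0, emissions) - observed, x0=[270.0, 30.0])
print(fit.x)  # recovers roughly P ~ 283, tau ~ 57 on this synthetic series
```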

Regards,

w.

• Joe Born says:

Thanks a lot. I’d thought it was something like that, but I hate to guess.

26. Superficially written. Neither clarifies nor clearly rebuts.

• Willis Eschenbach says:

Shub Niggurath April 20, 2015 at 4:13 am

Superficially written. Neither clarifies nor clearly rebuts.

What was “superficially written”, and by whom? What doesn’t clarify, and what is not clear? What is not rebutted?

Your comment is totally opaque. This is why I ask people to quote what they object to.

w.

27. William Astley says:

Willis Eschenbach

Willis,
Salby does total mass balance of CO2 in the atmosphere and states that he does total mass balance.
Salby analysis and conclusion is correct. It appears you did not understand Salby’s presentation and it appears you bring emotion into scientific analysis which is purposeless and clouds your summary/blocks your understanding of the issues/science.

The sum of all CO2 inputs in the atmosphere (anthropogenic is only one and volcanic eruptions are not the major source of natural CO2) minus the total sinks of CO2 in the atmosphere equals rise of CO2 in the atmosphere. That is mass balance.

As Salby correctly states, the only input of CO2 into the atmosphere which we know with certainty is anthropogenic CO2. There is an immense amount of CO2 and CH4 flowing into the atmosphere from the deep earth. For example, CH4 levels in the atmosphere mysteriously doubled for no physical reason and then stopped rising.

As Salby notes, anthropogenic CO2 continues to increase in rate post-2002, yet the rate of rise of atmospheric CO2 does not increase. That is a fact, a paradox.

Salby does a calculation of the maximum possible change in sink rate and notes that the sink rate is proportional to total atmospheric CO2, not to the change in atmospheric CO2.

He finds that the maximum possible bound on the change in sink rate does not explain the observation that post-2002 the rate of increase in atmospheric CO2 is constant even though total anthropogenic CO2 continues to rise.
The CO2 sinks do not increase the percentage of CO2 that is sequestered when total atmospheric CO2 increases, with the exception of plants, which thrive when atmospheric CO2 increases.

The major source of new CO2 into the atmosphere is deep-core CH4 that is released into the biosphere as CH4, CO2 (micro-organisms eat the CH4), and liquid petroleum.

The key to solving the CO2 puzzle is to read and understand the late Nobel Prize winning astrophysicist book The Deep Hot Biosphere: The Myth of Fossil Fuels. It appears you have not read that book or the related papers. I am working on a summary of Gold’s theory and book for this forum. I will include an explanation of Salby’s theory and explain the related mechanisms.

http://www.amazon.com/Deep-Hot-Biosphere-Fossil-Fuels/dp/0387952535/ref=sr_1_2?s=books&ie=UTF8&qid=1429530650&sr=1-2&keywords=Deep+HOt+Biosphere+The+Myth+of+Fossil+Fuels
Atmospheric CH4 is about to fall and atmospheric CO2 is about to fall. I say that because I understand physically what is happening.

It appears you also did not listen to or understand Salby’s previous video, which discusses irreversible sinks of CO2 versus movement of CO2 into the surface ocean, which is reversible. The IPCC Bern CO2 model assumes there is very little exchange of deep ocean water with surface ocean water. Under the Bern assumption, the C14 carbon in the surface ocean water should therefore linger. It does not, which is one of the many observations that support the assertion that the Bern model is incorrect and that the half-life of CO2 in the atmosphere is between 3 and 7 years.

Regards,
William

• Mike says:

“The key to solving the CO2 puzzle is to read and understand the late Nobel Prize winning astrophysicist book The Deep Hot Biosphere”

What Nobel Prize did he get ? Was he a participating author of an IPCC report ?!

• William Astley says:

The Nobel committee will need to rescind the award made to the IPCC as the entire IPCC scientific premise is incorrect.

• It doesn’t do anything for the reputation of an outstanding scientist to make false claims about his being a Nobel laureate.

• Totally. Plus it dilutes the accomplishment of those of us who actually received one.

• Some rain on your abiotic petroleum parade. It’s ‘not even wrong’. Gold’s book is crackpot speculation. The Swedish experiment based on Gold found only trace oil in pump-contaminated drilling mud. The Russian claims about the Ukraine deposits are bad geology. Those reservoirs are sourced from standard marine shales overthrust by fractured basement rock.

There is obviously abiotic methane. It exists in the outer solar system (Titan), but apparently not on the inner solar system’s rocky planets. It is produced on Earth by serpentinization (mineral hydration) of ultramafic rock catalyzed by iron. That has been known for decades, and at least 7 European seeps have been identified. Very recently, the first meaningful accumulation of such abiotic methane was discovered in methane clathrate on the Fram Strait seabed.

But not petroleum.

• Catherine Ronconi says:

Rud,

There most certainly is abiotic methane on the rocky inner planets. You yourself comment on Earth’s abiotic methane. Methane on Mars might be biotic, but probably isn’t. Mercury’s tenuous atmosphere contains trace amounts of methane, almost certainly abiotic. Venus’ atmosphere is loaded with the stuff, presumably abiotic.

• Catherine Ronconi says:

And, while not a planet, lunar astronauts detected methane in the Moon’s thin atmosphere as well.

• Bernie Hutchins says:

Rud –

According to Gold’s book “Deep Hot Biosphere”, they pumped up 12 tons of oil (“looking like ordinary crude oil” – Danish Geological Survey) along with 15 tons of fine-grained magnetite. The significance of the magnetite is FIRST of all that Gold considered it to have been produced by microbes, which reduced it from another iron oxide. SECOND, it was the magnetite “paste” that clogged up the well, making further drilling impossible. It’s fine with me if you yourself choose to consider 12 tons a mere “trace”, if you say exactly that. But I would also inquire whether you knew WHY they stopped drilling (the magnetite). Yes, it was not practical to continue. But that is a far different issue and does not prove anything about whether it was or was not there.

• Catherine Ronconi says:

Bernie,

It also seems to me that some commenters fail to distinguish Gold’s argument from that of present Russian and Ukrainian scientists who advocate abiotic oil through on-going geochemical processes. Gold in fact advocated residual primordial abiotic oil, delivered to earth by meteors and comets, plus biotic contribution from deep microbes feeding on this food source, then themselves serving as organic feedstock for further biotic production of long-chain hydrocarbons.

IIRC Gold’s argument.

• Bernie Hutchins says:

Reply to Catherine Ronconi April 20, 2015 at 1:08 pm

Thanks – Quite right. Tommy started with a “Deep Earth” hypothesis for primordial GAS (methane). Then he added life to the upwelling methane: the “Deep Hot BIOSPHERE”. Famously turning thinking around: not life reworked by geology, but geology reworked by life (microbes deep in the rocks).

And he sure could ask the inconvenient questions about conventional wisdom. He could be wrong – but “crackpot” is a very unfortunate term used by some of his critics.

• Catherine Ronconi says:

Bernie,

Life has certainly reworked the atmosphere and hydrosphere, so why not the lithosphere as well? Or reworked it in this way, since clearly life has already affected the lithosphere in other ways.

• Bernie, you seem worked up and to think I am evasive on this, so I just wasted an hour fact-checking from press and scientific commentary at the time of the Swedish experiment. Things you can still do to educate yourself on the falsity of abiotic oil (but not abiotic methane):
First, Gold persuaded the Swedish well investors that they would find commercial amounts of methane, not petroleum. They obviously didn’t.
Second, although there are (interestingly) magnetite-producing bacteria, their magnetite is nanoparticle-sized and could not jam a drill bit. Almost all granites contain either magnetite (iron) or ilmenite (iron/titanium) trace mineralization. Finding ‘fine grained’ magnetite does not mean it came from bacterial sources – especially when drilling granite.
Third, either you misread Gold’s book or in it he misrepresented what was brought up from the bottom of the hole. It was 12 tons of sludge (drill cuttings unavoidably mixed with drilling mud) in which there was some trace oil. It was not 12 tons of oil. See for example http://www.science-frontiers. They reported before and after in #69 and #79. Even the drillers on site thought it came from pump leaks into the drill mud.

Gold shifted his story over time from primordial methane to abiotic oil after this exploit. He also then asserted that undeniable biomarkers in oil just mean abiotic oil sources were contaminated by deep-dwelling microbes. Shape-shifting an original theory that much is itself prima facie evidence the theory is wrong.
Don’t take the book at face value. That is like taking AR4 at face value. Big mistake.

• Bernie Hutchins says:

Replying to ristvan April 20, 2015 at 2:17 pm

(1) I did NOT misread Gold. See pages 120-121. He said 12 tons of crude oil as verified by the Danish Geological Survey. Perhaps you should read the book. Heck – I have YOUR book!

(2) Your link to science-frontiers goes NOWHERE. It says: “Sorry, We could not find http://www.science-frontiers“. So you leave us out in the cold.

• MRW says:

Dr. J.F. Kenney, who worked in Russia under Soviet rule with Russian scientists at the Russian Academy of Sciences, wrote this about Thomas Gold:

Sometime during the late 1970’s, a British-American, one-time astronomer named Thomas Gold discovered the modern Russian-Ukrainian theory of deep, abiotic petroleum origins. Such was not difficult to do, for there are many thousands of articles, monographs, and books published in the mainstream Russian scientific press on modern Russian petroleum science. Gold could read the Russian language fluently.

In 1979, Gold began publishing the modern Russian-Ukrainian theory of petroleum origins, as if such were his own ideas and without giving credit to the Russian (then, Soviet) petroleum scientists from whom he had taken the material. Gold tried to alter the modern Russian-Ukrainian theory of deep, abiotic petroleum origins with notions of his own in order to conceal its provenance. He gave his “ideas” the (very misleading) name the “deep gas theory.”

Worse yet, Gold’s alterations of modern Russian petroleum science were utterly wrong. Specifically, Gold’s claims that there exist large quantities of natural gas (methane) in the Earth at depths of its mantle are completely wrong. Such claims are upside-down and backwards. At the pressures of the mantle, methane is unstable, and the hydrogen-carbon system there evolves the entire suite of heavier hydrocarbons found in natural petroleum, in the Planck-type distribution which characterizes natural petroleum. Methane at pressures of the mantle of the Earth will decompose to evolve octane, diesel oil, heavy lubricating oils, alkylbenzenes, and the compounds found in natural petroleum. [These properties of the hydrogen-carbon system have been described at greater length and rigor in a recent article in Proceedings of the National Academy of Sciences.1] Regrettably, Gold was as ignorant of statistical thermodynamics as he was of ethics.

The full discussion with cites is here: http://www.gasresources.net/plagiarism%28overview%29.htm

• Steve P says:

MRW April 21, 2015 at 7:06 am

Thank you for setting the record straight vis-a-vis Gold and the Russians.

~

There are several interesting sub-threads that have developed from Willis’s essay, but it takes longer to find them in the tangle of comments, than to read or reply.

Previously, the host had requested that we give the new format a try. Speaking for myself only, I have given it a try, and it doesn’t work.

I would urge the host to reconsider the ‘reply’ format.

Thanks

28. Mike says:

Willian: “Salby does a calculation of the maximum possible change in sink rate base and notes sink rate is proportional to total atmospheric CO2 not the change in atmosphere CO2.”

That was one of the first question marks his presentation raised for me: where he pulled that from. This is typical of his style, which can go unnoticed in a slide presentation but would be a blatant omission in a paper. It’s long overdue that he put what he has on paper and stop relying on videos of slides.

Since you consider that you understand his work, maybe you can explain how he derived that.
Thanks.

29. TonyL says:

You wrote:
“Exponential decay also describes what happens when a system which is at some kind of equilibrium is disturbed from that equilibrium. The system doesn’t return to equilibrium all at once. Instead, each year it moves a certain percentage of the remaining distance to equilibrium.”

Not true at all.
Exponential decay is an example of First Order Kinetics. Your equilibrium case is an example of Approach To Equilibrium kinetics. They are NOT the same. You can do a First Order kinetics analysis on an Approach To Equilibrium system and get a pretty good match for two or three half-lives, but after that, the modeled decay is way too fast. After 5 or so half-lives, the discrepancies can get pretty ugly. The math of Approach To Equilibrium is rather more complex and not as well known, so people tend to use First Order instead. As I mentioned, this is usually at least serviceable for two or three half-lives, but then trouble starts.
(Imagine a classroom of physics undergraduates, told to consider a transistor as a linear device, “over a short range”. They are all pumping their fists in the air, chanting in unison “TOO THE FIRST ORDER, TOO THE FIRST ORDER”. You get the idea.)

Now consider:
First Order kinetics describes the reaction A -> B, simple enough. There is no reverse reaction.

Approach to Equilibrium describes the system A -> B and B -> A. There is the reverse reaction.
If you start with all A, at first it looks like First Order, because the reverse reaction is too small to be significant. Over time, the reverse reaction, B -> A, grows significant, and First Order no longer works well. At the end, we have equilibrium, where the rates of A -> B and B -> A are equal. And that is a fundamental definition of equilibrium.

Now consider CO2 absorption by the oceans. I do not think we should imply (by our mathematical treatment) that out-gassing is insignificant. I think there is much mischief about CO2 “residence times” because of this.
Cheers.
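TonyL's distinction can be seen numerically: for A ⇌ B with forward and reverse rate constants kf and kr, the concentration of A relaxes as exp(-(kf+kr)t) toward the equilibrium value a0·kr/(kf+kr), not toward zero. A quick sketch with made-up rate constants (illustrative only):

```python
import math

def first_order(a0, kf, t):
    """Irreversible A -> B: concentration of A decays to zero."""
    return a0 * math.exp(-kf * t)

def approach_to_equilibrium(a0, kf, kr, t):
    """Reversible A <-> B: A decays to a_eq = a0*kr/(kf+kr) with rate (kf+kr)."""
    a_eq = a0 * kr / (kf + kr)
    return a_eq + (a0 - a_eq) * math.exp(-(kf + kr) * t)

# After a few half-lives the two treatments diverge badly:
for t in (1, 5, 20):
    print(t, first_order(1.0, 0.3, t), approach_to_equilibrium(1.0, 0.3, 0.1, t))
```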

• MikeB says:

If the physics graduates were chanting in unison “TOO THE FIRST ORDER, TOO THE FIRST ORDER”, I would recommend they did a course in English first.

• TonyL says:

They are undergrads, what do you expect?

• Steve P says:

Don’t confuse the hypothetical chanters with the writer. He who puts words in the mouths of others must spell them correctly too.

30. Mike says:

Thanks Tony. That seems to echo what I said above: this C14 curve is not the single-molecule residence time.

Gosta Pettersson discusses reversible / non-reversible reactions in his articles:
http://www.false-alarm.net/

• TonyL says:

Looks interesting. I will check them out.
Thanks for the tip.

• Mike, the 14C curve is not the same as the 12C/13C decay curve for an excess injection of fossil fuels either:
What goes into the deep oceans has the isotopic composition of 1960, at the peak of the bomb spike, plus some extra 12/13C. What returns has the composition of ~1000 years before: somewhat lower in quantity than what goes into the deep, but with a lot less 14C, from long before the peak.

That means the 14C curve declines more slowly than the residence time would suggest, but still a lot faster than the decay rate of an excess 12/13CO2 injection above equilibrium…

31. Scott says:

I would really be interested in Mr. Eschenbach contacting Dr. Salby and discussing his apparent disagreements with him. Then I’d like to see a distillation of that conversation for those of us who are not at the pinnacle of understanding every last detail. I’d also be interested in comments from anyone who is versed in this subject (e.g. William Astley, who brought up some interesting points).
Hopefully this can be done. It’s what the debate (yes, I said that dirty word) is badly in need of…

• Mike says:

What is needed is for Salby to put his hypothesis down on paper and stop messing around with video presentations.

Then everyone can have a look and see whether there’s a valid point being made.

I’m sick of doing freeze-frame on a fuzzy video of a slide in a presentation that does not include the derivation of some key aspects. I don’t see that Willis or anyone else needs to be involved in that process. He just needs to stop messing around and publish, even if it’s on arxiv.org or something.

• Scott says:

I remember something in Dr. Salby’s video about “not publishing” till his data was released?
Who could keep “his” data from him? Does he not have copies of it?

• Willis Eschenbach says:

Mike April 20, 2015 at 6:09 am

What is needed is for Salby to put his hypothesis down on paper and stop messing around with video presentations.

Then everyone can have a look and see whether there’s a valid point being made.

What he said … faffing around with the video was most unpleasant.

w.

• Catherine Ronconi says:

Scott,

The Australian university which fired him “owns” his data and won’t let him have it.

32. RERT says:

Willis – How does your time constant of 59 years correspond to the Bern Model? If I recall correctly, the Bern Model assumes that some fraction of emissions stays in the air permanently. Can you fit the actual data with the Bern Model? The Bern model does appear to be the decay rate of a pulse of enhanced concentration, per your fit.

R.

• Willis Eschenbach says:

Good question, RERT. There’s not enough data yet to distinguish between a simple exponential (as I’ve shown above) and a multiple exponential decay. The Bern Model assumes that emitted CO2 takes one of four paths, each of which has a different time constant ranging (from memory) from 3 to 174 years.

w.

• RERT says:

If this from Google is a good source http://unfccc.int/resource/brazil/carbon.html then, subject to my understanding it correctly, the time constants are 2.6, 18, 171 and infinite, as of the TAR (infinite, I think, because a(0) is non-zero).
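The Bern pulse response RERT describes is a constant term plus a sum of exponentials. A sketch using the time constants he quotes; the a_i weights below are placeholders chosen to sum to 1, not the official TAR coefficients:

```python
import math

def bern_pulse(t, a, taus):
    """Airborne fraction of a 1-unit pulse: a0 + sum(ai * exp(-t / taui))."""
    a0, rest = a[0], a[1:]
    return a0 + sum(ai * math.exp(-t / tau) for ai, tau in zip(rest, taus))

# Placeholder weights (permanent fraction a0 first), time constants 2.6, 18, 171 yr:
a = [0.15, 0.25, 0.30, 0.30]
taus = [2.6, 18.0, 171.0]
print(bern_pulse(0.0, a, taus))    # 1.0 at t = 0 (weights sum to 1)
print(bern_pulse(100.0, a, taus))  # the a0 floor plus the slow 171-yr tail
```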

Your response raises the question of how these parameters are fitted if there is not enough data to distinguish them from a much simpler model. Either someone has better data (?unlikely?) or most of these parameters are not statistically significant. Given there are 7 of them, hardly a surprise I guess!

R.

33. halftiderock says:

I question the assertion that the total global CO2 input is a known. The carbon satellite’s preliminary results indicate that natural sources are significantly higher than has been theorized. In addition, the interpretation of the Keeling curve in terms of anthropogenic contribution is interesting but most likely flawed, since the atmospheric CO2 increase is more or less linear, and there is a difference between correlation and causation, particularly if the larger sources are imperfectly understood. You can tune anything that reflects your bias. Note the immediate divergence between the IPCC models and reality when the models shifted from hindcasting to prediction. CO2 lags temperature in geologic time. Therefore it cannot be the cause. The weeds are interesting and inform, but maintaining a 30,000-foot view is essential.

• Willis Eschenbach says:

halftiderock April 20, 2015 at 6:06 am

I question the assertion that the total global CO2 input is a known.

In that case, please have the courtesy to quote whoever it was that said it was known. As it stands, there’s no clue what you object to.

w.

34. TonyL says:

As I look at it, Figure 7 actually is very interesting. It compares the 14C curve and the Bern model, which seem to be different. On second look, the 14C curve is First Order kinetics for the uptake of CO2 by the oceans. This is simply the A -> B forward reaction, without any contribution from the reverse reaction. The Bern curve shows the sum of the forward and reverse reactions together.

The difference between the two curves provides a cautionary tale about how you model things, and how you check your starting assumptions.

• Mike says:

I suggest you look at Pettersson’s papers. He is a chemist, understands reversible reactions, and explicitly deals with this in his work. His paper 5 shows Bern and C14 and is not very different from Salby’s graph, except that it stops around Y2K.

You seem to be talking the same language, so it should make sense to you.
http://www.false-alarm.net/

35. Bill Illis says:

I agree with what Willis said here. Salby is missing some of the key issues.

My own view is that there is a natural equilibrium level of CO2 of around 270 to 280 ppm. CO2 has been around that level since C4 grasses evolved in the few million years leading up to 24 million years ago. The evolution of C4 grasses increased the Carbon balance held in vegetation because C4 grasses could now grow in dry areas where all the remaining C3 bushes, trees, plants and (C3) grasses couldn’t grow before. CO2 fell to 280 ppm, for perhaps the very first time, 24 million years ago.

In the ice ages, CO2 declines with temperature by about 18 ppm per 1.0C as the oceans absorb more CO2, but also because the vegetation on the planet dies back significantly and there is less annual Carbon cycling from vegetation occurring. CO2 has been as low as 185 ppm, which means trees and bushes could only grow where there is very high rainfall, such as the tropical rainforests. In the ice ages, Africa’s rainforests decline to just a few small areas. The Amazon rainforest declines by two-thirds. For some reason, probably higher rainfall, the US southeast and Indonesia seem to hold onto their trees. The rest of the planet is either grassland, desert, tundra or glacial ice.

At the natural equilibrium level of CO2 in a non-glacial cycle, of 280 ppm, if CO2 rises above that level, vegetation gets more active and CO2 is drawn back-down to 280 ppm. If CO2 falls, vegetation gets less active and CO2 goes back up.

Net absorption of CO2 by natural processes, as a percent of CO2 above 280 ppm, goes back to 1750, when human emissions started to actually matter. The natural processes completely overwhelmed our emissions until the 1950s. (Land use and forest clearing by early civilizations is a joke. The natural processes are many times higher than humans clearing forests. Orders of magnitude.)

Now let’s compare the natural absorption rates compared to human emissions. In the 1940s, CO2 levels actually fell. The natural sinks were more than 100% of our emissions. Before 1900, they were orders of magnitude higher than human emission rates.

Since about 1950, the natural absorption rate has been around 50%. As Willis noted, it is probably closer to 42% or 45% but it is actually hard to tell because there are some uncertainties here. But this is a fluke. It is more the total amount of CO2 in the atmosphere that governs net absorption by natural processes, not our annual emissions.

Since Plants, Ocean, and Soils are absorbing about 1.7% per year of the excess CO2 above the equilibrium 280 ppm right now (a rate which appears to be increasing slightly), if we stopped emitting today, it would take about 155 years to get back down to 285 ppm (and then a few more decades for 280 ppm).
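Bill's recovery-time estimate can be sanity-checked with the fixed-rate version of his argument: if the excess X = C − 280 decays at rate r per year, it falls as X0·exp(−rt), so the time to go from X0 to X is ln(X0/X)/r. A sketch assuming a present level of about 400 ppmv and the rate held fixed at 1.7%/yr (he notes the rate appears to be increasing, which would shorten the time):

```python
import math

def years_to_reach(c_now, c_target, equilibrium=280.0, rate=0.017):
    """Years for the excess above equilibrium to decay from (c_now - eq) to (c_target - eq)."""
    return math.log((c_now - equilibrium) / (c_target - equilibrium)) / rate

print(round(years_to_reach(400.0, 285.0)))  # -> 187 years at a fixed 1.7%/yr
```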

• Joe Born says:

Thanks a lot for that background. Nuggets like that are the reason I visit this site.

Is there some source from which we could easily obtain those plots’ data?

Also: “Plants, Ocean, and Soils are absorbing about 1.7% per year of the excess CO2 above the equilibrium 280 ppm right now (a rate which appears to be increasing slightly).” How do we know that?

• Phlogiston says:

Illuminating as always, thanks. So Antarctica gave rise to C4 plants, who knew? Who cared? Well I do.

I’m sad but at the same time proud to be alive with the last generation of scientists.

• bw says:

Yes. The physicists underestimate the biology. C4 is a big evolutionary step. That includes CAM and other structures improving the extraction of CO2 and reducing transpiration in “dry” environments. Plants have been evolving for more than 24 million years, indeed for the entire Cenozoic. The entire global biogeochemical carbon cycle has been evolving for hundreds of millions of years. Earth’s atmosphere today is entirely of biological origin (except argon).
As for the amount of fossil fuel CO2 in the atmosphere, various approaches (e.g. Segalstad) show that it is about 15 to 25 ppm. One approach (https://retiredresearcher.wordpress.com/, see figure 16) shows about 45 ppm. Adding CO2 to the atmosphere is entirely beneficial. A few thousand ppm would be great.
CO2 never “accumulates” in the atmosphere, from any source. Planetary biology guarantees that.

• bw,

The residual number of “human” CO2 molecules depends on the residence time, which is ~5 years; that fraction is currently about 9%.
The residual mass of CO2 above equilibrium is 95% caused by human emissions, as the e-fold decay time of any extra injection of CO2 is over 50 years.

Two different decay rates, without much connection to each other.
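A minimal numeric illustration of why the two time constants answer different questions. The ~5-year and ~50-year figures are the ones quoted in the comment above; the comparison itself is just two exponentials:

```python
import math

tau_residence = 5.0    # years: turnover time of an individual CO2 molecule
tau_adjustment = 50.0  # years: e-folding time of the excess CO2 mass

# Fraction of a pulse's ORIGINAL molecules still airborne after t years,
# versus the fraction of the pulse's excess MASS still airborne.
for t in (5, 25, 50):
    molecules = math.exp(-t / tau_residence)
    excess_mass = math.exp(-t / tau_adjustment)
    print(f"t={t:>2} yr: original molecules {molecules:6.2%}, excess mass {excess_mass:6.2%}")
```

After 25 years, under 1% of the original molecules remain airborne while over 60% of the excess mass does, which is the whole point of keeping the two rates separate.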

• Latitude says:

Bill, I agree 100% with you….
I don’t like the word “equilibrium” though…..while being accurate, it’s not descriptive

“Limiting” would be the word that better describes it.

36. David L. Hagen says:

Willis
I affirm the comments above by William Astley. Salby is an expert analytical mathematician. See Murry Salby (2012), Physics of the Atmosphere and Climate.
Salby develops the equations later in the presentation, at a technical level.
On C14, I suggest distinguishing between the residence time in the mixed surface layer vs. the deep/rest of the ocean.

• David,

Salby made the same mistake as many before him: the e-fold decay rate of the 14C bomb spike is way shorter than the decay rate of an extra injection of CO2 in the atmosphere: what returns out of the deep oceans carries much less 14C than what went in, while for 12CO2 it is still near the same amount…

• David L. Hagen says:

Willis Eschenbach and Ferdinand Engelbeen
Re: “He follows that up by not knowing the difference between airborne residence time and pulse decay time.”
Salby may not be presenting it well, but I believe he has gone far deeper into the equations and details than you give him credit for.
You argue “Airborne residence time (bomb test data): how long an individual CO2 molecule stays in the air.”
The bomb test data is NOT “an individual CO2 molecule” but a specific, though very small (“infinitesimal”), pulse of CO2 with C14, except that it can be explicitly tracked. How is that infinitesimal pulse much different from a larger pulse under the Bern model? 0.5% does not make that much difference in total CO2.
Mathematically, the consequent absorption rate is very similar.
IF the ocean were under equilibrium BEFORE the pulse, why should the emission rates after be any different from the base absorption rate BEFORE the pulse?

Under his cross-correlation argument at 25:50 etc., Salby uses the C14 concentration decay rate to calculate that the CO2 absorption rate is proportional to the abundance of CO2 (~30:20 to 30:40).
From the bomb data C14 decay rate and his cross correlation analysis, I understand Salby to show the major difference from the Bern model is that the CO2 EMISSION rate is not constant, but varies with temperature.

Salby develops the atmospheric conservation equation to then find:

CO2 growth rate = Emissions rate (proportional to temperature) – absorption rate (proportional to CO2).

Note that he finds a correlation of 0.82 for changes of CO2 with temperature and a correlation of 0.93 when including moisture etc.

See: Janice Moore Notes on Dr. Murry Salby, London 2015 Lecture.
C14 section min 26:24 – 46.
Salby goes beyond your constant CO2 addition model to form a model of an increasing trend of CO2 emissions. Janice notes:

”35:25 Equilibrium level of human CO2 (equation for when emission = absorption) . . .
“After 2002, 300% increase in CO2 per year matched by absorption rate, eventually they will be in equilibrium differing only by a constant (two parallel lines) – 37:30 — net emission then becomes a constant, thus, CO2 growth (in abundance) is constant, increasing linearly like emission.”

What am I missing from your / Salby’s arguments from my rapid reading/listening?

• David,

As human emissions are about twice the increase in the atmosphere and steadily increasing, the variability in the increase is not in the temperature dependency of the source rate but in the net sink rate.
The correlation between the variability of temperature and the CO2 increase rate shows that temperature variability is responsible for the variability around the trend, but it says next to nothing about the cause of the trend itself: human emissions increase rather monotonically, without any measurable variability in the atmosphere, and by taking derivatives you have effectively removed the trend…

• David L. Hagen April 20, 2015 at 6:34 pm

CO2 growth rate = Emissions rate (proportional to temperature) – absorption rate (proportional to CO2).

Note that he finds a correlation of 0.82 for changes of CO2 with temperature and a correlation of 0.93 when including moisture etc.

The problem with Salby’s balance equation is that he assumes the temperature dependence is due only to the source terms; this is wrong. Both the absorption rate and the emission rate from the environment depend on both pCO2 and T, and by not including the proper dependence he forces the result that he obtained.

The proper equation is:

d[CO2]/dt = Fossil Fuel emissions + Sources(CO2,T) – Sinks(CO2,T)

This balance equation is true at all timescales.
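The balance equation above can be put into a toy numerical form. This is a sketch only; the linear functional forms and every coefficient below are illustrative assumptions, not anything from Salby or the comment, and the point is merely that the source and sink terms can both depend on CO2 and T at once:

```python
# Toy integration of d[CO2]/dt = FF + Sources(CO2,T) - Sinks(CO2,T).
# Every coefficient here is an illustrative assumption.
def step(co2, temp_anomaly, ff_emissions, dt=1.0):
    k_outgas = 0.1  # ppm/yr of extra natural source per degree C (assumed)
    k_sink = 0.017  # fraction of the excess over 280 ppm absorbed per year (assumed)
    sources = k_outgas * temp_anomaly  # rises with temperature
    sinks = k_sink * (co2 - 280.0)     # rises with CO2
    return co2 + (ff_emissions + sources - sinks) * dt

co2 = 400.0
for _ in range(10):  # ten years at constant forcing
    co2 = step(co2, temp_anomaly=0.8, ff_emissions=4.5)
print(round(co2, 1))
```

With constant forcing the concentration relaxes toward a new equilibrium where the sink term balances the inputs, which is what makes the sink's CO2-dependence impossible to fold into the source side.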

• Bart says:

Indeed, David. Anyone who actually reads Salby’s work and is capable of understanding it can see very clearly that the man is brilliant, and thoroughly immersed in his subject matter. The people casually lobbing potshots at him have no such demonstrated skill. It would be funny if it weren’t so annoying.

• Bart,

Dr. Salby is brilliant in his own field, but out of his depth on the increase of CO2 in the atmosphere.
I have read what he said about CO2 in ice cores (no longer repeated in this lecture). That was simply physically impossible and would imply the death of all vegetation on earth during glacial periods…

• Bart says:

Ferdinand, you are not an ice core expert. You are just a guy who has read a few things about them, and internalized certain narratives about them.

• Bart,

I am not an ice core expert, but I know something about diffusion: if someone says that diffusion in ice cores decimated the original peak values, that implies that the lowest values measured today would be a lot lower than at the original inclusion. That is already problematic for most (C3) plants at the low levels found in the last glacial period.
As diffusion only stops when all levels are equal, finding similar peak levels in each period going 100,000 years back in time implies even larger peaks in the past, which implies below-zero CO2 values during the older glacial periods…
Salby’s comment on ice cores was not repeated in his last speech in London…

• Bart says:

“…if someone says that there is diffusion in ice cores which decimated the original peak values, that implies that the lowest values measured today would be a lot lower than measured at the original inclusion.”

Not necessarily. It depends on the duration. As an analogy, suppose you had 100 buckets with marbles in them. Buckets 1 through 99 have 20 marbles apiece, and bucket 100 has 120 marbles.

Over time, you take a marble out of the bucket with the most in it, and distribute it uniformly into the other buckets. After a very long amount of time, you have 21 marbles in each bucket. Your highest high has decreased sixfold, but your lowest has only increased 5%.
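Bart's bucket thought experiment is easy to check by simulation. A marble cannot literally be split uniformly across 99 buckets, so the sketch below uses the discrete equivalent of moving one marble at a time from the fullest bucket to the emptiest:

```python
buckets = [20] * 99 + [120]  # 99 buckets of 20 marbles plus one of 120

steps = 0
# Repeatedly move one marble from the fullest bucket to the emptiest until
# the spread cannot be reduced further.
while max(buckets) - min(buckets) >= 2:
    buckets[buckets.index(max(buckets))] -= 1
    buckets[buckets.index(min(buckets))] += 1
    steps += 1

print(min(buckets), max(buckets), steps)  # -> 21 21 99
```

The end state is 21 marbles everywhere, exactly as stated: the highest bucket falls nearly sixfold while the lowest rises only 5%.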

• Bart,

Your analogy is right, but the ice core figures are quite different. According to Salby, the measured peaks were underestimated by a factor of 10 (in another video, a factor of 15) due to migration.

The measured CO2 levels during interglacials are around 300 ppmv during ~10,000 years
The measured CO2 levels during glacials are around 180 ppmv during ~90,000 years

The original CO2 levels during interglacials, according to Salby, were 3000 ppmv. Thus 2700 ppmv was distributed over 9 times more years; that is 300 ppmv extra, which is included in the 180 ppmv found. Thus the original levels in the glacial periods were negative?

I don’t know if I have made an error somewhere; I found 30 ppmv the last time I made the calculation… but one of the two is off by an order of magnitude.
Nevertheless, 180 ppmv is already borderline for the survival of C3 plants (that is, all trees and a lot of other plants), so any substantial migration means the killing of a lot of plants…

If you go back in time, one finds again a ~300 ppmv peak during each previous interglacial. As the decay of a peak by diffusion is proportional to the difference in concentration, a doubled time needs a four-times-larger peak to give the same remaining peak, thus 12,000 ppmv, etc…
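Ferdinand's mass-balance argument is a one-line check, using only the numbers as stated in the exchange above (a measured ~300 ppmv interglacial over ~10,000 years, a measured ~180 ppmv glacial over ~90,000 years, and Salby's claimed 3000 ppmv original peak):

```python
# Mass-balance check of the argument above, using the numbers as stated:
# if the measured ~300 ppmv interglacial peak was originally 3000 ppmv, the
# missing CO2 must have smeared into the nine-times-longer glacial span.
measured_interglacial = 300.0  # ppmv over ~10,000 years
claimed_original = 3000.0      # ppmv, per Salby as quoted
measured_glacial = 180.0       # ppmv over ~90,000 years

smeared = (claimed_original - measured_interglacial) * 10_000 / 90_000
original_glacial = measured_glacial - smeared
print(round(smeared), round(original_glacial))  # -> 300 -120: below-zero CO2
```

The 300 ppmv of smeared-out excess exceeds the entire measured glacial level, so the implied original glacial concentration is negative, which is the physical impossibility being argued.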

• Bart says:

You are putting these out as though they were constant levels over, respectively, 10,000 and 90,000 years. But, e.g., it could have peaked at 3000 for a short span within those 10,000 years but been much less the rest of the time.

I don’t know the specifics of this particular analysis. Perhaps there is a puzzle to be solved here. Perhaps there isn’t. Based on his proven track record, I tend to believe Salby knows what he is talking about.

I don’t care enough to dig into it because whatever happened in the unverifiable long ago past isn’t needed to know what is happening right now. And, what is happening right now is that the rate of change of atmospheric CO2 is being driven by a temperature dependent process, and human inputs have little impact.

• olliebourque@me.com says:

” Based on his proven track record,”
..
Like his record with the National Science Foundation?
..
Or his record with Macquarie University?
..
Seriously, is he collecting unemployment these days?

• Bart says:

Seriously, what are you trying to accomplish? Do you think snarky ad hominem persuades anyone beyond your inbred, mouth-breathing, knuckle-dragging, choir of the brain-dead?

• olliebourque@me.com says:

You brought up his “record”

37. Phlogiston says:

The entire discussion on residence time can be summed up:

Lambda = t1/2 / 0.693

the end.

• Owen in GA says:

Funny, my textbooks always had it the other way… ln(2)/t1/2 or approximately 0.693/t1/2

• Phlogiston says:

Apply your equation for a moment to the half-life of uranium-238, in seconds. Lambda (lifetime) comes out a tad on the short side? In reality lambda is a bit longer than the half-life, as per my equation above.

• Phlogiston says:

Sorry – residence time is tau, not lambda. I think. That’s the misunderstanding. (This was all another life a long time ago.)
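For the record, the three quantities this sub-thread is juggling relate as follows (standard radioactive-decay definitions, nothing specific to CO2; the 30-year half-life is the caesium-137 figure from the head post):

```python
import math

half_life = 30.0  # years, e.g. the caesium-137 figure from the head post

decay_constant = math.log(2) / half_life  # lambda: fraction decaying per unit time
mean_lifetime = half_life / math.log(2)   # tau, the e-folding time

# tau exceeds the half-life by a factor 1/ln(2), about 1.443; lambda is its inverse.
print(f"lambda = {decay_constant:.4f}/yr, tau = {mean_lifetime:.1f} yr")
```

So Owen's ln(2)/t½ is the decay constant lambda, while Phlogiston's t½/0.693 is the mean lifetime tau; both formulas are right, they just name different quantities.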

38. c1ue says:

Mr. Eschenbach,

What is the source of your total fossil fuel emissions data? IPCC?

Also, I was wondering if the “Airborne CO2” numbers are specific to human derived emissions or is only a reference to increase in CO2 levels from 1960. What I can see on the internet makes it appear to be the latter.

If so, isn’t this another potential case of apples vs. oranges? The increases in global temperature since 1960 would itself increase CO2 levels from oceanic outgassing – which in turn implies the nearly perfect correlation between emissions and observed CO2 levels to be somewhat spurious. I don’t know the outgassing amount – it is probably in the single digits of CO2 emissions, but is non-trivial since it should consistently shift observed CO2 emissions to be higher than the human CO2 emissions + decay graph. Or in other words, there should be more red visible above the black. Of course, there are all manner of potential errors here: any estimates on human CO2 emissions are bound to have all manner of individual year or systemic errors.

I also recall that natural sinks increase CO2 intake as CO2 levels increase – I presume the decay referred to in your article is a function of this? The way the article is written, it implies some form of atomic phenomenon in atmospheric CO2 processing – if in fact the mechanism is the behavior of said sinks, then a wrong impression is being given (at least to me).

• Willis Eschenbach says:

c1ue April 20, 2015 at 6:37 am says:

Mr. Eschenbach,

What is the source of your total fossil fuel emissions data? IPCC?

No, they’re from the CDIAC.

Also, I was wondering if the “Airborne CO2″ numbers are specific to human derived emissions or is only a reference to increase in CO2 levels from 1960. What I can see on the internet makes it appear to be the latter.

They’re the NOAA monthly Mauna Loa Observatory CO2 observations, converted to annual values.

w.

• c1ue says:

Mr. Eschenbach,

Thank you for the clarification.

You didn’t comment on my question regarding temperature derived oceanic CO2 intake/output – does this mean you don’t consider this a factor at all?

• Willis Eschenbach says:

c1ue April 21, 2015 at 7:04 am

Mr. Eschenbach,

Thank you for the clarification.

You didn’t comment on my question regarding temperature derived oceanic CO2 intake/output – does this mean you don’t consider this a factor at all?

CO2 levels increase by something on the order of 15 ppmv per degree C of warming. Over the 20th century we saw something on the order of 0.6°C of warming. This is about 10 ppmv of increased CO2 … or about a tenth of the observed increase in CO2 over the time period. So the ocean CANNOT be the source of the change in CO2.

w.
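Willis's estimate is a one-line calculation worth laying out. The 15 ppmv/°C sensitivity and 0.6°C of warming are the figures he cites; the ~100 ppmv observed rise is the round number the thread uses for the same period:

```python
sensitivity = 15.0    # ppmv CO2 outgassed per degree C of warming (figure cited above)
warming = 0.6         # degrees C of 20th-century warming (figure cited above)
observed_rise = 100.0 # rough ppmv observed rise over the same period

outgassing = sensitivity * warming
print(round(outgassing), f"{outgassing / observed_rise:.0%}")  # -> 9 9%
```

Roughly 9 ppmv, about a tenth of the observed rise, which is the basis for ruling out the warming ocean as the main source.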

• c1ue says:

Mr. Eschenbach,
Thank you for the information on the outgassing effect.
You might note, however, that I did not dispute your conclusion regarding Salby’s views – merely that your taking of theoretical CO2 decay vs. measured CO2 levels conveys an accuracy that should not be there due to the outgassing. A 10 ppmv delta due to outgassing, subtracted from measured CO2 levels, would show much more difference between the 2 lines in question.

• wayne says:

Hi c1ue…

Careful blindly accepting Willis Eschenbach’s explanation without digging a bit deeper. Biological reactions such as co2 release with temperature increase is more than just abiological out-gassing for we do not live on an abiological planet. Here are a few links to get you going:

Just using Arrhenius’ equation to guesstimate the CO2 release of our incredibly complex, chained and interconnected ocean biology could be, as the author of one of those links mentioned, a catastrophic mistake: enzymes get involved, with rates increasing steeply over small temperature ranges. Also, it may not be the oceans but the tropics over land that are the actual culprit, as shown in the first view of CO2 concentration over the globe. The high concentrations are where you would not expect them: over the tropical forests that are now flourishing, per NASA pictures over the last decades.

• wayne,

We have a pretty good idea what the biosphere as a whole does: that is the oxygen balance.

Almost all life on earth produces and uses oxygen, in opposition to its CO2 uptake and release.

Since 1990, oxygen levels in the atmosphere can be measured with sufficient accuracy. They show that the biosphere is a net, increasing producer of oxygen, and thus a net, increasing sink for CO2:
http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf

In principle, the oxygen movements of bio-life in the oceans are included in the balance, as in most cases the ocean surface layer is in close contact with the atmosphere for CO2 and O2 exchange.

39. William Astley,

“As Salby’s notes anthropogenic CO2 continues to increase in rate post 2002 yet the rate of rise of atmosphere CO2 does not increase. That is a fact a paradox.”

This is the most important comment on this dubious thread. Mother Nature is not exponential!!! CO2 sources and sinks are quite complex, algae blooms, red tides, all of a sudden billions of jellyfish which weren’t around last year, millions and millions of bison which are not here any more, Stokes’ land use changes strangely confined to Europe, etc.

If Mother Nature were in equilibrium with atmospheric CO2 at around 280 ppm, and we suddenly began injecting CO2, atmospheric CO2 would rise 1-to-1 with our injection. This is not happening; hence Mother Nature is NOT in equilibrium, and never has been. This Revelle buffer business I am not enough of a chemist to follow, but clearly there is some mechanism at work here which we have not detailed.

• Willis Eschenbach says:

Michael Moon April 20, 2015 at 6:59 am

William Astley,

“As Salby’s notes anthropogenic CO2 continues to increase in rate post 2002 yet the rate of rise of atmosphere CO2 does not increase. That is a fact a paradox.”

This is the most important comment on this dubious thread.

It is not a paradox at all. It is a result of Salby’s cherry-picking the dates so that he can falsely assert that one is increasing and the other is not increasing. In fact both are increasing. You’ve both been suckered by Dr. Salby. See my comment to Nylo above for the ugly details and the actual trends.

w.

• “The first thing to notice is that the total amount of CO2 from fossil fuel emissions is much larger than the amount that remains in the atmosphere.”

Willis,

Yes, and why is this? The idea that a mysterious new sink, equaling around half of whatever we put into the atmosphere in a year, miraculously appeared simultaneously with the beginning of large-scale CO2 emissions is absurd.

The idea that CO2 exponentially decays, similar to U238 becoming thorium and then lead, is equally absurd. CO2 does not decay! Plants eat it and make sugars, the source of all life on our Big Blue Marble. Increased CO2 does indeed promote extra plant growth, which makes this sink bigger, and then in a year or two these plants decay, making that source bigger.

CO2 does not have a half-life. Isotopes have half-lives. CO2 is a molecule, not an isotope. I look to ocean chemistry as the driver of increased atmospheric CO2, but no one on here is explaining it in any unassailable way.

MM, for the record on an interesting but mostly dead thread: gaseous CO2 most certainly has a half-life, both in the atmosphere and the oceans. Else coal, oil, natural gas, and carbonate rocks like limestone would not exist.

• Willis Eschenbach says:

Michael Moon April 20, 2015 at 1:30 pm

“The first thing to notice is that the total amount of CO2 from fossil fuel emissions is much larger than the amount that remains in the atmosphere.”

Willis,

Yes, and why is this? The idea that a mysterious new sink, equaling around half of whatever we put into the atmosphere in a year, miraculously appeared simultaneous to the beginning of large-scale CO2 emissions is absurd.

Thanks for the question, Michael. However, I’ve not read anyone saying there are any new sinks, so without a quote about new sinks I don’t know what you are referring to.

In answer to your question, the amount of carbon entering the sinks is dependent on the amount of atmospheric CO2 present. This is true of many chemical reactions, that the amounts of the reaction products are dependent on the concentration of the reactants. Similarly, for example, when you increase the amount of CO2 above the ocean, the amount absorbed by the ocean increases correspondingly.

In addition, increased CO2 yields increased phytoplankton growth in the ocean, which increases carbon fixation and loss to the deep ocean.

So … no “miraculous new sink”, just an increase in the existing action for both a combination of known physical, chemical and biological reasons.

Regards,

w.

• Late reply to this dubious played-out thread, but it is clear that some people are confusing forests and trees. Look, either Mother Nature WAS in equilibrium, or she WAS NOT. Since it is obvious that she was not, all these turgid analyses of half-life, e-folding time, tau, pulses, etc. are missing the point. We have not successfully analyzed the increase in CO2. Why does it not equal our emissions? Why less than half? This means that the entire thing has a huge random component, which is pretty typical of nature.

“CO2 has a half-life,” absurd. CO2 remains CO2 until the mysterious and wonderful quantum reaction known as Photosynthesis occurs. Then it turns into sugars. Some other things eat the sugars, and lots and lots of other things happen. CO2 in the atmosphere does not have a half-life either, as every year our planet looks and acts differently than it did the previous year. Trappers call them “Good Years” and “Bad Years.” Since our emissions are rounding errors in Mother Nature, there is no evidence whatsoever that our emissions correlate with atmospheric CO2. The correlation is so bad there is reason to believe something else, or a lot of something elses, are causing the recent increase.

• Ristvan,

“I don’t think that word means what you think it means.” “Half-life” refers to radioactive decay and only radioactive decay. Biological processes do not factor into half-life. Mother Nature is not exponential. Yes it is an important question, but, that being said, no one on here has shed much light on the increase in atmospheric CO2. What we do know is, equilibrium has nothing to do with it. All the mathematical procedures to analyze a change in an equilibrated system do not apply here.

What is happening instead? “Anyone? Bueller?”

• Peter says:

Michael, there is one simple mechanism at work. After a few million years, all biosphere carbon would be sequestered in carbonate rocks – limestone. No life existing; CO2 at a level too low to allow life. So there must be a mechanism to put CO2 back from limestone rocks into the atmosphere.
So actually CO2 and CH4 seep back from underground when limestone is cooked at depth with enough pressure, temperature, and hydrogen.
If you agree with this, then you must also agree that the amount of this CO2 and CH4 outgassing depends exactly on the amount of limestone currently entering the right “cooking zone”. And that in turn depends on how much life there was, wild guess, 100 million years ago.
If only a small amount of limestone is due to enter the “cooking zone”, we are heading to an ice age, human emissions or not.

40. Alan Robertson says:

Where’s Bart?

41. I agree that Salby should not have tried correlating CO2 accumulation with emission rates. On the other hand, comparing CO2 accumulation to total emissions is not the solution either. You would get similar results comparing CO2 accumulation with global population for the same period. All three factors are covariant.
It is better to compare CO2 accumulation rates with emission rates. My statistical analysis does this. Click on my name and critique my work.

42. Martin A says:

“The 14C bomb test data (blue line) shows how long an individual CO2 molecule stays in the air.” WE

I assume it shows how long on average an individual CO2 molecule stays in the air.

I’m trying to understand what happens to a mass of CO2 instantaneously injected into the air (a “dollop” to coin a phrase) and to get my head around what is being discussed here.

I’d be grateful if someone would explain (in simple terms, please) if there is a difference between the average time an individual CO2 molecule of a dollop of CO2 stays in the air and the average time all of the molecules of the injected dollop remain in the air.

Is there a difference between this average time and the time for 50% of the injected dollop to have disappeared from the air?

• bw,

Indeed both are wrong: Gösta Pettersson uses the 14C bomb spike, which is diluted by the deep-ocean return and therefore decays much too fast, while the IPCC uses a model which includes rapid saturation of the deep oceans, for which there is currently not the slightest indication… But the Bern model, for now, is nearer to observations than Pettersson’s model…

• paqyfelyc says:

Pretty simple: suppose that you have 60 liters of water in your body, and an average intake of 2 liters a day.
* the average residence time of a H2O molecule in your body is 60/2 = 30 days, about a month
Now, what happens if, some day, you drink 3 liters instead of 2? How long will you have 1 extra liter of water, that is 61 liters, in your body? Will you keep it forever? Piss/sweat it away in half an hour?
This is the average residence time of the injected dollop. And it can be shorter or longer than the previous one-month residence time of the average molecule.
Please note that the average residence time will be only marginally affected by the 1-liter dollop: no matter what, it will stay pretty much around one month.
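The body-water analogy in numbers (a direct sketch of the figures as stated in the comment, nothing more):

```python
volume = 60.0      # liters of body water
throughput = 2.0   # liters drunk (and excreted) per day

residence_time_days = volume / throughput
print(residence_time_days)  # -> 30.0: about a month, as in the comment

# The extra liter from a 3-liter day is cleared on whatever timescale the
# kidneys and sweat glands set; that adjustment time is independent of the
# 30-day residence time of the average molecule, and can be shorter or longer.
```

The residence time is a ratio of stock to flow; the dollop's decay time is set by how the system responds to a disturbance, and nothing forces the two to match.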

• Martin A:

Here is the curve for a 100 GtC injection of fossil CO2 into the atmosphere, where the difference between the residence time and the decay rate of the total peak is clear:

Where FA is the “human” fraction in air, FL in the ocean surface layer, nCA is “natural” CO2 in the atmosphere and tCA total CO2 in the atmosphere.

While all the excess CO2 over the full period is caused by the human injection, the original human molecules are quickly exchanged with natural CO2 molecules, and the fraction of human CO2 is reduced to near zero after a few decades. The peak itself goes down at a much slower rate and is still measurable after 160 years, 100% caused by the human injection…
Taus used: 5.3 years for the residence time, 51 years for the e-fold decay time.
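Ferdinand's curves can be approximated with his two quoted time constants (a sketch, not his actual calculation; the 100 GtC pulse is the one described above):

```python
import math

tau_exchange = 5.3  # years: residence time of individual molecules (as quoted)
tau_decay = 51.0    # years: e-folding time of the excess mass (as quoted)

pulse = 100.0  # GtC injected at t = 0

for t in (10, 50, 160):
    excess = pulse * math.exp(-t / tau_decay)       # GtC still above equilibrium
    original = pulse * math.exp(-t / tau_exchange)  # GtC of the original molecules left
    print(f"t={t:>3} yr: excess {excess:5.1f} GtC, original molecules {original:7.4f} GtC")
```

By 50 years the originally injected molecules are essentially gone, while over a third of the excess mass remains; at 160 years a few GtC of excess is still there, matching the "still measurable" claim.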

• Martin A says:

Thank you.

• David L. Hagen says:

Ferdinand
Please clarify whether you kept the natural EMISSION (source) rate constant at the previous equilibrium base rate and only pulse-increased the atmospheric concentration.

• David,

In the above graph, the natural carbon cycle remained constant at ~150 GtC/year in and out, assuming no change over the whole time span. That is what gives the fast drop of “human” CO2 in the atmosphere, while the excess mass above equilibrium drops much more slowly…

43. Dave in Canmore says:

Nick Stokes says ” The initial rise (of CO2) was due to the forest clearing that came with European colonisation.”

When you cut down trees, the land doesn’t lie dormant. Nature abhors a vacuum as they say and other plants take their place about as fast as you cut them down (much to my chagrin as I work in forestry.) In the northern hemisphere, the grasses that replaced the harvested trees sequester just as much carbon. The notion that colonials cut down trees and nothing replaced them(?!) comes from an armchair speculator unfamiliar with the real world. Perhaps in poor tropical soils this idea might have more traction but the unfamiliarity with how actual biospheres work puts this statement into my bs bin.

• “In the northern hemisphere, the grasses that replaced the harvested trees sequester just as much carbon.”

They can’t. Where is it? Trees have vastly greater mass density per area. That is where the carbon is sequestered.

• Phlogiston says:

Where is it?
It’s in the animals that eat the grass. If the land is turned over to agriculture, that means us.

• milodonharlani says:

The area of the earth in forests diminishes greatly during NH glaciations, while CO2 crashes from balmy interglacial levels. Grassland and tundra spread, supporting vast numbers of massive megafauna. Similarly, American Indian burning of the long- and short-grass prairies improved habitat for tens of millions (at least) of bison & other large herbivores.

O2 produced per unit of ground area is a measure of photosynthesis. For ground area covered by a single plant species, a tropical grass or other species with C-4 photosynthesis, such as the crop corn (maize), would win. For natural ecosystems per unit ground area, tropical rain forests generally have the highest annual rates of photosynthesis. Bear in mind, however, that while forests store a mass of carbon in wood, grasslands, depending upon conditions, are capable of adding more mass per year.

• c1ue says:

I can believe that mass tree clearing would increase CO2 levels *if* all or most of the wood was then burned. The question I would have is what proportion was used to build things like ships, fences, buildings, and so forth. Wood used for building would return its carbon only slowly, through organic decay – termites, mold, and whatnot.
At least in Europe in the latter stages, old-growth trees were primarily cut down to build ships.

44. Mike M. says:

Willis,

Nice post, but there is a bit of a problem. As others have pointed out, you start with an oversimplified model that assumes first-order kinetics (common, but by no means universal) for a single process (uncommon) that is irreversible (very rare). I think your discussion of Salby’s nonsense makes it clear that you know better. The problem is that the oversimplified reasoning of Figures 1, 2 and 3 is what leads to the type of errors that Salby makes. So I think it ends up undermining your eventual point.

45. David in Texas says:

For Clarity: “Sadly, Dr. Salby has proven to me that regarding this particular subject he doesn’t understand what he’s talking about.”

Here is what I disagree with: being arrogant and abusive (my perception) does not strengthen your argument, but rather distracts from it. It says that you are prone to fallacious reasoning – name calling. Do you really care about the “science, science, science”, or is name calling equally important?

I hope that you don’t feel like I’m picking on you. I admire your abilities, just not your demeanor.

“He’s comparing ppmv of CO2 per year to plain old ppmv of CO2, and that is a meaningless comparison.” Actually, he is comparing the rate of growth in ppmv of CO2 in the atmosphere to the rate of growth in human emissions of CO2 (in GtC). I would contend, respectfully, that this is not meaningless. You gave an example of “constant annual pulses of amplitude 1” with a “Half-life = 6.9 years”. Question: if the pulse size changes in year 50, wouldn’t the slope of your Figure 2 change in year 50 also?

Now I too have a problem with what Dr. Salby was saying. I calculated the growth rate of CO2 emission from 2001 through 2013 (data here, ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_annmean_mlo.txt) and get 2.0572 ppmv/yr which agrees with his 2.1 ppmv/yr. I next calculated from 1991 through 2001(his green line seems to start in 1991) and got 1.661 ppmv/yr growth rate. Choosing other intervals (1995 to 2002 yields 1.7708 ppmv/yr) does not help much.

So I was not able to reproduce Dr. Salby’s results, but I don’t feel compelled to beat my chest and call him names. It is just possible that I may be wrong.

For Clarity: “Airborne residence time (bomb test data): how long an individual CO2 molecule stays in the air.”
I disagree. This is how long a whole class of CO2 (with many billions of molecules) stays in the air. It seems to be a reasonable way of calculating tau. Again, I may be wrong.

You did say “I can only agree” that the decay of CO2 is proportional to the abundance of CO2, but your tau is 59 years. Questions: is Dr. Salby’s tau wrong? Is his approach wrong? If so, why?
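The growth-rate comparison a few paragraphs up is just an ordinary least-squares slope over a window of the annual means. A generic sketch of that calculation; the demo series below is synthetic, not the actual NOAA file linked above, which would be substituted in to reproduce the 1.66 and 2.06 ppmv/yr figures:

```python
def growth_rate(series, start, end):
    """Ordinary least-squares slope (ppmv/yr) of annual means over [start, end]."""
    pts = [(y, v) for y, v in series if start <= y <= end]
    n = len(pts)
    sx = sum(y for y, _ in pts)
    sy = sum(v for _, v in pts)
    sxx = sum(y * y for y, _ in pts)
    sxy = sum(y * v for y, v in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Synthetic stand-in series rising 2 ppmv/yr; swap in the NOAA annual-mean
# file linked in the comment to reproduce the actual 1991-2013 numbers.
series = [(y, 355.0 + 2.0 * (y - 1991)) for y in range(1991, 2014)]
print(round(growth_rate(series, 2001, 2013), 2))  # -> 2.0
```

As the comment found, the answer is sensitive to the window chosen, which is exactly why the choice of start year matters so much in these disputes.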

• Mike M. says:

David in Texas wrote: “It says that you are prone to fallacious reasoning – name calling.” Not so. First Willis demonstrated the fallacies in Salby’s reasoning, then he drew a conclusion. Perhaps unkind, but not fallacious.

• David in Texas says:

Perhaps I missed the theme of the essay. If it was to discuss the hypothesis that the rate of CO2 accumulation in the atmosphere should be related to the rate of human emissions of CO2, name calling is fallacious reasoning. Not all hypotheses espoused by intelligent people are true, and not all hypotheses espoused by slow people are wrong. Aristotle was the first to systematize logical errors into a list. He used broad groups. This would fall under the category of an ‘irrelevancy’.

On the other hand, if the main theme of the essay was to launch a personal attack on Dr. Salby, you may be right. Or perhaps Willis was multitasking.

As I said, I respect Willis. I just wish that he would not make personal attacks.

46. As usual Willis uses data in a way that muddles the reality of the situation.

In contrast Dr. Salby, laid it out in the data a very clear cut concise manner.

47. CORRECTION – In contrast, Dr. Salby presented the data in a very clear, concise manner.

• Salvatore,

But as Steve McIntyre ought to say: “watch the pea under the thimble”: he treats all of the increase in the atmosphere as “temperature induced” – not only the variability, but also the offset and slope of the derivative. Of course that shows that temperature is the only cause of the increase when he back-integrates it… But as 95% of the offset and slope is caused by the human emissions, that is like a magician before his public: pure illusion.

One point is sure: either he publishes what he has done here at WUWT or anywhere on the net for comment, or he is the next unreliable source of a lot of wrong statements, which gives all skeptics a bad name…

48. BFL says:

Why did fossil fuel emissions suddenly take a turn for the better in 2002? Is this another “adjustment” like for the temp record?

• Leo Smth says:

I think that was the global economic crash…but I could be wrong…

• richard verney says:

The global economic crash was not in 2002, it was in 2007/8.

Since the crash was caused by financial bubbles, it is unlikely that industrial activity slowed down prior to then, and indeed it is probable that any slowdown in global industrial activity lagged, especially as Chinese industrial output initially continued to grow. See http://www.business-in-asia.com/china/images/2008graph_gdp.jpg

• BFL says:

No, no, figure 4 shows emissions going UP drastically about 2002 (GtC/Yr) (that’s what I meant by a turn for the better, because that’s the way I feel about it). However CO2 levels didn’t follow. That’s one of the reasons for the suggested adjustment (like they do with temp data).

• @ Richard Verney…there was a big market crash at the end of 1999. The market dipped down close to Dow 5,000 at the time.

• Mike M. says:

“Why did fossil fuel emissions suddenly take a turn for the better in 2002?”

There are a lot of wiggles in the data. I think emission growth in the 80’s was higher than in the 90’s. Salby seems to be cherry picking the start and stop points he uses for his analysis.

• Mike says:

Indeed it does. Rate of change of CO2 was actually reducing from 1995-2002, something he tries to fudge over with his single straight line on the CO2 time series with enough annual wiggle so that it is not apparent in a slide show.

If we look at rate of change and filter out the wiggle we get a clearer picture.

Now we see that average rate of change since 1995 has been higher than most of the earlier period of the MLO record. Very close to 2ppmv / year.

But what we also see is that the rate of change has been pretty flat in that time, in contrast to the earlier period. Now does that remind us of any other climate parameter that is having a little pause??

Ah yes, hasn’t temp kinda slowed down too. Maybe there’s a link.

We also see the 1998 El Nino and the “hot” years of 2003, 2005 and 2010. There is also a dip matching the cooling effect of Mt Pinatubo.

So despite Salby’s presentation being a bit like that of a double-glazing salesman, that does not mean he is wrong.

It certainly looks a lot more like the global temperature record than it does the ever increasing human CO2 emissions.

• Mike,

The high variability in the sink rate doesn’t allow any conclusion. Take the period 1976-1996, or even cut it before the Pinatubo: a decrease in rate of change, while temperatures had their maximum increase in that period and human emissions were increasing too…

That is all natural variability in the sink rate.

49. Is there a difference between this average time and the time for 50% of the injected dollop to have DISAPPEARED from the air?

Technically it’s the median time for removal of the C14; the average (mean) will be somewhat longer because the distribution of the lifetimes is skewed towards longer lifetimes.
The difference between the C14 lifetime and the lifetime of CO2 in the atmosphere is, as explained above, the difference between first-order decay and near-equilibrium kinetics. In the case of the bomb test, the level of C14 in the atmosphere doubled from the natural level and the amount absorbed made virtually no difference to the concentration in the ocean; consequently it’s a one-way process. However, the annual increase in pCO2 is less than 1%, so you’re looking at the response to a small perturbation from equilibrium: a slightly unbalanced two-way process.
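The median-versus-mean point can be checked directly: for pure first-order removal the individual lifetimes are exponentially distributed, so the median is tau·ln(2) (the half-life) while the mean is tau itself. A quick simulation (tau = 16.5 yr is an illustrative value, not a figure from the post):

```python
# For pure first-order removal, lifetimes are exponentially distributed:
# median = tau*ln(2) (the half-life), mean = tau, so the mean is indeed
# somewhat longer. tau = 16.5 yr is an illustrative value only.
import math
import random

random.seed(0)
tau = 16.5
lifetimes = sorted(random.expovariate(1 / tau) for _ in range(200_000))

mean = sum(lifetimes) / len(lifetimes)
median = lifetimes[len(lifetimes) // 2]

print(median)  # ≈ tau * ln 2 ≈ 11.4
print(mean)    # ≈ tau = 16.5
```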

50. paqyfelyc says:

Well, of course The 14C data and the Bern Model, “are measuring entirely different things”, but, still, we can compare and plot them on the same graph. In fact, as stated by
TonyL April 20, 2015 at 5:26 am,
any model (including the Bern Model) is the sum of an A -> B process, which the C14 data shows to be an exponential decay law [on the month/year time span], and a B -> A process. (“A”: atmosphere; “B”: biosphere, land, ocean, etc., but NO new CO2 injection: no volcanoes, human use, etc.)
So all you have to do is to plot them on the same graph to plot the difference, which will be the B -> A process according to the Bern Model.
I guess this will show how hilariously and counter-physically the parameters of this model have been chosen. The model itself is pretty trivial; its parameters are not, and they make it nonsense, because it essentially states that the more CO2 in the atmosphere, the more the “B” part of the cycle will send back into the atmosphere (when chemical and biological law says the exact opposite).
I guess the Bern Model has a hidden built-in warming process: the more atmospheric CO2, the more warming, and the more warming, the more ocean and biosphere carbon release…

in short : it makes MUCH sense to plot the decay plot and the Bern Model on the same graph.

• paqyfelyc,

The problem is that for the 1960 situation, at the peak of 14C from the bomb tests, the return from the deep oceans was quite different for 12CO2 and 14CO2:
100% 14CO2 going A -> B, 45% going B -> A
100% 12CO2 going A -> B, 97.5% going B -> A

Which gives a hell of a difference in decay rates…
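Using the return fractions quoted above, the net removal per exchange cycle differs enormously between the isotopes. This is an illustrative calculation (net removal per unit of gross flux), not Ferdinand’s own:

```python
# Illustrative arithmetic (not Ferdinand's own calculation): net removal
# per unit of gross exchange flux, using the 1960 return fractions above.
going_out = 1.00                  # fraction carried A -> B
back_14c, back_12c = 0.45, 0.975  # fraction returned B -> A
net_14c = going_out - back_14c    # 55% of the gross flux removed net
net_12c = going_out - back_12c    # 2.5% of the gross flux removed net
print(round(net_14c / net_12c, 1))  # → 22.0
```

Per exchange cycle, then, the net 14CO2 removal is over twenty times the net 12CO2 removal, which is why the decay rates differ so much.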

The main problem with the Bern model is that it expects a rapid saturation of the deep oceans (for which there is no sign), which means that vegetation must take over, and vegetation has a much slower sink rate…

• paqyfelyc says:

I missed your first point (different B -> A rates for C12 and C14), thanks. But does it change the whole picture?

• paqyfelyc,

While the decay process is similar, the 14CO2 decay rate is much faster than for 12CO2; the difference is at least 3-fold…

51. nickreality65 says:

Speaking of conflating – 1 lb of carbon produces 3.67 lb of CO2 (12 + 2×16 = 44; 44/12 ≈ 3.67).
Not all carbon ends up as CO2.
Another case of apples and pomegranates, pls.
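The 3.67 factor is just molar-mass arithmetic, spelled out:

```python
# Molar-mass arithmetic behind the 3.67 factor: CO2 is one carbon (12)
# plus two oxygens (2 * 16), so each unit mass of carbon fully burned
# to CO2 yields 44/12 units of CO2 by mass.
C, O = 12.0, 16.0
co2_per_c = (C + 2 * O) / C
print(round(co2_per_c, 2))  # → 3.67
```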

52. LesterVia says:

Even the IPCC reports acknowledge that the natural sources and sinks of CO2 dwarf man’s contribution, and that most of the natural sources and sinks are related to vegetation rather than the oceans. It seems to me that, as far as CO2 is concerned, the atmosphere should behave more like a river than a reservoir, with the partial pressure of CO2 indicating its flow rate. Using the IPCC’s figures for the 1990s, if all sources of CO2 suddenly ceased while the sinks continued removing CO2 at their present rate, the CO2 in the atmosphere would completely disappear in less than 3 years.
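The “river versus reservoir” point boils down to a turnover time: stock divided by gross outflow. The round numbers below are illustrative values of the kind cited for the 1990s (~750 GtC in the atmosphere, gross natural sinks of order 200 GtC/yr), not exact IPCC figures:

```python
# Turnover-time sketch: atmospheric stock divided by gross sink flux,
# with all sources switched off. Round illustrative numbers, not exact
# IPCC figures.
stock_gtc = 750.0             # carbon in the atmosphere, GtC
gross_sink_gtc_per_yr = 200.0 # combined natural sinks, GtC/yr
turnover_years = stock_gtc / gross_sink_gtc_per_yr
print(turnover_years)  # → 3.75, i.e. a few years, the order LesterVia cites
```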

We know that plants respond to higher levels of CO2 by growing faster thus removing CO2 at a faster rate and consequently, producing CO2 at a faster rate when they decay. Both processes are highly temperature/weather dependent and should not be considered as constant as the alarmists seem to do. If the earth’s climate has been warming, then the atmosphere’s CO2 flow rate has been increasing from all natural sources thus resulting in a higher atmospheric CO2 partial pressure. There should be a numerical factor that expresses the relationship between flow rate and partial pressure of CO2. Then that factor can also be used to express man’s contribution to the flow rate of CO2 through the atmosphere and its contribution to the partial pressure increase.

It seems to me that man’s contribution would be rather small compared to natural changes if one assumes an increase in CO2 flow rates are the cause of the recent increase in atmospheric CO2 partial pressure. The alarmists, however, would have you believe that a flow increase from a small stream feeding a large river will cause a major flood by assuming the river’s flow rate will remain constant.

53. Willis, nice post. Figure 3 is particularly informative, showing that about 42% of the CO2 produced is taken up by sinks.
I agree Salby’s arguments are wrong. Perhaps better said, ‘not even wrong’. But on different grounds. The essence of his theory – that ‘natural’ CO2 contributions swamp a minor anthropogenic contribution – is ‘fast’ (he calculates 10 months) temperature-sensitive carbon-cycle source-sink changes. There are two fundamental observational falsifications.

First, CO2 has continued to rise in the 21st century while temperature hasn’t. That is a lot longer than his deduced 10 month natural response time.

Second, there are only two large potentially temperature-sensitive source-sinks. Land is acting as a net sink. This is proven by ground-truthed satellite observation of NDVI. CO2 ‘fertilizes’ C3 plants, which comprise >85% of terrestrial biomass. Hence the greening Sahel, because C3s transpire less water with more CO2. Oceans are also acting as a net sink, independent of probable increases in biomass/calcification, simply via Henry’s Law. Surface water (mixed layer) pCO2 is increasing in lockstep with atmospheric CO2 concentration, measured at station Aloha north of Oahu and station BATS west of Bermuda. So the observed CO2 rise is anthropogenic, since vulcanism is not changing much in either annual eruptions or their VEI (as you previously posted).

• Mike says:

“Lockstep” is one of those words, like “robust” that usually indicates it’s anything but.

Could you point out the “lockstep” in the graph I posted above. Looks more like lockstep with SST to me. To be realistic both effects are mixed but the short term variability is certainly temperature driven.

• Mike says:

To put some numbers on that: fitting 1960-1995, the slope of the rate of change was an increase of 2 ppm/yr per century; fitting 2011-2015 gives 0.44 ppm/yr per century.

The rate of increase has fallen dramatically while the human emissions are getting ever stronger. Clearly some other factor is at play.

• Search Google images for pCO2 Aloha (or BATS). The charts are self evident and one click away.

• Mike says:

Yeah, right. That is usually what I find “lockstep” means, like I said.

That Indian Ocean buoy actually makes a good case for an inverse relationship. Thanks for the tip. ;)

• Mike says:

Oops, seems to have lost the Indian one:

• Mike, you are looking at the second derivative of a noisy system. So you can prove anything and nothing by choosing the right start and end dates: taking 1976-1996 even shows a negative rate of change with maximum increasing temperature and increasing CO2 emissions…
And looking at a fraction of a year is like looking at daily temperatures over a month to deduce something like a temperature trend over years…
The longer term trends of several sea stations can be seen at:
http://www.tos.org/oceanography/archive/27-1_bates.pdf

One can try the other way out and look at what the increase in the atmosphere should be if that is a direct function of human emissions minus a function of the pCO2 difference between atmosphere and equilibrium pCO2 for the ocean temperature of each year. That gives the following graph (the red line):

Still widely within natural variability, so why all the fuss over a few years of a not increasing rate of change in the atmosphere, but by far still more natural sink than natural source, thus mostly human?

– Equilibrium base 290 ppmv pre-industrial.
– Equilibrium change: 8 ppmv/K.
– Linear sink rate: 2.15 ppmv at 110 ppmv pCO2 above equilibrium.
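Those three parameters define a very simple bookkeeping model. A minimal sketch of that bookkeeping (the 400 ppmv / 4 ppmv/yr / 0 K inputs in the example are placeholders, not Ferdinand’s actual series):

```python
# Minimal sketch of the linear-sink bookkeeping: 290 ppmv pre-industrial
# equilibrium, 8 ppmv/K sensitivity, 2.15 ppmv/yr sink at 110 ppmv above
# equilibrium. Example inputs below are placeholders.
def step(c_atm, emissions_ppmv, temp_anom_k,
         c_eq0=290.0, sens=8.0, sink_per_ppmv=2.15 / 110.0):
    """One year: add human emissions, subtract a sink proportional to
    the excess over the temperature-adjusted equilibrium."""
    c_eq = c_eq0 + sens * temp_anom_k
    sink = sink_per_ppmv * (c_atm - c_eq)
    return c_atm + emissions_ppmv - sink

# At 400 ppmv (110 above the 290 equilibrium) the sink is 2.15 ppmv/yr,
# so 4 ppmv/yr of emissions leaves a rise of about 1.85 ppmv/yr.
print(round(step(400.0, 4.0, 0.0) - 400.0, 2))  # → 1.85
```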

• Peter says:

Ristvan, are you sure that land is acting as a net sink? Does the measurement of CO2 sources/sinks also take methane into account? Methane is part of the carbon cycle, and one would expect carbon to return from the land cycle as methane, not CO2. It enters the atmosphere and changes to CO2 later, in some other place.

• Peter,

Methane is rapidly oxidized into CO2 with a half-life of ~10 years; it represents less than 1% of the annual CO2 cycle. Total biomass is a proven sink, thanks to the oxygen balance: more O2 released than used, thus more CO2 uptake than release – the earth is greening…

54. Scott says:

I am not qualified to say in all the above posts/arguments who is right and who isn’t.
The part that gives me hope is that what we have had here is a TRUE DEBATE!
This is in stark contrast to the “Warmist/Religious” Climate sites that state, “The Science is Settled”….
Congratulations to us all!….

Looking forward to a peer reviewed paper on Dr. Salby’s work.

• Scott says:

Obviously…….after he finally publishes it.

55. Very well explained Willis. I totally agree but I could not have explained it with such eloquence.

I have seen many people making the same errors as Salby though.

56. Brian says:

About your digression, pondering “exactly how a cheap sheet metal and plywood desk would ward off an atomic bomb”

Studies of Hiroshima and Nagasaki have shown us exactly how. Most of the energy from the bomb is carried away by the flash – at the speed of light. The concussion of the blast propagates at about the speed of sound. Some survivors of the Hiroshima flash were less than 300 metres from the “ground zero” of detonation, but were shielded from the flash and the blast by a building they were inside of. Post-detonation photographs revealed that some buildings still stood.

It depends upon how far away one is, from the detonation. Within a certain radius, of the places with direct line-of-sight to the flash, Hiroshima blast survivors illustrated that even a single plant-leaf, between the flash of the explosion, and the victim, made a noticeable difference in the burns they received. A sheet of paper made a difference. These were survivors – not the dead. A cheap wooden desk, between the flash, and the victim, could very well make the difference between the victim surviving, or perishing.

Word spread, of the experience of skies full of formations of bombers. People were not afraid of an unescorted, high-flying, single airplane. As the shiny aluminum bomber was flying by, lots of people stopped and looked up – many were shirtless, labouring outside on a hot day. People who were inside came to the windows and watched the single, silent, falling object – and were pelted with shattered glass as a result. Had these people sought shelter – any shelter – instead of standing with a sheet of glass between them and the blast, many, many more would have survived.

Really simple “duck and cover” techniques can make the difference between long life and painful death… don’t discount it. With the “mutually assured destruction” stand-off between the USA and the Soviets, any discussion of “duck and cover”, or of any survival behaviour or preparation, was cast off – and a prevailing opinion of “if it happens, we’re all gonna die” propagated. This is patently false. That attitude reinforced both governments’ “assured destruction”, which would be less assured if the governments had kept up the training of how to survive a nuclear war. I cannot say whether the government of the USA perceived this and rationally decided to de-emphasize “duck and cover”, or whether the ordinary people (employed by government) suffered the same “we’re all gonna die” psychological depression… but the government of the USA clearly did reduce and eliminate “duck and cover” and other preparedness. One does not need an extensive underground cave with years of food and water preserved… there are simple behaviours that people could do that would double their chances of living long after a nuclear explosion.

• Willis Eschenbach says:

Yeah, I know that “duck and cover” is actually a good strategy … it just seemed like it was all running on the assumption that you’d have time to duck or cover. In any case, as you point out, being prepared to survive is a good thing and is valuable whether it’s an earthquake, a hurricane, or any other catastrophe.

w.

• Catherine Ronconi says:

Anyone surviving the flash would have time to duck and cover from the blast effects, unless very close to GZ. Some have suggested that the drive to suburbanize during the ’50s was partly from fear of living near the center of a likely target city.

• James Strom says:

Seems to me the ridicule of “duck and cover” came mainly from the left. Seems I recall songsters of the Pete Seeger, Tom Lehrer, etc., type mocking the idea. In popular culture there was a movie, “Matinee”, wistfully remembering the Cuban Missile Crisis, in which the female lead, a girl from a leftist family, expressed just the opposite of Brian’s views (which I believe are correct). The film is probably anachronistic, but its attribution of skepticism to the left is probably right.

• Steve P says:

Please list the simple behaviors you’d recommend in the wake of a nuclear blast.

• Steve P says:

Brian April 20, 2015 at 10:01 am

there are simple behaviours that people could do that would double their chances of living long after a nuclear explosion.

• Steve P says:

Bart April 20, 2015 at 7:06 pm

Thanks. Apparently, iodine is commonly administered orally with tablets of Potassium Iodide.

I participated in those duck & cover drills as a schoolboy back in the 50s. My faded recollection is that the drills were held for only a couple of years before the nation’s attention drifted to Elvis and tail-fins, but we also had the occasional Civil Defense movie that had some of the kind of information I hope Brian would provide because, you know, these are not our father’s bombs.

Downstream, evanmjones April 20, 2015 at 9:11 pm

If there is one thing in the world more misunderstood than climate it is nuclear war.

No argument there. evanjones also offers further valuable insights on thermonuclear war, including a link to a sampling of a book of the same name for which he wrote the introduction. I’ve just had a peek at it…

Well anyway, getting back to those simple steps, and assuming I’ve managed to have something, anything between me and the blast, so the flash is blocked & I’m not fried, and owing to my lightning reflexes and superior athletic conditioning, I’ve ducked and covered like, well, an old hand, and I’ve popped a few Potassium Iodide tablets, and start fishing around for the remote…

What next? I’d offer that the next decision would be to sit tight or evacuate. Of course, in So. Cal. we have all those great freeways like the 10, 405, 210, 101, and 5 that make getting around such a breeze that evacuation would have to be a prime option if you happen to be one of the lucky survivors, but everything in your vicinity has been blown away.

Just zip on out to your cool & well-stocked mountain hideaway/bachelor pad with the barbwire fence, and wait out the catastrophe in comfort.

Disclaimer: I was a Boy Scout, and I agree with Willis:

Be Prepared

57. Michael D says:

You say: Before I move on, please note that the amount remaining in the atmosphere is not a function of the annual emissions. Instead, it is a function of the total emissions, i.e. it is a function of the running sum of the annual emissions starting at t=0 (blue line).
Not strictly true – obviously, because t=0 is not a meaningful time. The amount remaining in the atmosphere is a low-pass filtered version of the annual emissions. If you ignore saturation effects and the increasing absorption coefficient as forests grow, it is the very simplest form of low-pass filter: (k1 / (k2 + s)), where s represents the frequency, k2 is the inverse of the time constant, and k1 is the integration coefficient. Thus if emissions rise or fall, the amount remaining will gradually rise or fall over a period of about ten years.
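A discrete version of that filter makes the behaviour concrete; the constants below are illustrative, not fitted to the carbon cycle:

```python
# Discrete sketch of the k1/(k2 + s) low-pass filter: the airborne
# amount integrates emissions with gain k1 and leaks away at rate k2.
# Constants are illustrative, not fitted to the carbon cycle.
def lowpass(emissions, k1=0.45, k2=0.1, dt=1.0):
    y, out = 0.0, []
    for x in emissions:
        y += dt * (k1 * x - k2 * y)  # integrate input, first-order leak
        out.append(y)
    return out

# A step in emissions settles toward k1/k2 times the step size over a
# time scale of roughly 1/k2 (about ten years with these constants).
resp = lowpass([1.0] * 100)
print(round(resp[-1], 3))  # → 4.5 (= k1/k2)
```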

58. Thank you Willis. The litmus test of the Bern model will be the behaviour of the airborne fraction in the coming forty years. I personally think the sinks won’t saturate as the Bern model predicts.
Compare the pulse response for a constant airborne fraction:

• I personally think the sinks won’t saturate as the Bern model predicts.
Is it that the Bern model is showing sink saturation or is it sink starvation?

If plant life thrives in 500 ppm CO2 compared to 300 ppm, then the decay rate must be a function of the concentration.

What does the Bern model say if we start from a 600 ppm CO2 concentration?
Do we get to 320 ppm in 100 years instead of 80?
Is the hyperbolic asymptote at 310 ppm (sink-starvation level?) regardless of starting CO2 concentration?

• Hans, you might want to distinguish between sinks. It is possible that terrestrial sinks might saturate at some point, if decay caught up with growth. Not possible in the oceans, for two reasons. In physical chemistry, Henry’s Law does not saturate. In biology, calcification by diatoms and coccolithophorids does not saturate; whether and by how much it might be affected by ‘acidification’ thanks to Henry’s law is uncertain, given the ocean’s chemical buffering.

I find change in sink rate, but not saturation, plausible.

• What I do observe when tundra turns to forest is that biomass keeps increasing until a tropical-rainforest equilibrium has been reached. The mild temperature increase stimulates plant growth, also in midlatitudes, where the greening of the earth is dominant. We are a long way from saturated plant growth; I think we need to go back to Carboniferous conditions for that to happen.

• ristvan,

The terrestrial sink doesn’t saturate, but the oceans do. Henry’s law applies only to the small part of CO2 dissolved in seawater (1%), not to bicarbonates (90%) or carbonates (9%).
The result is that a 100% change in the atmosphere gives a 100% change in free CO2 in seawater, but that is only 1% of all CO2 (DIC: dissolved inorganic carbon) in seawater.

Thanks to the following chemical reactions, the total extra CO2 uptake in seawater is about 10% of the change in the atmosphere. That is the Revelle/buffer factor. See:

The ocean surface is readily saturated, with an exchange speed of 1 year in the Bern model, but saturated at 10% of the change in the atmosphere. That is also observed.
The next sinks are the deep oceans, which have much more capacity but a much slower exchange rate. The saturation of the deep oceans is the main problem, in Hans’s opinion and mine: no saturation in sight, while there should be, according to the IPCC.
The next sink is vegetation, with near unlimited capacity but a much slower rate than the oceans (currently ~1 GtC sink rate for ~110 ppmv above equilibrium)…
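The Revelle/buffer arithmetic above, as a one-line sketch (the 30% rise is an illustrative input, not a figure from the comment):

```python
# Sketch of the Revelle/buffer factor: a relative change in atmospheric
# pCO2 produces only ~1/10th of that relative change in total dissolved
# inorganic carbon (DIC) in the ocean surface layer.
revelle_factor = 10.0
d_pco2_rel = 0.30                       # e.g. a 30% rise in atmospheric pCO2
d_dic_rel = d_pco2_rel / revelle_factor
print(round(d_dic_rel, 2))  # → 0.03, i.e. only a 3% rise in surface DIC
```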

59. The Bomb C14 data is a valid Lower Bound on residence time for the Bern model.

But what is a valid upper bound on residence time to check the Bern model?

It is all well and good to realize that the C14 isotope is sequestered at one rate, but reemitted into the atmosphere at another much more diluted rate. Sure.

But is there an assumption that the C14 re-emission rate is zero, since C14 comprises so small a fraction of total non-atmospheric carbon? That, I think, may be in error.

Carbon cycle respiration, atmosphere to biosphere to atmosphere may better be viewed as a LIFO system, Last In, First Out. So much of the carbon load goes into leaves and food plants that it decays in the fall and gets consumed by animals.

If one gram of C14 goes from atmosphere into plant leaves in the spring, wouldn’t one expect that a sizable fraction (50%) of the C14 goes back into CO2 when it decays in the fall? If so, then what the Bern model represents really should be much closer to the Bomb curve with a half-life of 20-40 years.

• Peter Sable says:

If one gram of C14 goes from atmosphere into plant leaves in the spring, wouldn’t one expect that a sizable fraction (50%) of the C14 goes back into CO2 when it decays in the fall?

Wrong mechanism, or at least wrong constants in the equation.

You are mixing two reservoirs of isotopes of C – the reservoir that’s the atmosphere, and everything else. “everything else” is much larger and has very little C14, so the level of C14 in the air decays quickly just like a little bit of smoke quickly disappears into a large room, because air molecules exchange places in the room. (and between the two reservoirs).

The Bern mechanism is different, and therefore has a different time constant. Not sure why this continues to be so hard to grasp…

• No. You are committing the error of which I write.

If you took a fraction of C14 and absorbed it into the ocean and mixed thoroughly, then you would be right, the CO2 that returned to the atmosphere would have no C14 to speak of. But that is not what happens.

Take the Mauna Loa CO2 signal, an annual sinusoid with an amplitude of four years of average gain. Absorption and release. A sizable percentage of C14 must be in a LIFO, Last In First Out, inventory. C14 is absorbed by leaves as they grow. Surely some of it goes into bark and wood, but much of the C14 remains in the leaves to decay or be eaten as food and be released back into the atmosphere. Leaves and crops are an obvious mechanism, but the mixing of surface ocean waters isn’t thorough either.

If a gram of C14 is absorbed by the biosphere in the spring, certainly not 1.0000 gram will be released back within the span of the four seasons. But it is not reasonable to believe in a planet with a sinusoidal CO2 signal that little of the C14 returns. Life is LIFO.

• Stephen,

One can assume that most of the 14C that is absorbed in the seasonal cycle is re-emitted in the second half of the same cycle. That is the case for the ocean surface and the fast growth and decay of leaves.

The main difference is in the exchanges with the deep oceans: what goes in is the isotopic ratio of today (with some shift at the atmosphere – ocean border). What comes out is the isotopic ratio of ~1000 years ago, which is a lot lower (for any 14C peak), be it not zero.

That makes that the decay rate for a 14CO2 spike is much shorter than for a 12CO2 spike.
Some more detailed fluxes and concentrations for the different isotopes can be found here.

• That makes that the decay rate for a 14CO2 spike is much shorter than for a 12CO2 spike.
I do not doubt that. What I doubt is that the 12CO2 spike is as long as the Bern model says.

The 14CO2 spike is as short as it is even with a presumed seasonal re-emission of 14CO2 from the decay of leaves and digested food. That argues for a faster uptake of 12CO2 than if you maintain that re-emission of 14CO2 is near zero because it constitutes such a small fraction of total CO2.

I don’t buy the Atmosphere – Deep Ocean fluxes in the model in your link. They seem to state that the CO2 flux from atmosphere to deep ocean is of a rate similar to that from atmosphere to surface. A better model would be Atmosphere to Surface, Surface to Mid Layer, Mid Layer to Deep Ocean. Furthermore, that model doesn’t allow for increased mass in vegetation and soils and CO3 sinks in the ocean.

• Stephen,

I agree with you that the Bern model has much too early saturation, and at much too high levels. The Bern model is right for the ocean surface, which does absorb only 10% of the change in the atmosphere, but it assumes the same for the deep oceans, where there is no sign of saturation, and it is impossible for vegetation, where there are hardly any limits on uptake (at the current or far-future rates).

The deep ocean – atmosphere fluxes are quite well confined, to less than 5% of the ocean surface each way: downwelling is mainly in the NE Atlantic, upwelling mainly in the Equatorial Pacific. That largely bypasses the surface and mid layers, which over 90% of the surface have little direct exchange of heat and CO2 with the deep oceans (indirectly, some 6 GtC/year from organic and inorganic debris: dead plankton, fish excrement,…).

The 40 GtC exchange rate between deep oceans and atmosphere can be deduced from the “dilution” of the low-13C CO2 from human emissions: what is measured in the atmosphere is only 1/3rd of the theoretical value. As most from the ocean surface and the deep oceans returns the next season, you need some 40 GtC continuous exchange between atmosphere – deep oceans – atmosphere to have that 2/3rd reduction in 13C/12C drop.
Moreover, the remaining ~50 GtC/year seasonal CO2 exchanges between ocean surface and atmosphere, and the opposite ~60 GtC seasonal exchanges between vegetation growth and decay (mainly in the NH), give a slight dominance of vegetation over the seasons: +/- 10 GtC which is measured as a +/- 5 ppmv global change over the (NH) seasons…

60. Yirgach says:

An interesting paper on tracking the various CO2 isotopes in the atmosphere: Modern Records of Carbon and Oxygen Isotopes in Atmospheric Carbon Dioxide and Carbon-13 in Methane

Introduction
This page provides an introduction and links to records of carbon-13 (13C), carbon-14 (14C), and oxygen-18 (18O) in atmospheric carbon dioxide (CO2), and also to 13C in methane (CH4) in recent decades. We emphasize large data bases each representing many currently active stations. Records have been obtained from samples of ambient air at remote stations, which represent changing global atmospheric concentrations rather than influences of local sources. Fossil carbon is relatively low in 13C and contains no 14C, so these isotopes are useful in identifying and quantifying fossil carbon in the atmosphere. Although the 14C record is obfuscated by releases of large amounts during tests of nuclear weapons, this isotope is nonetheless useful in tracking carbon through the carbon cycle and has limited use in quantifying fossil carbon in the atmosphere. Oxygen-18 amounts are determined by the hydrological cycle as well as biospheric influences, so they are often harder to interpret but are nonetheless useful in hydrological studies. Oxygen-18 and deuterium (2H) in polar ice cores provide information about past temperature long before the beginning of instrumental records. A gateway page to chronologies of isotopes in ice cores is here.

……………………..

Trends

Carbon-13 in CO2 is decreasing, as the fraction of atmospheric CO2 that is realized from combustion of fossil carbon is increasing. Ratios of 13C/12C in CO2 tend to be lower in the Northern Hemisphere, suggesting a fossil-fuel source that resides mainly in the Northern Hemisphere.

Carbon-13 in CH4 has decreased since 2008, but the short record (only back to 1998) combined with multiple sources precludes any simple explanation at this point. Ratios of 13C/12C in CH4 tend to be lower in the Northern Hemisphere.

Carbon-14 in CO2 is decreasing, and 14C/12C ratios are lower in the Northern Hemisphere than in the Southern Hemisphere, suggesting a northern hemisphere source of 14C-depleted carbon (e.g., fossil fuels). However, things are not quite that simple; although 14C from bomb testing has largely been removed from the atmosphere by the biosphere, the biosphere is now giving some back, precluding any simple interpretation of the rate of 14C decline. For more on this topic, see Levin et al. (2010).

Variations in 18O in CO2 reflect not only the carbon cycle, but the water cycle as well. Oxygen-18 evaporated from the oceans will eventually fall out as precipitation and make its way into CO2 respired from the biosphere. Therefore these variations reflect complex processes and are not always easily interpreted, although 18O is useful in hydrological studies. Oxygen-18 in CO2 has an annual cycle but has otherwise tended to stay constant in recent decades. Like the other isotopes discussed above, ratios of 18O/16O tend to be lower in the Northern Hemisphere.

61. Willis Eschenbach says:

Nylo April 19, 2015 at 10:39 pm

I agree that “we would not expect the two graphs to have the same shape or the same trends”. As I didn’t watch Salby’s video, I don’t know if that’s what he claims should be happening; I will assume so. If so, he is wrong. However, once one assumes that the increase in CO2 concentration is entirely our fault, and IMO it is (as your figure 3 shows, Nature is actually working to counter it rather than adding more), then a significant increase in how much CO2 we emit should be followed by a significant increase in how fast CO2 concentration rises. If we were, in 2013, emitting the equivalent of 11*0.14 = 1.54 ppmv of CO2 more per year than we were in 2002, then even if Nature partially counters that increase, we should be seeing some increase in the speed at which CO2 rises; probably not as much as 1.54 ppmv/year, but still SOME increase. Half of it? 40% of it? I don’t know. But we would NOT expect it to remain the same as it was in 2002. And that’s significant, in my opinion.

Thanks, Nylo. You are right that we would NOT expect the trend in airborne CO2 to remain the same … and in fact, it didn’t stay the same. It increased, exactly as you expected it would.

I fear you’ve been hornswoggled by the good Doctor. As I pointed out in the head post, he’s carefully cherry-picked his time periods to give him the “no change in trend” claim. If you use the same time periods for both the variables, here’s what you get for the trends:

Emissions 1990-2002: 0.03 ppmv/yr^2
Emissions 2002-2013: 0.13 ppmv/yr^2

Airborne concentration 1990-2002: 1.63 ppmv/yr
Airborne concentration 2002-2013: 2.05 ppmv/yr

Note that when the calculation is done in the proper manner, the CO2 concentration trend does NOT stay the same; it increases by about 25% … in other words, it did just what you said that you expected it to do. Well spotted.

I say again, Dr. Salby’s claims can NOT be trusted. My advice is to grab each and every one of his claims, turn them over, shake them hard, check the fastenings, hold them up to the light, and don’t take even the smallest thing for granted.

w.

PS—Please bear in mind that the two trends (emissions trend and airborne concentration trend) CANNOT be directly compared, because they measure different things, as their different units reflect.
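For readers who want to reproduce the matched-window comparison, here is a hedged sketch; the series is synthetic, built to carry the two trend values quoted above, since the actual annual data are not reproduced in this thread:

```python
import numpy as np

def window_trend(years, values, start, end):
    """Least-squares slope (ppmv/yr) of `values` over [start, end] inclusive."""
    mask = (years >= start) & (years <= end)
    slope, _intercept = np.polyfit(years[mask], values[mask], 1)
    return slope

# Synthetic annual-mean CO2 (ppmv): ~1.63 ppmv/yr to 2002, ~2.05 ppmv/yr after.
years = np.arange(1990, 2014)
co2 = 354.0 + np.where(years <= 2002,
                       1.63 * (years - 1990),
                       1.63 * 12 + 2.05 * (years - 2002))

print(round(window_trend(years, co2, 1990, 2002), 2))  # 1.63
print(round(window_trend(years, co2, 2002, 2013), 2))  # 2.05
```

The point of the sketch is only that the two windows must be the same for both variables before the trends are compared.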

• Mike says:

Emissions 1990-2002: 0.03 ppmv/yr^2
Emissions 2002-2013: 0.13 ppmv/yr^2

Fitting a straight line to d/dt of MLO CO2 (12-month cycle removed), using the same periods you used for emissions, I get, in ppm/yr^2:

fit [2002:2013]
m = -0.0252396 +/- 0.01131 (44.83%)
fit [1990:2002]
m = 0.0616526 +/- 0.01655 (26.85%)

That is the complete opposite of what is happening to emissions.
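Mike’s procedure (remove the 12-month cycle, differentiate, fit a line) can be sketched as follows; the input is synthetic since the MLO monthly series is not reproduced here, so the numbers are illustrative only:

```python
import numpy as np

def growth_rate_slope(monthly_co2, t_years):
    """Remove the seasonal cycle with a 12-month running mean, differentiate,
    and fit a straight line to the growth rate. Returns slope in ppm/yr^2."""
    smooth = np.convolve(monthly_co2, np.ones(12) / 12, mode="valid")
    growth = np.diff(smooth) * 12          # month-to-month step -> ppm/yr
    t = t_years[: len(growth)]
    slope, _ = np.polyfit(t, growth, 1)
    return slope

# Synthetic monthly CO2 with a quadratic trend (so the growth rate rises by
# 0.1 ppm/yr^2) plus an annual cycle that the running mean cancels exactly.
months = np.arange(12 * 24)
t = months / 12.0
co2 = 300.0 + 1.5 * t + 0.05 * t**2 + 2.0 * np.sin(2 * np.pi * months / 12)

print(round(growth_rate_slope(co2, t), 3))  # 0.1
```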

• Nylo says:

Thanks for the reply Willis. Yes, I went to the Mauna Loa data and verified what you say. And the increase is increasing :) In the last 5 years the Mauna Loa CO2 concentration has increased by an average of 2.3 ppm per year.

It is very interesting to see how the rise in atmospheric CO2 is modulated by El Niño and La Niña effects. This is especially noteworthy in the years 1997-2002. 1998 and 2002 (El Niño) see increases similar to what today is normal, but the other years (La Niña) see only tiny increases. I guess this is partially due to the change in the ocean’s surface temperature affecting its capability to absorb CO2. But given the size of the change, that’s probably not the only effect in action.

• Nylo,

The increase in (seawater) temperature, especially during El Niño episodes, gives higher temperatures in the tropical forests, which show less uptake and, in part, drought and wildfires. That makes land vegetation the main cause of the decreased sink capacity. That land plants dominate the uptake deficiency can be seen in the opposite CO2 and δ13C changes.

62. Paul Milenkovic says:

You all know the old joke about a mathematician designing a labor-saving chicken-plucking machine: “Assume a spherical chicken.”

For sake of discussion, assume a simple model where the atmospheric reservoir holds 1000 units of CO2, the surface ocean reservoir holds 1000 units of CO2, and there are 100 units of CO2 emitted per year from the ocean into the atmosphere and 100 units of CO2 emitted per year from the atmosphere back into the ocean. Forget, for now, where these numbers come from, and forget the terrestrial biosphere and the human emission of fossil fuel combustion gas. This is just a “thought experiment” model.

In this simple model, the emission of CO2 from the surface ocean into the atmosphere is balanced by the sequestration of atmospheric CO2 back into the surface ocean. It is like Willis’ factory inventory, where there are large flows of material into and out of the factory, but the level of inventory in the factory stays more or less constant, or if it does change, it takes a long time to change.

Now, assume an entirely hypothetical model where there are 1000 units of CO2 in the atmosphere and 0 units of CO2 in the surface ocean. According to what is called Henry’s Law, the sequestration of atmospheric CO2 at least starts out at 100 units per year, whereas the emission from the ocean starts out at zero. Again, this is an entirely hypothetical model, so please don’t get ahead of me here on what this model should do.

In the hypothetical model, because there are 100 units of CO2 leaving per year and 0 units entering, the amount of CO2 in the atmosphere should diminish at an initial rate of 10% per year. This initial situation, however, can only go on for about 5 years, until there are only 500 units of CO2 left in the atmosphere and 500 units of CO2 now in the surface ocean. According to Henry’s Law, the sequestration rate from atmosphere to ocean is reduced to 50 units per year, and the emission from the ocean back into the atmosphere is increased to 50 units per year. Even though there is turnover of the actual carbon atoms in the atmosphere at this point, the atmosphere and ocean are in equilibrium and do not change further in concentration. Also, the equilibrium state is finally reached, for practical purposes, at some multiple of the 5 year interval, because as the difference in CO2 between surface ocean and atmosphere decreases, the net rate of change of their respective concentrations diminishes according to the exponential law given by Willis, where 5 years is what Willis and others call the “e-folding” time.
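Paul’s equilibration arithmetic can be sketched with a few lines of numerical integration; a hedged toy using his 1000-unit reservoirs and 10%-per-year exchange (the function name and step size are my own choices):

```python
def two_box(atm, ocean, k=0.1, dt=0.001, t_end=25.0):
    """Two reservoirs each losing a fraction k of their contents per year
    to the other; returns (atmosphere, ocean) after t_end years (Euler)."""
    for _ in range(round(t_end / dt)):
        net = k * (ocean - atm)   # net flux into the atmosphere
        atm += net * dt
        ocean -= net * dt
    return atm, ocean

# All 1000 units start in the atmosphere; the difference decays with a
# 5-year e-folding time, toward the 500/500 equilibrium Paul describes.
atm, ocean = two_box(1000.0, 0.0, t_end=5.0)
print(round(atm, 1))   # ~683.9, i.e. 500 + 500/e after one e-folding time
```

Running it out to 50 years gives essentially 500/500, matching the equilibrium argument above.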

Again, consider another hypothetical case, properly distant from reality so don’t call me on this quite yet, where the surface ocean and atmosphere each hold 1000 units of “normal” CO2 and where 10 units of radioactive CO2 are added to the atmosphere. It really doesn’t matter much whether this small amount (1 percent in this pretend example) is added CO2, making a total of 1010 units of CO2 in the atmosphere, or whether it replaces some CO2, making 990 units of “normal” CO2 and 10 units of radioactive CO2 for a sum of still 1000 units.

There is this theoretical concept called “partial pressure” that another commentator referred to. Partial pressure was a scientific breakthrough in the understanding of chemical reactions (consider gas dissolving in a fluid as a “chemical” reaction for now according to Henry’s Law). The way partial pressure works is that this new system is indistinguishable from one where the atmosphere is at a near vacuum, the 10 units of radioactive CO2 are the only gas molecules in the air, and there is zero radioactive CO2 in the surface ocean.

Even though there are still 100 units of normal CO2 going from ocean to air and 100 units going from air back to ocean per year, there is one unit (10 percent of 10 units equals 1 unit) of radioactive CO2 going from the air to the surface ocean and zero units of radioactive CO2 going from ocean back to air. That has to be, because initially there are zero radioactive CO2 molecules in the ocean water, so none can leave back into the atmosphere. This state of affairs continues until there are 5 units of radioactive CO2 in the atmosphere, giving a flux of 0.5 units of radioactive CO2 per year from atmosphere to ocean, and 5 units of radioactive CO2 in the surface ocean, giving a flux of 0.5 units of radioactive gas back to the atmosphere, establishing a new equilibrium where half of the “new” CO2 is in the air and half of it is in solution in the surface ocean.

In fact, this is what people observe with CO2 emitted from fossil fuels. About half of the cumulative fossil CO2 ends up in the air and half of it is presumably in the ocean or other “sink.” But this is not what we observe with the bombtest radioactive CO2 (from neutron activation of nitrogen to make fresh C14). Were the bombtest C14 to level out halfway between the pre-bombtest level and the peak level shortly before the Partial Nuclear Test Ban Treaty, this would be strong evidence that equilibrium between the air and the “fast” reservoir (the surface ocean in my very simplified treatment) is reached rapidly, and that the CO2 emitted from fossil fuel combustion gets divided evenly between air and surface ocean. According to the Bern model, the rate of mixing of the surface ocean and the much larger deep ocean is much less rapid, which means the CO2 we emit that ends up split between air and surface ocean will be there for 200 years.

But this is not what the bombtest curve shows. The radioactive CO2 has diminished well past halfway between the bombtest peak and what people believe to be the pre-bombtest floor. Remember, the initial rate of decline of the radioactive CO2 is set by the difference between the air, which received a pulse of radioactive CO2, and a reservoir without that excess radio-CO2. It appears that not only does CO2 transfer rapidly between the air and that reservoir, but also that this reservoir, whatever it is, is larger than simply equal in size to the atmospheric reservoir.

Well, how do you explain that the total CO2 in the atmosphere is rising at the rate it does? Murry Salby conjectures that it is coming from the ocean reservoir because the oceans are warming, not necessarily due to fossil fuel CO2 but because of natural trends. How do you explain that the ratio of the stable isotope C13 in the atmosphere is diminishing according to its own “Keeling Curve” measured in Hawaii? Murry Salby conjectures something about (small) differences in the mobility of carbon isotopes between reservoirs; water warming ejecting CO2 is well known. Data on C13 and its actual concentration over time in air, ocean, and plants is harder to come by in an initial search on this topic.

But according to the fictional Sherlock Holmes, if every other hypothesis is ruled impossible, the remaining hypothesis, however improbable, must be the truth.

If the bombtest carbon is diminishing at a much greater rate than the claimed “200 year lifetime of anthro CO2” implies, and (should I put this in caps as AND?) it has diminished well past halfway back to the pre-1950 radio-carbon concentration in the atmosphere, there must, has to be, a reservoir, with a rapid transfer time, for all CO2, that has a much higher capacity than the atmosphere. Even with (slight) isotopic differences in diffusion rate, CO2 equilibrates with this reservoir at a much higher rate than thought (more like 10 years than 100-200 years), which means that not half but most of the historical fossil fuel combustion CO2 is in that other reservoir and not in the atmosphere, and the apparent “half of the cumulative increase in atmospheric CO2” is an artifact of the assumption that there is no natural discharge of CO2 into the atmosphere of that magnitude.
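The closing inference reduces to one formula: if the fast reservoir holds r times the atmosphere’s capacity, the equilibrated airborne remainder of a pulse is 1/(1+r), so a remainder below one half requires r > 1. A minimal sketch of that algebra (names are my own):

```python
def airborne_fraction_at_equilibrium(r):
    """Fraction of a CO2 (or tracer) pulse left in the atmosphere once a
    well-mixed fast reservoir r times the atmosphere's size equilibrates."""
    return 1.0 / (1.0 + r)

print(airborne_fraction_at_equilibrium(1.0))  # 0.5  (equal-sized reservoir)
print(airborne_fraction_at_equilibrium(3.0))  # 0.25 (reservoir 3x larger)
```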

• Mike M. says:

Paul Milenkovic,

You wrote: “For sake of discussion, assume a simple model where the atmospheric reservoir holds 1000 units of CO2, the surface ocean reservoir holds 1000 units of CO2”. But it seems you just made those numbers up. Then you wrote “But this is not what the bombtest curve shows. The radioactive CO2 has diminished way past halfway …”. But halfway is only significant because of the numbers you made up.

So I don’t think that you have demonstrated anything.

• Paul,

The difference is in what returns out of the deep oceans; here for the year 1960:
– 100 parts 12CO2 go out of the atmosphere into the oceans, and 97.5 parts come out of the oceans into the atmosphere.
– 100 parts 14CO2 go into the deep oceans, but only 45 parts come back from the deep oceans, due to the practical disconnection between what goes in and what comes out.

That makes the e-fold decay of a 14CO2 peak a lot faster than that of a 12CO2 peak…
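Ferdinand’s return-flux figures imply the relative decay rates directly; a minimal sketch using his illustrative 1960 numbers:

```python
# Ferdinand's illustrative 1960 numbers: per 100 parts leaving the
# atmosphere, 97.5 parts of 12CO2 return but only 45 parts of 14CO2 do.
gross_out = 100.0
returned_12c = 97.5
returned_14c = 45.0

net_loss_12c = gross_out - returned_12c    # 2.5 parts/yr net removal
net_loss_14c = gross_out - returned_14c    # 55 parts/yr net removal

# The excess-14CO2 pulse therefore decays ~22x faster than a 12CO2 pulse.
print(net_loss_14c / net_loss_12c)  # 22.0
```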

How is it that your graph matches the measured CO2 exactly when the only variable is anthropogenic CO2?
Is it that every other source and sink of CO2 is flat and neutral? Do you seriously expect us to believe this?
Cold sea water accumulates CO2
Warm sea water discharges CO2
The lag time is between 200 and 800 years from the start of a warm period to large atmospheric CO2 increases.
We know the oceans have been warming, as well as the atmosphere, since the little ice age.
The little ice age consists of 2 periods of cold: the 1300s to 1400s and the 1600s through 1800s.
Time since… 500 years and 200 years.
Now, tell me again that your graph, which uses the single variable of anthropogenic CO2 creation, matches exactly the CO2 measured at Mauna Loa… Then explain why you think there are absolutely no other sources which are changing…

• astonerii,

Humans emitted twice the amount of CO2 as the measured increase in the atmosphere. If there were another natural source of extra CO2, the increase in the atmosphere would be larger than that from the human contribution alone…

Nature has been a net sink for CO2 in every single year of the past 55 years, not a source.
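The mass-balance bookkeeping behind this claim fits in two lines; the numbers below are round illustrative values, not measurements:

```python
# Simple mass balance: whatever the gross natural fluxes are, the net
# natural contribution is the observed rise minus the human input.
human_input = 4.0      # ppmv/yr equivalent of emissions (illustrative)
observed_rise = 2.0    # ppmv/yr measured increase (illustrative)

natural_net = observed_rise - human_input
print(natural_net)  # -2.0 -> nature removed CO2 on net that year
```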

• Bart says:

Not so. Your clinging to the discredited “mass balance” argument calls into question your judgment elsewhere.

Hey Bart,

I’d like to see your calculation of the mass balance for human emissions, the increase rate in the atmosphere, and the net contribution of natural in and out fluxes over the past 55 years…

• Bart says:

This is ridiculous, Ferdinand. You claim it is “mass balance”, but it is not. You do not know the entire carbon cycle, and you are only looking at pieces of it. The “mass balance” argument, as you and others proffer it, is a trivial observation regarding the size of the human input and the observed rise which has no bearing on the attribution question.

It all depends on the power of the sinks, Ferdinand. I have shown this many times, most recently here at Bishop Hill. This silly “mass balance” argument is naive beyond measure, and its continued repetition idiotic. Your continuing devotion to it only reveals in stark relief your mental block regarding the evolution of dynamic systems.

• astonerii says:

Fortunately there is evidence that this is not the case. CO2 levels change without human contribution. Sometimes nature is a net sink and other times nature is a net source. This has been the case ever since the Earth formed, and especially since plants and animals have roamed the planet.
With the argument that we are in an era of increasing temperatures, it stands to reason that we should be in an era of naturally increasing CO2 levels without needing human contribution.
Then the argument for the last 18 years of a pause in the warming being that all that heat is hiding in the Oceans only doubles down on this, because if all of the heat is in the Ocean then the CO2 is leaving the Ocean even faster.
But looking at the chart, there is no increase in the rate of change of CO2 in the Atmosphere.

Natural Variation argues that the level of CO2 in the atmosphere changes. Billions of years of fossil records say so. Millions of years of ice core records say so.
The argument that all of a sudden, at 2:37:02.592039571 on January 2nd, 1950, the entire universe and the Earth locked into total and solid equilibrium is just nonsense. That the only contributors to the changes in climate are now wholly human is insane. That the net sink status of the natural environment was neutral right up until that point in time is ludicrous.

astonerii,

We have ice cores with a resolution of less than a decade, covering the past 150 years, which can measure CO2 with a repeatability of 1.2 ppmv (1 sigma). That means that any change of 2 ppmv sustained over 10 years, or a one-year peak of 20 ppmv, can be detected in such an ice core.
The past 1,000 years is covered by ice cores with a resolution of ~20 years and the past 70,000 years by an ice core with a resolution of ~40 years.

The current increase of 110 ppmv in 160 years would be detected in all ice cores, even the worst resolution, 800,000 years back in time.
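The detectability claim rests on simple averaging: a short spike is smeared across the core’s air-age resolution window. A sketch using the figures quoted above (the function name is my own):

```python
def smoothed_spike(peak_ppmv, duration_yr, resolution_yr):
    """Mean anomaly a short CO2 spike leaves in an ice-core sample whose
    air-age resolution window is `resolution_yr` years wide."""
    return peak_ppmv * duration_yr / resolution_yr

# A one-year 20 ppmv spike at 10-year resolution averages to 2 ppmv,
# above the ~1.2 ppmv (1 sigma) repeatability quoted above.
print(smoothed_spike(20.0, 1.0, 10.0))  # 2.0
```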

The overall ratio between CO2 and temperature is 8 ppmv/°C over the past 800,000 years, which is visible in the 1,000-year Law Dome ice core as a drop of ~6 ppmv between the MWP and LIA. The increase in temperature since the LIA is thus good for ~6 ppmv. That is all…

The measured variability over the past 55 years is +/- 1 ppmv around the trend, lasting 1-3 years, with a sensitivity of 4-5 ppmv/°C. That is all. The rest is from human emissions, which were and are about twice the measured increase in the atmosphere.

• Bart,

Whatever you think of the mass balance, it can’t be violated at any moment. You can’t have both a net contribution from human emissions and a net contribution from natural sources and still see an increase in the atmosphere which is less than the human emissions alone.

Except if you have extremely fast sinks and a fourfold increase of the natural fluxes, for which there is not the slightest indication…

• Bart says:

If there’s an exception, then it isn’t generally true. So, you need to make up your mind.

You can, indeed, have a “net contribution of human emissions and a net contribution of natural sources and still see an increase in the atmosphere which is less than the human emissions alone.” You can easily have a feedback which takes out up to all of the human input, and whatever is left must be due to natural sources.

It isn’t negotiable. It is elementary in the theory of dynamic feedback systems.

Bart, the sinks don’t differentiate between human and natural input. Over the past 55 years:

There is a 4-fold increase in human emissions
There is a 4-fold increase in rate of change in the atmosphere
There is a 4-fold increase in net sink rate
There would need to be a 4-fold increase in natural circulation to get the same result with only a small contribution from human input
There is no evidence of an increased natural circulation; to the contrary, several estimates of the residence time show a relatively constant natural circulation in an increasing atmospheric content.

• Bart says:

I’m going to declare victory. Let us declare once and for all, the so-called “mass-balance” argument is dead and buried.

Ferdinand agrees that it is not dispositive, that the mere fact that the observed rise is smaller than the sum total of human inputs is not enough to assign attribution for the rise to humans. It all depends on the power of the sinks.

Ferdinand still thinks it is unlikely that the sinks are powerful enough. I disagree. And, that is that for now.

• Bart,

The difference between you and me is that I look at all the available evidence. You look only at one “match”, which is based on an arbitrary factor and offset, and declare it all the proof you need to conclude that temperature is the one and only cause of the increase of CO2 in the atmosphere.

All contra-evidence is put aside, ridiculed or simply d*nied…

You didn’t supply any shred of evidence that the natural cycle increased in the past 55 years, which is necessary for the natural contribution to play any role in the increase in the atmosphere. Only theories and more theories, which one by one violate all available evidence…

Thus I agree that the mass balance alone is not sufficient to be sure that humans are the main cause of the increase of CO2 in the atmosphere, but only if the natural cycle increased in lockstep with the human emissions, with exactly the same timing and increase rate as the human emissions had.

For which is not the slightest indication…

• Bart says:

The difference, Ferdinand, is that I recognize what is surmise, and what is imperative.

Your counter-evidence is merely surmise of how you think things ought to be. But the recognition that the trend in dCO2/dt is due to the trend in temperature is not surmise. It is imperative.

• Bart,

But, the recognition that the trend in dCO2/dt is due to the trend in temperature is not surmise. It is imperative.

Nothing imperative here: there is no link between the trend in temperature and the trend in dCO2/dt. The link is between T and CO2 and between dT/dt and dCO2/dt…

BTW, the only possibility for a natural increase is when the carbon cycle increase is exactly the same as the human and the “airborne fraction” increase: a fourfold increase over the past 55 years. Not a threefold or a fivefold…

• Bart says:

Sorry, no. The trend in temperature necessarily causes the trend in dCO2/dt. It is imperative due to the lack of phase distortion. There is no way around it.

And, no, the rest is your usual static analysis silliness.

• @ Bart and Ferdinand,

You two have given me one of the most interesting sparring sessions I’ve seen on WUWT for a long time, and without the descent into yahboo we so often see. It’s been a fascinating exchange and I think I understand both your positions. If I could try to characterise:

Ferdinand is taking the view that there is no evidence that natural sinks have increased to the extent that a simple mass balance won’t explain human contributions to atmospheric CO2 concentration. He accepts (I think) that it’s possible that the “mass balance” argument is not adequate because it is possible that sinks and sources can vary to the extent that it won’t explain attribution, but that he doesn’t see evidence that there has been this variation, so mass balance is sufficient.

Bart is of the view that one shouldn’t even start with the mass balance argument because it is misleading, and that we simply do not know whether or not those sinks and sources have completely dominated ACO2. I suppose absence of evidence is not evidence of absence…? Because it is possible that natural sinks and sources can vary in response to ‘whatever’ (temperature, possibly à la Salby), it isn’t valid to invoke the mass balance argument in the first place. Bart’s is a ‘dynamic system’ argument.

How have I done? Would both of you concur with these very rough characterisations?

Once again, these exchanges are incredibly valuable. As frustrating as it might seem to both of you at times I am sure, I really do thank you for them.

Sounds good to me. My analysis favors Bart over Ferdinand but quantifies the relative contributions using a dynamic mass balance. Click on my name for details.

• Bart says:

agnostic2015 @ April 23, 2015 at 1:54 am

More or less. The “mass balance” argument is proffered by others than Ferdinand. It is claimed that it settles the debate, and proves beyond any doubt that humans are responsible for the rise.

But, as I have described, it is an utterly fatuous argument, with no rigorous foundation. To one such as I, who deals with the control of dynamic systems on a daily basis, it is like a child getting red in the face and insisting that 2 + 2 = 3. It makes me blink and shake my head to blot out the stupid.

I have tried laying out all manner of four objects, separating and counting them, dividing them into two sets, and describing step by step how the law of addition dictates that the same number of objects is evident in both cases, and all I get in reply is, “No! 2 + 2 = 3!” It is surreal. The child simply cannot learn.

If I have not adequately conveyed my consummate contempt for the argument, please fill in the gaps in your own imagination of how you would react to the most stultifyingly wrong thing you can imagine anyone pushing on you, all with earnestly smug assurance of their unassailable position.

• agnostic2015,

Bart is brilliant in the theoretical knowledge of systems; in my working life I was quite good at implementing the theoretical knowledge of smart people like Bart in real-world chemical processes, including solving practical problems like feedback systems which should react in a fraction of a second but took 10 seconds to give the right reaction, and material cycles through a factory that returned after 6 days to disturb the input…
Thus in short: Bart’s theoretical knowledge is by far superior, but I think my practical knowledge is a lot better.

Your summary is quite good. Bart’s reaction is that he knows better, but he never gives any evidence that his theory is backed by any observation. If I then show that his theory violates about all observations, then the observations are no good, or it is my interpretation of the observations which is wrong…

Nevertheless, as I can’t convince Bart, I can try to convince others that Bart is wrong.

Part of what follows is a repeat from here:

Take his main theory:
The temperature dictated equilibrium is a moving target, which evolves according to
dCO2/dt = k*(T – T0)

Which violates all physics and all observations, including Henry’s law which says that
ΔCO2 = k*(T – T0)

In the real world, a step increase of seawater temperature will increase the pCO2 of the seawater by ~8 μatm. That gives an extra CO2 input from the equatorial upwelling zones and a decreased output into the polar sink zones. That results in an increase of CO2 in the atmosphere, but that increase reduces the pressure difference with the oceans at the upwelling zones and increases the pressure difference at the sink zones. Net result: at 8 μatm (~8 ppmv) the pressure increase in the atmosphere equals the pressure increase in the oceans and the original in/out fluxes are restored. Thus Bart’s theory, that with a small sustained temperature jump the extra input of CO2 goes on for eternity without any feedback from the increased pressure in the atmosphere, seems quite impossible…
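The two competing response forms can be contrasted with a toy step-change experiment; k = 8 ppmv/°C is Ferdinand’s figure, while the 5-year adjustment time is an assumption chosen only for illustration:

```python
def henry_step(dT, k=8.0, tau=5.0, dt=0.01, t_end=50.0):
    """Relaxation toward the Henry's-law equilibrium dCO2 = k*dT:
    the response to a sustained temperature step levels off."""
    c = 0.0
    for _ in range(round(t_end / dt)):
        c += (k * dT - c) / tau * dt   # Euler step toward equilibrium
    return c

def integrator_step(dT, k=8.0, t_end=50.0):
    """Bart's proposed dCO2/dt = k*(T - T0): the response to the same
    sustained step grows without bound."""
    return k * dT * t_end   # closed form of the integral

print(round(henry_step(1.0), 2))   # ~8.0 ppmv plateau for a +1 C step
print(integrator_step(1.0))        # 400.0 ppmv after 50 yr, still climbing
```

The contrast, not the particular numbers, is the point: one model saturates at the Henry’s-law offset, the other accumulates forever.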

Another violation is the δ13C evolution: δ13C measurements all over the oceans show a deep-ocean δ13C level around zero per mil. The ocean surface gets 1-5 per mil, due to bio-life, which preferentially extracts 12CO2, thus leaving more 13CO2 behind. Thus any increase of in/out fluxes from the oceans would increase the δ13C level of the atmosphere.

The atmosphere was at -6.4 +/- 0.2 per mil over the Holocene up to 165 years ago. Then it started to decline, in lockstep with human emissions, to below -8 per mil today. Fossil fuels have a δ13C level which averages around -24 per mil.

δ13C measurements are as accurate as CO2 measurements in the atmosphere; there is no way these can be manipulated, as a lot of people from different organizations are involved at different places on earth.

Thus if the natural cycle between atmosphere and oceans had changed over the past 55 years, that would show up in the δ13C changes. Nothing to see there: only a monotonic decrease in ratio with human emissions… If the natural cycle via the oceans had increased 4-fold over the past 55 years, that should give an increase in the δ13C of the atmosphere, despite the human input.

The 40 GtC/year exchange is the estimated deep ocean to atmosphere exchange rate, based on the 14C bomb spike decay rate and the dilution of the δ13C changes from human emissions. If natural emissions from the oceans were the cause, then the natural fluxes would need to increase 4-fold in general; with the oceans as the sole source, they would have to increase from 40 to 290 GtC/year, but then the resulting δ13C changes would violate the observed δ13C changes…
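A first-order check on the direction-of-change part of the δ13C argument: mass-weighted averaging of deltas (a common small-delta approximation). The 280 and 30 ppmv quantities below are illustrative only, and real exchange with ocean and biosphere dilutes the decline:

```python
def mixed_delta(c_old, d_old, c_added, d_added):
    """Approximate delta-13C (per mil) of a mixture by mass-weighted
    averaging of the two components."""
    return (c_old * d_old + c_added * d_added) / (c_old + c_added)

# Adding fossil CO2 (around -24 per mil) to a pre-industrial atmosphere
# (-6.4 per mil) can only drag the atmospheric delta downward, whereas an
# increased ocean flux (surface water at +1 to +5 per mil) would raise it.
print(round(mixed_delta(280.0, -6.4, 30.0, -24.0), 1))  # -8.1
```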

• olliebourque@me.com says:

Mr Engelbeen, I agree with your assessment of Bart’s brilliance; however, one does not balance a checkbook with calculus. The simplicity of the mass balance argument, the fact that a high school student can grasp it, and the supplemental evidence we have that the origin of the added atmospheric CO2 is anthropogenic seem to be unable to penetrate Bart’s preconceived religious belief system.

• Bart says:

“Which violates all physics and all observations, including Henry’s law which says that
ΔCO2 = k*(T – T0)”

That’s not what the data say. They say directly that dCO2/dt = k*(T – T0). As clearly and directly as looking at the sky, and determining that it is blue.

Ferdinand appears to be engaging in the classic fallacy of attempting to make the data fit the theory, rather than the theory the data.

• olliebourque@me.com says:

Bart, Salby is making the same mistake John McLean made, and you are defending this same error.

• Bart says:

There’s been no mistake Ollie. The so-called “mass balance” argument is pathetically naive, and the evidence is very clear to one who is intimately familiar with the design and control of dynamical systems. There is no doubt about it. It isn’t even a close call.

• olliebourque@me.com says:

Your mistake is to confuse a temperature-induced variation about a linear trend with causation.
..
The variation is not the cause of the trend, it’s an artifact.
..
You are free to make the same mistake John McLean made, but rest assured, if Salby attempts to publish, it will be humorous.

• olliebourque@me.com says:

To paraphrase,
.
“Foster et al examine the filtering process that McLean et al applied to the temperature and ENSO data. This filtering has two steps – they take 12-month moving averages then take the differences between those values which are 12 months apart. The first step filters the high-frequency variation from the time series while the second step filters low-frequency variation. The problem with the latter step is it removes any long-term trends from the original temperature data. The long-term warming trend in the temperature record is where the disagreement between temperature and ENSO is greatest.

Why do McLean et al remove the long-term trend? They justify it by noting a lack of correlation between SOI and GTTA, speculating that the derivative filter might remove noise caused by volcanoes or wind. However, taking the derivative of a time series does not remove, or even reduce, short-term noise. It has the opposite effect, amplifying the noise while removing longer-term changes.”
..
So Bart, the McLean error is well understood within the scientific community. Salby and you are repeating this flawed analysis.
..
You really should pay attention to what has been published.
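The quoted criticism of the McLean-style filter is easy to demonstrate: differencing values 12 months apart turns a linear trend into a constant (so no long-term change survives) while roughly doubling the short-term noise variance. A synthetic sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(240)
trend = 0.01 * months                      # steady long-term trend
noise = rng.normal(0.0, 0.1, months.size)  # short-term noise
series = trend + noise

# The 12-month difference step of the McLean-style filter:
filtered = series[12:] - series[:-12]

# The trend collapses to a constant offset (0.01 * 12 = 0.12), so no
# long-term change survives, while the noise variance roughly doubles.
print(round(float(np.mean(filtered)), 2))
```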

• Bart says:

There has been no mistake on my or Salby’s part, Ollie.

• Bart,

That’s not what the data say. They say directly that dCO2/dt = k*(T – T0). As clearly and directly as looking at the sky, and determining that it is blue.

Bart, the data are composed of two completely independent processes: one that causes the variability and one that causes the trend. The first is proven to be caused by the temperature influence on vegetation. The second is proven not to be caused by the temperature trend’s influence on the vegetation trend (which is opposite). Thus there is no proof that the trend in the data is caused by temperature, none at all.
Your formula is not based on the data; it is based on your artificial match of two straight lines with an arbitrary factor and offset.

Moreover, per Henry’s law the relation is between T and CO2, and by extension between dT/dt and dCO2/dt, not between T and dCO2/dt.

• Bart says:

64. Mike says:

Thanks to a tip from ristvan I went to check out some of the buoy data. Most of it is a bit messy and broken, but most of it suggests the same thing. About the only fairly complete data set was CRIMP2: Kaneohe Bay, located on the eastern side of Oahu, Hawaii

http://www.pmel.noaa.gov/co2/story/CRIMP2 Click “show full dataset”

Sea water CO2 is consistently about 50 to 100 umol/mol HIGHER than the atmospheric CO2, and it is clearly close to being in anti-phase with it. Hardly “lockstep”; maybe lock anti-step would be a better term.

Also bear in mind there are contemporaneous surface readings from the same buoy; this is not MLO on the top of a volcanic mountain.

With swings in SW CO2 being of about 200 umol/mol it’s hard to argue this is being caused by the piffling changes in atmospheric CO2.

What seems odd is that SW CO2 peaks in late summer when I would assume that water is warmest.

• Yirgach says:

What seems odd is that SW CO2 peaks in late summer when I would assume that water is warmest.

Maybe that’s due to outgassing from the warmer water?

• I think SW means salt water, and thus, outgassing is correct, but if it outgassed, why the higher measure?

• Mike M. says:

What is your point Mike? That CO2 is taken up and released by living organisms? Duh. From the link you provided: “Kaneohe Bay, located on the eastern side of Oahu, Hawaii, is a complex estuarine system with a large barrier coral reef, numerous patch reefs, fringing reefs, and several riverine inputs.” So obviously there is a lot happening there leading to diurnal variations that, on short time scales, are very large compared to any anthropogenic changes on those time scales. But the uptake and release by organisms goes around in a circle, while the burning of fossil fuels goes in one direction, at least on a human time scale.

• Mike says:

“But the uptake and release by organisms goes around in a circle”

Is that comment supposed to suggest that it all averages out? Well, if that were the case there would be no coal and no oil, so you struck out with that idea.

“So obviously there is a lot happening there ..”

Well if it’s all so obvious to you how is that you can’t explain it and just make meaningless hand waving statements?

There surely is an explanation, but it is not SW pCO2 being “in lockstep” with atmospheric CO2: it is much higher, much larger in variation, and in the wrong sense: anti-phase.

ristvan’s comment was totally ill-informed but useful in inciting me to find a data source I had not seen before.

“That CO2 is taken up and released by living organisms? Duh. ”

Taken up from where? What organisms? How does that explain the observations? Duh, indeed.

• Mike M. says:

Mike,

“Is that comment supposed to suggest that it all averages out?

Yes, it all averages out. Well, to within 99.999999% or so.

“Well if that was the case there would be coal and no oil , so you stuck out with that idea. ”

That would be the 0.000001%. (OK, I am just guessing as to how many zeros.) But every year we burn the fossil fuels that took millions of years to form.

It is not obvious to you that there is a lot happening in “a complex estuarine system with a large barrier coral reef, numerous patch reefs, fringing reefs, and several riverine inputs.”? Then I am afraid I can’t help you.

“ristvan’s comment was totally ill-informed but useful in inciting me to find a data source I had not seen before. ”

If you mean his use of the word “lockstep” I agree with you. But if you want to refute him, you need to use appropriate data, such as Aloha and BATS.

“Taken up from where? What organisms? How does that explain the observations?” You’ve heard of photosynthesis? Respiration? Calcification? Corals? Phytoplankton? etc.

Duh. Indeed.

• Mike M. says:

Mike,

After following the link, I found this for the site in question: “the seawater conditions reflect changes in seawater properties driven by both organic productivity/respiration and carbonate calcification/dissolution”. The former is your diurnal variation. The latter would be:
CO2 + H2O + CO3²⁻ = 2HCO3⁻
Ca²⁺ + CO3²⁻ = CaCO3
Calcification (presumably growing coral in that environment) is the second reaction in the forward direction. It removes CO3²⁻ and causes the first reaction to shift to the left, releasing CO2. So, thanks to corals, the area is a net source of CO2 to the atmosphere.

You wrote “What seems odd is that SW CO2 peaks in late summer when I would assume that water is warmest.” I guess corals like it warm.

• Mike M. says:

Of course, carbonate should have a double minus sign; the blog software may render it as a dash.

• Mike says:

Thanks, that sounds like a credible explanation on the face of it.

“The former is your diurnal variation. ” , I don’t recall ever mentioning a “diurnal” variation here.

There is an annual variation that is out of phase with atm CO2 , so while coral is pumping up pCO2 landbased plant life is dropping atm CO2, hence the approximate antiphase relationship.

In conclusion, neither has any direct relationship beyond covariability caused by warmer conditions affecting different parts of the biosphere in different ways.

Lock-step and suggested causation seems to be a non starter, then.

• Mike, stations Aloha and BATS were deliberately located in barren ocean where confounding biological activity is minimal (but not zero). Of course biological activity overwhelms Henry’s law in biologically rich surface waters. Seen not just in pCO2 but also in resulting pH. Estuarine pH swings of 1.4 in the Pacific are known. Please use the Aloha and BATS data supervised by WHOI. Rumor has it those oceanographers know what they are doing. Using data from an estuary shows you don’t. See essay Shell Games (oysters posted at Climate Etc suffices; corals and oysters in the full Blowing Smoke ebook version) for more on the biology part.

• Mike says:

I started with Aloha above, it barely has 2.5 years of data and shows the same anti-phase relationship.

It certainly does not show anything in “lockstep” as you claimed.

Maybe you could explain where you see evidence of your “lockstep” relationship.

• Mike M. says:

Mike,

Aloha has about 30 years of data. Here is a somewhat old graph of it: http://www.pmel.noaa.gov/co2/file/pH+Time+Series

The atmospheric and oceanic trends agree, but unlike ristvan I would not use the word “lockstep” unless there is more than a similarity in trend.

• Mike, my WHOI charts for station Aloha start in 1988. Try harder. I even explained how to do so.

• Mike says:

Well, I’ve spent over 20 min today trying to find the data that you insist is “just a click away”, yet you abstain from providing any more “guidance” than “use Google”.

However, it is pretty clear from the limited segment that I found on NOAA’s PMEL and what Mike M found, that your claim that they are in “lockstep” is complete BS. That probably goes a long way to explaining why you are being so unhelpful about either of us finding the data.

The two do seem to be generally trending upwards, a fact that is as unsurprising as it is uninformative. Thanks for the waste of time.

• Mike,

Many equatorial (upwelling) waters are permanent sources of CO2, polar sink waters are permanent sinks of CO2. The area weighted average pCO2 difference over the ocean is about 7 μatm higher in the atmosphere than in the oceans. See for the graphs:
http://www.pmel.noaa.gov/pubs/outstand/feel2331/maps.shtm

The anti-phase is mainly a matter of vegetation: the ocean surface warms and emits more CO2 in summer, but land vegetation in the NH is a stronger sink for CO2 in summer than the oceans are a source.

• What is measured is the pCO2 of seawater, which is a function of at one side temperature and at the other side bio-life. That is better seen in more detail for six fixed stations at different parts of the oceans:
http://www.tos.org/oceanography/archive/27-1_bates.pdf

The pCO2 difference with the atmosphere is the driving force for the uptake or release of CO2 to/from the ocean surface. In the case of the buoy, it is clear that for most of the time, the waters are releasing CO2 into the atmosphere.

The momentary releases of one buoy are not important. What is important is the increase in DIC (total inorganic carbon) and pCO2 over time. The six stations above and several other fixed points show that the increase in DIC and pCO2 follows the increase in the atmosphere, which for the buoy is drowned in the wide variability of the observations. Hawaii makes regular measurements farther away from the islands, which, as it seems, are far less variable.

65. Steven Mosher says:

Good work Willis.

It never ceases to amaze me how many skeptics will waste their precious brain power and time on Salby’s busted idea. In the beginning it was kinda fun to see all the various challenges to the science. Skeptics didn’t need to be pinned down to one or two good arguments; they could scattershot the approach. But as time goes on, the lack of focus on the part of skeptics is a problem.

1. It allows your opponents to paint you all with the same broad brush as science d&niers.
2. Like it or not you are judged as a group, the same way climate scientists are judged as a group

in the end there are two good paths for skeptics to follow

A) the heretic path– you lay that out pretty well
B) the Nic Lewis path

66. Mike says:

Indeed, SW is seawater, that is why I find it surprising.

67. Salby, in the video from London recorded on 17th March 2015, offers this discussion:

“The research I describe goes to the core issue of climate change. [The core issue is] Why is atmospheric CO2 increasing?

The IPCC says increasing atmospheric CO2 results from anthropogenic emission entirely.

[. . .]

[Yet the observations show in the period 2002 to 2014 compared to the period 1990 to 2002] The growth of fossil emission increased by a factor of 300% [whereas] the growth of CO2 didn’t blink.

How could this be? Say it ain’t so. [Salby said with a wry ironic expression on his face]”

[. . .]

The above Salby remarks, which were focused on the IPCC claim, were in reference to this slide of two charts from the 9:00 minute mark of his recent London talk:

In the period shown, it appears to me that there is an observed insensitivity of growth of CO2 levels to significant changes in the rate of anthropogenic emissions of CO2. The IPCC claim needs to explain the observations.

John

• Mike says:

John, there is divergence at the beginning of that fitted line that Murry Salby wilfully ignores. This is less distinct than in his first graph because the first one is rate of change, the second is the cumulative sum of CO2 emissions.

That is the crux of Willis’ objection and difference of units. It’s a valid point.

However, when we look at ppm/yr^2 for both emissions and atm CO2, there is indeed a mismatch. Salby has a legitimate point but presents it badly. Emissions are accelerating and CO2 is decelerating.

There’s a problem.

• Mike on April 20, 2015 at 1:36 pm

– – – – – – –

Mike,

As to the difference of units in the two graphs, Salby is taking observation data as given in the original sources. One can see that one graph is in rate of change of change (an ‘acceleration’ of sorts, if you will) and the other is in just change in time (a ‘velocity’ of sorts, if you will). Those do not invalidate the observations, which taken together show inconsistency wrt the IPCC claim of 100% anthropogenic attribution of change in CO2. Hell, even I can take an eyeball time derivative of the green ppmv/yr line and find it to be essentially zero in value, which fits Salby’s point about the IPCC claim.

As to the fit of the green ppmv/yr line on the ‘CO2 observed’ chart, I took it originally as merely a visually useful eyeballed trend, which looks sufficient to point out the IPCC has a problem explaining the two graphs in the context of its claim. It looks like Salby’s point is reasonably valid and reasonably displayed.

NOTE: As to your point about presenting badly; between you, me, Salby and Willis (or anybody), does it matter who is presenting their case badly? Badly? Screw badly or not badly. This isn’t a style point competition. It only matters whether the essential thrust of the case being made is reasonably consistent wrt both observations and logic, while having some discernible level of circumspection on the nature of the EAS (Earth Atmospheric System).

John

• Mike and John,

Looking at the second derivative in an extremely noisy system is just not done: you can’t prove or disprove anything with that. Take the 1976-1996 period and you have a negative rate of change growth with increasing temperatures and CO2 emissions. According to Salby’s reasoning, does that prove that CO2 is not increasing in the atmosphere – it still is – or that humans are not responsible for the increase? As long as the increase is above zero and smaller than human emissions, humans are responsible for near all emissions (besides a small temperature factor).
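Ferdinand's point about second derivatives in noisy data can be illustrated with a toy series (synthetic numbers chosen only for the illustration, not real emissions data): even a cleanly accelerating signal has its second difference buried under the second difference of modest noise.

```python
import numpy as np

# Synthetic illustration: a smoothly accelerating signal plus modest noise,
# mimicking a trending series with natural variability.
rng = np.random.default_rng(1)
t = np.arange(600)
signal = 1e-4 * t**2                       # true second difference: 2e-4
noisy = signal + rng.normal(0, 1.0, t.size)

d2_signal = np.diff(signal, 2)             # second difference, clean signal
d2_noisy = np.diff(noisy, 2)               # second difference, noisy series

# White noise of std 1 has a second-difference std of sqrt(6) ~ 2.45, about
# four orders of magnitude larger than the true acceleration of 2e-4.
print(d2_signal.mean(), d2_noisy.std())
```

The clean acceleration is a constant 2e-4, while the noise contributes a spread thousands of times larger, which is why the second derivative of such a series can neither prove nor disprove much on its own.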

• swood1000 says:

As long as the increase is above zero and smaller than human emissions, humans are responsible for near all emissions (besides a small temperature factor).

But suppose that annual reductions are a percentage of the amount in the atmosphere. For example, suppose 150GT natural additions, 5GT human additions and X% of CO₂ is taken out, resulting in 150GT being taken out for a 5GT net increase. Is all the net increase attributable to humans just because that was the amount of the human addition?

• Ferdinand Engelbeen on April 20, 2015 at 3:17 pm

– – – – – – –

Ferdinand Engelbeen,

The Mauna Loa observed CO2 curve has small annual variations sure but that curve looks pretty well defined since measurements began in the middle of the 20th century. It looks pretty close to a straight line in shape over the period Salby is considering, so the rate of change of that line is pretty close to zero.

The Anthropogenic Emission plot in the later part definitely has a significantly increased rate of change of the emissions, however you want to plot the change.

The two can reasonably be compared for purposes of evaluating the IPCC claim.

What does the comparison of the two show? It shows that there is a reasonable expectation that there is a plausible significant level of insensitivity of observed CO2 to large changes in the rate of change of anthropogenic CO2 emission. That is sufficient to justify and stimulate further formal science community dialog and research that runs counter to IPCC’s claim that observed CO2 increase is entirely anthropogenic.

John

• Mike says:

There you go again calling anything that does not fit your warmist hypothesis “noise”.

Where is it written : thou shalt not look at the second derivative, only use cumulative integrals and turn everything into nice straight lines that are a “function” of CO2 ?

You point to 1976-1996; there’s strong similarity in that period too, both in the decadal rise and the inter-annual variability. Carrying on dismissing that as a coincidental similarity of the “noise” in both signals, and claiming we should not even be looking at it, is not going to build your credibility much.

• @ Ferdinand… that is definitely not true. There is something wrong with the input of co2 and the corresponding amount of co2 in the atmosphere before and after the Industrial Revolution. The before is that decreasing amounts of co2 were a slow slide towards plant death (actually not so slow). The carbon sinks are much larger than a balanced co2 system. In fact from 2006 to 2013 half of the entire co2 output was sunk, meaning not in the atmosphere; that’s 4 years of total man made carbon emissions that went somewhere.

Additionally, my math shows a difference in the math used to convert carbon into a molecule of co2. I show a much higher level of molecule growth that should be there but isn’t (I show 3.48 molecules for 18.4 billion metric tons in 2010, half of the 38 bmt released). One other thing to consider: there are no negative numbers. Tell me the importance of no negative numbers. Additionally, the temps are not really growing in line with accumulated co2, and the atmospheric co2 is not in line with output.

This year they are forecasting a growth of 4 molecules. And if the sun goes quiet that will probably happen (maybe 3 something). That’s what I show, even for this year. But what happened last year or the year before? For 2013 NOAA shows an increase of just 2.05 and 2013, 2.13; both years should have been closer to 4. If I’m right, fully 70% of the released carbon isn’t coming back or even around. From 2007 to 2013 at least 32 billion metric tons of carbon (multiply by 3.67 to get co2) has vanished (117 billion metric tons of co2). That’s the official numbers. My numbers would put that amount 20-30% higher (and I lowballed that on purpose). And you are implying that the earth was somehow different in the 1960’s and couldn’t swallow half of what’s being produced today? Aside from the amount in free form from year to year, there shouldn’t have been any increase. Negative numbers anyone? AGW, too much wrong, too little right.

While we are at it, tell me what happened in 1992 that resulted in a 0.48 increase? From 1987 the numbers slowly slid till 1992, then slowly rebounded and hit what is still an all time high of 2.93 in 1998. Care to explain that?

• swood1000,

Indeed: if the increase in the atmosphere is larger than human emissions, the increase is a mix of human and natural emissions. If the increase is less than the human emissions, human emissions are fully responsible. If there is a decrease, natural sinks are larger than human emissions…

• John,

Please have a look at the influence of the variability at Wft on the total increase in the atmosphere…

There is no measurable variability in the emissions, all variability in the rate of change is in the sink rate, which is heavily temperature dependent. But that is only variability: +/- 1 ppmv around the trend. The CO2 trend caused by the small increase in temperature is not more than 5 ppmv over the past 55 years.

It may be of academic interest where the origin of the variability is (mostly in tropical vegetation), but its influence on the total increase of CO2 in the atmosphere is small…

• Mike,

Have a look at the influence of temperature on the total increase of CO2 over the full period (in the previous message to John): 5 ppmv over a period of 55 years. Natural variability in rate of change: +/- 1 ppmv/year. Average increase rate in the past 55 years: 1.5 ppmv/year. Human emissions over the same period: 3 ppmv/year. Total measured increase over the past 55 years: 80 ppmv.

Even if the increase rate was 0% of the emissions one year, 100% in the next year, average 10% in one decade and 90% in the next, what does that prove about the cause of the increase? Nothing.
You are looking at the derivative, where most of the trend is eliminated. The trend is in the offset (1 ppmv) and the slope (+1 ppmv over the full period). That is where the human emissions are. The influence of temperature is in the variability, hardly in the offset (0.09 ppmv/year) and not at all in the slope.

Thus whatever the cause (temperature on tropical forests) of the variability in the rate of change, that has hardly any influence on the total increase in the atmosphere.

• rishrac,

Your questions are difficult to follow…

To start with: humans emit about 9 GtC/year as CO2. For the current CO2 level of 395 ppmv in the atmosphere the emissions reflect about 9 / 2.13 = 4.2 ppmv/year as input to the atmosphere.
It may be more or less, depending on the total mass of air, etc., but that is not the main point.
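Ferdinand's back-of-envelope conversion (about 2.13 GtC of carbon per ppmv of atmospheric CO2, a rough standard factor rather than an exact value) in one line:

```python
# Rough conversion used in the comment above: ~2.13 GtC per ppmv of CO2.
GTC_PER_PPMV = 2.13

def gtc_to_ppmv(gtc):
    """Convert a carbon mass in GtC to the equivalent atmospheric ppmv."""
    return gtc / GTC_PER_PPMV

# ~9 GtC/year of emissions is roughly 4.2 ppmv/year of potential input.
print(round(gtc_to_ppmv(9.0), 1))  # -> 4.2
```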

As far as I have seen, CO2 and CH4 levels during the Holocene increased slightly over time, although temperatures did go downward, maybe as a result of the growing need for food and feed of a growing population, but I don’t know and let the anthropological historic scientists fight that out.

Anyway the pre-industrial increase of CO2 and CH4 was small and for me of no interest for what happened over the past 55 years.

since 2006 to 2013 half of the entire co2 output was sunk, meaning not in the atmosphere, that’s 4 years of total man made carbon emissions that went somewhere.

The sinks act in ratio to the increase in the atmosphere; averaged over the past 55 years that is slightly over half the average yearly emissions: from ~0.5 ppmv/year for the ~1 ppmv/year of emissions in 1960 to ~2.15 ppmv/year for the 4.2 ppmv/year of human emissions today.

Where does that go? Based on a lot of measurements, of the 9 GtC extra mass per year:
– 1 GtC/year goes into vegetation.
– 0.5 GtC/year goes into the ocean surface layer.
– 3 GtC/year goes into the deep oceans.
– 4.5 GtC/year remains in the atmosphere.

Human emissions increased over time and so did the increase in the atmosphere and the natural sinks. Natural sinks are very variable and change year by year and decade by decade. Temperature is one cause, light scattering (Pinatubo) is another cause and since about 1990, the biosphere (vegetation) changed from a small CO2 source into a small, but increasing sink for CO2, thanks to the extra CO2 in the atmosphere. That may be one of the reasons that the sink rate in the past decade increased. The flat temperatures since 2000 may be another reason. Good stuff to be investigated, but doesn’t make any difference for the cause of the increase in the atmosphere…

• swood1000 says:

Ferdinand Engelbeen –

Indeed: if the increase in the atmosphere is larger than human emissions, the increase is a mix of human and natural emissions. If the increase is less than the human emissions, human emissions are fully responsible. If there is a decrease, natural sinks are larger than human emissions…

Suppose we start with 750GT, have 150GT natural addition and 5GT anthropogenic addition for a total of 905GT. Suppose that during the year 0.165746 of the 905 is removed naturally, which is 150GT removed. This results in a 5GT net increase but 0.165746 of the anthropogenic 5GT addition was removed so 0.83GT of the 5GT net increase was not anthropogenic. Or suppose the same scenario with zero anthropogenic addition. Of the 900GT total 0.165746 would be removed, which is 149.17GT removed, still showing the 0.83GT net natural increase.
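The arithmetic in swood1000's scenario checks out as stated (the numbers are his illustrative figures, not measurements; the attribution question argued in the replies is a separate, definitional matter):

```python
# Checking the arithmetic of the hypothetical scenario above.
start, natural_in, human_in = 750.0, 150.0, 5.0
removal_fraction = 0.165746

total = start + natural_in + human_in             # 905 GT before removal
removed = removal_fraction * total                # ~150 GT removed
net_increase = natural_in + human_in - removed    # ~5 GT net increase
human_removed = removal_fraction * human_in       # ~0.83 GT of the human 5 GT

print(round(removed, 2), round(net_increase, 2), round(human_removed, 2))
```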

• swood1000

You make the same error as many before you: a part of the human contribution (near 20%) is removed, but that amount is simply exchanged for natural molecules. That doesn’t change the total amount in the atmosphere, it only changes the concentration of the human contribution. That has nothing to do with the origin of the increase in mass, which is fully from the human contribution.

In the second case, the entire increase is from the natural imbalance.

For a better insight, have a look at Willis “blue painted CO2“.

• Mike M. says:

swood1000

“Is all the net increase attributable to humans just because that was the amount of the human addition?”
Yes.

“have 150GT natural addition and 5GT anthropogenic addition for a total of 905GT. Suppose that during the year 0.165746 of the 905 is removed naturally, which is 150GT removed.”

That is 150 GT natural addition, 150 GT natural removal, net 0 GT natural change. You might not like the semantics, but that is how it is defined. I believe it is the only logically consistent way to do it given the constraints of mass balance and the fact that we can not tell which CO2 molecule came from where.

In fact, as the amount of CO2 in the atmosphere goes up, the amount removed naturally goes up, so that the natural removal exceeds the amount of natural addition. That is why only a fraction of the anthropogenic CO2 emitted remains in the atmosphere. If we suddenly stopped burning fossil fuels, the amount of CO2 in the atmosphere would start going down.
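Mike M.'s last paragraph describes a first-order sink model. A toy sketch (the equilibrium level and sink rate are assumed round numbers, not calibrated values) shows the behaviour he describes: concentration rises while emissions continue and declines once they stop.

```python
# Toy mass-balance sketch (illustrative parameters, not a calibrated
# carbon-cycle model): natural removal scales with the amount above an
# assumed pre-industrial equilibrium.
EQUILIBRIUM = 280.0   # ppmv, assumed baseline
SINK_RATE = 0.02      # fraction of the excess removed per year (assumed)

def step(co2, emissions_ppmv):
    """Advance CO2 one year: add emissions, remove a fraction of the excess."""
    return co2 + emissions_ppmv - SINK_RATE * (co2 - EQUILIBRIUM)

co2 = 400.0
for _ in range(10):            # ten years of continued 4 ppmv/yr emissions
    co2 = step(co2, 4.0)
peak = co2                     # concentration has risen above 400 ppmv

for _ in range(10):            # ten years after emissions stop
    co2 = step(co2, 0.0)
print(peak, co2)               # rises while emitting, falls after stopping
```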

• swood1000 says:

Ferdinand Engelbeen and Mike M. –
Do I have my math right here? Dave added 0.83. After Tom added his amount the total was 5. How much did Tom add? Answer: 5?

• swood1000 says:

Ferdinand Engelbeen and Mike M. –

Is this a true statement:
Dave took action X and Tom took action Y. The total at the end was 0.83 higher than it would have been had Dave not taken action X. Therefore, 0.83 of the final total is attributable to the action of Dave.

• swood1000 says:

Ferdinand Engelbeen and Mike M. –

Is this a valid way of determining the anthropogenic contribution: (a) determine what the total would have been without any anthropogenic contribution, (b) determine what the total is with the anthropogenic contribution, (c) the difference is the anthropogenic contribution.

68. Bernie Hutchins says:

Willis – good post

With regard to your first two graphs. Your Fig. 1 and Fig. 2 are equivalent to a discrete time system (“difference equation” or simulation of a continuous system) of:

y(n) = gy(n-1) + x(n)

where y(n) is the output, x(n) is the input, and g is the positive feedback of just a hair over 0.9. For Fig. 1 we have an “IMPULSE response” (technically a “unit-sample response”) of g^n. That is, x(n)=1 for n=0 and x(n)=0 thereafter. Fig. 2 is thus the “STEP response” of the same system. That is, x(n)=1 for n=0 AND thereafter. This is a constant or “DC” input. The final level of the step response is (asymptotically) 1/(1-g) which is a hair over 10, as you show.

What we have is a simple system responding to a SHIFT of a constant in the input. It’s the same system, and the characteristic time is unchanged by the input chosen. I can get the time constant from g from the step response, but most easily, directly from the impulse response (g^1).

And the response to a step, or any general shape, (which would be a convolution of the input with the impulse response – likely looking quite different from each other), just makes it a bit harder to determine the unique time constant (but still trivially – often by Prony’s method). So I think that your apples/oranges is fundamental, if one uses a conventional impulse/step terminology, even before we consider that the system isn’t even Linear Time-Invariant.
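Bernie's difference equation can be run directly. Using g = 0.9 for round numbers (he specifies "a hair over" that), the impulse response is g^n and the step response settles at exactly 1/(1-g) = 10:

```python
# Direct simulation of the difference equation y(n) = g*y(n-1) + x(n).
g = 0.9

def respond(x):
    """Run the input sequence x through the one-pole system, zero initial state."""
    y, out = 0.0, []
    for xn in x:
        y = g * y + xn
        out.append(y)
    return out

impulse = respond([1.0] + [0.0] * 99)   # unit-sample response: g**n
step = respond([1.0] * 100)             # step response -> 1/(1 - g) = 10

# The feedback g can be read directly off the impulse response as the ratio
# of successive samples, which is Bernie's point about recovering the time
# constant most easily from the impulse response.
print(impulse[5], impulse[1] / impulse[0], step[-1])
```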

69. @ Willis, I used a different method for estimating the ppmv of co2 from carbon. For the year 2010 I got 4 ppmv when 2.42 ppmv were reported. (Not on the total amount, since half went into the ocean and land.) I did overweight the atmosphere by at least 10%. Less weight and the ppmv would have been higher. It is worth noting that in several of the years the growth of co2 molecules was less than what would be projected, provided again that each year half of the carbon was sinking and not being pulsed back into the climate system. It appears that the carbon that is sinking is NOT being pulsed back. Am I mistaken that the amount of co2 for 2013, for example, 9.9 billion metric tons of carbon, is purely man made, or is it a composite of man made and natural? If 9.9 bmt is all man made, and using 2.13 as the relationship, then for a number of years now the sink has been 50% and not being pulsed back. (From 2006 to 2013 an 18% rise in carbon; there is no corresponding rise in the co2 being pulsed back.) In any case, some co2 must surely be being pulsed back, wood fires, vegetation etc.; what percentage of the growth rate of co2 would that be? More troubling, what was happening before the advent of burning fossil fuels to maintain the 283 or 270 ppmv as has been reported over time? 1000 years is too long of a time frame without major inputs of co2, and then why isn’t it showing up in the records?

70. Bart says:

FTA:

“The clear inference of this is that various natural sequestration processes have absorbed some but not all of the fossil fuel emissions. “

No. The clear inference is that various natural sequestration processes have absorbed anywhere from 42% to all of the fossil fuel emissions. There is nothing in this that says it did not take all, and that the growth cannot be almost entirely due to natural processes.

“Next, as you can see, using an exponential decay analysis gives us an extremely good fit between the theoretical and the observed increase in atmospheric CO2.”

It is a superficial, low order polynomial fit, which is not at all difficult to get by chance.

The poorness of the fit can be seen when you look in a domain where you don’t get superficial, low order polynomial resemblances. In the rate domain, it is very clear that CO2 has been at a steady rate since the advent of the “pause”, whereas emissions have kept marching ever upward

http://s1136.photobucket.com/user/Bartemis/media/CO2_zps330ee8fa.jpg.html?sort=3&o=13

“…they are different because there is no reason to expect that apples and oranges would be the same…”

You do, in fact, expect them to track proportionately. In the rate domain, they are not tracking. See above plot.

“In fact, as Figure 3 shows, the observed CO2 has tracked the total human emissions very, very accurately.”

It hasn’t. It is quite poor. See above plot. This is a better fit, and it fits the rate domain as well.

“Airborne residence time (bomb test data): how long an individual CO2 molecule stays in the air.

Pulse decay time (Bern Model): how long the increased atmospheric concentration from a pulse of injected CO2 takes to decay back to pre-pulse conditions.”

And, these two times must approach one another when sinks are very active. The indications are that the sinks are very active. That is why atmospheric CO2 can be very closely calculated by integrating the temperature relationship, with little to no consideration of human inputs necessary.
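For reference, the relation Bart asserts, dCO2/dt = k·(T - T0), integrates forward like this. The values of k, T0, the starting level, and the temperature series below are all hypothetical placeholders; whether this relation actually drives the trend is exactly what the thread disputes.

```python
# Euler integration of Bart's asserted relation dCO2/dt = k*(T - T0),
# with hypothetical placeholder values throughout.
k, T0 = 2.0, 0.0        # ppmv/year per degC, baseline temperature (assumed)
temps = [0.1 * i / 50 + 0.5 for i in range(50)]  # toy warming series, degC

co2 = 315.0             # assumed starting level, ppmv
series = []
for T in temps:
    co2 += k * (T - T0)  # one Euler step with dt = 1 year
    series.append(co2)

print(series[-1])        # CO2 keeps rising as long as T stays above T0
```

Under this model the concentration rises whenever T exceeds T0 regardless of emissions, which is why Bart argues human inputs need "little to no consideration"; Ferdinand's replies above dispute that premise.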

• Bart says:

Forgot to link directly to the jpeg…

• Bart,

As usual you have the same misleading plot: using similar variables, but plotting them with different units and an offset for one of them. The same comparison, properly plotted on the same scale without offset, gives a completely different impression:

where the red line is the theoretical increase in the atmosphere with similar coefficients as calculated by Willis.
As the calculated increase rate still is widely within the natural (caused by the sink rate) noise, there is nothing to fuss about…

No. The clear inference is that various natural sequestration processes have absorbed anywhere from 42% to all of the fossil fuel emissions. There is nothing in this that says it did not take all, and that the growth cannot be almost entirely due to natural processes

Of course, if you bring $1,000 per month to your savings account at the local bank and the bank shows in its balance at the end of the year a net gain of $6,000, all that gain is from other clients, not from your money.

In such a case, I prefer to get my money as fast as possible out there and look for some more solid investment…

• Bart says:

Your plot is misleading, Ferdinand, because you have scaled it in a manner to mask the divergence in the trend since the “pause”. But, even in your plot, the emissions are accelerating, and the atmospheric concentration is not.

“Of course, if you bring $1,000 per month to your savings account at the local bank and the bank shows in its balance at the end of the year a net gain of $6,000, all that gain is from other clients, not from your money.”

If your wife has been withdrawing $1,000 per month, but you had $120,000 to begin with and an interest rate of 5%, then yes, all that gain is from other sources.

• Mike says:

“As the calculated increase rate still is widely within the natural (caused by the sink rate) noise, there is nothing to fuss about…”

The data only vaguely fits your proposed relationship but you dismiss any deviations as “noise”. That just about typifies the AGW argument.

The “noise” in question, my friend, is temperature: SST. The big spike in 1998 should be a clue, emissions dropped but there was a big “noise” spike followed by a couple of low years where emissions were rising.

• Mike,

All extra input is from the one-way human emissions: 100% input, no sink.
Natural emissions have been more than compensated over the past 55 years: more sink than source.
All the variability in the rate of change of CO2 in the atmosphere is caused by the influence of temperature variability on the sink capacity, not the source contribution; based on the opposite δ13C and CO2 movements, that is the influence of temperature and drought on (tropical) forests.

The biosphere is over time a net, increasing sink for CO2. That means that variability and trend in the rate of change are from two independent processes, where the variability is temperature dependent, but the trend may or may not be temperature dependent.

Even if the increase in the atmosphere was 99% of human emissions during a decade and the next decade it was only 1%, still humans are fully responsible for the increase in the atmosphere, as long as the increase rate is between zero and all human emissions/year…

• Bart,

My plot misleading? You surely should read that book “How to Lie With Statistics”, which shows that it is your plot that is misleading. All I have done is plot the calculated CO2 increase rate at the same zeroed scale as the emissions and the measured increase rate in the atmosphere. The 53% line is just a coincidence, but it shows that the rate of change of CO2 simply follows human emissions over the past 55/115 years.
In the period 1976-1996 the atmospheric increase even decelerates with increasing temperatures and emissions…

If your wife has been withdrawing \$1000 per month

Bad analogy, as there is hardly any CO2 withdrawal by humans at all…

• Bart says:

“All extra input is from the one-way human emissions, 100% input, no sink.”

Wrong. The human input induces sink activity all its own. It is a dynamic system. The sinks expand due to the extra pressure humans put on them. For all intents and purposes, that is artificial sink activity.

“All I have done is plotting the calculated CO2 increase rate at the same zeroed scale as the emissions and the measured increase rate in the atmosphere.”

And, scaled it arbitrarily by 53%, to make it appear that they are coincident. In a few more years, you will have to scale it by 50% to keep it on track. Then 40%. Then, less. It’s a moving target, because the two are not tracking. Eventually, you are going to have to give it up.

The period 1976-1996 matches the temperature record, as does the current lull.

“Bad analogy, as there is hardly any CO2 withdrawal by humans at all…”

Wrong. The human input induces sink activity all its own. For all intents and purposes, that is artificial sink activity.

• Bart,

Wrong. The human input induces sink activity all its own.

OK, let’s do the calculation. Humans emit 4.2 ppmv/year. That adds to the 110 ppmv of CO2 above the equilibrium for the current temperature, making it in first instance 114.2 ppmv, of which 3.7% is pressure increase caused by fresh human CO2. Last year’s sink rate was 2.15 ppmv, caused by the 110 ppmv of extra CO2 pressure. The new sink rate for the increased pressure difference will be 2.15 * 114.2 / 110 = 2.23 ppmv, of which 0.08 ppmv is caused by the fresh human CO2 emission… Big deal. In fact half of that, as about half the extra increase (in mass) gets into the sinks before reaching a higher CO2 level.
Simply said: besides a negligible extra sink, all human CO2 as mass is added one-way to the atmosphere.
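The arithmetic above can be sketched in a few lines. This is only an illustration of the comment's own numbers (4.2 ppmv/yr emissions, 110 ppmv excess, 2.15 ppmv/yr net sink); the linearity of the sink in the pressure excess is the comment's premise, not a measured law.

```python
# Sketch of the marginal-sink arithmetic, using the comment's own numbers.
# The linear dependence of the sink on the pressure excess is assumed.

human_emissions = 4.2   # ppmv/year from human emissions
excess = 110.0          # ppmv above the temperature-dictated equilibrium
net_sink = 2.15         # ppmv/year net sink at the 110 ppmv excess

new_excess = excess + human_emissions       # 114.2 ppmv in first instance
new_sink = net_sink * new_excess / excess   # 2.15 * 114.2 / 110

extra_sink = new_sink - net_sink            # sink increase from fresh human CO2
# About half of the extra increase is absorbed before the level rises:
effective_extra_sink = extra_sink / 2

print(round(new_sink, 2), round(extra_sink, 2))  # 2.23 0.08
```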

And, scaled it arbitrarily by 53%, to make it appear that they are coincident.

Do you have a reading comprehension problem? I have repeatedly said that the 53% line is only the average increase in the atmosphere, by coincidence also within the natural variability.
It is NOT about the 53% line, it is about the calculated trend which is only based on the emissions minus the sink rate based on the CO2 pressure in the atmosphere vs. the equilibrium pressure.
No scaling, no offset, no manipulation by using different units, just straight-forward calculation.

The period 1976-1996 matches the temperature record, as does the current lull.

Of course, again by cherry picking the temperature series (a different one for different purposes) and the start-end date you can match the data. In this case the UAH series, which starts in 1979, not 1976. But even there, the CO2 rate of change trend is negative with a small increase in temperature…

Take your beloved HadCRU SH series, or the 30N-30S series and plot the trend lines: they are strongly opposite…

• Bart says:

“OK, let’s do the calculation…”

You do not know the sink rate. This is where you err. You implicitly assume the answer before you make your calculations, framing the problem on that basis, and then seem to think it is compelling evidence when you get the answer you preordained. It is circular logic.

There is a full range of natural inputs and sink rates which satisfy the observations, beginning with 110 natural input and half of everything, including human inputs, sunk, to all but a fraction of natural and human inputs sunk, with the remaining fraction of natural input by happenstance being roughly 1/2 of human emissions.

If the sinks are active, and they are, then almost everything coming in will be removed. Which means that the natural inputs simply have to be enough that the remaining fraction is the rise observed, whatever it may be.

“No scaling, no offset, no manipulation by using different units, just straight-forward calculation.”

You have assumed the 53% because it produces results you like. But there is no fundamental reason for 53%. None at all. You are engaging in a circular argument, yet I just cannot seem to make you see it.

“…even there, the CO2 rate of change trend is negative with a small increase in temperature…”

You are keying off of outlier data, fitting a trend to noise. This is meaningless, obfuscatory legerdemain. A cursory look at the plot is enough to see directly that there is close agreement. The picture is worth 10,000 trend lines.

• Bart,

You do not know the sink rate.

We do know the sink rate: whatever the natural and human fluxes, the current CO2 pressure in the atmosphere is 110 ppmv above the equilibrium for the current temperature, whatever caused it, dynamic or static.
The sink rate for a linear process is proportional to the pressure difference in the atmosphere. In this case, the 110 ppmv pressure difference gives a net sink of 2.15 ppmv/year.

That is independent of how much CO2 gets in and out in general or for any individual flux, human or natural. Of course the year by year variability does play a role in the sink rate: it varies from one year to the next, in this case the 2.15 ppmv/year is the value for the linear trend in sink rate for last year, in whatever compartment the CO2 may sink.
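Assuming, as the comment does, a first-order (linear) sink, the two figures above imply a time constant, which ties back to the half-life and tau discussion in the head post. A minimal sketch using only those two numbers:

```python
import math

# A first-order sink means exponential decay of the excess toward equilibrium.
# The 110 ppmv excess and 2.15 ppmv/year net sink quoted above then imply a
# time constant tau and a half-life, in the sense defined in the head post.

excess = 110.0    # ppmv above equilibrium
net_sink = 2.15   # ppmv/year

tau = excess / net_sink          # e-folding time in years
half_life = tau * math.log(2)    # time to decay to half the excess

print(round(tau, 1), round(half_life, 1))  # 51.2 35.5
```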

You have assumed the 53% because it produces results you like.

Bart, I didn’t assume anything. I just plotted two lines: the 53% line, as that was the average “airborne fraction” over the past 55 years, and the calculated trend, based on emissions and the calculated sink rate caused by the pCO2 difference between the current and equilibrium levels. Here is the same graph without the 53% line:

Without scaling, offset, or any other manipulation, just straight-forward calculation based on a simple linear equilibrium process.

The picture is worth 10,000 trend lines.

A picture which pretends to prove that the trends match, without showing the trends, is highly misleading. You make a lot of fuss about the last decade, where the sink rate increased somewhat, but don’t refer to other periods in the past where the trends are opposite to each other, even over longer periods (even if you truncate before the 1991 Pinatubo eruption)…

• Ferdinand,

Please re-visit my blog (which I have revised after additional analysis) and critique it there. Just click on my name. Others who have been following the arguments between Bart and Ferdinand may find it interesting.

• Bart says:

“We do know the sink rate”

No, you do not. There is a continuum of sink rates and natural inputs which will produce the same observations. Only by a priori assigning attribution do you impose a unique solution. You are engaging in circular logic.

“Bart, I didn’t assume anything, I just plotted two lines: the 53% line, as that was the average “airborne fraction” over the past 55 years…”

Yes, you did. You assumed a 53% airborne fraction. You assumed, at the very beginning, that 53% of emissions are staying in the atmosphere. Small wonder that you then conclude that 53% of emissions are staying in the atmosphere. Circulus in probando.

“A picture which pretend to prove that the trends do match without showing the trends is highly misleading.”

A least squares trend line is merely a calculation. When performed on stochastic data, it becomes a stochastic variable, with mean and variance and other statistical properties. It is not a fundamental measurement of “truth”. It is not magic. It cannot confer the ability to “see” beyond the noise.

Your eye is a highly refined instrument. It can take in a mass of data which your wondrous brain can process in massively parallel fashion. Just by looking at the plot, you can see immediately where the outlying deviations occur, and that they are a small portion of the overall excellent fit.

Sometimes, in borderline cases, the eyes and brain can fool you into seeing things that are not there. But, this is no such borderline case. This is very straightforward. The temperature record is an excellent fit across the entire time series. Very obviously a much better fit than emissions scaled for an assumed “airborne fraction”.

• Bart,

We know the NET sink rate, as that is the difference between human emissions and the increase in the atmosphere, whatever the individual sinks and sources did.

That is the result of the increased pressure in the atmosphere. That pressure is the driving force, and the net sink rate is the result of it. That force tries to re-establish the dynamic equilibrium after a disturbance, whatever the origin of that disturbance: ocean upwelling, volcanoes or humans…

If it is a simple first order process, then the net sink rate is proportional to the disturbance, in this case the increased pressure in the atmosphere above the temperature-dictated equilibrium.
That it is indeed proportional can be seen in the fact that the increase in the atmosphere and the increase in the net sink rate both show a fourfold rise over the past 55 years.

Yes, you did. You assumed a 53% airborne fraction. You assumed, at the very beginning, that 53% of emissions are staying in the atmosphere.

Bart, you are completely lost on that. I never, ever assumed anything about the airborne fraction. For me it may be 1% or 99% of human emissions; that still shows that human emissions are the main cause of the increase in the atmosphere. The only reason I plotted the 53% line (which I regret by now) is that it was the slope of the measured airborne linear trend over the past 55 years.

Again, there is not the slightest hint of the 53% in the calculated trend which is:

CO2(yr) = CO2(yr-1) + CO2(humans) – 2.15 * (CO2(atm) – CO2(eq)) / 110

Where CO2(eq) = 290 + 8 * (T – T(1850))
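For readers who want to try it, the recursion above can be sketched directly. The starting level, emissions and temperature anomaly below are illustrative stand-ins, not the historical records; only the update rule follows the two formulas in the comment.

```python
# Stand-alone sketch of the recursion in the comment:
#   CO2(yr) = CO2(yr-1) + emissions - 2.15*(CO2 - CO2eq)/110
#   CO2(eq) = 290 + 8*(T - T(1850))
# Inputs are illustrative stand-ins, not the real emission/temperature records.

def co2_equilibrium(temp_anomaly):
    """Equilibrium CO2 (ppmv) for a temperature anomaly vs. 1850 (degC)."""
    return 290.0 + 8.0 * temp_anomaly

def step(co2_prev, emissions, temp_anomaly):
    """Advance one year, with the sink linear in the excess over equilibrium."""
    sink = 2.15 * (co2_prev - co2_equilibrium(temp_anomaly)) / 110.0
    return co2_prev + emissions - sink

co2 = 400.0  # ppmv, illustrative starting point
for _ in range(10):
    co2 = step(co2, emissions=4.2, temp_anomaly=0.9)
print(round(co2, 1))  # level after ten illustrative years
```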

The temperature record is an excellent fit across the entire time series. Very obviously a much better fit than emissions scaled for an assumed “airborne fraction”.

Of course the variability fits, as temperature variability is the direct cause of the CO2 variability, and in this case you have synchronized them by taking the derivative; but my calculated trend follows the average variability without problems…

• Bart says:

“We know the NET sink rate…”

Sorry. Not enough to calculate what you want to calculate.

“If it is simple first order process, then the net sink rate is proportional to the disturbance, in this case the increased pressure in the atmosphere above the temperature dictated equilibrium….”

The temperature dictated equilibrium is a moving target, which evolves according to

dCO2/dt = k*(T – T0)

“I never, ever assumed anything about the airborne fraction.”

This is really tiresome. You processed the data to convert the blue line into the red line in a manner which gives a superficial, low-order polynomial agreement. Emissions have been accelerating since 2000 while atmospheric concentration has not. Meanwhile, the temperature relationship fits that time period, and all other time periods since at least 1958.

“…but my calculated trend follows the average variability without problems…”

It doesn’t fit at all. It is particularly bad since temperatures stopped climbing. The temperature relationship

dCO2/dt = k*(T – T0)

fits better.

• Bart:

Sorry. Not enough to calculate what you want to calculate.

No problem at all: we have extra information from the 55 years of data:
– Human emissions are known: a factor 4 increase over the full time span
– The levels in the atmosphere are known: a factor 4 increase over the full time span.

That gives some extra equations (H = human and N1…N3 = natural contribution, L1…L3 = level in the atmosphere):

L1 = tau*N1
L2 = tau*(H + N2)
L3 = tau*(4*H + N3)
where
L3 – L1 = 4*(L2 – L1)
or
tau*4*H + tau*N3 – tau*N1 = tau*4*H + tau*4*N2 – tau*4*N1
or
N3 – N1 = 4*(N2 – N1)
which has two and only two solutions: either the natural cycle increased a 4-fold, in lockstep with human emissions, or there was no increase in the natural cycle at all and N1 = N2 = N3.
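A quick numeric check of the derived relation N3 – N1 = 4*(N2 – N1). The values of N1 and H below are arbitrary placeholders; the sketch only verifies that the two scenarios named in the comment satisfy the relation (being a single equation, it does not by itself exclude other combinations):

```python
# Check that the two named scenarios satisfy N3 - N1 = 4*(N2 - N1).
# N1 and H are arbitrary placeholders; only the relation is being tested.

def constraint_holds(n1, n2, n3):
    """True when N3 - N1 = 4*(N2 - N1)."""
    return abs((n3 - n1) - 4.0 * (n2 - n1)) < 1e-9

N1, H = 100.0, 4.2

# No increase in the natural cycle: N1 = N2 = N3.
assert constraint_holds(N1, N1, N1)

# Natural cycle increasing in lockstep with human emissions:
# N2 = N1 + H, N3 = N1 + 4*H.
assert constraint_holds(N1, N1 + H, N1 + 4 * H)

print("both named scenarios satisfy the relation")
```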

There is not the slightest indication that the natural cycle increased a 4-fold, there are several indications that the natural cycle didn’t change much over time…

The temperature dictated equilibrium is a moving target, which evolves according to
dCO2/dt = k*(T – T0)

Which violates all physics and all observations, including Henry’s law which says that
ΔCO2 = k*(T – T0)

For a step change in temperature, the CO2 levels integrate towards a new equilibrium whatever the time it may cost.
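The difference between the two proposed relations can be illustrated with a step change in temperature. In this sketch the rate constant k and the relaxation time are illustrative choices, not fitted values; only the 8 ppmv/°C equilibrium shift comes from the comment. The dCO2/dt = k*(T – T0) form ramps without bound, while the Henry's-law form relaxes to a fixed offset.

```python
# Step response of the two competing relations, with illustrative constants.
#   Rate form:     dCO2/dt = k*(T - T0)      -> anomaly grows without bound
#   Henry's form:  excess relaxes toward k_h*(T - T0) with time constant tau

k = 2.0      # ppmv/year per degC (illustrative rate constant)
k_h = 8.0    # ppmv per degC (the 8 ppmv/degC equilibrium shift above)
tau = 51.0   # years, illustrative relaxation time
dT = 1.0     # step change in temperature at t = 0, degC

rate_form_co2 = 0.0    # CO2 anomaly, ppmv
henry_form_co2 = 0.0

for _ in range(200):
    rate_form_co2 += k * dT                               # a pure ramp
    henry_form_co2 += (k_h * dT - henry_form_co2) / tau   # asymptotes to 8 ppmv

print(round(rate_form_co2, 1), round(henry_form_co2, 1))  # 400.0 7.8
```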

You processed the data to convert the blue line into the red line in a manner which gives a superficial, low order polynomial, agreement.

This is getting extremely annoying: I didn’t assume anything. I only used the very long term influence of temperature on the CO2 equilibrium level, the measured increase in the atmosphere, and the measured net sink rate based on the ΔpCO2 between the measured CO2 level and the equilibrium CO2 level.

It is accelerating since 2000 while atmospheric concentration has not been accelerating.

So what? Even if the increase in the atmosphere was 1% of emissions one year and 99% the next, or 10% one decade and 90% the next: that is the natural variability in the sink rate (not the source rate), which is only +/- 1 ppmv around a trend that is already 110 ppmv above equilibrium.

• Bart says:

“That gives some extra equations…”

I am afraid I am going to have to be harsh, and call this out as pure mathematical gibberish, on a level that should cause embarrassment. One cannot solve for two variables with one equation. It is not possible. No matter how you try to rationalize it, you can’t get something for nothing.

“Which violates all physics and all observations, including Henry’s law which says that
ΔCO2 = k*(T – T0)”

That is not what the data say. Clearly, directly, and distinctly. Your theory must fit the data, not the data the theory.

Henry’s law applies to a steady state, closed volume. This system is not in steady state. To the degree it is, it is not closed. You are stuck with your mental block of imagining everything to be part of a static system, when there are dynamic flows involved.

“I didn’t assume anything, I only used the very long term influence of temperature on the CO2 equilibrium rate and the measured increase in the atmosphere and the measured net sink rate based on the ΔpCO2 between measured CO2 level and equilibrium CO2 level.”

You assumed in that one sentence:

1) the long term influence of temperature on the CO2 equilibrium rate
2) That the measured increase in the atmosphere was due to human emissions
3) that you could even measure the net sink rate without knowing the natural equilibrium to which the net is referenced
4) the equilibrium CO2 level

You do not see it, but I continually read what you write, and wince at every unwarranted assumption that you blithely toss out as established fact. There is no end to the lengths you will go to rationalize what you want to believe, and ignore the data which tell us clearly that it is a fantasy.

• Bart,

One can not solve for two variables with one equation.

The fact that both the increase in the atmosphere (whatever the cause) and the net sink rate increased a 4-fold in lockstep with human emissions gives extra equations, which leave two and only two solutions:
– either the natural emissions increased a 4-fold in lockstep with human emissions,
– or the natural emissions didn’t increase.
But as you don’t like the result, you just hand-wave that there is still only one equation, without even looking at the math or saying where the math is wrong…

That is not what the data say

Bart, your data are composed of two completely independent processes: one that causes the variability and one that causes the trend. The first is proven to be caused by the temperature influence on vegetation. The second is proven not to be caused by the temperature influence on vegetation. Thus there is no proof that the trend in the data is caused by temperature, none at all.

Henry’s law applies to a steady state

Pure nonsense. Henry’s law applies to static and dynamic processes alike. No matter if a lot of CO2 continuously circulates through the deep oceans and returns via the atmosphere: a change in temperature of 1°C will give a change of ~8 ppmv in the atmosphere. That is all:

(the graph still is made for 17 ppmv/°C, but the principle is the same)

And I “assumed” many things which all were measured:
1) The long term influence of temperature on CO2 levels was measured (for CO2) and calculated (from temperature proxies) in ice cores over the past 800,000 years: 8 ppmv/°C.
The influence of temperature on CO2 levels in the atmosphere above seawater according to Henry’s law is between 4-17 ppmv/°C, where the above 8 ppmv/°C is in the middle of the ballpark.
The influence of temperature on CO2 levels according to Henry’s law was confirmed by millions of direct measurements of seawater in laboratories and in the field.
2) I didn’t assume that human emissions were the cause of the increase: the sink rate is directly proportional to the difference between the measured pCO2 of the atmosphere (whatever the cause of the increase) and the equilibrium pCO2, which is based on 1).
3) Human emissions are known, the increase in the atmosphere is measured.
Net sink rate = human emissions – measured increase. Simple math, something like 2 = 4 – 2.
Nothing to do with any equilibrium.
4) The equilibrium CO2 level is what the CO2 level would be for the current average seawater temperature per Henry’s law, see 1).
You see, every “assumption” is based on physical laws and simple math…

Bart, you are a brilliant person, but completely blinded by your one theory based on an artificial match of two straight lines, which violates all known observations…

• Bart says:

“The fact that both the increase in the atmosphere (whatever the cause) and the net sink rate increased a 4-fold in lockstep with human emissions gives extra equations which solved the equations with two and only two solutions:”

Nope. You are imposing an arbitrary constraint. Sure, a constrained solution to an underdetermined set of equations can be unique. But, if the constraint is arbitrary, so is the solution.

You can’t get something for nothing, Ferdinand. You can’t get something for nothing.

“Thus there is no proof that the trend in the data is caused by temperature, none at all.”

You are in flight from reality. There is proof. There is, indeed, no doubt.

“Henry’s law applies to static and dynamic processes alike.”

Nonsense. It is steady state only. It takes time for CO2 to diffuse. Henry’s law only tells you where everything will end up when equilibrium is achieved.

“The long term influence of temperature on CO2 levels was measured (for CO2) and calculated (for temperature proxies) in ice cores over the past 800,000 years: 8 ppmv/°C”

A) Assumes the ice core measurements are a valid proxy with perfect fidelity
B) Assumes that conditions that hold today are the same as in the past

“I didn’t assume that human emissions were the cause of the increase…”

You said “and the measured increase in the atmosphere and the measured net sink rate”

To get sink rate, you have to make an assumption about what is driving the increase.

“Net sink rate = human emissions – measured increase”

And, Net sink rate = Net sink rate due to human emissions + Net sink rate due to natural inputs. Again, you keep trying to solve for two variables with one equation. Sorry. That does not work.

“The equilibrium CO2 level is what the CO2 level would be for the current average seawater temperature per Henry’s law…”

Henry’s law does not apply directly. See above.

• Bart,

You are imposing an arbitrary constraint

A “constraint” which is measured…

Henry’s law applies to all static and dynamic processes. If the pCO2 in the atmosphere is below the steady-state level of the oceans, then the oceans will be a net source of CO2. If the atmosphere is above the steady-state level of the oceans, then the oceans will be a net sink for CO2.
At this moment the atmosphere is 110 ppmv above the steady state of the oceans for the current average ocean temperature.
Moreover, as the CO2 sinks/sources are extremely sensitive to temperature changes according to your theory, there is no problem sinking all the extra CO2 which is nowadays in the atmosphere above steady state…

A) assumes that ice core measurements are a valid proxy with perfect fidelity

Besides the fact that ice cores are not a “proxy” for CO2: if I take Henry’s law for the oceans today, that gives values between 4-17 ppmv/K, which gives between 3-11 ppmv extra for the 0.6°C warming over the past 55 years at steady state, hardly a difference compared to the measured 110 ppmv increase.
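The quoted range is simply the Henry's-law sensitivity times the warming; a two-line check (the exact products are 2.4 and 10.2 ppmv, close to the 3-11 ppmv quoted in the comment):

```python
# Henry's-law sensitivity (4-17 ppmv/K) applied to 0.6 degC of warming.
# Pure arithmetic on the figures quoted in the comment.
low_sens, high_sens, warming = 4.0, 17.0, 0.6
print(round(low_sens * warming, 1), round(high_sens * warming, 1))  # 2.4 10.2
```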

B) Assumes that conditions that hold today are the same as in the past

The ratios seen in ice cores are the same over each interglacial 100,000 years back in time. They are the same in high-resolution ice cores over the MWP-LIA transition. And they are in the middle of the range which Henry’s law dictates over the past 212 years…

To get sink rate, you have to make an assumption about what is driving the increase.

Not at all, the net sink rate is caused by the total increase of CO2 above steady state, whatever the cause of the increase.

Net sink rate = Net sink rate caused by human emissions + Net sink rate due to natural inputs

Again, not at all: it doesn’t matter what drove the increase in the atmosphere, nor what the composition of the sinks is. The calculation was based on the total net sink rate driven by the total increase in the atmosphere.
Besides that, the human-caused extra sink rate is negligible.

• .

The evidence is there that the entire GHG effect, thus CO2, is a consequence of natural variability: from biological processes, to ocean processes, to forestation, to geological processes, and last but not least global temperature.

This is not to say human emissions do not contribute to the rise in CO2, but rather to suggest they will be overwhelmed by natural variability. That is what past data show, but with the advent of AGW theory, all data which do not support this theory are either inaccurate, no good, or in need of revision.

This is evident today and is always the case no matter what that data may be if it does not support AGW theory.

The case Dr. Salby makes is quite convincing, as are so many of the skeptic arguments.

All that AGW and those who support it offer is speculation. They can never present data which is not met by opposing data suggesting otherwise. How could that be? The answer is because their data is built on a house of cards that is being used to further their absurd theory, which is in the process of being proven wrong.

I know, the data the skeptics use is not correct. We shall see.

71. William Astley says:

ristvan April 20, 2015 at 9:51 am
Some rain on your abiotic petroleum parade. It’s ‘not even wrong’. Gold’s book is crackpot speculation.

William,
You obviously have absolutely no knowledge concerning the abiotic theory and obviously have not read Gold’s book or researched the subject. There is no point asking if you have any logical points to support your name-calling, as you have no knowledge of the subject.

Gold’s theory, that the origin of black coal, CH4 and petroleum is CH4 released from the core as it solidifies, is not a new theory. It is the standard theory for the formation of petroleum in Russia and in Ukraine. There are more than a hundred peer-reviewed papers that support the abiotic theory.

http://www.gasresources.net/plagiarism%28overview%29.htm

This page is written in order to clear up certain misunderstandings connected with the provenance, and authorship, of the modern Russian-Ukrainian theory of deep, abiotic petroleum. Everything about the modern Russian-Ukrainian theory of deep, abiotic petroleum origins is extraordinary. Not only has this extensive body of scientific knowledge permitted the Russian nation, which had been previously petroleum-poor, to achieve energy independence, but also modern Russian petroleum science has been the subject of the most daring attempt at plagiarism in the history of modern science.

Sometime during the late 1970’s, a British-American, one-time astronomer named Thomas Gold discovered the modern Russian-Ukrainian theory of deep, abiotic petroleum origins. Such was not difficult to do, for there are many thousands of articles, monographs, and books published in the mainstream Russian scientific press on modern Russian petroleum science. Gold could read the Russian language fluently.

Gold has more than 50 observations in his book that support the abiotic theory over the organic theory.

The deep earth hypothesis can explain super deposits of petroleum in the Middle East, such as why Saudi Arabia has 25% of the planet’s oil reserves, half of which is contained in only eight fields. Half of Saudi Arabia’s production comes from a single field, the Ghawar.

Excerpt from the Wikipedia article on oil reserves:

http://en.wikipedia.org/wiki/Oil_reserves

Saudi Arabia reports it has 262 gigabarrels of proven oil reserves (65 years of future production), around a quarter of proven, conventional world oil reserves. Although Saudi Arabia has around 80 oil and gas fields, more than half of its oil reserves are contained in only eight fields, and more than half its production comes from one field, the Ghawar field.

The following is an excerpt from Thomas Gold’s book The Deep Hot Biosphere, which outlines some of the observations that unequivocally support an abiogenic origin (a non-biological, primeval origin) for petroleum and natural gas.

(1) Petroleum and methane are found frequently in geographic patterns of long lines or arcs, which are related more to deep-seated large-scale structural features of the crust, than to the smaller scale patchwork of the sedimentary deposits.

(2) Hydrocarbon-rich areas tend to be hydrocarbon-rich at many different levels, corresponding to quite different geological epochs, and extending down to the crystalline basement that underlies the sediment. An invasion of an area by hydrocarbon fluids from below could better account for this than the chance of successive deposition.

(3) Some petroleum from deeper and hotter levels almost completely lacks the biological evidence. Optical activity and the odd-even carbon number effect are sometimes totally absent, and it would be difficult to suppose that such a thorough destruction of the biological molecules had occurred as would be required to account for this, yet leaving the bulk substance quite similar to other crude oils.

(4) Methane is found in many locations where a biogenic origin is improbable or where biological deposits seem inadequate: in great ocean rifts in the absence of any substantial sediments; in fissures in igneous and metamorphic rocks, even at great depth; in active volcanic regions, even where there is a minimum of sediments; and there are massive amounts of methane hydrates (methane-water ice combinations) in permafrost and ocean deposits, where it is doubtful that an adequate quantity and distribution of biological source material is present.

(5) The hydrocarbon deposits of a large area often show common chemical or isotopic features, quite independent of the varied composition or the geological ages of the formations in which they are found. Such chemical signatures may be seen in the abundance ratios of some minor constituents such as traces of certain metals that are carried in petroleum; or a common tendency may be seen in the ratio of isotopes of some elements, or in the abundance ratio of some of the different molecules that make up petroleum. Thus a chemical analysis of a sample of petroleum could often allow the general area of its origin to be identified, even though quite different formations in that area may be producing petroleum. For example a crude oil from anywhere in the Middle East can be distinguished from an oil originating in any part of South America, or from the oils of West Africa; almost any of the oils from California can be distinguished from that of other regions by the carbon isotope ratio.

72. William Astley says:

Additional support for the abiotic theory for the origin of petroleum, black coal, and ‘natural’ gas.

http://www.sciencedaily.com/releases/2009/09/090910084259.htm

“There is no doubt that our research proves that crude oil and natural gas are generated without the involvement of fossils. All types of bedrock can serve as reservoirs of oil,” says Vladimir Kutcherov, who adds that this is true of land areas that have not yet been prospected for these energy sources.

According to Vladimir Kutcherov, the findings are a clear indication that the oil supply is not about to end, which researchers and experts in the field have long feared.

He adds that there is no way that fossil oil, with the help of gravity or other forces, could have seeped down to a depth of 10.5 kilometers in the state of Texas, for example, which is rich in oil deposits. As Vladimir Kutcherov sees it, this is further proof, alongside his own research findings, of the genesis of these energy sources – that they can be created in other ways than via fossils. This has long been a matter of lively discussion among scientists.

http://www.nature.com/ngeo/journal/v2/n8/abs/ngeo591.html

Methane-derived hydrocarbons produced under upper-mantle conditions

There is widespread evidence that petroleum originates from biological processes1, 2, 3. Whether hydrocarbons can also be produced from abiogenic precursor molecules under the high-pressure, high-temperature conditions characteristic of the upper mantle remains an open question. It has been proposed that hydrocarbons generated in the upper mantle could be transported through deep faults to shallower regions in the Earth’s crust, and contribute to petroleum reserves4, 5. Here we use in situ Raman spectroscopy in laser-heated diamond anvil cells to monitor the chemical reactivity of methane and ethane under upper-mantle conditions. We show that when methane is exposed to pressures higher than 2 GPa, and to temperatures in the range of 1,000–1,500 K, it partially reacts to form saturated hydrocarbons containing 2–4 carbons (ethane, propane and butane) and molecular hydrogen and graphite. Conversely, exposure of ethane to similar conditions results in the production of methane, suggesting that the synthesis of saturated hydrocarbons is reversible. Our results support the suggestion that hydrocarbons heavier than methane can be produced by abiogenic processes in the upper mantle.

Answer 6. [to the question: “What is the strongest evidence that you have in your own drilling in the U.S.S.R. to support the deep gas [sic] theory ?”]

Some of the strongest evidence supporting the U.S.S.R.’s drilling for deep oil and gas of abiotic mantle origin may be considered to be the following:

1.) The existence of 80 oil and gas fields in the west Siberian basin which occur partly or completely in crystalline basement rock, such as the Yelley-Igai and Malo-Itchskoye fields, in which all production of oil and gas comes entirely and solely from that rock, at depths of 800 and 1,500 meters, respectively, below the roof of the crystalline basement.

2.) In the year 1981, on the basis of the modern theory of abiotic petroleum origins, a group of Ukrainian geologists proposed the drilling of 10 wells for oil and gas in the Precambrian crystalline basement of the Dnieper-Donets basin (Ukrainian S.S.R.). The analyses and results of this proposal were published as follows:

29.) Porfir’yev, V. P., V. A. Krayushkin, V. P. Klochko, M. I. Ponomarenko, V. P. Palomar and M. M. Lushpey, 1982, New directions of geologic exploration work in the Akhtyrka oil-gas-mining district of the Dnieper-Donets basin, Geol. J., Vol. 42, No. 4, p. 1-11. (In Russian).

30.) Krayushkin V. A., 1987, On the oil and gas content of the precambrian rock in the Dnieper-Donets basin, Lectures of the Acad. Sci. of U.S.S.R., Vol. 294, No. 4, p. 931-933. (In Russian).

The exploration drilling in the Dnieper-Donets basin for oil and gas in the crystalline basement continues presently and will be continued during the next several years.

3.) In Tatarstan (A.S.S.R.), the well 20009-Novoyelkhovskaya is now being drilled, having been begun in November 1989. Its target depth for oil and gas is 7,000 m in the Precambrian basement rock of the southern Tatarian arch (the maximum height of the basement). The well is currently drilling at a depth of approximately 4,700 m, and the roof of the crystalline basement rock has been observed at the depth of 1,845 m. Significant petroleum shows have been observed in that well in the basement granite at depths of 4,500 m and below.

• WA, the first army rule of holes is: when in one, stop digging. So, there are no fossils in any coal beds? No fossils in any source rocks? And, most important, are there any petroleum deposits without organic biomarkers? Now, the last was ‘explained’ by Gold via contaminating deep bacteria. You and he need to do better, since those bacteria have been shown to exist in sedimentary formations only. Metamorphic and igneous rock is too hot for life, by definition. Get a grip.

• Gloria Swansong says:

The Integrated Ocean Drilling Program, if successful, will reach the boundary between thin oceanic crust and the mantle. Extremophilic microbes have been found living at 120 degrees C, which is the temperature of oceanic crust between seven and eight km deep, ie near the boundary. IODP will discover how far down microbes can exist.

Microbes can live in metamorphic and igneous rock after the hot formation of the rock and its transport to cooler regions. They’re mainly autotrophs, primarily living off hydrogen gas.

73. Phlogiston says:

On the subject of atmospheric CO2 increase, this article is about greening of an arid part of north Ethiopia. While it is attributed to laudable land modification efforts, one suspects a helping hand from the EPA’s favourite pollutant gas:

http://m.bbc.com/news/magazine-32348749

74. David L. Hagen says:

Willis
Please see my post above following Ferdinand’s comment, which I meant to put here to both of you.

75. David L. Hagen says:

Willis and Ferdinand. For the Bern model, see the UNFCCC post:
CO2 Impulse Response Function of Bern SAR and Bern TAR models
18 March 02, F. Joos, University of Bern, 3012 Bern, joos@climate.unibe.ch
under
Parameters for tuning a simple carbon cycle model
The major difference between the bomb test results and the Bern model appears to be some very long duration exponentials, not the short-term ones, e.g. note the tau of 407 years. How was that obtained? Contrast Salby’s finding that natural CO2 emissions vary with temperature.

• Willis Eschenbach says:

Thanks, David. As I mentioned, the Bern Model presumes a fractionation of the emitted CO2 into a number of different boxes with different exponential decay times. The number and length of the decay times have changed with the various IPCC reports. Initially we had five decay streams with values for tau (time constant) from 1.3 to 371 years. The IPCC Third Assessment Report used three decay streams with values for tau of 2.6, 18, and 171 years. In addition, some 15% of the emissions is presumed to never decay. None of these are anywhere near the bomb test results’ tau of 8.6 years, nor is there any physical reason why they should be.

As I mentioned above, I’ve run the numbers for the Bern Model. The problem is that we have less than fifty years of data, and that’s not enough to tell whether the Bern Model fits any better than my simple model. At present the rms error is almost identical for the two models (0.63 vs 0.62 ppmv), so we can’t say which one is a better fit.
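As a rough sketch of how such a multi-stream decay works, here is a TAR-style impulse response in Python. The taus (2.6, 18, and 171 years) and the 15% never-decaying fraction are the figures given above; the weights of the three streams are hypothetical, chosen only so the fractions sum to one, since they are not given here:

```python
import math

# Multi-exponential impulse response in the style of the Bern model.
# a0 (the never-decaying fraction) and the taus are from the discussion above;
# the weights are illustrative only, since the comment does not give them.
a0 = 0.15
taus = [2.6, 18.0, 171.0]
weights = [0.30, 0.30, 0.25]   # hypothetical split of the remaining 85%

def airborne_fraction(t):
    """Fraction of an emitted CO2 pulse still airborne t years later."""
    return a0 + sum(w * math.exp(-t / tau) for w, tau in zip(weights, taus))
```

With these illustrative weights, over 20% of a pulse is still airborne after 200 years, whereas a single exponential with tau = 8.6 years would be below 1% within 40 years, which is the mismatch pointed out above.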

w.

76. KevinK says:

Willis wrote;

“Observed CO2 is NOT a function of annual CO2 emissions. It is a function of total emissions, as discussed above and shown in Figure 3. The total amount remaining in the atmosphere at any time is a function of the total amount emitted up to that time. It is NOT a function of the individual annual emissions.”

Regarding the first part; “Observed CO2 is NOT a function of annual CO2 emissions. It is a function of total emissions,”

Total emissions are the integral of annual emissions; last I checked, performing an integral qualifies as a “function of”. And performing the integration effectively removes the (1/dT) portion of the units, which makes the apple equal to the orange. The slope of the integral’s output (i.e. the derivative) is the annual emission, the bit that is added during each time interval to determine the current total.

Regarding the second part; “The total amount remaining in the atmosphere at any time is a function of the total amount emitted up to that time.”

As Willis points out, the “total amount remaining” is a continuous integral function with some CO2 entering and some leaving with postulated “half-lives” which may or may not mean anything.

Dr. Salby’s main point is that the CO2 integral function is rising much faster (~2 ppmv/yr) than the human component is rising (~0.14 ppmv/yr); sounds like a sound observation to me. (Looks like the graphics have units of 1/yr^2, not sure what a square year looks like exactly ??)

Quoting Dr. Salby; “The growth of fossil fuel emission increased by a factor of 300% … the growth of CO2 didn’t blink. How could this be? Say it ain’t so!”

OK, I’ll be among those to say “It ain’t so”.

If one input to the integrated total changes by 300% and the derivative (slope) of the integral does not change, then that input to the integral is minuscule.

Or, the sequestration processes have taken EXACTLY the opposite direction and matched the changes from mankind, in which case we are in happy times, Mother Earth knows how to exactly use up all the “extra” CO2 we evil humans can make. No need for any corrective action at this point is there ???

Of course, when hunting unicorns, predictions of great massive herds approaching, ready to plunder and pillage, make for good headlines. Look out, “THERE BE CO2”, and it will cause a catastrophe, or at least some inconvenience, or at the very least the temperature might go up by a thousandth of a degree before all you young folks die….

Or, worst of all, your tastes in music might change WHILE you start seeing larger spiders, holy guacamole Batman, whatever shall we do ???

TRUST US, WE ARE SCIENTISTS….

Cheers, KevinK.

• Kevin,

Dr. Salby looks at the second (!) derivative of the CO2 rate of change to show that the IPCC has a problem. I think that looking at the second derivative in a very noisy system is of no interest at all, as it says nothing about the cause of the increase in the atmosphere.
Moreover, depending on the chosen period, the second derivative goes up, is flat, or goes down (the latter in the period 1976-1996).

Over the whole period, the average increase in the atmosphere is 53% of human emissions (at Mauna Loa); thus humans are emitting twice what is seen as the increase in the atmosphere. The year-by-year variability of the yearly averages is less than +/- 1 ppmv around a trend of ~2 ppmv/year, against human emissions of ~4.5 ppmv/year. The variability fades within 1-3 years and integrates to zero around the trend. It is proven that the variability is caused by the influence of temperature variations on (tropical) forests, but also proven that vegetation is a net, increasing sink for CO2 over the past 1.5 decades. Trend and variability have different causes…

The futility of the variability around the trend is clear if you plot the long term influence of temperature on CO2 levels (~8 ppmv/°C) vs. the measured increase…

77. evanmjones says:

and somehow in all of that we never pondered exactly how a cheap sheet metal and plywood desk would ward off an atomic bomb

Well, here is some Pondering the Imponderable on that.

The bombs were inconceivably more powerful. Their targeting was quite primitive and unreliable by today’s standards (high near and not-so-near miss possibility).

Now, if you are hit “directly” by the bomb, then nothing will avail. But there is a much larger area that suffers not total destruction, but moderate to minor damage. Anyone on the outer reaches of the blast damage radius might very well be sufficiently or at least partially protected by any sort of cover at all. That desktop might have saved your life.

Not picking on you, Willis. If there is one thing in the world more misunderstood than climate it is nuclear war. 97%+ don’t know beans about it. (I am an old hand at this. I wrote the introduction to the new edition of On Thermonuclear War. link: https://books.google.com/books?id=EN2gtPTjFd8C&pg=PR1&lpg=PR1&dq=on+thermonuclear+war+evan+jones&source=bl&ots=ZMVes2
(page xi.)

78. Willis Eschenbach says:

David L. Hagen April 20, 2015 at 6:34 pm

Willis Eschenbach and Ferdinand Engelbeen
Re:

“He follows that up by not knowing the difference between airborne residence time and pulse decay time.”

Salby may not be presenting it well, but I believe he has gone far deeper into the equations and details than you give him credit for.
You argue “Airborne residence time (bomb test data): how long an individual CO2 molecule stays in the air.”
The bomb test data is NOT “an individual CO2 molecule” but a specific though very small (“infinitesimal”) pulse of CO2 tagged with C14, except that it can be explicitly tracked. How is that infinitesimal pulse that much different from a larger pulse under the Bern model? 0.5% does not make that much difference in total CO2.

Thanks, David. Let me see if I can explain it in a different way.

Carbon is constantly cycling around the planet, to/from being in solution in fresh water and the ocean to being in plants to being in the atmosphere to being in the soil and so on.

Suppose that we could instantaneously paint all the airborne CO2 molecules bright blue. How long would the atmosphere stay blue? This question involves atmospheric or airborne residence time. The bomb test experiments show that molecular overturning happens exponentially, with a half-life of six years (tau = 8.6 years). Note that we are measuring how long molecules stay in the air, and that the question does not relate to the atmospheric concentration of CO2 in any manner. The CO2 concentration stays the same because the blue molecules of CO2 constantly cycling out of the atmosphere are replaced by CO2 molecules constantly cycling into the atmosphere from the various reservoirs.

Next, consider a very different process. Suppose we take a whole bunch of brand-new CO2 molecules, paint them bright blue, and add enough of them to the atmosphere to measurably raise the atmospheric concentration. Here’s the new question—how long will it take for the increased atmospheric concentration to decay half-way back to its equilibrium level? That is to say, what is the half-life of the decay of the concentration?

Note that this is different from the previous question. It is different because although the blue CO2 molecules only stay in the atmosphere with a 6 year half-life, they are constantly replaced by other CO2 molecules so the overall concentration remains high. Until the system responds and absorbs the extra added pulse of CO2 into the various sinks and reservoirs around the globe, the concentration will stay high regardless of how many blue molecules stay airborne.

As a result, the time for the increased concentration to decay back to pre-pulse concentration values is very different from the atmospheric residence time of individual CO2 molecules.
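The two thought experiments above can be put side by side numerically. This is a minimal sketch: the residence time of tau = 8.6 years is the bomb-test value discussed in the post, while the 40-year adjustment time for the concentration pulse is an assumed value, purely for illustration:

```python
import math

TAU_RESIDENCE = 8.6    # years: how long an individual ("blue") molecule stays airborne
TAU_ADJUSTMENT = 40.0  # years: decay of the excess concentration (assumed, illustrative)

def blue_fraction(t):
    """Share of the originally painted molecules still in the air after t years."""
    return math.exp(-t / TAU_RESIDENCE)

def excess_concentration(t):
    """Excess concentration above equilibrium, as a fraction of the initial pulse."""
    return math.exp(-t / TAU_ADJUSTMENT)
```

After 30 years, only about 3% of the painted molecules remain airborne, yet nearly half of the concentration pulse is still there: the blue molecules have been swapped for unpainted ones without the excess having been absorbed.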

Hope that helps to clarify the difference between the two. If not, ask again.

w.

• Joe Born says:

If you’ll forgive my butting in, I’ll mention that this issue was the subject of a post in which I made the same point as Mr. Eschenbach but then discussed some factors that muddy the waters. The resultant thread wandered off into the weeds, I’m afraid, but I think the best explanations of those factors’ effects came in Mr. Engelbeen’s comments, of which perhaps my favorite was the one in which he provided a block diagram of sources and sinks.

79. Coldlynx says:

You write: “the Bern Model is estimating how long the increased atmospheric concentration from a pulse of injected CO2 takes to decay back to pre-pulse conditions.”
When in fact the Bern Model is estimating how long the increased atmospheric concentration from a pulse of injected CO2 takes to decay back to pre-pulse conditions, with estimated climate sensitivities of 2.1 and 4.6 K for a doubling of CO2. Yes, they introduce a CO2/T feedback.
The model is temperature sensitive especially for ocean CO2 balance.

“The Bern Model has been designed to study the relationship between anthropogenic carbon emissions and atmospheric CO2 levels as well as the transient response of the surface temperature signal to a perturbation in the radiative balance of the Earth. ”

“The ratio of the climate sensitivities over land and ocean is chosen in order to obtain a 30 percent warmer equilibrium response over land than over the sea. As a standard, the global climate sensitivity is set to 2.5 K for an increase in radiative forcing corresponding to a doubling of preindustrial atmospheric CO2 (Delta-T(2xCO2)=2.5 K)”

The Bern model anticipates a temperature increase that increases CO2 levels in the atmosphere. And then they anticipate that the increased CO2 changes the temperature. Of course the result from the model will be higher temperatures and CO2 levels.

Just one example of CO2/T feedback in models.

80. Coldlynx says:

From 2003 paper “The anthropogenic perturbation of atmospheric CO2 and the climate system”,

“Sea surface warming is by far the most dominant feedback with respect to CO2 uptake in our model,”

Simple CO2 radiative forcing feedback in models.

• Coldlynx,

Thanks for the reference, it is even worse than I thought… I had the impression that it was mainly saturation of the (deep) oceans that was in play, but if they even included temperature feedbacks…

81. William Astley says:

Ferdinand Engelbeen April 21, 2015 at 12:31 am
Dr. Salby looks at the second (!) derivative of the CO2 rate of change to show that the IPCC has a problem. I think that looking at the second derivative in a very noisy system has not the slightest interest, as that doesn’t say anything about the cause of the increase in the atmosphere.

Moreover, depending of the chosen period, the second derivative goes up, flat or down (the latter in the period 1976-1996).

William,
The second derivative is of course ‘relevant’, as it provides an indication of the direction of future changes and is required to prove or disprove the assertion that anthropogenic CO2 is the major reason for the recent rise in atmospheric CO2. Salby has developed two simple independent analyses to determine what portion of the recent rise in atmospheric CO2 is due to anthropogenic CO2, and found that both support the assertion that no more than 33% of the recent rise is due to anthropogenic CO2 emissions; the remaining 67% is due to natural CO2 sources.

What is the mysterious ‘noise’ that makes his derivative analysis ‘irrelevant’? The planet is not getting warmer or colder. A running average will filter out the year by year temperature changes. Anthropogenic CO2 is increasing steadily. The sinks of CO2 out of the atmosphere are not noisy.

Salby’s analysis is based on the fact that the total sources of CO2 into the atmosphere minus the total sinks of CO2 from the atmosphere must equal the change in CO2 in the atmosphere.

Salby calculated the maximum possible sink of CO2 out of the atmosphere and then used that information and the known anthropogenic emissions of CO2 to calculate 33% as the maximum contribution of anthropogenic CO2 to the recent rise in atmospheric CO2. The remaining 67% is due to natural CO2 emissions, which are due to deep-earth release of CH4. Microorganisms consume a portion of the CH4 and produce CO2 as a waste product.

As I noted, CH4 levels in the atmosphere doubled, increased abruptly around 2002, and then stopped increasing. (Salby noted that fact in his lecture. Did you miss that part? Oh, I forgot, you only watched the first 10 minutes of Salby’s presentation, as I can tell from your comments in this forum.) There is no biological or man-made explanation for a step increase in CH4.

New and old CH4 is rapidly removed from the atmosphere: it is lighter than the major components of the atmosphere, N2 and O2, hence it floats up to the stratosphere, where it is broken down by radiation to form CO2 and H2O, with a half-life of around 2.5 years. There must hence be a steady discharge of CH4 into the atmosphere to maintain a level that is twice what it was before.

The mechanism that caused the increase of CH4 and natural new CO2 is starting to abate. (The source of low-C13 CO2 is derived from low-C13 CH4. CH4, ‘natural gas’, is primarily low in C13, though it varies greatly, with the variance caused by the length of the path time from the deep core to the lower regions of the crust. You really need to read Thomas Gold’s book The Deep Hot Biosphere: The Myth of Fossil Fuels and the related papers concerning the abiotic theory, as they are fundamental to understanding this subject.)

The rise in atmospheric CO2 is now slowing and will in the next few years become negative. I am truly curious how the cult of CAGW will respond to falling CO2 levels, falling ocean levels, and a cooling planet.

• Exactly. I expect at the very least CO2 concentrations will be leveling off because it has been and will continue to be a result of natural processes.

The case the opposition has been trying to make is baseless as is always the case with AGW.

• philincalifornia says:

I am truly curious how the cult of CAGW will respond to falling CO2 levels, falling ocean levels, and a cooling planet.

They will run around saying that they saved the planet, and give themselves lots of awards (and more money).

82. Since we are discussing all this, what happens to the CO2 hanging around for the next several hundred years that the ministers of doom have so brazenly predicted? They’ve made predictions based on their own beliefs and regurgitated data from questionable sources that defy basic scientific inquiry, misleading many people and political decision makers as a statement of fact. The simple and basic fact is that since the Industrial Revolution none of the numbers are negative. Mutual exclusivity between the release of CO2 and the planet’s ability to sink CO2 and/or carbon is apparent in the numbers regardless of half-life: long term, short term, or in the last 10 years, or between any 11-year cycle, most apparent between 1987 and 1998. The composite numbers between man-made, natural carbon cycle, and/or carbon from earth sources not related (or in addition) to these two have not been factored in. Nor has the chemistry of breaking down CO2 back into its elemental parts by any other means been described or thought of. A mere 5% breakdown separate from the organic carbon cycle would leave the earth depleted of CO2 in 20 years without an additional input of CO2. Even at 1%, whatever is being released today won’t be around 100 years from now.

Let me put the 5% in perspective for you. Imagine the amount of CO2 man is producing today, then double it to keep the same amount. Even at 1%, the loss on 400 ppmv is 4 ppmv/year. We are contributing about 2-3?

• Salvatore, always look at the comments, these are often more instructive than the article…

83. William,

Dr. Salby provided no proof whatsoever that the increase in the atmosphere isn’t mostly caused by human emissions. His estimate is based on the decay rate of 14C, which is much shorter than for any injection of fossil fuel CO2.
Looking at the second derivative may have merit if the whole change in the derivative(s) is regular. In this case, the first derivative is highly variable, both annually and over decennia. The first derivative is all over the scale, but still stays within bounds of ~1 ppmv around the trend. Nevertheless, looking at the second derivative in this case says nothing about the future evolution or the cause of the increase in the atmosphere. Neither does the rate of change itself: different processes are at work for the variability and for the offset and slope. The latter two give the increase in the atmosphere, where human emissions have twice the offset and slope of the increase in the atmosphere…

Dr. Salby also used some tricks to convince the audience that natural variability was the cause of the increase: he used the full rate of change, including the offset, to back-calculate the increase in the atmosphere, thus including the part caused by the human emissions…

Thus sorry, whatever natural methane, biogenic or not, may have caused, it is not responsible for the recent increase in the atmosphere (and the drop in 13C/12C ratio should have been much faster)…

84. At the 26.23 minute mark in the video of his March 2015 London talk, Salby discusses the next analysis in his talk,

{bold emphasis mine – JW}

“There is a third way to determine alpha [absorption time of CO2]. Ideally we’d like to perform a controlled experiment; to measure absorption; to remove all sources and watch what happens. But this system [the earth atmospheric system] we don’t control. It controls us. If we can’t remove the sources we must account for them. To do that we must follow CO2 in the atmosphere. We need a tracer in the atmosphere.”

He uses the observations of C14 created from nuclear bomb testing as that tracer in his analysis of absorption time.

At the end of his C14 tracer absorption analysis Salby says at the 31:13 minute mark,

“For reference in [the color] mauve is the absorption time of CO2 in the world of models, which relies upon the so-called Bern model of CO2.

“Notice the time range. It’s not 20 years, it’s 200 years. Even then almost 30% of the CO2 present initially remains in the model world. For comparison, here [the blue and green curves of C14 and observed CO2 respectively] is the observed absorption in the real world.”

[My attention goes to the coincidence that the period of the nuclear bomb testing (late 1950s to early 1960s) is also a period where global industrialization was in relatively rapidly increasing growth mode which means relatively rapidly increasing emissions rates compared to all times earlier in the century.] So, there was in effect a ‘pulse’ of anthropogenic CO2 during the nuclear testing period of interest. In that regard, it seems to be the case that the Bern model scenario curve shown also assumes the effects of a ‘pulse’ of anthropogenic CO2.

John

• This is an edit to my post above John Whitman on April 21, 2015 at 1:06 pm .

Remove this original sentence in my last paragraph,

“My attention goes to the coincidence that the period of the nuclear bomb testing (late 1950s to early 1960s) is also a period where global industrialization was in relatively rapidly increasing growth mode which means relatively rapidly increasing observed CO2 compared to all times earlier in the century.”

Replace it with this new sentence,

“My attention goes to the coincidence that the period of the nuclear bomb testing (late 1950s to early 1960s) is also a period where global industrialization was in relatively rapidly increasing growth mode which means relatively rapidly increasing emissions rates compared to all times earlier in the century.”

Thanks.

John

• John,

As said before, 14C is not a good tracer for an extra shot of CO2 in the atmosphere; its decay rate is much faster than for 12/13CO2. See the difference between the 14C residence time and the decay rate of an extra shot of “normal” CO2, as explained by Willis.

• Ferdinand Engelbeen on April 21, 2015 at 3:10 pm

– – – – – – – – –

Ferdinand Engelbeen,

It is the CO2 tracer that we have got, and we can view it critically. But can someone show me a better actually-measured CO2 tracer for residence time/absorption time during the industrial period?

John

• Ferdinand Engelbeen on April 21, 2015 at 3:10 pm

– – – – – –

Ferdinand Engelbeen,

As to your comment’s reference to a Willis comment, let’s start from a simple beginning.

Let us look at the geologic timescale up to the year ~1850. How did atmospheric CO2 levels (determined by various proxies) change significantly, sometimes for extended geologic periods? We know that they did.

On the geologic timescale (up to the year ~1850), atmospheric CO2 levels sometimes remained fairly constant for extended periods of decades or centuries (or longer), then changed significantly to other levels. There were also periods of significant fluctuation. How did that happen?

It happened by significant short-term and sometimes long-term changes in magnitudes of CO2 sinks and sources.

That all happened without an anthropogenic source of atmospheric CO2.

I think the above is not disputed by anyone in the climate change discourse.

With the advent of industrialization, an anthropogenic source of atmospheric CO2 was started circa ~1850 (or a little later in the 19th century).

Question #1: What part of the change in atmosphere CO2 since the year ~1850 is due to adding the anthropogenic source into the mix of all the other sources and sinks?

I offer an alternate indirect question that I suggest might help answer Question #1: Did the sequestration of atmospheric CO2 into the deposits of all fossil fuels in early geologic times cause significantly and permanently reduced levels of atmospheric CO2 while the fossil fuel deposits remained in the earth? If it did, then we might expect that human reintroduction of that ancient sequestered CO2 might permanently increase atmospheric CO2.

Let’s discuss the alt Q a little bit. I hope some geologists will contribute.

John

• John,

– The historical ratio between temperature and CO2 levels over the past 800,000 years was around 8 ppmv/°C. That includes changes in ice/vegetation area and deep ocean exchanges with the atmosphere.
That holds for the huge changes in temperature between glacial and interglacial periods.
It also holds for shorter periods like the warm(er) MWP and the colder LIA: a drop of 6 ppmv for a drop of ~0.8°C, again around 8 ppmv/°C.

As we may assume that the MWP was at least as warm as today, the increase in temperature since the LIA thus is good for not more than 6 ppmv CO2 increase. That is all.

There were warmer and cooler periods during the Holocene, but these were good for maximum +/- 10 ppmv over the whole past 10,000 years:

The increase since ~1850 is over 100 ppmv. That is not caused by natural variability, including the temperature increase since then.

Answer #1: 104 ppmv by human emissions, 6 ppmv from the warming.
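The arithmetic behind those two numbers can be sketched in a few lines. The 8 ppmv/°C ratio, the ~0.8 °C of warming since the LIA, and the ~110 ppmv total rise are the approximate figures used above; nothing else is assumed:

```python
# Back-of-envelope attribution using the approximate figures in the comment above.
sensitivity = 8.0    # ppmv of CO2 per degree C, long-term ratio from the ice cores
warming = 0.8        # degrees C of warming since the LIA (approximate)
total_rise = 110.0   # ppmv rise in atmospheric CO2 since ~1850 (approximate)

natural_part = sensitivity * warming     # contribution of the warming itself
human_part = total_rise - natural_part   # remainder attributed to emissions
```

With these rounded inputs the split comes out near the ~6 ppmv from warming and ~104 ppmv from emissions quoted above.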

Answer #2: Most of the ultra high levels in the far past were disposed in inorganic carbonate rocks like the white cliffs of Dover (UK) and many other places where once the sea bottom was. Some was disposed in organic deposits like coal, which we are using today.
Thus indeed, increasing our fossil fuel use is responsible for the current increase in the atmosphere. That doesn’t remain indefinitely in the atmosphere: about half of what we emit sinks in the oceans and vegetation. That does take time, but its half life time is about 40 years, not eternity. Of course, if completely mixed with the enormous amounts in the deep oceans, there will be a residual increase in the atmosphere and deep oceans of around 1% or about 3 ppmv extra in the atmosphere, if we stop all emissions today.
If we burn near all oil and a lot of coal (~3000 GtC), the residual CO2 will be around 10% or ~33 ppmv, as further reduction is again in carbonate shells and that needs much more time… That is what the residuals of the Bern model is based on, but that is not relevant for the current situation…
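Those figures can be put in a small sketch. The 40-year half-life and the ~3 ppmv permanent residual are the numbers given above; the 100 ppmv pulse size is illustrative:

```python
import math

HALF_LIFE = 40.0                # years, from the comment above
TAU = HALF_LIFE / math.log(2)   # equivalent e-folding time, ~57.7 years
PULSE = 100.0                   # ppmv of excess CO2 (illustrative pulse size)
RESIDUAL = 3.0                  # ppmv that stays after deep-ocean mixing (the ~1% figure)

def excess(t):
    """Excess atmospheric CO2 (ppmv) t years after emissions stop."""
    # the decaying part of the pulse plus the permanent residual
    return RESIDUAL + (PULSE - RESIDUAL) * math.exp(-t / TAU)
```

After one half-life the decaying part is down by half (excess(40) is about 51.5 ppmv), and after a few centuries only the ~3 ppmv residual remains.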

85. I’m a skeptic, as most of you are, about the whole global warming fiasco, but I thought that I might make the following calculation that seems (if relevant) to show that most if not all the carbon dioxide in the atmosphere is human caused. Forgive me if some of you have already done this.

For a concentration n, e-folding rate (1/time constant) α (1/years), rate of concentration input β (ppm/yr²), and time t, the differential equation for the concentration, assuming a linearly increasing input with time as shown by Fig. 4, is

dn/dt + α·n = β·t

This is readily soluble by finding an integrating factor (e^(α·t)). We get, for an initial concentration n₀,

n = (β/α²)·[α·t - (1 - e^(-α·t))] + n₀·e^(-α·t)

The initial concentration term dies off with time and, for long times, the concentration is

n = (β/α)·t

Getting some numbers from the text, α = 0.1/yr, and from Fig. 4, β = 0.14 ppm/yr², the asymptotic result is n/t = 1.4 ppm/yr, about 2/3 of the observed rate of carbon dioxide increase given in Fig. 5, possibly close enough, given the roughness of the numbers, to attribute most of the increased gas to anthropogenic causes.

If the right-hand side is just β, i.e., a constant input rate, then the solution is simply (β/α)·(1 - e^(-α·t)), limiting as shown in Fig. 2.
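The closed-form solution above is easy to cross-check numerically. This sketch integrates dn/dt = β·t − α·n with forward Euler and compares it to the formula, using the α = 0.1/yr and β = 0.14 ppm/yr² from the comment; the 100-year horizon and step size are arbitrary choices:

```python
import math

ALPHA = 0.1   # 1/years, e-folding rate from the comment
BETA = 0.14   # ppm/yr^2, slope of the linearly growing input (Fig. 4)
N0 = 0.0      # initial excess concentration

def analytic(t):
    """Closed-form solution of dn/dt + ALPHA*n = BETA*t with n(0) = N0."""
    return (BETA / ALPHA**2) * (ALPHA * t - (1 - math.exp(-ALPHA * t))) \
        + N0 * math.exp(-ALPHA * t)

# Forward-Euler integration of the same ODE as a cross-check.
dt, n, t = 0.001, N0, 0.0
while t < 100.0:
    n += dt * (BETA * t - ALPHA * n)
    t += dt
```

The two agree to better than 0.1 ppm at t = 100 years, and the late-time slope approaches β/α = 1.4 ppm/yr, the asymptotic rate quoted above.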

• Gloria Swansong says:

Most of the CO2 in the atmosphere is natural, to be precise, more than 285/400 of it, since some of the increase is natural as well. Perhaps you mean that most of the increase since AD 1850 or 1750 is man-made, which IMO is probably the case. Maybe 100 ppm out of the 115 ppm gain.

• I have calculated that the relative anthropogenic long-term contribution presently ranges between 9.6 and 12.3%, 95% of the time. Click on my name for details.

• Yes, I was not clear about that. Thanks for the reply. Without human addition, as you note, you might expect a constant level of CO2 at ~285 ppm due to replacement (thank goodness) by nature of gas that has been absorbed with a time constant of about 10 years. Noting that the concentration is increasing, I tried to calculate the rate of increase by putting in the human rate of input, to compare with Fig. 5. My differential equation (which makes no assumptions about absorption details) must be correct iff the rate of input is linear with time and the lifetime is constant at 10 years. Fig. 4 shows that the input rate is linear only since 2002, and that is the figure I used. More likely the input rate fits a polynomial since, say, 1950, a soluble case if we were to fit Fig. 4 with a quadratic and cubic and …. function, and if I can find such a fit, I’ll upgrade the calculation.
But you are right, I should have included a natural rate on the right-hand side of my differential equation as a constant. This would just add the 285 ppm figure you indicate to the concentration. But this doesn’t affect the rate of increase, which is the point of the calculation. Nonetheless, the result indicates that the human input yields an increase rate similar if not equal to the observed rate of increase.

86. Roderic Fabian says:

Thanks to Willis for the explanation of the difference between decay of a tracer of CO2 and decay of a bolus of CO2. It seems to me, though, that the Bern model only holds up if we assume that all of the increase in CO2 is coming from anthropogenic emission. If there is an imbalance of natural emission and absorption then residence time of a bolus might be a lot shorter. Also, the lack of correlation of net CO2 emission and anthropogenic emission tends to support the idea that natural emission and absorption play a role in the increase of atmospheric CO2. The idea that natural emission and absorption are so finely balanced that the relatively small amount of CO2 due to human activities was the whole cause of the increase in atmospheric CO2 is hard to swallow. Most other things in the climate have more variability.

• Roderic,

Indeed it is very surprising that the natural variability of the carbon cycle is that small: only 5 ppmv/°C over the seasons and 4-5 ppmv/°C over 2-3 year variability.

That means that the imbalance of the ~150 GtC in total going in and out is less than 10% of the fluxes, while many of the underlying natural changes are on the order of ±30% and even ±50%.

In my opinion, it may be caused by the fact that oceans and vegetation react in opposite directions to temperature changes, and the fact that the rapid sinks (the ocean surface) are already saturated at 10% of the change in the atmosphere (the Revelle/buffer factor). The next sink in speed, the deep oceans, has a much larger capacity, but the bottleneck is the relatively small exchange rate with the atmosphere…

• I have done a more detailed analysis using much more of the available CO2 data. I have estimated the relative contributions with statistical confidence limits. Click on my name and critique my work.

87. somehow in all of that we never pondered exactly how a cheap sheet metal and plywood desk would ward off an atomic bomb

1. many initial injuries were caused by the thermal flash, which a desk would protect against marvelously

2. many more were caused by the pulse of high-energy electrons — beta radiation — which followed shortly after the flash

3. hiding under a desk gave at least some protection against projectiles driven by the blast, like glass from the classroom windows.

• It is interesting to note that most individuals who were protected from thermal and blast effects fairly near the detonation, but received large gamma doses and immediately left the area, were unscathed over the many years since. This has been attributed to the likelihood that each DNA element suffered at most one lesion, which is quickly repaired by the mechanisms present in the strand. Irreversible illness, such as a high cancer rate, occurs when radioactive material is ingested, since multiple lesions to a DNA element may not be repairable.

88. Michael 2 says:

“how a cheap sheet metal and plywood desk would ward off an atomic bomb”

It wards off the thermal flash, blocks alpha and beta particles and protects from flying debris which is one of the most dangerous aspects of this whole thing.

Whether you’d want to be a survivor is a whole ‘nother question.

Anyway, thanks for a very good demonstration of equilibrium seeking in the presence of simultaneous decay and addition of new substances.

89. Willis Eschenbach says:

I’ve added a short update to the head post including this graphic:

w.

• Bart says:

• AJB says:

If you want to play linear pot shot with a dynamic system; not a 2nd derivative in sight …

Except it ain’t. The world economy had better slow or another strat cooling event show up to smack pCO2 around the head pretty soon.

Meanwhile, back to UAH.

• Bart says:

HADCRUT4 looks pretty good to me. HADCRUT4SH is even better, suggesting a dominant role of the oceans in the cycle.

These are stochastic variables, with random additional inputs and measurement error. Moreover, they are bulk measurements – if most of the action is occurring in specific places, then a weighted average of those places should provide a better fit than an overall uniform average. Determination of those places of activity and processing of the data is a job that would require more time than I have.

But, the high SNR which provides such an excellent fit of dCO2/dt with bulk averaged temperatures is a strong indicator that the driving relationship is between a temperature modulated process and atmospheric CO2. Human emissions are not temperature modulated. They are not the driver.

• AJB says:

Bart, I agree. My theory is that step-downs in strat temp cause the natural sinks to speed up. The interesting bit is that when a big volcano goes off, strat temp initially lurches up before falling to a new level lower than it was before. But it takes a long time for the system as a whole to settle down to a new equilibrium. You can see that the rate of CO2 increase takes a dive immediately after such an event. Too fast for that to be due to trop cooling alone IMHO. It also seems that ozone recovery in the strat takes longer than the three years or so for SO2 to wash out in the trop and temperatures at the surface to recover. For the remainder of that period we seem to get increased ocean heating and the CO2 rate increases. That in turn leads to a big El Niño, eight or so years after the volcanic event. On that basis El Chichón and Pinatubo overlap, which is maybe why we had a monster El Niño in '98.

Contrast that with the bomb event. That seems to have had a lesser effect in the trop but a longer lasting effect in the strat, perhaps because it destroyed ozone over a larger altitude band. It’s interesting to compare strat temps for the northern and southern hemisphere after that thing went off. Anyone have better data?

• AJB,

Add human emissions and net sink rate and it is clear that humans are delivering most of the increase in the atmosphere and that the variability is in the sink rate, not the source rate…

The main cause of the variability in sink rate is in the SH tropical forests: short episodes of heat and drought caused by El Niño make the Amazon a temporary source of CO2. A little more down to earth than what happens in the stratosphere, though it may be a result of external forcings (sun, volcanoes) on the ocean fluctuations; nobody knows what triggers an El Niño…

As the trend is not caused by vegetation (the whole biosphere is a net sink for CO2), and the CO2 levels increase and the δ13C decreases in the NH first, the source is in the NH and has a low 13C level, thus it is not the oceans either. The NH is where 90% of all low-13C human emissions are emitted…

90. Hoser says:

Willis, takes the stage again. Trots out another half-baked analysis. Adds to confusion, because he doesn’t get it, and doesn’t resolve anything. Please tell us more about the South Pacific instead. Several of my old posts here do address specifically these points.

The bottom line is, the off-rate of CO2 conclusively demonstrated by the bomb spike data is a benchmark you can use to analyze the rest. It tells half of the story. It doesn’t explain how different reservoirs work, or what sources of CO2 there are. That does not matter. It doesn’t address the total amount of CO2 in the air. That’s not the point of that data. We know how much LEAVES the atmosphere and how fast it goes. And that rate is steady for at least half of a century.

When you combine the IPCC estimate of anthropogenic CO2 releases since 1750, and start with the supposed non-anthropogenic CO2 level of 280 ppmv in 1750, we can try to blame ourselves for the increase in CO2. But if you do blame us, and you use the 5-year half-life of CO2 in the atmosphere, you find you can’t fit both the 1750 level and the rate of CO2 increase measured at Mauna Loa since 1960. Thus, Nature must be more responsible for the level of CO2 than we are.

So don’t come back with a goofy line like: “Oh, you are conflating… blah blah”. I’m telling you the CO2 level is a result of the on rate and off rate. We do know the off rate for sure, and it’s essentially constant. That puts limits on what the on rate can be, but since the CO2 levels are increasing, the on rate is larger than the off rate. My point is, you can’t explain the increase by just anthropogenic CO2 releases, but they try. They try.

Think about a balloon with a hole in it. The size of the balloon depends on how fast the air comes out and how fast you blow into the balloon. If you blow very hard, the balloon will inflate; less hard, the balloon will get smaller. Without knowing the amount of air going into the balloon, the rate of the balloon getting smaller alone doesn’t tell you how much air comes out the hole. In our balloon, the Earth, we know how big the hole is and how fast the air is coming out. We know how much we are blowing into the balloon. Between the two, we can figure out how much Nature is blowing in.

A complication is, we are blowing in more each year at an exponentially increasing rate, and may approach Nature’s rate. But even so, that doesn’t mean at that point having a higher CO2 concentration is a bad thing. That’s a separate question. First, we don’t know that the global warming last century was due to CO2 as claimed. Next, we don’t know whether the Earth will cool with 500 or 600 ppmv CO2, but I suspect it will. We are on the slow downhill ride to the next glaciation. If a higher atmospheric CO2 concentration could stop that process, what’s wrong with that? Well, I won’t hold my breath.
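The balloon bookkeeping can be sketched in numbers under Hoser's own assumption of a 5-year half-life; the human-input figure of ~4.3 ppmv/yr is an illustrative assumption of mine, not from the comment.

```python
# Hoser's "balloon", in numbers: assume his 5-year half-life; the human-input
# figure (~4.3 ppmv/yr) is an illustrative assumption, not from the comment.
import math

half_life = 5.0
tau = half_life / math.log(2)    # e-folding time, ~7.2 yr
C = 400.0                        # current concentration, ppmv
outflow = C / tau                # removal rate if the off-rate is C/tau
human_in = 4.3                   # assumed human input, ppmv/yr
natural_in = outflow - human_in  # inflow nature must supply to hold C steady
print(round(outflow, 1), round(natural_in, 1))   # ~55.5 and ~51.2 ppmv/yr
```

On these assumptions nature would dominate the on-rate, which is the conclusion the comment draws; whether the 5-year half-life applies to a bolus rather than a tracer is exactly what the head post disputes.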

• milodonharlani says:

IMO the natural sinks are also growing, thanks to milder climate & more CO2 in the air, from whatever sources. If the increased sink rate due to more plant food is significant or not, I don’t know & it would in any case be hard to estimate.

91. Willis Eschenbach says:

Bart April 22, 2015 at 10:26 am says:

I’m going to declare victory. Let us declare once and for all, the so-called “mass-balance” argument is dead and buried.

Before you declare victory, perhaps you could tell us all what you think the “so-called mass-balance argument” is. The lack of specificity in this discussion is sometimes worrisome.

w.

• Bart says:

The argument goes thusly:

A = N + H – S

A = atmospheric change
N = natural inputs
H = human inputs
S = sink activity

A is about 1/2 of H, so N – S is less than zero, hence nature is a net sink, hence H is responsible for the rise.

The flaw in the argument is that sink activity S is the result of dynamic feedback. It is dependent on the total atmospheric concentration and thereby a function of both N and H, S = S(N,H). So, although the sinks are natural, their activity is partially dependent on H. The portion dependent on H is, fundamentally, artificially induced sink activity, or artificial sinks.

To truly state that nature, on its own, is a net sink, you would have to prove that N – S(N,0) is less than zero. But, we don’t have that information. All we have is N – S(N,H). It still has a human induced component in it.

If you removed H, then S would settle down to a lower value, and N – S(N,0) might well be greater than zero. There is no telling on the basis of this information alone. Other lines of evidence indicate that, indeed, positive N – S(N,0) would be the case.
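Bart's point that N − S(N,H) < 0 does not pin down the sign of N − S(N,0) can be illustrated with a toy model in which the sink responds to the total excess concentration. Every parameter below is invented for illustration, not fitted to data.

```python
# Toy model of Bart's argument; every parameter is invented for illustration.
# The sink responds to the total excess concentration, so S = S(N, H):
# dC/dt = N(t) + h - k*(C - C0), with a slowly growing natural excess N(t).
def run(h, k=0.2, C0=280.0, g=0.2, t_end=100.0, dt=0.01):
    """Euler-integrate the toy model and return N - S at t_end."""
    C, t = C0, 0.0
    while t < t_end:
        N = g * t              # excess natural input, growing over time
        S = k * (C - C0)       # sink activity, driven by total excess CO2
        C += dt * (N + h - S)
        t += dt
    return g * t - k * (C - C0)

with_h = run(h=2.0)     # human input present: N - S(N,H)
without_h = run(h=0.0)  # counterfactual, H = 0:  N - S(N,0)

print(round(with_h, 1), round(without_h, 1))   # -1.0 1.0
```

With h present, nature appears to be a net sink (N − S < 0), even though with h removed the same natural inputs would be a net source (N − S > 0) and CO2 would still rise. This only demonstrates the logical possibility Bart asserts, not that the real system behaves this way.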

• Willis Eschenbach says:

Thanks, Bart. That clarification is much appreciated. I think that we can say at present nature is a net sink.

However, that says nothing about nature in the past or the future.

Regards,

w.

• Bart says:

And, it has no impact on the question of attribution. Because even though the processes are natural, a portion of them are induced by anthropogenic activity. Take away the anthropogenic forcing, and that portion disappears with it. It’s a classic feedback dynamic.

• olliebourque@me.com says:

A-H = N – S

Since we know the values of both A and H (A = H/2), we know

H/2 – H = N – S

or -H/2 = N – S

Since H is positive, we know -H/2 is negative.
..
Therefore N – S is negative.

Don’t care about the details of N or the details of S, we know N – S is negative or that S > N

Since S > N with rising T it shoots Salby down real fast.
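The rearrangement above can be restated in code with a round emissions figure; H = 9 GtC/yr is assumed here (a figure used elsewhere in this thread), and only the premise A = H/2 matters for the sign.

```python
# The rearrangement above in code; H = 9 GtC/yr is a round figure used
# elsewhere in this thread, and A = H/2 is the commenter's premise.
H = 9.0            # human emissions, GtC/yr
A = H / 2          # atmospheric increase, per the premise
N_minus_S = A - H  # from A = N + H - S  =>  N - S = A - H = -H/2
print(N_minus_S)   # -4.5: negative, so natural sinks exceed natural sources
```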

• Bart says:

Really stupid, olliebourque.

• olliebourque@me.com says:

Mr Bart
.
Could you please be more specific as to what is your objection to my post?

• Bart says:

• olliebourque@me.com says:

“Don’t care about the details of N or the details of S, we know N – S is negative ”

Repeat: N – S is negative

• Michael 2 says:

olliebourque@me.com groks Bart.

Bart says: “…so N – S is less than zero”

Whereupon ollie says “Repeat: N – S is negative”

I think we have a consensus! Obviously this part is very important to some people.

• olliebourque@me.com says:

PS Bart
.
The quantity ( N – S ) is negative based on empirical measured quantities.

• olliebourque@me.com says:

Yes Mr. Michael 2 N – S is negative.
That says that natural sources are less than natural sinks.
..
Even when global T is increasing.
..
Poor old Salby

• Michael 2 says:

olliebourque@me.com “Yes Mr. Michael 2 N – S is negative.”

Still on that one?

“That says that natural sources are less than natural sinks.”

Well now at least I understand the claim. Whether it is true I am not so sure but this has been an interesting debate.

• Bart says:

No kidding. So what? It has no bearing on the attribution question. Can you read? How stupid are you?

• olliebourque@me.com says:

“It has no bearing on the attribution question”

How thick are you? It has everything to do with “attribution”
..
It says that the current rise in atmospheric CO2 is NOT coming from natural sources.

• Bart says:

No, Ollie. That’s not what it says at all. I explained in full up-thread. Can you read?

• olliebourque@me.com says:

Bart, I have high school algebra students that are brighter than you.
..
1) “The quantity ( N – S ) is negative” Your response: “No kidding”
.
2) ” ( N – S ) is negative ” means N – S < 0
.
3) N – S < 0 means N < S
.
4) N = natural inputs & S = sink activity

So N < S means sinks exceed natural inputs.

Thanks for playing Bart.

• Bart says:

When you graduate to calculus, maybe you will understand why your comments are so stupid. Read the above carefully. Maybe if you do so over and over, some of it will leach through by osmosis.

• olliebourque@me.com says:

You don’t balance a checkbook with calculus

• Bart says:

And, the Earth’s CO2 regulatory system is not a bank account. You need calculus there.

But, go on insisting 2 + 2 = 3. Dig yourself as deep as you please.

• Bart says:

It has no bearing on attribution, as I keep trying to get through your thick skull.

Why won’t you read the above and argue something even remotely germane? Yes, N – S(N,H). That is trivial. But, it has no bearing on the attribution problem.

Either address the argument, or I will assume you are a bot, and respond no further.

• Bart says:

Yes, N – S(N,H) is less than zero. That is trivial. But, it has no bearing on the attribution problem.

• olliebourque@me.com says:

“Yes, N – S(N,H) is less than zero”
..
Great, so according to that statement, natural sources are less than an/all the sink(s).
..
If that is the case, then the increase in atmospheric CO2 cannot be from natural sources
..
Thank you Bart.

• Bart says:

“If that is the case, then the increase in atmospheric CO2 cannot be from natural sources “

Non sequitur. (That’s Latin for, “it does not follow”).

• olliebourque@me.com says:

Since the sinks are larger than the natural sources Bart, why don’t you tell us where the CO2 that is accumulating in the atmosphere is coming from…. Keep in mind you have admitted that N – S is negative.

• Bart says:

Most likely the oceans. Human inputs are rapidly sequestered.

I’m sorry the argument is too subtle for you. It’s always a hazard when lay people get involved in scientific debates that are over their heads.

I assure you, young Ollie, that everything I am telling you is scientifically correct, and that the so-called “mass balance” argument is jejune. This is a dynamic system. The sinks respond to all CO2 in the atmosphere, not just naturally produced CO2.

This is why I take pains to point out that S is a function of both N and H. Yes, N – S(N,H) is less than zero. But, this does not compel that N – S(N,0) must be less than zero.

To establish human attribution, you would have to prove the latter, but you only have the former. It’s not enough. It is very easy to have N – S(N,H) be less than zero while N – S(N,0) is greater than zero.

Having N – S(N,H) less than zero has no bearing on the question of attribution. Only proof that N – S(N,0) is less than zero would do that.

Now, do you get it?

• olliebourque@me.com says:

Can’t be the oceans. The sinks are gobbling up everything the oceans are putting out.

N < S

• Bart says:

“Show me how you can reach a fourfold increase in increase rate in the atmosphere and a fourfold increase in net sink rate with a fourfold increase in human emissions and e.g. a threefold or fivefold increase in natural inputs…”

A loaded question. You don’t know the net sink rate. This is where you err. You make the assumption of your sink rate, and the rest follows. But, your assumption of sink rate is arbitrary, so your logic is circular.

“…the biosphere as a whole is a proven net sink for CO2 of ~1 GtC/year (~0.5 ppmv/year), based on the oxygen balance.”

Like a circle in a spiral, like a wheel within a wheel. It isn’t proven, Ferdinand. Just because an observation is consistent with an interpretation does not mean the interpretation is correct.

“If you add low-13C CO2 from vegetation or fossil fuels to the atmosphere, that will lower the 13C/12C ratio in the atmosphere.”

No, it might lower it. A lowering of the 13C/12C ratio is consistent with low 13C/12C influx from a particular source, but it is not uniquely caused by it. There are other sources, and dynamic sinks, and the egress matters just as much as the influx.

“…and burning efficiencies and the net sink rate is the difference between these two.”

No. The net sink rate is the difference between total egress and total influx. Influx and egress of natural sources and sinks are part of it.

Totally, completely, utterly wrong. I have explained why until my face is blue, and my fingers are aching.

“And I have not the slightest problems with understanding dynamic systems which must obey all the same stuff like Henry’s law as good as static systems must do…”

You do. You do not get dynamic systems. You do not understand how to treat the continuous natural influx and egress of CO2 throughout the system. You think that temperature sensitivity must be in ppmv/K, when the empirical evidence clearly shows that it is in ppmv/K/unit-of-time. You do not seem to understand that we are never in equilibrium, and you mistakenly apply equilibrium laws to this system which is not in equilibrium.

I am tired, and I must prepare for more travel this week. Until we meet again….

• Bart April 25, 2015 at 1:41 pm

You don’t know the net sink rate. This is where you err.

What? The net sink rate is the difference between human emissions and what is measured in the atmosphere. That is a simple calculation, like 2 = 4 – 2. Maybe too simple for you…

Like a circle in a spiral, like a wheel within a wheel. It isn’t proven, Ferdinand. Just because an observation is consistent with an interpretation does not mean the interpretation is correct.

Bart, you are simply out of your depth: the oxygen balance is as solid proof that the biosphere is a net sink for CO2 as the CO2 measurements are proof that CO2 in the atmosphere is increasing. No way to have a different interpretation.

No, it might lower it. A lowering of the 13C/12C ratio is consistent with low 13C/12C influx from a particular source, but it is not uniquely caused by it.

Bart, you are again out of your depth: there are two unique sources of low-13C CO2: fossil organics and recent organics. All other important sources are inorganic which have a much higher 13C/12C ratio. That includes the oceans, volcanic emissions, rock weathering,…
Recent organics (including plants, bacteria, molds, insects, animals) are a net sink for CO2. Not a source. Thus not the cause of the measured decline of the 13C/12C ratio in the atmosphere. Neither are the oceans.
Thus the decline in 13C in the atmosphere is uniquely caused by the use of fossil fuels and nothing else.

No. The net sink rate is the difference between total egress and total influx. Influx and egress of natural sources and sinks are part of it.

At any moment in time, the mass balance must be obeyed: you can’t destroy or create carbon atoms. The evolution of what is in the atmosphere is the momentary amount plus the sum of the integrals of all individual influxes and outfluxes over the time span of interest.
Over a year, the total human input is ~9 GtC. The increase in the atmosphere is ~4.5 GtC/year. The net difference is ~4.5 GtC/year more natural sinks than sources (including 0.08 GtC/year extra sink rate caused by the human emissions of that year).
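Ferdinand's yearly budget can be restated in a few lines; the conversion factor of ~2.13 GtC per ppmv is a commonly used value assumed here, not given in the comment.

```python
# Ferdinand's yearly budget restated; the conversion 1 ppmv ~ 2.13 GtC is a
# commonly used factor assumed here, not given in the comment.
GTC_PER_PPMV = 2.13
human_in = 9.0       # GtC/yr
atm_increase = 4.5   # GtC/yr, measured increase in the atmosphere
net_natural_sink = human_in - atm_increase   # GtC/yr more sinks than sources
print(net_natural_sink)                      # 4.5
print(round(atm_increase / GTC_PER_PPMV, 1)) # ~2.1 ppmv/yr
```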

You do. You do not get dynamic systems. You do not understand how to treat the continuous natural influx and egress of CO2 throughout the system.

Wow, 34 years of practical experience with dynamic systems ranging from seconds to days of response time, and I don’t understand dynamic systems? Maybe not, but you clearly don’t understand natural systems. Probably too slow for you?

You think that temperature sensitivity must be in ppmv/K, when the empirical evidence clearly shows that it is in ppmv/K/unit-of-time.

Again, you are clearly far out of your knowledge here: the solubility of any gas in any liquid obeys Henry’s law which was established in 1803 and confirmed with millions of laboratory and field observations for CO2 in seawater. It is limited to ppmv/K and when the ppmv’s match the K’s, the total influxes and total outfluxes match each other and nothing happens with the CO2 levels anymore.
It is your misattribution of the slope of the CO2 rate of change to temperature change which is your problem.
That is 212 years of established physical science compared to 55 years of misattribution…

You do not seem to understand that we are never in equilibrium, and you mistakenly apply equilibrium laws to this system which is not in equilibrium.

Of course, nature is never in equilibrium, but the dynamic behavior will follow the same physical laws as for static systems.
No way that a small increase in ocean temperature will give a continuous net influx of CO2 without negative feedback from the increased CO2 pressure in the atmosphere on the net influx.
That is i*m*p*o*s*s*i*b*l*e.

• Bart says:

Wrong. N – S(N,0) is not less than zero.

• olliebourque@me.com says:

Empirical measurement disagrees with that statement.

• olliebourque@me.com says:

You previously said “Yes, N – S(N,H) is less than zero”

• Bart says:

Right. N – S(N,H) is less than zero. N – S(N,0) is not.

• olliebourque@me.com says:

Doesn’t matter if it’s S(N,H) or S(N,0) the sinks both are greater than natural sources.

N < S for all sinks.

• Bart says:

Wrong. N – S(N,0) is not less than zero.

• olliebourque@me.com says:

Empirical measurements say it is less than zero

• olliebourque@me.com says:

The sum of all sinks are greater than N
..
N < S
..
See my post at April 24, 2015 at 8:40 am

• Bart says:

Stuck on stupid. What can I say? The child cannot learn.

• olliebourque@me.com

Thanks for the help, to no avail for Bart, as he is completely blinded by his theory, which violates just about all known observations.

Nevertheless, there is one and only one situation where the natural cycle can overwhelm human emissions: if the sinks are responding extremely rapidly to disturbances and the natural cycle increased in lockstep with human emissions (a factor of 4 in the past 55 years).

For which there is not the slightest indication: the biosphere is a proven sink for CO2 and the oceans would increase the δ13C level of the atmosphere, while we see a firm decline in δ13C in the atmosphere…
Neither does the residence time show a 4-times decrease…

• Bart

I have calculated the extra sink caused by the human contribution here, which shows that the human contribution has a negligible influence on the net sink rate. That shows that nearly all sinks are natural and respond only to the total increase in the atmosphere, whatever its source, and that the sinks will go on for a long time after the last human emissions, with an e-folding decay time of over 50 years.
Thus N – S(N,0) is less than zero for years after all human emissions have ceased, because S doesn’t depend on N; S depends on the total increase in the atmosphere above the equilibrium, which was a function of N and H.

• Bart says:

“Nevertheless, there is one and only one situation where the natural cycle can overwhelm human emissions: if the sinks are responding extremely rapidly to disturbances and the natural cycle increased in lockstep with human emissions (a factor of 4 in the past 55 years).”

This is completely and utterly false. There is an entire continuum of solutions for natural inputs and sink activity which would be consistent with the observations.

“…the biosphere is a proven sink for CO2…”

It isn’t. Again, you slip implicitly into the discredited “mass-balance” argument, the acceptance of which calls into question your entire ability to judge what is happening.

“…and the oceans would increase the δ13C level of the atmosphere…”

You think. But, there is no proof.

“I have calculated the extra sink caused by the human contribution here…”

You have calculated it based on an assumption. It is a constrained solution, and the constraint is arbitrary.

“Thus N – S(N,0) is less than zero…”

Nope. The only observation is N – S(N,H) is less than zero. You arbitrarily constrain N – S(N,0) is less than zero by your assumptions. It is circular logic.

I’m sorry you do not see this. Regrettably, as I have pointed out before, you just don’t have the maths. You keep trying to stuff everything into a static analysis framework, and it fails, because this is a dynamic system.

• Bart:

This is completely and utterly false. There is an entire continuum of solutions for natural inputs and sink activity which would be consistent with the observations.

Well show me the math. Show me how you can reach a fourfold increase in increase rate in the atmosphere and a fourfold increase in net sink rate with a fourfold increase in human emissions and e.g. a threefold or fivefold increase in natural inputs…

It isn’t. Again, you slip implicitly into the discredited “mass-balance” argument

What? As repeatedly said, obviously to no avail: the biosphere as a whole is a proven net sink for CO2 of ~1 GtC/year (~0.5 ppmv/year), based on the oxygen balance. Not the mass balance:
http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf

You think. But, there is no proof.

Bart, that remark only shows that you have not the slightest idea what you are talking about:
If you add low-13C CO2 from vegetation or fossil fuels to the atmosphere, that will lower the 13C/12C ratio in the atmosphere. If you add 13C-rich CO2 from the oceans to the atmosphere, that will increase the 13C/12C ratio in the atmosphere. What is measured is a firm decrease in lockstep with human emissions.

You have calculated it based on an assumption. It is a constrained solution, and the constraint is arbitrary.

That the increase rate in the atmosphere was fourfold is directly measured, the increase in human emissions is calculated from sales inventories and burning efficiencies and the net sink rate is the difference between these two. So, where is the assumption?

You arbitrarily constrain N – S(N,0) is less than zero by your assumptions.

And I have not the slightest problems with understanding dynamic systems which must obey all the same stuff like Henry’s law as good as static systems must do…

• Bart says:

92. Willis Eschenbach says:

Hoser April 22, 2015 at 10:13 pm

Willis, takes the stage again. Trots out another half-baked analysis. Adds to confusion, because he doesn’t get it, and doesn’t resolve anything. Please tell us more about the South Pacific instead. Several of my old posts here do address specifically these points.

None for me, thanks. The reason a man like you starts throwing mud is because you’re out of ammunition.

Best regards,

w.

PS—As to whether I “resolve anything”, you’re right, I often don’t resolve a single thing. This is because often my goal is not resolution, it is furthering the ongoing discussion.

• Willis,

it is furthering the ongoing discussion

Which gets repeated every few weeks/days nowadays…

• Bart says:

It is very tiring. You just insist on dealing with this system in a static framework. You construct a neat little narrative based on your static assumptions. But, it has little to do with how actual reality unfolds.

• Bart,

I don’t know what your daily work includes; I have some feeling that it mainly has to do with high-frequency responses (radar, communication,…). Certainly not with chemical processes.

Besides building my own radio when I was fifteen, my knowledge of that field is limited. But I know that there is no phase distortion whatever if you add two independent streams of CO2 to a large reservoir, where one stream is highly variable but has hardly a trend and the other is hardly variable but shows a continuously increasing trend, if the sink rate is slow enough…

And I think that my knowledge of dynamic processes (including a few runaway reactions) is at least more practical than yours…

• Bart says:

“…where one stream is highly variable, but has hardly a trend …”

T very clearly has a large trend, and it necessarily produces the trend in dCO2/dt. There is no way around it. There is no doubt about it. You are very confused on this issue. But, to get the proper phase response, you must have T feeding into the derivative of dCO2/dt, and the sensitivity is necessarily in ppmv/K/unit-of-time.

No way around it, Ferdinand. None at all.

• Bart,

Temperature has a “large” trend of 0.6°C over the past 55 years. According to the solubility of CO2 in seawater, that is good for an increase of 5 ppmv in the atmosphere.
Human emissions show a trend of 150 ppmv over the same time span.
The increase in the atmosphere was 80 ppmv in the same time span.
It seems to me that some larges are larger than other larges…

to get the proper phase response, you must have T feeding into the derivative of dCO2/dt, and the sensitivity is necessarily in ppmv/K/unit-of-time.

Bart, I repeat: any increase in temperature gives a CO2 increase in the atmosphere which asymptotes towards the new equilibrium which is ~8 ppmv/K. That is what the physical law says, already established in 1803 and confirmed by millions of measurements.
Thus the integration is between T and CO2 and derived from that, between dT/dt and dCO2/dt. NOT between T and dCO2/dt.
To have an integral relationship, you must have a 90 deg. phase lag. There is a 90 deg. lag between T and CO2 and between dT/dt and dCO2/dt. There is zero lag between T and dCO2/dt, thus in your own words: no integral relationship…
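The phase bookkeeping the two sides keep arguing about can be made concrete with a pure sinusoidal temperature signal (illustrative synthetic data, not measurements): in Bart's model dCO2/dt = k·(T − T0), the rate is in phase with T; in an equilibrium model CO2 = c·T, the rate is c·dT/dt, 90 degrees ahead of T.

```python
# Phase bookkeeping for a pure sinusoidal temperature signal (illustrative
# synthetic data, not measurements). Bart's model: dCO2/dt = k*(T - T0), so
# the rate is in phase with T. Equilibrium model: CO2 = c*T, so the rate is
# c*dT/dt, 90 degrees ahead of T.
import math

N = 10000
ts = [2 * math.pi * i / N for i in range(N)]
T = [math.sin(t) for t in ts]

dco2_bart = [1.0 * x for x in T]       # k = 1: rate proportional to T
dco2_eq = [math.cos(t) for t in ts]    # c = 1: rate is the derivative of T

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

print(round(corr(T, dco2_bart), 2))   # 1.0: zero lag between T and dCO2/dt
print(round(corr(T, dco2_eq), 2))     # ~0.0: a 90-degree offset
```

This only shows what each model predicts for the T-versus-dCO2/dt phase; which model the real data favor is the substance of the dispute above.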

• Bart says:

“According to the solubility of CO2 in seawater, that is good for an increase of 5 ppmv in the atmosphere.”

Ridiculous, Ferdinand. Laughable. Crazy. The data show clearly that the sensitivity is in ppmv/K/unit-of-time.

This is a dynamic system. Every instant of time, new CO2 is upwelling, and old CO2 is downwelling. Your static analysis does not apply.

“I repeat: any increase in temperature gives a CO2 increase in the atmosphere which asymptotes towards the new equilibrium which is ~8 ppmv/K.”

I repeat: you are fitting the data to your theory, rather than your theory to the data. It is ridiculous. Laughable. Crazy.

“Thus the integration is between T and CO2 and derived from that, between dT/dt and dCO2/dt. NOT between T and dCO2/dt.”

It doesn't give the right phase. It's 90 degrees off. You are wrong. Ridiculously, laughably, crazily, tragically, wrong.

“There is zero lag between T and dCO2/dt, thus in your own words: no integral relationship…”

Yes, zero lag and dCO2/dt = k*(T – T0). That means CO2 is the integral of k*(T – T0). It is saying the same thing!!!
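The phase claim here is easy to check numerically. A minimal sketch (illustrative values only; k, T0 and the sinusoid below are placeholders, not fitted to any data): if dCO2/dt = k*(T – T0), then dCO2/dt is in phase with T while the integrated CO2 level lags T by a quarter cycle.

```python
import numpy as np

# Illustrative values only (not fitted): k in ppmv/K/month, T a 5-year sinusoid.
k, T0 = 0.18, 0.0
t = np.linspace(0.0, 120.0, 12001)   # months: two full 60-month periods
dt = t[1] - t[0]
w = 2.0 * np.pi / 60.0
T = T0 + np.sin(w * t)               # temperature anomaly (K)

dCO2dt = k * (T - T0)                # in phase with T by construction
CO2 = np.cumsum(dCO2dt) * dt         # numerical integral of k*(T - T0)

# phase of the CO2 signal at the forcing frequency, relative to sin(w*t)
c = CO2 - CO2.mean()
phase = np.degrees(np.arctan2(np.sum(c * np.cos(w * t)),
                              np.sum(c * np.sin(w * t))))
print(round(phase))                  # about -90: CO2 lags T by a quarter cycle
```

Both sides agree on the observed zero lag between T and dCO2/dt; the dispute is over what that implies for the trend.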

There is no way around it. Your attempt to get around it is mathematical gibberish!!! Nonsense. Craziness.

I’m glad I’m leaving. I feel like I have been on an extended visit to the asylum. I can’t maintain a sense of decorum any longer, or they’re going to have to commit me!

• olliebourque@me.com says:

Bart you say: dCO2/dt = k*(T – T0)

For the past 15-18 years per RSS (T-T0) = 0.
For the past 15-18 years dCO2/dt has been 2.1 ppmv/yr

What is your value of k?

• Bart says:

Man, you are dumb. Sure, mean T has not changed. Therefore, the gap between T and T0 has remained steady with essentially zero mean variation, at about T – T0 = 0.78K, which begets a steady rate in atmospheric CO2.

Ollie, or David Socrates, or whatever your nom du jour is, just butt out of the argument, will you? You are clueless.

• Bart says:

I was wrong. I read the mean temp off the scaled plot. The mean temp has been more like 0.2K, so the mean deviation has been about T – T0 = 0.84K, which begets a mean rate of change of CO2 of about 0.18 ppmv/month = 2.2 ppmv/yr.

But, still very dumb to claim T – T0 zero when I gave you the plot.

• olliebourque@me.com says:

Bart,

You need to get up to speed on what has been happening to global temps in the past 18 years.

Look real carefully at that plot, and even YOU might see a drop in temperatures.

Please explain why your “theory” does not explain the rising CO2 levels in the past 18 years when temperatures have not been rising.

• Bart simply changes his T0 whenever it needs to fit the data of the period in question. In this case, with flat temperatures, the CO2 levels still go up for all eternity, without any effect of the increased CO2 levels in the atmosphere on the ocean influxes and outfluxes.

That is easy to do: adjusting the offset and factor always can fit the slopes of two straight lines, without any physical base about cause and effect…

Imagine the 0.002°C T – T0 difference to obtain the change of 10°C between a glacial and interglacial warming over 5,000 years…

• Bart says:

Ferdinand Engelbeen @ April 27, 2015 at 12:12 am

“Bart simply changes his T0 whenever it needs to fit the data of the period in question.”

That is how linearization of nonlinear systems works. The linearized solution is valid for some interval of time, but not for all time. How long it is valid is system dependent. But, a constant value for T0 is remarkably consistent with the data for the past 57 years.

So, again, due to your lack of experience with the mathematics, you call out something that is unremarkable and commonplace in the analysis of nonlinear systems as somehow being a cause for skepticism. Mathematics is the language of, the very basis for, modern science. Why you think you can make firm conclusions about this system while ignoring the mathematical fundamentals is something that escapes my understanding entirely.

“That is easy to do: adjusting the offset and factor always can fit the slopes of two straight lines, without any physical base about cause and effect…”

The dCO2/dt = k*(T – T0) relationship fits a lot more than just the slopes of two lines. It fits every major bump and burble in the data for over five decades running. You are the one who is fitting a straight line (computed trend in human emissions) to another straight line (computed trend in atmospheric concentration) and claiming it is conclusive. It isn’t. Yours is the trivial match which has no physical meaning.

olliebourque@me.com @ April 26, 2015 at 10:23 am

“Please explain why your “theory” does not explain the rising CO2 levels in the past 18 years when temperatures have not been rising.”

It does explain it. I gave you the plot. I gave you the values of k and T0. Look at the plot. It’s right there. How carefully do I have to spoon the pablum before you stop splashing it on your bib?

• Bart,

That is how linearization of nonlinear systems works.

Except that there is not the slightest reason to do any linearization as the whole CO2 cycle reacts as a simple, first order linear process.
So, again, due to your lack of experience with chemical equilibrium processes, you make things far more complicated than necessary.

The dCO2/dt = k*(T – T0) relationship fits a lot more than just the slopes of two lines. It fits every major bump and burble in the data for over five decades running.

The bumps and burbles are almost completely caused by temperature bumps and burbles. Nobody refutes that. But the slope stems from a different process than the one that reacts to the temperature bumps and burbles.
Thus the arbitrary match of the slopes has not the slightest power of proof that the temperature slope is the cause of the CO2 slope, the more so as that attribution violates physical laws like Henry’s law and a lot of other observations.

The increase in human emissions is calculated from measured sales and measured burning efficiency for each type of fuel.
The increase in atmospheric CO2 is accurately measured.
The net sink rate is the difference between the foregoing two.
All three increased fourfold in the past 55 years.
That looks like a straightforward linear first order equilibrium process, slightly modulated by temperature changes.
As the human emissions were always larger than the increase in the atmosphere in the past 55 years, it seems a very good candidate for being the cause of the increase. All the more so as it fits all observations…

• Bart says:

“Except that there is not the slightest reason to do any linearization as the whole CO2 cycle reacts as a simple, first order linear process.”

This is an assertion which begs the question. There is not the slightest reason to constrain this system to be linear and time invariant over all time.

“But the slope stems from a different process than the one that reacts to the temperature bumps and burbles.”

There would be phase distortion. There is none observable.

There is no violation of Henry’s law. Henry’s law is for steady state equilibrium in a closed system. This system is always in flux.

“As the human emissions were always larger than the increase in the atmosphere in the past 55 years, it seems a very good candidate for being the cause of the increase.”

Certainly not on that basis. The so-called “mass-balance” argument merely fails to disqualify human attribution, but it does not lend any support to it.

All the more so as it fits all observations…

It does not fit this.

93. Bart,

You are just talking nonsense: it doesn’t make any difference for the equilibrium whether that is reached in a static or a dynamic process. The CO2 level in the atmosphere will simply reach the new equilibrium for a temperature increase, at ~8 ppmv/K, that is all. At that moment the average pCO2 of the oceans and the pCO2 of the atmosphere are equal and the incoming and outgoing CO2 fluxes are equal. No matter if that is the same surface of a cylinder in a laboratory or the world oceans, where the source and sink places are thousands of kilometers apart.

Yes, zero lag and dCO2/dt = k*(T – T0). That means CO2 is the integral of k*(T – T0). It is saying the same thing!!!

Fatal error in your reasoning: CO2 is not the integral of k*(T – T0), it is the integral towards the new equilibrium, thus of the difference between current level and new level which has a finite endpoint.
That difference evolves towards zero over time:

dCO2/dt = k2*[k*(T – T0) – ΔpCO2(atm)]

where k = ~8 ppmv/K and ΔpCO2(atm) is the difference between the current CO2 level in the atmosphere and the CO2 level at the old equilibrium.

The moment that k*(T – T0) and ΔpCO2(atm) are equal, dCO2/dt is zero:
ΔpCO2(atm) = k*(T – T0)
which is what Henry’s law says…

For the current atmosphere, k*(T – T0) is about 6 ppmv above the 1850 temperature equilibrium while the current CO2 level is 110 ppmv above the same equilibrium or dCO2/dt = k2*[6 – 110] = k2*[-104]

Or, in other words, at the current and any far future CO2 level the natural cycle is more sink than source (currently ~2.15 ppmv/year), except that human emissions still provide more CO2 per year (4.5 ppmv) than the sink rate for the current CO2 pressure in the atmosphere.
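Ferdinand's relaxation equation can be sketched in a few lines, assuming a hypothetical relaxation rate k2 (only k = 8 ppmv/K is a figure from the thread). A sustained step in T makes the CO2 excess asymptote to k*(T – T0) rather than grow without bound:

```python
# dC/dt = k2 * (k*(T - T0) - C), with C = the CO2 excess ΔpCO2(atm) in ppmv.
k = 8.0        # ppmv/K: the equilibrium sensitivity quoted in the thread
k2 = 0.02      # 1/yr relaxation rate (hypothetical; a 50-year e-folding time)
T_step = 1.0   # K: a sustained temperature step at t = 0

dt, years = 0.1, 500
C = 0.0
for _ in range(int(years / dt)):
    C += dt * k2 * (k * T_step - C)   # explicit Euler step of the relaxation
print(round(C, 2))                    # settles at k * T_step = 8.0 ppmv
```

The e-folding time (1/k2) is the free parameter in this sketch; the asymptote itself depends only on k.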

Sorry that you feel so badly, maybe you should consult a chemical engineer who has some experience in dynamic equilibriums…

• Bart says:

“dCO2/dt = k2*[k*(T – T0) – ΔpCO2(atm)]”

Wrong. There is no such dynamic observable in the modern era. Any such extra terms would therefore necessarily have k2 tiny and k2*k the only significant scaling factor.

That means that dCO2/dt = k*(T – T0) is the only equation we need concern ourselves with to diagnose attribution, and the clear implication is that human inputs do not significantly influence atmospheric CO2 levels.

Maybe you should consider that not every system evolves like a chemical vat sitting on a factory floor. Maybe you should fit your theory to the data, rather than futilely trying to fit the data to your theory.

• Bart,

Your problem is that you attribute the whole slope to the increase in temperature, which would have merit if there were no other sources of extra CO2 which increase over time.
The variability of the CO2 rate of change which follows the temperature variability can be seen as a transient response (although not from the same processes) to temperature changes with an amplitude of 4-5 ppmv/°C.
An increase in temperature of 0.6°C is good for ~5 ppmv extra (at 8 ppmv/°C) in the atmosphere and then it ends, according to established theory and reality…
Human emissions were ~150 ppmv in the same time span.
The observed increase in the atmosphere is 80 ppmv in the same time span.

It seems to me that your attribution is not really what the data show…

And the laws of solubility of CO2 in seawater are exactly the same for a stirred reactor with a content of a few m3 as for the global oceans be it that the time frames are quite different…

• Bart says:

“And the laws of solubility of CO2 in seawater are exactly the same for a stirred reactor with a content of a few m3 as for the global oceans be it that the time frames are quite different…”

I have no problem, Ferdinand. You do. You are begging the question. Big time. The data tell us very clearly that your conception of how this system behaves is false.

It isn’t even a close call. You are 90 degrees out of phase with reality.

• Bart,

The CO2 increase in the atmosphere after an ocean T increase needs time. It integrates towards the new equilibrium with an e-fold decay rate which depends on the exchange speed between oceans and atmosphere.

That makes that any sinusoidal change in T is followed by a sinusoidal change in CO2 with a lag of pi/2, independent of the frequency (if the system response is slow enough, which is the case here) and an amplitude which reduces with the increase in frequency.

That is exactly what is seen in the past 55 years: a small response of 4-5 ppmv/K for T changes with a pi/2 lag and a 6 ppmv increase as result of the total increase in temperature. That is all. Completely dwarfed by the much larger human emissions…

• Maybe, Bart, you should listen to Ferdinand, who knows much more about this subject than you do, and stop spouting the same fallacious rubbish. As Salby said in his address, the rate of change of pCO2 is governed by a proper balance equation; where he was wrong was in assuming that only the source term was temperature dependent.
The proper equation is:

d[CO2]/dt = Fossil Fuel emissions + Sources(CO2,T) – Sinks(CO2,T)

This balance equation is true at all timescales. As is clear from the data, annual fossil fuel emissions are greater than the difference between Sources and Sinks; this is true at [CO2] = 400 ppmv just as it was in 1960 when [CO2] was 320 ppmv. Small scale modulation of the [CO2] by T does not mean that human emissions are not the major source of additional atmospheric CO2.
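The arithmetic behind the balance equation, using the approximate per-year figures quoted elsewhere in this thread (a sketch, not a formal carbon budget):

```python
# d[CO2]/dt = emissions + natural_sources - natural_sinks (all in ppmv/yr)
emissions = 4.5        # human emissions, from fuel sales (figure used in the thread)
observed_rise = 2.15   # measured rise of atmospheric CO2
net_natural = observed_rise - emissions   # = natural_sources - natural_sinks
print(round(net_natural, 2))   # -2.35: nature currently removes more than it adds
```

This difference is the "net sink rate" Ferdinand refers to; as Bart argues below, the sign of this residual constrains the budget without by itself settling attribution.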

• Bart says:

Ferdinand Engelbeen @ April 26, 2015 at 12:56 pm

“That makes that any sinusoidal change in T is followed by a sinusoidal change in CO2 with a lag of pi/2, independent of the frequency (if the system response is slow enough, which is the case here) and an amplitude which reduces with the increase in frequency.”

Yeah. In other words, it is an integral relationship. And, the trend in T is thereby causing a quadratic rise in CO2. Which means human inputs cannot be a significant contributor, because accumulated emissions are also quadratic, and there is little to no room for them to contribute additional curvature.
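The quadratic-rise claim can be checked with a quick numerical sketch (k and the temperature trend below are placeholders, not fitted values): integrating dCO2/dt = k*(T – T0) with T – T0 rising linearly gives CO2 ≈ (k·trend/2)·t².

```python
import numpy as np

k, trend = 2.2, 0.01              # ppmv/yr/K and K/yr (placeholders)
t = np.linspace(0.0, 55.0, 5501)  # years
CO2 = np.cumsum(k * trend * t) * (t[1] - t[0])  # integral of k*(T - T0)

# the integrated series is fitted essentially exactly by a quadratic
a, b, c = np.polyfit(t, CO2, 2)
print(round(a, 4))                # leading coefficient ~ k*trend/2 = 0.011
```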

“That is exactly what is seen in the past 55 years: a small response of 4-5 ppmv/K for T changes with a pi/2 lag and a 6 ppmv increase as result of the total increase in temperature. “

What is seen in the last 57 years is dCO2/dt = k*(T – T0). It accounts for all but, at most, a small portion of the observed changes. Human inputs are insignificant.

Phil. @ April 28, 2015 at 8:20 am

Oh, Phil. Maybe you should try to grasp the argument. Ferdinand’s narrative is inconsistent with the data.

“…fossil fuel emissions are greater than the difference between Sources and Sinks…”

The really stupid “mass balance” argument again. Anyone who proffers it can immediately be dismissed as having no idea what they are talking about. It has no bearing on attribution.

“Small scale modulation of the [CO2] by T does not mean that human emissions are not the major source of additional atmospheric CO2.”

It’s not a modulation. The polynomial order is wrong. The actual relationship is dCO2/dt = k*(T – T0).

• Bart says:

That is to say, it is not a modulation of human inputs. It is necessarily a modulation of natural inputs with the proper polynomial order.

• Bart,

It is the integral of the temperature difference influence minus the influence of the increase in CO2 in the atmosphere:

dCO2/dt = k2*[k*(T – T0) – ΔpCO2(atm)]

where ΔpCO2(atm) is the integral of dCO2/dt from t0 up to t-1.

Therefore dCO2/dt, without other influences, evolves towards zero, far from giving a slightly quadratic increase of CO2 over time.
The moment that dCO2/dt = 0, the whole ocean – atmosphere cycle is in steady state and
ΔpCO2(atm) = k*(T – T0), which is what Henry’s law says,
where k = ~8 ppmv/K.

The only slightly quadratic increase left is from human emissions. The complete formula then is:
dCO2/dt = k2*[k*(T – T0) – ΔpCO2(atm)] + dCO2(em)/dt

As dCO2(em)/dt was larger than dCO2/dt for every year of the past 55 years, besides the small term for k*(T-T0) of about 8 * 0.6 = ~5 ppmv in 55 years, the whole term ΔpCO2(atm) is the increasing pressure in the atmosphere, getting far above the temperature influence and thus giving more and more net sink growth.

In fact, the above terms dCO2(em)/dt – k2*ΔpCO2(atm) are about (*) what I plotted as the calculated increase of CO2 in the atmosphere, running through the middle of the variability caused by the temperature variability, assuming that temperature has little effect on the sink rate over time…

(*) For my calculation, I included (T-T0) in the base temperature to calculate ΔpCO2(atm)

***********************************

The above transient response is also what Paul_K proved over a year ago at Bishop Hill’s blog:
http://bishophill.squarespace.com/blog/2013/10/21/diary-date-murry-salby.html page 2, 4th comment:

For the transient behaviour, I am just using a simple response function of the form:-
τ * dCO2/dt = ΔT – f(T)* ΔCO2
where ΔT and ΔCO2 are measured from an arbitrary initial equilibrium condition. This equation is based on the assumption that the process of release of solute with temperature change starts off quickly and slows down as the concentrations adjust – a commonly observed phenomenon for the transient behavior of chemical equilibration processes.

Which always gives a pi/2 lag of a sinusoidal CO2 change of any frequency if the ocean response is slow enough, which is obviously the case here…

• Bart says:

“…where ΔpCO2(atm) is the integral of dCO2/dt from t0 up to t-1.”

I.e., where ΔpCO2(atm) = CO2. I have no idea why you put t-1 in there. This is a continuous time differential equation.

You cannot independently specify your atmospheric CO2 and your rate of change of atmospheric CO2. They are coupled together in the differential equation, and there is a unique solution. And, that unique solution has a high pass filtered version of T in it. And, that high pass response would produce readily observable phase and gain distortion if it had any significant effect over this timeline.

This is not what the data tell us, Ferdinand. There is no distortion. The data follow the differential equation dCO2/dt = k*(T – T0) with high fidelity for the past 57 years.

Moreover, if there were a significant feedback of the type you claim, it would also attenuate the response to anthropogenic CO2. The response would track the rate of human emissions, rather than the full accumulation, with a loss of a full polynomial degree. So, your claims here are mutually inconsistent to begin with.

“Which always gives a pi/2 lag of a sinusoidal CO2 change of any frequency if the ocean response is slow enough, which is obviously the case here…”

No, only for frequencies well above the response cutoff. To fail to be observable in the roughly 60 years of data we have, that cutoff would have to be at least a decade lower, in the range of 1/600 years^-1 or less. Such a remote frequency cutoff has no practical effect on our discussion here, where the trend in temperatures has only been going on for a little over 100 years.

There is no room for negotiation. The trend in T is causing the trend in dCO2/dt.

Face it. There is no outlandish coincidence in the fact that, when you match T with dCO2/dt for the variation, you also match the trend. It is all of one piece. The conclusion is unavoidable, all of your tortured excuses and feverish drawing of epicycles notwithstanding. The trend in T is causing the trend in dCO2/dt.
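The cutoff argument can be made concrete by evaluating the first-order transfer function H(s) = k2*k/(s + k2) along s = jω, taking the 1/600 yr⁻¹ figure above as the assumed cutoff (a sketch; both constants are illustrative):

```python
import numpy as np

k, k2 = 8.0, 1.0 / 600.0          # ppmv/K and yr^-1 (both illustrative)
for period_yr in (5.0, 60.0, 600.0, 6000.0):
    w = 2.0 * np.pi / period_yr
    H = k2 * k / (1j * w + k2)    # H(s) evaluated at s = j*w
    # well above the cutoff the phase lag is ~90 deg (integrator-like);
    # near and below it the lag shrinks and the gain flattens toward k
    print(period_yr, round(abs(H), 3), round(np.degrees(np.angle(H)), 1))
```

On decadal periods this H is indistinguishable from the pure integrator k/s, which is the crux of the disagreement: the feedback term only shows up at periods comparable to 1/k2.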

• Bart,

Your unique solution of the equation:

dCO2/dt = k2*[k*(T – T0) – ΔpCO2(atm)]
has a transfer function
H(s) = k2*k/(s + k2)
which can be written
H(s) = (k2*s/(s+k2)) * (k/s)

doesn’t fit what others have written about a transient response. In your formula, k2 is a positive constant (which allows you to show filtering) while – ΔpCO2(atm) is the growing CO2 concentration in the atmosphere as result of the increase in the atmosphere from t0 on, which is a growing negative feedback, not a constant. That ends with dCO2/dt = 0 or H(s) = 0 in your formula.
In the case of a transient response dy/dt depends on y. In this specific case, dCO2/dt depends on CO2, which makes the transfer function not that simple.

As Paul_K did show, a transient response can be approximated with Runge-Kutta methods.

That doesn’t show any filtering for any frequency lower than the ocean response…

The response does track the human emissions and the temperature increase alike. ΔpCO2(atm) is now far beyond what k*(T – T0) gives, resulting in an increasing net sink over time, though one not growing fast enough to remove all human emissions of any year. And the overall response is not fast enough to filter out the fast variations caused by temperature (by a different, much faster process). These just come through, and the small slope of temperature is only good for a 5 ppmv increase over time, which is surpassed by human emissions in less than 2 years…
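Paul_K's response function, τ·dCO2/dt = ΔT – f·ΔCO2, can be integrated with a textbook Runge-Kutta (RK4) step as mentioned above. This is a sketch with placeholder constants: f(T) is treated as a constant, chosen so the equilibrium matches the thread's 8 ppmv/K, and τ is arbitrary.

```python
def rk4_step(rhs, y, t, h):
    # one classical fourth-order Runge-Kutta step for dy/dt = rhs(t, y)
    k1 = rhs(t, y)
    k2 = rhs(t + h / 2, y + h * k1 / 2)
    k3 = rhs(t + h / 2, y + h * k2 / 2)
    k4 = rhs(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

tau, f, dT = 5.0, 0.125, 1.0       # placeholders; equilibrium is dT/f = 8 ppmv
rhs = lambda t, C: (dT - f * C) / tau

C, t, h = 0.0, 0.0, 0.5
while t < 500.0:                   # integrate 500 model years in half-year steps
    C = rk4_step(rhs, C, t, h)
    t += h
print(round(C, 2))                 # relaxes toward dT / f = 8.0 ppmv for a 1 K step
```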

• Bart says:

“That ends with dCO2/dt = 0 or H(s) = 0 in your formula.”

I think you are confusing that with the dc gain, which is H(s) evaluated at s = 0. As you can see, the dc gain is H(0) = k, which means that for a constant T input, your equation settles to CO2 = k*(T – T0), which is the same thing as what you would get by setting dCO2/dt = 0.

As for the rest, no, sorry, you are wrong there, too. You are 90 degrees out of phase with reality.

• Bart says:

If there are any lurkers out there interested, this conversation has been going on in parallel at Bishop Hill. There is a lot of overlap, but you may find interesting additional tidbits there.

94. Buck Smith says:

One perspective often missed in CO2 discussions is that fossil fuel combustion generates only one tenth the CO2 emissions of bugs and insects. Fossil fuel combustion is also only one tenth the CO2 emissions of microbes. I don’t think we can assume those levels are constant. Nor do we have any independent way to measure them.

• Buck,

There is an independent way to measure the net balance of the biosphere: the oxygen balance.

Besides a small contribution by warming oceans, all oxygen movements are from the biosphere: plants use CO2 and release oxygen, while bacteria, molds, insects, animals use oxygen and release CO2 by digesting plants, either directly or indirectly…

Burning fossil fuels also uses oxygen, but these quantities are known with reasonable accuracy. Each type of fuel has its own oxygen use, which means that the total oxygen use of fossil fuel burning is known from the individual sales of the different fuels and their burning efficiency.

Since about 1990, the oxygen measurements have been accurate enough to measure small changes of a few tenths of a ppmv in the 210,000 ppmv oxygen of the atmosphere. They show that the biosphere as a whole is a net producer of oxygen, thus a net sink for CO2, and thus not the cause of the CO2 increase in the atmosphere. See:
http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf

As nearly all life on earth depends on photosynthesis, there can’t be more life than photosynthesis has stored, at least not on the longer term. Plant growth in general increases with increasing temperatures (if not limited by other constraints like drought, nutrients, …), thus storing more CO2 in more permanent carbon species (humus, peat, brown coal, coal)…
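The oxygen bookkeeping described above reduces to a simple residual calculation. The numbers below are made-up round figures, purely to show the shape of the argument (they are not the measured values):

```python
# All numbers are hypothetical round figures, for the shape of the argument only.
o2_loss_expected_from_fuels = 20.0  # decline implied by fuel sales * stoichiometry
o2_loss_observed = 17.0             # measured atmospheric O2 decline
ocean_o2_outgassing = 1.0           # small warming-ocean correction

# residual O2 that must have been released by the biosphere
biosphere_o2_release = (o2_loss_expected_from_fuels
                        - o2_loss_observed - ocean_o2_outgassing)
print(biosphere_o2_release > 0)     # True: net O2 source, hence a net CO2 sink
```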

• And then there is the temperature-dependent oxygen solubility in clouds that you are not considering in your mass balance. Burning of fossil fuel is not a major player in these processes.

• fhhaynie,

Sorry, haven’t had the time to revisit your pages, it was too busy here and at the Bishop’s…

The changes in oxygen solubility in the ocean surface layer (average ~200 m) due to temperature changes are taken into account in the calculation; they are a tiny difference compared to burning fossil fuels and oxygen production by the biosphere. The amount of water in the atmosphere and clouds is only a fraction of what is in the ocean surface layer, and that is a very fast cycle: what goes into the atmosphere rains out in a few days. Even if the total amount increased or decreased over time, or its average temperature changed a lot, that would hardly affect the trend in O2 use…

• Yes, evaporation and rain is a fast cycle, but it is not constant. Cold water in the tops of thunderclouds absorbs air (oxygen and nitrogen) as well as CO2. Some of that water shoots out the top, freezes, and releases both air and CO2. The air in these clouds is moving upward fast enough to hold up large hail. When it rains, the water is warmed and some evaporates, releasing air and CO2. That which reaches the ocean surface continues the evaporation/condensation cycle while pumping CO2 into the upper atmosphere, where it is distributed globally. Thermo tells you the direction of flow; kinetics tells you the rate.

95. Richard says:

I agree with Ferdinand that a reduction of CO2’s solubility from the warming oceans couldn’t account for the assumed 120 ppmv rise. Ferdinand says 8 ppmv/°C, but I think that’s a slight underestimate; assuming the entire oceans warmed by 1°C down to 4000 m, the increase according to Henry’s law would be closer to 20 ppmv. I think changes in ocean eutrophication as a result of temperature changes are a big unknown and may account for some of the increase, and I agree with a lot of what Bart says.

• Richard,

Only the surface temperature is important for the (steady state) equilibrium between oceans and atmosphere. The deep(er) oceans are colder anyway and don’t emit much more CO2 when reaching the surface; only when these waters warm up to the temperature of the rest of the ocean surface is the maximum emission rate reached, regardless of what the temperature was in the deeper layers.

Once the steady state was reached (at about 290 ppmv for the current average ocean surface temperature), all excess CO2 pressure (~110 ppmv nowadays) in the atmosphere will push more CO2 into the oceans, which is what is measured:
http://www.pmel.noaa.gov/pubs/outstand/feel2331/exchange.shtml
and following pages…

• Bart says:

“Only the surface temperature is important for the (steady state) equilibrium between oceans and atmosphere.”

Completely and utterly wrong. This is not a static system. The surface oceans today are not the surface oceans of yesterday, or of tomorrow. They are always in flux.

• Bart,

It was a response to Richard about a change in temperature of the deep(er) oceans, that plays no role in the equilibrium: only the surface temperature does.

Of course, if the deep ocean upwelling increased in either total water flux or CO2 concentration or both, that would give a change in equilibrium, but the effect settles at about half the extra CO2 influx, as with increased CO2 pressure in the atmosphere the influx is suppressed and the outflux is increased.