Guest Post by Willis Eschenbach [see update at the end of the head post]
I first got introduced to the idea of “half-life” in the 1950s because the topic of the day was nuclear fallout. We practiced hiding under our school desks if the bomb went off, and talked about how long you’d have to stay underground to be safe, and somehow in all of that we never pondered exactly how a cheap sheet metal and plywood desk would ward off an atomic bomb … a simpler time indeed. But I digress. Half-life, as many people know, is how long it takes for a given starting amount of some radioactive substance to decay until only half of the starting amount remains. For example, the half-life of radioactive caesium-137 is about thirty years. This means if you start with a gram of radioactive caesium, in thirty years you’ll only have half a gram. And in thirty more years you’ll have a quarter of a gram. And in thirty more years there will only be an eighth of a gram of caesium remaining, and so on ad infinitum.
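For readers who like to check such things numerically, here’s a short Python sketch of the half-life arithmetic (the function name and starting amount are just for illustration; my own analysis code further down is in R):

```python
# Radioactive decay by half-life: amount remaining after a given time.
# Uses the caesium-137 half-life of ~30 years mentioned above.

def remaining(initial_grams, half_life_years, elapsed_years):
    """Amount left after elapsed_years of exponential decay."""
    return initial_grams * 0.5 ** (elapsed_years / half_life_years)

# Starting with 1 gram of Cs-137:
print(remaining(1.0, 30, 30))   # 0.5 g after one half-life
print(remaining(1.0, 30, 60))   # 0.25 g after two
print(remaining(1.0, 30, 90))   # 0.125 g after three
```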
This is a physical example of a common type of natural decay called “exponential decay”. The hallmark of exponential decay is that every time period, the decay is a certain percentage of what remains at that time. Exponential decay also describes what happens when a system which is at some kind of equilibrium is disturbed from that equilibrium. The system doesn’t return to equilibrium all at once. Instead, each year it moves a certain percentage of the remaining distance to equilibrium. Figure 1 shows the exponential decay after a single disturbance at time zero, as the disturbance is slowly decaying back to the pre-pulse value.
Figure 1. An example of a hypothetical exponential decay of a system at equilibrium from a single pulse of amplitude 1 at time zero. Each year it moves a certain percentage of the distance to the equilibrium value. The “half-life” and the time constant “tau” are two different ways of measuring the same thing, which is the decay rate. Half-life is the time to decay to half the original value. The time constant “tau” is the time to decay to 37% of the original value. Tau is also known as the “e-folding time”.
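The caption’s two measures are related by a fixed factor: since the remaining fraction after time t is exp(−t/tau), the half-life is the t at which that fraction equals one half, i.e. half-life = tau × ln(2). A small Python check, using the 59-year tau fitted later in this post purely as an example:

```python
import math

# Converting between the time constant tau (e-folding time) and half-life.
# remaining fraction after time t = exp(-t / tau), so
# half_life = tau * ln(2).

def half_life_from_tau(tau):
    return tau * math.log(2)

tau = 59.0                        # years; the best-fit value used later in the post
print(half_life_from_tau(tau))    # ~40.9 years
print(math.exp(-1))               # fraction left after one tau: ~0.37 (the "37%")
```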
Note that the driving impulse in Figure 1 is a single unit pulse, and in response we see a steady decay back to equilibrium. That is to say, the shape of the driving impulse is very different from the shape of the response.
Let’s consider a slightly more complex case. This is where we have an additional pulse of 1.0 units each succeeding year. That case is shown in Figure 2.
Figure 2. An example of a hypothetical exponential decay from constant annual pulses of amplitude 1. The pulses start at time zero and continue indefinitely.
Now, this is interesting. In the beginning, the exponential decay is not all that large, because the disturbance isn’t that large. But when we add an additional identical pulse each year, the disturbance grows.
But when the disturbance grows, the size of the annual decay grows as well. As a result, eventually the disturbance levels off. After a while, although we’re adding a one-unit pulse per year, the loss due to exponential decay is also one pulse per year, so there is no further increase.
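This leveling-off can be verified with a few lines of Python. The model below is a discrete sketch of Figure 2, and the 10-year tau is an arbitrary illustrative choice, not a value from the analysis:

```python
import math

# Discrete sketch of Figure 2: add a 1-unit pulse each year while the total
# decays with time constant tau. The level rises at first, then flattens out
# where the annual decay exactly offsets the annual pulse.

tau = 10.0                       # years, arbitrary illustrative value
retain = math.exp(-1.0 / tau)    # fraction surviving each year

level = 0.0
for year in range(500):
    level = level * retain + 1.0   # decay, then add this year's pulse

equilibrium = 1.0 / (1.0 - retain)  # where loss per year equals the 1-unit pulse
print(level, equilibrium)           # the simulated level converges to the equilibrium
```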
The impulse in Figure 2 is a steady addition of 1 unit per year, while the response rises and then levels off. So once again, the shape of the driving impulse is very different from the shape of the exponentially decaying response.
With that as prologue, we can look at the relationship between fossil fuel emissions and the resulting increase in airborne CO2. It is generally accepted that the injection of a pulse of e.g. volcanic gases into the planetary atmosphere is followed by an exponential decay of the temporarily increased volcanic gas levels back to some pre-existing equilibrium. We know that this exponential decay of an injected gas pulse is a real phenomenon, because if that decay didn’t happen, we’d all be choked to death from accumulated volcanic gases.
Knowing this, we can use an exponential decay analysis of the fossil fuel emissions data to estimate the CO2 levels that would result from those same emissions. Figure 3 shows theoretical and observed increases in various CO2 levels.
Figure 3. Theoretical and observed CO2 changes, in parts per million by volume (ppmv). The theoretical total CO2 from emissions (blue line) is what we’d have if there were no exponential decay and all emissions remained airborne. The red line is the observed change in airborne CO2. The amount that is sequestered by various CO2 sinks (violet) is calculated as the total amount put into the air (blue line) minus the observed amount remaining in the air (red line). The black line is the expected change in airborne CO2, calculated as the exponential decay of the total CO2 injected into the atmosphere. The calculation used best-fit values of 59 years as the time constant (tau) and 283 ppmv as the pre-industrial equilibrium level.
The first thing to notice is that the total amount of CO2 from fossil fuel emissions is much larger than the amount that remains in the atmosphere. The clear inference of this is that various natural sequestration processes have absorbed some but not all of the fossil fuel emissions. Also, the percentage of emissions that are naturally sequestered has remained constant since 1959. About 42% of the amount that is emitted is “sequestered”, that is to say removed from the atmosphere by natural carbon sinks.
Next, as you can see, using an exponential decay analysis gives us an extremely good fit between the theoretical and the observed increase in atmospheric CO2. In fact, the fit is so good that most of the time you can’t even see the red line (observed CO2) under the black line (calculated CO2).
Before I move on, please note that the amount remaining in the atmosphere is not a function of the annual emissions. Instead, it is a function of the total emissions, i.e. it is a function of the running sum of the annual emissions starting at t=0 (blue line).
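For concreteness, here is a Python sketch of that calculation: each year’s emissions (converted to ppmv) accumulate into an anomaly above the pre-industrial equilibrium, and the whole anomaly decays exponentially. My actual analysis is done in R with the real emissions data; the emissions series below is synthetic, purely to make the sketch self-contained:

```python
import math

# One-box exponential-decay model behind the black line in Figure 3.
# Best-fit values from the post: tau = 59 years, equilibrium = 283 ppmv.

TAU = 59.0          # years, best-fit time constant
C_EQ = 283.0        # ppmv, best-fit pre-industrial equilibrium
GTC_PER_PPMV = 2.13 # conversion factor, GtC per ppmv of CO2

def modelled_co2(annual_emissions_gtc):
    """Return the modelled atmospheric CO2 (ppmv), year by year."""
    retain = math.exp(-1.0 / TAU)
    anomaly = 0.0
    out = []
    for e in annual_emissions_gtc:
        anomaly = (anomaly + e / GTC_PER_PPMV) * retain
        out.append(C_EQ + anomaly)
    return out

# Synthetic emissions growing ~2%/yr from 2.5 GtC (illustrative only):
emissions = [2.5 * 1.02 ** i for i in range(60)]
print(modelled_co2(emissions)[-1])   # modelled ppmv after 60 years
```

Note that, as discussed above, the result at any year depends on the running sum of all prior emissions, not on that year’s emissions alone.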
Now, I got into all of this because against my better judgment I started to watch Dr. Salby’s video that was discussed on WUWT here. The very first argument that Dr. Salby makes involves the following two graphs:
Figure 4. Dr. Salby’s first figure, showing the annual global emissions of carbon in gigatonnes per year.
Figure 5. Dr. Salby’s second figure, showing the observed level of CO2 at Mauna Loa.
Note that according to his numbers the trend in emissions increased after 2002, but the CO2 trend is identical before and after 2002. Dr. Salby thinks this difference is very important.
At approximately four minutes into the video Dr. Salby comments on this difference with heavy sarcasm, saying:
The growth of fossil fuel emission increased by a factor of 300% … the growth of CO2 didn’t blink. How could this be? Say it ain’t so!
OK, I’ll step up to the plate and say it. It ain’t so, at least it’s not the way Dr. Salby thinks it is, for a few reasons.
First, note that he is comparing the wrong things. Observed CO2 is NOT a function of annual CO2 emissions. It is a function of total emissions, as discussed above and shown in Figure 3. The total amount remaining in the atmosphere at any time is a function of the total amount emitted up to that time. It is NOT a function of the individual annual emissions. So we would not expect the two graphs to have the same shape or the same trends.
Next, we can verify that he is looking at the wrong things by comparing the units used in the two graphics. Consider Figure 4, which has units of gigatonnes of carbon per year. Gigatonnes of carbon (GtC) emitted, and changes in airborne CO2 (parts per million by volume, “ppmv”), are related by the conversion factor of:
2.13 Gigatonnes carbon emitted = 1 ppmv CO2
This means that the units in Figure 4 can be converted from gigatonnes C per year to ppmv per year by simply dividing them by 2.13. So Figure 4 shows ppmv per year. But the units in Figure 5 are NOT the ppmv per year used in Figure 4. Instead, Figure 5 uses simple ppmv. Dr. Salby is not comparing like with like. He’s comparing ppmv of CO2 per year to plain old ppmv of CO2, and that is a meaningless comparison.
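The conversion itself is trivial, but keeping the units straight is the whole point, so here it is spelled out in Python (the function names are mine):

```python
# Converting between emitted carbon mass and atmospheric CO2 concentration,
# using the post's factor of 2.13 GtC per ppmv. Note the units: an annual
# emission in GtC/yr converts to ppmv/yr -- a RATE, not a concentration.

GTC_PER_PPMV = 2.13

def gtc_to_ppmv(gtc):
    return gtc / GTC_PER_PPMV

def ppmv_to_gtc(ppmv):
    return ppmv * GTC_PER_PPMV

# e.g. annual emissions of ~10 GtC/yr correspond to ~4.7 ppmv/yr of CO2
print(gtc_to_ppmv(10.0))
```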
He is looking at apples and oranges, and he waxes sarcastic about how other scientists haven’t paid attention to the fact that the two fruits are different … they are different because there is no reason to expect that apples and oranges would be the same. In fact, as Figure 3 shows, the observed CO2 has tracked the total human emissions very, very accurately. In particular, it shows that we do not expect a large trend change in observed CO2 around the year 2000 such as Dr. Salby expects, despite the fact that such a trend change exists in the annual emission data. Instead, the change is reflected in a gradual increase in the trend of the observed (and calculated) CO2 … and the observations are extremely well matched by the calculated values.
The final thing that’s wrong with his charts is that he’s looking at different time periods in his trend comparisons. For the emissions, he’s calculated the trends 1990-2002, and compared that to 2002-2013. But regarding the CO2 levels, he’s calculated the trends over entirely different periods, 1995-2002 and 2002-2014. Bad scientist, no cookies. You can’t pick two different periods to compare like that.
In summary? Well, the summary is short … Dr. Salby appears to not understand the relationship between fossil fuel carbon emissions and CO2.
That would be bad enough, but from there it just gets worse. Starting at about 31 minutes into the video Dr. Salby makes much of the fact that the 14C (“carbon-14”) isotope produced by the atomic bomb tests decayed exponentially (agreeing with what I discussed above) with a fairly short time constant tau of about nine years or so.
Figure 6. Dr. Salby demonstrates that airborne residence time constant tau for CO2 is around 8.6 years. “NTBT” is the Nuclear Test Ban Treaty.
Regarding this graph, Dr. Salby says that it is a result of exponential decay. He goes on to say that “Exponential decay means that the decay of CO2 is proportional to the abundance of CO2,” and I can only agree.
So far so good … but then Dr. Salby does something astounding. He graphs the 14C airborne residence time data up on the same graph as the “Bern Model” of CO2 pulse decay, says that they both show “Absorption of CO2”, and claims that the 14C isotope data definitively shows that the Bern model is wrong …
Figure 7. Dr. Salby’s figure showing both the “Bern Model” of the decay of a pulse of CO2 (violet line), along with the same data shown in Figure 6 for the airborne residence time of CO2 (blue line, green data points).
To reiterate, Dr. Salby says that the 14C bomb test (blue line identified as “Real World”) clearly shows that the Bern Model is wrong (violet line identified as “Model World”).
But as before, in Figure 7 Dr. Salby is again comparing apples and oranges. The 14C bomb test data (blue line) shows how long an individual CO2 molecule stays in the air. Note that this is a steady-state process, with individual CO2 molecules constantly being emitted from somewhere, staying airborne in the atmosphere with a time constant tau of around 8 years, and then being re-absorbed somewhere else in the carbon cycle. This is called the “airborne residence time” of CO2. It is the time an average CO2 molecule stays aloft before being re-absorbed.
But the airborne residence time (blue line) is very, very different from what the Bern Model (violet line) is estimating. The Bern Model is estimating how long it takes an entire pulse of additional CO2 to decay back to equilibrium concentration levels. This is NOT how long a CO2 molecule stays aloft. Instead, the Bern Model is estimating how long the increased atmospheric concentration from a pulse of injected CO2 takes to decay back to pre-pulse conditions. Let me summarize:
Airborne residence time (bomb test data): how long an individual CO2 molecule stays in the air.
Pulse decay time (Bern Model): how long the increased atmospheric concentration from a pulse of injected CO2 takes to decay back to pre-pulse conditions.
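Since the two quantities are defined differently, each gets its own decay curve. Here is a Python sketch computing both, using the ~8.6-year residence time from Figure 6 and, purely for illustration, the single 59-year pulse time constant fitted in Figure 3 (the Bern Model itself is a sum of several exponentials, so one tau is only a rough stand-in):

```python
import math

# Two different decay curves for two different quantities.
# Residence time: how fast labelled (14C) molecules are swapped out of the
#   air by the gross exchange flux.
# Pulse-adjustment time: how fast the NET uptake draws an excess
#   concentration back toward equilibrium.

TAU_RESIDENCE = 8.6   # years: average stay of one molecule (from Figure 6)
TAU_PULSE = 59.0      # years: decay of an excess concentration (Figure 3 fit)

def fraction_left(t, tau):
    return math.exp(-t / tau)

for t in (10, 30, 60):
    print(t, fraction_left(t, TAU_RESIDENCE), fraction_left(t, TAU_PULSE))
# e.g. at t = 30 years, only ~3% of the original labelled molecules are still
# airborne, but ~60% of the concentration anomaly still remains.
```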
So again Dr. Salby is conflating two very different measurements—airborne residence time on the one hand (blue line), and CO2 post-pulse concentration decay time on the other hand (violet line). It is meaningless to display them on the same graph. The 14C bomb test data neither supports nor falsifies the Bern Model. The 14C data says nothing about the Bern Model, because they are measuring entirely different things.
I was going to force myself to watch more of the video of his talk. But when I got that far into Dr. Salby’s video, I simply couldn’t continue. His opening move is to compare ppmv per year to plain ppmv, and get all snarky about how he’s the only one noticing that they are different. He follows that up by not knowing the difference between airborne residence time and pulse decay time.
Sorry, but after all of that good fun I’m not much interested in his other claims. Sadly, Dr. Salby has proven to me that regarding this particular subject he doesn’t understand what he’s talking about. I do know he wrote a text on Atmospheric Physics, so he’s nobody’s fool … but in this case he’s in way over his head.
Best regards to each of you on this fine spring evening,
For Clarity: If you disagree with something, please quote the exact words you disagree with. That will allow everyone to understand the exact nature of your disagreement.
Math Note: The theoretical total CO2 from emissions is calculated using the relationship 1 ppmv = 2.13 gigatonnes of carbon emitted.
Also, we only have observational data on CO2 concentrations since 1959. This means that the time constant calculated in Figure 3 is by no means definitive. It also means that the data is too short to reliably distinguish between e.g. the Bern Model (a fat-tailed exponential decay) and the simple single exponential decay model I used in Figure 3.
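For the curious, the best-fit values could be found with something as simple as a grid search over (tau, equilibrium) pairs. This Python sketch fits synthetic data generated from the model itself, just to show the machinery; my actual fit uses R and the real NOAA and emissions series:

```python
import math

# Grid-search fit of the one-box decay model: try (tau, c_eq) pairs and keep
# the one with the smallest sum of squared errors against "observed" CO2.
# Here the observations are synthetic, generated with tau=59 and c_eq=283,
# so the fit should recover those values exactly.

def model(emissions_ppmv, tau, c_eq):
    retain = math.exp(-1.0 / tau)
    anomaly, out = 0.0, []
    for e in emissions_ppmv:
        anomaly = (anomaly + e) * retain
        out.append(c_eq + anomaly)
    return out

emissions = [1.0 * 1.02 ** i for i in range(55)]     # synthetic, ppmv/yr
observed = model(emissions, 59.0, 283.0)             # synthetic "truth"

best = min(
    ((tau, ceq) for tau in range(30, 91) for ceq in range(275, 291)),
    key=lambda p: sum((m - o) ** 2 for m, o in zip(model(emissions, *p), observed)),
)
print(best)   # recovers (59, 283)
```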
Data and Code: I’ve put the R code and functions, the NOAA Monthly CO2 data (.CSV), and the annual fossil fuel carbon emissions data (.TXT) in a small zipped folder entitled “Salby Analysis Folder” (20 kb)
[Update]: Some commenters have said that I should have looked at an alternate measure. They said instead of looking at atmospheric CO2 versus the cumulative sum of annual emissions, I should show annual change in atmospheric CO2 versus annual emissions. We are nothing if not a full service website, so here is that Figure.
As you can see, this shows that it is a noisy system. Despite that, however, there is reasonably good and strongly statistically significant correlation between emissions and the change in atmospheric CO2. I note also that this method gives about the same numbers for the airborne fraction that I got from my analysis upthread.
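The alternate measure can be sketched in a few lines of Python. The series below are synthetic stand-ins with added noise (the real analysis uses the NOAA and emissions data), but they show how the airborne fraction falls out of the regression slope:

```python
import random

# Regress annual change in CO2 (ppmv/yr) against annual emissions (ppmv/yr).
# The slope is the airborne fraction; one minus the slope is the fraction
# sequestered by natural sinks. Data here are synthetic: a true airborne
# fraction of 0.58 plus noise, to mimic the noisy scatter in the figure.

random.seed(1)
emissions = [2.0 + 0.05 * i for i in range(50)]                   # ppmv/yr
delta_co2 = [0.58 * e + random.gauss(0, 0.3) for e in emissions]  # noisy response

# Ordinary least-squares slope, stdlib only:
n = len(emissions)
mx = sum(emissions) / n
my = sum(delta_co2) / n
slope = sum((x - mx) * (y - my) for x, y in zip(emissions, delta_co2)) / \
        sum((x - mx) ** 2 for x in emissions)
print(slope)   # near the true 0.58, i.e. ~42% sequestered
```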
CO2 is a red herring. A means to an end for Agenda 21. Nothing more. The proof is in the alarmists’ refusals to acknowledge any of the benefits of higher than pre-industrial 280 ppm level. Seizing control of western economies through energy utilization IS the holy grail to control and re-distribution of wealth.
The environmental impacts of higher CO2 are overplayed to achieve that end.
That’s the whole climate change scam in a nutshell.
Nothing to do with CO2, at all.
lol, more conspiracy theories at WUWT…..
so much for science
(Although I read somewhere that CO2 is negatively impacting red herring.)
Sorry Max! You read the graph wrong. CO2 has caused Red Herring to spawn at 4X the normal rate.
They have entered their data upside down.
Haven’t we been here before?
There is a concept called the half life of facts; this suggests that about 50% of matters that we take as ‘fact’ today, will in 10 years time be shown not to be a true and actual FACT.
It is probable that the climate warmists’ claim that CO2 is a significant driver of global temperatures, which at least as far as the MSM, ‘consensus’ scientists and politicians are concerned is taken as being ‘fact’, will be one of these ‘facts’ which in 10 years time will be shown not to be an actual FACT.
I learnt 🙂
That is probably true in a free and open society. However if the Greens and Environmentalists are able to control both the media and the scientific machinery… then in 10 years… so-called facts become superfacts which are maintained until they have no useful purpose. We see something like that happening now despite our best efforts to counter. Very dangerous indeed. GK
I very much agree Joel. After nearly two years studying other planetary atmospheres in depth it is clear there is nothing there in co2.
“For example, the half-life of radioactive caesium is about seventy days.”
That should be: For example, IF the half-life of radioactive caesium WERE about seventy days, then …….
The biological half life (how long it stays in the body) of Cs-137 is about 70 days.
Yep – well it depends on which isotope too! Caesium 135 has a half life of about 2 million years, 137 is 30 years, 134 is 2 years but most other isotopes have half lives measured in seconds to days.
Thanks, Dennis and xyzzy11, fixed. Moving too fast and depending on my memory. Such is life.
So Willis, If a 14CO2 molecule is removed from the atmosphere, that presumably is the consequence of some process. For example, it might have dissolved in the water of some droplet, that then rained out.
Now that marked molecule might re-enter the atmosphere; perhaps I drank the rain drop and later exhaled the 14CO2 back into the atmosphere.
So some of the “decayed” excess 14CO2 can get recycled back to the atmosphere.
Presumably, the observed abundance reflects any recirculation of marked molecules.
I tend to think of CO2 (and H2O) as PERMANENT components of the atmosphere, and whether a particular molecule get replaced by another particular molecule, is somewhat irrelevant.
At the north pole, where grows no trees, some 18-20ppmm of excess CO2 (three times the ML amount) gets removed in as little as five months.
So the decay rate would eliminate an excess of 120ppmm (above 280ppmm) in 30 months, suggesting a decay time constant of 2 1/2 years for whatever processes occur at the north pole.
george e. smith April 20, 2015 at 1:28 pm
Thanks, George, interesting question. First, yes, the observed abundance is a measured value, so perforce it must include everything.
However, I doubt very much if the recirculation is at all significant. Once an atom of 14C leaves the atmosphere, it enters the very, very much larger reservoir of carbon circulating around the carbon cycle. This immediately dilutes by orders of magnitude.
In addition, the various atoms will be recirculated at different times. This dilutes them in time, as some of them will not reappear for decades, centuries, or longer.
As a result of this large temporal and spatial dilution, it seems to me that there wouldn’t be any significant recirculation. And indeed, this is borne out by the fact that the 14C numbers decayed all the way back to the pre-bomb pulse values.
Nuclear decay rate varies.
Well that’s interesting. Thanks. It certainly warrants confirmation, one way or another.
I think that text about figure 7 needs a little more explanation.
Nuclear tests did not measurably increase the amount of CO2 in the atmosphere; they only altered its isotopic composition (assuming all the 14C released in these tests ended up in CO2, which I’m not quite sure about). Therefore its decay line follows 14C diffusion between the atmosphere and the other carbon storages (biosphere, soil, surface sea waters, deep sea waters, …) in near-balanced conditions, while atmospheric CO2 concentrations remain nearly constant.
The Bern model estimates the decay of a doubling of CO2 in the atmosphere. That’s not a simple overturn; that’s a situation where there suddenly appears a large disproportion in the balance between the saturation of the individual carbon storages, and even the overturn rate changes significantly.
A 14C pulse in such conditions would show even faster decay.
It seems that your Figure 3 could be reproduced using any given year as a starting point.
To properly analyze the problem put forward by Dr. Salby, it seems that you should re-draw Figure 3 two times, first showing the curve over his specified 1990-2001 period, then again using his 2002-2014 period. According to you they should be equal, since the observed rate of increase in CO2 from Mauna Loa has been identical over both periods. Perhaps you could overlay them for us, like you did for your black and red curves in Figure 3, so we can see for ourselves that the human emissions predict there should be an identical rate of rise over both periods.
I agree that “we would not expect the two graphs to have the same shape or the same trends”. As I didn’t watch Salby’s video, I don’t know if that’s what he claims that should be happening, I will assume yes. If so, he is wrong. However, once one assumes that the increase in CO2 concentration is totally our fault, and IMO it is (as your figure 3 shows, Nature is actually working to try to counter it rather than adding more), then a significant increase of how much CO2 we emit should be followed by a significant increase in how much CO2 concentration rises. If we were, in 2013, emitting to the atmosphere the equivalent of 11*0.14=1.54 ppmv of CO2 more per year than we were in 2002, even if Nature partially counters that increase, we should be seeing an increase of the speed at which CO2 increases, of probably not as much as 1.54ppmv/year, but still SOME increase. Half of it? 40% of it? I don’t know. But we would NOT expect it to remain the same that it was in 2002. And that’s significant, in my opinion.
Do you, perhaps, think that the rate should be what is observed? Just by eyeballing, even your model of CO2 decay seems to not expect that. In the later part, it goes from below the red line to above the red line in the end, so your model seems to think that we should have seen a higher increase of CO2 concentration in the last 10 years or so.
Nylo – The curious thing is that although the ocean is absorbing about half the amount of CO2 that mankind is emitting, that does not mean that mankind is responsible for all of the atmospheric increase. If mankind had emitted no CO2, and if the temperature had done exactly what the temperature has done (gone up a bit) then atmospheric CO2 would actually have increased a bit through emission from the ocean.
I’m not sure about your last sentence Mike. If the oceans had warmed slightly wouldn’t the absorbance of CO2 decreased?
Partial pressure increase = greater solublity. Temperature increase = lower solubility. Balancing act.
Without our CO2 emissions temps would not have gone up in the late 20th century.
Also the oceans would still be mostly sinks ….
“then atmospheric CO2 would actually have increased a bit through emission from the ocean.”
Richard111 – (As Alex said) A warmer ocean holds less CO2 so releases it into the atmosphere (or absorbs less from the atmosphere).
Yes, but the equilibrium between oceans and atmosphere only changes some 8 ppmv/°C. That means that the ~0.8°C increase since the LIA is good for about 6 ppmv increase in the atmosphere. That is all.
The rest of the 110 ppmv increase is from humans, which emitted some 200 ppmv over the past 160 years…
Mike Jonas says:
If mankind had emitted no CO2, and if the temperature had done exactly what the temperature has done (gone up a bit) then atmospheric CO2 would actually have increased a bit through emission from the ocean.
I agree. Global T has fluctuated slightly due to natural forcing, not because of CO2 (not saying that CO2 has no effect, but any warming from CO2 is just too small to measure). But all the available evidence shows that ∆CO2 follows ∆T, not vice-versa.
And pay no attention to anyone who asserts that “Without our CO2 emissions temps would not have gone up in the late 20th century.”
That is no more than a religious Belief. Without evidence showing cause and effect, statements like that belong on Hotwhopper, not on a science site.
Mike Jonas April 20, 2015 at 12:28 am
Mike, you are right about what happens … but the effect is quite small. As the ocean warms it outgasses, increasing the atmospheric CO2.
However, both theoretical and observational studies put the size of this effect at something around 15 ppmv per degree C of temperature rise. This puts the ocean thermal contribution of the 20th century (a warming of about 0.6°C) at about 10 ppmv of CO2. This 10 ppmv is far too small to explain the total CO2 increase of about 100 ppmv during that same time.
April 20, 2015 at 1:27 am
There is not a shred of evidence supporting your baseless assertion that without man-made CO2 the late 20th century would not have warmed.
The warming from c. 1977-96 can be accounted for without considering the rise in CO2 levels, the effect of which, if any, is negligible. The world warmed from the end of the LIA in the mid-19th century naturally in cycles related to oceanic oscillations. The early 20th century warming from the late ‘teens to late ’40s was followed by the cooling from then until the late ’70s, so the next cycle due was warming. The late 20th century warming looks virtually identical to the early 20th century warming cycle.
CO2 alarmism fails to reject the null hypothesis, ie that the late 20th century warming was entirely or predominantly from the same natural causes that produced all previous such warming cycles within the Holocene and prior interglacials.
“But all the available evidence shows that ∆CO2 follows ∆T, not vice-versa.”
This is wrong. In the first place, increases in CO2 both precede and follow increases in temperature.
AGW theory tells us this will be so and the data show that. In fact the “lag” was predicted before it was ascertained.
Finally there is this paper.
Here is a hint…
“We have known for a while that the Earth has historically had higher levels of greenhouse gases during warm periods than during ice ages. However, it had so-far remained impossible to discern cause and effect from the analysis of information (in encapsulated gas bubbles) contained in ice cores.
An international team of researchers led by Egbert van Nes from Wageningen University (Netherlands) now used a novel mathematical insight developed to have a fresh look at the data. The analysis reveals that the glacial cycles experienced by the planet over the past 400,000 years are governed by strong internal feedbacks in the Earth system. Slight variations in the Earth orbit known as Milankovitch cycles, functioned merely as a subtle pacemaker for the process. In addition to the well understood effect of greenhouse gases on the Earth temperature, the researchers could now confirm directly from the ice-core data that the global temperature has a profound effect on atmospheric greenhouse gas concentrations. This means that as the Earth temperature rises, the positive feedback in the system results in additional warming.
“A fundamental insight by George Sugihara from the USA on how one can use observed dynamics in time series to infer causality caused a big splash in the field,” explains Egbert van Nes. “It immediately made us wonder whether it could be used to solve the enigma of the iconic correlated temperature and gas history of the Earth.”
Indeed this riddle has proven hard to solve. A slight lead of Antarctic temperature over CO2 variations has been argued to point to temperature as a driver of CO2 changes. However, more recent studies cast doubt on the existence of a significant time-lag between CO2 and temperature.
“It can be highly misleading to use simple correlation to infer causality in complex systems,” says George Sugihara from Scripps Institution of Oceanography (USA). “Correlations can come and go as mirages, and cause and effect can go both ways as in kind of chicken and egg problem, and this requires a fundamentally different way to look at the data.”
As direct evidence from data has been hard to achieve, Earth system models are used as a less direct alternative to quantify causality in the climate system. However, although the effects of greenhouse gases on Earth’s temperature are relatively well understood, estimating the actual strength of this effect is challenging, because it involves a plethora of mechanisms that are difficult to quantify and sometimes oppose each other.
“Our new results confirm the prediction of positive feedback from the climate models,” says Tim Lenton, a climate researcher from the University of Exeter (UK). “The big difference is that now we have independent data based evidence.”
The following Movie will show you how it works
and one of the guys. not your typical “ivory tower” kind of scientist
Ferdinand and Willis – I was addressing the statement “once one assumes that the increase in CO2 concentration is totally our fault, and IMO it is“. But you are both right, the effect is modest, and I should have said so.
nope. you are wrong. very wrong. but hey, when you think you can explain the late 20th century warming, publish your evidence in the scientific literature and see if it can convince the experts on this topic.
D. Kuhn says:
when you think you can explain the late 20th century warming…
It is not the job of scientific skeptics to explain, but rather, to debunk. We have done an excellent job of debunking the failed notion that CO2 is the control knob of the climate.
The AGW conjecture cannot make accurate, consistent predictions. So it cannot be a ‘theory’.
I note that you have run away from the debate on “lag” and the science of causality in dynamical systems.
Again, WRT the “lag”
1. It was predicted by the theory
2. It was discovered thus confirming the theory.
Evidence comes in three varieties: evidence can CONFIRM a theory (not prove it), evidence can DISCONFIRM a theory (theories are not falsified, they are modified), and evidence can be unrelated to a theory. When we predict that the world will warm and it does, that is confirmation. When we predict the world will warm by 2C and it warms by 1C, that is still confirmation, and also an indication that improvement is possible.
“The AGW conjecture cannot make accurate, consistent predictions. So it cannot be a ‘theory”
The theory has made successful predictions since its inception.
For example in the 1930s guy Calandar predicted that if we increase C02 the temperature would go up.
He was correct. The temperature did go up.
The accuracy of the predictions is also pretty damn good considering the complexity of the system.
Warming predictions run about 2C per century and we see something consistently less than that. Let’s put it this way: if the warming were only 1C per century it would STILL be a good prediction, good enough to base policy on. Faced with the question “how much will it warm?” we have these two answers:
A) A best estimate of 2C, but it is likely high.
B) Skeptics who blather that we cant know
No decision maker is going to listen to a person who says that they can’t know. It’s Tuesday morning, my weekly forecast is due and it’s running as I type. It will be wrong, sometimes 15% high, sometimes 20%, and sometimes 40% high. Nevertheless, we take action based on the forecast and the model because it works better than shrugging your shoulders. A few critics try to point out errors that everyone knows about as if they were doing something by merely being critical. Doubt doesn’t win. Critics who can’t improve a model are never listened to. They make a lot of noise, but they have no power.
You can’t do science by merely criticizing. If you don’t work to improve understanding, if you avoid the debate as skeptics do by shrugging their shoulders, then no one will listen to you.
Steven Mosher, April 20, 2015 at 11:57 am
Steven, I have my reservations about the CO2 lead-lag in ice cores.
It is rather difficult to see any lead-lag during a deglaciation, as over the 5000 years warming, at least 4000 years CH4 and CO2 rise overlap with the temperature rise.
During the onset of a new ice age, things are quite clear: CO2 drops thousands of years later than temperature and CH4 which are synchronous.
That is not a matter of problems with the gas age – ice age difference, as CH4 is measured in the gas phase, just as CO2 is.
Here for the previous interglacial:
where temperature is at a new minimum and ice sheets at a new maximum before CO2 starts to drop…
Moreover, according to prof. Lindzen, the energy needed to melt the ice sheets over the period of the warming is about 200 W/m2 continuous. The extra supply by increasing CO2 levels is less than 2 W/m2…
When reading the article I fully expected Willis to show what the emission and absorption rate would have to be to get a straight line increase. I thought that is where the argument was headed. All the elements are there.
This sort of addresses some comments below which indicate that some people are having trouble adding the two curves together in their minds: the increasing decay rate as the total disturbance in concentration continues to go up, and the fixed disturbance followed by a percentage decay.
To get a straight-line increase in concentration, the emission rate would not be linear; it would have to be enough to cover absorption plus the amount of increase. As the total went up, the amount needed to cover absorption would increase as well, and any emission beyond that would generate a further increase.
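That bookkeeping can be sketched in a few lines. The sink fraction and target rise below are invented round numbers, purely to show the shape of the argument:

```python
# Sketch: emissions needed for a straight-line rise in concentration
# when the sink removes a fixed fraction of the excess each year.
# All numbers are illustrative (ppmv and ppmv-equivalent emissions).

SINK_FRACTION = 0.02   # fraction of the excess absorbed per year (assumed)
RISE_PER_YEAR = 2.0    # desired straight-line increase, ppmv/year (assumed)

excess = 0.0           # excess concentration above equilibrium, ppmv
emissions = []
for year in range(50):
    absorbed = SINK_FRACTION * excess
    # To rise by RISE_PER_YEAR, emissions must cover absorption plus the rise.
    needed = RISE_PER_YEAR + absorbed
    emissions.append(needed)
    excess += needed - absorbed   # net change is exactly RISE_PER_YEAR

# As the excess grows, the required emissions grow steadily too.
print(emissions[0], emissions[-1])
```

The required emissions start at 2.0 and climb toward roughly 3.96 by year 50: a straight-line concentration rise against a proportional sink demands ever-growing emissions.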
On a minor point about the gigaton to ppm thing:
W: “The theoretical total CO2 from emissions is calculated using the relationship 1 ppmv = 2.13 gigatonnes of carbon emitted.”
By my calculated guess “carbon” here does not mean “carbon-dioxide”. It seems to be the approximate mole mass of air divided by the mole mass of carbon. 25.6/12 ~2.13. Because the terms carbon and carbon-dioxide are used interchangeably in the alarmosphere always take the time to find out which is being used. The conversion number for CO2 would be 25.6/44 ~0.58. If I am wrong someone please correct it.
GtC as carbon is a mass; ppmv CO2 in the atmosphere is a ratio: volume of CO2 in the total volume of air. Including the ratio of the molecular masses, that works out to ~2.13 GtC/ppmv.
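For anyone wanting to check the 2.13 figure, it follows from the mass of the atmosphere and the molar masses of air and carbon; a back-of-envelope sketch using standard textbook values (not numbers from this thread):

```python
# Back-of-envelope check of the ~2.13 GtC per ppmv conversion.
ATM_MASS_KG = 5.137e18   # mass of the atmosphere, kg (standard estimate)
M_AIR = 28.97            # mean molar mass of dry air, g/mol
M_C = 12.011             # molar mass of carbon, g/mol

# 1 ppmv of CO2 means 1 mole of CO2 per 1e6 moles of air.
moles_air = ATM_MASS_KG * 1000 / M_AIR          # total moles of air
moles_co2_per_ppmv = moles_air / 1e6            # moles of CO2 in 1 ppmv
gtc_per_ppmv = moles_co2_per_ppmv * M_C / 1e15  # grams -> gigatonnes

print(round(gtc_per_ppmv, 2))  # ≈ 2.13
```

Multiplying by 44.01/12.011 gives the same conversion expressed as mass of CO2 rather than carbon, roughly 7.8 Gt CO2 per ppmv.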
The near linear increase of the rate of change of emissions, increase in CO2 and sink rate, was caused by the slightly quadratic increase of total emissions and increase in the atmosphere over time:
Just a thought, and I am no scientist. But to me, one of the main reasons that high-altitude atmospheric nuclear testing, and in fact all testing, in the late ’50s and ’60s (including especially the hydrogen and neutron bomb tests) was stopped had nothing to do with radiation. I believe the reason they stopped is that the political powers realized that the EMP effects of those tests were having a serious effect on all of their unprotected electronics, on the battlefield and at home, for both sides, so nobody would “win”… (so back to the horse and buggy and the proliferation of the arms race in conventional weaponry). But gee, I could be wrong.
Nobody would have won whatever they did.
Einstein, I believe, said that he didn’t know what World War 3 would be fought with but he knew what weapons would be used in World War 4…….sticks and stones
Thanks GregK, I had forgotten about that statement. I wonder who has the bigger sticks these days, though.
Willis, you have confused me.
“It is the time an average CO2 molecule stays aloft before being re-absorbed.”
What property of the molecule is averaged?
It’s based on probabilities. Billions upon billions of molecules. Some may be absorbed almost immediately; some may take many years. It probably follows a bell-shaped curve. The centre of the curve would be the average time that this happens.
The really bad thing is that it is probably more like a Boltzmann distribution. The problem is that those are hard to deal with, so we forget the long high tail and pretend it is a normal bell curve to make the statistics work out nicely. Most of the time it works out close enough for instrument resolution, so we go with it.
ghl April 19, 2015 at 11:00 pm
Thanks, ghl. We’re averaging the length of time that a molecule of CO2 stays airborne—the time between being emitted somewhere into the atmosphere, and then later being re-absorbed into some other part of the carbon cycle (plants, the ocean, etc.).
Willis, I think (sometimes dangerous) that what you intend is ” the MEAN time that a TYPICAL CO2 molecule stays aloft is blah blah blah ..”
And I too would doubt that it is bell shaped.
I tend to stay away from “average” unless I mean a strict mathematical average, which of course implies a given data set of exactly known numbers.
You are talking about your “average Joe CO2 molecule “.
Well I shun averages anyway; they are too late to do anything about.
re the half-life of atmospheric CO2 : The main sink of atmospheric CO2 is the ocean. The sink rate is proportional to the difference in CO2 partial pressure between atmosphere and ocean. This pressure difference has a half-life which is, by my calcs, around 13 years. IOW all other things being equal, the pressure difference halves in about 13 years. But – and it is quite a big “But” – that doesn’t mean that half of that extra atmospheric CO2 goes into the ocean in 13 years, because the absorption of CO2 pushes up the CO2 partial pressure in the ocean (and that’s complicated by chemical reactions in the ocean that reduce it). The end effect so far has been that around half of the total amount of man-made CO2 in the last few decades has gone into the ocean, and is likely to continue to do so for quite a long time yet (statements by alarmists that the ocean is getting “saturated” appear to be false). But in the long term, the atmospheric CO2 concentration will end up higher than it otherwise would be, because of the increased CO2 partial pressure in the ocean. How long will that take to go down? I don’t know. But with a new glacial coming in the next few thousand years, I sincerely hope that the extra ocean CO2 partial pressure lasts a lot longer than that, because that extra atmospheric CO2 is really going to be needed to keep the world’s plant life going.
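As an arithmetic aside, a 13-year half-life (the commenter's own estimate) can be converted to a time constant and a per-year decay fraction; a quick sketch:

```python
import math

# Convert the commenter's estimated half-life of the atmosphere-ocean
# CO2 pressure difference into the equivalent e-folding time (tau) and
# the fraction of the remaining difference lost each year.
HALF_LIFE_YEARS = 13.0  # the commenter's estimate, taken at face value

tau = HALF_LIFE_YEARS / math.log(2)              # e-folding time, years
yearly_decay = 1 - 0.5 ** (1 / HALF_LIFE_YEARS)  # fraction lost per year

print(round(tau, 1))           # ~18.8 years
print(round(yearly_decay, 3))  # ~0.052, i.e. about 5% per year
```

So "half-life of 13 years" and "about 5% of the remaining pressure difference removed per year" are the same statement in two notations.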
Mike Jonas says:
But with a new glacial coming in the next few thousand years, I sincerely hope that the extra ocean CO2 partial pressure lasts a lot longer than that, because that extra atmospheric CO2 is really going to be needed to keep the world’s plant life going.
I agree with your comment in general, and have posted similar comments for several years.
However, I have been severely time-constrained in recent years so have done no detailed work.
Have you run the numbers?
Does the great “low-CO2” extinction event occur in the next Ice Age, or the one after that, or…?
The sun and the biome conspire to almost irreversibly sequester carbon. If man did not exist, it would be useful to invent him.
Fortunately, once man’s pitiful little aliquot of injected carbon is exhausted, and atmospheric CO2 resumes its inevitable downward progress, the progress will be slow enough for plants to evolve, as they have already done.
Hi Allan – Sorry, no I haven’t run the numbers. I am not sure whether my formulae are valid all the way down into a full glacial (eg, sea ice area is important, and a modest error there could grow errors exponentially over time). So if I think that medical advances can keep me alive long enough to observe it directly, I’ll consider moving to the tropics and watch events from there. [NB. That “if” is as in formal logic p->q]
No problem. Produce more cement.
Or start mining and burning clathrates
I have no time to run the numbers, but I do not think we have millions of years left for carbon-based life on Earth.
Over time, CO2 is ~permanently sequestered in carbonate rocks, so concentrations get lower and lower. During an Ice Age, atmospheric CO2 concentrations drop to very low levels due to solution in cold oceans, etc. Below a certain atmospheric CO2 concentration, terrestrial photosynthesis slows and shuts down. I suppose life in the oceans can carry on but terrestrial life is done.
So when will this happen – in the next Ice Age a few thousands years hence, or the one after that ~100,000 years later, or the one after that?
In geologic time, we are talking the blink of an eye before terrestrial life on Earth ceases due to CO2 starvation.
I wrote the following on this subject, posted on Icecap.us some months ago:
On Climate Science, Global Cooling, Ice Ages and Geo-Engineering:
Furthermore, increased atmospheric CO2 from whatever cause is clearly beneficial to humanity and the environment. Earth’s atmosphere is clearly CO2 deficient and continues to decline over geological time. In fact, atmospheric CO2 at this time is too low, dangerously low for the longer term survival of carbon-based life on Earth.
More Ice Ages, which are inevitable unless geo-engineering can prevent them, will cause atmospheric CO2 concentrations on Earth to decline to the point where photosynthesis slows and ultimately ceases. This would devastate the descendants of most current [terrestrial] life on Earth, which is carbon-based and to which, I suggest, we have a significant moral obligation.
Atmospheric and dissolved oceanic CO2 is the feedstock for all carbon-based life on Earth. More CO2 is better. Within reasonable limits, a lot more CO2 is a lot better.
As a devoted fan of carbon-based life on Earth, I feel it is my duty to advocate on our behalf. To be clear, I am not prejudiced against non-carbon-based life forms, but I really do not know any of them well enough to form an opinion. They could be very nice. 🙂
You wrote: “Over time, CO2 is ~permanently sequestered in carbonate rocks, so concentrations get lower and lower.”
That is not correct. Carbonate rocks get subducted into the mantle, where they decompose under the high temperature. The CO2 is eventually returned to the atmosphere via volcanoes. Otherwise, life would have disappeared a very long time ago.
Atmospheric CO2 has gone up and down over time, within a moderately constrained range (something like 180 to 1500 ppmv). So there are negative feedbacks. Less CO2 in the atmosphere means less uptake, whether by plants, weathering rocks, or into the ocean. Also, lower temperature, which slows weathering. More CO2 has the opposite effect. So we need not fear a low CO2 extinction.
Minor amounts of carbonate rocks are subducted with oceanic plates into the Earth’s crust because there is very little carbonate in deep oceanic sediments (CO2 dissolves at great ocean depths). Most carbonate deposits are located in stable continental basins, for example, the huge Great Basin of central-western North America, a shallow-sea environment that has sequestered countless trillions of tons of CO2 in carbonate rocks for well over 400 million years.
Yes Mike, you are right, and you must also count sequestration on dry land, most of which is stable for hundreds of millions of years.
So there must exist a longer cycle in which carbonate rocks from the continental shelf and the continents are recycled and changed back into CO2 and returned to the atmosphere.
I would bet on methane and oil in this. After some time, most of the limestone on the continents and continental shelf changes to methane and oil and then seeps back to the surface, as it is lighter than rock, replenishing CO2 in the atmosphere.
April 20, 2015 at 7:43 am
For the Cenozoic and Mesozoic Eras and Carboniferous and Permian Periods, you’re right about the upper limit on CO2 level, but not for the first 184 million years of the Paleozoic Era, nor for most if not all of the billions of years of the Pre-Cambrian Eons. In the Cambrian, Ordovician, Silurian and Devonian Periods, atmospheric CO2 levels were in several thousand parts per million, and even (much) higher during the Pre-Cambrian.
Mike- I am not an expert in this area but I suggest your subduction argument fails due to the magnitudes of CO2 sinks (large) vs sources (small).
If I am wrong, why do we have areally huge, thick stable beds of carbonates all over the planet?
You seem to be saying the sinks and sources are in equilibrium, and I suggest they are not.
I suppose one should run the numbers (if possible) but I do not have the time.
Fortunately, once man’s pitiful little aliquot of injected carbon is exhausted, and atmospheric CO2 resumes its inevitable downward progress, the progress will be slow enough for plants to evolve, as they have already done.
Kim, I presume you are referring to C3 vs. C4 plants etc.
I repeat, I have not run the numbers, but I suggest this extinction event could happen in a few thousand years, or a few hundred thousand years – the blink of an eye in geologic time.
Do you really think plants will have time to adapt?
Alex and Owen:
That’s a lot of cement.
What to do with it? Pave the planet to keep the dust down?
That’s quite a treadmill we will be on. 🙂
I’m having difficulty understanding why the residence time of a pulse of CO2 should be different from the residence time of its individual molecules. The carbon involved is brand new carbon, created from nitrogen, and it is a measurable pulse.
Further explanation is needed.
Carbon is element 6 and nitrogen is element 7. It’s not very likely to change one into the other outside of some complex nuclear process.
They are talking about those produced by a nuclear explosion. The bomb releases a very large number of neutrons, which thermalize in the atmosphere. When one strikes a 14N nucleus, it spits out a proton to form 14C. The 14C then oxidizes in the atmosphere to form CO2. The reaction cross section of the 14N(n,p)14C reaction is about 2 barns, which is fairly large. You just have to have a large source of thermal neutrons to make lots of it. A nuclear bomb is just such a source.
Suppose you have a market stall with high cash turnover. You make a big sale – someone pays you $1000 in $10 notes. At the end of the day, you may not have many of those notes left, but you still have the benefit of the sale.
The difference is about this:
With the 14C experiment, you mark all the money owned by people in a selected city and study how long it takes until they have got rid of all of it, replacing it with unmarked money.
With the Bern model, you double the amount of money owned by all the people in a selected city and study how long it takes until they return to being equally rich as the rest of the nation.
Hmmmm. It seems to me the Bern model is unphysical, neglecting all the negative feedbacks recruited in a more gradual rise in CO2. And therein lies all the difference.
You would have to take into account that the ocean is venting some 90 GtC/y in the form of CO2 into the atmosphere, without much 14C, while it sucks up 92 GtC/y from the atmosphere with the 14C. That way the carbon cycle rinses out 14C rather quickly from the atmosphere.
Note also that this is a process with some fractionation: the heavier (14)CO2 has more affinity with water, so it goes into the water much more easily than it comes out, compared to normal (12)CO2.
“I’m having difficulty understanding why the residence time of a pulse of CO2 should be different from the residence time of its individual molecules. The carbon involved is brand new carbon, created from nitrogen, and it is a measurable pulse.
Further explanation is needed.”
With radioactive decay, the process is one way: K-40 decays to give Ar-40, but the reverse does not happen. So there is never an equilibrium.
But CO2 taken up by the oceans and plants is not one way. The plants die, decay, and return the CO2 to the atmosphere. CO2 molecules pass from the atmosphere into the ocean and vice versa.
So let D be the rate at which CO2 dissolves in the ocean and let E be the rate at which CO2 evaporates from the ocean. The residence time of individual molecules is C/D, where C is the amount of CO2 in the atmosphere. After a pulse of CO2 into the atmosphere, the rate at which the atmospheric concentration decreases is D-E. So the time constant for the decrease is C/(D-E); that is much larger, i.e., slower.
For C14, there is essentially none in the ocean, so the time constant you measure is the shorter one, C/D.
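The algebra in the comment above can be illustrated with toy numbers. The reservoir and flux values below are round figures of the kind often quoted, not measurements:

```python
# Toy illustration of residence time C/D vs adjustment time C/(D-E).
C = 800.0   # atmospheric carbon, GtC (rough order of magnitude, assumed)
D = 90.0    # gross flux out of the atmosphere, GtC/yr (assumed)
E = 86.0    # gross flux back into the atmosphere, GtC/yr (assumed)

residence_time = C / D         # how long a typical molecule stays aloft
adjustment_time = C / (D - E)  # how long a bulk excess takes to decay

print(round(residence_time, 1))   # ~8.9 years
print(round(adjustment_time, 1))  # 200.0 years
```

With gross fluxes large and nearly balanced, the individual-molecule residence time (under a decade) and the bulk-pulse adjustment time (centuries) differ by more than an order of magnitude, which is exactly the distinction being argued over in this thread.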
Carbon-14’s half-life is 5730 years, much longer than either time constant that we are discussing. Therefore, even uncorrected for radioactive decay, the C14 concentration can validly be used to measure the sequestration time constant.
I think Willis is wrong to say that there is a difference between an individual molecule and the bulk gas. The C14 is merely a marker, mixed in with the bulk, and thus will behave no differently from the bulk. Otherwise he is implying that the sequestration time constant is a function of the magnitude of the impulse.
The problem is the time delay between sink and source via the deep oceans: what goes into the deep has the isotopic composition of today (with some fractionation), but what comes out of the deep oceans has the composition of ~1000 years ago, which was less than half of what goes in. That makes the decay rate of a 14CO2 peak at least a factor of 4 faster than for a 12/13CO2 peak…
Here (from here) is the Scripps plot of seasonally smoothed CO2 vs 57% of emissions. They superimpose almost exactly. Hard to see any 2002 problem.
That makes no sense – the real airborne fraction is decreasing and is now only about 40%. The 2002 problem is actually a decreasing AF since about 2002 (increasing emissions and flat atmospheric growth).
There is no law that says that the airborne fraction should remain the same. Natural sinks are highly variable, as they depend on temperature (El Niño), drought, sunlight (Pinatubo) and the total CO2 pressure in the atmosphere. If you look at the period 1987-1995, there was a similar drop, even with increasing temperature, while the current period shows a flat temperature…
…somehow in all of that we never pondered exactly how a cheap sheet metal and plywood desk would ward off an atomic bomb…
Purely as an aside, sitting under a desk would be expected to minimise injury from flying glass, and burns from a heat pulse passing through the window.
There’s not a lot you can easily do to protect someone sitting in the open inside the radius of a nuclear fireball. But if a nuclear attack on a city is underway, the bombs will be airbursts, relying on the blast and heat pulse to spread destruction very widely. Heat pulses will cause fires many miles from ground zero, and one of the easy cheap things you can do to lower destruction, if you have time, is to paint your windows white (since many people have some white paint somewhere handy). The heat pulse comes through a window before a blast (when the window is still intact), and would start thousands of small internal fires inside exposed homes.
This simple action could save hundreds of thousands of lives in a nuclear exchange. So it was suggested in Civil Defence pamphlets of the 1950s. And the Ban the Bomb activists mocked it so successfully that it was taken out – thus ensuring far greater carnage. It was about then, in my formative years, that I began to adopt the cynicism which has marked my adult ones….
Just last year, sitting under a desk would have saved many people from flying-glass injuries from the overhead blast of the Siberian meteorite. People went to the windows to “see” – and thus were blinded.
“Cold War kids were hard to kill,
under their desks in an air-raid drill”
….. Billy Joel
…sitting under a desk would be expected to minimise injury…
Yeah. There’s a more than a little ignorant self-flattery (sound familiar?) involved with people scoffing at and mocking “duck and cover”.
Unless you lived anywhere near a Minuteman silo complex: the Russians had two 20-25 megaton warheads targeting each control center for these. The yields were large because the targeted silos were “hardened” and underground, and the Russian missiles weren’t very accurate at the time. Note that these high-yield bombs were dirt-diggers designed to reach the American silo control centers, so try imagining what a ground burst of 25 megatons would do not only to the surrounding (large) area but to those areas/states downwind of the huge amount of radioactive fallout generated. Not pretty, to say the least. About 400 of these silos still exist, and though the Russian missiles are now more accurate and therefore somewhat smaller (0.8 megaton), a ground burst of this size would still create a lot of havoc. Some of the Russian launch complexes are also in Ukraine.
This book excerpt in the HuffPo is absurd:
“Those are 450 ICBMs still capable of reaching targets around the world as quickly as you could have a pizza delivered to your door. This represents countless megatons of thermonuclear material— enough to turn the world into what journalist Jonathan Schell once warned would be a “republic of insects and grass.””
The megatons are in fact easily counted. Two hundred of the Minuteman III missiles are being or have been fitted with single W87 warheads, yielding 300 kilotons each. That totals 60 megatons. The other 250 retain their old, triple-MIRVed W78 warheads with up to 350 Kt, for a maximum of 262.5 Mt. The grand total then is under 322.5 Mt.
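The arithmetic in the comment above is easy to verify, using the warhead counts and yields as stated there:

```python
# Totals as stated in the comment: 200 Minuteman III missiles with single
# 300 kt W87 warheads, and 250 retaining three 350 kt W78 warheads (MIRVed).
w87_total_mt = 200 * 300 / 1000       # 60.0 Mt
w78_total_mt = 250 * 3 * 350 / 1000   # 262.5 Mt
grand_total_mt = w87_total_mt + w78_total_mt
print(grand_total_mt)  # 322.5
```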
It is preposterous to assert that 950 warheads each yielding 300 to 350 Kt could turn the world into “a republic of insects and grass”. In the first place, they’d be used against enemy military targets, since they were designed to attack Soviet ICBM silos. But even if they were used in a homicidal, optimum burst height attack against the largest cities in the world, they couldn’t kill all humans (probably fewer than a billion), let alone everything except insects and grass.
The order of magnitude range of megadeaths in such an improbable attack would be 100 to 1000, most likely in the low hundreds.
Who cares what is causing the CO2 amount to rise? It doesn’t do squat to alter temperature so it’s irrelevant
Well, if it does, it alters it to the beneficial side, so it’s all good. Relax, and roll with the punches, er, uh, adapt.
Just a question… not so much about radioactive decay, but thinking about exponential “decay” as part of a resonant cycle: would/could the disturbance amplitude INCREASE again, and then too cyclically decay exponentially to some maximum? What would cause that to happen, and what would be the parameters of such a cyclical event?
Energy tends to dissipate. You might have several cycles running in a system and the amplitudes may coincide to give a ‘bump’, but the overall signal would diminish. Unless you are continuing to feed energy into the system.
Thanks, Alex. In this case, we’re not measuring energy. We’re measuring carbon and the carbon cycle, which doesn’t “tend to dissipate”.
Simple. The elephant is enlarging, outgassing, and net sinking. We need more biologists touching the elephant.
Yeah, sure, I’m as blind as everyone else.
I have no opinion concerning Dr Salby’s ideas because I’m not a scientist nor related professional, my only interest is as a taxpayer and forced consumer of idiotically expensive wind-generated power.
Here’s an oddity though which I’ve never seen explained, according to Law Dome ice core proxies, CO2 started to rise …
… long before human CO2 emissions from fossil fuels kick-off just after WW2:
“long before human CO2 emissions from fossil fuels kick-off just after WW2”
That is analysed here. The initial rise was due to the forest clearing that came with European colonisation.
Or rice, or sumpin’. Nick’s not sure.
But he’ll confidently assert.
So what caused the fall of CO2 between the Cambrian and the Carboniferous? Presumably palaeo-tourism by our time-travelling descendants. (It has to be humans, at least we’re agreed on that; I mean – what else could it be??)
What you are saying, Nick, is that the CO2 rise chart is not accurate. You are saying that the AG CO2 output by mankind started much earlier and was much more massive than all the chart makers claim. The implication is that men with hand axes can change the composition of the atmosphere and therefore the planetary climate.
“The initial rise was due to the forest clearing that came with European colonisation …” Nick Stokes 1:19 am.
Lord knows how Dr Houghton (Dr. Houghton is an ecologist with interests in the role that terrestrial ecosystems play in climate change and the global carbon cycle) came up with global land use data back to 1850.
Emeritus Professor Michael Williams at Oxford, while recognising the historical uncertainties of forest clearing, burning etc., has dismissed the notion of pristine forests and pastures prior to European colonisation:
“Whether in Europe, the Americas, Africa, or Asia, the record is clear — the axe, together with dibble-and-hoe cultivation, and later the light plow, often integrated with pastoral activity in Old World situations, reduced the extent of the forest. Fire was particularly destructive in this process. It was not a pristine wilderness in which the indigenous inhabitants were either incapable or unwilling to change anything. Everywhere, it was a far more altered world and forest than has been thought up to now”:
“The implication is that men with hand axes can change the composition of the atmosphere”
Men with hand axes changed the landscape, in US, Australia, Canada. We know that. You can calculate the carbon implications. Houghton and others have done that.
Nick, why can’t your analysis be wrong?
Over the course of the Pleistocene the continent of Africa, at least, has alternated between dominance of forest and grassland. This resulted in the selection of adaptability and evolution of humans. Due to climate change (the natural, real kind) linked to glacial phases. How was this related to CO2 and why was that bad or good?
Any data regarding forest CO2 uptake vs corn field CO2 uptake vs rice field?
Carbon Indians theory:
Most Indians died from diseases after Columbus discovered America,
and thus their practice of burning the prairie ended, diminishing CO2
and starting the Little Ice Age.
Nice theory…not sure of the timings… but if there was a missing /sarc it could be totally approved dogma.
… CO2 started to rise …… long before human CO2 emissions from fossil fuels kick-off just after WW2:
Yes. About 1750. When the Industrial Revolution started. In England…
I’d also challenge Willis’s 42%. It’s more like 55% and is not constant, but very slightly increasing. Hard to explain, if I’m right, and I’m not sure.
Recruitment of negative feedbacks, probably biological.
Hmmmm. I may have been thinking backwards about this. Nonetheless, I maintain that the percent of new emissions that is sequestered is not a constant, but slightly increasing, even in the face of rising emissions. If I’m wrong, and it is a constant, the increasing sequestration still needs a ready explanation.
There are two different numbers. The CO2 increase is about 42% of total CO2 added (including land use change) or 55% of fossil fuel emissions alone.
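The two figures are mutually consistent if land-use emissions are roughly 30% of fossil emissions. A back-of-envelope sketch with illustrative round numbers (the 9 GtC/yr figure is an assumption, not a value from the thread):

```python
# If fossil emissions are F and land-use emissions are L, the same
# atmospheric increase A yields two different "airborne fractions":
#   A / (F + L) -> ~42%   vs   A / F -> ~55%
F = 9.0                       # fossil fuel emissions, GtC/yr (illustrative)
L = F * (0.55 / 0.42 - 1)     # land-use emissions implied by the two fractions
A = 0.55 * F                  # atmospheric increase, GtC/yr

print(round(A / F, 2))        # 0.55
print(round(A / (F + L), 2))  # 0.42
print(round(L, 2))            # land-use term, ~2.79 GtC/yr implied
```

So the pair (42%, 55%) is one observation expressed against two different denominators, not two conflicting measurements.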
Absence of correlation indicates absence of direct causation; correlation does not indicate causation.
Hence, the agreement of the red and black lines in Willis’ Figure 3 only indicates a possibility that the recent rise in atmospheric CO2 concentration is caused by anthropogenic (i.e. man-made) CO2 emissions overloading the ability of the carbon cycle to sequester all the CO2 emissions both natural and anthropogenic.
At issue is whether that possibility is true: e.g. the IPCC says it is and Salby says it is not. Available data is not sufficient to resolve this issue although there are people on both ‘sides’ of the issue who select data to support their claims.
However, the recently launched OCO-2 satellite promises to provide data capable of resolving the issue in less than a year from now.
It is interesting to note that the preliminary NASA satellite data seems to refute the IPCC CO2 sink and source model supported by Willis in his above article.
I commend this essay on WUWT where Ronald D Voisin considers “Three scenarios for the future of NASA’s Orbiting Carbon Observatory”.
In particular, I ask people to peruse the illustration at the link which shows ‘Average carbon dioxide concentration Oct 1 to Nov 11, 2014 from OCO-2′. The very low CO2 concentration over highly industrialised northern Europe and especially the UK contradicts the ‘overload hypothesis’ used e.g. by the IPCC to assert that anthropogenic CO2 emissions are overloading CO2 sinks and, thus, causing the recent rise in atmospheric CO2 concentration. The low concentration in that region indicates that over the short time of the illustration the natural and anthropogenic CO2 emissions were all being sequestered local to their sites of emission. Clearly, this finding directly contradicts the ‘overload hypothesis’: however, it is for a very short time period and, therefore, the finding may be misleading.
When at least an entire year has been monitored by OCO-2 then it will be possible to observe if the anthropogenic CO2 emissions are or are not overloading the ability of the CO2 sinks to absorb them. We only have to wait a year for OCO-2 to indicate if the IPCC or Salby or neither of them is right. And there is no good reason to pretend anybody now knows which of them is right.
I failed to provide the link to Voisin’s article. Sorry.
This is key. We have poor understanding of the carbon cycle, and as understanding improves, the paradigm will shift. Just how, I dunno.
One thing is for sure, that is that a higher atmospheric CO2 level is good for carbon based life forms. Now we just have to convince ourselves that we are carbon based life forms, and the rest is easy.
Ronald Voisin’s droll comment about NASA’s ad hoc hypothesis explaining an inconvenient OCO-2 image, viz.: “Australian industrial activity may have pushed its CO2 output upwind into the lush forests of Malaysia”, caused much LOLing.
I commented on the original as follows:
Mike Jonas December 29, 2014 at 4:06 pm
169 comments already, and I haven’t read them all, so apologies to anyone who has said this already:
Ronald D Voisin’s analysis is incorrect. If you look at the graph of CO2 concentration against time, you will see that the seasonal variation dominates in the short term. Only over a period of several years is the growth in CO2 concentration apparent. So it is not at all surprising that short term local factors dominate the CO2 pattern at any one point in time. This does not in any way disprove that fossil fuel usage has been the major driver of CO2 concentration over the last few decades.
Your reply says
I respond because your reply demonstrates the truth of my assertions that “there are people on both ‘sides’ of the issue who select data to support their claims” and “We only have to wait a year for OCO-2 to indicate if the IPCC or Salby or neither of them is right. And there is no good reason to pretend anybody now knows which of them is right.”
You clearly are unwilling to wait the remaining months, and those who claim “fossil fuel usage has been the major driver of CO2 concentration over the last few decades” have a responsibility to demonstrate that their claim is right: others only have a duty to demand that they do demonstrate it.
Your arm-waving about different time-scales is twaddle in the context of your argument (it is very relevant to what I think is actually happening but that, too, is not relevant).
Willis provides the Mauna Loa data as his Figure 5 above. There is no indication that the sinks are overloading.
It is not true that as you claim, “Only over a period of several years is the growth in CO2 concentration apparent”. NO! The “growth in CO2 concentration” is the residual of the seasonal variation each year, and it is “apparent” each year. Importantly, that seasonal variation which provides each annual rise indicates that the sinks are NOT overloaded.
The seasonal variation is a ‘saw tooth’ consisting of rapid linear increase followed by rapid linear decrease in each year. The residual of the seasonal increase occurs because the ‘down slope’ is shorter than the ‘up slope’. Importantly, the rate of decrease does not reduce before the reversal which it would if the sinks were being filled: clearly the sinks do NOT fill.
I can also argue this the other way. But the most cogent argument is that the sinks don’t fill. Available data cannot resolve the matter but a year of OCO-2 data probably will.
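The sawtooth argument above can be sketched numerically: a minimal toy model in which the up-slope and down-slope rates are equal but the up phase lasts longer, leaving a positive residual each year with no slowdown in the down-slope rate. All rates and durations are invented for illustration:

```python
# Sketch of a seasonal sawtooth with a longer up-slope than down-slope,
# leaving a net annual residual. Slopes and durations are illustrative.
UP_MONTHS, DOWN_MONTHS = 7, 5    # asymmetric seasonal phases (assumed)
UP_RATE, DOWN_RATE = 1.0, -1.0   # ppmv/month, equal and opposite (assumed)

co2 = [350.0]                    # starting concentration, ppmv (assumed)
for year in range(5):
    for _ in range(UP_MONTHS):
        co2.append(co2[-1] + UP_RATE)
    for _ in range(DOWN_MONTHS):
        co2.append(co2[-1] + DOWN_RATE)

annual_residual = (co2[-1] - co2[0]) / 5
print(annual_residual)  # 2.0 ppmv/year from the phase asymmetry alone
```

Note that the down-slope rate never weakens in this sketch; a sink that was filling up would show a flattening down-slope before each reversal, which is the signature the comment says is absent from the Mauna Loa record.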
re richardscourtney April 20, 2015 at 2:07 am
Allan MacRae, Ole Humlum and others have shown that short-term d/dt CO2 is proportional to SST.
There are several depths of ocean sinks each with their own time constant, upwelling and downwelling currents in Indian ocean, tropics and poles; land, biosphere sinks and sources. It’s massively complex which is why Willis’ single exponential fit does not tell us that much.
OCO-2 may well help show exactly where this is happening, and thus clarify sinks and sources and establish that there is a strong temperature dependency. I'm not sure that one or even several years of data will be enough to work out the centennial component of all this.
rate of change CO2 vs SST
It seems I may not have been adequately clear.
You say to me and I very strongly agree
However, I wrote
Seeing that the sinks are or are not being overloaded is not the same as determining “the centennial component of all this”.
Please note that Mike Jonas has provided an example of the common assertion that the claim of anthropogenic emissions overloading the carbon cycle has to be disproved. That assertion is often made although it is an example of superstition (those who make a claim need to justify while others only have a duty to shout “Prove it!”). However, it seems likely that OCO-2 data will disprove it.
I think we will see that there is a strong temperature dependency short term but that has already been shown. OCO2 will give us some good regional detail.
You are probably right that just one year may be enough to establish that sinks are not being saturated. Though that can probably be seen also from existing data like the annual cycle plot I posted above.
It’s also interesting to look at rate of change at MLO with longer filter.
It seems that you and I have entered a state of mutual strong agreement.
Yes, the Mauna Loa data does provide the suggestion you say. My point is that several things make suggestions but – at present – nothing provides definitive evidence, and accumulation of OCO-2 data for a complete year promises to indicate or refute the assertion that anthropogenic CO2 emissions are overloading the carbon sinks to cause the recent rise in atmospheric CO2.
I am willing to wait for that data although I have more reason than most to not want to wait that long.
Yes Richard, it seems we agree on much of this.
Whether sinks are saturating is not really a yes/no question. It should perhaps be: to what extent, if any, is the rate of uptake of atmospheric CO2 being slowed by partial saturation?
The whole question of saturation seems to relate to the idea of the Revelle buffer: complex interdependencies of various ion concentrations in the surface layer of the oceans. This is the king-pin of the absorption argument, and without it there is little question that the oceans have an immense capacity to absorb CO2.
What do you think we will be able to get from OCO-2 ( Ohhh-CO2 ! ) data that will constrain the idea of the Revelle buffer, which seems still to be largely a speculative hypothesis?
Richard – I am actually arguing much the same case as you. When I say "it is not at all surprising that" I am arguing that it is too soon to jump to conclusions from the short term (<1yr) data and that there is an alternative possible explanation. I argue that we need to wait for several years of data before we can draw particular conclusions. You argue similarly but nominate one year. I argue for several years, because with an annual average increase of around 2 ppm, a single year's data isn't going to be conclusive. When I say "does not in any way disprove" I mean "does not disprove", not "does prove".
The most curious part of your criticism of me is the bit about the sinks not overloading. I emphatically did not say that the sinks are overloading, for a very simple reason: I have done a lot of calculations on ocean and atmospheric CO2, and there is no evidence that I can find that the ocean is becoming saturated; on all measures, the rate of absorption of CO2 by the ocean is either stable or accelerating slightly.
You ask me
I answer; I don’t know.
Please note that my answer is not an evasion. When the data exists then people will be able to discuss what it indicates but, until then, discussion of what the data will or will not indicate is pointless.
“I don’t know” is a scientific statement that has been much neglected in the ‘climate debate’.
Richard Courtney is right – the OCO and corresponding Japanese satellite data expose 99% of “science” on CO2 sources as witchcraft with no factual basis or relevance.
Voisin is right. The Sahara desert emits more CO2 than Manhattan. Suck on that.
Sincere thanks for the clarification.
I apologise if I misunderstood you and – I hope – your clarification will correct any misunderstanding I provided for onlookers.
For clarity to assert any similarity and/or difference between our views, I address your saying to me
My point is about the specific issue of whether the recent rise in atmospheric CO2 concentration is caused by anthropogenic CO2 emissions overloading the ability of the sinks to sequester all the total CO2 emission (i.e. both anthropogenic and natural).
If that overloading is causing the rise then all regions of major anthropogenic CO2 emissions (i.e. major industrial regions) must have an above average atmospheric CO2 concentration over a year if the rise occurs in that year.
As my first post in this sub-thread explained
The important point is the ability of sinks near to the anthropogenic emissions to sequester the total of emissions (both natural and anthropogenic) near to the anthropogenic emissions.
Richard, the Mauna Loa CO2 data has only about one third of the cyclic variation that occurs at the north pole, where the amplitude is more like 18-20 ppm. And the up slope is for about seven months, and the down slope for five months.
So the down slope at the NP is able to expunge the 120 ppm excess CO2 (over 280 ppm) in about 30 months, so the decay time constant is about 2.5 years for whatever goes on at the NP.
My guess is that it is the fresh water sea ice melting, to become CO2-deprived ocean water, which then rapidly sucks up (or down) its Henry's Law amount of atmospheric CO2.
When the refreeze starts in the fall, the CO2 segregation coefficient between the liquid/solid phases expels the CO2 from the ice into the sea water from which it is then expelled to the atmosphere, because the cold ocean water is already at its Henry’s Law saturation level.
Just my opinion.
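The arithmetic in george e. smith's comment can be checked with a quick back-of-envelope sketch. The 18-20 ppm amplitude and five-month down slope are the figures quoted above, not measured values, so the result is only as good as those inputs:

```python
# Rough check of the north-pole CO2 drawdown arithmetic quoted above.
# Inputs are the comment's round figures, not measured values.
seasonal_drawdown_ppm = 19.0      # ~18-20 ppm seasonal amplitude at the NP
downslope_months = 5.0            # length of the drawdown season
rate = seasonal_drawdown_ppm / downslope_months   # ~3.8 ppm/month

excess_ppm = 120.0                # excess over the assumed 280 ppm baseline
months = excess_ppm / rate
print(round(months), round(months / 12, 1))  # -> 32 2.6  (close to "about 30 months")
```

With slightly different inputs (e.g. a 20 ppm amplitude) the answer lands right on the quoted 30 months, so the comment's 2.5-year figure is internally consistent.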
Thank you George E
I have been saying it for some time and the math doesn’t look too wrong to countenance it. There is lake ice and snow cover to consider as well. Ice and snow have basically zero CO2 in them. Not too surprisingly they appear in winter and disappear in summer (mostly). I am not so sure about Edmonton…
george e. smith
Thank you for your "opinion" that – as Crispin in Waterloo says – fits the data better than the 'overload hypothesis' promoted by e.g. the IPCC.
I reiterate my point that we need data to assess the possible "opinions", and I have high hopes that OCO-2 will provide the needed data.
Obviously, we need to see data from a full year (i.e., all 4 seasons) before reaching preliminary conclusions (although we already know from the Japanese satellite data that the IPCC assumption was questionable).
I understood that NASA would be releasing the data every 3 months. There ought to have been a March update (being 3 months after the December 2014 release). That update is about 1 month overdue.
Does NASA not like what the Nov 12 to Feb 11th data plot/photo/scan shows?
Yes. I agree with all you say and imply. I again commend the comments on Voisin's article, which is here
It does seem that Voisin’s most cynical suggestion may be coming true.
The new data are available at:
NASA successfully launched its first spacecraft dedicated to studying atmospheric carbon dioxide at 2:56 a.m. PDT (5:56 a.m. EDT) on Wednesday, July 2, 2014.
richardscourtney, that year you refer to is nearly up.
So, do we trust the data? Will the raw data require pre-Paris adjustment?
You ask me
Only NASA can provide proper answers to your questions. I offer my response.
As Ferdinand Engelbeen says here
That URL provides this link to URLs to actual data files.
Those files presently run from 2 January 2015 to 9 March 2015. The preliminary data released in graphical form were for the previous October.
My opinions on the above facts are:
The satellite was launched in early July 2014 but the earliest data is for October 2014 which seems a long time (i.e. 3 months) for establishing and calibrating the satellite for its purpose.
It is strange that the data is not collated in graphical form for each month as they were for October 2014: the software to produce the graphs clearly exists (or did when the October data was collated).
It is strange that the data was obtained for October 2014 but the provided data file does not commence until 2 January 2015.
If OCO-2 data for October 2014 to October 2015 were available then there would be a continuous 12-month OCO-2 data set prior to the IPCC Paris Meeting.
If OCO-2 data is only available for after 2 January 2015 there would not be a continuous 12-month OCO-2 data set prior to the IPCC Paris Meeting.
I anticipate that there will not be a continuous 12-month OCO-2 data set prior to the IPCC Paris Meeting because embarrassing facts need to be avoided at the time of the Meeting.
Is that one of those things, rsc, they call a falsifiable prediction?
Nice informative article as always Willis. However, a few things are not correct.
This argument is often brought up by warmists and is incorrect.
CO2 absorption is a two-way process. Both oceanic and organic land-based sinks also re-emit; the annual in and out flow is almost two orders of magnitude bigger than the averaged annual change. This is discussed in Gosta Pettersson's papers. ( Something Salby also points out. )
This means that C14 molecules get emitted from ocean and organic decay so the C14 curve does NOT just reflect uptake of individual molecules.
Your description above stops at an individual molecule being absorbed by a sink and thus implicitly assumes it stays there. If that were the case you would be right, but it is not the case.
I don’t think Salby is rigorous and I don’t like his style. I remain unconvinced by his presentation and want to see a proper paper, not a slide presentation, but I don’t think the temperature vs CO2 question has been properly addressed. In that he is correct.
I would draw attention to the deviations in your fig3; these are close to the bit Salby highlights and probably do indicate a temperature dependency.
That looks small, so appears it may be negligible but the differences in temperature were also small, so it may still be a significant part of the centennial scale rise.
This needs looking at properly. I don’t think Salby is doing that but he is raising valid questions.
The reason that the Bern model decay is so long is that it has an additional 189y time constant. That is a key part of the alarmist hype where CO2 emissions will be around for centuries and continue to produce warming even if we sign up to a legally binding agreement to go back to the stone age later this year, in Paris.
As you point out, we cannot even work out the short time constant accurately, so 189y is a joke.
Don’t be too quick to dismiss the temp-CO2 question because you don’t like Salby’s work. This is not a simple question, so don’t imagine that fitting one exponential to a slightly curved line necessarily establishes the underlying cause(s).
“so it may still be a significant part of the centennial scale rise.”
Well, here is the plot of CO2/temp over the past 400,000 years. Temperatures go from warmer than present to about 8°C colder. CO2 varies between 180 and 280 ppm. It’s now at 400 ppm. We haven’t been emerging from an ice age in the past century.
Nick Stokes – No that’s not a valid statement, because the two sets of data (the 450k-yr and the current CO2) are at very different resolutions. A change such as the recent CO2 change would likely not show at all in the Vostok data.
Last time I looked at the Vostok ice core data, I seem to recall the gaps between the data points were large enough to miss the entire Christian era, never mind 25 years of alarming warming or 18 years of not-so-alarming warming. Neither is the flip between two different quasi-stable climate states very informative to the current situation in an interglacial.
If you look at the last interglacial in that graph you will see that the CO2 variations are fairly flat, whilst the temp data is in a heavy downward slide. Clearly one single linear relationship is not an appropriate way to explain the relationship.
This analysis shows 8 ppm/year/kelvin for inter-annual variation and 4 ppm/year/kelvin as the inter-decadal ratio.
Also the large swing around the 1998 El Nino gives a similar result of around 9 ppmv/K/a at the inter-annual scale (linked above).
As even a trivial single exponential relaxation would show, the rate of change will diminish with the period length.
From this we can get a ball-park estimation of centennial change of the order of 2 ppmv/K/a, i.e. 200 ppmv/K/century, based on the apparent halving of the rate of change with each order of magnitude.
Since temps have been steadily rising for the last century, that gives an estimation of the magnitude of the temperature-driven change.
0.7K * 200ppmv/K/century = 140 ppmv
This at least merits a more rigorous analysis. It’s a shame Salby did not do one.
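The commenter's ball-park extrapolation can be written out explicitly. A minimal sketch: the 8 and 4 ppmv/K/year sensitivities, the halving-per-order-of-magnitude assumption, and the 0.7 K warming figure are all the commenter's inputs, not established values:

```python
# Sketch of the halving-per-order-of-magnitude extrapolation described above.
# All inputs are the commenter's round figures, not established values.
sensitivity = {"inter-annual": 8.0, "inter-decadal": 4.0}        # ppmv/K/year
sensitivity["centennial"] = sensitivity["inter-decadal"] / 2.0   # assume halving continues

warming_K = 0.7                                    # assumed warming over the century
ppmv_per_K_per_century = sensitivity["centennial"] * 100.0   # 200 ppmv/K/century
estimated_rise = warming_K * ppmv_per_K_per_century
print(estimated_rise)  # -> 140.0 ppmv, as in the comment
```

The sketch only reproduces the arithmetic; whether the halving actually continues to the centennial scale is exactly the open question the comment raises.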
It isn’t a temperature dependency; it’s a rainfall-over-land dependency.
What this graph tells me is that obviously every 100,000 years, men with axes come and clear land, thus increasing atmospheric CO2, just like in the late 19th century, correct?
Sorry, but you’re wrong. Stomata data show that during the Eemian, ambient CO2 was about 330 ppmv or more. Presumably levels were even higher during yet warmer interglacials, although I haven’t read studies on them.
Antarctic ice cores are not a good basis for reconstructing past CO2 levels.
Here’s a discussion of some of the problems with ice core CO2 data and comparison with observations derived from stomata:
It includes a graph covering previous interglacials, showing my above presumption correct.
“A change such as the recent CO2 change would likely not show at all in the Vostok data.”
What it does show is that huge variations in temperature were associated with CO2 variation, but that was quite modest. Something like 10 ppm per degree. The data doesn’t rule out brief spikes, but that would not alter that magnitude of dependence.
To put it the other way around, if the recent <1° rise really has caused a 120 ppm rise in CO2, or large part thereof, what would an ice age do?
Henry’s law for the solubility gives a change of CO2 for a temperature change of around 8 ppmv/K, not ppmv/K/unit-of-time.
Whatever the time needed to reach a new equilibrium: seconds by spraying seawater in a closed air cylinder, to hundreds of years for the whole earth, the same equilibrium is reached: static as well as dynamic.
The time component only shows the speed with which the new equilibrium will be reached: that depends on the partial pressure difference between seawater and atmosphere and the exchange speed (stirring, waves,…) and decreases with decreasing difference, reaching zero when the equilibrium is reached.
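Engelbeen's point, that the pressure difference sets only the speed of approach and not the endpoint, is ordinary first-order relaxation. A minimal sketch with illustrative numbers (the concentrations and exchange rates are made up, not real data):

```python
import math

# First-order relaxation toward a Henry's-law equilibrium: the uptake rate is
# proportional to the remaining partial-pressure difference, so the exchange
# speed k changes only how fast the endpoint is reached, not the endpoint itself.
def relax(c0, c_eq, k, t):
    return c_eq + (c0 - c_eq) * math.exp(-k * t)

c0, c_eq = 400.0, 350.0    # illustrative ppmv values, not real data
for k in (0.1, 1.0):       # slow vs fast exchange, per year (assumed)
    print(round(relax(c0, c_eq, k, 100.0), 3))  # both converge on 350
```

Both runs end at essentially the same equilibrium value; only the trajectory differs.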
Further, temperatures cooled 1946-1975 and have been flat from 2000 to now, but CO2 simply went up all the time with increasing speed…
Stomata data are proxies whose largest problem is that they grow on land, with huge variability and bias in local CO2 levels. The bias can be accounted for by calibrating the stomata data over the last century against direct measurements and… ice core data. But there is no guarantee that the local bias didn’t change over the previous centuries due to (huge) changes in land use in the main wind direction.
On the other side, ice core data resolution gets worse the further back in time you go, but a worse resolution doesn’t change the average over the period of the resolution.
Thus if the average stomata data differ from the ice core data over the full resolution period, the stomata data are certainly wrong…
Ferdinand, we have three proxy records of past CO2 levels and a computer model. Two of the proxies and the computer model agree that over the past hundred thousand years CO2 levels were significantly higher than the third proxy indicates, yet we discard the two proxies on unreliability grounds and ignore the model in order to trust the outlier.
The two proxies that agree are stomata and Greenland ice cores, and the computer model Geocarb III.
The outlier is Antarctic ice cores.
Looks to me like the wrong approach to the problem to assume that the outlier is the correct one.
Sorry for the late reply, it gets difficult to track all the replies…
– Ice cores are direct measurements of ancient CO2 levels, not proxies, be it smoothed over 10 to 600 years, depending on the snow accumulation rate at the origin of the core.
– The Geocarb model is a very rough model based on proxies and simply can’t be used for the past century, as its resolution is multi-millennial.
– Greenland ice cores are unreliable for CO2 measurements because, besides sea salt (carbonate) deposits, volcanic eruptions from nearby Iceland frequently deposit highly acidic volcanic dust. That gives in-situ CO2 formation, and more during the – now abandoned – wet CO2 measurements (melting all ice and evacuating CO2 under vacuum).
– Stomata are proxies, with all the problems that proxies have. In this case confounding factors like drought, nutrients and, above all, unknown changes in local bias…
Thus I prefer the real thing above the surrogate, be it that the higher resolution and higher (local) variability of stomata are interesting to further investigate.
The 14C decay rate is a special case: it is partly residence time and partly decay rate. You have a residence time if all molecules simply swap places and the total mass remains the same:
Atmospheric CO2 / throughput = 800 GtC / 150 GtC/year = 5.3 years.
The total CO2 decay rate is how much CO2 mass disappears into sinks to no return under the extra CO2 pressure in the atmosphere:
Increase in the atmosphere / net sink rate = 110 ppmv / 2.15 ppmv/year = over 50 years.
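The distinction between the two time scales in the comment can be reproduced directly from the quoted figures (Engelbeen's round numbers, taken from the comment itself, not authoritative values):

```python
# Residence time vs adjustment time, using the round numbers quoted above.
atmosphere_gtc = 800.0        # total atmospheric CO2, GtC
throughput_gtc_yr = 150.0     # gross annual exchange flux, GtC/year
residence_years = atmosphere_gtc / throughput_gtc_yr

increase_ppmv = 110.0         # rise above pre-industrial, ppmv
net_sink_ppmv_yr = 2.15       # net removal rate, ppmv/year
adjustment_years = increase_ppmv / net_sink_ppmv_yr

print(round(residence_years, 1), round(adjustment_years, 1))  # -> 5.3 51.2
```

The order-of-magnitude gap between the two numbers is the whole point of the comment: swapping molecules is fast, removing excess mass is slow.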
For 14C, the ocean surface and the bulk of vegetation were simply swapping 14C with the atmosphere and were more or less in equilibrium before the 1963 stop of above-ground nuclear tests.
The difference is in the deep oceans: what goes into the deep oceans is the 14C level of today, but what comes out is the level of ~1000 years ago. For the 1960 situation, with the pre-bomb peak around 50% of the bomb spike, that gives for the extra CO2 of that time:
– 41 GtC CO2 out (99% 12CO2, 1% 13CO2 + 100% of the bomb spike).
– 40 GtC CO2 in (99% 12CO2, 1+% 13CO2 + 45% of the bomb spike).
While about 97.5% of all 12CO2 returns and thus 2.5% adds to the decay rate, about 98% of 13CO2 returns, as that was not diluted by low-13C fossil fuels, and finally 97.5%*45% of 14CO2 returns… That means the 14CO2 decay time is longer than the residence time, but still a lot shorter than the decay time for 12/13CO2…
Thus Pettersson and Salby both firmly underestimate the decay rate of the bulk of CO2…
Where the Bern model goes wrong is that it assumes a rapid saturation of the deep oceans, for which there is no sign. Originally it was built for 3000 and 5000 GtC, which indeed would give a much higher permanent level in the atmosphere, but the current ~400 GtC emitted since the start of the industrial revolution is good for a residual 3 ppmv increase in the atmosphere after full equilibrium with the deep oceans…
It’s sloppy, but these are not “entirely different” periods; you’ve upped his sloppiness and gone for factually incorrect.
Since one of the two quantities being discussed is the integral of the other and the rise is only slightly curved, we should see this relationship; it is not really “apples and oranges”.
… neither is it “meaningless”. It is not correctly done but it is pointing to a change that merits being looked at correctly.
I’m not defending Salby – I wish he’d do it properly – but that is not a reason to throw out and ignore temperature dependency. Several authors have pointed out that the short term change in CO2, d/dt(CO2), is a function of temperature. No one seems to have satisfactorily determined how much of the inter-decadal to centennial change is also temperature related.
Like most things it is not a binary, black or white answer. Some part of the long term change is certainly due to temperature rise over the last 300 years. If the oceans were not warmer, they would certainly have absorbed a greater proportion of human emissions. The question is, how much?
The trouble with the Bern model, and any purely exponential model, is that it TOTALLY ignores the temperature dependency which is clearly seen in short term change. Finding fault with Salby’s work does not address the question.
If there is a significant long-term rise due to temperature, this could easily be falsely contributing to the long-term 189y time constant in the Bern model, which is at the heart of global warming alarmism.
If CO2 is absorbed in 20-30y, as a time constant of circa 9 years would suggest, the alarmist predictions suddenly lose their alarming magnitude.
Mike, the variability in the rate of change of CO2 has been studied by several others; see the speech of Pieter Tans for 50 years of Mauna Loa data, from slide 11 on:
It is the short-term influence of temperature and drought on (mainly tropical) vegetation.
But the longer term increase >3 years is NOT caused by vegetation: vegetation is a net sink for CO2 over the years.
As the (very) long term influence of temperature on CO2 levels is around 8 ppmv/K the warming since 1960 is good for 5 ppmv of the 80 ppmv increase since that year…
The problem I have is the proposition that CO2 has held steady at circa 280 ppmv for millennia. Most things in the natural world fluctuate; indeed, CO2 fluctuates widely from place to place, day to day and month to month. It therefore seems incredible to believe it should not change from 280 ppmv by nature alone.
The ice core methodology on which this theory is based must be flawed. I suspect that as CO2 diffuses and gets compressed while the firn compacts, the record does not give an accurate picture of annual CO2 trends. It certainly does not match real-world measurements taken in the 19th and 20th centuries.
The best resolution ice cores are better than a decade and span the last 150 years. Direct measurements taken over the oceans in that period are around the ice core CO2 levels and there is a 20 year overlap (1960-1980) between the ice cores and direct measurements at the South Pole within +/- 1.2 ppmv.
The current natural fluctuations as measured worldwide are +/- 8 ppmv maximum seasonal and +/- 1 ppmv over 1-3 years around the trend. That is not measurable in the ice cores, but hardly of interest as the variability levels out after 2-3 years.
What the discrepancy between C14 and total CO2 half lives shows is that CO2 is being absorbed and re-emitted by (presumably) the biosphere. And that what is re-emitted is ‘old’ CO2 that has been there a long time, like before atomic tests.
This is of course consistent with organic decay and breakdown and indeed plant eating organisms etc etc.
What this raises is the possible mechanisms by which this occurs, and how these relate to climate change.
There is after all a huge amount of evidence that rising temperatures increase the amount of CO2 emitted by the biosphere.
It may turn out that in fact the climate change alarmists have it all back to front: human emission of CO2 is largely irrelevant, since total atmospheric concentrations may be driven instead by the rise in temperatures 50-100 years ago…
..which were caused by something else entirely…
There is a dilution effect at each interchange. This will be much greater in the ocean, where there is more mixing. Leaf decay is mainly of leaf growth from the same or previous year. Wood decay will introduce a longer lag. From memory, the oceanic annual cycle is about twice the CO2 mass of the land-based cycle.
This needs to be analysed as a rate reaction. See Gosta Pettersson’s work for more detail on chemical engineering explanation of rate reactions. This also gives the main time constant of the order of ten years.
You may want to consider the questions I posed in this post and the answers given by Ferdinand Engelbeen in the ensuing thread.
Yeah – seem to be on parallel tracks there.
Add a re-emission lag and it gets complex. Add non linearity and all bets are off..
Half-life is the loss of half the radioactivity over a period, not half the mass.
Half-life length shows how active/dangerous a particular isotope is. A very short half-life indicates a very dangerous substance; a very long half-life indicates no real danger. 96Ru has a half-life of 3.1×10^13 years, so it poses no real danger from an ionising radiation standpoint (though how that half-life was calculated I have no idea).
John, the activity falls by half because half the mass of the radio-isotope no longer exists; it’s the same thing.
How dangerous something is depends upon the activity ( becquerels ) and nature of the radiation ( alpha being the most destructive if ingested ).
If you have 300 becquerels of a long half-life isotope it is just as dangerous as 300 becquerels of a short half-life isotope. However, it will remain so for millennia.
So your idea that “a very long half life indicates no real danger” seems rather confused.
For a very long half-life isotope to have the same activity as a very short half-life isotope, you would need a proportionally larger amount of it. As an example, to get the same disintegrations per second from something with a 1000-second half-life as from something with a 1-second half-life, you have to have 1000 times as much material to start with.
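This follows directly from the activity relation A = λN with λ = ln 2 / t½: for a fixed activity, the number of atoms needed scales linearly with the half-life. A minimal check:

```python
import math

def atoms_for_activity(activity_bq, half_life_s):
    """Atoms needed for a given activity, from A = lambda * N with lambda = ln2 / t_half."""
    decay_const = math.log(2) / half_life_s   # lambda, per second
    return activity_bq / decay_const

# Same 300 Bq from a 1 s and a 1000 s half-life isotope:
n_short = atoms_for_activity(300.0, 1.0)
n_long = atoms_for_activity(300.0, 1000.0)
print(round(n_long / n_short))  # -> 1000: a thousand times the material, as stated above
```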
Mike, when you say “half of the mass of the isotope no longer exists,” that isn’t strictly true.
Virtually ALL of the mass still exists but now it is something else, either a different element or some small particle, such as neutron, proton, alpha or whatever.
Mass is not destroyed in radioactive decay (well, not much is; just binding energies etc.).
Well, more accurately, a half-life is the time to lose half of whatever is being counted. For example, if I have a contaminant-filled room (of soot or paint particles or perfume – regardless of good, bad, or indifferent particles) the half-life is the time for half of them to get absorbed, get combined or (if radioactive) decay.
But a short half-life also means that after a short period of time, the radiation will no longer be present. One of the most troublesome radioactive substances around a reactor is cobalt-60, with an inconvenient half-life of 5-6 years: long enough to always be around, short enough to decay rapidly.
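The remaining fraction after any interval follows from the half-life alone. A small sketch using cobalt-60's roughly 5.27-year half-life (the "5-6 years" mentioned above):

```python
def remaining_fraction(t_years, half_life_years):
    """Fraction of a radio-isotope left after t_years: 0.5 ** (t / t_half)."""
    return 0.5 ** (t_years / half_life_years)

# Cobalt-60, half-life ~5.27 years: after 30 years only about 2% remains,
# which is why it is "long enough to always be around, short enough to decay rapidly".
print(remaining_fraction(30.0, 5.27))
```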
Error bars. And trust in the people making the measurement.
You also have to have a very large sample of very long half-life material to try to shrink those error bars somewhat. If you get enough of it in one place you can get a few disintegrations per day. If you monitor that for a few thousand days and get a consistent count over the entire period, you can then extrapolate the half-life out to millions of years with fairly good confidence. Smaller error bars = longer observation time and larger source size. This is all assuming that the apparatus continues to operate throughout the period.
Well U-235 has a half-life of about 704 million years but it only takes a small amount to make a big boom.
This is true, but when you set up the right conditions to start a large neutron concentration, you can forget about half-life and move into the realm of nuclear fission reactions – two very different regimes. Fission will just pretty much happen when you have a large supply of low-energy neutrons lying around – 235U+n has a very large reaction cross-section for low-energy neutrons.
Be careful not to confuse Activity, measured in Becquerels (Bq) or Curies (Ci), with Dose Rate, measured in Grays (Gy) or REM (Roentgen Equivalent Man).
Thank you, Willis.
The 14C bomb dispersed quickly through the atmosphere and into the ocean due to nearly zero partial pressure with respect to the tagged 14C.
This would be true of any gas that one could tag. A bomb of the tagged mixture will have a half life of ~8.6 years when released in an untagged environment, according to the 14C bomb test.
The Bern Graph, on the other hand, takes for granted a 280ppm floor, as though rock formation would cease.
I’m not sure that Salby’s fit to C14 is that accurate (he does not show his residual and I don’t think it would be very good), though accuracy is not essential to his point.
This model seems to give a very close fit, though Pettersson says it should not be fitted directly to the C14 ratio anyway.
It is interesting that the ratio of the magnitudes of the short and long exponentials in this model is exactly the same as Salby’s 8.64-year time constant. Mathematical coincidence?
Did anyone catch the source of the “tau=59.59470829” in Mr. Eschenbach’s code?
I understood it was his own fit but I’m not sure he said explicitly what was fitted to what.
Thanks. Yes, the high number of decimal places suggests that the computer spit it out of some operation, but it would be nice to see what that operation was.
The trouble with this is that emissions have not had a constant rate of growth since the pre-industrial period.
It is always possible to fit an exponential to a slightly curved segment of line like cumulative CO2 but with that length of data and very light curvature the uncertainty will be large. Also the pre-ind level is uncertain and will change the fit.
The problem is that the integrated cumulative sum removes most of the detail that may enable us to analyse the system. That is why the rate of change stuff I linked above may be more informative.
Joe Born April 20, 2015 at 3:24 am
Thanks, Joe, good question. For reasons of laziness, I did that in a separate spreadsheet. I used tau and the pre-industrial concentration as the variables. I started with the initial value of atmospheric CO2 in 1959. Succeeding values were calculated as
CO2[t] = P + (CO2[t-1] + E[t] - P) * alpha
where the subscript “t” is time, CO2 is atmospheric CO2 concentration, P is the pre-industrial value, E is annual emission (in ppmv), and alpha is exp(-1/tau).
I used Excel’s “Solver” function to optimize the values of P and tau in order to minimize the sum of the squared residuals. That gave me ~ 57 years for tau and ~ 283 ppmv for P, the pre-industrial CO2 concentration. Since there were no constraints on the fitting process, I consider the fact that the best fit for P (283 ppmv) is quite close to the generally assumed pre-industrial value of 275 ppmv to be a confirmation of the method that I am using.
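Willis's Solver procedure can be reproduced outside Excel. The sketch below uses his recursion exactly as stated, but with a synthetic emissions series (NOT his actual data) and a crude grid search standing in for Solver; the 2%/yr emission growth and the 316 ppmv 1959 starting value are illustrative assumptions:

```python
import math

def model(tau, p, emissions, co2_start):
    """Willis's recursion: CO2[t] = P + (CO2[t-1] + E[t] - P) * exp(-1/tau)."""
    alpha = math.exp(-1.0 / tau)
    co2, series = co2_start, []
    for e in emissions:
        co2 = p + (co2 + e - p) * alpha
        series.append(co2)
    return series

def sse(tau, p, emissions, target, co2_start):
    """Sum of squared residuals, the quantity Solver minimises."""
    return sum((m - t) ** 2
               for m, t in zip(model(tau, p, emissions, co2_start), target))

# Synthetic demo: assumed ~2%/yr emission growth (ppmv/yr), assumed 316 ppmv in 1959.
emissions = [1.0 * 1.02 ** y for y in range(60)]
target = model(57.0, 283.0, emissions, 316.0)   # "data" made with tau=57, P=283

# Crude stand-in for Excel Solver: grid search over tau and P.
best = min((sse(t / 10, pp / 10, emissions, target, 316.0), t / 10, pp / 10)
           for t in range(400, 801, 5) for pp in range(2700, 2901, 5))
print(best[1], best[2])  # -> 57.0 283.0, recovering the generating parameters
```

Since the synthetic "data" were generated with tau = 57 and P = 283 (values close to Willis's own fit), the search recovers them exactly; with real, noisy data the residual surface would be shallower and the parameter uncertainty correspondingly larger.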
Thanks a lot. I’d thought it was something like that, but I hate to guess.
Superficially written. Neither clarifies nor clearly rebuts.
Shub Niggurath April 20, 2015 at 4:13 am
What was “superficially written”, and by whom? What doesn’t clarify, and what is not clear? What is not rebutted?
Your comment is totally opaque. This is why I ask people to quote what they object to.
In reply to:
Salby does total mass balance of CO2 in the atmosphere and states that he does total mass balance.
Salby’s analysis and conclusion are correct. It appears you did not understand Salby’s presentation, and it appears you bring emotion into scientific analysis, which is purposeless and clouds your summary/blocks your understanding of the issues/science.
The sum of all CO2 inputs in the atmosphere (anthropogenic is only one and volcanic eruptions are not the major source of natural CO2) minus the total sinks of CO2 in the atmosphere equals rise of CO2 in the atmosphere. That is mass balance.
As Salby correctly states, the only input of CO2 into the atmosphere which we know with certainty is anthropogenic CO2. There is an immense amount of CO2 and CH4 flowing into the atmosphere from the deep earth. For example, CH4 levels in the atmosphere mysteriously doubled for no accepted physical reason and then stopped rising.
As Salby’s notes anthropogenic CO2 continues to increase in rate post 2002 yet the rate of rise of atmosphere CO2 does not increase. That is a fact a paradox.
Salby does a calculation of the maximum possible change in sink rate and notes that the sink rate is proportional to total atmospheric CO2, not to the change in atmospheric CO2.
He finds that the maximum possible bound on the change in sink rate does not explain the observation that post-2002 the rate of increase in atmospheric CO2 is constant even though there is a rise in total anthropogenic CO2 emissions.
The CO2 sinks do not increase the percentage of CO2 that is sequestered when total atmospheric CO2 increases, with the exception of plants, which thrive when atmospheric CO2 increases.
The major source of new CO2 into the atmosphere is deep core CH4 that is released into the biosphere as CH4, CO2 (micro organism eat the CH4), and liquid petroleum.
The key to solving the CO2 puzzle is to read and understand the late Nobel Prize winning astrophysicist’s book The Deep Hot Biosphere: The Myth of Fossil Fuels. It appears you have not read that book or the related papers. I am working on a summary of Gold’s theory and book for this forum. I will include an explanation of Salby’s theory and explain the related mechanisms.
Atmospheric CH4 is about to fall and atmospheric CO2 is about to fall. I say that because I understand physically what is happening.
It appears you also did not listen to or understand Salby’s previous video, which discusses irreversible sinks of CO2 versus movement of CO2 into the surface ocean, which is reversible. The IPCC Bern CO2 model assumes there is very little exchange of deep ocean water with surface ocean water. Under the Bern assumption, the C14 carbon from the surface ocean water should therefore linger. It does not, which is one of the many observations supporting the assertion that the Bern model is incorrect and that the half-life of CO2 in the atmosphere is between 3 and 7 years.
“The key to solving the CO2 puzzle is to read and understand the late Nobel Prize winning astrophysicist book The Deep Hot Biosphere”
What Nobel Prize did he get? Was he a participating author of an IPCC report?!
Perhaps he is a citizen of the EU; they won the Peace Prize.
The Nobel committee will need to rescind the award made to the IPCC as the entire IPCC scientific premise is incorrect.
It doesn’t do anything for the reputation of an outstanding scientist to make false claims about his being a Nobel laureate.
Totally. Plus it dilutes the accomplishment of those of us who actually received one.
Some rain on your abiotic petroleum parade. It’s ‘not even wrong’. Gold’s book is crackpot speculation. The Swedish experiment based on Gold found only trace oil in pump-contaminated drilling mud. The Russian claims about the Ukraine deposits are bad geology. Those reservoirs are sourced from standard marine shales overthrust by fractured basement rock.
There is obviously abiotic methane. It exists in the outer solar system (Titan), but apparently not on the inner solar system’s rocky planets. It is produced on Earth by serpentinization (mineral hydration) of ultramafic rock catalyzed by iron. That has been known for decades, and at least 7 European seeps have been identified. Very recently, the first meaningful accumulation of such abiotic methane was discovered in methane clathrate on the Fram Strait seabed.
But not petroleum.
There most certainly is abiotic methane on the rocky inner planets. You yourself comment on Earth’s abiotic methane. Methane on Mars might be biotic, but probably isn’t. Mercury’s tenuous atmosphere contains trace amounts of methane, almost certainly abiotic. Venus’ atmosphere is loaded with the stuff, presumably abiotic.
And, while not a planet, lunar astronauts detected methane in the Moon’s thin atmosphere as well.
According to Gold’s book “Deep Hot Biosphere”, they pumped up 12 tons of oil (“looking like ordinary crude oil” – Danish Geological Survey) along with 15 tons of fine-grained magnetite. The significance of the magnetite is FIRST of all that Gold considered it to have been produced by microbes that reduced it from another iron oxide. SECOND, it was the magnetite “paste” that clogged up the well, making further drilling impossible. It’s fine with me if you yourself choose to consider 12 tons a mere “trace” if you say exactly that. But I would also inquire whether you knew WHY they stopped drilling (the magnetite). Yes, it was not practical to continue. But that is a far different issue and does not prove anything about whether the oil was or was not there.
Now I have asked you on two other threads about this “trace” issue and you ignored the question.
It also seems to me that some commenters fail to distinguish Gold’s argument from that of present Russian and Ukrainian scientists who advocate abiotic oil through on-going geochemical processes. Gold in fact advocated residual primordial abiotic oil, delivered to earth by meteors and comets, plus biotic contribution from deep microbes feeding on this food source, then themselves serving as organic feedstock for further biotic production of long-chain hydrocarbons.
IIRC Gold’s argument.
Reply to Catherine Ronconi April 20, 2015 at 1:08 pm
Thanks – Quite right. Tommy started with a “Deep Earth” hypothesis for primordial GAS (methane). Then he added life to the upwelling methane: the “Deep Hot BIOSPHERE”. Famously turning thinking around: not life reworked by geology, but geology reworked by life (microbes deep in the rocks).
And he sure could ask the inconvenient questions about conventional wisdom. He could be wrong – but “crackpot” is a very unfortunate term used by some of his critics.
Life has certainly reworked the atmosphere and hydrosphere, so why not the lithosphere as well? Or reworked it in this way, since clearly life has already affected the lithosphere in other ways.
Bernie, you seem worked up and to think I am evasive on this, so I just spent an hour fact-checking from press and scientific commentary at the time of the Swedish experiment. Here are things you can still do to educate yourself on the falsity of abiotic oil (but not abiotic methane).
First, Gold persuaded the Swedish well investors that they would find commercial amounts of methane, not petroleum. They obviously didn’t.
Second, although there are (interestingly) magnetite producing bacteria, their magnetite is nanoparticle sized and could not jam a drill bit. Almost all granites contain either magnetite (iron) or ilmenite (iron/titanium) trace mineralization. Finding ‘fine grained’ magnetite does not mean it came from bacterial sources–especially when drilling granite.
Third, either you misread Gold’s book or in it he misrepresented what was brought up from the bottom of the hole. It was 12 tons of sludge (drill cuttings unavoidably mixed with drilling mud) in which there was some trace oil. It was not 12 tons of oil. See for example http://www.science-frontiers. They reported before and after in #69 and #79. Even the drillers on site thought it came from pump leaks into the drill mud.
Gold shifted his story over time from primordial methane to abiotic oil after this exploit. He also then asserted that undeniable biomarkers in oil just mean abiotic oil sources were contaminated by deep dwelling microbes. Shape-shifting an original theory that much is itself prima facie evidence the theory is wrong.
Don’t take the book at face value. That is like taking AR4 at face value. Big mistake.
Replying to ristvan April 20, 2015 at 2:17 pm
(1) I did NOT misread Gold. See pages 120-121. He said 12 tons of crude oil as verified by the Danish Geological Survey. Perhaps you should read the book. Heck – I have YOUR book!
(2) Your link to science-frontiers goes NOWHERE. It says: “Sorry, We could not find http://www.science-frontiers“. So you leave us out in the cold.
Dr. J.F. Kenney, who worked in Russia under Soviet rule with Russian scientists at the Russian Academy of Sciences, wrote this about Thomas Gold:
The full discussion with cites is here: http://www.gasresources.net/plagiarism%28overview%29.htm
MRW April 21, 2015 at 7:06 am
Thank you for setting the record straight vis-a-vis Gold and the Russians.
There are several interesting sub-threads that have developed from Willis’s essay, but it takes longer to find them in the tangle of comments than to read or reply to them.
Previously, the host had requested that we give the new format a try. Speaking for myself only, I have given it a try, and it doesn’t work.
I would urge the host to reconsider the ‘reply’ format.
William: “Salby does a calculation of the maximum possible change in sink rate base and notes sink rate is proportional to total atmospheric CO2 not the change in atmosphere CO2.”
That was one of the first question marks his presentation raised for me: where did he pull that from? This is typical of his style, which can go unnoticed in a slide presentation but would be a blatant omission in a paper. It’s long overdue for him to put what he has on paper and stop relying on videos of slides.
Since you consider that you understand his work, maybe you can explain how he derived that.
Hello Willis, love your posts.
“Exponential decay also describes what happens when a system which is at some kind of equilibrium is disturbed from that equilibrium. The system doesn’t return to equilibrium all at once. Instead, each year it moves a certain percentage of the remaining distance to equilibrium.”
Not true at all.
Exponential decay is an example of First Order Kinetics. Your equilibrium case is an example of Approach To Equilibrium kinetics. They are NOT the same. You can do a First Order kinetics analysis on an Approach To Equilibrium system and get a pretty good match for two or three half-lives, but after that, the modeled decay is way too fast. After 5 or so half-lives, the discrepancies can get pretty ugly. The math of Approach To Equilibrium is rather more complex and not as well known, so people tend to use First Order instead. As I mentioned, this is usually at least serviceable for two or three half-lives, but then trouble starts.
(Imagine a classroom of physics undergraduates, told to consider a transistor as a linear device, “over a short range”. They are all pumping their fists in the air, chanting in unison “TOO THE FIRST ORDER, TOO THE FIRST ORDER”. You get the idea.)
First Order kinetics describes the reaction A -> B, simple enough. There is no reverse reaction.
Approach to Equilibrium describes the system A -> B and B -> A. There is the reverse reaction.
If you start with all A, at first it looks like First Order, because the reverse reaction is too small to be significant. Over time, the reverse reaction, B -> A grows significant, and First Order no longer works well. At the end, we have equilibrium, where the rates of A -> B and B -> A are equal . And that is a fundamental definition of equilibrium.
Now consider CO2 absorption by the oceans. I do not think we should imply (by our mathematical treatment) that out-gassing is insignificant. I think there is much mischief about CO2 “residence times” because of this.
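The A -> B versus A <-> B distinction above can be made concrete in a few lines of Python. The rate constants here are made up purely for illustration:

```python
# Minimal sketch of First Order kinetics (A -> B only) versus Approach To
# Equilibrium (A <-> B), with hypothetical rate constants.
import math

kf, kr = 0.10, 0.04   # assumed forward and reverse rates, per year
A0 = 1.0

def first_order(t):
    # A -> B only: decays all the way to zero.
    return A0 * math.exp(-kf * t)

def approach_to_equilibrium(t):
    # A <-> B: decays to a nonzero equilibrium A_eq, with rate (kf + kr).
    A_eq = A0 * kr / (kf + kr)
    return A_eq + (A0 - A_eq) * math.exp(-(kf + kr) * t)

# Early on the two curves are close; after several half-lives the first-order
# curve keeps falling toward zero while the reversible system levels off.
early = abs(first_order(5) - approach_to_equilibrium(5))
late = abs(first_order(60) - approach_to_equilibrium(60))
```

With these numbers the two curves differ by only a few percent at year 5, but by year 60 the first-order curve has decayed essentially to zero while the reversible system has leveled off near its equilibrium, which is the "trouble after a few half-lives" described above.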
If the physics graduates were chanting in unison “TOO THE FIRST ORDER, TOO THE FIRST ORDER”, I would recommend they did a course in English first.
They are undergrads, what do you expect?
Don’t confuse the hypothetical chanters with the writer. He who puts words in the mouths of others must spell them correctly too.
Thanks Tony. That seems to echo what I said above: this C14 curve is not the single-molecule residence time.
Gosta Pettersson discusses reversible / non-reversible reactions in his articles:
Looks interesting. I will check them out.
Thanks for the tip.
Mike, the 14C curve is not the same as the 12C/13C decay curve for an excess injection of fossil fuels either:
What goes into the deep oceans has the isotopic composition of 1960, at the peak of the bomb spike, plus some extra 12/13C. What returns has the composition of ~1000 years before: somewhat lower in quantity than what goes into the deep, but with a lot less 14C, from long before the peak.
That makes the 14C curve slower than the residence time, but still a lot faster than the decay rate of an excess 12/13CO2 injection above equilibrium…
I would really be interested in Mr. Eschenbach contacting Dr. Salby and discussing his apparent disagreements with him. Then I’d like to see a distillation of that conversation for those of us who are not at the pinnacle of understanding every last detail. I’d also be interested in seeing anyone who is versed in this subject comment themselves (i.e.: William Astley, who brought up some interesting points).
Hopefully this can be done. It’s what the debate (yes, I said that dirty word) is badly in need of…
What is needed is for Salby to put his hypothesis down on paper and stop messing around with video presentations.
Then everyone can have a look and see whether there’s a valid point being made.
I’m sick of doing freeze frame on a fuzzy video of a slide in a presentation that does not include the derivation of some key aspects. I don’t see that Willis or anyone else needs to be involved in that process. He just needs to stop messing around and publish. Even if it’s on arxiv.org or something.
I remember something in Dr. Salby’s video about “not publishing” till his data was released?
Who could keep “his” data from him? Does he not have copies of it?
Mike April 20, 2015 at 6:09 am
What he said … faffing around with the video was most unpleasant.
The Australian university which fired him “owns” his data and won’t let him have it.
Willis – How does your time constant of 59 years correspond to the Bern Model? If I recall correctly, the Bern Model assumes some fraction of emissions stays in the air permanently. Can you fit the actual data with the Bern Model? The Bern model does appear to be the decay rate of a pulse of enhanced concentration, per your fit.
Good question, RERT. There’s not enough data yet to distinguish between a simple exponential (as I’ve shown above) and a multiple exponential decay. The Bern Model assumes that emitted CO2 takes one of four paths, each of which has a different time constant ranging (from memory) from 3 to 174 years.
If this from Google is a good source http://unfccc.int/resource/brazil/carbon.html, then subject to me understanding correctly the time constants are 2.6, 18, 171 and infinite, as of TAR. (infinite I think because a(0) is non-zero).
Your response raises the question as to how these parameters are fitted if there is not enough data to distinguish from a much simpler model. Either someone has better data (?unlikely?) or most of these parameters are not statistically significant. Given there are 7 of them, hardly a surprise I guess!
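For reference, here is a sketch of how a Bern-style multi-exponential pulse response differs from a single exponential. The coefficients are the TAR-era values as I read them from the linked UNFCCC page, so treat them as illustrative rather than authoritative:

```python
# Single-exponential pulse decay versus a Bern-style multi-exponential sum.
import math

def single_exp(t, tau=59.0):
    # The simple one-time-constant fit discussed above.
    return math.exp(-t / tau)

def bern_tar(t):
    # a0 is the "infinite" fraction noted above (a non-zero asymptote).
    # Coefficients are my reading of the TAR values -- illustrative only.
    a = [0.152, 0.253, 0.279, 0.316]
    tau = [float("inf"), 171.0, 18.0, 2.57]
    return sum(ai * math.exp(-t / ti) if math.isfinite(ti) else ai
               for ai, ti in zip(a, tau))

# Both start at ~1 for a unit pulse, but the Bern curve never falls below a0.
residual_200yr_single = single_exp(200)
residual_200yr_bern = bern_tar(200)
```

Over the first few decades the two shapes are hard to tell apart, which is why the data so far cannot distinguish them; it is only at century-plus timescales that the Bern curve's permanent fraction dominates.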
I question the assertion that the total global CO2 input is a known. The Carbon Satellite preliminarily indicates that natural sources are significantly higher than has been theorized. In addition, the interpretation of the Keeling curve in terms of anthropogenic contribution is interesting but most likely flawed, since the atmospheric CO2 increase is +/- linear and there is a difference between correlation and causation, particularly if the larger sources are imperfectly understood. You can tune anything that reflects your bias. Note the immediate divergence between the IPCC models and reality when the models shifted from hindcasting to prediction. CO2 lags temperature in geologic time. Therefore it can not be the cause. The weeds are interesting and inform, but maintaining a 30,000 foot view is essential.
halftiderock April 20, 2015 at 6:06 am
In that case, please have the courtesy to quote whoever it was that said it was known. As it stands, there’s no clue what you object to.
As I look at it, Figure 7 actually is very interesting. It compares the 14C curve and the Bern model, which seem to be different. On second look, the 14C curve is First Order kinetics for the uptake of CO2 by the oceans. This is simply the A -> B forward reaction, without any contribution from the reverse reaction. The Bern curve shows the sum of the forward and reverse reactions together.
The difference between the two curves provides a cautionary tale about how you model things, and how you check your starting assumptions.
I suggest you look at Pettersson’s papers. He is a chemist, understands reversible reactions, and explicitly deals with this in his work. His paper 5 shows Bern and C14 and is not very different from Salby’s graph, except that it stops around y2k.
You seem to be talking the same language, so it should make sense to you.
I agree with what Willis said here. Salby is missing some of the key issues.
My own view is that there is a natural equilibrium level of CO2 of around 270 to 280 ppm. CO2 has been around that level since C4 grasses evolved in the few million years leading up to 24 million years ago. The evolution of C4 grasses increased the Carbon balance held in vegetation because C4 grasses could now grow in dry areas where all the remaining C3 bushes, trees, plants and (C3) grasses couldn’t grow before. CO2 fell to 280 ppm, for perhaps the very first time, 24 million years ago.
In the ice ages, CO2 declines with temperature by about 18 ppm per 1.0C as the oceans absorb more CO2, but also because the vegetation on the planet dies back significantly and there is less annual Carbon cycling from vegetation occurring. CO2 has been as low as 185 ppm, which means trees and bushes could only grow where there is very high rainfall, such as the tropical rainforests. In the ice ages, Africa’s rainforests decline to just a few small areas. The Amazon rainforest declines by two-thirds. For some reason, probably higher rainfall, the US southeast and Indonesia seem to hold onto their trees. The rest of the planet is either grassland, desert, tundra or glacial ice.
At the natural equilibrium level of CO2 in a non-glacial cycle, of 280 ppm, if CO2 rises above that level, vegetation gets more active and CO2 is drawn back-down to 280 ppm. If CO2 falls, vegetation gets less active and CO2 goes back up.
Consider net absorption of CO2 by natural processes as a percent of CO2 above 280 ppm, going back to 1750 when human emissions started to actually matter. The natural processes completely overwhelmed our emissions until the 1950s. (Land use and forest-clearing by early civilizations is a joke. The natural processes are many times higher than humans clearing forests. Orders of magnitude.)
Now let’s compare the natural absorption rates to human emissions. In the 1940s, CO2 levels actually fell. The natural sinks were more than 100% of our emissions. Before 1900, they were orders of magnitude higher than human emission rates.
Since about 1950, the natural absorption rate has been around 50%. As Willis noted, it is probably closer to 42% or 45% but it is actually hard to tell because there are some uncertainties here. But this is a fluke. It is more the total amount of CO2 in the atmosphere that governs net absorption by natural processes, not our annual emissions.
Since Plants, Ocean, and Soils are absorbing about 1.7% per year of the excess CO2 above the equilibrium 280 ppm right now (a rate which appears to be increasing slightly), if we stopped emitting today, it would take about 155 years to get back down to 285 ppm (and then a few more decades for 280 ppm).
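As a back-of-envelope check of the 1.7 %/yr figure: assuming a 400 ppm starting level (roughly 2015, my assumption) and a constant fractional uptake of the excess, the implied e-folding time is about 59 years, consistent with the time constant fitted above. The exact years-to-285 number is sensitive to those assumptions and to whether the uptake rate keeps increasing, so it need not match the ~155 years quoted:

```python
# Relaxation time implied by removing 1.7 %/yr of the excess above 280 ppm.
import math

C_eq = 280.0   # assumed equilibrium concentration, ppm
C0 = 400.0     # assumed starting concentration, ppm (roughly 2015)
rate = 0.017   # fraction of the excess removed per year

def years_to_reach(C_target):
    """Time for C(t) = C_eq + (C0 - C_eq) * exp(-rate * t) to reach C_target."""
    return math.log((C0 - C_eq) / (C_target - C_eq)) / rate

t_285 = years_to_reach(285.0)   # years until only 5 ppm of excess remains
tau = 1.0 / rate                # e-folding time implied by the 1.7 %/yr rate
```

Note that 1/0.017 is about 59 years, which lines up nicely with the ~57-59 year time constant discussed elsewhere in the thread.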
Thanks a lot for that background. Nuggets like that are the reason I visit this site.
Is there some source from which we could easily obtain those plots’ data?
Also: “Plants, Ocean, and Soils are absorbing about 1.7% per year of the excess CO2 above the equilibrium 280 ppm right now (a rate which appears to be increasing slightly).” How do we know that?
Illuminating as always, thanks. So Antarctica gave rise to C4 plants, who knew? Who cared? Well I do.
I’m sad but at the same time proud to be alive with the last generation of scientists.
Yes. The physicists underestimate the biology. C4 is a big evolutionary step. That includes CAM and other structures improving the extraction of CO2 and reducing transpiration in “dry” environments. Plants have been evolving for more than 24 million years, indeed the entire Cenozoic. The entire global biogeochemical carbon cycle has been evolving for hundreds of millions of years. Earth’s atmosphere today is entirely of biological origin (except Argon).
As for the amount of fossil fuel CO2 in the atmosphere, various approaches (e.g. Segalstad) show that it is about 15 to 25 ppm. One approach (https://retiredresearcher.wordpress.com/, see figure 16) shows about 45 ppm. Adding CO2 to the atmosphere is entirely beneficial. A few thousand ppm would be great.
CO2 never “accumulates” in the atmosphere, from any source. Planetary biology guarantees that.
The residual number of “human” CO2 molecules depends on the residence time, which is ~5 years, and is currently about 9%.
The residual mass of CO2 above equilibrium is 95% caused by human emissions, as the decay time of any extra injection of CO2 is over 50 years.
Two different decay rates without much connection between each other.
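The two different decay rates can be illustrated with a toy two-box simulation. All numbers here are illustrative, chosen only to show that a short residence time and a long adjustment time answer different questions:

```python
# Toy model: a short residence time (gross exchange, assumed ~5 yr) controls
# how many original "human" MOLECULES stay airborne, while a long adjustment
# time (assumed ~50 yr) controls how fast the excess MASS decays.
M_eq = 280.0     # equilibrium atmospheric CO2, ppmv-equivalent
tau_res = 5.0    # residence time: gross molecule exchange with ocean/biosphere
tau_adj = 50.0   # adjustment time: net decay of the excess mass

M = M_eq         # total atmospheric CO2
tagged = 0.0     # CO2 that is still the *original* human-emitted molecules

for year in range(100):
    E = 2.0                      # hypothetical human emission per year
    M += E
    tagged += E
    # Gross exchange: ~1/tau_res of all airborne molecules swap with the
    # reservoir each year; what comes back is (almost all) untagged.
    tagged -= tagged / tau_res
    # Net sink: the excess mass relaxes on the slow timescale.
    M -= (M - M_eq) / tau_adj

human_molecule_fraction = tagged / M   # small: residence time wins
human_caused_excess = (M - M_eq) / M   # much larger: all of it from emissions
```

The tagged-molecule fraction settles at a few percent while the human-caused excess mass is several times larger, so the two numbers quoted above really are measuring different things.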
Bill, I agree 100% with you….
I don’t like the word “equilibrium” though…..while being accurate, it’s not descriptive
“Limiting” would be the word that better describes it.
I affirm the comments above by William Astley. Salby is an expert analytical mathematician. See Murry Salby (2012), Physics of the Atmosphere and Climate.
Salby develops the equations later in the presentation, at a technical level.
On C14, suggest distinguishing between the residence time in the mixed surface layer vs the deep/rest of the ocean.
Salby made the same mistake as many before him: the e-fold decay rate of the 14C bomb spike is way shorter than the decay rate of an extra injection of CO2 in the atmosphere: what returns out of the deep oceans is much less 14C than what goes into the deep oceans, while for 12CO2 that still is near the same amount…
Willis Eschenbach and Ferdinand Engelbeen
Re: “He follows that up by not knowing the difference between airborne residence time and pulse decay time.”
Salby may not be presenting it well, but I believe he has gone far deeper into the equations and details than you give him credit for.
You argue “Airborne residence time (bomb test data): how long an individual CO2 molecule stays in the air.”
The bomb test data is NOT about “an individual CO2 molecule” but a specific though very small (“infinitesimal”) pulse of CO2 with C14, except that it can be explicitly tracked. How is that infinitesimal pulse that much different from a larger pulse under the Bern model? 0.5% does not make that much difference in total CO2.
Mathematically, the consequent absorption rate is very similar.
IF the ocean were under equilibrium BEFORE the pulse, why should the emission rates after be any different from the base absorption rate BEFORE the pulse?
Under his Cross-Correlation argument at 25:50 etc., Salby uses the C14 concentration decay rate to calculate that the CO2 absorption rate is proportional to the abundance of CO2 (~30:20–40).
From the bomb data C14 decay rate and his cross correlation analysis, I understand Salby to show the major difference from the Bern model is that the CO2 EMISSION rate is not constant, but varies with temperature.
Salby develops the atmospheric conservation equation to then find:
Note that he finds a correlation of 0.82 for changes of CO2 with temperature and a correlation of 0.93 when including moisture etc.
See: Janice Moore Notes on Dr. Murry Salby, London 2015 Lecture.
C14 section min 26:24 – 46.
Salby goes beyond your constant CO2 addition model to form a model of an increasing trend of CO2 emissions. Janice notes:
What am I missing from your / Salby’s arguments from my rapid reading/listening?
As human emissions are about twice the increase in the atmosphere and steadily increasing, the variability in the increase is not in the temperature dependency of the source rate but in the net sink rate.
The correlation between the variability of temperature and the CO2 increase rate shows that temperature variability is responsible for the variability around the trend, but it says next to nothing about the cause of the trend: human emissions are increasing rather monotonically, without any measurable variability in the atmosphere, and by taking the derivatives you have effectively removed the trend…
David L. Hagen April 20, 2015 at 6:34 pm
CO2 growth rate = Emissions rate (proportional to temperature) – absorption rate (proportional to CO2).
Note that he finds a correlation of 0.82 for changes of CO2 with temperature and a correlation of 0.93 when including moisture etc.
The problem with Salby’s balance equation is that he assumes that the temperature dependence is only due to the source terms; this is wrong. Both the absorption rate and the emission rate from the environment are dependent on both pCO2 and T, and by not including the proper dependence he forces the result that he obtained.
The proper equation is:
d[CO2]/dt = Fossil Fuel emissions + Sources(CO2,T) – Sinks(CO2,T)
This balance equation is true at all timescales.
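A quick numerical sketch of this point: if the net environmental flux is given a linearized CO2 and temperature dependence (all constants below are assumptions, not fitted values), the growth rate correlates strongly with temperature even when the trend is driven entirely by the fossil-fuel term:

```python
# d[CO2]/dt = FF + Sources(CO2,T) - Sinks(CO2,T), with the net environmental
# flux linearized as -k*(C - C_eq) + b*T. Constants are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 55                             # years
T = rng.standard_normal(n) * 0.15  # temperature anomaly wiggles, K
FF = np.linspace(1.0, 4.0, n)      # rising fossil-fuel emissions, ppm/yr

C_eq, k, b = 280.0, 1 / 55.0, 6.0  # assumed sink constant and T sensitivity
C = np.empty(n + 1)
C[0] = 315.0
for t in range(n):
    C[t + 1] = C[t] + FF[t] - k * (C[t] - C_eq) + b * T[t]

growth = np.diff(C)                # annual growth rate
r = np.corrcoef(growth, T)[0, 1]   # strong, though FF alone drives the trend
```

The growth-rate/temperature correlation comes out high even though every ppm of the long-term rise here is from the FF term, which is exactly why a high derivative correlation cannot by itself identify the cause of the trend.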
Indeed, David. Anyone who actually reads Salby’s work and is capable of understanding it can see very clearly that the man is brilliant, and thoroughly immersed in his subject matter. The people casually lobbing potshots at him have no such demonstrated skill. It would be funny if it weren’t so annoying.
Dr. Salby is brilliant in his own field, but out of his depth on the increase of CO2 in the atmosphere.
I have read what he said about CO2 in ice cores (not repeated in this lecture anymore). That was simply physically impossible and would imply the death of all vegetation on earth during glacial periods…
Ferdinand, you are not an ice core expert. You are just a guy who has read a few things about them, and internalized certain narratives about them.
I am not an ice core expert, but I know something about diffusion: if someone says that there is diffusion in ice cores which decimated the original peak values, that implies that the lowest values measured today would be a lot lower than measured at the original inclusion. Which is already problematic for most (C3) plants at the low levels found in the last glacial period.
As diffusion only stops when all levels are equal, finding similar peak levels each period 100,000 years back in time, implies even larger peaks in the past, which implies below zero CO2 values during the older glacial periods…
Salby’s comment on ice cores was not repeated in his last speech in London…
“…if someone says that there is diffusion in ice cores which decimated the original peak values, that implies that the lowest values measured today would be a lot lower than measured at the original inclusion.”
Not necessarily. It depends on the duration. As an analogy, suppose you had 100 buckets with marbles in them. Buckets 1 through 99 have 20 marbles apiece, and bucket 100 has 120 marbles.
Over time, you take a marble out of the bucket with the most in it, and distribute it uniformly into the other buckets. After a very long amount of time, you have 21 marbles in each bucket. Your highest high has decreased sixfold, but your lowest has only increased 5%.
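The bucket analogy is easy to verify by brute force:

```python
# Simulation of the marble-bucket analogy above: diffusion flattens a sharp
# peak far more than it lifts the background.
buckets = [20] * 99 + [120]

# Repeatedly move one marble from the fullest bucket to the emptiest one
# until everything is level (a crude stand-in for diffusion).
while max(buckets) - min(buckets) > 1:
    buckets[buckets.index(max(buckets))] -= 1
    buckets[buckets.index(min(buckets))] += 1

peak_drop = 120 / max(buckets)       # peak falls roughly sixfold
background_rise = min(buckets) / 20  # background rises only 5 percent
```

Every bucket ends at 21 marbles, so the peak has dropped by a factor of about 5.7 while the lows have risen just 5%, exactly as the analogy claims.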