Guest Post by Willis Eschenbach
A decade ago I wrote a post entitled “The Bern Model Puzzle”. It related to the following question.
Suppose we have a relatively steady-state condition, where the CO2 level in the atmosphere is neither rising nor falling. Something like the situation around the year 1400 in the data below.

Figure 1. Historical airborne CO2 levels, 1000 AD to the present, from 10 ice cores and, since 1959, from Mauna Loa Observatory measurements (orange). Units are parts per million by volume (ppmv) of the atmosphere.
Now, suppose during that time, a volcano blows its top and dumps what we used to call a “metric buttload” of CO2 into the atmosphere. Over time, that pulse of CO2 will be absorbed by a variety of land and ocean sinks, and the status quo ante of atmospheric CO2 will be restored to the level it was before the eruption.
The “Bern Model” is a model used by the IPCC and various climate models. It purports to calculate how long it takes that pulse of CO2 to be reabsorbed by the natural sinks. And that’s where things get curious.
First off, the Bern Model says that 15.2% of that pulse of CO2 will stay airborne forever. Not 15% of the pulse, mind you … 15.2%.
I have never found anyone who can explain this to me. If this were true, it seems to me that every volcanic eruption would lead to a new and higher permanent level of airborne CO2 … but as you can see from Figure 1, that simply hasn’t happened.
For further evidence that the first claim of the Bern model is wrong, consider the annual swing of CO2 levels. From a low point around October to a high point around May of each year, there is a short sharp natural pulse of CO2 that leads to an increase in CO2 levels of about 6 parts per million by volume (ppmv). And this is matched by an equal sequestration of CO2 in natural sinks such that by the following October the previous CO2 level is restored. If that were not the case, CO2 levels would have been increasing every year since forever.
And during that same seven month period, at present we emit a pulse containing enough CO2 to result in an increase in CO2 levels of about 1.3 ppmv.
The Bern Model says that 15.2% of the 1.3 ppmv anthropogenic CO2 pulse will stay in the air forever … but the ~ 6 ppmv pulse is gone very quickly. So how does nature know the difference?
But that’s just the start of the oddity. It gets more curious. The Bern Model says that:
- 25.3% of the CO2 pulse decays back to the previous steady-state condition at a rate of 0.58% per year
- another 27.9% of the pulse decays at 5.4% per year, and
- a final 31.6% of the pulse decays back to the steady-state condition at 32.2% per year
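To see what those numbers imply, here’s a minimal Python sketch of the Bern pulse decay. The fractions are the ones just listed, and the annual rates correspond to e-folding times of 171, 18, and 2.57 years; everything else, names included, is just my illustration:

```python
import numpy as np

# Bern Model impulse response: fractions of an emitted pulse and their
# e-folding times. 0.58%/year corresponds to tau = 171 years, 5.4%/year
# to tau = 18 years, and 32.2%/year to tau = 2.57 years.
fractions = [0.152, 0.253, 0.279, 0.316]   # sum to 1.0
taus = [np.inf, 171.0, 18.0, 2.57]         # years; np.inf = "stays forever"

def airborne_fraction(t):
    """Fraction of a CO2 pulse still airborne t years after emission."""
    return sum(a * np.exp(-t / tau) for a, tau in zip(fractions, taus))

for t in (0, 10, 50, 100, 500):
    print(f"after {t:3d} years: {airborne_fraction(t):.1%} still airborne")
```

No matter how long you run it, the answer never drops below 15.2%.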
This leads me to the same problem. How does nature know the difference? How is the CO2 partitioned in nature? What prevents the CO2 that’s still airborne from being sequestered by the fast-acting CO2 sinks?
There is, however, a more fundamental problem—the Bern Model simply doesn’t do a good job at representing reality. We have reasonably good information on CO2 emissions since 1850, available from Our World In Data. And we have reasonably good information on airborne CO2 concentrations since 1850 from ice cores and Mauna Loa, as shown in Figure 2.



Figure 2. Historical airborne CO2 levels from 1850 to the present, from 10 ice cores and, since 1959, from Mauna Loa Observatory measurements (orange). Units are parts per million by volume (ppmv) of the atmosphere.
So I thought I’d take a look at the Bern Model, to see how well it could predict the airborne CO2 since 1850 from the emissions since 1850. The equation for the calculation is in the UNFCCC paper “Parameters for tuning a simple carbon cycle model“, and is also given in the endnotes. Here’s the result … bad news.



Figure 3. Actual atmospheric CO2 values, and values according to the Bern Model
No bueno … the fact that the Bern Model results are so much smaller indicates that it is incorrectly pushing much of the effect far out into the future.
So, is there a better way? Well, yes. The better way is to use the standard lagging formula:
CO2(t+1) = CO2(t) + λ ∆E(t) × (1 − exp(−1/τ)) + ∆CO2(t) × exp(−1/τ)
where:
- t = time
- E(t) = emissions at time t
- CO2(t) = CO2 concentration at time t
- λ = 0.47 (converts carbon emissions in GtC to ppmv)
- ∆ = difference from the previous value, so for example ∆CO2(t) = CO2(t) – CO2(t-1)
- τ = tau, the time constant for the decay
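Here’s a minimal Python sketch of that update rule, assuming annual emissions in GtC and CO2 in ppmv (the function name and the seeding of the first two values are my own choices):

```python
import numpy as np

def lagged_co2(emissions, co2_start, lam=0.47, tau=49.0):
    """CO2(t+1) = CO2(t) + lam*dE(t)*(1 - exp(-1/tau)) + dCO2(t)*exp(-1/tau)
    emissions: annual totals in GtC/year; returns modeled CO2 in ppmv."""
    decay = np.exp(-1.0 / tau)
    co2 = [co2_start, co2_start]   # two values are needed to form the deltas
    for t in range(1, len(emissions)):
        d_e = emissions[t] - emissions[t - 1]     # ∆E(t)
        d_co2 = co2[t] - co2[t - 1]               # ∆CO2(t)
        co2.append(co2[t] + lam * d_e * (1 - decay) + d_co2 * decay)
    return np.array(co2)
```

Fitting τ by minimizing the error against the observed record is what gives the value below.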
Using this formula, I find the time constant tau to be ~49 years. Here’s the result of that calculation.



Figure 4. Actual atmospheric CO2 values, and values according to a standard lagging model
This puts the half-life of a pulse of CO2 injected into the atmosphere at about 34 years (τ × ln 2 ≈ 49 × 0.693 ≈ 34) …
Those are my questions and observations about the Bern Model. I’ve put the calculations and data into a spreadsheet here.
Now I need to go climb on the roof and pressure-wash the cedar-shingled walls in preparation for spraying FlameStop on them … dry times in California.
My very best wishes to all, comments and questions welcome.
w.
Of Course: As is my wont, I ask that you quote the exact words you are discussing. That way, nobody’s words get misconstrued. Well, fewer people’s words, at least.
The Equation: As promised …
The airborne fraction of a CO2 pulse remaining after t years is:

F(t) = a0 + a1 × exp(−t/τ1) + a2 × exp(−t/τ2) + a3 × exp(−t/τ3)

where a0 = 0.152, a1 = 0.253, a2 = 0.279, a3 = 0.316, and the time constants are τ1 = 171 years, τ2 = 18 years, and τ3 = 2.57 years. Note that the first term, a0, has no associated time constant … that is the fraction that never decays.
Late News: Well, I’d just finished pressure-washing the upper part of the house when my pressure washer died … and while I know you may find this hard to believe, at that point I actually said very bad words …
Looks like Dr. W. is gonna have to engage in a forensic autopsy, to see if I can perform the Lazarus trick on the !@#$%^& pressure washer.
But not today … enough. And at least the pressure washing of the upper part of the house is done, done, done.
The GHG “extra” energy loop does not exist.
That pretty much renders all of the mysteries moot.
Models.
Nothing more to say (apart from thank you WE – you’re right again).
It must be the CO2…
Pressure washers don’t work well on carbonated water
Better rush out and buy 3 or 4 pressure washers while you can still purchase small gasoline engines in California.
This has always troubled me a bit. The Bern model, as well as the time-lagged adaptation presented here, *assumes* first-order exponential decay kinetics. This is to say some fixed fraction disappears into a sink for each time period.
It strikes me that instead of first order exponential kinetics, the problem is an Approach to Equilibrium kinetics situation. This is because the two major sinks are bidirectional.
1) The oceans. Absorb carbon dioxide in colder regions and times, outgas in warmer regions and times.
2) Terrestrial plants are net absorbers during the growth season, decay and release otherwise.
The oceans, especially, should be considered an equilibrium system. This is *not* to assert that the oceans are in a state of equilibrium, but rather that they move to the equilibrium point with Approach to Equilibrium kinetics.
The seasonal pulse we see is synchronized to the N hemisphere growing season. The S has more oceans and the N has more agriculture and forests. It should be clear that the biological influence on CO2 concentrations dominates any effects from the oceans.
This also explains the centuries long lag between temperature and CO2 concentrations since it takes centuries for a forest to establish or die as the result of a naturally changing climate.
“It should be clear that the biological influence on CO2 concentrations dominates any effects from the oceans.”
Agreed, short term. On a longer term, I am not so sure. After all, the oceans are a reservoir with a huge capacity in relation to the atmosphere.
An item to ponder: On a very long time scale, carbonate rock formation dominates.
Yes, the long term trend is for all Carbon to be sequestered in carbonate rocks at which point, life stops. As for the oceans, they’ve had billions of years to reach a steady state; moreover, only the temperature of the top 100 meters or so varies between ice ages and inter-glacial periods and/or has any response to atmospheric CO2 concentrations.
In the far future, after all of the recoverable fossil fuels have been consumed, CO2 levels will eventually fall to a point that can no longer support agriculture and the windmills will need to be replaced with limestone kilns to return that life giving CO2 back to the environment and prevent agriculture from crashing.
Do not discount the biological component of the ocean absorption.
Yes, without the oceans atmospheric CO2 would be ~50% higher today.
And the Southern Ocean (SO) draws down ~40% of that taken up by the world’s oceans (25-30% of anthropogenic CO2).*
*(Raven and Falkowski, 1999; Sabine et al., 2004; Khatiwala et al., 2009; Takahashi et al., 2009; Frölicher et al., 2015)
Yes, and that the thermal and biological components are not mutually independent.
Also that the sinks/sources, biological ones in particular, can change over longer time periods. Averaging over the annual seasons is clearly an incomplete approach.
Does anyone look at the steady state of these ice core models of CO2 at about 280 ppm for a thousand years and ask whether that is highly improbable? No spikes in either direction of even 10%?
Maybe we shouldn’t be looking at how fast CO2 leaves the atmosphere, but rather at how fast it dissipates from the ice? And whether that 280 ppm is some kind of natural level after so many years?
We have physical evidence that it was warmer in the past, and if CO2 leads or lags temperature it would also have been higher than 280 ppm.
Those are all good questions.
Ice core analysis is similarly FUBARed as is the Bern model, or at least has not been demonstrated to be fit for purpose. Its analysis procedure evolved to the point that results obtained agreed with expectations. Results are reproducible now, but accuracy has not been verified.
When you say “dissipate from the ice”, are you referring to where firn becomes ice and locks CO2 inside, or to dissipation from melting?
I guess I mean before it becomes glacier ice. My question is not really about the specific state of the water/ice, but more general: does the CO2 escape in some fashion until it settles at 280 ppm?
I’ve often wondered myself if there was a limit to the amount that can be trapped in ice during its solidification process vs what is ambient at the time.
Matt Kiro February 15, 2022 10:34 am
What do you think would cause a 10% swing? Even Pinatubo only put a couple of gigatonnes of CO2 into the atmosphere, that’s 1/5 of our annual additions … and 2 GT would only increase the atmospheric concentration by ~ 1 ppmv before sequestration …
w.
2019 anthropogenic emissions were about 36 Gt. Two ‘Pinatubo’ Gt is more like 6% of our annual additions. However, Pinatubo is estimated to have only emitted about 10 Mt CO2.
Willis,
I don’t mean a singular instance like an eruption, but even going up from 1850 to 1920 increased CO2 levels by 10-15%, and we see no variation like that in the ice core measurements. There is no aftereffect from the MWP. While 1000 years is quite a short time in the earth’s history, we still should see some peaks and valleys. We see glaciers expand and contract, sea levels go up and down, tree lines go north and then retreat. But somehow CO2 is constant until 1850.
Matt Kiro February 15, 2022 1:53 pm
CITE YOUR DAMN CLAIMS! I’m not asking people to quote the exact words (or the exact link) you’re discussing just for my health. Where is your evidence of that?
In fact, in the graphs in my post above, the ice core data shows a rise of 6% (286 ppmv to 303 ppmv).
Grrr …
w.
I am basing my questions on your graphs and article. You stated we have atmospheric measurements from 1850 onward, that is why I chose that starting point.
“Figure 4. Actual atmospheric CO2 values, and values according to a standard lagging model”
And this is where I get a rough estimate of a 10-15% increase until ~1920. I’m sorry if my estimate from just looking at the graph was high.
My question is more about the 850 years before this, back until 1000, which are based on the ice cores. There is no significant change or variation in all those years. Does that make you question why that is? Why do the CO2 levels in ice cores go down to ~280 ppm and then just stay there, when we know the earth has gone through warmer and colder periods than now?
This isn’t to question the mathematical work you have done to find an equation that better fits the actual data we have. It is just posing a question I have every time I see these CO2 levels from ice core data, because nothing about all the climate indicators we talk about on this site is that consistent for that long.
Thanks, Matt. The answer is that CO2 levels don’t vary much with temperature. Per the ice core glacial-interglacial records, CO2 levels only change by ~ 5-10 ppmv for each one-degree change in temperature … barely visible at the scale shown in Figure 1 above.
Here’s a detail with a greatly amplified scale.
You can see the peak of the Medieval Warm Period around 1000-1100, the gradual decay to the depth of the Little Ice Age around 1700, and the warming after that.
w.
Willis February 15, 2022 11:46
“What do you think would cause a 10% swing?”
According to http://icecap.us/images/uploads/CO2Temps.gif , CO2 has swung naturally from over 7,500 ppm to a few hundred ppm over the last 500 million years. Isn’t that the issue? We don’t understand natural variability?
Sorry for my lack of clarity. I meant a 10% swing around the year 1400, as I referenced in my comment.
w.
You ask a very good question. Glaciers are a response to cold weather. However, they destroy trees, and keep them preserved for tens of thousands of years. When the glaciers recede, then bacteria can start decomposing the trees and grass. Vegetation can start drawing down atmospheric CO2 again when the glaciers are gone. When glaciers are at their maximum, the surface area of the oceans, where CO2 can be absorbed, is at a minimum. When the glaciers melt, the surface area of the oceans increases, allowing more CO2 to be absorbed. During glaciation, CO2 is temporarily sequestered in the ice, to be released back into the atmosphere when they melt.
In sum, glaciation seriously disturbs the Carbon Cycle. Yet, it is claimed that CO2 was essentially low and constant during the Pleistocene, and didn’t start to rise until the age of industrialization. I think that assuming a mile-thick pile of ice has no impact on the Carbon Cycle is a serious mistake.
This would lead to the question, do glaciers sequester more CO2 than equal amounts of ocean water? If it’s the same, then it does not matter where it is sequestered. We have pictures and written records of glaciers expanding and retreating over the past 150 years, meanwhile CO2 has steadily increased, which suggests glaciers have had no measurable effect on CO2 in that time. Mile high glaciers covering a lot of the northern hemisphere, that would be another issue =D
“This would lead to the question , do glaciers sequester more CO2 than equal amounts of ocean water?”.
No. Try freezing Coca Cola. Careful, or you’ll make a mess.
I don’t see the point of your claim. When a soda freezes, the water expands. The amount of CO2 stays the same.
And some of it is extruded into the remaining airspace.
You do not seem to know what a clathrate is.
And some of it is extruded into the remaining airspace.
Inadvertent double paste. Which eliminated my reply.
I do know what a Clathrate is. I have a PhD in Organic Chemistry.
I await your evidence that CO2 has equal, or higher, solubility in ice than it does in liquid water.
Clyde, comments like this one:
are exactly why I ask people to QUOTE THE EXACT WORDS YOU ARE DISCUSSING. I know of no one who says that CO2 was “low and constant” during the Pleistocene. A link to someone actually saying that would be very appropriate …
w.
See especially Fig. 3
https://www.nature.com/articles/s41467-019-12357-5
Also,
https://static1.squarespace.com/static/55967e85e4b0e6e634c12b6f/t/5643a68ce4b081513dad557d/1447274145321/
Not seeing it. Figure 3 shows the levels during the Pleistocene varying from 200 to 300 ppmv, hardly “constant”.
w.
I suppose that the point should be that, like beauty, “constant” is in the eye of the beholder — and is influenced by the vertical scale used on the graph. When I made my remarks to Kiro, what I had in mind was the meme often used by alarmists that we have an unprecedented rise in CO2 since the beginning of the Industrial Revolution. Basically, I was asking the question of why we don’t see a larger range in CO2 during the last 2.5 My.
I think some of the problem is the time resolution. If you take an average over 50 years you will end up missing short term phenomena. Go out to 500 years and it gets even more smoothed. Doesn’t answer why the average comes out to 280 but explains the smooth values.
You cannot compare, or put on the same graph, proxy data with low sampling rates and instrumental data with high sampling rates. The ice core derived concentrations are means over periods as long as sixty or a hundred years. Average Mauna Loa over 60 years, and you have almost a single data point.
Thanks, Paul. If that were the case we wouldn’t see the excellent fit between ice core and modern data … but we do. Go figure.
w.
Where does it say 15.2% FOREVER? link?
Hey, Leif, your voice is always welcome.
It’s implicit in the link in the head post.
Note that the first percentage parameter, a(0), has no associated time decay parameter tau, while the other three have decay times of 171, 18, and 2.57 years.
So 15.2% stays in the atmosphere forever.
w.
But it is not clearly stated over how long a time span the model is supposed to be valid, so FOREVER seems to be a dubious assumption which is not stressed in the info on the model.
All models have a limited domain [space and/or time] for which they are valid.
for purposes of the model, 15.2% is forever
what length of time is the model fit for? clearly not 1850 to 2020!
The original paper by Siegenthaler and Joos (1992) states that:
“after 1000 years the airborne fractions of the different models are between 16 and 20% which approximately reflect the equilibrium partitioning of the excess CO2 between the atmosphere and ocean”. Which is what you would expect in a system where you have a fraction of CO2 in the atmosphere and some fraction in the ocean. Changing the total amount of CO2 will result in a change in both the mass in the ocean and the mass in the atmosphere.
The paper also states that the model only considers perturbations to the CO2 level from pre-industrial levels so that the annual change of CO2 levels due to seasonal variations is not explicitly part of the model.
True. However, from “Atmospheric Lifetime of Fossil Fuel Carbon Dioxide“:
And from their conclusion:
Call me crazy, but “tens, if not hundreds, of thousands of years” is close enough to “forever” for me.
w.
Well probably beyond the existence of humans 🙂
“Forever” gives you no pause? That is crazy.
Trust me, models are only good for about 20 minutes then they start getting bored.
SuperModels on the other hand can be good for a fair amount of time if you supply them with a hand held mirror
So, this “Bern model” thing is basically an effort to curve fit some set of data and not really an effort at a realistic model?
To illustrate the non-realism of this, I wonder if it would be fair enough to compare the decay rate of CO2 in the environment to, say, the decay rate of a radioactive substance in a rock or in a sample? Granted, in a situation like that, you don’t have the radioactive atoms being replenished in some sort of equilibrium balance, as the nuclear decay in a rock sample is *just* decay and simpler in that sense. Still this CO2 thing is partially a decay rate situation, and therefore maybe comparable or perhaps a decent analogy to some extent?
Now, what I’m getting at here is that if I’ve got just a single kind of decaying atom, like U-238, say, in a sample, it is just not realistic to assume more than one decay rate, not unless there is some darned good reason why some U-238 atoms are apt to be essentially different from all the other U-238 atoms! It just seems to me that questions of CO2 molecules being removed from the atmosphere or even being built up in the atmosphere are somewhat similar? If CO2 molecules keep getting mixed between atmospheric layers so that none are really stored apart from the others, *and* they are all the same kind of molecule, then all those CO2’s have to have essentially a common prospect of ‘decaying’, i.e., of ‘leaving’ the atmosphere. So there should basically be just one ‘leave the atmospheric volume’ rate. Unless there is some reason that a certain pool of CO2’s would get picked on, or else stored away and preferentially left alone, by some process that I’m just not visualizing?
Now, in reading the above, I realize that someone might think that my comparison to radioactive decay is causing me to oversimplify. After all, shouldn’t atmospheric CO2 removal involve many different processes or mechanisms, each with its own rate of removal for the CO2 molecules? To get a bit more sophisticated here, think about looking up the analogous statistic for a large population of humans, so, just think about looking up the average rate of death/’end of life’ for the sum total of all the humans in a particular country. For a lot of calculation purposes, it doesn’t much matter that the average death rate could be broken down by different categories of people, or by different processes of death. The average rate is just going to reliably take out a certain fraction of the population every day or year, or whatever — unless of course, the average rate actually *changes* for whatever reason.
Notice that, at any time, the “growth” or build up rate for CO2 might easily be something different than the decay, or ‘leave’ rate mentioned above, since ‘leaving’ is apt to be, in detail, a different set of processes, as opposed to whatever is involved in ‘arriving’ into the atmosphere. In an actual equilibrium situation, though, the rate of incoming ought to match the rate of outgoing in a meaningful way in *that* case! So where do we get the supposedly ‘curve matched’ solution that 15% is going to be ‘immortal’ (just sitting there untouched)? It sure is beyond me how *that* is assumed to be realistic or presumed good enough for predicting anything?
Think diffusion not decay.
It seems to me the complex dynamics re CO2 may have to do with the individual dynamics of various CO2 sinks, eg
1. Dissolving into sea water—with decades for mixing to occur into the depths
2. Conversion into coral and other shell material
3. Conversion into complex organic molecules following the greening effect, both at sea, phytoplanktons, and on land, from single celled up to trees.
4. Conversion into limestones
5. Others?
The motor portion or the actual pump? If a gas engine, gasohol might have eaten the seals in the carb. The actual pressure pump is a mess of o-rings and bypass valves, but there might be a kit available with a diagram of the pump.
I think you might have posted this in the wrong thread …
w.
Willis, I think he was trying to help you with your Lazarus project.
Ah, I see. Thanks, Mason. Tom, I just took it apart. It’s electrical. I have power to the switch, and power from there to the motor … then nothing. So I’m headed to town to get a new one. This one is very old, I’ve been surprised it’s lasted as long as it has. Back in a few hours …
w.
PS—I NEVER put gasohol into a small engine. It’s lethal. Fortunately, here in CA there’s not much gasohol.
Have you checked the brushes on the motor? They are generally spring loaded but eventually wear to the point that there is no contact.
Naw, it’s old and tired. I just got back from Harbor Freight, paid $80 plus tax for a new one.
w.
Does that FlameStop work pretty good?
Don’t know. First time trying it.
w.
We all hope you never have the opportunity to find out if it works or not.
Since politicians and models say you are living in a mega drought, you may never need to pressure wash exterior mold and mildew again.
Yeah, but I’m pressure washing it to allow me to spray “FlameStop” to cut down the flammability because of the dry conditions … go figure. Now I gotta go see if I can fix my pressure washer.
w.
For those interested in fire retardants, search for and read this paper. (Could not paste link for some reason). Education for inquisitive minds..
“Effects of boric acid and/or borax treatments on the fire resistance of bamboo filament”.
Ethan Brand, Fire Protection Engineer, Retired.
There would be layers of dust however.
Dear Willis,
I do remember your post from a decade ago; it was good then and nothing has changed … it’s climate science progress after all.
Are you familiar with A. Ollila´s blog? He seems equally puzzled by that Bern Model:
https://www.climatexam.com/single-post/2016/08/29/the-residence-times-of-carbon-dioxide-are-16-and-55-years
” According to IPCC, the oceans can absorb about 55 % of the yearly CO2 emissions in the present climate but as soon as the fossil fuel emission rate starts to decrease, the ocean can not do it anymore! This is very difficult to understand.”
And using a different model than you, he also finds time scales of about 50 years!
Isn’t the Bern Model changing significantly between the different IPCC reports?
Numbers and number of parameters are changing..
Last but not least, you, like those Bern people or Ollila, do not seem to believe in the uncertainty of your fitting parameters … it would be nice to see the actual confidence range of your findings.
My only guess is that the Bern model postulates that the fast, medium and slow sinks have saturation points.
If the sink is geological, such as the weathering of rocks, I can see the logic.
If the sink is biological, then such thinking is cockeyed. Plants grow, and when they are sinking CO2, they grow faster. As such, the size of the sink is also going to grow, as will the amount of CO2 that the sink is able to absorb each year.
The fact that I can model the atmospheric CO2 with a function with only one parameter (tau) indicates that the sinks are NOT getting saturated … if they were, tau would change over time, but it hasn’t.
w.
saturating sinks were also probably Hansen’s (1988) biggest mistaken assumption
as his apologists love to point out, if you correct that assumption his predictions aren’t nearly as far off
but some of the variable fast flow scenarios are interesting, and ominous
People may find interesting the work of Edwin X Berry: “The Impact of human CO2 on atmospheric CO2” (https://edberry.com/blog/climate/climate-co2-temp/the-impact-of-human-co2-on-atmospheric-co2/).
His model is simple enough to be correct.
His words: “The United Nations Intergovernmental Panel on Climate Change (IPCC) assumes natural CO2 stayed constant at 280 ppm after 1750 and human CO2 dominated the CO2 increase.
This paper proves this IPCC assumption is wrong because it conflicts with IPCC’s own natural carbon cycle which is valid data.”
Vicente, Berry makes a very common mistake. He’s conflating atmospheric residence time (the length of time an individual CO2 molecule stays airborne) with pulse relaxation time (how long it takes a pulse of CO2 to return to a previous steady state). As a result, his conclusions are … well … meaningless.
w.
yes the variable flow models are more interesting, believe some have been explored here
Vicente, sorry, but Dr Berry is completely wrong: he used the residence time of about 4 years as the main decay rate, but residence time and decay rate for some excess injection of CO2 (49 years) are completely different items…
and Berry is smart enough to know that. Hmm.
He should know that, but I have discussed that in comments on his blog on the first drafts, to no avail…
https://edberry.com/blog/climate/climate-co2-temp/human-co2-emissions-have-little-effect-on-atmospheric-co2/
“I have never found anyone who can explain this to me. If this were true, it seems to me that every volcanic eruption would lead to a new and higher permanent level of airborne CO2″
The answer is that the Bern cycle approximation as used by the IPCC ignores multi-millenia processes such as sedimentation. See, e.g., https://gmd.copernicus.org/articles/11/1887/2018/ (which shows some more complex versions of the Bern cycle model, but Table A3 also includes a multi-exponential version with no climate-carbon feedbacks that is directly comparable to the IPCC version, though with more parameters to describe the fast processes):
“On timescales of up to a few millennia, processes associated with ocean sediments and weathering can be neglected. In such a closed ocean–atmosphere–land biosphere system, excess CO2 is partitioned between the ocean and the atmosphere and a substantial fraction of the emitted CO2 remains in the atmosphere and in the surface ocean in a new equilibrium (Joos et al., 2013). This corresponds to a constant term (infinitely long removal timescale) in the IRF representing surface-to-deep mixing. On multimillennial timescales, excess anthropogenic CO2 is removed from the ocean–atmosphere–land system by ocean–sediment interactions and changes in the weathering cycle (Archer et al., 1999; Lord et al., 2016), and the IRF is readily adjusted to account for these processes, important for simulations extending over many millennia.”
so, 15.2% stays in the atmosphere for thousands of years? lol
if that were true you’d expect a much better fit for 1850 to 2020
hard to imagine that “15.2% stays forever!” being relevant over any period long enough to concern the IPCC
I’m not saying the numbers are necessarily right (or wrong), but giving the explanation for why an “infinite” lifetime for a portion of the CO2 could be consistent with the fact that volcanoes occasionally burp CO2 into the atmosphere, along with my “golf ball” analogy in a comment below. Regarding the 1850 to 2020 fit – see my comment below where I surmise that Willis’ emissions numbers are fossil-only and don’t include land-use change. Adding in land-use would make the concentrations calculated using the Bern approximation larger (but TBD on how good a fit they’d be).
(speaking of good fits: it would be interesting to take Willis’ single exponential approach and use it on the first half of the data, then the first 3/4 of the data, and see how well it captures the portion that isn’t using in the calibration, and whether it would under or over-predict the remaining portion)
Sorry, Marcus, but the numbers do include land-use change …
w.
Yes, you are right and I was wrong – you did use numbers that included land-use change.
On the other hand, your implementation of the Bern approximation was incorrect – see my explanation below. Basically, to calculate the concentration in year X, you need to take the emissions in years 0 through X and apply the 4 equations to each of those years, sum the total, divide by 2.13 to convert GtC to ppm, and add to preindustrial concentrations. The Bern equations don’t work if you use them on the previous year’s excess CO2 the way you’ve done it.
Thanks, Marcus. Graph up your results and show us what you got, I’m interested.
w.
My method is a pain to do in Excel (it requires one row of 4 equations for every year of emissions… for every year of concentration calculated), but I calculated 3 points:
2020 is 413 (versus 414 at Mauna Loa)
1990 is 359 (vs. 354 at Mauna Loa)
1960 is 322 (vs. 317 at Mauna Loa).
So not nearly as bad as the graph you show.
ah, so a “perturbed equilibrium” model
a lot of unsafe assumptions there
How about a frozen arctic ocean doesn’t absorb much CO2?
The only Bern puzzle related to CO2 I knew about concerns the extra-large holes in Emmental cheese, Bern being the capital of this cheese-making province. By us, ordinary plebs, it was always believed that these above-mentioned holes were made by mice large and small, eating their way through the so-said cheese. Come along the CO2 scientists with their models and experiments and create a new 92% consensus theory:
The holes are produced by carbon dioxide (CO2) released by bacteria.
Pleased to say that for cheese CO2 sceptics, the 92% consensus was not good enough; sceptics want a falsifiable theory.
However, there is now a hypothesis which, I’m sorry to say, may have an anthropogenic origin.
It goes something like this:
All these Swiss milking maids are to be blamed: they let tiny invisible particles from the hay that cows are chewing on while being milked fall into the milk-collecting buckets.
Since the introduction of the electric milking system the Emmental cheese holes have become much smaller, which is a great shame since you get a smaller lump of cheese for your Swiss franc. I understand that some people here, as true sceptics, require a reference for the latest research, so look it up here
https://www.journalofdairyscience.org/article/S0022-0302(17)94362-0/pdf
So, there you are, the solution to the only Bern puzzle I knew of, but often when I come here I am pleased to learn something new, in this case of a more recent Bern puzzle.
In one of the exclusive Swiss cheese delicatessens I visited many years ago, the Emmental cheese was divided in two sections; cheese with larger holes was more expensive than the one with just small holes (100 gram slices).
It is believed that this gave the idea to the Swiss astronomer Rudolf Wolf, the author of the famous Wolf’s sunspot numbers, to weight sunspot numbers according to the size of the black ‘holes’ in his sunspot drawings, which at times of high solar activity looked like slices of Emmental cheese.
Perhaps Emmental cheese is in fact a cosmic ray track detector? A cheesy bubble chamber?
link
From the article we have this:
Based on the paper I linked above (as well as common sense), the human caused contribution to atmospheric CO2 is, at most, a temporary aberration. ‘Forever’ doesn’t enter into it.
Strangely enough, the amplitude of the seasonal variations is greatest in the Arctic, and declines as one moves south, to a minimum at the South Pole. The Southern Hemisphere has more water and the Northern Hemisphere has more boreal forest and tundra. What does that imply?
It implies that the descending cold waters of the Southern Ocean are the major carbon sink …
SOURCE
w.
That helps explain the draw-down phase in the NH Summer, but not the ramp-up phase in the Fall-Winter. Clearly, the NH peak in May of every year is controlled by sunlight getting strong enough to activate photosynthetic organisms. Therefore, high northern latitudes have a strong influence from biogenic activity, while the seasonal changes at the South Pole are nil.
However unrealistic the Bern model may be, I’ve long disagreed with the view that it’s non-physical.
Thanks for the link, Joe, I hadn’t seen that.
Here’s the problem. From your post:
What you’ve done is conjured up an imaginary system where the Bern Model works. What you haven’t done is identify what the three constant temperature permeable membrane “vessels” are in the real world.
And since the fit of the Bern model to reality is so poor, and since the data can be fit so well with a single exponential decay parameter tau, it definitely makes me doubt that such “vessels” actually exist in reality.
w.
ADDENDUM: In the real world, the amount of CO2 absorbed is NOT proportional to the difference between the atmospheric concentration and the concentration in the reservoirs. Instead, it is proportional to the difference between the atmospheric CO2 pressure and the atmospheric CO2 pressure in the pre-anthropogenic steady-state condition.
Or, from Dr. Roy’s post:
This is the direct result of the fact that the Mauna Loa observations support the assumption that the rate at which CO2 is removed from the atmosphere is directly proportional to the amount of “excess” CO2 in the atmosphere above a “natural equilibrium” level.
This is not reproduced in any sense by your theoretical “three vessels” model.
In addition to the model in this post, I’ve used a much simpler model like Dr. Roy’s where the sequestration rate is proportional to the “excess” CO2 as Dr. Roy says. When I optimized this model with the pre-industrial level being able to float freely, it settled at a pre-industrial value of 287 ppmv … go figure.
w.
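PS – For the numerically inclined, here is a minimal sketch of that simpler model, where sequestration is proportional to the “excess” CO2 above a natural equilibrium level. The 287 ppmv is the fitted value mentioned above; τ is a placeholder to be fitted, and the function name is mine:

```python
def excess_decay_model(emissions, c0=287.0, tau=49.0, lam=0.47):
    """Each year, add that year's emissions (GtC, converted to ppmv by lam)
    and remove a fixed fraction 1/tau of the excess CO2 above c0 (ppmv)."""
    co2 = [c0]
    for e in emissions:
        excess = co2[-1] - c0
        co2.append(co2[-1] + lam * e - excess / tau)
    return co2
```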
I wasn’t contending that the Bern model is accurate or that my block diagram correctly models the climate system or that a model simpler than the Bern model couldn’t match observations better. I was merely responding to your question
by making the mathematical point that there’s nothing non-physical about the system’s exhibiting multiple time constants even though the atmosphere does no partitioning.
Thanks, Joe. I think you mean that there’s nothing non-physical about A system exhibiting multiple time constants. And yes, you’ve described A system that does that.
My point was whether THIS system, the real world, can do that … particularly given that the decay is NOT equilibrating to some new steady-state after a pulse, as in your system,
Instead, in the real world, the decay is returning to the old pre-pulse steady-state. Instead of stopping when the pressure in the vessels is equal to the pressure outside as in your system, the real-world version of your vessels continue to absorb more CO2 … which doesn’t (and can’t) happen in your system.
Because of that, your system is NOT in any sense parallel to the real world of CO2 decay.
w.
Well, I remain of the view that there’s nothing unphysical about our real world’s exhibiting multiple time constants, but I doubt that we’d join issue soon, so we’ll have to agree to disagree.
Thanks, Joe. I understand you remain of that view and that’s fine.
However, you haven’t demonstrated even a theoretical system that will partition the CO2 and decay to a status quo ante. All your theoretical system does is decay to the current atmospheric level.
I’m most interested if you can propose a theoretical system that does what the real world does, which is to return to the level before the pulse of additional CO2. I’m not saying it’s not possible, you may indeed do it. I’m just saying I haven’t seen it yet.
My best to you, your questions and ideas always appreciated.
w.
Now that we’ve received the straight skinny from Mr. Engelbeen I’m not sure there’s much value in my merely making an abstract mathematical point.
But to my mind my four-box model, although highly simplistic, shows what I said it did, namely, that “what is not wrong with the [Bern] model is that it requires the atmosphere to partition its contents, i.e., to withhold some of its contents from the faster processes so that the slower ones get the share that the model dictates.”
Mr. Eschenbach is correct that I didn’t do what I didn’t intend to do: provide a system that returned to the status quo ante. That’s because the Bern model I based it on doesn’t do that.
If I’d wanted such a system I’d have added an infinite-volume reservoir. Or maybe I could have used some chemical equilibrium processes as analogs; I haven’t thought that through, so I don’t know. In either case the fact would remain that the model wouldn’t need to partition the atmosphere in order to exhibit multiple time constants. And that was my only point.
For what it’s worth I’ll add another theoretical point, this being a general one about linear systems whose decay ordinarily exhibits multiple time constants. There are certain special cases of initial condition (so-called eigenvectors) from which such a system’s decay will exhibit only a single time constant, and for different eigenvectors the time constant will be different.
Thanks, Joe. But if you make your reservoirs of infinite volume, the fast-acting one will keep pulling CO2 out of the atmosphere at a fast rate … and will do so until there’s no CO2 left at all.
Second, the Bern model is absolutely designed to return to the status quo ante. That’s the whole point of it.
I understand that your point is that it’s possible to imagine a system with multiple time constants that doesn’t require partitioning … but what you haven’t done is design one that returns to the status quo ante without partitioning.
Be clear I’m not saying such a theoretical system doesn’t exist. I’m just saying you haven’t proven the point you started out to prove, because your system does NOT return to the status quo ante.
w.
Joe, for your theoretical system to work, the three CO2 sinks have to sequentially become saturated. If that happens, you are correct that there’s no need for a physical partition. You’re right, there’s nothing unphysical about that at all.
But that’s not happening in the real world. There is no evidence at all that there is any change in the rate of sequestration, no evidence of any saturation of the sinks.
And that’s the heart of the mystery to me … how can the Bern Model work without physical partition and also without involving the saturation of any of the three (or five in earlier incarnations) carbon dioxide sinks with different tau values?
My best to you,
w.
I don’t see that. As I say, we’ll have to agree to disagree. Hey, I’ve made math mistakes before; maybe I did this time.
Or maybe not.
Willis,
If there are multiple time constants for the CO2 uptake in different reservoirs (as is the case for the natural CO2 cycles), then the overall formula for the decay rate is as follows:
1/Tau = 1/Tau1 + 1/Tau2 + 1/Tau3 +…
Which has as result that Tau is faster than the fastest of all underlying time constants.
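For example, with the three Bern time constants quoted in the head post (2.57, 18 and 171 years), 1/Tau = 1/2.57 + 1/18 + 1/171 ≈ 0.45 per year, so the overall Tau is about 2.2 years, indeed faster than the fastest individual constant.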
So far the Bern model is right.
The problem is in the saturation of the different reservoirs; that is only true for the ocean surface. Due to chemistry, the fastest uptake (less than a year to a few years), by the ocean surface, is also the fastest saturated, see:
https://tos.org/oceanography/assets/docs/27-1_bates.pdf
Figure 3 and table 2 show the increase in DIC (dissolved inorganic carbon: CO2 + (bi)carbonates) over the past 30 years for Bermuda (BATS) and Hawaii (HOT). The percentage increase is around 10% of the percentage increase in the atmosphere over the same period.
At 10% of the change in CO2, the pCO2 of the ocean surface equals the pCO2 of the atmosphere (with a lag of a few years to follow the increase in the atmosphere).
For other reservoirs, mainly the deep oceans and vegetation, no saturation is in sight for the foreseeable future, and that is where the Bern model goes wrong.
Thanks, Ferdinand. I fear that your scientists in the link have done what I call “Looking where it ain’t.” The study areas don’t cover the important part of the ocean: the main carbon sink, in the Southern Ocean. There, the mixed layer is the deepest, and the cold surface waters are slowly sinking.
As a result, it doesn’t get saturated, and that’s why it’s the main CO2 sink.
w.
Willis, they had no stations in the SH, thus couldn’t look over the longer term, but the saturation indeed applies only to 90% of the surface. Indeed it is in the remaining 10% that the main sinking into the deep oceans happens, in the polar waters of both the SH and the NH.
Here the findings of Feely e.a. with lots of measurements all over the oceans:
https://www.pmel.noaa.gov/pubs/outstand/feel2331/maps.shtml
The Bern model only looks at the average surface and didn’t take into account the direct exchange with the deep oceans…
The Bern model originates from the so-called Box-Diffusion model by Oeschger et al.
https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.2153-3490.1975.tb01671.x
It is a system with atmosphere – surface sea layer (also called mixed layer) and deep sea.
The transport from mixed layer to deep sea is governed by diffusion (they call it “eddy diffusion”).
A correct mathematical analysis of the governing equations will give an impulse response function of “Gaussian type” – not exponentials.
Later people made numerical calculations of the impulse response function (based on the governing equations in the Box-Diffusion model). They fitted a five-term exponential expression to their calculated response.
The present exponential impulse response is thus just curve fitting. There is no “physics” in that expression.
The problem indeed is that they use a two-box model, where the main restriction is the mixed layer, but in fact they should have used a three-box model where a lot of CO2 is exchanged directly with the deep oceans via sink places near the poles and upwelling near the equator… These sink places are very deeply undersaturated for CO2, only restricted in mass flows.
One peculiar thing in Mr. Oeschger et al.’s work is that they emphasize that they introduce the “eddy diffusion” concept with the argument that the carbon cycle cannot be properly described by a box model.
Since then their work has been the base for a (four) box model!
The Box diffusion model works like this.
If one injects a pulse (of CO2) into the atmosphere, part of that pulse will be absorbed by the mixed layer.
In equilibrium there is roughly the same amount of CO2 in the mixed layer as in the atmosphere – meaning that CO2 is distributed between atmosphere and mixed layer as 1:1.
Then they introduce the Revelle factor, which says that excess CO2 will have a reduced solubility in the mixed layer. The Revelle factor is (according to Bolin) about 10, which means that the pulse will be distributed 10:1 – 90% stays in the atmosphere and 10% is absorbed in the mixed layer.
Then the slow eddy diffusion comes into work. That will drain the mixed layer of CO2 by diffusion. However, due to the Revelle factor, only 10% of the pulse (the CO2 in the mixed layer) is exposed to diffusion. The end result is a very slow process, and one can easily calculate the final status, which is that about 20% stays in the atmosphere forever.
If one changes the Revelle factor to 1 (which is in agreement with Henry’s law) something completely different happens. The absorption is now much faster, and the decay in the atmosphere will agree with the bomb curve.
The equilibrium value will be that about 2% of the pulse stays forever.
JonasW, the Revelle/buffer factor is real and that is observed:
If you look at the increase of DIC in the mixed layer of several stations at figure 3 and table 2:
https://tos.org/oceanography/assets/docs/27-1_bates.pdf
The increase in DIC for Bermuda (BATS) and Hawaii (HOT) is only 10% of the increase in the atmosphere in the same period.
Henry’s law is only for pure, dissolved CO2 and doesn’t apply to bicarbonates and carbonates, although these are in equilibrium with each other. But when CO2 increases and therefore (bi)carbonates increase, also H+ increases and pushes the equilibrium back to CO2.
For a doubling of CO2 in the atmosphere, pure, dissolved CO2 also doubles in water.
For fresh water, where 99% is pure CO2, that doubles.
For seawater, where only 1% is pure CO2, that doubles to 2% per Henry’s law, but the other parts of DIC don’t double…
All together, seawater dissolves 10 times more CO2 than fresh water…
Where they are wrong is that there is no uniform mixed layer: at 10% of the sea surface, there is a direct connection between the atmosphere and the deep oceans, bypassing the mixed layer: that is the case for the sinking waters near the poles and the upwelling waters near the equator. The cold sinking waters can take lots of CO2 with them, as they are highly undersaturated for CO2.
Based on the sink rate of the 13C/12C ratio over time, one can estimate the deep ocean CO2 flux which “dilutes” the human fingerprint caused by burning fossil fuels. That gives about 40 PgC/year direct exchange and is the largest near-permanent sink for our CO2 contribution.
The discrepancy in the years up to 1990 was probably from vegetation which was up to then neutral to a small source, after then an increasing sink for CO2…
The Revelle factor is calculated from salinity and pH. I have read a number of papers about the Revelle factor, and I do have big question marks about the way they calculate it.
There is no direct measurement of the Revelle factor.
Since the Revelle factor changes the solubility of CO2 in sea water, the right way to measure it is to measure the solubility.
It is straightforward to do such experiments. A tank partially filled with sea water. Add some extra CO2 to the air, and measure how much is absorbed.
Agreed; diffusion came up on this site when we discussed the Bern model eight years ago. (It also came up here ten years ago in connection with Mr. Eschenbach’s discussion of sub-surface temperature phase shifts.) As I said eight years ago, there’s plenty not to like about the Bern model. And I suppose you could say it’s non-physical in the sense that it’s a lumped-parameter model of a (partially) distributed-parameter phenomenon (although you may want to search this site for a block diagram by Mr. Engelbeen in which diffusion takes a back seat if memory serves).
But my comment was merely directed to the head post’s “How is the CO2 partitioned in nature?” comment. What’s not wrong with a lumped-parameter model is that it requires the atmosphere to partition its contents, i.e., to withhold some of its contents from the faster processes so that the slower ones get the share that the model dictates. As we see in the head post, though, the proposition that multiple time constants would require such a partition persists.
The Bern model is wrong.
As seemingly all pro-AGW models.
The author’s explanation is more credible.
“We have reasonably good information on CO2 emissions since 1850, available from Our World In Data.”
I am pretty sure that the Our World in Data emissions are only fossil based. Getting global datasets for both fossil and land-use for the full time period is a little bit annoying… two alternate options for the data:
The Global Carbon Budget (best quality, but only back to 1950): https://www.icos-cp.eu/science-and-impact/global-carbon-budget/2021 – see the spreadsheet 2021 Global Budget v0.6, tab Global Carbon Budget, column B global fossil and column C global land-use.
CDIAC is out of date, but has global fossil data from 1751-2014 (https://cdiac.ess-dive.lbl.gov/ftp/ndp030/global.1751_2014.ems) and land-use data from 1850 to 2005 (https://cdiac.ess-dive.lbl.gov/trends/landuse/houghton/1850-2005.txt).
I might suggest using the CDIAC data from 1850 to 1950, and the GCB from 1950 to 2020… there might be a small discontinuity between the two datasets, but it is probably the best I can think of.
Nope. The OWID data has both fossil plus landuse data. I put the source of the data in the endnotes. Might I suggest actually looking at the data before making untrue statements?
w.
I looked at this list from the Our World in Data link:
and didn’t see land use listed in there. Which led me to my conclusion.
But now I went to your dropbox link (I admit I did not look there before) and the last 4 data points in your dropbox file match the last 4 data points in the GCP file for fossil + land-use, so I admit I was wrong.
Okay, now I went to your dropbox link, and I have a different criticism. I think you implemented the bern cycle approximation improperly. For example, to calculate the concentration in 2020 (time period 170), you need to take the emissions from every year and use the Bern cycle in each year and sum them all. E.g., you have 4 equations to figure out the contribution of year zero emissions to year 170 concentrations:
a) =$B21*E$17
b) =$B21*F$17*EXP(-1*($A$191-$A21)/F$16)
c) =$B21*G$17*EXP(-1*($A$191-$A21)/G$16)
d) =$B21*H$17*EXP(-1*($A$191-$A21)/H$16)
(i.e., contribution from time t = emissions(t) × percentage × e^(−(T−t)/τ), where $B21 is emissions(t), row $17 holds the percentages, $A$191 is the current time T, $A21 is the time of emissions, and row $16 holds the time constants τ)
So the contribution of year zero emissions to year 170 concentrations is 0.183 GtC. The contribution of year one emissions to year 170 concentrations is 0.189 GtC. and so on until the contribution of year 169 emissions to year 170 concentrations is 9.74 GtC. Sum those all up and you get a total of 288 GtC for the contribution of year zero through year 170 emissions to year 170 concentrations. Divide 288 GtC by 2.13 to convert to ppm and you get 135 ppm. Add 135 ppm to the pre-industrial concentration of 278 ppm and you get 413 ppm. Which is a decent match to current concentrations…
(the bern cycle approximation isn’t a real carbon cycle model. A real carbon cycle model would operate more like what you implemented, where you take the previous year’s concentrations and emissions and use that to figure out the next year’s concentration)
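In code, the same convolution looks something like this Python sketch (Python instead of Excel; parameter values as in the head post, pre-industrial 278 ppmv and 2.13 GtC per ppmv as above, and the function name is just illustrative):

```python
import math

A = [0.152, 0.253, 0.279, 0.316]   # Bern pulse fractions
TAU = [None, 171.0, 18.0, 2.57]    # years; None = the no-decay fraction
GTC_PER_PPMV = 2.13
PREINDUSTRIAL = 278.0              # ppmv

def bern_concentration(emissions, year):
    """CO2 (ppmv) at index `year`, convolving every year's emissions
    (GtC/year) up to that point with the Bern impulse response."""
    airborne_gtc = 0.0
    for t, e in enumerate(emissions[:year + 1]):
        age = year - t
        for a, tau in zip(A, TAU):
            airborne_gtc += e * a * (1.0 if tau is None else math.exp(-age / tau))
    return PREINDUSTRIAL + airborne_gtc / GTC_PER_PPMV
```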
Here’s my third and final comment. “How does nature know the difference? How is the CO2 partitioned in nature? What prevents the CO2 that’s still airborne from being sequestered by the fast-acting CO2 sinks?”
It is important to understand that the individual CO2 molecules are bouncing from the atmosphere to surface ocean to atmosphere to ecosystem on a short timescale (atmospheric lifetime about 5 years, I think?). But the “perturbation” lifetime is much longer.
Thought example: take two jars, each with 100 golf balls. Every minute, you take 10% of each jar and swap them. On average, a golf ball will move to the other jar once every 5 minutes or so. Now add 10 new golf balls to the “atmosphere” jar. So now there are 110 golf balls in the atmosphere, 100 in the ocean. After one minute, 11 golf balls move to the ocean, and 10 back to the atmosphere. Now it’s 109 to 101. Then 10.9 v. 10.1, leaving 108.2 to 101.8. Keep repeating, ad nauseam. Eventually you end up with 105 balls in each jar. You could probably approximate this jar problem with 50% of any additional balls having a lifetime of 5 minutes, and 50% of any additional balls having an infinite lifetime.
And we can make a minor modification which will explain the seasonal cycle: have the atmosphere give the ocean its balls on the minute exactly, whereas the ocean gives the atmosphere its balls on the half-minute. So now the “seasonal” pulse is 10 balls, with 100% disappearing in a year, whereas the “human” pulse of 10 balls disappears more slowly as mentioned above. It isn’t that nature knows the difference but rather that the seasonal pulse is just sloshing carbon around within the system, whereas burning fossil fuels or releasing soil carbon through land-use takes carbon from “outside” the system and puts it into the system, and it takes time for it to trickle into all the system’s corners.
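A quick Python sketch of the jar experiment (pure illustration, numbers as described above):

```python
def jars(atmosphere=110.0, ocean=100.0, swap=0.10, minutes=60):
    """Every minute, 10% of each jar's golf balls moves to the other jar."""
    for _ in range(minutes):
        to_ocean = atmosphere * swap       # atmosphere -> ocean
        to_atmosphere = ocean * swap       # ocean -> atmosphere
        atmosphere += to_atmosphere - to_ocean
        ocean += to_ocean - to_atmosphere
    return atmosphere, ocean

print(jars(minutes=1))    # (109.0, 101.0), as in the walk-through
print(jars(minutes=60))   # both jars converge toward 105
```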
Made a similar point mathematically eight years ago: https://wattsupwiththat.com/2013/12/02/is-the-bern-model-non-physical/
also implies there is likely some new equilibrium at which CO2 levels will level off, shortly after human contributions reach the maximum economically extractable CO2 impulse
and then CO2 starts to fall and oh hey look the interglacial is ending too
wow look at these deals on Canadian real estate
Willis – I agree. The Bern model is the central flaw in the modelling on which all the other hysterical nonsense is based. Any competent modeller would take one look at an Impulse Response Function like this and know that there is something fundamentally wrong with his model. Instead, such models have provided the basis for all the hand wringing about climate change being “catastrophic” and “irreversible”. We have done a statistical analysis of real observations similar to yours and reached the same conclusion. In the process we realized WHY the Bern model is wrong and we can demonstrate this mathematically. (It is because all such models are deterministic “conveyor belt” models which ignore turbulence.) We are presently writing this up for publication. We don’t know your email address but would value your input before we send it off to a journal.
In the not very far distant past Roy Spencer blogged a CO2 model that had the growth of concentration conforming to an S curve that flattened out around 560 or 580 ppm. I couldn’t find it on a quick search.
Fig. 3. in
https://www.drroyspencer.com/2019/04/a-simple-model-of-the-atmospheric-co2-budget/
Thanks a lot. I think that Willis should look at it and see if it works with his model.
Roy and I are measuring somewhat different things.
w.
Dr. Roy himself: February 16, 2022 5:52 am
For the interested go top Dr. Roy’s comment below February 16, 2022 5:52 am
Willis is on it. See his comment at 4:44 pm above.
All I can say is, thank you again Willis for an informative, and humorous post. While, as a layman lacking in the mathematical skills you have – meaning that I lack the ability to fully comprehend all that you posted – I still learn a lot from what you have to say. I especially liked “Now, suppose during that time, a volcano blows its top and dumps what we used to call a “metric buttload” of CO2 into the atmosphere”…
Buttload seems to me an appropriate term for much of what passes as climate “science”.
By the way, if you manage a trip back to northeast Florida, I hope you will consider allowing me to buy you and your gorgeous ex-fiancée a beverage of your choice and dinner at a place of your choosing in Fernandina Beach – my way of thanking you for the education you offer.
Thanks for your kind words, Barnes, and I do hope to take you up on your kind offer someday.
My best to you and yours,
w.
Slightly OT but this is a SELL for Octopus Energy
Pressure washers may be the origination point of the urgency sensor, now mandatorily placed in every appliance: the more you need it, the more likely that the appliance will fail.
Trying to follow the lagging calculation: does this mean that we are now dealing with the supposed effects of CO2 emissions from the late 1980s? If actual warming based on CO2 is measured at less than half of best-case projections — despite amplification from unpredictable natural events (e.g. volcanic eruptions) — would not this circumstance invalidate the predictive models?
Willis,
Several points:
The seasonal swings are temperature driven, not pressure driven: the fast growth of new leaves in spring plus summer uptake gives the drop in CO2, which is larger than the simultaneous release from the warming oceans.
In fall/winter the fluxes are reversed. These processes are hardly influenced by any extra CO2 pressure in the atmosphere.
The Bern model expects that the different reservoirs (oceans, vegetation, sediments, …) all have different uptake rates, which is true, but also that they all saturate (at different levels).
The latter is only true for the ocean surface, which saturates at about 10% of the change in the atmosphere: that is the Revelle/buffer factor.
That is not true for vegetation, where the optimum growth for all trees and most other plants which follow the C3-cycle is above 1000 ppmv.
That is also not true for the deep oceans, which are highly undersaturated for CO2. The only problem is that these are largely isolated from the atmosphere: the only direct exchanges between the atmosphere and the deep oceans are via the sink places near the poles and the upwelling near the equator.
That is the main factor in the rather constant e-fold decay rate of ~49 years over the past 60+ years, or the ~34-year half-life…
The problem with the Bern model is that it was originally calculated for 3000 PgC and 5000 PgC extra in the atmosphere, which correspond respectively to burning all available gas and oil, and more still if lots of coal are also used. In that case, of course, even the deep oceans could get more and more saturated.
With the current total of around 400 GtC emitted by humans since about 1850, that is only about 1% of what is already in the deep oceans. When everything is in equilibrium, that gives some 3 ppmv extra in the atmosphere, and that is all…
The second problem is that they use the average of the ocean surface for the whole surface, while the sink places are extremely undersaturated for CO2, thus bypassing the rest of the surface…
See: http://www.pmel.noaa.gov/pubs/outstand/feel2331/maps.shtm
That is the general problem of working with averages…
There were some interesting discussions in the past about the Bern model between Fortunat Joos (inventor of the model) and Ir. Peter Dietze:
http://www.john-daly.com/dietze/cmodcalc.htm
and
https://www.john-daly.com/dietze/cmodcalD.htm
I have never found anyone who can explain this to me. If this were true, it seems to me that every volcanic eruption would lead to a new and higher permanent level of airborne CO2
Well the settled science explanation is that anthropogenic CO2 is zombie apocalypse CO2. Other CO2 floats around in the air for a few years before getting sucked into plant stomata or silicate weathered into rock.
But not so man-made zombie CO2. If this settles from the atmosphere, it waits for a suitable cinematic interval and then zombie-rises back into the air!
This is done by the magic power of the carbon Night King who periodically raises his arms slowly, just like Vladimir Furdik in Game of Thrones, at which all grounded zombie CO2 molecules then rise en masse into the night air.
I thought he was bad enough with the Wights.
As I wait for the last two books
He put the effort in at least – it was half a day’s work to get all his make-up sorted, even longer for the “children”.
“This puts the halflife of a pulse of CO2 into the atmosphere at about 34 years …”
I doubt that very much, Willis. Look at plot #7 below: pulses of 12-month ∆ Mauna Loa CO2 (green) very closely follow the 12-month ∆ SST ≥ 25.6°C (r = 0.84, lag = 5 months), within months, not 34 years:
Your model is simple curve-fitting without ocean physics. The Bern model and your model are both wrong because of the underlying assumption that there is no important and applicable temperature dependence for CO2 outgassing/sinking. Sure, there is more CO2 because of emissions, but it is the ocean temperature which “decides” how to partition the sources and sinks for all CO2, according to Henry’s Law of the solubility of gases.
Most of the outgassing happens inside the yellow boundary within the tropics:
Bob, the response of CO2 to ocean surface temperatures is not more than 8 ppmv/K for Antarctic temperatures or 16 ppmv/K for global temperatures.
That is what Henry’s law says and is observed all over the oceans.
That means that the 0.8 K temperature increase since the LIA is good for about 13 ppmv CO2 increase in the atmosphere, not 120 ppmv as is observed.
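Written out, that back-of-envelope arithmetic is simply:

$$\Delta\text{CO}_2 \approx 16\ \text{ppmv/K} \times 0.8\ \text{K} \approx 13\ \text{ppmv} \ll 120\ \text{ppmv observed}$$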
Further, the 34 years is not the result of curve fitting, but of observations.
When there is a linear relationship between a disturbance and the resulting change (Le Chatelier’s principle), the e-fold decay rate (the time for the residual disturbance to fall to 1/e of the original) is:
tau = cause / effect
tau = 120 ppmv (extra CO2 pressure) / 2.4 ppmv/year (net sink rate) = 50 years
Or a half-life of about 35 years.
The net sink rate is easy to calculate:
increase in the atmosphere = human emissions + natural emissions – natural sinks
2.1 ppmv/year = 4.5 ppmv/year + X – Y
X – Y = –2.4 ppmv/year
Whatever the exact size of X or Y, the net sink rate is known with sufficient accuracy.
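Putting the two steps together in one place:

$$\tau = \frac{\text{disturbance}}{\text{net response}} = \frac{120\ \text{ppmv}}{2.4\ \text{ppmv/yr}} = 50\ \text{yr}, \qquad t_{1/2} = \tau \ln 2 \approx 35\ \text{yr}$$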
That the net sink rate follows temperature changes with a small lag is true, but that is only the variability (+/- 1.5 ppmv) around the trend (90 ppmv) since Mauna Loa started its measurements. It has nothing to do with the trend itself, which is near zero in the derivatives. The temperature derivative is enhanced by a factor of 3.5 to show amplitudes similar to those of the CO2 variability:
I’ll leave the Berning to the experts while we lay folk and sundry deplorables take comfort that the worst dooming is now all under control-
‘Worst-case’ climate predictions are ‘no longer plausible,’ study (msn.com)
As to more serious problems, like the longevity of domestic pressure washers, thou shalt need to comprehend that commercial recirculating pressure pumps of various flow rates and pressures consist of quality brass bodies with triple-cylinder ceramic pistons, like so-
Home page (interpump.com.au)
with appropriate fossil-fuelled neddies to drive them, but alas they are unlikely to frequent Home Depot, Bunnings, etc.
How does the Bern model handle the pulse of C-14 from atmospheric nuclear weapons testing during the mid-20th century?
The Bern model does not matter here. The 14CO2 pulse from the atmospheric bombs was measured directly. The 1963 test ban marks the beginning of the pulse decay. 14CO2 peaked about a year later due to global mixing of the pulse. Then 14CO2 fell by half from 1965 to 1975.
This observation directly quantifies the tau at 18 to 20 years, with the 1/e point occurring about 1985.
Almost none of the CO2 that existed in the Earth’s atmosphere in 1964 remains today.
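For reference, a half-life and an e-folding time are related by tau = half-life / ln 2. Taking the quoted 1965-to-1975 halving at face value gives

$$\tau = \frac{t_{1/2}}{\ln 2} = \frac{10\ \text{yr}}{0.693} \approx 14\ \text{yr}$$

which is somewhat shorter than the 18 to 20 years quoted above.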
I say again. People are conflating two very different matters. One is the “residence time”, how long an individual CO2 molecule stays airborne.
The other is the “pulse decay time”, how long it takes a pulse of CO2 into the atmosphere to be sequestered so that the atmospheric CO2 returns to the pre-pulse levels.
The two have NOTHING to do with each other. They have totally separate and different time constants “tau”.
w.
Willis,
The tau of 18 to 20 years for the 14CO2 bomb pulse is not the residence time, and indeed is much shorter than for a 12CO2 pulse, for a different reason: there is an enormous lag (~1000 years) between what goes into the deep oceans and what returns.
Thus while the 14CO2 pulse was at its maximum around 1960, and was partly taken up by the deep oceans, what returned from the deep oceans in the same year had the isotopic composition of long before the 14CO2 pulse.
That meant that in 1960 some 97.5% of all 12CO2 returned, but only 45% of all 14CO2, and thus a 14CO2 pulse decays much faster than a 12CO2 pulse…
If the 15.2% were correct, there would be more C14 in the atmosphere.
A good explanation of the non-physical aspects of the Bern model is found here: https://youtu.be/rohF6K2avtY. At about 57 minutes, Salby shows part of the math error involved, and more of it at about 1:05.
The Bern Model was falsified by OCO-2 data team papers published in 2017 in Science magazine. The OCO-2 lead scientist just retired in January from the OCO-2 analysis group at NASA.
NASA no longer cares about global CO2 measured data. Only models.
As a UK resident I was going to ask why it is allowed to build houses with combustible cladding in a wildfire risk area.
But then I remembered we do that as well: Grenfell Tower burnt down, and thousands of other high-rise buildings are having their combustible cladding removed.
Built 50 years ago when people didn’t think about such things.
w.
There is a very simple reason why 15% (sometimes 20%) stays airborne. The same reason explains the slow decay of a pulse addition to the atmosphere.
The reason is the Revelle factor. It changes the solubility (residence time) of excess carbon in seawater.
The argument goes like this:
In equilibrium (= the pre-industrial level), a CO2 molecule on average stays about 10 years in the surface sea layer. When the CO2 concentration in the surface sea layer increases, the residence time of the addition decreases drastically. A normal value of the Revelle factor is about 10. This means that the residence time for an addition is about 1 year before it re-enters the atmosphere.
They just change the rate equations:
Outflow from surface sea to atmosphere at equilibrium = amount of carbon dioxide in surface layer / 10 years.
Outflow for the addition = additional amount of carbon dioxide in surface layer / 1 year.
The effect is that the solubility of excess CO2 is 10 times lower than the equilibrium solubility.
The surface sea does not “want to” dissolve the additional CO2, while it happily dissolves the equilibrium CO2. They call this “buffering”.
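In symbols (my notation, not the commenter’s): with C_eq the equilibrium carbon content of the surface layer and ΔC the added carbon, those two rate equations read

$$F_{\text{out,eq}} = \frac{C_{\text{eq}}}{10\ \text{yr}}, \qquad F_{\text{out,add}} = \frac{\Delta C}{1\ \text{yr}}$$

so the addition is expelled roughly ten times faster than the equilibrium carbon, which is the factor-of-ten reduction in effective solubility described above.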
All the papers I have read about the Revelle factor are theoretical, not measurements. Sometimes you can see maps of the Revelle factor in the oceans. Those are calculated values, but at first glance it looks as if they were measured.
It would be very straightforward to measure the Revelle factor: take some seawater and add some CO2 to the atmosphere above it. That will give the solubility.
I am pretty sure that experiments would show that the Revelle factor is 1. If so, the IPCC carbon cycle model is totally wrong.
JonasW, the Revelle factor is measured at several seawater sampling stations over time, and the data indeed show a much smaller increase of CO2 derivatives in seawater than of CO2 in the atmosphere; see my explanation at:
https://wattsupwiththat.com/2022/02/15/feeling-the-bern/#comment-3456103
The problem is that Henry’s law only applies to pure CO2 dissolved in water, not to bicarbonates and carbonates. If CO2 doubles in the atmosphere, pure CO2 in the water doubles too, but in seawater that is only 1% of all inorganic carbon species; thus that 1% doubles to 2%.
See the Bjerrum plot:
Are you sure it is measured?
I think it is calculated: they measure salinity and pH and calculate the Revelle factor from that.
I strongly doubt the calculations.
The Revelle factor changes the solubility. It says that excess CO2 has a lower solubility in seawater than the “equilibrium” CO2 concentration.
A bucket of seawater with a hood above it: inject some extra CO2 and measure how much CO2 stays in the air -> that will give a direct value of the Revelle factor.
I’ve never understood the Bern model, although I’ll admit never trying very hard. The interesting thing about your example of major volcanic eruptions is that they lead to a decrease (not increase) in atmospheric CO2. I stumbled on that result in my model of yearly CO2 changes which assumes nature removes atmospheric CO2 at a rate proportional to the “excess” over some background level where sources and sinks are equal (see Fig. 5):
https://www.drroyspencer.com/2019/04/a-simple-model-of-the-atmospheric-co2-budget/
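For readers who want to play with that idea, here is a minimal sketch of such a removal-proportional-to-excess budget. The background level, removal fraction, and emissions series below are illustrative stand-ins, not Dr. Spencer’s fitted values; those are in his post:

```python
# Simple CO2 budget: each year, add that year's emissions and let
# nature remove a fixed fraction of the "excess" above a background
# level where natural sources and sinks balance.
BACKGROUND = 295.0  # ppmv; illustrative equilibrium level
REMOVAL = 0.02      # fraction of the excess removed per year; illustrative

def step(co2, emissions):
    """Advance the atmospheric CO2 level (ppmv) by one year."""
    return co2 + emissions - REMOVAL * (co2 - BACKGROUND)

co2 = 311.0  # ppmv, roughly the 1950 level
for year in range(1950, 2021):
    emissions = 1.0 * 1.02 ** (year - 1950)  # ppmv/yr, growing ~2%/yr
    co2 = step(co2, emissions)
print(f"modeled 2020 CO2: {co2:.0f} ppmv")
```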
It turns out this effect of volcanoes on enhancing photosynthesis has been published before… the extra diffuse sky radiation after an eruption penetrates deeper into vegetation canopies;
https://www.researchgate.net/publication/10832080_Response_of_a_Deciduous_Forest_to_the_Mount_Pinatubo_Eruption_Enhanced_Photosynthesis
Fascinating, Dr. Roy. Now I’ll have to look at that, hang on … OK, here’s what I find.
The effect is not large, but it’s definitely visible. Well spotted.
w.
good stuff, Willis
The Bern Model says that 15.2% of the 1.3 ppmv anthropogenic CO2 pulse will stay in the air forever … but the ~ 6 ppmv pulse is gone very quickly. So how does nature know the difference?
The annual increase that is reversed by an annual decrease comes from oscillatory sources/sinks (mostly northern hemisphere vegetated areas). The Bern model is not about oscillatory sources/sinks.
Sorry, but the Bern Model says that of any pulse of carbon into the atmosphere, 15% stays aloft. It says nothing about anything being off limits.
w.
Oscillating sources and sinks of CO2 with an annual cycle have a time period much shorter than the fastest-decaying component of any version of the Bern model. The fastest exponentially-decaying component properly in any version of the Bern model, no matter how many exponential terms the model is expanded or refined to have, is not shorter than the decay (sinking into CO2 sinks) of individual CO2 molecules, as indicated by the exponential decay of the carbon isotope from the nuclear bomb tests, with a time constant (tau) mostly claimed at 7-10 years, maybe as little as 6 years in a few claims, although I have also heard that shorter figure described as a half-life rather than an e-folding time. The annually-oscillating source-sink (or set of them) is not a model-disproving exception, but merely the cause of an annual ripple that gets added to the multi-year exponential trends, in both Bern models and in non-Bern models that have a single exponential decay term.
Donald,
Different processes at work: the Bern model is for the distribution and decay of any excess CO2 level (thus pressure), while the seasonal swings are temperature driven.
As the opposite δ13C and CO2 changes show, vegetation is the dominant responder to temperature changes over the seasons, and the effect is a huge drop in CO2 when temperatures increase, no matter the CO2 pressure in the atmosphere.
http://www.ferdinand-engelbeen.be/klimaat/klim_img/seasonal_CO2_d13C_MLO_BRW.jpg
The same holds for year-by-year changes (Pinatubo, El Niño): again vegetation is the dominant effect, but then in the opposite direction: more CO2 with higher temperatures (and drought in the Amazon) and less CO2 with lower temperatures and more light diffusion (Pinatubo).
For longer time periods (decades to multi-millennia), the oceans are the dominant effect…
All these processes work nearly independently of each other and are hardly influenced by any extra CO2 pressure in the atmosphere. The decay of extra CO2 pressure is mainly due to the deep ocean cycle, which is highly undersaturated for CO2 but has only a limited exchange with the atmosphere…
One way of coming up with the permanent component of an addition of CO2 to the atmosphere, leaving out any effect of climate sensitivity, is the ratio of the amount of carbon in atmospheric CO2 (back when it was 280 ppmv, about 424 ppm by mass, or about 595 gigatons of carbon) to the amount in the oceans (39,000 gigatons of carbon, not changed much in percentage terms from when atmospheric CO2 was 280 ppmv). In this oversimplification, about 1.5% of a pulse of CO2 emitted into the atmosphere remains in the atmosphere after the ocean water gains as much as it is ever going to, assuming the added CO2 does not warm the ocean. This neglects CO2 removal in ways other than dissolution in ocean water, such as transfer to the lithosphere (which is on a time scale longer than the designers of the Bern model are concerned with).
Also, the Bern model is oversimplified with a discrete number of exponential decay terms. It gets more accurate as more terms are added, and the subset of added terms with decay times longer than the longest one already in use would take away some of the “permanent” component (which is 15.2% in the noted version of the Bern model that has only three exponential decay terms).
Another thing: the permanent percentage of an addition of CO2 to the atmosphere varies with climate sensitivity. Demonstrating that a figure for it is excessive does not disprove the Bern model; it is merely evidence of using parameters that depend on a climate sensitivity figure (such as 3 degrees C per 2xCO2) that is greater than the actual one.
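The arithmetic behind that ~1.5% figure:

$$\frac{595\ \text{GtC}}{595\ \text{GtC} + 39{,}000\ \text{GtC}} \approx 0.015 = 1.5\%$$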
Regarding Figure 3: I just noticed it shows that the particular Bern model being used says more CO2 should have been removed from the atmosphere by nature so far than has been the case, despite its excessively large permanent component of CO2 remaining in the atmosphere (the yellow “actual” curve looks like a little over 420 PPMV, the red “Bern Model” curve like about 365 PPMV). I see this not as evidence that the Bern model in general is incorrect, but that the parameters of this particular version (especially one or both of the shortest-term components) are incorrect. That is, assuming there are no errors whose correction would have the modeled CO2 higher than shown by the red curve.
Oops, I just looked again: I was a little incorrect when I said “a little over 420 PPMV”; it’s a little under.
Regarding Figure 4, which shows atmospheric CO2 concentration closely matching a curve generated by modeling the decay of each year’s emissions as exponential with a time constant tau of 49 years: if the atmospheric CO2 excess above the pre-industrial level and the emissions are both growing exponentially at the same rate, then the record can be matched by a wide range of decay-curve shapes for each year’s emissions. That is a property of two series growing at the same exponential rate. I expect the workable decay-curve shapes include a Bern one. Another workable shape is decay that happens within a year, removing roughly 50% of each year’s emissions, with the remainder being permanent; but the ability of that curve to reproduce something resembling Figure 4 does not prove it is correct. Because a variety of decay curves can give a result that resembles Figure 4, such a figure cannot prove the correctness of any particular decay curve, whether a single exponential or a Bern one whose parameters are chosen to fit.
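A minimal sketch of that degeneracy, with synthetic emissions growing ~2% per year (both decay shapes and all parameters here are invented for illustration, not fitted to the real record):

```python
import math

# Synthetic emissions growing ~2%/yr over a 171-year record.
emissions = [0.1 * math.exp(0.02 * t) for t in range(171)]

def final_excess(decay):
    """Excess CO2 at the last year: every year's pulse, decayed by its age."""
    last = len(emissions) - 1
    return sum(e * decay(last - t) for t, e in enumerate(emissions))

def exponential(age):
    # Shape A: single exponential decay with tau = 49 years.
    return math.exp(-age / 49.0)

def half_permanent(age):
    # Shape B: half of each pulse gone within a year, the rest permanent.
    return 1.0 if age < 1 else 0.5

print(f"single exponential (tau=49): {final_excess(exponential):.1f} ppmv")
print(f"half-fast / half-permanent:  {final_excess(half_permanent):.1f} ppmv")
# Both shapes end up within a few ppmv of each other, so a good fit
# to an exponentially growing record cannot pick out the true curve.
```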
Regarding “I find the time constant tau to be ~49 years” just above Figure 4: How do you (or do you ?) reconcile this with “The calculation used best-fit values of 59 years as the time constant (tau)” in https://wattsupwiththat.com/2015/04/19/the-secret-life-of-half-life/ ?
I’m looking at a dataset three times longer in this post: 171 years (1850-2020) versus 55 years (1959-2013).
w.
So, I suspect at this point that the exponential growth of manmade emissions and of the atmospheric excess above 283-285-or-whatever ppmv both reasonably support an exponential decay curve for annual emissions with only small errors, with the decay rate of the “atmospheric excess” slowing a little as the exponential growth rates of emissions and of the excess slow a little. As I see it, the curve-fitting gets fairly good (so far) with a time constant tau of either 49 or 59 years, where the slower one works better for fewer past years and the faster one works better for more past years. Am I getting all of this correctly so far?
I see the small slowdown in the exponential-decay model of nature removing CO2 from the atmosphere (the exponent decreasing by a similarly small amount during this time of exponential growth of manmade CO2 emissions) as being consistent with Bern models. I also see consistency with more than one model, including the one I mentioned in a previous comment, where nature’s removal of “excess CO2” gets about halfway done in a year, with whatever added CO2 remains after that being permanent. A wide variety of models of the decay of excess atmospheric CO2 fit the data on growth of atmospheric CO2 and on human transfers of lithospheric carbon into atmospheric CO2 fairly well; they don’t disprove each other until they greatly disagree with each other.
Hi Willis,
I’m a little late to the party here, but I note that the smallish discrepancies between the actual atmospheric concentration and the modeled concentration might mostly disappear if your model included an ocean surface temperature effect. As Ferdinand has pointed out many times, there is an influence of average ocean surface temperature on atmospheric CO2, though it is not large… maybe 5 or 10 ppm per degree.
Dear Mr. Eschenbach
I agree with you 100%. I made my own calculations concerning the e-time and the Bern Model.
I enclose an abstract from them below; I hope you enjoy it.
New basic assumptions to evaluate the e-time τ of CO₂ in the atmosphere
So we are pursuing the new approach:
· The historical CO₂ content, based on natural CO₂ emissions/absorptions, follows the same physical exchange principle as the anthropogenic emissions.
· There is no justification for assuming that pre-industrial emissions were constant.
· There is no reason to believe that the airborne fraction is constant.
· There is no reason to believe that the Residence Time is constant.
· The Bern model served as a theoretical model for comprehension, but its use for forecasts and simulations of real conditions must be rejected.
· The increase of CO₂ from 280 ppm to 411 ppm is caused by several effects, among them an increase of biomass by up to 30%; see (34) C. Huntingford et al. This also increases the seasonal biomass cycle in the northern hemisphere.
· The increased biomass and the increased CO₂ partial pressure cause an additional increase in absorption and emission.
· We had a temperature increase of 0.8°C from 1975 to 2020. This results in an increased emission of CO₂ on the order of 2 ppm/a; Takahashi et al (33).
· The results of the ¹⁴C study ((3) Skrable et al) were included in our thesis.
· The residence time τ can consist of different components.
· Since there was an equilibrium between Eland and Eocean before 1750, we assume that the ocean shows an increase in absorption due to the increased CO₂ partial pressure, as assumed by the IPCC in Fig 6.1.
· The increase of EDCNF was taken in tabulated form from (3) Skrable et al.
· Biomass combustion, which has risen sharply since the 1970s in particular, must be taken into account in the entire CO₂ budget. These emissions are not included in ELUC and ENF (BP Statistics (4)).
Finally, we found an e-time of 3.4 years. If we calculate with these τ-values and data from the Global Carbon Budget, MLO, EIA and EDGAR, and formula (3), we obtain a perfect match of measured and calculated CO2 concentrations between 1750 and 2020.