Guest Post by Willis Eschenbach
There seem to be a host of people out there who want to discuss whether humanoids are responsible for the post ~1850 rise in the amount of CO2. People seem madly passionate about this question. So I figure I’ll deal with it by employing the method I used in the 1960s to fire off dynamite shots when I was in the road-building game … light the fuse, and run like hell …
First, the data, as far as it is known. What we have to play with are several lines of evidence, some of which are solid, and some not so solid. These break into three groups: data about the atmospheric levels, data about the emissions, and data about the isotopes.
The most solid of the atmospheric data, as we have been discussing, is the Mauna Loa CO2 data. This in turn is well supported by the ice core data. Here’s what they look like for the last thousand years:
Figure 1. Mauna Loa CO2 data (orange circles), and CO2 data from 8 separate ice cores. Fuji ice core data is analyzed by two methods (wet and dry). Siple ice core data is analyzed by two different groups (Friedli et al., and Neftel et al.). You can see why Michael Mann is madly desirous of establishing the temperature hockeystick … otherwise, he has to explain the Medieval Warm Period without recourse to CO2. Photo shows the outside of the WAIS ice core drilling shed.
So here’s the battle plan:
I’m going to lay out and discuss the data and the major issues as I understand them, and tell you what I think. Then y’all can pick it all apart. Let me preface this by saying that I do think that the recent increase in CO2 levels is due to human activities.
Issue 1. The shape of the historical record.
I will start with Figure 1. As you can see, there is excellent agreement between the eight different ice cores, including the different methods and different analysts for two of the cores. There is also excellent agreement between the ice cores and the Mauna Loa data. Perhaps the agreement is coincidence. Perhaps it is conspiracy. Perhaps it is simple error. Me, I think it represents a good estimate of the historical background CO2 record.
So if you are going to believe that this is not a result of human activities, it would help to answer the question of what else might have that effect. It is not necessary to provide an alternative hypothesis if you disbelieve that humans are the cause … but it would help your case. Me, I can’t think of any obvious other explanation for that precipitous recent rise.
Issue 2. Emissions versus Atmospheric Levels and Sequestration
There are a couple of datasets that give us amounts of CO2 emissions from human activities. The first is the CDIAC emissions dataset. This gives the annual emissions (as tonnes of carbon, not CO2) separately for fossil fuel gas, liquids, and solids. It also gives the amounts for cement production and gas flaring.
The second dataset is much less accurate. It is an estimate of the emissions from changes in land use and land cover, or “LU/LC” as it is known … what is a science if it doesn’t have acronyms? The most comprehensive dataset I’ve found for this is the Houghton dataset. Here are the emissions as shown by those two datasets:
Figure 2. Anthropogenic (human-caused) emissions from fossil fuel burning and cement manufacture (blue line), land use/land cover (LU/LC) changes (white line), and the total of the two (red line).
While this is informative, and looks somewhat like the change in atmospheric CO2, we need something to compare the two directly. The magic number to do this is the number of gigatonnes (billions of tonnes, 1 * 10^9) of carbon that it takes to change the atmospheric CO2 concentration by 1 ppmv. This turns out to be 2.13 gigatonnes of carbon (C) per 1 ppmv.
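For those who want to follow along at home, the conversion is trivial to code up. A minimal sketch in Python, using the 2.13 figure given above (the function names are mine):

```python
# Conversion between atmospheric CO2 concentration (ppmv) and mass of
# carbon (gigatonnes C), using the 2.13 GtC-per-ppmv figure from the text.

GTC_PER_PPMV = 2.13  # gigatonnes of carbon per 1 ppmv of atmospheric CO2

def ppmv_to_gtc(ppmv):
    """Convert a CO2 concentration change in ppmv to gigatonnes of carbon."""
    return ppmv * GTC_PER_PPMV

def gtc_to_ppmv(gtc):
    """Convert a mass of carbon in gigatonnes to a CO2 change in ppmv."""
    return gtc / GTC_PER_PPMV

# The ~100 ppmv rise since pre-industrial times (~280 to ~380 ppmv)
# corresponds to roughly 213 GtC added to the atmosphere.
print(ppmv_to_gtc(380 - 280))
```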
Using that relationship, we can compare emissions and atmospheric CO2 directly. Figure 3 looks at the cumulative emissions since 1850, along with the atmospheric changes (converted from ppmv to gigatonnes C). When we do so, we see an interesting relationship. Not all of the emitted CO2 ends up in the atmosphere. Some is sequestered (absorbed) by the natural systems of the earth.
Figure 3. Total emissions (fossil, cement, & LU/LC), amount remaining in the atmosphere, and amount sequestered.
Here we see that not all of the carbon that is emitted (in the form of CO2) remains in the atmosphere. Some is absorbed by some combination of the ocean, the biosphere, and the land. How are we to understand this?
To do so, we need to consider a couple of often conflated measurements. One is the residence time of CO2. This is the amount of time that the average CO2 molecule stays in the atmosphere. It can be calculated in a couple of ways, and is likely about 6–8 years.
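The residence time is just stock divided by gross flux. Here's the back-of-envelope version, using round numbers (~750 GtC in the atmosphere, ~100 GtC/yr of gross uptake by oceans and biosphere); both figures are assumptions for illustration, and different flux accountings give somewhat different answers in the same few-year ballpark:

```python
# Back-of-envelope residence time: atmospheric carbon stock divided by the
# gross annual flux out of the atmosphere. Both values below are round-number
# assumptions for illustration; larger gross-flux estimates give shorter times.

atmospheric_stock_gtc = 750.0   # total carbon in the atmosphere, GtC (assumed)
gross_outflux_gtc_yr = 100.0    # gross annual uptake, GtC/yr (assumed)

residence_time_years = atmospheric_stock_gtc / gross_outflux_gtc_yr
print(residence_time_years)  # 7.5
```

With these round numbers the answer lands within the 6–8 year range quoted above.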
The other measure, often confused with the first, is the half-life, or alternately the e-folding time of CO2. Suppose we put a pulse of CO2 into an atmospheric system which is at some kind of equilibrium. The pulse will slowly decay, and after a certain time, the system will return to equilibrium. This is called “exponential decay”, since a certain percentage of the excess is removed each year. The strength of the exponential decay is usually measured as the amount of time it takes for the pulse to decay to half its original value (half-life) or to 1/e (0.37) of its original value (e-folding time). The length of this decay (half-life or e-folding time) is much more difficult to calculate than the residence time. The IPCC says it is somewhere between 90 and 200 years. I say it is much less, as does Jacobson.
Now, how can we determine if it is actually the case that we are looking at exponential decay of the added CO2? One way is to compare it to what a calculated exponential decay would look like. Here’s the result, using an e-folding time of 31 years:
Figure 4. Total cumulative emissions (fossil, cement, & LU/LC), cumulative amount remaining in the atmosphere, and cumulative amount sequestered. Calculated sequestered amount (yellow line) and calculated airborne amount (black) are shown as well.
As you can see, the assumption of exponential decay fits the observed data quite well, supporting the idea that the excess atmospheric carbon is indeed from human activities.
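For anyone who wants to replicate the style of calculation behind Figure 4, here is a minimal sketch of the exponential-decay bookkeeping. The emissions series below is invented purely for illustration; to actually reproduce the figure you'd feed in the CDIAC plus Houghton numbers:

```python
import math

# Toy model of the Figure 4 calculation: emit a series of annual pulses and
# let the airborne excess decay exponentially with a 31-year e-folding time.

E_FOLDING_YEARS = 31.0
decay_per_year = math.exp(-1.0 / E_FOLDING_YEARS)

def airborne_excess(annual_emissions_gtc):
    """Excess carbon still airborne after each year, given annual emissions."""
    excess = 0.0
    history = []
    for e in annual_emissions_gtc:
        excess = excess * decay_per_year + e  # decay last year's excess, add this year's
        history.append(excess)
    return history

# A crude stand-in for growing emissions: 1 GtC/yr rising linearly to 8 GtC/yr.
emissions = [1 + 7 * i / 149 for i in range(150)]
airborne = airborne_excess(emissions)
sequestered = sum(emissions) - airborne[-1]
print(round(airborne[-1], 1), round(sequestered, 1))
```

Note the model choice: each year's pulse is added after decaying the existing excess, so a single 1 GtC pulse decays to 1/e of itself 31 years later, as the e-folding definition requires.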
Issue 3. 12C and 13C carbon isotopes
Carbon has a couple of natural isotopes, 12C and 13C. 12C is lighter than 13C. Plants preferentially use the lighter isotope (12C). As a result, plant derived materials (including fossil fuels) have a lower amount of 13C with respect to 12C (a lower 13C/12C ratio).
It is claimed (I have not looked very deeply into this) that since about 1850 the amount of 12C in the atmosphere has been increasing. There are several lines of evidence for this: 13C/12C ratios in tree rings, 13C/12C ratios in the ocean, and 13C/12C ratios in sponges. Together, they suggest that the cause of the post 1850 CO2 rise is fossil fuel burning.
However, there are problems with this. For example, here is a Nature article called “Problems in interpreting tree-ring δ 13C records”. The abstract says (emphasis mine):
THE stable carbon isotopic (13C/12C) record of twentieth-century tree rings has been examined1-3 for evidence of the effects of the input of isotopically lighter fossil fuel CO2 (δ 13C~-25‰ relative to the primary PDB standard4), since the onset of major fossil fuel combustion during the mid-nineteenth century, on the 13C/12C ratio of atmospheric CO2(δ 13C~-7‰), which is assimilated by trees by photosynthesis. The decline in δ13C up to 1930 observed in several series of tree-ring measurements has exceeded that anticipated from the input of fossil fuel CO2 to the atmosphere, leading to suggestions of an additional input of biospheric CO2 (δ 13C~-25‰) during the late nineteenth/early twentieth century. Stuiver has suggested that a lowering of atmospheric δ 13C of 0.7‰, from 1860 to 1930 over and above that due to fossil fuel CO2 can be attributed to a net biospheric CO2 (δ 13C~-25‰) release comparable, in fact, to the total fossil fuel CO2 flux from 1850 to 1970. If information about the role of the biosphere as a source of or a sink for CO2 in the recent past can be derived from tree-ring 13C/12C data it could prove useful in evaluating the response of the whole dynamic carbon cycle to increasing input of fossil fuel CO2 and thus in predicting potential climatic change through the greenhouse effect of resultant atmospheric CO2 concentrations. I report here the trend (Fig. 1a) in whole wood δ 13C from 1883 to 1968 for tree rings of an American elm, grown in a non-forest environment at sea level in Falmouth, Cape Cod, Massachusetts (41°34’N, 70°38’W) on the northeastern coast of the US. Examination of the δ 13C trends in the light of various potential influences demonstrates the difficulty of attributing fluctuations in 13C/12C ratios to a unique cause and suggests that comparison of pre-1850 ratios with temperature records could aid resolution of perturbatory parameters in the twentieth century.
This isotopic line of argument seems like the weakest one to me. The total flux of carbon through the atmosphere is about 211 gigatonnes plus the human contribution. This means that the human contribution to the atmospheric flux ranged from ~2.7% in 1978 to 4% in 2008. During that time, the 13C/12C ratio averaged across the 11 NOAA measuring stations decreased by 0.7 per mil.
Now, the atmosphere has a 13C/12C ratio of ~ -7 per mil. Given that, for the amount of CO2 added to the atmosphere to cause a 0.7 per mil drop, the added CO2 would need to have had a 13C/12C of around -60 per mil.
But fossil fuels in the current mix have a 13C/12C ratio of ~ -28 per mil, only about half of that required to make such a change. So it is clear that fossil fuel burning is not the sole cause of the change in the atmospheric 13C/12C ratio. Note that this is the same finding as in the Nature article.
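The mass balance behind this claim is simple two-component mixing: the δ13C of a mixture is the mass-weighted average of its parts. Here's a sketch; the ~1.3% "added fraction" is my assumption, chosen to illustrate how a -60 per mil requirement falls out, not a measured value:

```python
# Two-component mixing for δ13C: the δ of a mixture is the mass-weighted
# average of its parts. Used here to ask what δ13C the added CO2 must have
# to shift the atmosphere by a given amount, for an assumed added fraction.

def required_delta_of_addition(delta_atm, delta_drop, added_fraction):
    """δ13C (per mil) the added CO2 needs so the mixture drops by delta_drop.

    delta_atm: background atmospheric δ13C (≈ -7 per mil)
    delta_drop: observed decrease in the mixture's δ13C (positive number)
    added_fraction: added CO2 as a fraction of the mixed total (an assumption)
    """
    # mixing: (1 - f) * delta_atm + f * delta_add = delta_atm - delta_drop
    # solving for delta_add:
    return delta_atm - delta_drop / added_fraction

# With an assumed ~1.3% effective added fraction, the addition would need
# δ13C near -60 per mil, far lighter than the ~-28 per mil of fossil fuels.
print(round(required_delta_of_addition(-7.0, 0.7, 0.013), 1))
```

Conversely, if the added CO2 were pure fossil carbon at -28 per mil, the added fraction would have to be about 2.5 times larger to produce the same 0.7 per mil drop.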
In addition, from an examination of the year-by-year changes it is obvious that there are other large scale effects on the global 13C/12C ratio. From 1984 to 1986, it increased by 0.03 per mil. From ’86 to ’89, it decreased by 0.2. And from ’89 to ’92, it didn’t change at all. Why?
However, at least the sign of the change in the atmospheric 13C/12C ratio (decreasing) is in agreement with the theory that at least part of it is from anthropogenic CO2 production from fossil fuel burning.
CONCLUSION
As I said, I think that the preponderance of evidence shows that humans are the main cause of the increase in atmospheric CO2. It is unlikely that the change in CO2 is from the overall temperature increase. During the ice age to interglacial transitions, on average a change of 7°C led to a doubling of CO2. We have seen about a tenth of that change (0.7°C) since 1850, so we’d expect a CO2 change from temperature alone of only about 20 ppmv.
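That back-of-envelope calculation looks like this (the 280 ppmv pre-industrial baseline is assumed, and "7 °C per doubling" is the glacial-interglacial scaling stated above):

```python
# The glacial-interglacial scaling used in the conclusion: if ~7 C of warming
# historically went with a doubling of CO2, then temperature alone predicts
# CO2 scaling as 2 ** (dT / 7). Values are this post's round numbers.

def co2_from_warming(baseline_ppmv, delta_t_c, degrees_per_doubling=7.0):
    """CO2 expected from temperature change alone, per the doubling rule."""
    return baseline_ppmv * 2.0 ** (delta_t_c / degrees_per_doubling)

rise = co2_from_warming(280.0, 0.7) - 280.0
print(round(rise))  # ~20 ppmv from 0.7 C of warming since 1850
```

That ~20 ppmv is far short of the ~100 ppmv actually observed, which is the point of the paragraph above.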
Given all of the issues discussed above, I say humans are responsible for the change in atmospheric CO2 … but obviously, for lots of people, YMMV. Also, please be aware that I don’t think that the change in CO2 will make any meaningful difference to the temperature, for reasons that I explain here.
So having taken a look at the data, we have finally arrived at …
RULES FOR THE DISCUSSION OF ATTRIBUTION OF THE CO2 RISE
1. Numbers trump assertions. If you don’t provide numbers, you won’t get much traction.
2. Ad hominems are meaningless. Saying that some scientist is funded by big oil, or is a member of Greenpeace, or is a geologist rather than an atmospheric physicist, is meaningless. What is important is whether what they say is true or not. Focus on the claims and their veracity, not on the sources of the claims. Sources mean nothing.
3. Appeals to authority are equally meaningless. Who cares what the 12-member Board of the National Academy of Sciences says? Science isn’t run by a vote … thank goodness.
4. Make your cites specific. “The IPCC says …” is useless. “Chapter 7 of the IPCC AR4 says …” is useless. Cite us chapter and verse, specify page and paragraph. I don’t want to have to dig through an entire paper or an IPCC chapter to guess at which one line you are talking about.
5. QUOTE WHAT YOU DISAGREE WITH!!! I can’t stress this enough. Far too often, people attack something that another person hasn’t said. Quote their words, the exact words you think are mistaken, so we can all see if you have understood what they are saying.
6. NO PERSONAL ATTACKS!!! Repeat after me. No personal attacks. No “only a fool would believe …”. No “Are you crazy?”. No speculation about a person’s motives. No “deniers”, no “warmists”, no “econazis”, none of the above. Play nice.
OK, countdown to mayhem in 3, 2, 1 … I’m outta here.




thethinkingman says:
June 8, 2010 at 11:36 am
Got this off of American Thinker . . .
Posted by: tmead new
Jun 08, 06:23 AM
Definitive evidence already exists to prove/disprove global warming. The GPS Master Control System has been measuring average atmospheric drag on each of the GPS satellites for 35 years. This is needed to predict satellite positions to NINE decimal places. If the Atmosphere is warming, it expands. If the atmosphere expands, the amount of gas encountered by the satellites increases. Thus the average drag over an orbit increases. The USAF Space Command has this drag data recorded from 1975 onward. If the trend is up, warming is occurring, if the trend is level or down, it is NOT.
True but the GPS orbits are so high (20,000 km) that they’re well up in the exosphere (mainly Helium/H). The connection between the temperature/density up there and in the troposphere is limited, probably correlates well with the 10.7 flux.
Comments on Dr. Martin Hertzburg’s presentation re: atmospheric CO2 ?
Willis, thanks for this post which inspired a wonderful scientific discussion.
Richard S Courtney, thank you for your reasoned views and independence. The 800 year lag in CO2 discussion points you brought out are interesting.
Anthony, as always, WUWT is a truly great venue.
John
sorry, “Hertzberg”
Steve Fitzpatrick:
At June 8, 2010 at 10:44 am you assert to me:
“If you really do not know, then it might be a good time to apply Occam’s razor. The simplest and most probable explanation is that the data agree because they are measuring the same thing.”
No!
The simplest and most probable explanation is that the data agree because they have been adjusted such that they agree.
Personally, I prefer to admit my ignorance as to the true explanation for their agreement and not to assume anything.
Richard
barry moore says:
As one of “the enemy”, I suppose, I will give you a very different perspective: Continually denying scientific facts for which the evidence is overwhelming does little to help the “skeptic” cause within the scientific (and, likely, policymaking) communities. It really just discredits you and makes your views easy to dismiss. For example, whenever I begin to wonder if Roy Spencer might be on to something in regards to cloud feedbacks, I remind myself that this is the same person who made some particularly poor arguments for why the CO2 rise might not be primarily anthropogenic. That [along with his stated views on human origins] helps me calibrate the likelihood that his analysis is correct (and the analysis of many other scientists is wrong) in an area where I feel less competent to judge.
I have often offered the advice here that you guys should focus on the issue of feedbacks and climate sensitivity. While I may not believe that most of the scientific evidence is on your side in this realm either, I admit that there is at least legitimate scientific uncertainty that there is room for intelligent debate on the subject.
Alas, my advice seems to go unheeded by many…which, in a way I suppose, is okay since I disagree with you on the seriousness of AGW and whether actions should be taken to mitigate it and I think not heeding my advice probably does “your side” more harm than good. Still, the scientific part of me cringes at the poor arguments that pass for serious debate here on subjects such as the cause of the current rise in CO2 levels.
Richard S Courtney says:
June 8, 2010 at 2:20 pm
“The simplest and most probable explanation is that the data agree because they have been adjusted such that they agree.”
If you really believe that, then there is not much more to discuss… good luck, I wish you well.
Joel Shore says:
June 8, 2010 at 2:34 pm
Continually denying scientific facts for which the evidence is overwhelming does little to help the “skeptic” cause within the scientific (and, likely, policymaking) communities. It really just discredits you and makes your views easy to dismiss.
Joel,
Critical evaluation of “scientific facts” to ascertain if they are real is not unscientific, quite the contrary. You assume critical evaluation is wrong. Bad assumption.
John
>> bubbagyro says:
>>June 7, 2010 at 3:45 pm
Excellent, bubbagyro! I was not aware of this work, and obviously Willis Eschenbach and many others aren’t either.
Those wishing to duplicate Eschenbach or Mann style hockey sticks need simply:
1. Plot long term smoothed, or data averaged over long term by some means (diffusion in this instance, or by averaging lots of data from different sources with various time shifts etc)
2. Splice on some recent instantaneous (eg daily) data that you know is rising.
3. Hey presto … a hockey stick. Instant alarm !
Joel Shore
Nice to see you here again, where have you been hiding? You’re a bit late coming to this particular CO2 party. Also I haven’t seen Ferdinand up to now – I hope he’s OK 🙂
Tonyb
Willis, I am very late into this discussion, so I hope you find this contribution. I did all this about 5.5 years ago. It adds some insight to your observations. Hope it is useful. Note, I used “snip – snip” to indicate where I had clipped out part of the wording from a reference. This usage seems to have become inappropriate since then, but I don’t have time just now to edit all of this. Murray
1) http://public.ornl.gov/ameriflux/about-history.shtml
Just to set the stage:
snip”Yet, for many reasons our understanding of the global carbon budget is incomplete. At present, 40 to 60% of the anthropogenically-released CO2 remains in the atmosphere. We do not know, with confidence, whether the missing half of emitted CO2 is being sequestered in the deep oceans, in soils or in plant biomass. Uncertainties about flows of carbon into and out of major reservoirs also result in an inability to simulate year to year variations of the annual increment of CO2″. Snip.
This from a government site. Clearly the uncertainties in GCMs are much larger than the degree of certitude expressed by AGW advocates would suggest. We are not dealing with linear rates of change, and the results of analyses can change dramatically depending on the rate sensitivity of the factor being analyzed and the time period used.
2) http://cdiac.esd.ornl.gov/trends/co2/contents.htm
CO2 delta in the atmosphere from 1970 through 2004 averaged 1.5 ppm/yr. From 1958 to 1974 it averaged 0.9 ppm/yr. From 1994 through 2004 it has averaged 1.8 ppm/yr. Snip “On the basis of flask samples collected at La Jolla Pier, and analyzed by SIO, the annual-fitted average concentration of CO2 rose from 326.86 ppmv in 1970 to 377.83 ppmv in 2004. This represents an average annual growth rate of 1.5 ppmv per year in the fitted values at La Jolla. ” snip.
That’s the one site that can be seriously affected by nearby emissions. All eight regularly measured sites track precisely. The major measuring sites are widely spread from north to south, and the uniform measurement results indicate that CO2 emissions are quickly and well mixed in the atmosphere.
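As a quick sanity check on the quoted La Jolla figures, the stated endpoints do reproduce the stated growth rate:

```python
# Check the quoted La Jolla numbers: 326.86 ppmv in 1970 to 377.83 ppmv in
# 2004 should average about 1.5 ppmv/yr, as the SIO text states.

start_ppmv, end_ppmv = 326.86, 377.83
years = 2004 - 1970

growth_rate = (end_ppmv - start_ppmv) / years
print(round(growth_rate, 2))  # 1.5
```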
3) http://cdiac.esd.ornl.gov/ftp/ndp030/global.1751_2004.ems
From tables accessible at 2) and 3) we can do some decadal average annual analysis as:
Decade                                       1        2        3        4        5
Years                                      ’54-’63  ’64-’73  ’74-’83  ’84-’93  ’94-’03
Ave. annual fuel emissions (Gt/yr)          2.4      3.4      5.0      6.0      6.7
Percent change decade to decade              –       42       47       20       12
Ave. annual atmos. conc’n delta (ppm/yr)    0.8      1.1      1.4      1.5      1.8
Atmos. conc’n delta per Gt emission (ppB)   333      324      280      250      270
Implied atmospheric retention (Gt)          1.7      2.3      2.9      3.1      3.7
Airborne fraction (%)                       71       68       58       52       55
Ocean uptake from fuel (Gt)                 0.7      1.1      2.1      2.9      3.0
Deforestation factor (%) guesstimate*       1.03     1.06     1.09     1.12     1.15
Total emissions (Gt)                        2.5      3.6      5.5      6.7      7.7
Airborne fraction of total (%)              68       64       53       46       48
Ocean uptake total (Gt)                     0.8      1.3      2.6      3.6      4.0
*The above fuel emissions from 3) do not include any factor for deforestation/land use. Recent total emissions have been estimated by AGW advocates as slightly less than 8 Gt/yr total, giving about an additional 15% for deforestation/land use. As deforestation is to a degree linked to third world population, we can assume that factor was sequentially lower going back to prior decades. Using a higher factor for prior decades won’t change anything much. Column 3 fuel emissions data corresponds almost exactly with IPCC SAR figures.
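Murray's table can be recomputed from its two measured inputs (fuel emissions and concentration delta) plus the deforestation guesstimate, using the 2.13 GtC-per-ppmv conversion from the main post. A sketch; the results match the table to within a point of rounding:

```python
# Recompute the decadal table from its inputs: average annual fuel emissions
# (GtC/yr), average annual concentration rise (ppm/yr), and the guesstimated
# deforestation multipliers. 2.13 GtC per ppmv converts the units.

GTC_PER_PPMV = 2.13

fuel_gt   = [2.4, 3.4, 5.0, 6.0, 6.7]     # ave. annual fuel emissions
ppm_delta = [0.8, 1.1, 1.4, 1.5, 1.8]     # ave. annual concentration rise
defor     = [1.03, 1.06, 1.09, 1.12, 1.15]  # deforestation factor (guesstimate)

for f, p, d in zip(fuel_gt, ppm_delta, defor):
    retained = p * GTC_PER_PPMV            # implied atmospheric retention, Gt
    total = f * d                          # total emissions incl. land use, Gt
    airborne_pct = 100 * retained / total  # airborne fraction of total, %
    print(round(retained, 1), round(total, 1), round(airborne_pct))
```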
While total average annual emissions have gone up by a factor of 3, ocean uptake has gone up by a factor of 5. That is hardly consistent with slow mixing or near saturation of surface waters. What seems to be happening is that increasing atmospheric partial pressure is increasing the rate of ocean uptake, with the rate of increase slowed by surface warming/acidification. We can expect a large emissions increase for the next decade, with a corresponding relatively large increase in partial pressure. It remains to be seen how much of that will be offset. The decade-to-decade rate of increase in fuel emissions has declined very rapidly, from the mid 40s% to about 12%. Based on the last couple of years, one could expect the decade ’04-’13 to have total average annual emissions in the order of 9.0 Gt, with total fuel emissions near 7.6 Gt (a decadal increase of 13%), and with an airborne fraction near 45%. After that, with declining petroleum, CO2 sequestration for tertiary petroleum recovery, and rising fuel prices driving major accelerations of efficiency, nuclear and renewables, the annual emissions to the atmosphere are likely to begin declining, and to reach a very low level by 2060 or so. The IPCC 50% probability estimate (Wigley et al.) is very close to 7.5 Gt near 2010, but goes to 15 Gt by 2060, requiring a compound growth rate of 15% per decade, which isn’t going to happen.
4) http://cdiac.esd.ornl.gov/pns/faq.html
snip Q. How long does it take for the oceans and terrestrial biosphere to take up carbon after it is burned?
A. For a single molecule of CO2 released from the burning of a pound of carbon, say from burning coal, the time required is 3-4 years. This estimate is based on the carbon mass in the atmosphere and up take rates for the oceans and terrestrial biosphere. Model estimates for the atmospheric lifetime of a large pulse of CO2 has been estimated to be 50-200 years (i.e., the time required for a large injection to be completely dampened from the atmosphere). Snip
This range seems to be an actual range depending on time frame, rather than the uncertainty among models. [See (5) below].
5) http://www.accesstoenergy.com/view/atearchive/s76a2398.htm
For the above decades 1 through 5, we have now had 4, 3, 2, 1, and 0 half lives respectively. From 3) and 5) and using an average half life of 11 years, (based on real 14C measurement) we get a total remaining injection in 2004 from the prior 5 decades of 139 Gt, which equates to an increase in atmospheric concentration of 66 ppm. The actual increase from 1954 to 2004 was very near 63 ppm. This result lends some credibility to the 50 year atmospheric residence time estimate. [See (9) below]. A 200 year residence time gives an 81 ppm delta since 1954, which is much too high.
Surprisingly, if we go all the way back to 1750 and compute the residence time using fuel emissions only, we get a value very close to 200 years. (A 40 year ½ life gives a ppm delta of 99 vs an actual of 96, using 280 ppm as the correct value in 1750.) If we assume that terrestrial uptake closely matches land use emissions (this is essentially the IPCC assumption), and we know that the airborne fraction from 1964 through 2003 had a weighted average of 58%, then to shift to a long term 40 year ½ life from a near term 11 year ½ life, we would have to have prior 40 year period weighted average airborne fractions like 80% for ’24-’63, and 90%-100% before that. Since emissions in the last 40 years have been 3 times higher than in the period from 1924 to 1963, and 30 times higher than 1844 to 1883, it is not too hard to believe that the rapid growth in atmospheric partial pressure has forced such a change in airborne fraction. With rising SSTs we can expect the partial pressure forced rate of ocean uptake to be offset to a growing degree. (Of course, we now know that since 2003 we have not had rising SSTs, rather a slight cooling.) As emission rates decline in the future, and with the delayed impact of ocean warming, the half life can be expected to begin growing again, but it seems very unlikely that the residence time for a pulse of CO2 would get back to 200 years.
6) http://www.hamburger-bildungsserver.de/welcome.phtml?unten=/klima/klimawandel/treibhausgase/carbondioxid/surfaceocean.html
Here we find a nice description of atmosphere/ocean interchange mechanisms, with the major fault that it gives the impression that the exchange magnitudes are well known. While this was published sometime after 2001, the net ocean uptake from the atmosphere shown would be roughly correct for about the mid ’70s, and has since well more than doubled (see above) despite surface warming. This would suggest that a near surface increase in ocean carbon concentration considerably upsets the exchange between the surface and deeper ocean waters. It seems possible that carbon fertilization plus warming considerably accelerate growth of ocean biota. The IPCC downplay this possibility, but do not outright deny it, which suggests a fairly high degree of probability to me.
7) http://www.grida.no/climate/ipcc_tar/wg1/105.htm
From the IPCC TAR we read snip In principle, there is sufficient uptake capacity (see Box 3.3) in the ocean to incorporate 70 to 80% of anthropogenic CO2 emissions to the atmosphere, even when total emissions of up to 4,500 PgC (4500 Gt) are considered (Archer et al., 1997). snip That’s a 3400 Gt sink capacity, and we are talking about sinking less than another 1000 Gt at a rate of about 4 Gt/yr peak, for a very few years at peak rate. However, the 3400 Gt additional capacity, which would add less than 10% to the ocean inventory, seems like a very low value for 3 reasons. First, the equilibrium concentration [see 8) below] is more than 3x the present concentration. Second, atmospheric concentrations were at least 5 times higher 100 million years ago, so seawater concentrations can be that much higher also. Third, experiments to test CO2 clathrate hydrate formation see formation at dissolved CO2 concentrations two orders of magnitude higher than the present concentration. Since 1900 total anthropogenic carbon emission has been about 300 Gt (about 83% since 1945), of which about 170 are still in the atmosphere. In the next century, net emissions to the atmosphere may be no more than another 400 Gt, which would likely add less than another 90 ppm of atmospheric concentration. The idea that we are saturating the ocean sink is not even remotely consistent with available numbers.
The IPCC goes on to say snip The finite rate of ocean mixing, however, means that it takes several hundred years to access this capacity (Maier-Reimer and Hasselmann, 1987; Enting et al., 1994; Archer et al., 1997). Chemical neutralization of added CO2 through reaction with CaCO3 contained in deep ocean sediments could potentially absorb a further 9 to 15% of the total emitted amount, reducing the airborne fraction of cumulative emissions by about a factor of 2; however the response time of deep ocean sediments is in the order of 5,000 years (Archer et al., 1997) snip. They then show a CO2 system diagram with sediment take up of 0.2 Gt/yr. The present airborne fraction of 170 Gt would be taken up by the total system in only 800 years at that rate.
The SAR shows a net sink from atmosphere to ocean of about 2.2 Gt/yr. The problem here is that the level of uncertainty in the rate of ocean mixing, and in how that rate might change, is greater than the rate at which we are injecting carbon. [See 1) above]. The IPCC doesn’t discuss uncertainty. The increase we have already seen in the rate of ocean uptake [3) above] is 2x this number, but the difference is only 1% of the estimated round trip exchange.
For reference, also from the IPCC SAR we can find the following carbon inventory and exchange estimates. These were finalized in 1994, but some data may be based on mid ’70s estimates.
a) Inventory
Intermediate and Deep ocean – 38,100 Gt; terrestrial soil, biota and detritus – 2190 Gt; surface ocean (down to about 400 m max) – 1020 Gt; atmosphere – 750 Gt; ocean sediments – 150 Gt; marine biomass – 3 Gt. That’s a total of 42,213 Gt, excluding carbonaceous rock. I find the level of precision amusing.
b) Annual Exchanges
Anthropogenic emissions to atmosphere – 5.5 Gt.
Atmosphere to surface ocean – 92Gt, surface ocean to atmosphere – 90 Gt, net to ocean – 2 Gt.
Surface ocean to marine biota – 50 Gt, reverse – 40 Gt; marine biota to deep ocean – 9 Gt; marine biota to DOC -1 Gt
Surface ocean to deep ocean – 92 Gt; reverse – 100 Gt; deep ocean to sediments 0.2 Gt; net ocean uptake 2.2 Gt.
8) http://cdiac.esd.ornl.gov/oceans/ndp_065/appendix065.html
There is a huge volume of data about the concentration of CO2 in seawater, including variability with both depth and latitude. The above reference is for the south Pacific. Data for the south Atlantic, showing variability with depth but not with latitude, is also available. The present concentration is about 25 mg/kg (2100 umol/kg). The variation in concentration, by both depth and latitude, is similar in both bodies, varying about +-7% around the mean, with localized excursions up to +-13%. Since atmospheric concentration has increased about 32% in the last 150 years, and about 25% in the last 50 years, one would expect much greater variation in oceanic concentration if the take-up by the deep ocean is slow. CO2 concentration varies directly with salinity, and inversely with temperature. Greatest concentrations are at depth (1500 to 2500 m), and at higher latitudes. The equatorial regions are spoken of as a source for CO2, which must be a function of temperature, as is the slightly lower surface concentration there. Heavy rainfall in the tropics may also contribute to reduced concentration. High latitudes are spoken of in the TAR as having “CO2 rich upwellings”, which is consistent with the observed data, but not consistent with the claim of slow mixing between surface and deep water. In deep, dark, cold waters, one would expect very slow local oxidation, so the likely source of deep water concentration would seem to be rapid transport from the surface, by the likes of the Atlantic Conveyor. Concentration would increase with both increasing salinity and decreasing temperature as the conveyor moves north. There is essentially no variation with longitude except for the depth of the isolines in deeper waters. Curiously, the partial pressure reaches a maximum at mid depths. Are currents near the bottom carrying mixed, relatively younger surface water with the lower partial pressures?
9) http://ijolite.geology.uiuc.edu/02SprgClass/geo117/lectures/Lect18.html
Atmospheric gases in sea water
— saturation = equilibrium
Molecule   Percent in atmosphere   Equilibrium concentration in seawater (mg per kg seawater)
N2         78%                     12.5
O2         21%                     7
Ar         1%                      0.4
CO2        0.03%                   90
In surface sea water, atmospheric gases are close to their “saturation” concentration (or equilibrium concentration). But note that CO2 has a much higher solubility (equilibrium concentration) than the other gases.
10) http://stommel.tamu.edu/~baum/paleo/ocean/node37.html –
snip Thermocline – Specifically the depth at which the temperature gradient is a maximum. Generally a layer of water with a more intensive vertical gradient in temperature than in the layers either above or below it. When measurements do not allow a specific depth to be pinpointed as a thermocline a depth range is specified and referred to as the thermocline zone. The depth and thickness of these layers vary with season, latitude and longitude, and local environmental conditions. In the midlatitude ocean there is a permanent thermocline residing between 150-900 meters below the surface, a seasonal thermocline that varies with the seasons (developing in spring, becoming stronger in summer, and disappearing in fall and winter), and a diurnal thermocline that forms very near the surface during the day and disappears at night. There is no
permanent thermocline present in polar waters, although a seasonal thermocline can usually be identified. The basic dynamic balance that maintains the permanent thermocline is thought to be one between the downward diffusive transport of heat and the upward convective transport of cold water from great depths. snip There is a lot of variability evident in that quote, which makes giving firm single-figure values pretty questionable. The mid-latitude permanent thermocline has a maximum extent from about 40 degrees north to 40 degrees south. At latitudes above about 60 degrees there is usually no thermocline. The depth of the top of the thermocline can be from about 20 meters to about 400 meters, and the thickness can vary from less than 100 meters to about 400 meters. The depth of the bottom of the thermocline varies from less than 100 meters to about 900-1000 meters. The IPCC gives an average depth of the thermocline of 400 meters, but does not define whether that means the top, middle or bottom. They seem to be taking the average depth of the top as 200 meters and the thickness as 400 meters on average, but these would be very rough estimates at best, and could hardly justify the 3 significant digits they use.
Depth and thickness vary quite rapidly with both areal location and time, with time scales generally from hours to seasons, or in the case of ENSO, years. In a given location the thermocline depth can move up and down by tens of meters diurnally and hundreds of meters in a season, or, as noted above, it can disappear altogether. In the near-equatorial Pacific, where the thermocline is normally well established, there is also the well-established “equatorial cold tongue”, a huge upwelling of cold water far from the high latitudes. The average depth of the oceans is generally taken as 4000 meters. The IPCC estimates the upper mixed layer as holding 2.6% of the total ocean CO2 (1020 of 39,120 Gt), which implies nearly 2.6% of the water, or an average depth of 200 meters if it is taken to exist under 50% of the ocean surface. They refer to the water above the
thermocline as the “mixed layer” and consider the thermocline as a barrier that severely limits mixing between the intermediate and deep ocean. The intermediate layer is the thermocline zone. The deep ocean contains 90% of the total water, so the intermediate zone must be assumed to hold about 2.5% also where it exists. The other 5% of the water is in the upper 10% of the depth, where there is no thermocline.
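Those percentages can be checked in a few lines. The 50% coverage and 4000 m average depth are the round figures used in the text:

```python
# Sanity check on the mixed-layer share quoted above.
mixed_layer_C = 1020.0   # Gt C in the upper mixed layer (IPCC figure cited)
total_ocean_C = 39120.0  # Gt C total ocean inventory (IPCC figure cited)
share = mixed_layer_C / total_ocean_C
print(f"Mixed-layer share of ocean carbon: {share:.1%}")  # ~2.6%

# If the carbon share equals the water share, and the layer exists under
# 50% of the ocean surface with a 4000 m average total depth:
avg_depth = 4000.0  # m, average ocean depth (text's figure)
coverage = 0.5      # fraction of surface with a mixed layer (text's figure)
implied_layer_depth = share * avg_depth / coverage
print(f"Implied mixed-layer depth: {implied_layer_depth:.0f} m")  # ~209 m
```

The implied depth comes out near 200 m, consistent with the text's reading of the IPCC numbers.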
There are major mixing mechanisms between surface and deep ocean other than diffusion through the thermocline. These include wave motion in the “furious fifties and screaming sixties” of the southern oceans, the giant delayed oscillator of the equatorial Pacific, major sinks and upwellings, the Atlantic Conveyor and the Antarctic Circumpolar Current. The surge at depth from passing swells can be felt clearly at a depth where the swell causes a 10% depth change, and can be detected at 5%. In the screaming sixties, where winter can find 1000 mile wavetrains of 20 meter waves, mixing can be expected to 400 meters. [This is consistent with Fig 4 in 11) below.] The ACC alone moves water at the rate of 130 million cubic meters per second, which is enough to exchange the entire Atlantic ocean in about 100 years. The IPCC says the thermocline is the cause of slow mixing between surface and deep waters. With its degree of variability in depth, extent and time, it is more likely a mechanism of fairly rapid mixing. The total surface layer to about 200 meters depth must hold near 5% of the total CO2, vs the 2.6% represented by the IPCC, and near half of that 5% must mix much more rapidly than the estimate used by the IPCC for the 2.6% they consider. Their exchange rate of about 100 Gt/yr between surface and intermediate/deep ocean is probably underestimated by a minimum factor of 2, and maybe as much as 4 or 5. The differential between up and down transfer can easily be understated by an even larger factor. This would account for the observed ocean uptake rate of CO2 from the atmosphere, which is already 2x the IPCC estimate.
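The ACC claim can be sanity-checked with rough numbers. The Atlantic volume (~3.1e17 m^3) is an assumed round value, not from the text:

```python
# Back-of-envelope check: could ~130 Sv exchange the Atlantic's volume
# in about a century?
acc_flow = 130e6          # m^3/s, Antarctic Circumpolar Current transport
atlantic_volume = 3.1e17  # m^3, approximate Atlantic volume (assumption)
seconds_per_year = 3.156e7
years = atlantic_volume / (acc_flow * seconds_per_year)
print(f"Time to move one Atlantic volume: {years:.0f} years")  # ~76 years
```

About 76 years, i.e. on the order of a century, consistent with the "about 100 years" in the text.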
Wherever there is a great range of uncertainty in estimates, the IPCC seems to choose the extreme that will paint the most perilous picture. AGW advocates seem prone to this selective behavior.
11) http://www.aip.org/pt/vol-55/iss-8/captions/p30cap4.html See fig. 4
The first thing to note about Fig. 4 is that there is no evidence at all of a thermocline barrier at near 200 m depth. At 30 degrees S in the Pacific the 50 umol/kg concentration extends to beyond 400 m, and at about 20 degrees N in the N Pacific the 40 umol/kg concentration gets to 400 m. The mid-latitude Pacific is relatively warm, has relatively low saline concentration, and can therefore be expected to have relatively low total CO2 concentration. Forty umol/kg would be about 2% anthropogenic CO2. The surface share of anthropogenic CO2 is about 2.5% in this region. Even though this is the zone that should have the strongest permanent thermocline, the anthropogenic concentration is well mixed way below the expected thermocline depth. In the colder and saltier N Atlantic, in the region which should at least have seasonal thermoclines (30 to 60 degrees N), we find the anthropogenic share at 1.7% (65% of surface share) at a
depth of 1200 m.
We didn’t get to an ocean uptake equal to 10% of the last decade until about 1900, and yet we find the anthropogenic share equal to 10% of the surface share at a depth of >5000 m in the N. Atlantic. The Atlantic Conveyor is certainly sinking surface anthropogenic CO2 emissions to the ocean bottom in less than a century. Since we have no longitudinal distribution, it may seem questionable to try to estimate the total Gt of anthropogenic CO2 in the oceans from Fig 4. However, we know that there is little longitudinal variation in the Pacific, and probably the S. Atlantic is similar. In the N. Atlantic the share would be lower than shown to the west, but given that the N. Atlantic is much more saline than the N. Pacific, still higher than the N. Pacific. A rough estimate would be 120-140 Gt. Since we have emitted about 310 Gt since 1750, and more than 170 Gt is still in the atmosphere, the total ocean uptake is about 130-140 Gt, so this figure looks pretty realistic. If we accept Fig 4, which is based on measurement, then we have to conclude that the IPCC contention of slow mixing to the deep ocean because of the thermocline barrier is simply wrong.
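The budget arithmetic behind that conclusion, spelled out with the text's round figures (in reality part of the difference goes to the land biosphere as well as the ocean, so this is an upper bound on ocean uptake):

```python
# Simple mass-balance check of the rough carbon budget in the text.
emitted = 310.0        # Gt emitted since 1750 (text's figure)
in_atmosphere = 170.0  # Gt still airborne (text's figure)
uptake = emitted - in_atmosphere  # taken up by ocean plus land
print(f"Implied ocean (plus land) uptake: {uptake:.0f} Gt")  # 140 Gt
```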
12) http://www.aoml.noaa.gov/ocd/gcc/co2research
The key quote from this url is “The global oceanic CO2 uptake using different wind speed/gas transfer velocity parameterizations differs by a factor of three (Table 1)”. The IPCC seems to have used the lowest transfer rate. The actual current transfer rate is 2x the IPCC figure, and evidently some models support a rate of 3x the IPCC figure, which seems consistent with the above observations.
13) http://www.surfnewquay.co.uk/knowledge/articles.php?
A problem here, and probably in much of the IPCC work, is the tendency to use averages. It is best to distrust averages. For example, this reference says it takes 1000 to 2000 years to turn over the ocean's bottom water.
Here http://calspace.ucsd.edu/virtualmuseum/climatechange1/10_5.shtml we find:
snip It takes, on the whole, one thousand years to renew the deep waters of the world’s ocean. This estimate is based on radiocarbon measurements from the CO2 dissolved within the ocean snip.
So we have already gone from “one thousand to two thousand” to “one thousand”. For purposes of CO2 take-up we are not concerned with the whole ocean bottom, or with long averages. The relatively minuscule amount of C we are generating can be taken up by a very small fraction of the ocean. The present ocean inventory of carbon is about 39,000 Gt, and that gives a concentration of 25 mg C per kg seawater. (There are 1 million mg in a kg.) Now we are going to add about 600 Gt by 2100, which if evenly distributed would raise the concentration by about 1.5% to 25.4 mg/kg. Since the variability of distribution of C in the ocean is ±7%, this addition isn’t even noticeable.
But what if we just penetrate 10% of the ocean in the short run? Then concentration in that portion goes temporarily to 29 mg/kg. And we do it in 100 years, not 1000 years. Then the ocean can spend the next 900 years equalizing the concentration. Thus it takes 1000 years to distribute our C injection, averaged throughout the ocean, but that has no bearing on the time required to take up the “pulse”.
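The two dilution cases above, as a minimal sketch using the same round numbers:

```python
# Dilution arithmetic for the projected carbon addition.
inventory = 39000.0  # Gt C currently in the ocean (text's figure)
baseline = 25.0      # mg C per kg seawater (text's figure)
added = 600.0        # Gt C added by 2100 (text's projection)

# Case 1: evenly distributed through the whole ocean.
even = baseline * (1 + added / inventory)
print(f"Whole-ocean concentration: {even:.1f} mg/kg")  # ~25.4

# Case 2: the pulse initially penetrates only 10% of the ocean.
fraction = 0.10
local = baseline * (1 + added / (inventory * fraction))
print(f"Concentration in the 10% slice: {local:.1f} mg/kg")  # ~28.8
```

Case 2 comes out near 29 mg/kg, still within the ±7% natural spread quoted earlier.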
Now let’s look at the big currents. The first url in 13) above gives a speed of 0.5 km/hr in some of the trenches. (The Gulf Stream moving past South Carolina recently moved some stranded boaters 150 miles in 5 days. That’s 2 km/hr.) If we assume an average speed of 0.2 km/hr, the current makes 4 complete circuits in 100 years. Given all the eddies along the way, it probably touches much more than 10% of the ocean and can transport a lot of C.
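The circuit estimate, spelled out. The ~40,000 km circuit length (roughly one trip around the globe) is an assumed round figure, not from the text:

```python
# Checking the "4 complete circuits in 100 years" estimate.
speed_kmh = 0.2             # km/h, assumed average deep-current speed
hours = 100 * 365.25 * 24   # hours in 100 years
distance = speed_kmh * hours
circuit_km = 40000.0        # km per global circuit (assumption)
circuits = distance / circuit_km
print(f"Distance in 100 years: {distance:.0f} km")
print(f"Circuits completed: {circuits:.1f}")  # ~4.4
```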
Steve Fitzpatrick says:
June 8, 2010 at 3:30 pm
If you really believe that, then there is not much more to discuss… good luck, I wish you well.
Steve,
After what I have seen come out since November 2009 about data “adjustments” in some “consensus climate science” communities, the possibility of “adjustments” seems much more likely to me than it used to. I now think we should critically evaluate the possibility of “adjustments” when looking at the products of the “consensus climate science” community.
John
Nick Stokes says:
June 8, 2010 at 3:33 am
Lovely picture … but how does that actually physically happen in the real world, where that physical setup is not happening?
Very interesting, Nick. This does decay differently than an exponential. However, I don’t know of any real physical phenomena which are known to decay in that fashion … and in any case, if that’s the real underlying model, why is the Bern model jerking around with exponentials?
The exponential decay has theoretical underpinnings (le Chatelier’s Principle). Does such a thing exist for your style of decay?
You seem to have missed the point as to why remote locations, such as Barrow and Mauna Loa, were chosen, and why Beck’s measurements are of no use. I’m quite sure you can get higher (much higher) readings if you take measurements in Piccadilly Circus or at the Arc de Triomphe, but these would not be, in any way, representative of global CO2 concentrations. CO2 maps show that “well-mixed” concentrations vary by only a few ppm across the world.
________________________________________________________________________
Beck did have a series of measurements made at Barrow but not by a CAGW scientist and that is what I referred to
Please note I was a chemist working in industry; that is why I do not understand how anyone can believe CO2 is “well mixed” in the atmosphere and at equilibrium. I cannot believe anyone could think it would be within a couple of ppm from location to location. Here is why I came to that conclusion.
First: do you really trust the scientists? It seems the temperature readings were adjusted six times after analysis in July 1999 indicated that the temperature anomaly for 1934 was nearly 60% higher than for 1998. And this is just from one email.
“At Mauna Loa we use the following data selection criteria:
4.In keeping with the requirement that CO2 in background air should be steady, we apply a general “outlier rejection” step, in which we fit a curve to the preliminary daily means for each day calculated from the hours surviving step 1 and 2, and not including times with upslope winds. All hourly averages that are further than two standard deviations, calculated for every day, away from the fitted curve (“outliers”) are rejected. This step is iterated until no more rejections occur…..”
Do you not understand? The assumption is made that there is NO VARIABILITY, and the data are adjusted to reflect that.
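For reference, the quoted rejection step amounts to something like the following minimal sketch. The smooth curve is stood in for by a low-order polynomial and the data are synthetic; both are assumptions for illustration, not NOAA's actual implementation:

```python
import numpy as np

def reject_outliers(hours, values, n_sigma=2.0, degree=2):
    """Iteratively fit a smooth curve, then drop points more than
    n_sigma standard deviations from it, repeating until nothing
    more is rejected. A low-order polynomial stands in for the
    fitted curve described in the quoted procedure."""
    hours = np.asarray(hours, dtype=float)
    values = np.asarray(values, dtype=float)
    keep = np.ones(len(values), dtype=bool)
    while True:
        coeffs = np.polyfit(hours[keep], values[keep], degree)
        resid = values - np.polyval(coeffs, hours)
        sigma = resid[keep].std()
        new_keep = keep & (np.abs(resid) <= n_sigma * sigma)
        if np.array_equal(new_keep, keep):
            return keep
        keep = new_keep

# Synthetic "hourly averages": a smooth curve plus two contamination spikes.
rng = np.random.default_rng(0)
h = np.arange(24.0)
v = 390 + 0.05 * (h - 12) ** 2 + rng.normal(0, 0.2, 24)
v[5] += 8.0   # simulated contamination spike
v[17] += 6.0  # another spike
kept = reject_outliers(h, v)
print("Rejected hours:", list(np.flatnonzero(~kept)))
```

The two injected spikes are removed, which is the point of contention: the procedure by construction discards hours that deviate from a presumed-smooth background.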
Second: WHY are the results from various sites so close to each other?
In the paper by Tom Quirk, “Sources and Sinks of Carbon Dioxide”, the isotopic balance in the atmosphere is shown to be far more complex, with many more variables, than most think. Consider that 94% of all anthropogenic CO2 is released into the northern hemisphere. Next, the CO2 is not as well mixed as the IPCC states. From the nuclear tests in the 1960s, the mixing from north to south is known to be very slow, taking several years. So (another rhetorical question) why is the average northern hemisphere CO2 not higher than the south's?
As J. A. Glassman so aptly put it in one of his replies,
Excerpt:
“So why are the graphs so unscientifically pat? One reason is provided by the IPCC:
The longitudinal variations in CO2 concentration reflecting net surface sources and sinks are on annual average typically … calibration procedures within and between monitoring networks (Keeling et al., 1989; Conway et al., 1994). Bold added, TAR, p. 211.
So what the Consensus has done is to “calibrate” the various records into agreement. And there can be no other meaning for “calibration procedures … between monitoring networks”. It accounts for coincidence in simultaneous records and it accounts for continuity between adjacent records. The most interesting information in this procedure would be the exact amount of calibration necessary to achieve the objective of nearly flawless measuring with the modern record dominating. The IPCC’s method is unacceptable in science. It is akin to the IPCC practice of making “flux adjustments” to make its various models agree. See TAR for 87 references to “flux adjustment”, and see 4AR for its excuse, condemnation, and abandonment. 4AR p. 117. ”
End of excerpt.
In other words, there is agreement between sites because they were ADJUSTED, just like the temperature records.
Now let us look at the “pristine site” Mauna Loa.
1. Volcano outgassing
2. Land-based photosynthesis
3. Ocean-based photosynthesis
4. Diurnal warming/cooling of the sea surface, as well as longer-term cycles and their effect on CO2, not to mention calm vs. turbulent seas (CO2 absorption rate is dependent on surface area)
5. Soil microbes
6. Rain: “Promotion Effects of Falling Droplets on Carbon Dioxide Absorption across the Air-Water Interface of the Ocean”. In addition to CO2 transfer by impinging raindrops, there is CO2 absorption during the fall of raindrops. CO2 absorption by rain alone is going to keep the CO2 in the atmosphere from ever being uniform.
If you go to Barrow, there are microbes and the ocean mucking up the works too. See “Temperature Dependence of Metabolic Rates for Microbial Growth, Maintenance, and Survival”.
Here is Beck's information from Barrow:
Date        CO2 ppm   Latitude   Longitude   Author       Location
1947.7500    407.9     71.00     -156.80     Scholander   Barrow
1947.8334    420.6     71.00     -156.80     Scholander   Barrow
1947.9166    412.1     71.00     -156.80     Scholander   Barrow
1948.0000    385.7     71.00     -156.80     Scholander   Barrow
1948.0834    424.4     71.00     -156.80     Scholander   Barrow
1948.1666    452.3     71.00     -156.80     Scholander   Barrow
1948.2500    448.3     71.00     -156.80     Scholander   Barrow
1948.3334    429.3     71.00     -156.80     Scholander   Barrow
1948.4166    394.3     71.00     -156.80     Scholander   Barrow
1948.5000    386.7     71.00     -156.80     Scholander   Barrow
1948.5834    398.3     71.00     -156.80     Scholander   Barrow
1948.6667    414.5     71.00     -156.80     Scholander   Barrow
1948.9166    500.0     71.00     -156.80     Scholander   Barrow
These data must not be used for commercial purposes or gain in any way, you should observe the conventions of academic citation in a version of the following form: [Ernst-Georg Beck, real history of CO2 gas analysis, http://www.biomind.de/realCO2/data.htm ]
Scholander got more than a 100 ppm swing at Barrow over a year's time. This type of variation makes more sense to me, because I see CO2 in the atmosphere in terms of a huge mixing vessel, with mixmen adding ingredients, others taking ingredients out, and a third haphazardly flipping the switch on the mixer blades. If I take ten samples in different locations under these conditions, there is no way I would expect close agreement between the samples.
David C says:
June 8, 2010 at 8:00 am
The effect is not that large. During the glacial-interglacial swing, Antarctic temperatures are thought to have changed by about 10–12°C. It is generally thought that the global swing was about half of that, call it 5–6°C.
During the same time, the CO2 changed by on the order of 100 ppmv. Over this range, Henry’s Law is roughly linear. So that gives us a change on the order of 15–20 ppmv per degree of change.
Since 1959, we’ve seen a temperature rise of about six tenths of a degree, which would translate to change from increased outgassing of 9–12 ppmv …
Which is likely why there is no particular sign of it in the Mauna Loa records.
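That estimate, as explicit arithmetic (all inputs are the round numbers above):

```python
# Rough Henry's Law sensitivity from the glacial-interglacial swing.
co2_swing = 100.0          # ppmv change over a glacial cycle
global_swing = (5.0, 6.0)  # deg C global temperature swing

# Sensitivity in ppmv per degree (larger swing -> smaller sensitivity).
sens = (co2_swing / global_swing[1], co2_swing / global_swing[0])
print(f"Sensitivity: {sens[0]:.0f} to {sens[1]:.0f} ppmv per deg C")

warming_since_1959 = 0.6  # deg C
out = (sens[0] * warming_since_1959, sens[1] * warming_since_1959)
print(f"Expected outgassing since 1959: {out[0]:.0f} to {out[1]:.0f} ppmv")
```

This lands at roughly 17-20 ppmv per degree and 10-12 ppmv since 1959, close to the 15-20 and 9-12 ranges quoted above.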
w.
Steve Fitzpatrick says:
June 8, 2010 at 8:56 am
My lifelong motto has been “Retire early …
…
… and often”.
At the moment I have time because I’m retired, although I could easily be called out of retirement by a great job offer. Or by increasing hunger.
w.
Is there a flight-path over Mauna Loa?
Willis,
“At the moment I have time because I’m retired, although I could easily be called out of retirement by a great job offer. Or by increasing hunger.”
Well, that certainly answers my question.. Keep up the good work, but I hope you don’t get too hungry.
Willis Eschenbach says:
June 8, 2010 at 10:3
But we have a phase change in one but not (at least on this planet) the other.
John Finn says:
June 8, 2010 at 1:37 pm
We apparently have a situation whereby
1. All 8 ice core records are wrong – by exactly the same amount.
2. All CO2 measurements from Mauna Loa and dozens of other sites around the world are also wrong – again by remarkably similar amounts.
3. The AIRS satellite data from the mid-troposphere is also wrong – simply because it agrees with ML and other surface based observations.
Who has this situation? Maybe you are reading fast and not getting the point.
The point is that the problem is three-dimensional, and all the Keeling measurements that you think I am saying are wrong are two-dimensional, extrapolated to three dimensions by the assumption of “well mixed”, an assumption that is wrong in a gravitational field when the molecules have different atomic weights. The gases are mixed because of turbulence, but how well mixed they are is something that has to be measured experimentally, not assumed by hand-waving about “mixed”. They could be absolutely correct for that altitude (for all the objections raised) and still not be reflective of world data.
The ice core data are obviously integrators, in slices as large as the Fig. 1 slice, and are in locations that have very few sources of CO2, and are measuring the wind-borne fraction anyway. Creating a hockey stick by joining measurements from two different methods and altitudes is a no-no, until there is experimental evidence from the rest of the altitudes, longitudes and heights.
The Japanese measurements that average over the column of air, the stomata data and Beck's compilations give more detail than the ice cores, and imply that observed CO2 measurements are within the natural variations and not unprecedented.
Gail Combs says:
June 8, 2010 at 4:02 pm
Thanks for the chemist POV.
I have been looking at this “well mixed” from a physicist POV, and it seems basic science agrees 🙂 on the principles.
Keith Minto
anna v
I spent an hour Monday trying to find the CO2 vs. altitude data you both seek. I read a paper months ago on flights by the U.S.A.F. around 1969-79 measuring CO2 up to 60 km, if I remember correctly, and can't seem to relocate it. The per-altitude-band data were in line with the gradients. Hope you will be able to find it; it's there somewhere, or was.
Wayne,
I could not turn up anything worthwhile; we desperately need a relaunch of the OCO. http://www.timesonline.co.uk/tol/news/world/us_and_americas/article5796656.ece
Willis Eschenbach says:
June 8, 2010 at 3:54 pm
“but how does that actually physically happen in the real world, where that physical setup is not happening?”
No, it’s the simplest version of the physical setup. A gas diffusing in a uniform semi-infinite medium. Do you have a better one? Which gives exponential decay?
It’s just standard diffusion theory – the solutions are set out in Carslaw and Jaeger’s “Conduction of heat in solids”, for example.
“The exponential decay has theoretical underpinnings (le Chatelier’s Principle). Does such a thing exist for your style of decay?”
Yes, it’s the Green's function for diffusion in a semi-infinite region, with concentration prescribed at the surface. There’s a downloadable textbook on heat transfer by Lienhard here, and you’ll find a corresponding formula at Eq. 5.54. That is for a sustained temperature rise, with power t^(−1/2); you have to differentiate it to get the Green's function.
As far as theoretical underpinning goes, you could have cited Newton’s Law of Cooling (not one of his better ones). But, as Wiki says:
“This form of heat loss principle is sometimes not very precise; an accurate formulation may require analysis of heat flow, based on the (transient) heat transfer equation in a nonhomogeneous, or else poorly conductive, medium.”
Heat transfer and diffusion of dissolved substances follow much the same rules.
As for why the Bern model approximates with exponentials, I think it’s just for the mechanics of convolution. In general you have to work out a summation for each point, which some people think is hard. But exponentials are especially easy – just a recurrence relation.
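That computational convenience can be seen directly: convolution with an exponential kernel collapses to a one-step recurrence, so no re-summation of the whole emission history is needed. A minimal sketch with a synthetic emissions ramp (the 50-year time constant is an arbitrary choice for illustration):

```python
import math

tau = 50.0   # years, assumed decay time for this single exponential term
dt = 1.0     # years per step
emissions = [5.0 + 0.1 * i for i in range(200)]  # synthetic ramp, GtC/yr

# Slow way: direct convolution with exp(-t/tau), O(n) work per step.
direct = []
for n in range(len(emissions)):
    direct.append(sum(emissions[k] * math.exp(-(n - k) * dt / tau)
                      for k in range(n + 1)))

# Fast way: the recurrence y[n] = a*y[n-1] + x[n], with a = exp(-dt/tau),
# O(1) work per step.
a = math.exp(-dt / tau)
fast, y = [], 0.0
for x in emissions:
    y = a * y + x
    fast.append(y)

# The two agree to rounding error, which is why a sum of exponentials
# is so convenient for this kind of model.
err = max(abs(d - f) for d, f in zip(direct, fast))
print(f"Max difference: {err:.2e}")
```

A t^(−1/2) kernel has no such recurrence, so each step requires a fresh summation over the whole history.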
anna v says:
June 8, 2010 at 9:04 pm
“gases are mixed, because of turbulence”
I think you confuse kinetics with thermodynamics. From thermodynamics (i.e. the application of ΔG = ΔH − TΔS) it can be shown that even the EQUILIBRIUM distribution of the gases in the atmosphere, under the influence of gravity, would be such that there would be very little difference from the top to the bottom of the atmosphere, i.e. the proportion of CO2 molecules at ground level would be only very slightly greater than high up in the atmosphere. That is because of the importance of TΔS (the entropy factor) in the above equation. If only enthalpy were involved, then, in the absence of turbulence (i.e. at equilibrium) the atmosphere would be a series of layers with the heavier molecules, like CO2, at the bottom (since in this case the enthalpy factor is concerned with gravitational potential energy); but because of the entropy factor that is indubitably not the case: entropy trumps enthalpy.
So, thermodynamics dictates that the gases in the atmosphere should be well mixed at equilibrium even in the absence of turbulence. Turbulence does two things: it increases the rate of mixing and it produces a result (a well mixed atmosphere) which (by coincidence) is very close to the thermodynamic equilibrium position.
An example of where turbulence can produce a result different from the thermodynamic equilibrium is provided by shaking a mixture of oil and water. The equilibrium position is two layers, turbulence produces a mixture without layers.
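The size of the gravitational separation described above can be sketched with the barometric law, under which each gas at diffusive equilibrium follows exp(−mgz/kT). The isothermal 280 K atmosphere and the 1 km altitude are assumptions for illustration:

```python
import math

# At diffusive (no-turbulence) equilibrium each gas separately follows
# n_i(z) = n_i(0) * exp(-m_i * g * z / (k * T)).
k = 1.380649e-23   # J/K, Boltzmann constant
g = 9.81           # m/s^2
T = 280.0          # K, assumed isothermal temperature
z = 1000.0         # m, assumed altitude
amu = 1.66054e-27  # kg per atomic mass unit

def barometric_factor(mass_amu):
    return math.exp(-mass_amu * amu * g * z / (k * T))

# CO2 (44 amu) thins out slightly faster than N2 (28 amu):
ratio_shift = barometric_factor(44.0) / barometric_factor(28.0)
print(f"CO2:N2 ratio at 1 km vs. surface: {ratio_shift:.3f}")
```

Over the first kilometer the CO2:N2 ratio would shift by only about 6-7% even at full diffusive equilibrium, consistent with the claim that the equilibrium separation low in the atmosphere is small.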