Guest Post by Willis Eschenbach
There seem to be a host of people out there who want to discuss whether humans are responsible for the post ~1850 rise in the amount of CO2. People seem madly passionate about this question. So I figure I’ll deal with it by employing the method I used in the 1960s to fire off dynamite shots when I was in the road-building game … light the fuse, and run like hell …
First, the data, as far as it is known. What we have to play with are several lines of evidence, some of which are solid, and some not so solid. These break into three groups: data about the atmospheric levels, data about the emissions, and data about the isotopes.
The most solid of the atmospheric data, as we have been discussing, is the Mauna Loa CO2 data. This in turn is well supported by the ice core data. Here’s what they look like for the last thousand years:
Figure 1. Mauna Loa CO2 data (orange circles), and CO2 data from 8 separate ice cores. Fuji ice core data is analyzed by two methods (wet and dry). Siple ice core data is analyzed by two different groups (Friedli et al., and Neftel et al.). You can see why Michael Mann is madly desirous of establishing the temperature hockeystick … otherwise, he has to explain the Medieval Warm Period without recourse to CO2. Photo shows the outside of the WAIS ice core drilling shed.
So here’s the battle plan:
I’m going to lay out and discuss the data and the major issues as I understand them, and tell you what I think. Then y’all can pick it all apart. Let me preface this by saying that I do think that the recent increase in CO2 levels is due to human activities.
Issue 1. The shape of the historical record.
I will start with Figure 1. As you can see, there is excellent agreement between the eight different ice cores, including the different methods and different analysts for two of the cores. There is also excellent agreement between the ice cores and the Mauna Loa data. Perhaps the agreement is coincidence. Perhaps it is conspiracy. Perhaps it is simple error. Me, I think it represents a good estimate of the historical background CO2 record.
So if you are going to believe that this is not a result of human activities, it would help to answer the question of what else might have that effect. It is not necessary to provide an alternative hypothesis if you disbelieve that humans are the cause … but it would help your case. Me, I can’t think of any obvious other explanation for that precipitous recent rise.
Issue 2. Emissions versus Atmospheric Levels and Sequestration
There are a couple of datasets that give us amounts of CO2 emissions from human activities. The first is the CDIAC emissions dataset. This gives the annual emissions (as tonnes of carbon, not CO2) separately for fossil fuel gas, liquids, and solids. It also gives the amounts for cement production and gas flaring.
The second dataset is much less accurate. It is an estimate of the emissions from changes in land use and land cover, or “LU/LC” as it is known … what is a science if it doesn’t have acronyms? The most comprehensive dataset I’ve found for this is the Houghton dataset. Here are the emissions as shown by those two datasets:
Figure 2. Anthropogenic (human-caused) emissions from fossil fuel burning and cement manufacture (blue line), land use/land cover (LU/LC) changes (white line), and the total of the two (red line).
While this is informative, and looks somewhat like the change in atmospheric CO2, we need a way to compare the two directly. The magic number for the conversion is the number of gigatonnes (billions of tonnes, 10^9 tonnes) of carbon it takes to change the atmospheric CO2 concentration by 1 ppmv. This turns out to be 2.13 gigatonnes of carbon (C) per ppmv.
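For readers who want to check the arithmetic, here is a minimal sketch of that conversion (Python; the 2.13 GtC per ppmv factor is the one quoted above, and the function names are mine):

```python
# Conversion between gigatonnes of carbon and ppmv of atmospheric CO2,
# using the 2.13 GtC per ppmv factor quoted in the text.
GTC_PER_PPMV = 2.13

def gtc_to_ppmv(gtc):
    """Gigatonnes of carbon -> equivalent change in atmospheric CO2 (ppmv)."""
    return gtc / GTC_PER_PPMV

def ppmv_to_gtc(ppmv):
    """Change in atmospheric CO2 (ppmv) -> gigatonnes of carbon."""
    return ppmv * GTC_PER_PPMV

# Example: the roughly 100 ppmv rise since pre-industrial times corresponds to
print(ppmv_to_gtc(100))  # ~213 GtC now residing in the atmosphere
```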
Using that relationship, we can compare emissions and atmospheric CO2 directly. Figure 3 looks at the cumulative emissions since 1850, along with the atmospheric changes (converted from ppmv to gigatonnes C). When we do so, we see an interesting relationship. Not all of the emitted CO2 ends up in the atmosphere. Some is sequestered (absorbed) by the natural systems of the earth.
Figure 3. Total emissions (fossil, cement, & LU/LC), amount remaining in the atmosphere, and amount sequestered.
Here we see that not all of the carbon that is emitted (in the form of CO2) remains in the atmosphere. Some is absorbed by some combination of the ocean, the biosphere, and the land. How are we to understand this?
To do so, we need to consider a couple of often conflated measurements. One is the residence time of CO2. This is the amount of time that the average CO2 molecule stays in the atmosphere. It can be calculated in a couple of ways, and is likely about 6–8 years.
The other measure, often confused with the first, is the half-life, or alternately the e-folding time of CO2. Suppose we put a pulse of CO2 into an atmospheric system which is at some kind of equilibrium. The pulse will slowly decay, and after a certain time, the system will return to equilibrium. This is called “exponential decay”, since a certain percentage of the excess is removed each year. The strength of the exponential decay is usually measured as the amount of time it takes for the pulse to decay to half its original value (half-life) or to 1/e (0.37) of its original value (e-folding time). The length of this decay (half-life or e-folding time) is much more difficult to calculate than the residence time. The IPCC says it is somewhere between 90 and 200 years. I say it is much less, as does Jacobson.
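To keep the two measures distinct, it may help to write them out (standard definitions; the symbols here are mine, not from the post):

```latex
% Residence (turnover) time: how long an average molecule stays aloft.
\tau_{res} = M / F_{gross}   % atmospheric mass / gross exchange flux

% Pulse adjustment: how fast an *excess* above equilibrium decays.
M_{excess}(t) = M_0 \, e^{-t/\tau_e}, \qquad
t_{1/2} = \tau_e \ln 2 \approx 0.69\,\tau_e
```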
Now, how can we determine if it is actually the case that we are looking at exponential decay of the added CO2? One way is to compare it to what a calculated exponential decay would look like. Here’s the result, using an e-folding time of 31 years:
Figure 4. Total cumulative emissions (fossil, cement, & LU/LC), cumulative amount remaining in the atmosphere, and cumulative amount sequestered. Calculated sequestered amount (yellow line) and calculated airborne amount (black) are shown as well.
As you can see, the assumption of exponential decay fits the observed data quite well, supporting the idea that the excess atmospheric carbon is indeed from human activities.
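For anyone who wants to reproduce the shape of Figure 4, here is a minimal sketch of that calculation. The 31-year e-folding time is the post’s value; the emissions series is an illustrative stand-in, not the CDIAC/Houghton data:

```python
import numpy as np

tau = 31.0                                         # e-folding time (years)
years = np.arange(1850, 2011)
emissions = 0.2 * np.exp(0.025 * (years - 1850))   # GtC/yr, rough stand-in

# The airborne excess is each year's emission pulse decayed by exp(-age/tau),
# summed over all pulses (a discrete convolution with the impulse response).
airborne = np.array([
    np.sum(emissions[:i + 1] * np.exp(-(years[i] - years[:i + 1]) / tau))
    for i in range(len(years))
])
sequestered = np.cumsum(emissions) - airborne

print(f"cumulative emissions: {np.cumsum(emissions)[-1]:6.1f} GtC")
print(f"still airborne:       {airborne[-1]:6.1f} GtC")
print(f"sequestered:          {sequestered[-1]:6.1f} GtC")
```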
Issue 3. 12C and 13C carbon isotopes
Carbon has a couple of natural isotopes, 12C and 13C. 12C is lighter than 13C. Plants preferentially use the lighter isotope (12C). As a result, plant-derived materials (including fossil fuels) have less 13C with respect to 12C (a lower 13C/12C ratio).
It is claimed (I have not looked very deeply into this) that since about 1850 the amount of 12C in the atmosphere has been increasing. There are several lines of evidence for this: 13C/12C ratios in tree rings, 13C/12C ratios in the ocean, and 13C/12C ratios in sponges. Together, they suggest that the cause of the post 1850 CO2 rise is fossil fuel burning.
However, there are problems with this. For example, here is a Nature article called “Problems in interpreting tree-ring δ 13C records”. The abstract says (emphasis mine):
THE stable carbon isotopic (13C/12C) record of twentieth-century tree rings has been examined [1-3] for evidence of the effects of the input of isotopically lighter fossil fuel CO2 (δ13C ~ -25‰ relative to the primary PDB standard [4]), since the onset of major fossil fuel combustion during the mid-nineteenth century, on the 13C/12C ratio of atmospheric CO2 (δ13C ~ -7‰), which is assimilated by trees by photosynthesis. The decline in δ13C up to 1930 observed in several series of tree-ring measurements has exceeded that anticipated from the input of fossil fuel CO2 to the atmosphere, leading to suggestions of an additional input of biospheric CO2 (δ13C ~ -25‰) during the late nineteenth/early twentieth century. Stuiver has suggested that a lowering of atmospheric δ13C of 0.7‰ from 1860 to 1930, over and above that due to fossil fuel CO2, can be attributed to a net biospheric CO2 (δ13C ~ -25‰) release comparable, in fact, to the total fossil fuel CO2 flux from 1850 to 1970. If information about the role of the biosphere as a source of or a sink for CO2 in the recent past can be derived from tree-ring 13C/12C data, it could prove useful in evaluating the response of the whole dynamic carbon cycle to the increasing input of fossil fuel CO2, and thus in predicting potential climatic change through the greenhouse effect of the resultant atmospheric CO2 concentrations. I report here the trend (Fig. 1a) in whole wood δ13C from 1883 to 1968 for tree rings of an American elm, grown in a non-forest environment at sea level in Falmouth, Cape Cod, Massachusetts (41°34′N, 70°38′W) on the northeastern coast of the US. Examination of the δ13C trends in the light of various potential influences demonstrates the difficulty of attributing fluctuations in 13C/12C ratios to a unique cause, and suggests that comparison of pre-1850 ratios with temperature records could aid resolution of perturbatory parameters in the twentieth century.
This isotopic line of argument seems like the weakest one to me. The total flux of carbon through the atmosphere is about 211 gigatonnes per year plus the human contribution. This means that the human contribution to the atmospheric flux ranged from ~2.7% in 1978 to ~4% in 2008. During that time, the average 13C/12C ratio across the 11 NOAA measuring stations decreased by 0.7 per mil.
Now, the atmosphere has a 13C/12C ratio of about -7 per mil. Given that, for the amount of CO2 added to the atmosphere to cause a 0.7 per mil drop, the added CO2 would need to have had a 13C/12C ratio of around -60 per mil.
But fossil fuels in the current mix have a 13C/12C ratio of ~ -28 per mil, only about half of that required to make such a change. So it is clear that fossil fuel burning is not the sole cause of the change in the atmospheric 13C/12C ratio. Note that this is the same finding as in the Nature article.
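The arithmetic behind those numbers can be framed as simple two-component mixing. This is a sketch only: the linear per-mil approximation and the example fractions are my assumptions, chosen to reproduce the figures in the text:

```python
# Linear mixing approximation: delta_new ~ (1 - f)*delta_atm + f*delta_source,
# so a drop of `drop` per mil requires f * (delta_atm - delta_source) = drop.

def source_delta_needed(delta_atm, drop, f):
    """delta13C a source must have to lower the atmosphere by `drop` per mil
    if it supplies a fraction `f` of the mixed carbon."""
    return delta_atm - drop / f

def fraction_needed(delta_atm, drop, delta_source):
    """Fraction of the carbon a source must supply for a given drop."""
    return drop / (delta_atm - delta_source)

print(source_delta_needed(-7.0, 0.7, 0.013))  # ~-61 per mil at a ~1.3% share
print(fraction_needed(-7.0, 0.7, -28.0))      # ~0.033: a -28 fossil mix needs ~3.3%
```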
In addition, an examination of the year-by-year changes makes it obvious that there are other large-scale effects on the global 13C/12C ratio. From 1984 to 1986, it increased by 0.03 per mil. From ’86 to ’89, it decreased by 0.2 per mil. And from ’89 to ’92, it didn’t change at all. Why?
However, at least the sign of the change in the atmospheric 13C/12C ratio (decreasing) is in agreement with the theory that at least part of the rise is from anthropogenic CO2 produced by fossil fuel burning.
CONCLUSION
As I said, I think that the preponderance of evidence shows that humans are the main cause of the increase in atmospheric CO2. It is unlikely that the change in CO2 is from the overall temperature increase. During the ice age to interglacial transitions, on average a change of 7°C led to a doubling of CO2. We have seen about a tenth of that change (0.7°C) since 1850, so from temperature alone we’d expect only about a tenth of a doubling: 280 ppmv × (2^0.1 − 1) ≈ 20 ppmv.
Given all of the issues discussed above, I say humans are responsible for the change in atmospheric CO2 … but obviously, for lots of people, YMMV. Also, please be aware that I don’t think that the change in CO2 will make any meaningful difference to the temperature, for reasons that I explain here.
So having taken a look at the data, we have finally arrived at …
RULES FOR THE DISCUSSION OF ATTRIBUTION OF THE CO2 RISE
1. Numbers trump assertions. If you don’t provide numbers, you won’t get much traction.
2. Ad hominems are meaningless. Saying that some scientist is funded by big oil, or is a member of Greenpeace, or is a geologist rather than an atmospheric physicist, is meaningless. What is important is whether what they say is true or not. Focus on the claims and their veracity, not on the sources of the claims. Sources mean nothing.
3. Appeals to authority are equally meaningless. Who cares what the 12-member Board of the National Academy of Sciences says? Science isn’t run by a vote … thank goodness.
4. Make your cites specific. “The IPCC says …” is useless. “Chapter 7 of the IPCC AR4 says …” is useless. Cite us chapter and verse, specify page and paragraph. I don’t want to have to dig through an entire paper or an IPCC chapter to guess at which one line you are talking about.
5. QUOTE WHAT YOU DISAGREE WITH!!! I can’t stress this enough. Far too often, people attack something that another person hasn’t said. Quote their words, the exact words you think are mistaken, so we can all see if you have understood what they are saying.
6. NO PERSONAL ATTACKS!!! Repeat after me. No personal attacks. No “only a fool would believe …”. No “Are you crazy?”. No speculation about a person’s motives. No “deniers”, no “warmists”, no “econazis”, none of the above. Play nice.
OK, countdown to mayhem in 3, 2, 1 … I’m outta here.
Gail Combs says:
June 8, 2010 at 4:02 pm
Beck did have a series of measurements made at Barrow but not by a CAGW scientist and that is what I referred to
Sorry for the late comment, I was away for a three-week vacation (without Internet!)…
The historical data from Barrow would have been of interest if the instrument had been accurate enough. Unfortunately, they used the micro-Scholander instrument, which was intended for CO2 monitoring of exhaled air, at concentrations of about 2% (and higher), i.e. 20,000 ppmv. The instrument was calibrated with… outside air: if the reading was between 200 and 500 ppmv, the instrument was deemed OK. The accuracy of the instrument thus was +/- 150 ppmv… These outside air values were used by Beck as part of his curve (with the 1942 “peak”).
Another interesting set of measurements was made in Antarctica, but these show extreme CO2 levels at abnormally low oxygen concentrations, which points to local contamination.
Similar problems of local contamination can be found in many of the historical measurements, but Ernst Beck used them all without hesitation…
so why is the average northern hemisphere CO2 not higher than the south?
There is an increasing lag between the NH and the SH, which points to the NH as the general source of the increase.
So what the Consensus has done is to “calibrate” the various records into agreement.
Sorry, that is not right. The calibration gases are calibrated against each other and, at one place on earth (formerly at Scripps, currently NOAA I suppose), against an original set which is measured with an extremely accurate manometric method. That has nothing to do with “adjusting” the results.
About stomata data:
Stomata data are a proxy for current and past CO2 levels, but they have their problems, as does any other proxy.
The main problem is the same as for many of the historical measurements: plants with stomata by definition grow on land (sea creatures don’t need stomata), where CO2 levels are not very well mixed. Diurnal differences of over 200 ppmv in summer are not uncommon in vegetated areas. That alone gives a positive bias against “background” CO2 levels. This would not be a problem in itself if the bias were constant. Nevertheless, the stomata data show a reasonable fit (+/- 10 ppmv) when calibrated against MLO and ice core data over the past 100 years.
According to the stomata people, stomata (index) density is set by the average CO2 level of the preceding growing season. That fits reasonably well for the past 100 years. But the problem is in the past: how do we know that the landscape (and accordingly the local CO2 levels) didn’t change over time? For several countries, we know that there was a huge evolution in landscape: from marshes to forests to agriculture, all in the main wind direction of some of the stomata proxy sites.
The same goes for weather influences: a warmer/colder climate may influence local/regional flora, and thus local CO2 levels, far more than the “background” CO2 levels.
Thus while stomata data have their pros (a better resolution than ice cores), there are a lot of problems too, which mean that past absolute levels or variability mainly reflect local/regional CO2 levels and should be taken with a grain of salt as estimates of “global” CO2 levels…
thethinkingman says:
June 8, 2010 at 9:05 am
I am sorry if this sounds a bit, well, simple, but what is the resolution in years, decades, centuries etc. for the CO2 level revealed by ice cores?
That depends on the accumulation rate of the snow/ice at the ice core drilling site and the average temperature, which together determine the depth (and time) needed to close the bubbles. The ice cores with the highest accumulation rate are two of the three cores at Law Dome (about 1.5 metres of ice equivalent per year). These have the best resolution, about 8 years. That means that any peak of about 20 ppmv over one year, or an extra 3 ppmv sustained over 8 years, would be measurable in these ice cores. Thus Beck’s “peak” of about 80 ppmv around 1942 would be visible, but it is not.
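A back-of-envelope way to see that attenuation argument is to approximate bubble closure as a moving average over the resolution window. This is a sketch with illustrative numbers, not ice core data:

```python
import numpy as np

resolution = 8                     # years averaged by bubble closure (Law Dome)
years = np.arange(1900, 1981)
co2 = np.full(years.shape, 310.0)  # flat background, ppmv
co2[years == 1942] += 80.0         # hypothetical one-year 80 ppmv spike

kernel = np.ones(resolution) / resolution
recorded = np.convolve(co2, kernel, mode="same")

# An 80 ppmv one-year spike survives as an ~10 ppmv anomaly, far above the
# +/- 1.3 ppmv accuracy quoted below, so smoothing would not erase it.
print(round(recorded.max() - 310.0, 1))
```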
The drawback of the high accumulation is that the complete core, down to (near) rock bottom, does not go far back in time: only about 150 years. The third Law Dome core has a lower accumulation rate (it was drilled downslope), has a resolution of about 40 years, and goes back about 1,000 years. That one shows a 6 ppmv CO2 drop coinciding with the LIA.
The Law Dome project is very interesting, as it answers a lot of the objections of Jaworowski: they used three different drilling methods (wet and dry), measured CO2 in firn and in ice at several depths, and found no clathrates and no cracks,…
The accuracy of all three cores for similar gas age was +/- 1.3 ppmv (1 sigma), CO2 in firn with still-open pores was identical to CO2 in ice with already-closed pores, and there was an overlap of about 20 years with the South Pole direct measurements, all within the accuracy of the ice cores.
For more (paywall) details, see:
http://www.agu.org/pubs/crossref/1996/95JD03410.shtml
Other ice cores are progressively more inland, which means less precipitation and lower temperatures. That gives coarser resolution but reaches far longer periods back in time; the extreme is Dome C, with only a few mm of ice equivalent per year: some 600 years of resolution, but 800,000 years back in time.
No matter the differences in resolution, temperature, dust inclusions, and accumulation, if one plots all ice core CO2 levels over time, the CO2 levels for the same average gas age are within 5 ppmv across all the different ice cores.
Ferdinand Engelbeen on 6/20/10 at 4:02 pm said,
>> Yes, the IPCC does attribute the total of the rise to aCO2. That is about the increase of the total amount. That doesn’t mean that all individual molecules of aCO2 emitted in the past 150 years or so are still in the atmosphere. Most of them were exchanged for “natural” CO2 during the seasonal exchanges, at a rate of near 20% per year, or about 150 GtC/800 GtC in the atmosphere (a half-life slightly over 5 years). But seasonal exchanges don’t change the total amount of CO2 (whatever the source) in the atmosphere when in equilibrium.
I don’t believe anyone said all the “individual ACO2 molecules” are still in the atmosphere. That would be silly. The molecules from any source, if they could be tagged, would be absorbed randomly, some almost instantaneously, and some never, because of the tail of the absorption probability curve. We can only deal profitably with means, and the mean for the molecule is the mean for the slug, from which it is derived.
When we speak of a molecule of ACO2 and nCO2, we are making an approximation because the two species of CO2 are indistinguishable in the first order process of dissolution in the waters, and they balance in the terrestrial flux. The two species are different mixes of 12CO2:13CO2:14CO2. Some plants are fractionating, favoring one molecule over the other, and an argument can be made that the mechanical process of dissolution should depend on molecular weight. But these are fifth order conjectures or less, well lost in the noise of estimating the primary effects of temperature, pressure, wind velocity, and salinity.
What IPCC needed to have done is publish the mass balance analysis on which it claims to have based its carbon cycle model of AR4, Figure 7.3, p. 515. In that model the flux from fossil fuels is about 6 GtC/yr, mixed in with about 120 GtC/yr from land and 91 GtC/yr from the ocean, but the model omits the 270 GtC/yr from leaf water claimed in the TAR. That puts fossil fuel emissions at about 3% of the total without leaf water, and 1.3% with. Expect the mass balance analysis to show that that fraction is the amount of any increase in the atmospheric CO2 concentration that could be attributed to ACO2. That should be the result even when the model accounts for 12CO2 and 13CO2 separately (the 14CO2 concentration being lost in the noise).
More importantly, IPCC needed to make the outgassing of CO2 from the ocean temperature dependent. It is a positive feedback that IPCC overlooked, and one that confounds its conjecture that ACO2 is the cause of global warming.
And as far as mistaken attribution is concerned, IPCC zeroed the natural rise in temperature occurring at the start of the industrial era. That natural temperature rise should have continued for another 3ºC or so, extrapolating from the Vostok record. By zeroing it, IPCC then attributed that on-going, natural temperature rise to the CO2 rise at MLO, which it wrongly attributed to a global rise and wrongly to ACO2.
By the way, the atmosphere is never in equilibrium, unless you have a definition of equilibrium that is different than thermodynamic equilibrium.
You say,
>> But we know from the CO2 emissions inventory and the CO2 measurements (at a lot of places) that nature absorbs about half the emissions (as mass!): nature as a whole is a net sink for CO2.
What you cite is IPCC dogma. It is but a coincidence of numbers, and not a cause and effect model. IPCC says that 70 PgC/yr of nCO2 is absorbed into the ocean from 597 PgC in the atmosphere, which is 11.83%. For ACO2, IPCC’s numbers are 22.2 PgC/yr absorbed from 165 PgC, which is 13.45%. One might consider these numbers close enough (a ratio of 1.138:1) for a carbon cycle budget, but that would be naïve. The difference between nCO2 and ACO2 is a delta 13C of -25‰ vs. -8‰, respectively. That corresponds to a 13C fraction of the total carbon of 1.0838% vs. 1.1024%, respectively (a ratio of about 1.017:1). IPCC’s discrepancy in absorption is far greater than this delicate difference in mix between nCO2 and ACO2.
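The conversion from delta 13C to an absolute 13C fraction, for anyone checking those percentages, is straightforward (a sketch; R_PDB is the standard 13C/12C reference ratio, and the two deltas are those quoted above):

```python
# delta13C (per mil vs the PDB standard) -> absolute fraction of 13C atoms.
R_PDB = 0.0112372  # 13C/12C ratio of the PDB reference

def c13_fraction(delta_permil):
    r = R_PDB * (1.0 + delta_permil / 1000.0)  # sample 13C/12C ratio
    return r / (1.0 + r)                       # 13C / (12C + 13C)

print(f"{c13_fraction(-25):.4%}")  # ~1.0838% at -25 per mil
print(f"{c13_fraction(-8):.4%}")   # ~1.1024% at -8 per mil
```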
If the absorption of nCO2 and ACO2 were different, the only known physical difference between them is the mix ratio of 13CO2:12CO2. Suppose we postulate different solubility coefficients for 12CO2 and 13CO2, itself not implausible though likely unmeasurable, and solve so that the bulk ACO2 absorption is IPCC’s 13.45% of its atmospheric concentration per year, while IPCC’s corresponding number for nCO2 is 11.83%. The solubility coefficient for 12CO2 turns out to be -0.0826 and for 13CO2, 86.33. Regardless of the units lost in the percentages, the solution to IPCC’s model is that the ocean must outgas 12CO2. Ignoring that little difficulty, the conjecture is that ACO2 and nCO2 are dissolved in water, but that the water fractionates, changing all the mixes in the atmosphere and the water.
In summary, IPCC’s model cannot be solved under the laws of solubility. This explains why IPCC does not use Henry’s Law and does not supply its mass balance analysis. The best explanation for IPCC’s irregular fluxes is that the concept of a molecule of ACO2, a mix, and of nCO2, a mix, has no more meaning than a molecule of the atmosphere.
Your model by analogy to banks and factories is as worthless as the other analogies suggested in this thread. But that is the usual fate of all scientific models by analogy.
You conclude,
>> there is zero net contribution by nature to the total amount of CO2 in the atmosphere and humans are (near) totally responsible for the increase.
The greenhouse gases of water vapor and CO2 come from surface waters, the latter principally in accord with the law of solubility (Henry’s Law is for equilibrium, and the atmosphere and ocean surface are never in equilibrium), and the concentrations of those two GHGs are dynamic feedbacks of the surface temperature. Man’s contribution is negligibly small, especially in consideration of the noise in the variables. The surface temperature in Earth’s warm state follows the radiation of the sun, filtered by ocean currents, amplified in the short term, most probably by cloud albedo, and regulated in the longer term by the negative feedback of cloud albedo. Humans are not involved.
Jeff Glassman says:
June 20, 2010 at 10:37 am
When we speak of a molecule of ACO2 and nCO2, we are making an approximation because the two species of CO2 are indistinguishable in the first order process of dissolution in the waters, and they balance in the terrestrial flux.
In fact they are not indistinguishable: with 150 years of emissions, the effect can be measured both in the atmosphere and in the upper ocean waters:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/sponges.gif
Note the difference in d13C level between the atmosphere and the ocean water (sponges reflect the d13C level of the surrounding (bi)carbonate in the water without fractionation). That difference is caused by fractionation at the surface (in both directions).
That puts fossil fuel emissions at about 3% of the total without leaf water, and 1.3% with.
Here we disagree: you do not treat the natural emissions as part of a full cycle. To make the full mass balance, one needs to include the other part of the cycle, the sinks. Taking your own figures (without leaf water, but that doesn’t change the essence):
After a full cycle the total balance is:
97% nCO2 + 3% aCO2 – x% (a+n)CO2 = 1.5% (measured increase in the atmosphere)
so x% in this case is 98.5%. In other words, the natural sinks are larger than the natural sources, and nature adds nothing to the mass balance, no matter how large the natural sources and sinks are, no matter any change in individual or total sinks or sources, and no matter the partitioning between oceans and vegetation as net sinks.
Or put in another way: if there were no human emissions, would the amount of CO2 in the atmosphere increase, decrease or stay the same?
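The bookkeeping in that balance can be made concrete in a few lines (a sketch in the comment’s own percentage terms; the absolute sizes of the natural fluxes don’t matter for the conclusion):

```python
natural_sources = 97.0   # nCO2 released over a full cycle (arbitrary units)
human_emissions = 3.0    # aCO2 released
observed_rise = 1.5      # measured increase left in the atmosphere

# Whatever the gross fluxes, sinks must absorb everything not left airborne:
total_sinks = natural_sources + human_emissions - observed_rise   # 98.5
natural_net = natural_sources - total_sinks                       # -1.5

print(total_sinks, natural_net)
# Sinks (98.5) exceed natural sources (97.0), so nature as a whole removed
# CO2 over the cycle: the observed rise is smaller than the human input.
```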
More importantly, IPCC needed to make the outgassing of CO2 from the ocean temperature dependent.
It is temperature dependent, but so is the uptake by vegetation, in the opposite direction. The seasonal changes in CO2 level therefore are relatively small, and mainly in the NH (more land/vegetation). The long-term CO2-temperature dependency is about 8 ppmv/K; for short-term temperature changes around the trend it is about 4 ppmv/K.
Of course that is a dynamic equilibrium, not a static one.
In summary, IPCC’s model cannot be solved under the laws of solubility. This explains why IPCC does not use Henry’s Law and does not supply its mass balance analysis.
You overestimate the role of Henry’s Law for seawater: it plays a very tiny part in the whole equation. Most of the CO2 in seawater is not CO2 in solution, but in the form of carbonate and bicarbonate. pH and DIC (total dissolved inorganic carbon) play a much larger role, and of course temperature, but also biolife. Seawater contains much more CO2 in its different forms than fresh water, or than Henry’s Law alone would show. For pCO2 differences between sea surface and atmosphere, see the excellent pages of Feely et al.:
http://www.pmel.noaa.gov/pubs/outstand/feel2331/exchange.shtml
Man’s contribution is negligibly small, especially in consideration of the noise in the variables
The (temperature-induced) noise in the total mass balance at the end of the seasonal cycle is about half the size of the emissions over the past 50 years:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/dco2_em.jpg
Jeff Glassman said
“And as far as mistaken attribution is concerned, IPCC zeroed the natural rise in temperature occurring at the start of the industrial era. That natural temperature rise should have continued for another 3ºC or so, extrapolating from the Vostok record. By zeroing it, IPCC then attributed that on-going, natural temperature rise to the CO2 rise at MLO, which it wrongly attributed to a global rise and wrongly to ACO2.”
Yes, exactly
In my view the IPCC (bereft of historians) took a temperature snapshot from 1850/1880 and assumed that the (gentle) rise from that date was due to CO2, without realising that they were merely recording the tail end of a much longer rise.
The LIA is much misunderstood, inasmuch as the second (and arguably most severe) phase started sporadically in 1601 (possibly the coldest year ever) and developed in fits and starts, with the real action between 1650 and 1698, when the Little Ice Age in the usual sense of the word (“Age” meaning extended) came to an end. There followed a 40-year warm period in which temperatures rose at a faster rate than any since, and in effect after that the cold periods were very episodic and do not merit the name “little ice age” but “little ice age interludes”.
Ironically, the last concerted gasp of anything approximating the Little Ice Age was 1879, 1880 and 1881. So James Hansen commenced measurements from a distinct trough in temperatures. I have commented on this curiosity a number of times in articles I have written using historic instrumental records from around the world, which I collect here:
http://climatereason.com/LittleIceAgeThermometers/
I think it would be very interesting if someone like Willis were to analyse Hansen’s highly influential 1987 paper, from which GISS was constructed, because his 1880 start date was, to me, completely illogical. I increasingly feel that 1880 was chosen for a reason other than the reasons usually stated.
Whether the depth of the LIA was 1601 or 1684, what is certain is that temperatures have been rising in fits and starts since at least the 1690s (that is, for some 320 years or more), and every decade since 1810 has been warmer than that decade.
The IPCC snapshot is too narrow to see this broader picture, which can be backed up by instrumental records. Indeed, the Lamb graph pastiche of the LIA used in the first IPCC report was more accurate than the hockey stick that replaced it.
The lack of correlation between CO2 and the start of the rise in temperatures can be seen in this graph:
http://c3headlines.typepad.com/.a/6a010536b58035970c0120a7c87805970b-pi
I am keeping out of this particular CO2 debate, having crossed swords with most of the protagonists here when I ran my own thread on ‘Historic variations in CO2’ over at the Air Vent:
http://noconsensus.wordpress.com/2010/03/06/historic-variations-in-co2-measurements/
Tonyb
Ferdinand Engelbeen on 6/20/10 at 6:38 am said,
>>>> So what the Consensus has done is to “calibrate” the various records into agreement.
>>Sorry, that is not right. The calibration gases are calibrated against each other and at one place on earth (formerly at Scripps, currently NOAA I suppose) against an original set which is measured with an extremely accurate manometric method. That has nothing to do with “adjusting” the results.
That all may be true enough, but that is neither the end of IPCC’s “calibration”, nor the critical part of it. For example,
>>Based on an ocean carbon cycle model used in the IPCC SAR, tuned to yield an ocean-atmosphere flux of 2.0 PgC/yr in the 1980s for consistency with the SAR. After re-calibration to match the mean behaviour of OCMIP models and taking account of the effect of observed changes in temperature aon [sic] CO2 and solubility, the same model yields an ocean-atmosphere flux of −1.7 PgC/yr for the 1980s and −1.9 PgC/yr for 1989 to 1998. Citations deleted, TAR, Table 3.3, p. 208.
>>The longitudinal variations in CO2 concentration reflecting net surface sources and sinks are on annual average typically <1 ppm. TAR.
>>Such documentary data also need calibration against instrumental data to extend and reconstruct the instrumental record. Citations deleted, AR4, ¶1.4.2, p. 107.
>>In 2005, the global average abundance of CH4 measured at the network of 40 surface air flask sampling sites operated by NOAA/GMD in both hemispheres was 1,774.62 ± 1.22 ppb. This is the most geographically extensive network of sites operated by any laboratory and it is important to note that the calibration scale it uses has changed since the TAR. The new scale (known as NOAA04) increases all previously reported CH4 mixing ratios from NOAA/GMD by about 1%, bringing them into much closer agreement with the Advanced Global Atmospheric Gases Experiment (AGAGE) network. Footnote, citation deleted. AR4, ¶2.3.2, p. 140.
>>Note that the differences between AGAGE and NOAA/GMD calibration scales are determined through occasional intercomparisons. AR4, Figure 2.4, p. 142.
>>These differences are the result of different cross calibrations and drift adjustments applied to individual radiometric sensitivities when constructing the composites. Citation deleted, AR4, ¶2.7.1.1.2, p. 188.
>>The large spread in T2 trends stems from differences in the inter-satellite calibration and merging technique, and differences in the corrections for orbital drift, diurnal cycle change and the hot-point calibration temperature. Citations deleted, AR4, ¶3.4.1.2.1, p. 267.
>>These products rely on the merging of many different satellites to ensure uniform calibration. AR4, ¶3.4.2.2, p. 273.
>>For this reason, the proxies must be ‘calibrated’ empirically, by comparing their measured variability over a number of years with available instrumental records to identify some optimal climate association, and to quantify the statistical uncertainty associated with scaling proxies to represent this specific climate parameter. AR4, ¶6.6.1.1, pp. 472-3.
>>Evaluated on the scale typical of current AOGCMs, nearly all quantities simulated by high-resolution AGCMs agree better with observations, but the improvements vary significantly for different regions and specific variables, and extensive recalibration of parametrizations is often required. Citation deleted, AR4, ¶11.10.1.1, p. 918.
>>The set of GCM simulations of the observed period 1902 to 1998 are individually aggregated in area-averaged annual or seasonal time series and jointly calibrated through a linear model to the corresponding observed regional trend. AR4, ¶11.10.2.2.2, p. 923.
>>Additionally, the year by year (blue curve) and 50 year average (black curve) variations of the average surface temperature of the Northern Hemisphere for the past 1000 years have been reconstructed from “proxy” data calibrated against thermometer data (see list of the main proxy data in the diagram). TAR, Summary for Policymakers, Figure 1, p. 3.
>> Complex physically based climate models are the main tool for projecting future climate change. In order to explore the full range of scenarios, these are complemented by simple climate models calibrated to yield an equivalent response in temperature and sea level to complex climate models. These projections are obtained using a simple climate model whose climate sensitivity and ocean heat uptake are calibrated to each of seven complex climate models. TAR, Summary for Policymakers, p. 13.
>>CO2 has been measured at the Mauna Loa and South Pole stations since 1957, and through a global surface sampling network developed in the 1970s that is becoming progressively more extensive and better inter-calibrated. Citations deleted, TAR, ¶3.5.1, p. 205.
These clippings say nothing about the limitations and accuracy problems IPCC admitted to encountering with its calibrations.
You discussed a laboratory calibration of detectors. IPCC calibrates detector measurements from different locations, different times, and different instruments against one another, so that the results look alike. It calibrates the data supplied to models, and its parameterizations, so that the model results look like the results of other models. These, and every mention of an intercalibration, raise the specter of data adjustments. It graphs such calibrated records by calibrating the traces once again so that they merge or overlap.
IPCC uses calibration in lieu of error analysis. It uses visual correlation in place of numerical correlation, and is not above adjusting the offset and scale factor of one record to make it look like another, and then to claim a relationship. It calibrates its models and the data fed into its models so that the model results will look like the data. Then it fails to see if its well-tuned model with well-tuned data has any predictive power. This is not the stuff of science.
Ferdinand Engelbeen on 6/20/10 at 12:00 pm said,
>> You overestimate the role of Henry’s Law for seawater: that plays a very tiny part of the whole equation. Most of the CO2 of seawater is not CO2 in solution, but in form of carbonate and bicarbonate. pH, DIC (total dissolved inorganic carbon) play a much larger role and of course temperature, but also biolife. Seawater contains much more CO2 in different forms than fresh water or Henry’s Law will show.
IPCC needed CO2 to accumulate in the atmosphere for its AGW conjecture to work. It tried to resurrect the buffer factor, the Revelle & Suess conjecture that failed. When it tried to measure the buffer factor, the measurements exhibited the temperature dependence of solubility instead. This appeared in a draft of AR4, which IPCC promptly removed and suppressed.
IPCC reinforced the buffer factor with the model that the surface water is in equilibrium. With that assumption, it applied the chemical equations of equilibrium. The solution to these equations is the Bjerrum plot. The equations showed that the surface layer would be a buffer against CO2 dissolution, and in the bargain, that what CO2 was absorbed would cause a nice, alarming increase in ocean acidity.
Of course, the surface layer is not in equilibrium. It is not in equilibrium even in the most stagnant pool.
If one made a solubility measurement in IPCC’s surface layer model, he would find that Henry’s Coefficient depends not just on temperature, pressure, and salinity, but also on the state of the surface layer according to the Bjerrum plot. This is novel physics, invented to make AGW work.
A far better model, one consistent with all physics but not AGW, is that the surface layer is not in equilibrium. Instead it contains a surplus of molecular CO2, sufficient to preserve Henry’s Law and to itself be a buffer for the ions in the chemical equations. The buffer is not in the atmosphere; it is in the surface layer.
Jeff Glassman says:
June 20, 2010 at 1:42 pm
That all may be true enough, but that is neither the end of IPCC’s “calibration”, nor the critical part of it.
I may agree with most of what you object to, but not about the CO2 data: the IPCC has nothing to do with the calibration or the end results of the CO2 levels at Mauna Loa or any other station on earth. That was the work of Keeling at Scripps until the end of the 1990s, and of NOAA today, but controlled by different methods (manometric, GC, mass spectrometry) by different labs from different organisations in different countries. That makes the CO2 data robust, and far beyond the machinations used for the temperature records. The only moment that all the (worldwide) data had to be revised was when it was discovered that the original CO2-in-N2 calibration gases (used out of fear of corrosion of the inside of the containers) gave different results with the NDIR measurements than CO2 in air. This required a worldwide recalibration of all instruments with the new calibration gases. But as the raw (voltage) measurements were still available, that wasn’t a huge problem.
Those are the base CO2 data, which represent 95% of the atmosphere. The problems are in the other 5% of the atmosphere: the first 200-1000 m over land, where the fast emitters/sinks are (humans, bacteria, vegetation) and mixing is slow under low wind conditions. Thus one should never use these data for “global” CO2 levels. But the below-200 m data over land are used for flux measurements, to try to understand the uptake/release of CO2 over different areas. That is of interest for the detailed carbon cycle but of no interest for the global CO2 mass balance.
Jeff Glassman says:
June 20, 2010 at 2:16 pm
If one made a solubility measurement in IPCC’s surface layer model, he would find that Henry’s Coefficient depends not just on temperature, pressure, and salinity, but also on the state of the surface layer according to the Bjerrum plot. This is novel physics, invented to make AGW work.
Jeff, one needs to differentiate between what is measured, what can be calculated, and what is speculation in climate science. Temperature plots aside, most direct measurements are done by people interested in good data, whatever those data show. That the IPCC (and some intermediaries) manipulate the interpretation of the data is a different category (including the GCMs and other computer games).
Ocean pCO2 (the result of all the factors in seawater: DIC, pH, salinity, biolife,…) was measured sporadically at a lot of places by ships, and is measured systematically nowadays, including at a few fixed places on earth. E.g. in Bermuda:
http://www.bios.edu/Labs/co2lab/research/IntDecVar_OCC.html
That means (as noted in a previous message) that we know with reasonable accuracy where the outgassing from the oceans occurs and where the sinks are, as well as the intermediate areas (source in summer, sink in winter). The fluxes in and out are more difficult to estimate, as wind speed is a huge factor in the exchanges: simple diffusion and surface crossing are quite slow (there is on average only a 0.000007 bar difference in pCO2 between the atmosphere and the upper oceans).
All measured pCO2 changes are positive in both air and oceans. The upper oceans track the air concentrations, or the reverse; but as any (deep) ocean burp would increase the d13C level of the atmosphere (even including the sea-air fractionation), and we see the reverse, that shows that the net flow is from the atmosphere into the oceans, not the other way around. Moreover, the pH is getting lower while DIC increases. If the lower pH were the cause of more outgassing (converting (bi)carbonate into CO2, thus increasing oceanic pCO2), that would reduce the DIC content, but we see the reverse.
Thus there is overwhelming evidence that the oceans are not the source of the extra CO2 in the atmosphere, but a net sink.
“Thus there is overwhelming evidence that the oceans are not the source of the extra CO2 in the atmosphere, but a net sink.”
No matter. All that is necessary is for the effectiveness of the sink to vary. Net outgassing is unnecessary.
Warmer ocean surfaces will reduce the effectiveness of the sink.
Ferdinand Engelbeen on 6/20/10 at 2:16 pm said,
>> Jeff, one needs to differentiate between what is measured, what can be calculated, and what is speculation in climate science. Temperature plots aside, most direct measurements are done by people interested in good data, whatever those data show. That the IPCC (and some intermediaries) manipulate the interpretation of the data is a different category (including the GCMs and other computer games).
We seem to be converging here, which is quite encouraging. My work in this area arises out of what I considered gross errors in the scientific method evident in IPCC reports. In particular, I find the use of an unvalidated model for public policy a breach of ethics for a scientist, and a fraud when done for personal gain. My observation is that there would be no climate crisis, and especially no CO2 crisis, but for IPCC. I am focused entirely on exposing the IPCC fraud. The rest of climatology can wander over the domain as it might choose, and I might look in on their work from time to time with amusement.
You said,
>>Ocean pCO2 (the result of all factors in seawater: DIC, pH, salinity, biolife,…) were sporadically measured at a lot of places by ships and systematically nowadays and on a few fixed places on earth. E.g. in Bermuda: http://www.bios.edu/Labs/co2lab/research/IntDecVar_OCC.html
>>That gives (as sent in a previous message) that we know with reasonable accuracy where the outgassing is from the oceans and where the sinks are. And the intermediates (source in summer, sink in winter). The flux measurements in and out are more difficult to estimate, as wind speed is a huge factor in the exchanges, as simple diffusion and surface crossing is quite slow (there is average only 0.000007 bar difference in pCO2 pressure between the atmosphere and the upper oceans).
I think that Takahashi, et al., have done a commendable job in assembling those data into a beautiful model. It is AR4, Figure 7.8, p. 523. The sum of all the individual cells in the Takahashi diagram is correctly the net uptake of the ocean. However, Takahashi’s positive and negative partial sums are not faithful to the uptake and outgassing fluxes used by IPCC in Figure 7.3, p. 515. Therefore, the Takahashi model needs recalibration. I provided an example on my blog. See Rocket Scientist’s Journal, “On Why CO2 Is Known Not To Have Accumulated in the Atmosphere, etc.”, Figure 1A.
You mention “all the factors”, with examples, but excluding dissolved molecular CO2. In my model, I rely on Henry’s Law, with the ocean surface layer a compliant, instantly available buffer. CO2 rich water rises at the end of the subsurface THC in the Eastern Equatorial Pacific (EEP) to outgas at the warmest prevailing SST of the day. (In an equivalent view, the THC can be considered a continuous path, closed by a branch over the surface of the ocean.) A warm air mass, heavily laden with CO2, rises, divides north and south, to enter the Hadley cells, which then carry the gas into the trade winds. This creates a plume of CO2 in the atmosphere that descends across Hawaii. That plume I imagine to be a ridge that can wander with the prevailing wind at MLO, causing it to have a seasonal cycle. (It’s too bad that the MLO data do not seem to include wind measurements.) MLO data represent a major source of atmospheric CO2.
That CO2 then wanders across the surface of the globe with natural wind currents. Meanwhile, the ocean surface layer moves poleward, cooling and reabsorbing CO2. These currents flow to the poles, where the surface water, made dense by cooling (always to ice-water temperatures) and especially by CO2, and somewhat by salinity (the conventional model), descends to depth as the headwaters of the THC. The polar regions represent major sinks of atmospheric CO2.
The THC subsurface current has many branches by which it reemerges, but a dominant branch leads to the EEP about a millennium later. The lag and the shape of the solubility curve are measurable in the Vostok record. See http://www.rocketscientistsjournal.com, “The Acquittal of Carbon Dioxide”.
Some conclusions: surface waters are the major source of atmospheric CO2, and the mean trends of CO2 concentration measured at MLO and the South Pole should not match.
Jeff Glassman says:
June 19, 2010 at 10:33 pm
Again you either misunderstand or pretend to. Read the IPCC definition again. Read my definition again.
Mean residence time is the time an average molecule stays in a reservoir. It is calculated as mass divided by throughput. IT DOES NOT HAVE A HALF-LIFE.
Pulse decay time (half-life) is the time for a pulse of CO2 which was added to the atmosphere to decay halfway back to the equilibrium. It has nothing to do with residence time.
There is a residence time when a system is at equilibrium. But by definition, there is no half-life of a pulse when a system is at equilibrium.
And as I said … if you don’t understand that, we can’t help you. You are free to write pages and pages here, though I fear you think you are gaining some traction or convincing someone. But repeating a misunderstanding doesn’t convince anyone but the already convinced.
“A warm air mass, heavily laden with CO2, rises, divides north and south, to enter the Hadley cells, which then carry the gas into the trade winds. This creates a plume of CO2 in the atmosphere that descends across Hawaii. That plume I imagine to be a ridge that can wander with the prevailing wind at MLO, causing it to have a seasonal cycle.”
I like that description, but how does it square with the similar record at Barrow and other locations?
Jeff Glassman says:
June 20, 2010 at 5:26 pm
This creates a plume of CO2 in the atmosphere that descends across Hawaii. That plume I imagine to be a ridge that can wander with the prevailing wind at MLO, causing it to have a seasonal cycle.
That there is a continuous CO2 flow from the warm equator to the poles is true, but it is not the cause of the seasonal cycle. The main cause is the terrestrial vegetation cycle of the mid-latitudes, mainly in the NH, where leaf formation in spring and subsequent photosynthesis use CO2, produce O2, and increase d13C levels from spring to fall. The opposite happens from fall to spring. That can be seen in the seasonal cycles of CO2, O2 and d13C levels.
Further, the pCO2 (measured or calculated) is directly proportional to the concentration of free CO2 in solution, and is thus more or less known for different parts of the oceans and for different seasons. Henry’s Law is still valid, but has very little to do with temperature (alone) in the case of seawater. Thus calculating fluxes in/out of seawater based only on Henry’s Law and temperature gives a completely wrong answer, the more so because the diffusion speed of CO2 through (sea)water is very low: even with very large differences in pCO2 between ocean surface and atmosphere, the speed of CO2 transfer is low, and wind speed is the dominant factor in the fluxes.
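The wind dominance described here is usually captured with a bulk flux formula. Below is a sketch using the widely cited quadratic gas-transfer velocity of Wanninkhof (1992); the solubility value and wind speeds are illustrative assumptions, and the Schmidt number is taken as roughly 660:

```python
def co2_flux(u10, dpco2, k0=3.0e-5):
    """Approximate air-sea CO2 flux, mol/m^2/yr (positive = into the ocean).

    u10:   wind speed at 10 m, m/s
    dpco2: pCO2(air) - pCO2(sea), microatmospheres
    k0:    CO2 solubility, mol/(m^3*uatm) -- a rough mid-latitude value
    """
    k = 0.31 * u10 ** 2           # transfer velocity, cm/hr (Wanninkhof 1992)
    k_m_yr = k * 0.01 * 24 * 365  # cm/hr -> m/yr
    return k_m_yr * k0 * dpco2

# Same 10 uatm pCO2 difference; doubling the wind quadruples the flux.
print(co2_flux(5.0, 10.0))    # ~0.2 mol/m^2/yr
print(co2_flux(10.0, 10.0))   # ~0.8 mol/m^2/yr
```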
From the past near-million years (Vostok, Dome C) we know that the (dynamic) equilibrium between past temperatures and CO2 levels is about 8 ppmv/K.
That includes all (deep) ocean flows, ice sheet and vegetation expansion/retreat, etc… That means that temperature is responsible for at most about 8 ppmv of the rise since the depth of the LIA. There is no reason to expect that this ratio is different now; to the contrary, the short-term influence of (ocean) temperature is about 4 ppmv/K around the trend.
Willis Eschenbach says:
June 20, 2010 at 6:18 pm
Mean residence time is the time an average molecule stays in a reservoir. It is calculated as mass divided by throughput. IT DOES NOT HAVE A HALF-LIFE.
While that is true for identical molecules, there still is a kind of half-life for a pulse of a different isotope (like 14C or 13C)…
For the sake of clarity I have plotted what happens if you add a pulse of “human” aCO2 of 100 GtC at once to the pre-industrial atmosphere containing 580 GtC:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/fract_level_pulse.jpg
Here FA is the fraction of “anthro” CO2 in the atmosphere, FL the fraction in the upper oceans, tCA the total amount of CO2 in the atmosphere, and nCA the amount of “natural” CO2 in the atmosphere, the difference being anthropogenic.
It is easy to see that after the pulse the total amount is of course the old amount plus the amount of the pulse, which brings the fraction of aCO2 to about 14%. This fraction is rapidly reduced to near zero within 40-50 years, simply by exchange with natural CO2 through the permanent and seasonal exchanges with the (deep) oceans and vegetation. The reduction of the extra CO2 from the pulse to near zero takes much longer; but even though there is almost no aCO2 left in the atmosphere, the increase in CO2 level above the equilibrium is still 100% caused by aCO2.
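That double bookkeeping (fast dilution of the “anthro” fraction, slow decay of the total excess) can be simulated in a few lines. A sketch only: the ~20%/yr gross exchange is from the comment above, while the 40-year e-folding for the total excess is my assumed round number:

```python
import numpy as np

base, total, anthro = 580.0, 680.0, 100.0   # GtC: background, total, "anthro"
exchange, tau_excess = 0.20, 40.0           # gross swap rate; excess e-folding

for year in range(1, 51):
    total = base + (total - base) * np.exp(-1.0 / tau_excess)  # slow net uptake
    anthro *= 1.0 - exchange   # fast one-for-one swap with "natural" CO2
    if year in (5, 20, 50):
        print(f"year {year:2d}: excess {total - base:5.1f} GtC, "
              f"anthro fraction {100.0 * anthro / total:7.4f}%")
```

The original “anthro” molecules are essentially gone within a few decades, while the excess above equilibrium, whatever molecules now make it up, decays far more slowly.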
Willis Eschenbach on 6/19/10 at 1:40 pm said, shouting at the end,
>>Mean residence time is the time an average molecule stays in a reservoir. It is calculated as mass divided by throughput. IT DOES NOT HAVE A HALF-LIFE.
He does not explain what he means by an “average molecule”. The reader might think he’s talking about the average in some sense of 12CO2, 13CO2, and 14CO2 molecules. What happened is that he conflated the average of time into the time of the average. These are interchangeable only in a special circumstance (linearity), which happens not to be applicable here. Then when he moved the average from time to the molecule, he was left with an expression of an average with respect to nothing real.
What he needed to do was average the life time of a bunch of molecules, then he would have had a target for the averaging. This might have kept him from conflating the average life of a molecule and the average lifetime in a slug of molecules.
When in his second sentence cited, Mr. Eschenbach says, “It is calculated …”, the “it” pretty surely refers to “mean residence time”. But that is not what “it” references in the third sentence. He might intend for the second “it” to refer to his “average molecule”. This is the grammatical error of a faulty pronoun reference, and it contributes to his scientific and mathematical error.
In his opening piece on 6/7/10, Willis Eschenbach said,
>>Suppose we put a pulse of CO2 into an atmospheric system which is at some kind of equilibrium. The pulse will slowly decay, and after a certain time, the system will return to equilibrium. This is called “exponential decay”, since a certain percentage of the excess is removed each year. The strength of the exponential decay is usually measured as the amount of time it takes for the pulse to decay to half its original value (half-life) or to 1/e (0.37) of its original value (e-folding time). The length of this decay (half-life or e-folding time) is much more difficult to calculate than the residence time. The IPCC says it is somewhere between 90 and 200 years. I say it is much less, as does Jacobson.
We can excuse Mr. Eschenbach’s use of the word equilibrium, even though IPCC seriously bungles the concept repeatedly, because Mr. E. says it’s “some kind of equilibrium”. Out of kindness, we can read “some kind of equilibrium” to mean steady state. However, Mr. E. didn’t even need the assumption to explain the decay of a pulse; he need not have introduced the state of the system. Later, Mr. E. says, “there is no half-life of a pulse when a system is at equilibrium.” When a pulse is added to a system, it will, as described below, have a half-life regardless of the state of the system. What counts is the mass of the pulse and its vanishing at a rate proportional to its remaining mass.
Mr. E. claims that “a certain percentage of the excess is removed each year.” First, he doesn’t mean the “excess”: there is no excess in his problem, and he is left with a “certain percentage” of an uncertain thing. In many basic physical problems, the rate of increase or decrease is proportional to the instantaneous parameter value, whether mass, quantity, or size. Dissolution of a gas into a liquid and the emptying of a reservoir are relevant examples. This is known from physics and experiment, and is not a consequence of the problem as Mr. E. has described it. When the rate is proportional to the total remaining, the solution is unique, and it is the exponential.
Mr. E. suggested I “Read the IPCC definition again.” Here it is, again, for all to read:
>>Turnover time (T) (also called global atmospheric lifetime) is the ratio of the mass M of a reservoir (e.g., a gaseous compound in the atmosphere) and the total rate of removal S from the reservoir: T = M / S.
IPCC goes on to equate T to mean residence time — sometimes. Now IPCC never uses Turnover time in the main body of either its 3rd or 4th Assessment Report, so it doesn’t matter to its writings that the definition is incomplete. In the case of CO2 being dissolved into water, the conventional model is that the rate of removal S is equal to a constant times the instantaneous mass remaining in the reservoir. So S = kM. This is the situation Mr. E. should have in mind, but can’t express, when he says “a certain percentage … is removed”. This has the effect of making T = 1/k.
Now S is the rate of change of the mass M, so we write dM/dt = -kM, where k is positive and is called the decay constant. Separating variables, the integral of dM/M equals the integral of -k dt, which gives ln(M) = -kt + constant; with the obvious value for the constant, the result is M(t) = M_0 * exp(-kt).
Setting M(t_half) = M_0 / 2 gives exp(-k * t_half) = 1/2, and thus the half-life is t_half = ln(2)/k. It doesn’t matter how big the mass is, all the way down to one molecule.
Similarly, setting M(t_e) = M_0 / e = M_0 * exp(-k * t_e) gives the e-folding time t_e = 1/k.
Now the remarkable thing is that the e-folding time is equal to the mean residence time as defined by IPCC. This is true for one molecule, for Avogadro’s number of molecules, or for 800 PgC worth of CO2 in the atmosphere in some unknown isotopic mix. This is not an average molecule.
Just to be sure, we can compute the average lifetime of a molecule in the reservoir being dissolved. First we need the normalized lifetime distribution at time t, which is k * exp(-kt). The average lifetime is then the integral of t * k * exp(-kt) dt from 0 to infinity, which first-year calculus shows to be 1/k. So 1/k is, all at once: the e-folding time, the average lifetime of a molecule in a slug, the average lifetime of a hypothetical molecule in general, the average lifetime of the slug, the reciprocal of the decay constant k, the mean residence time, and the (instantaneous) Turnover time T.
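A quick numerical check of that integral (a sketch; the 31-year figure is the e-folding time used earlier in the post):

```python
import numpy as np

k = 1.0 / 31.0                    # decay constant, 1/yr
t = np.linspace(0.0, 2000.0, 200_001)
pdf = k * np.exp(-k * t)          # normalized lifetime distribution

mean_lifetime = np.trapz(t * pdf, t)
print(mean_lifetime, 1.0 / k)     # both ~31 years: mean lifetime = e-folding time
```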
I write a lot of words and provide a lot of references and cite a lot of sources because that’s what it takes to untangle what you write.
Steven Wilde on 6/20/10 at 10:53 said,
>>I like that description but how does it square with the similar record at Barrow and other locations?
I rely almost exclusively on IPCC reports, and on papers cited there. I did not find Point Barrow CO2 there. My opinion about the Baring Head CO2 record is the same as that for the South Pole, below.
Re Ferdinand Engelbeen 6/21/10 at 8:36 am
I did not say that the CO2 flow originating at the Equator was the cause of the seasonal fluctuations. My model says that those fluctuations are due to the seasonal fluctuations in the prevailing wind at MLO, which modulate standing waves or gradients in the atmospheric CO2 concentration there.
The model that the CO2 seasonal fluctuations at MLO are due to terrestrial vegetation cycles is IPCC’s model, but it is a weak conjecture. Seasonal fluctuations are seen all over the globe, but they are not in sync. Investigators are still testing the terrestrial model for CO2 fluctuations. You might want to look at Keeling, CD, The Concentration and Isotopic Abundances of Carbon Dioxide in the Atmosphere, Tellus, v. 12, no. 2, 6/60, and Manning, AC and RF Keeling, Correlations in Short-Term Variations in Atmospheric Oxygen and Carbon Dioxide at Mauna Loa Observatory, a Scripps publication, 11/8/60 for four decades of uncertainty in the model.
That the modulation is terrestrial seems implausible because of the massive natural flow that should be dominating MLO concentrations.
At the same time, the MLO concentrations reported by IPCC and in the journals are simply too pat. The record there and at the South Pole looks as though someone had deconstructed the data into a roughly exponential trend line plus a seasonal component, smoothed them both, put them through a recalibration, and then reassembled them. Each series is too perfect, and the comparisons too coincidental. Real data don’t do that.
A good argument can be made that the temperature reconstruction in the Vostok record is global. That is not true of the CO2 record, which is local, sampled inside the CO2 sink of the Antarctic. Both are heavily smoothed, low-pass filtered by the firn closure time. The fact that all ice core records exhibit a hockey stick effect when merged into the instrument record is likely due to the difference in filtering plus IPCC data doctoring. What varies with SST is the outgassing in the EEP, which then has a profound effect on MLO. The uptake at the poles is at a constant temperature, around 0°C to 4°C, which means the ice core records have low variance, independent of SST.
Henry’s Law must have a profound effect for two reasons. One is that it has not been repealed. The other is that the notion that the surface ocean is a bottleneck, making CO2 queue up in the atmosphere waiting for slow sequestration processes to make room for it, is based on the assumption that the stoichiometric equations of equilibrium apply. A far better model, one that requires repealing nothing, is that the surface layer is a buffer for molecular CO2 that allows the flux with the atmosphere and the dissociation, currents, and sequestration all to proceed independently and at their own pace. Atmospheric CO2 is thus uncoupled from sequestration.
It’s also worth noting that IPCC’s model gives the absorption of CO2 three time constants. This is nonsense, tending to invalidate its model. It is equivalent to having three separate reservoirs and circuits for the CO2. The fastest time constant would dominate the emptying of the reservoir, and that time constant is much faster than IPCC’s fastest. Henry’s Law is a law of equilibrium, but we know from experience that the equilibration time for CO2 dissolution is instantaneous compared with even short-term climate.
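To make the three-time-constant point concrete, here is a sketch of a pulse response built from three exponentials; the weights and time constants are purely hypothetical illustrations, not the published model values:

```python
# A sketch of a pulse response with three time constants, of the kind
# attributed to IPCC's model above. The weights and time constants here
# are purely illustrative, not the published Bern-model values.
import numpy as np

a   = np.array([0.3, 0.4, 0.3])     # fractional weights (sum to 1)
tau = np.array([2.0, 20.0, 200.0])  # time constants, yr (hypothetical)

def pulse_fraction(t):
    """Fraction of an initial pulse remaining after t years."""
    return np.sum(a * np.exp(-t / tau))

for t in (1, 5, 20, 100):
    print(t, round(pulse_fraction(t), 3))
# Early on, the fastest term dominates the decay rate; later, the slow
# terms dominate what remains, which is the point in dispute here.
```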
One more thing is worth stating. The replenishment of CO2 in the surface waters is quite slow because the cooling of the layer is slow as the waters find their way back to the poles. A reasonable model is that Henry’s Law is satisfied everywhere along the path.
Jeff Glassman says:
June 21, 2010 at 2:46 pm
>>Turnover time (T) (also called global atmospheric lifetime) is the ratio of the mass M of a reservoir (e.g., a gaseous compound in the atmosphere) and the total rate of removal S from the reservoir: T = M / S.
Indeed, this is the right definition of turnover time. It describes the chance for any individual molecule in the atmosphere (whatever its origin) to be captured by or exchanged with CO2 from another reservoir (oceans, vegetation). In the current world, there is an exchange of about 150 GtC/year between the different reservoirs (both ways), thus about 20% of the atmospheric CO2 content (800 GtC) is exchanged each year with the other reservoirs.
Of course that is not used by the IPCC, as the turnover, no matter how large, has very little influence on how much CO2 resides in the atmosphere. What matters is what happens if you add an extra amount of CO2 to the atmosphere, whatever the source. That is governed by the pCO2 pressure difference between the oceans and the atmosphere, which is positive near the equator, negative near the poles, and seasonally (temperature) dependent in between. That currently gives a sink rate of about 2 GtC/year of CO2 into the upper oceans and about 1 GtC/year into vegetation. See:
http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf
Thus the turnover time (which is based on the 150 GtC/year exchange rate) has simply nothing to do with the decay time of an extra mass of CO2 brought into the atmosphere (which is based on the 3 GtC/year removal rate); the decay time is much longer than the turnover time.
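A back-of-envelope sketch of this distinction, using the round numbers above (and assuming an excess of roughly 100 ppmv at 2.13 GtC per ppmv):

```python
# A back-of-envelope sketch of the distinction drawn above, using the
# round numbers in this comment: ~150 GtC/yr gross exchange, ~800 GtC
# in the atmosphere, ~3 GtC/yr net removal of the added excess. The
# ~100 ppmv excess and 2.13 GtC per ppmv are assumptions for this sketch.
mass_atm   = 800.0         # GtC in the atmosphere
gross_exch = 150.0         # GtC/yr exchanged both ways
net_sink   = 3.0           # GtC/yr net uptake by oceans + vegetation
excess     = 100.0 * 2.13  # GtC above the pre-industrial level

turnover = mass_atm / gross_exch  # ~5.3 yr: molecule residence time
decay    = excess / net_sink      # ~71 yr: crude decay scale of excess

print(f"turnover ~ {turnover:.1f} yr, excess decay ~ {decay:.0f} yr")
```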
Jeff Glassman says:
June 21, 2010 at 4:54 pm
>>My model says that those fluctuations are due to the seasonal fluctuations in the prevailing wind at MLO, which modulate standing waves or gradients in the atmospheric CO2 concentration there.
As the data from a lot of stations shows the same pattern (and a reverse pattern in the SH), the assumption that prevailing winds are the cause of the seasonal variability at MLO is questionable:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/month_2002_2004_4s.jpg
As all exchanges are at the surface, it is normal that the largest seasonal changes are seen near the ground and that there is some lag with altitude (here for the NH):
http://www.ferdinand-engelbeen.be/klimaat/klim_img/seasonal_height.jpg
>>That the modulation is terrestrial seems implausible because of the massive natural flow that should be dominating MLO concentrations.
While the flows are massive, they are composed of a relatively constant term: emissions from the warm equator and sinks near the poles are rather constant, since the SSTs near the equator and near the poles change little over the seasons; only their position shifts somewhat. The main variability is in the mid-latitudes, where temperature and biological activity govern the pCO2 of the oceans. But the seasonal trend is opposite to what one would expect from higher temperatures in summer: CO2 is lower in summer than in winter. Thus the variability is caused by vegetation, not by the oceans, as the O2 and 13C trends also show. Even if these (still) have large margins of error, the mass trend is clear.
>>At the same time, the MLO concentrations reported by IPCC and in the journals are simply too pat.
Before you accuse someone of manipulating the data, please have a look at the (raw) data yourself. These are available online for four stations: Barrow, Mauna Loa, Samoa and the South Pole:
ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/
These are the calculated CO2 levels, based on two 20-minute series of 10-second voltage snapshots from the measurement cell, plus a few minutes of voltages measured from three calibration gases. Both the averages and the standard deviations of the calculated snapshots are given. These data are not changed in any way and simply give the average CO2 level + stdv of the past hour.
Some of the data are “flagged”: if the stdv within an hour is high, if the difference between subsequent hours is high, under upwind conditions, etc. These “flagged” data are excluded from daily, monthly and yearly averaging, because they represent local contamination, and only data deemed “background” are used for averaging. Does that influence the average and trend? Hardly. With or without outliers, the shape and trend are hardly different, only less variable around the (seasonal) trend:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/mlo2004_hr_raw.jpg
for 2004 including all data
http://www.ferdinand-engelbeen.be/klimaat/klim_img/mlo2004_hr_selected.gif
excluding “flagged” data.
Please check it yourself…
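For anyone who doesn’t want to download the station files, a toy demonstration on synthetic data of why this selection step hardly moves the mean; the numbers here are invented for illustration, not measured:

```python
# A toy demonstration, on synthetic data, of the selection step described
# above: flagging contaminated hours barely moves the mean, it mainly
# reduces scatter. Real station data live at the NOAA ftp link above.
import numpy as np

rng = np.random.default_rng(0)
background = 380.0 + rng.normal(0, 0.3, 1000)  # clean hourly values, ppmv
spikes = rng.random(1000) < 0.05               # ~5% locally contaminated
data = background + spikes * rng.normal(5, 3, 1000)

flagged = np.abs(data - np.median(data)) > 1.0  # crude outlier flag

print(f"all data:      {data.mean():.2f} ppmv")
print(f"without flags: {data[~flagged].mean():.2f} ppmv")
# The two means differ by a few tenths of a ppmv in this toy case.
```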
More later…
In addition:
The detailed measurement, calibration and selection procedures for MLO (and other stations) are available at:
http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html
Further reactions…
>>A good argument can be made that the temperature reconstruction in the Vostok record is global. That is not true of the CO2 record, which is local, sampled inside the CO2 sink of the Antarctic.
The CO2 record is even more global: today there is hardly any difference in CO2 levels between the North Pole and the South Pole for 95% of the atmosphere (only over land, up to about 1,000 m, is there a lot of noise). There is only a small (14-month) lag between the NH and the SH if one looks at yearly averages; see the difference between the yearly averages of several stations:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends.jpg
Thus the Vostok CO2 record simply represents the global CO2 levels of that time, be it smoothed over about 600 years, as that is the time that all gas bubbles need to be fully closed.
>>The fact that all ice core records exhibit a hockey stick effect when merged into the instrument record is likely due to the difference in filtering plus IPCC data doctoring.
There is not the slightest role of the IPCC in this case, as the ice core data are sampled by different drilling and measurement teams from different countries. That the different ice cores show the same hockeystick, despite huge differences in accumulation rate, temperature, (coastal) salt inclusions, etc., only strengthens the case that there is a real hockeystick in this case (confirmed by other proxies like d13C changes in sponges). The ice cores with the best resolution (8 years) show the same trend as those with the worst (600 years) for the same average gas age, and overlap by some 20 years with the direct data of the South Pole:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/antarctic_cores_001kyr_large.jpg
>>Henry’s Law must have a profound effect for two reasons. One is that it has not been repealed. The other is that the notion that the surface ocean is a bottleneck, making CO2 queue up in the atmosphere waiting for slow sequestration processes to make room for it, is based on the assumption that the stoichiometric equations of equilibrium apply.
Again you are missing a lot of factors in the exchange of CO2 between oceans and atmosphere. Henry’s Law still works, but the amount of free CO2 at the surface is influenced not only by temperature but by a host of other factors. In the end it is the real partial pressure of free CO2 in the last few cm of water which decides which way the CO2 will go, into or out of the water, according to whether the pCO2 of the atmosphere is lower or higher. And even then, the flux involved is secondary to wind speed: even if the upper few cm are rapidly in equilibrium with the atmosphere, the diffusion speed to supply CO2 for more emission, or to take in CO2 for more uptake, is very, very slow. It is the wind and waves mixing the layers that govern the uptake/release speed.
>>It’s also worth noting that IPCC’s model gives the absorption of CO2 three time constants. This is nonsense, tending to invalidate its model. It is equivalent to having three separate reservoirs and circuits for the CO2.
In fact there are three reservoirs involved: the ocean surface (+ vegetation), the deep oceans (+ longer-term coalification of vegetation) and (silicate) rock weathering. That doesn’t mean that I agree with the Bern model, as the second and third terms are only of interest if we were to burn all available oil and a lot of coal. Then one would see an important increase even in the deep-ocean CO2 content, which would show up in the equatorial upwelling of the THC.
But your fastest time constant is wrong, as it is based on the residence time, not the decay time needed to reduce any excess CO2 above (dynamic) equilibrium.
Re Ferdinand Engelbeen 6/22/10 at 5:03 am, who said:
>>The CO2 record is even more global: today there is hardly any difference in CO2 levels between the North Pole and the South Pole, for 95% of the atmosphere (only over land up to up to 1,000 m there is a lot of noise). There is only a small (14 months) lag between the NH and the SH if one looks at yearly averages, see the difference between the yearly averages of several stations: http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends.jpg
>>Thus the Vostok CO2 record simply represents the global CO2 levels of that time, be it smoothed over about 600 years, as that is the time that all gas bubbles need to be fully closed.
Even if what you say were true about North Pole and South Pole data, that does not make any South Pole data, today or in the paleo past, global. That is not supported by logic. IPCC admits that a measurable east-west gradient exists in global CO2, and that the north-south gradient is 10 times as great as the east-west gradient. That observation is part of the data reduced by IPCC by which it concluded that a manmade catastrophe is looming due to man’s CO2 emissions. That conclusion is false in part because CO2, by IPCC’s own admission, is not well-mixed.
As you have pointed out from time to time, local measurements can be quite different from what is assumed for the global CO2 concentration. MLO is local. It sits in the plume of by far the largest source of CO2, more than an order of magnitude greater than what man contributes. IPCC alters its data to make MLO look smooth, and then alters the records at the South Pole and Baring Head to overlap the MLO and to look indistinguishable in the trend line.
By the way, the smoothing in the ice core data ranges from 20 years according to one report, but more generally from 30 years up to a millennium and a half.
You say,
>>>> The fact that all ice core records exhibit a hockey stick effect when merged into the instrument record is likely due to the difference in filtering plus IPCC data doctoring.
>> There is not the slightest role of the IPCC in this case, as the ice core data are sampled by different drilling and measurement teams from different countries.
To the contrary, and as my remark stated, IPCC merged the ice core records into the instrument records. See TAR SPM, Figure 2, p. 6 (rocketscientistsjournal.com, “SGW”, Figure 34), AR4, Figure SPM.1, (“SGW”, Figure 35). This is unacceptable science. What the laboratories did in creating laboratory data is quite unimportant. The fraud starts with the IPCC.
You wrote,
>>Henry’s Law still is working, but the amount of free CO2 at the surface is not only influenced by temperature, but by a host of other factors. At last it is the real partial pressure of free CO2 in the last few cm of water which decides which way CO2 will go: in or out of the waters, if the difference with pCO2 of the atmosphere is higher or lower.
Actually the partial pressure of a gas in water is a fiction. It is taken to be the partial pressure of the gas in the gas state in contact with the water and in equilibrium with it. It is a laboratory concept, but good enough for climate work. What counts in dissolution is the partial pressure of the gas in the atmosphere, the solute, and the temperature of the water, the solvent.
The last few cm of water are relevant only to the extent that that microlayer might represent the surface layer. The surface layer, which is also known as the mixed layer, is a roiling turmoil running to a nominal depth of 50 m to 150 m or so. The wave action on the surface is the top of a larger overturning action, which is wind dependent. Of course, sometimes the wind is nil, and sometimes it is a storm. What counts is the average action over the surface as it migrates poleward, absorbing more and more CO2 along its path. The surface layer is thoroughly mixed with entrained air taken in from the surface. It is a perpetual, dynamic sampling mechanism by which surface air is captured and absorbed. The notion of a significant layer a few centimeters thick fits a stagnant pond.
You seem to agree when you write,
>>And even then, the flux involved is secondary to wind speed: even if the upper few cm are rapidly in equilibrium with the atmosphere, the diffusion speed to supply CO2 for more emission, or to take in CO2 for more uptake, is very, very slow. It is the wind and waves mixing the layers that govern the uptake/release speed.
The surface layer is taking up CO2 as it cools, as Henry’s Law informs us. If you are talking about diffusion across the air-water boundary, it is extremely rapid – instantaneous on any climate scale. If you are talking about diffusion from the surface layer to deep water, it is irrelevant to and unsensed by the atmosphere. The ocean is, contrary to IPCC’s strictly equilibrium analysis, a buffer of surplus molecular CO2 that allows Henry’s Law to operate with the atmosphere, and the stoichiometric equations to operate along with the vertical ocean currents and the sequestration processes.
Ocean emissions occur when the water is heated. That occurs principally at the upwellings, and especially in the Eastern Equatorial Pacific. Small variations occur over the surface as shown by Takahashi. These are minor fluctuations in the larger pattern going from tropical temperatures to polar temperatures, spanning almost the full range of the solubility curve. As I said before, Takahashi got the net flux right. He did not get the 90 to 100 PgC/yr up and down fluxes right. The Takahashi diagram of AR4 Figure 7.8, p. 523 (rocketscientistsjournal.com, “On Why CO2 Is Known Not To Accumulate in the Atmosphere, etc.”, Figure 1), is not in accord with the carbon cycle of AR4 Figure 7.3, p. 515. One way to bring the Takahashi analysis into agreement with the carbon cycle is to recalibrate it as shown in Figure 1a, id.
You say,
>> But your fastest time constant is wrong, as it is based on the residence time, not the decay time needed to reduce any excess CO2 above (dynamic) equilibrium.
In the same spirit of repetitiveness, no decay time is involved. Dynamic and equilibrium are contradictory. You might be referring to a dynamic steady state, which is acceptable on climate scales as a description of the surface layer.
You are trying to defend IPCC’s model that ACO2 accumulates in the atmosphere to drive the climate, while nCO2 does not accumulate, but remains in a perpetual dynamic steady state. This model is indefensible.
Jeff Glassman says:
June 22, 2010 at 7:42 am
>>Even if what you say were true about North Pole and South Pole data, that does not make any South Pole data, today or in the paleo past, global. That is not supported by logic. IPCC admits that a measurable east-west gradient exists in global CO2, and that the north-south gradient is 10 times as great as the east-west gradient.
The NH-SH gradient in yearly averages is less than 5 ppmv on a level of 390 ppmv or less than 2%. I call that well-mixed, which is true for 95% of the atmosphere, from the North Pole to the South Pole, including MLO. Well mixed doesn’t imply that at all places on earth at the same time one can find exactly the same levels. But away from huge sources and sinks, within a reasonable mixing time, the levels are within small boundaries.
Thus if the South Pole data are within 2% of the North Pole data within a year, with increasing emissions in the NH today, I may assume that the ice core data from Antarctica represent 95% of the ancient atmosphere, be it smoothed over a long(er) period.
>>IPCC alters its data to make MLO look smooth, and then alters the records at the South Pole and Baring Head to overlap the MLO and to look indistinguishable in the trend line.
Sorry, this is a bridge too far. The raw hourly data, unadulterated and unchanged in any way, including all outliers, are available for checking and comparison by anyone like you and me, at least for four stations, including MLO (I even received a few days of the raw 10-second voltage data for a check of the calculations, on simple request). If you have any proof that the data were changed by anyone, or even that the selection procedure has any significant influence on the levels, averages or trends, well then we can discuss that. If you have no proof, then this is simply slander.
>>To the contrary, and as my remark stated, IPCC merged the ice core records into the instrument records. This is unacceptable science. What the laboratories did in creating laboratory data is quite unimportant. The fraud starts with the IPCC.
The merging of the ice core data and the instrument record is acceptable, as it is based on a 20-year overlap of the ice cores from Law Dome with the South Pole direct measurements. There is no difference between CO2 at the South Pole and at the top layer of the firn at the drilling site. There is a small gradient (about 10 ppmv) in the firn from the top down to the closing depth, which means that the CO2 at closing depth is about 7 years older than in the atmosphere, and there is no difference in CO2 levels between open (firn) air and already-closed bubbles in the ice at closing depth. The average closing time is 8 years. See:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/law_dome_overlap.jpg
The original article from Etheridge e.a. from 1996 (unfortunately behind a paywall) is at:
http://www.agu.org/pubs/crossref/1996/95JD03410.shtml
Further, there are successive overlaps between ice cores, all within a few ppmv for the same gas age. That means that CO2 levels were (smoothed) between 180 and 310 ppmv for the past 800,000 years, except for the past 150 years, where they reached 335 ppmv in 1980 (ice cores) to 390 ppmv in 2010 (firn and air). Nothing to do with the (much) higher levels of many millions of years ago, when geological conditions were quite different.
As there is a rather linear correlation between temperature and CO2 (Vostok, Dome C) of about 8 ppmv/K, there is no reason to expect that the current increase is caused by temperature: that would imply a 12.5 K increase (100 ppmv divided by 8 ppmv/K) to explain the 100 ppmv increase of the past 150 years.
>>Actually the partial pressure of a gas in water is a fiction. It is taken to be the partial pressure of the gas in the gas state in contact with the water and in equilibrium with it.
The partial pressure of CO2 in water may be a fiction (I don’t think so), but the equilibrium with the air above has been measured continuously on ships for many decades now, and it is the driving force for uptake or release of CO2 from/to the air above it. That is much more realistic than some theoretical calculation from Henry’s Law which takes no factors other than temperature into account. It can also be calculated from all the other components, including temperature. See:
http://cat.inist.fr/?aModele=afficheN&cpsidt=1679548
>>As I said before, Takahashi got the net flux right. He did not get the 90 to 100 PgC/yr up and down fluxes right.
As Al Tekhasski said in a previous thread, the Feely e.a. calculation is wrong, as they used (local) averages of the pCO2 difference and wind speed for the calculation, while one needs the average of the momentary pCO2 difference times the momentary wind speed, which is quite different…
>>In the same spirit of repetitiveness, no decay time is involved. Dynamic and equilibrium are contradictory. You might be referring to a dynamic steady state, which is acceptable on climate scales as a description of the surface layer.
As a (retired) chemical engineer, I have seen hundreds of chemical reactions in dynamic equilibrium, where the equilibrium is shifted by changing the concentration of one of the components and/or the temperature and/or the pressure. The whole CO2 cycle behaves as a simple linear first-order process, where the “normal” equilibrium was ruled by temperature, now disturbed by the addition of more CO2 from outside the normal cycle. The fact that only half the amount added is showing up in the atmosphere, in an extremely linear way, for at least the past 100 years, supports that “model”:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_co2_1900_2004.jpg
Or do you think that any natural process would follow the human emissions in such an exact way?
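As a sketch of what such a first-order “model” implies, assuming purely illustrative parameters (a 50-year sink time constant and 3%/yr emissions growth), one can integrate it numerically and watch the airborne fraction settle toward a constant:

```python
# A sketch of the "simple linear first order process" claimed above:
# with exponentially growing emissions, a single linear sink makes the
# airborne fraction converge to a constant. All parameters are toy values.
import numpy as np

tau = 50.0                 # sink e-folding time, yr (illustrative)
ppmv_per_gtc = 1.0 / 2.13  # ppmv rise per GtC added
C0, C = 280.0, 280.0       # pre-industrial and running CO2, ppmv

years = np.arange(1900, 2005)
emissions = 0.5 * 1.03 ** (years - 1900)  # GtC/yr, ~3%/yr growth (toy)

cumulative_e, airborne = 0.0, []
for e in emissions:
    C += e * ppmv_per_gtc - (C - C0) / tau  # one-year Euler step
    cumulative_e += e
    airborne.append((C - C0) / (cumulative_e * ppmv_per_gtc))

print(f"airborne fraction 1950: {airborne[50]:.2f}, 2004: {airborne[-1]:.2f}")
# With these toy parameters the fraction settles slowly toward ~0.6,
# i.e., a roughly constant share of cumulative emissions stays airborne.
```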
>>You are trying to defend IPCC’s model that ACO2 accumulates in the atmosphere to drive the climate, while nCO2 does not accumulate, but remains in a perpetual dynamic steady state. This model is indefensible.
I never said that aCO2 is accumulating and nCO2 not; to the contrary. In fact aCO2 at first adds to the total mass (and thus is the cause of the increase), but it is readily exchanged with nCO2 in the short term. The exchange itself doesn’t change the total mass, but about half of the initial increase (a mix of aCO2 and nCO2) is absorbed by oceans and vegetation, because a slightly higher pCO2 in the atmosphere decreases the outgassing at warm places (where temperature/Henry’s Law/ocean pCO2 didn’t change) and increases the uptake at colder places (something similar happens in the stomatal cavities of vegetation).
You are trying to prove that the extra human CO2 added doesn’t accumulate in the atmosphere (as mass! not as individual molecules), even though all known observations support that it does, probably because that is one of the cornerstones of the AGW hypothesis. Without accumulation, the AGW hypothesis fails.
I try to see what can be supported on scientific grounds and what cannot; in the case of accumulation, the IPCC is right. But for the other cornerstone, the effect of the accumulation, the IPCC is wrong (IMHO).