Guest Post by Willis Eschenbach
There seem to be a host of people out there who want to discuss whether humanoids are responsible for the post ~1850 rise in the amount of CO2. People seem madly passionate about this question. So I figure I’ll deal with it by employing the method I used in the 1960s to fire off dynamite shots when I was in the road-building game … light the fuse, and run like hell …
First, the data, as far as it is known. What we have to play with are several lines of evidence, some of which are solid, and some not so solid. These break into three groups: data about the atmospheric levels, data about the emissions, and data about the isotopes.
The most solid of the atmospheric data, as we have been discussing, is the Mauna Loa CO2 data. This in turn is well supported by the ice core data. Here’s what they look like for the last thousand years:
Figure 1. Mauna Loa CO2 data (orange circles), and CO2 data from 8 separate ice cores. Fuji ice core data is analyzed by two methods (wet and dry). Siple ice core data is analyzed by two different groups (Friedli et al., and Neftel et al.). You can see why Michael Mann is madly desirous of establishing the temperature hockeystick … otherwise, he has to explain the Medieval Warm Period without recourse to CO2. Photo shows the outside of the WAIS ice core drilling shed.
So here’s the battle plan:
I’m going to lay out and discuss the data and the major issues as I understand them, and tell you what I think. Then y’all can pick it all apart. Let me preface this by saying that I do think that the recent increase in CO2 levels is due to human activities.
Issue 1. The shape of the historical record.
I will start with Figure 1. As you can see, there is excellent agreement between the eight different ice cores, including the different methods and different analysts for two of the cores. There is also excellent agreement between the ice cores and the Mauna Loa data. Perhaps the agreement is coincidence. Perhaps it is conspiracy. Perhaps it is simple error. Me, I think it represents a good estimate of the historical background CO2 record.
So if you are going to believe that this is not a result of human activities, it would help to answer the question of what else might have that effect. It is not necessary to provide an alternative hypothesis if you disbelieve that humans are the cause … but it would help your case. Me, I can’t think of any obvious other explanation for that precipitous recent rise.
Issue 2. Emissions versus Atmospheric Levels and Sequestration
There are a couple of datasets that give us amounts of CO2 emissions from human activities. The first is the CDIAC emissions dataset. This gives the annual emissions (as tonnes of carbon, not CO2) separately for fossil fuel gas, liquids, and solids. It also gives the amounts for cement production and gas flaring.
The second dataset is much less accurate. It is an estimate of the emissions from changes in land use and land cover, or “LU/LC” as it is known … what is a science if it doesn’t have acronyms? The most comprehensive dataset I’ve found for this is the Houghton dataset. Here are the emissions as shown by those two datasets:
Figure 2. Anthropogenic (human-caused) emissions from fossil fuel burning and cement manufacture (blue line), land use/land cover (LU/LC) changes (white line), and the total of the two (red line).
While this is informative, and looks somewhat like the change in atmospheric CO2, we need something to compare the two directly. The magic number to do this is the number of gigatonnes (billions of tonnes, 1 * 10^9) of carbon that it takes to change the atmospheric CO2 concentration by 1 ppmv. This turns out to be 2.13 gigatonnes of carbon (C) per 1 ppmv.
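The 2.13 figure can be derived from first principles. A minimal sketch, assuming round values for the mass of the atmosphere and the molar masses involved (these constants are approximations, not numbers from the post):

```python
# Sketch: deriving the GtC-per-ppmv conversion factor.
# The factor follows from the mass of the atmosphere and the molar
# masses of air and carbon. Values below are standard approximations.

M_ATM = 5.137e18       # mass of the atmosphere, kg (approximate)
M_AIR = 28.97          # mean molar mass of dry air, g/mol
M_C = 12.01            # molar mass of carbon, g/mol

# moles of air, times 1e-6 (one ppmv), times grams of C per mole, in Gt:
gtc_per_ppmv = M_ATM * 1e3 / M_AIR * 1e-6 * M_C / 1e15
print(round(gtc_per_ppmv, 2))  # 2.13
```

Any value near 2.13 confirms the conversion; small differences come from the assumed atmospheric mass.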
Using that relationship, we can compare emissions and atmospheric CO2 directly. Figure 3 looks at the cumulative emissions since 1850, along with the atmospheric changes (converted from ppmv to gigatonnes C). When we do so, we see an interesting relationship. Not all of the emitted CO2 ends up in the atmosphere. Some is sequestered (absorbed) by the natural systems of the earth.
Figure 3. Total emissions (fossil, cement, & LU/LC), amount remaining in the atmosphere, and amount sequestered.
Here we see that not all of the carbon that is emitted (in the form of CO2) remains in the atmosphere. Some is absorbed by some combination of the ocean, the biosphere, and the land. How are we to understand this?
To do so, we need to consider a couple of often conflated measurements. One is the residence time of CO2. This is the amount of time that the average CO2 molecule stays in the atmosphere. It can be calculated in a couple of ways, and is likely about 6–8 years.
The other measure, often confused with the first, is the half-life, or alternately the e-folding time of CO2. Suppose we put a pulse of CO2 into an atmospheric system which is at some kind of equilibrium. The pulse will slowly decay, and after a certain time, the system will return to equilibrium. This is called “exponential decay”, since a certain percentage of the excess is removed each year. The strength of the exponential decay is usually measured as the amount of time it takes for the pulse to decay to half its original value (half-life) or to 1/e (0.37) of its original value (e-folding time). The length of this decay (half-life or e-folding time) is much more difficult to calculate than the residence time. The IPCC says it is somewhere between 90 and 200 years. I say it is much less, as does Jacobson.
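For readers who want the arithmetic behind these terms, here is a small sketch of how half-life and e-folding time relate for a simple exponential decay. The 31-year e-folding time is the value used in the Figure 4 calculation below:

```python
import math

# Sketch: relation between half-life and e-folding time.
# A pulse decays as  pulse(t) = pulse0 * exp(-t / tau),
# where tau is the e-folding time.
def remaining_fraction(t, tau):
    return math.exp(-t / tau)

tau = 31.0                      # e-folding time in years
half_life = tau * math.log(2)   # half-life = tau * ln(2)
print(round(half_life, 1))      # 21.5 years
print(round(remaining_fraction(31.0, tau), 2))  # 0.37, i.e. 1/e
```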
Now, how can we determine if it is actually the case that we are looking at exponential decay of the added CO2? One way is to compare it to what a calculated exponential decay would look like. Here’s the result, using an e-folding time of 31 years:
Figure 4. Total cumulative emissions (fossil, cement, & LU/LC), cumulative amount remaining in the atmosphere, and cumulative amount sequestered. Calculated sequestered amount (yellow line) and calculated airborne amount (black) are shown as well.
As you can see, the assumption of exponential decay fits the observed data quite well, supporting the idea that the excess atmospheric carbon is indeed from human activities.
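To make the "pulses decaying exponentially" idea concrete, here is a toy version of the Figure 4 calculation. The emissions series is a hypothetical exponential-growth series, not the actual CDIAC/Houghton numbers, so only the qualitative behavior matters:

```python
import math

# Sketch of the Figure 4 comparison: treat each year's emissions as a
# pulse that decays exponentially with a 31-year e-folding time, and sum
# what remains airborne at the end.
TAU = 31.0  # e-folding time, years

def airborne(emissions):
    """Cumulative airborne carbon after the last year, GtC."""
    n = len(emissions)
    return sum(e * math.exp(-(n - 1 - i) / TAU) for i, e in enumerate(emissions))

# hypothetical emissions growing ~2%/yr over 160 years
emis = [0.1 * 1.02 ** y for y in range(160)]
total = sum(emis)
in_air = airborne(emis)
print(round(in_air / total, 2))  # airborne fraction well below 1
```

The rest of the emitted total, `total - in_air`, is the "sequestered" curve in Figure 4.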
Issue 3. 12C and 13C carbon isotopes
Carbon has a couple of natural isotopes, 12C and 13C. 12C is lighter than 13C. Plants preferentially use the lighter isotope (12C). As a result, plant-derived materials (including fossil fuels) have a lower amount of 13C with respect to 12C (a lower 13C/12C ratio).
It is claimed (I have not looked very deeply into this) that since about 1850 the amount of 12C in the atmosphere has been increasing. There are several lines of evidence for this: 13C/12C ratios in tree rings, 13C/12C ratios in the ocean, and 13C/12C ratios in sponges. Together, they suggest that the cause of the post 1850 CO2 rise is fossil fuel burning.
However, there are problems with this. For example, here is a Nature article called “Problems in interpreting tree-ring δ13C records”. The abstract says (emphasis mine):
THE stable carbon isotopic (13C/12C) record of twentieth-century tree rings has been examined [1–3] for evidence of the effects of the input of isotopically lighter fossil fuel CO2 (δ13C ≈ −25‰ relative to the primary PDB standard [4]), since the onset of major fossil fuel combustion during the mid-nineteenth century, on the 13C/12C ratio of atmospheric CO2 (δ13C ≈ −7‰), which is assimilated by trees through photosynthesis. The decline in δ13C up to 1930 observed in several series of tree-ring measurements has exceeded that anticipated from the input of fossil fuel CO2 to the atmosphere, leading to suggestions of an additional input of isotopically light CO2 (δ13C ≈ −25‰) during the late nineteenth/early twentieth century. Stuiver has suggested that a lowering of atmospheric δ13C of 0.7‰ from 1860 to 1930, over and above that due to fossil fuel CO2, can be attributed to a net biospheric CO2 (δ13C ≈ −25‰) release comparable, in fact, to the total fossil fuel CO2 flux from 1850 to 1970. If information about the role of the biosphere as a source of or a sink for CO2 in the recent past can be derived from tree-ring 13C/12C data, it could prove useful in evaluating the response of the whole dynamic carbon cycle to increasing input of fossil fuel CO2, and thus in predicting potential climatic change through the greenhouse effect of resultant atmospheric CO2 concentrations. I report here the trend (Fig. 1a) in whole wood δ13C from 1883 to 1968 for tree rings of an American elm, grown in a non-forest environment at sea level in Falmouth, Cape Cod, Massachusetts (41°34′N, 70°38′W) on the northeastern coast of the US. Examination of the δ13C trends in the light of various potential influences demonstrates the difficulty of attributing fluctuations in 13C/12C ratios to a unique cause, and suggests that comparison of pre-1850 ratios with temperature records could aid resolution of perturbatory parameters in the twentieth century.
This isotopic line of argument seems like the weakest one to me. The total flux of carbon through the atmosphere is about 211 gigatonnes plus the human contribution. This means that the human contribution to the atmospheric flux ranged from ~2.7% in 1978 to 4% in 2008. During that time, the average value across the 11 NOAA measuring stations for the 13C/12C ratio decreased by 0.7 per mil.
Now, the atmosphere has a 13C/12C signature of about −7 per mil. Given that, for the amount of CO2 added to the atmosphere to cause a 0.7 per mil drop, the added CO2 would need to have had a 13C/12C signature of around −60 per mil.
But fossil fuels in the current mix have a 13C/12C ratio of about −28 per mil, only about half of that required to make such a change. So it is clear that fossil fuel burning is not the sole cause of the change in the atmospheric 13C/12C ratio. Note that this is the same finding as in the Nature article.
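The mixing arithmetic behind the "around −60 per mil" figure can be sketched as a two-component mass balance. Note that the ~1.3% addition fraction used here is an assumption chosen for illustration (it roughly reproduces the −60 figure); it is not a number stated in the post:

```python
# Sketch of a two-component isotopic mixing calculation.
# Mixing mass m_add of CO2 with signature d_add into an atmosphere of
# mass m_atm at signature d_atm gives:
#   d_mix = (m_atm*d_atm + m_add*d_add) / (m_atm + m_add)
# Solving for d_add tells us what signature the addition must have had.
# NOTE: frac_added = 0.013 (~1.3%) is an illustrative assumption.
def required_delta(d_atm, d_mix, frac_added):
    """delta-13C (per mil) the added CO2 must have to shift d_atm to d_mix."""
    return (d_mix * (1 + frac_added) - d_atm) / frac_added

d = required_delta(d_atm=-7.0, d_mix=-7.7, frac_added=0.013)
print(round(d))  # -62: in the ballpark of the post's "around -60 per mil"
```

The result is very sensitive to the assumed addition fraction, which is part of why this line of argument is slippery.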
In addition, from an examination of the year-by-year changes it is obvious that there are other large-scale effects on the global 13C/12C ratio. From 1984 to 1986, it increased by 0.03 per mil. From ’86 to ’89, it decreased by 0.2 per mil. And from ’89 to ’92, it didn’t change at all. Why?
However, at least the sign of the change in the atmospheric 13C/12C ratio (decreasing) is in agreement with the theory that at least part of it is from anthropogenic CO2 production from fossil fuel burning.
CONCLUSION
As I said, I think that the preponderance of evidence shows that humans are the main cause of the increase in atmospheric CO2. It is unlikely that the change in CO2 is from the overall temperature increase. During the ice age to interglacial transitions, on average a change of 7°C led to a doubling of CO2. We have seen about a tenth of that change (0.7°C) since 1850, so we’d expect a CO2 change from temperature alone of only about 20 ppmv.
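The 20 ppmv estimate follows from simple scaling, sketched here (280 ppmv as the preindustrial baseline is an assumption of this sketch):

```python
# Sketch of the back-of-envelope above: if ~7 deg C of glacial-interglacial
# warming roughly doubles CO2, a 0.7 deg C rise scales the preindustrial
# level by 2**(0.7/7).
base = 280.0                  # preindustrial CO2, ppmv (approximate)
factor = 2 ** (0.7 / 7.0)     # ~1.07
expected_rise = base * (factor - 1)
print(round(expected_rise))   # 20 ppmv
```

That is far short of the ~100 ppmv actually observed, which is the point of the paragraph above.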
Given all of the issues discussed above, I say humans are responsible for the change in atmospheric CO2 … but obviously, for lots of people, YMMV. Also, please be aware that I don’t think that the change in CO2 will make any meaningful difference to the temperature, for reasons that I explain here.
So having taken a look at the data, we have finally arrived at …
RULES FOR THE DISCUSSION OF ATTRIBUTION OF THE CO2 RISE
1. Numbers trump assertions. If you don’t provide numbers, you won’t get much traction.
2. Ad hominems are meaningless. Saying that some scientist is funded by big oil, or is a member of Greenpeace, or is a geologist rather than an atmospheric physicist, is meaningless. What is important is whether what they say is true or not. Focus on the claims and their veracity, not on the sources of the claims. Sources mean nothing.
3. Appeals to authority are equally meaningless. Who cares what the 12-member Board of the National Academy of Sciences says? Science isn’t run by a vote … thank goodness.
4. Make your cites specific. “The IPCC says …” is useless. “Chapter 7 of the IPCC AR4 says …” is useless. Cite us chapter and verse, specify page and paragraph. I don’t want to have to dig through an entire paper or an IPCC chapter to guess at which one line you are talking about.
5. QUOTE WHAT YOU DISAGREE WITH!!! I can’t stress this enough. Far too often, people attack something that another person hasn’t said. Quote their words, the exact words you think are mistaken, so we can all see if you have understood what they are saying.
6. NO PERSONAL ATTACKS!!! Repeat after me. No personal attacks. No “only a fool would believe …”. No “Are you crazy?”. No speculation about a person’s motives. No “deniers”, no “warmists”, no “econazis”, none of the above. Play nice.
OK, countdown to mayhem in 3, 2, 1 … I’m outta here.
Jeff Glassman says:
My total response disappeared into cyberspace, probably due to too many links. Now in several parts:
Even if what you say were true about North Pole and South Pole data, that does not make any South Pole data, today or in the paleo past, global. That is not supported by logic. IPCC admits that a measurable east-west gradient exists in global CO2, and that the north-south gradient is 10 times as great as the east-west gradient.
The NH-SH gradient in yearly averages is less than 5 ppmv on a level of 390 ppmv or less than 2%. I call that well-mixed, which is true for 95% of the atmosphere, from the North Pole to the South Pole, including MLO. Well mixed doesn’t imply that at all places on earth at the same time one can find exactly the same levels. But away from huge sources and sinks, within a reasonable mixing time, the levels are within small boundaries.
Thus if the South Pole data are within 2% of the North Pole data within a year, with increasing emissions in the NH today, I may assume that the ice core data from Antarctica represent 95% of the ancient atmosphere, be it smoothed over a long(er) period.
IPCC alters its data to make MLO look smooth, and then alters the records at the South Pole and Baring Head to overlap the MLO and to look indistinguishable in the trend line.
Sorry, this is a bridge too far. The raw hourly data, not adulterated or changed in any way, including all outliers, are available for checking and comparison by anyone like you and me, at least for four stations, including MLO (I even received a few days of the raw 10-second voltage data to check the calculations, on simple request). If you have any proof that the data were changed by anyone, or even that the selection procedure has any significant influence on the levels, averages or trends, well then we can discuss that. If you have no proof, then this is simply slander.
To the contrary, and as my remark stated, IPCC merged the ice core records into the instrument records. This is unacceptable science. What the laboratories did in creating laboratory data is quite unimportant. The fraud starts with the IPCC.
The merging of the ice core data and the instrument record is acceptable, as that is based on a 20 year overlap of the ice cores from Law Dome with the South Pole direct measurements. There is no difference between CO2 at the South Pole and at the top layer of the firn at the drilling site. There is a small gradient (about 10 ppmv) in the firn from top to start closing depth, which means that the CO2 level at closing depth is about 7 years older than in the atmosphere and there is no difference in CO2 levels between open (firn) air and already closed bubbles in the ice at closing depth. The average closing time is 8 years. See:
Law Dome overlap
The original article from Etheridge et al. from 1996 (unfortunately behind a paywall) is at:
Etheridge et al.
Further, there are successive overlaps between ice cores, all within a few ppmv for the same gas age. That means that CO2 levels were (smoothed) between 180 and 310 ppmv for the past 800,000 years, except for the past 150 years where they reached 335 ppmv in 1980 (ice cores) to 390 in 2010 (firn and air). Nothing to do with (much) higher levels many millions of years ago, when geological conditions were quite different.
As there is a rather linear correlation between temperature and CO2 (Vostok, Dome C) of about 8 ppmv/K, there is no reason to expect that the current increase is caused by temperature (which would imply a 12 K increase to explain the 100 ppmv increase of the past 150 years).
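Engelbeen's arithmetic here is simple enough to sketch in a couple of lines:

```python
# Sketch of the comment's arithmetic: at the Vostok/Dome C sensitivity of
# ~8 ppmv per kelvin, the observed ~100 ppmv rise would require a
# temperature increase of:
rise_ppmv = 100.0
sensitivity = 8.0   # ppmv per K
print(rise_ppmv / sensitivity)  # 12.5 K, nothing like the ~0.7 K observed
```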
More in next message…
Sorry for the repeated parts of my message; something went wrong when posting…
Ferdinand Engelbeen says on 6/22/10 at 2:29 pm said,
>> The NH-SH gradient in yearly averages is less than 5 ppmv on a level of 390 ppmv or less than 2%. I call that well-mixed, which is true for 95% of the atmosphere, from the North Pole to the South Pole, including MLO. Well mixed doesn’t imply that at all places on earth at the same time one can find exactly the same levels. But away from huge sources and sinks, within a reasonable mixing time, the levels are within small boundaries.
And
>>>>IPCC alters its data to make MLO look smooth, and then alters the records at the South Pole and Baring Head to overlap the MLO and to look indistinguishable in the trend line.
>>Sorry, this is a bridge too far. The raw hourly data, not adultered or changed in any way, …
IPCC graphed its processed MLO and Baring Head records in AR4, Figure 2.3. These are not the hourly data. The bridge is one you built.
So while your observations about the value of the hourly data are encouraging, limiting the depth of IPCC’s fraud, they are quite irrelevant. In Figure 2.3, the trends at Baring Head and MLO follow one another within about one line width for the plots. A line width is 0.404 ppm(v), not 5 ppmv. That is more than an order of magnitude less than your figure, and provides an indication of what well-mixed means to IPCC. While IPCC relies heavily on the well-mixed conjecture, it never defines it and only implies it by the graphs that it manufactures.
The overlap of the Baring Head and MLO records is not credible, nor is the smoothness of the MLO record all by itself. These kinds of results do not occur in nature, but are the product of heavy smoothing and “calibrations”, by which is meant data fudging.
You will find an analysis of Figure 2.3 at rocketscientistsjournal.com, SGW, Figure 27. Especially interesting with respect to fraud or slander is part (b) of that figure. IPCC scaled and shifted the graph of delta 13C to parallel the emissions record, and then to conclude that it had a human fingerprint of ACO2 in the CO2 measurements. That should be a criminal offense.
Earlier I brought various IPCC calibrations to your attention. They include interstation calibrations which are “becoming progressively more extensive and better intercalibrated.” IPCC reports do not include its calibration data, its smoothing algorithms for CO2 stations, or its CO2 data reconstruction methods. The CRU documents revealed an IPCC algorithm that adjusted data to look more like the instrument record. IPCC’s results are not credible on their face, and a little investigation reveals an effort to extract trillions of dollars from world governments and to cripple industry. There’s no slander here, but instead a record of a dozen abuses of science and dishonesty.
You wrote,
>> As there is a rather linear correlation between temperature and CO2 (Vostok, Dome C) of about 8 ppmv/K, there is no reason to expect that the current increase is caused by temperature (which should imply a 12 K increase to explain the 100 ppmv increase of the past 150 years).
We’ve had this discussion previously. See rocketscientistsjournal.com, “The Acquittal of Carbon Dioxide”. As shown there, a far better fit than linear or even quadratic to the relationship between CO2 and temperature in the Vostok record is a fit to the solubility curve.
You wrote,
>> The partial pressure of CO2 in water may be a fiction (I don’t think so), but the equilibrium with the air above is measured (for many decades now, continuously, on ships) and is the driving force for uptake or release of CO2 from/to the air above it. Much more realistic than some theoretical calculation from Henry’s Law which doesn’t take into account other factors than temperature.
To the contrary, Henry’s Law and Henry’s Coefficients also depend on pressure, wind, and salinity. A reasonable conjecture is that the coefficients might also depend on molecular weight, yet a fifth order dependence. And if IPCC’s conjecture were true, it would also depend on ionic concentrations in the solvent. But that is novel physics necessary to make AGW look feasible. You can’t escape from the reality of CO2 solubility in sea water by discarding it as theoretical.
Moreover, the solubility curve is evident in data used by IPCC. See rocketscientistsjournal.com, “The Acquittal of Carbon Dioxide”, Figure 21. Another example is IPCC’s Second Order Draft, Figure 7.3.10(a), where the solubility curve was uncovered in the Panel’s attempt to quantify the Revelle buffer factor. In the final version of AR4, IPCC concealed this relationship. See discussion, rocketscientistsjournal.com, Figures 3, 4, and 5. Solubility is not a conjecture to be discarded to justify AGW. It is the engine for the carbon cycle, which IPCC does not get right any more than it does the hydrological cycle.
We have about 90 Gtons of CO2 coming out of the ocean each year. And if we used a mass balance equation, the 120 Gtons now attributed to terrestrial sources might prove to be a misattribution. It wouldn’t be the first IPCC misattribution. That 90 Gtons is not accounted for in the Takahashi diagram. Also, that diagram gives you support for your conjecture that not much is going on through the law of solubility. That natural emission comes out of the water because of the law of solubility. It’s around 15 times man’s emissions.
Solubility would be key in the climate model, if only CO2 were more significant as a greenhouse gas, and if only greenhouse gases were significant to global warming.
Ferdinand Engelbeen says on 6/22/10 at 5:03 am said,
>> But your fastest time constant is wrong, as it is based on the residence time, not the decay time needed to reduce any excess CO2 above (dynamic) equilibrium.
I commented on this sentence earlier, but I would like to add a few items.
First, the decay time and the residence time are not directly comparable. Decay time refers to the time for a mass to reduce to a stated level. For our purposes here, the decay is exponential. Time in the exponent is made dimensionless by multiplying by what is called the decay constant, or it is divided by one of the characteristic times: half-life (scaled by dividing by ln(2)), e-folding time, average lifetime, or turnover time. The average lifetime is the average time before dissolution of the molecules, and it is equal to the e-folding time. Both are equal to the reciprocal of the decay constant. Decay time is a function of time, while all the other terms are constants, characteristic of the process.
Second, decay time is the time to reduce the mass to a stated level, and has nothing to do with the state of its surroundings. Neither the atmosphere nor the decaying CO2 is ever in equilibrium, nor is such an assumption needed.
You wrote,
>> Thus the turnover time (which is based on the 150 GtC exchange rate) has simply nothing to do with the decay time of an extra mass of CO2 brought into the atmosphere (which is based on the 3 GtC/year removal rate), which is much longer than the turnover time.
This is not true. Turnover time, as you agreed, is defined as T = M/S, where M is the mass in the reservoir and S is the removal rate. For the process of dissolution, S = k*M, where k is the decay constant and M is the instantaneous mass. So if we write these parameters as functions of time, T(t) = M(t)/S(t) = M(t)/(k*M(t)), then T(t) = 1/k. Turnover time is a constant, and it is equal to the residence time, etc. The key assumption in this model is that the rate of removal, S, is proportional to the instantaneous mass. This is the assumption that leads to the exponential as the unique functional solution.
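The algebra in this paragraph can be checked numerically. A minimal sketch with an arbitrary decay constant:

```python
# Sketch of the identity argued above: if removal is proportional to the
# instantaneous mass, S = k*M, then the turnover time T = M/S collapses
# to the constant 1/k, regardless of the reservoir mass.
k = 0.2  # arbitrary decay constant, 1/years (illustrative)

def turnover_time(mass):
    removal = k * mass        # first-order removal, S = k*M
    return mass / removal     # T = M / S

# the same T for very different reservoir masses:
print(turnover_time(100.0), turnover_time(750.0))  # both 5.0, i.e. 1/k
```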
Turnover time is not based on any numerical exchange rate or numerical removal rate. Also we should note that what is being discussed here is the rate of removal of a pulse of CO2 in the atmosphere. It is not the mass of CO2 in the atmosphere, because old pulses are being removed while new pulses are being added. To solve this problem, a mass balance analysis is required. Some writers have jumped from the mass of a pulse to the mass in the atmosphere, which is an unwarranted change in parameters that leads to the wrong conclusion about turnover time.
As I wrote above, the time constants, whether called residence times, average lifetimes, or e-folding times, that I computed were 1.5 years using IPCC data, 3.2 years using University of Colorado data, or 4.9 years using Texas A&M data. IPCC provides the following:
>> The CO2 response function used in this report is based on the revised version of the Bern Carbon cycle model used in Chapter 10 of this report (Bern2.5CC; Joos et al. 2001) using a background CO2 concentration value of 378 ppm. The decay of a pulse of CO2 with time t is given by
>> a(t) = a_0 + sum(a_i * exp(-t/tau_i), i = 1..3)
>>Where a_0 = 0.217, a_1 = 0.259, a_2 = 0.338, a_3 = 0.186, tau_1 = 172.9 years, tau_2 = 18.51 years, and tau_3 = 1.186 years. AR4, Table 2.14, p. 213, fn. a.
IPCC’s fastest time constant is 1.186 years, even faster than mine.
This formula appears to have been from Archer (2005). AR4, ¶7.3.1.2, p. 514. The following associations seem reasonable: the fastest (1.186 years) refers to the surface layer as a reservoir in DIC form, the middle value (18.51 years) refers to intermediate water and carbon consumption by photosynthesis, and the slowest (172.9 years) refers to production of calcareous shells in the deep ocean. These are the solubility pump, the organic carbon pump, and the calcium carbonate pump. IPCC diagrams them in Figure 7.10, p. 530, a figure with several errors (e.g., backward arrows, “solution pump”). Functionally, however, the organic carbon pump and the calcium carbonate pump should not connect to the atmosphere, but instead to the surface layer. These chemical processes need to connect to ionized CO2, not gaseous CO2.
The formula is not feasible. It provides that 21.7% of atmospheric CO2 remains in the atmosphere forever. It then provides 18.6% to solubility, 25.9% to photosynthesis, and 33.8% to sequestration. No physical mechanism exists by which these four portions might be directed to the various pumps. What will happen is that the solubility pump will drain the atmosphere (neglecting, as always, replenishment) of any slug of CO2 at the rapid time constant until it is effectively reduced to zero. This won’t starve the other pumps, assuming they are correctly connected to the surface layer.
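The quoted Bern response function is easy to evaluate directly, which also makes visible the 21.7% floor (a_0) that the comment objects to:

```python
import math

# The Bern response function quoted above, with the AR4 Table 2.14
# coefficients. The airborne fraction of a pulse after t years is:
#   a(t) = a_0 + a_1*exp(-t/tau_1) + a_2*exp(-t/tau_2) + a_3*exp(-t/tau_3)
A0 = 0.217
A = [0.259, 0.338, 0.186]
TAU = [172.9, 18.51, 1.186]

def airborne_fraction(t):
    return A0 + sum(a * math.exp(-t / tau) for a, tau in zip(A, TAU))

print(round(airborne_fraction(0.0), 3))    # 1.0: the whole pulse at t=0
print(round(airborne_fraction(100.0), 3))  # ~0.36 after a century
print(round(airborne_fraction(1e6), 3))    # 0.217: the permanent floor
```

The coefficients sum to 1 at t = 0, and the a_0 term never decays, which is the "remains in the atmosphere forever" behavior criticized above.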
Jeff Glassman
In my own thread about CO2 over at the Air Vent I added the various threads on the subject carried here more recently at WUWT.
http://noconsensus.wordpress.com/2010/03/06/historic-variations-in-co2-measurements/
You may find many of the comments of interest, but one in particular said that the CO2 outgassing (and presumably also the absorption) was some 8 ppmv per 1 degree C change in the temperature of the ocean.
Does that roughly equate to your 90Gton estimate?
Tonyb
Jeff Glassman says:
June 23, 2010 at 10:41 pm
>> IPCC graphed its processed MLO and Baring Head records in AR4, Figure 2.3. These are not the hourly data. The bridge is one you built.
>> So while your observations about the value of the hourly data are encouraging, limiting the depth of IPCC’s fraud, they are quite irrelevant. In Figure 2.3, the trends at Baring Head and MLO follow one another within about one line width for the plots. A line width is 0.404 ppm(v), not 5 ppmv. That is more than an order of magnitude less than your figure, and provides an indication of what well-mixed means to IPCC.
>> The overlap of the Baring Head and MLO records is not credible, nor is the smoothness of the MLO record all by itself. These kinds of results do not occur in nature, but are the product of heavy smoothing and “calibrations”, by which is meant data fudging.
Again these are a lot of (false) accusations, for which you don’t give the slightest proof. The IPCC didn’t alter any data; the NOAA (and many others) sampled and filtered the data, by rejecting any data which might be contaminated by local sources. If you are interested in volcanic outgassing, measure at the mouths of the gas vents. If you are interested in what vegetation takes up from the atmosphere, measure in the middle of the vegetation. If you are interested in background data, measure in the trade winds and don’t use data which are contaminated by the previous ones.
The criteria for rejecting data for inclusion in averages are clear and predefined, not made up after the results are known. But again, because you may not like it, in bold: the average and trend from all data, including all outliers, and the average and trend of the selected data only, excluding all outliers, differ by no more than a few tenths of a ppmv. That includes the seasonal trend in the region where the station is located. So much for the “fraud of the IPCC”.
The difference between MLO and Baring Head is a few ppmv, if you take the MLO averages through the seasonal trend. The hourly data from Baring Head are not online, but those from Samoa and the South Pole are, which span the SH from near the equator to the South Pole.
I have plotted the raw hourly data together with the selected (according to the pre-defined criteria) daily averages for Mauna Loa and Samoa:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_raw_select_2008.jpg
As you prefer the real scale: here the differences on full scale:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_raw_select_2008_fullscale.jpg
The 2008 average for the raw hourly data of Samoa is 384.00 ppmv; for the selected daily data it is 383.91. For Mauna Loa: raw 385.34, selected 385.49.
I suppose that I may say that at least for Mauna Loa and Samoa (but also for all other baseline stations) the atmosphere is very well mixed…
>> IPCC scaled and shifted the graph of delta 13C to parallel the emissions record, and then to conclude that it had a human fingerprint of ACO2 in the CO2 measurements. That should be a criminal offense.
I know your aversion to non-full-scale graphs, but even on full scale there is an extremely good correlation between emissions, the increase in the atmosphere and ocean surface, and (inversely) the d13C levels. Which points to a human fingerprint…
>> Earlier I brought various IPCC calibrations to your attention. They include interstation calibrations which are “becoming progressively more extensive and better intercalibrated.” IPCC reports do not include its calibration data, its smoothing algorithms for CO2 stations, or its CO2 data reconstruction methods.
Again, you are accusing the IPCC of manipulation, although they have nothing to do with CO2 measurements. The calibration is done by NOAA and a lot of other laboratories from different organisations in different countries. That is about the calibration of the calibration gases and the equipment used. I know that it is scientific practice to do an intercalibration, so that the different equipment used in the world gives the same value for the same level of CO2 in a calibration gas. That has nothing to do with manipulation or biasing of the data to show similar results.
Further, as already delivered, the calibration and calculation procedures and the selection criteria for CO2 at MLO (and all baseline stations) are fully described in detail at:
http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html
More later…
Further discussion…
>> To the contrary, Henry’s Law and Henry’s Coefficients also depend on pressure, wind, and salinity. A reasonable conjecture is that the coefficients might also depend on molecular weight, yet a fifth order dependence. And if IPCC’s conjecture were true, it would also depend on ionic concentrations in the solvent. But that is novel physics necessary to make AGW look feasible. You can’t escape from the reality of CO2 solubility in sea water by discarding it as theoretical.
Wait a minute, the graphs you use only show the temperature – CO2 relationship, according to Henry’s Law. No trace of other influences. If you include the other factors, we simply agree. And have you never seen a pH-CO2 curve for seawater? A very small change in pH (whatever the source) has an enormous effect on CO2 solubility, which makes temperature a bleak substitute. That is even used by some fellow sceptics to prove that a fast release from the oceans is possible. Unfortunately for them, the uptake works the other way around…
We have about 90 Gtons of CO2 coming out of the ocean each year. And if we used a mass balance equation, the 120 Gtons now attributed to terrestrial sources might prove to be a misattribution. It wouldn’t be the first IPCC misattribution. That 90 Gtons is not accounted for in the Takahashi diagram. Also, that diagram gives you support for your conjecture that not much is going on through the law of solubility. That natural emission comes out of the water because of the law of solubility. It’s around 15 times man’s emissions.
Again you are confusing parts of a continuous or seasonal exchange with the net effect of an extra addition. The equator-to-poles route is a continuous stream of CO2, while the mid-latitudes of the oceans show a seasonal exchange. Thus while there is a 9 or 90 or 900 GtC exchange over a year between oceans and atmosphere (and a similar one between vegetation and atmosphere), that is not of the slightest interest for the mass balance. Only the difference at the end of the year is of interest, and that is some (measured) 3.5 +/- 3 GtC/year sink capacity for all natural flows together, whatever the individual flows may be.
The increase of CO2 in the atmosphere is simply a direct function of the emissions (extremely linear!) and of temperature. The first adds CO2 to the total mass, increasing the difference with the “normal” equilibrium; the latter shifts the base equilibrium. That has nothing to do with the capacity of CO2 as a greenhouse gas, which is a completely different, unrelated discussion.
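The mass-balance point can be sketched with round numbers (the 8 and 4 GtC/yr figures below are illustrative approximations chosen for this sketch, not measurements):

```python
# Illustrative mass balance: gross exchange cancels out; only the
# difference between all inflows and outflows changes the atmospheric total.
emissions = 8.0   # GtC/yr human emissions (round illustrative figure)
increase = 4.0    # GtC/yr observed atmospheric increase (round figure)
net_natural = increase - emissions  # net of ALL natural flows combined
print(net_natural)  # -4.0: nature is a net sink of ~4 GtC/yr

# Whatever the size of the gross exchange, the balance is unchanged:
for gross in (9.0, 90.0, 900.0):
    natural_in = gross
    natural_out = gross - net_natural   # gross + 4: slightly more out than in
    assert abs(emissions + natural_in - natural_out - increase) < 1e-9
```

The loop makes the point of the comment above explicit: the 9/90/900 GtC gross flow drops out of the balance; only the net difference matters.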
Jeff Glassman says:
June 24, 2010 at 12:07 am
I am completely at a loss with your definitions of turnover and decay.
As far as my English goes, turnover or residence time is about the possibility that a single molecule (whatever its origin) in the atmosphere is captured or released by another reservoir, both ways. That is ruled by the exchange rate, in this case about 150 GtC on the 800 GtC present in the atmosphere. You say:
This is not true. Turnover time, as you agreed, is defined as T = M/S, where M is the mass in the reservoir and S is the removal rate.
But S is the exchange rate, both ways, not the “removal” rate.
Decay rate indeed is defined as you describe. And we agree that this refers to a return to a stated level (in this case the stated level is temperature dependent). I called that a “dynamic” equilibrium, but have no problem with a “stated” level.
Thus we are discussing the decay rate of an extra pulse (or continuous pulses) of human emissions of CO2 here. The decay rate according to you is somewhat less than 5 years, but have a better look at the IPCC decay rates you are using:
IPCC’s fastest time constant is 1.186 years, even faster than mine.
Yes, but that is only for 18.6% of the extra CO2, according to the Bern model. The rest is absorbed in other compartments at much slower rates:
33.8% with a time constant of 18.51 years
25.9% with a time constant of 172.9 years
and 21.7% of the initial pulse remains in the atmosphere forever!
Thus according to the Bern model, only 18.6% of the extra CO2 will be removed fast via the fastest decay rate. That may be the upper ocean layer and/or vegetation; both are limited in capacity, so that partitioning may be defensible. But the other reservoirs, especially the deep ocean, are not so limited under current conditions.
I don’t support the IPCC model, as it may only be right for extreme amounts of emissions. At the current emission level, there is hardly any influence on deep ocean CO2 levels, thus the deep ocean uptake and return is not influenced at all. This leads to a much more realistic decay rate: see the paper by Peter Dietze at the late John Daly’s website: http://www.john-daly.com/carbon.htm
I haven’t looked at the other decay rates you mention, as I have no direct reference for them.
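For what it is worth, the four-term pulse response quoted above can be evaluated directly. This is a sketch using the coefficients exactly as cited in this thread, not an endorsement of the Bern model:

```python
import math

# Airborne fraction of a CO2 pulse under the quoted Bern-type partitioning:
# (fraction, time constant in years); None marks the permanent fraction.
terms = [(0.217, None), (0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]

def airborne_fraction(t):
    """Fraction of an initial pulse still in the atmosphere after t years."""
    return sum(a * (1.0 if tau is None else math.exp(-t / tau))
               for a, tau in terms)

for t in (0, 5, 50, 500):
    print(t, round(airborne_fraction(t), 3))
# Even after 500 years, more than the permanent 21.7% remains.
```

With these coefficients, roughly 73% of a pulse is still airborne after 5 years, which is the point made above: the fast 1.186-year constant applies only to its 18.6% share.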
Tonyb on 6/24/10 at 4:24 am asked,
>>You may find many of the comments of interest, but one in particular said that the CO2 outgassing (and presumably also the absorption) was some 8 ppmv per 1 degree C change in the temperature of the ocean.
>>Does that roughly equate to your 90Gton estimate?
No. As an initial point of order, the 90 GtC/yr is IPCC’s estimate, not mine.
90 GtC/yr = 90 PgC/yr
Stoichiometry: 12 gC = 44 gCO2
Units: 31557600 sec = 1 yr
Units: 10^15 = 1 Peta (P)
Uptake Temperature: 0ºC
Outgas Temperature: 30ºC (max, nominal)
Uptake solubility: 0.3346 gCO2/100gH2O (rocketscientistsjournal.com, “The Acquittal of CO2”, Figure 6)
Outgas solubility: 0.1257 gCO2/100gH2O (id.)
Solubility, net: 0.2089 gCO2/100gH2O
Units: 1000 g = 1 kg
Density H2O: 1000 kg/m^3
Units: 1 Sv = 10^6 m^3/sec
Result: 90 PgC/yr ~ 5.091 Sv at 30ºC, 5.006 at 29ºC.
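The conversion can be reproduced step by step. This sketch uses only the solubility figures listed above; it lands on the 29ºC value, which suggests the quoted 5.091 Sv at 30ºC used slightly different solubility inputs:

```python
GTC_PER_YR = 90e15            # g of carbon per year (90 PgC/yr)
CO2_PER_C = 44.0 / 12.0       # g CO2 per g C (stoichiometry above)
SOL_NET = 0.2089 / 100.0      # g CO2 per g H2O (0 C uptake minus 30 C outgas)
RHO_H2O = 1000.0              # kg H2O per m^3
SEC_PER_YR = 31557600.0
SV = 1e6                      # m^3/s per Sverdrup

co2_g_per_yr = GTC_PER_YR * CO2_PER_C          # g CO2 outgassed per year
water_g_per_yr = co2_g_per_yr / SOL_NET        # g of water needed to carry it
water_m3_per_s = water_g_per_yr / 1000.0 / RHO_H2O / SEC_PER_YR
print(round(water_m3_per_s / SV, 3))           # ~5.006 Sv
```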
This is in the ballpark of the wide range of values for the THC, aka the MOC. For example, an estimate for the bottom flow from the Antarctic is 4.3 Sv. Gent, PR, “Will the North Atlantic Ocean thermohaline circulation weaken during the 21st century?”, Geo.Phys.Res.Ltrs., vol. 28, no. 6, pp. 1023-1026, 3/15/01, p. 1024. Is there a similar number for the bottom flow from the Arctic?
Other numbers in Gent that pop out for different conditions are 29 ± 7 Sv, 20 Sv, 17-18 Sv, and 15 Sv. Those high numbers could reasonably be the result of minor branches of the THC surfacing around the globe at much lower temperatures than the Equatorial estimate of 30ºC. The figure of 5.2 Sv should be read as an effective THC for the purposes of estimating the atmosphere/ocean flux.
The outgassing should drop from 90 PgC/yr to 88.5 PgC/yr for a 1ºC drop in SST at the Equator. This is a flow rate change of 1.68%. If that change applied to the nominal 385 ppm at MLO today, the drop would be 6.45 ppm, close enough to the 8 ppmv/ºC. A more direct comparison would be between the change in outgassing and anthropogenic emissions, ACO2.
Lowering the effective temperature at the Equator raises the inferred THC flow rate. At a flow rate of 10 Sv and 90 PgC/yr, the effective temperature is 10.3ºC and the sensitivity is about 7 PgC/ºC, the equivalent of the ACO2 estimate. The solution to the problem is robust, able to accommodate a wide range of initial conditions or operating points.
The conclusion is that IPCC’s estimate of 90 PgC/yr is consistent with Henry’s Law.
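The sensitivity arithmetic above, spelled out with the round inputs as given (they yield 1.67% and 6.42 ppm, a hair under the quoted 1.68% and 6.45 ppm, presumably a rounding difference):

```python
# Fractional change in outgassing for a 1 C drop in SST at the Equator,
# and the equivalent ppm drop at the nominal MLO concentration.
outgas_30C = 90.0   # PgC/yr
outgas_29C = 88.5   # PgC/yr
frac = (outgas_30C - outgas_29C) / outgas_30C
print(round(100 * frac, 2))        # percent change per degree C

mlo_ppm = 385.0
print(round(mlo_ppm * frac, 2))    # ppm equivalent, vs the ~8 ppmv/C figure
```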
Further…
As I wrote above, the time constants, whether called residence times or average life times or e-folding times, that I computed were 1.5 years using IPCC data, 3.2 years using University of Colorado data, or 4.9 years using Texas A&M data. IPCC provides the following:
Here again there is confusion: the 1.5 years of the IPCC is a decay rate of a pulse of CO2 in the atmosphere into the fastest (but limited) receiving reservoir. The Colorado and Texas A&M data clearly are about a residence time, which has nothing to do with a decay rate…
Ferdinand Engelbeen says on 6/24/10 at 5:03 am said,
You are confusing what IPCC has done in its Reports with your experience with laboratory data. The two have nothing to do with one another. I have specifically referenced the troublesome data reductions by IPCC, and you ignore them to go back to data culling and calibrations in the laboratories around the world. Your observations about the laboratory data are irrelevant. I asked you to consider the calibration that occurs after the laboratory data are prepared, and you ignore the request in order to reopen immaterial considerations.
You invite me to re-examine laboratory data. At one time I tried, and gave up. I was especially interested in the wind corrections, but found no wind data. Also the volume of the data was immense, and not suitable for desktop operations. Regardless, the laboratory data are irrelevant to IPCC’s fraudulent data reductions. What might be of interest are the calibrations by which IPCC reduced the laboratory data to its charts. However, the fraud is evident even to a layman without such minuscule details, and a viable alternative for global temperatures is on the table.
The graphs show only temperature because it is the dominant, first order effect. That does not mean that minor, second order and lower effects do not exist.
I have indeed seen a pH-CO2 curve for seawater. I believe you are referring to the Bjerrum plot. That plot is the solution to the stoichiometric equations of equilibrium. It appears in the Zeebe & Wolf-Gladrow papers, which IPCC relied upon without showing the Bjerrum plot. This reliance is one of IPCC’s fatal errors in its modeling. See rocketscientistsjournal.com, “IPCC’s Fatal Errors”. That the surface layer might be in equilibrium so that those equations would apply is preposterous. I discussed this in my post of 6/20/10 at 2:16 pm, above, which you seem to have ignored in order to persist with an invalid model. Modeling the surface layer as being in equilibrium is a fatal error.
You say “S is the exchange rate, both ways, not the ‘removal’ rate.” I introduced the turnover time as defined by IPCC, quoting it in full on 6/21/10 at 2:46 pm. You accurately quoted me and it in your response on 6/22/10 at 2:46 pm. The definition specifies S to be “the total rate of removal S”, not the exchange rate.
I suspect that in retracting the definition and inserting an exchange rate you might be thinking about the mass of CO2 in the atmosphere, subject to many additions and removals, and not the mass of a pulse of CO2 added to the atmosphere. This is a change of parameters that leads to an incorrect analysis. The decay of a pulse is modeled a priori as exponential. Nothing like that applies to the concentration of CO2, or any species of CO2, in the atmosphere.
I have not accused IPCC of manipulating data. I accuse it of outright fraud in its reports on Anthropogenic Global Warming. I rely not on a simple, single error, nor on an alternative data set, nor on an alternative global warming model. Instead I rely on a raft of abuses of honesty, of science, and of the scientific method in IPCC Third and Fourth Assessment Reports.
I disagree with your discussion about the “9 or 90 or 900 GtC exchange”. The outgassing of 90 GtC is ancient water, saturated with CO2 at the partial pressure about a millennium past and at a temperature of about 0ºC. It is outgassed dominantly at the current SST at the Equator, where the amount released depends on the solubility curve at that current SST. As I discussed above in response to Tonyb, the amount outgassed is dependent on the current temperature in an amount of the same order of magnitude as the estimated fossil fuel emissions.
One of IPCC’s fatal errors is to model what it admits is a “highly nonlinear” system (a meaningless phrase – something is either linear or it is not; likewise, a system is in equilibrium or it is not) by the sum of two parts, the natural carbon cycle and an anthropogenic carbon cycle. This is the radiative forcing paradigm. Because the system is nonlinear, the total response is not equal to the sum of its responses to individual forcings applied separately. This applies nowhere so much as it does in outgassing. It is nonlinear, being inversely proportional to the partial pressure of total CO2 in the atmosphere. This can be solved with a mass balance analysis, but the result cannot be assumed to be the response to natural outgassing plus the response to fossil fuel burning or an assumed ACO2 cycle. The mass balance analysis omitted by IPCC is necessary.
IPCC’s equation is cited to the Bern model and to Archer (2005), but it is not quoted. I haven’t bothered to see whether IPCC’s equation is faithful to any reference. That would be a magnificent waste of time because the equation is pure nonsense. There is no mechanism in the ocean by which the four separate processes have their separate reservoirs. I explained this in detail today, 6/24/10, at 12:07 am. You repeat the nonsense again in your post of 6/24/10 3:12 pm, citing my 12:07 post, but totally ignoring my criticism.
I also wrote in the 12:07 post that the terms residence time, e-folding time, turnover time, and average lifetime are all the same. Now today, 6/24/10, at 3:43 you ignore what I have written and plow ahead with the same distinction without a difference. What you call a “(limited) receiving reservoir” appears to be the non-existing partition in IPCC’s formula.
By the way, the IPCC, Colorado and Texas A&M data are about fluxes, not residence times or any of the other characteristic process constants. The computation of the two sets of three parameters for those sources is entirely mine, as is the name for my results.
A goodly number of posts back, dialog stopped working. Time can be better spent than responding to the nonresponsive.
Jeff Glassman says:
June 24, 2010 at 4:44 pm
You are confusing what IPCC has done in its Reports with your experience with laboratory data. The two have nothing to do with one another.
As said repeatedly, the IPCC doesn’t hold, manipulate or reduce the CO2 data. The CO2 data are measured at some 10 “baseline” stations under NOAA, including Mauna Loa, some 60 other “background” stations at different places, under different organisations and some 400 stations over land, used to measure local/regional CO2 fluxes. Data selection, averaging over days, months and years is done by these organisations, not the IPCC. The IPCC only uses the (selected) data delivered by the other organisations, they don’t change them in any way.
No matter what the IPCC did do on other items, the CO2 data are not filtered, reduced or manipulated, fraudulent or not, by the IPCC. If you have proof of otherwise on this item only, please provide it.
The only way that after-the-fact recalculation is necessary is when an intercalibration of apparatus and/or calibration gases shows a deviation. Then all original voltage readings are reused with the new calibration values. Again that has nothing to do with data manipulation to shift the data from different places/organisations towards each other. And the IPCC has nothing to do with this all.
That the surface layer might be in equilibrium so that those equations would apply is preposterous.
We may discuss that at length, but the calculations of pCO2 and the real-time measurements do confirm that the calculations, based on temperature, DIC, pH,… are right. Thus while your calculation based on temperature only may be a good approximation, Feely’s, based on pCO2 and (non)equilibrium, is better.
From the IPCC:
Turnover time (T) (also called global atmospheric lifetime) is the ratio of the mass M of a reservoir (e.g., a gaseous compound in the atmosphere) and the total rate of removal S from the reservoir: T = M / S.
Indeed right, although “removal” is a tricky word in this discussion. One may just as well replace “total rate of removal S from the reservoir” in this definition with “total rate of emission S to the reservoir”, as both emissions to and removals from the reservoir are (near) equal. Or simpler: “total rate of exchange with another reservoir”. That is the point: although the flow is huge, it has no effect on the total mass in the reservoir, except if there is a discrepancy between the inflow and outflow.
The Colorado university description makes it very clear:
note that since in fluxes equal out fluxes, the RT is the same relative to the sum of the out fluxes or simply the throughflux.
There is no 90 GtC pulse going into the atmosphere, nor a 92 GtC negative pulse out of the atmosphere. There is only a continuous and seasonal (relatively small) flow which emits CO2 at one side (or season) and absorbs CO2 at the other side (or season). Integrated over a year, that accounts for a 90 GtC exchange, but only 2 GtC per year is the net amount which is really removed from the atmosphere.
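The distinction can be put in numbers with the round figures used in this thread (800 GtC in the atmosphere, 150 GtC/yr gross exchange, 2 GtC/yr net uptake):

```python
M = 800.0         # GtC in the atmosphere (round figure used above)
exchange = 150.0  # GtC/yr gross two-way exchange with other reservoirs
net_removed = 2.0 # GtC/yr net removal from the atmosphere

turnover = M / exchange    # mean residence time of a molecule
print(round(turnover, 2))  # ~5.33 years, set by the gross exchange
# The mass balance, by contrast, sees only the small net difference:
print(round(100 * net_removed / exchange, 1))  # net is ~1.3% of the gross flow
```

The turnover time T = M/S comes out near 5.3 years however S is labeled, while the annual change in mass is governed by the 2 GtC/yr difference alone.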
As I discussed above in response to Tonyb, the amount outgassed is dependent on the current temperature in an amount of the same order of magnitude as the estimated fossil fuel emissions.
That may be right for the first year: a drop of 1 K at the equator gives a decrease in outflow of around 8 GtC. Without human emissions, this would give a drop in atmospheric CO2 of about 1%, assuming that the poles remain the same sink. But as the pCO2 pressure in the atmosphere drops by 1% after a year, the outflow of the oceans at the equator increases and the inflow at the polar oceans decreases. Until a new equilibrium is found at about 8 ppmv less (for an average ocean temperature drop of 1 K), according to the Vostok ice core…
Thus after a few years, the new fluxes out and in are again equal to each other at a different level of CO2 in the atmosphere. Contrary to this, the human emissions go on, year by year, directly into the atmosphere. Part of the added mass is absorbed by oceans and vegetation, part remains in the atmosphere (again as mass, not as “anthro” CO2).
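The relaxation described above can be sketched as a toy feedback loop. The 8 ppmv equilibrium shift is the Vostok-based figure from the comment; the 3-year relaxation time and the 390 ppm starting point are arbitrary assumptions for illustration:

```python
ppm = 390.0           # starting atmospheric CO2 (illustrative)
target = ppm - 8.0    # new equilibrium after a 1 K ocean cooling
tau = 3.0             # yr, assumed relaxation time (illustrative)

for year in range(30):
    # net ocean flux is proportional to the remaining disequilibrium
    ppm += (target - ppm) / tau

print(round(ppm, 1))  # settles ~8 ppmv lower, at 382.0
```

After a few relaxation times the in- and out-fluxes balance again at the lower level, which is the "new fluxes out and in are again equal" point above.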
It is nonlinear, being inversely proportional to the partial pressure of total CO2 in the atmosphere. This can be solved with a mass balance analysis, but the result cannot be assumed to be the response to natural outgassing plus the response to fossil fuel burning or an assumed ACO2 cycle.
There are several mass balances in use, including by the IPCC. The IPCC doesn’t assume a separate aCO2 cycle (as mass, but they do for isotope changes), as all emissions simply are mixed into the natural CO2 cycle. And of course, if one adds something extra to a cycle where inputs and outputs are (near) equal, that influences both the inputs and the outputs, as physics dictate.
About the Bern model: you can’t say that the model is wrong (as I do too), and at the same time use the fastest decay rate as proof for your thesis of a rapid decay… The IPCC also defines the fastest rate only for a portion of the pulse, not the whole pulse.
I also wrote in the 12:07 post that the terms residence time, e-folding time, turnover time, and average lifetime are all the same.
They are all the same and have nothing to do with a decay time (except that a decay is also e-folding) of an extra CO2 pulse in the atmosphere. That is what you don’t understand.
By the way, the IPCC, Colorado and Texas A&M data are about fluxes, not residence times or any of the other characteristic process constants. The computation of the two sets of three parameters for those sources is entirely mine, as is the name for my results.
Either you have your own definition of residence time, or you haven’t read the Colorado course. They look at the through fluxes (in and out) to define the RT (residence time), not any word about decay rates. See:
http://www.colorado.edu/GeolSci/courses/GEOL1070/chap04/chapter4.html
Indeed it is difficult to have a good discussion with somebody who has already made up his mind…
Re Ferdinand Engelbeen on 6/24/10 at 4:44 pm,
I am only interested in IPCC’s model. I don’t care much anymore what the laboratory data says. IPCC did not present the laboratory data.
I became discouraged because the laboratory that discarded data for an unfavorable wind vector did not record the wind vector! An alternative to the terrestrial biology cause for the seasonal effects in CO2 concentration is the seasonal wind. Because of the laboratory failure to record wind data, the issue cannot be resolved.
Data reduction includes going from laboratory data to graphs like the MLO/South Pole and MLO/Baring Head overlays previously cited for you, but which you chose to ignore. These are reductions by IPCC. They do not have the appearance of legitimate data. As IPCC has said, and you ignore, intercalibration is used to bring the stations into agreement. Also, the individual records have lost the expected real world variability. And IPCC conceals the calibration values and techniques applied.
Are you suggesting that the surface layer is, as IPCC claims, in equilibrium?
The changes you suggest to the definition of S are unacceptable. The change of parameters changes the concept of the formula, and its validity.
Your defense against a 90 GtC or 92 GtC pulse is meaningless, since I made no such claim, and you give no reference where someone else made such a claim. The parameters are 90 GtC/yr and 92 GtC/yr. These are fluxes, not pulses. I admire you for your language skills, but you can’t hide behind hypothetically limited English for the argumentative willy-nilly changing of units and parameters. You need to stick to names and definitions.
A drop in “pCO2 pressure in the atmosphere” does not cause a decrease in the “inflow at the polar oceans”. It would cause an increase in uptake, in dissolution. This is a consequence of Henry’s Law, which you discount, ignore, or misunderstand.
You say “Until a new equilibrium is found”. You need to define what you mean by equilibrium. It is certainly not thermodynamic equilibrium.
Contrary to your assertion, IPCC does have separate ACO2 and nCO2 cycles. These it details in its carbon cycle figure. The ACO2 values are in red, the nCO2 values are in black. AR4, Figure 7.3, p. 515.
You say “a decay is also e-folding”. This is to get around your improper use of “decay time”, which I criticized. Now you introduce a new term, “decay”, which is neither “decay constant” nor “decay time”, terms used in the previous dialog. Your newest response is meaningless. You cannot claim to be rational and change words in midcourse.
I do not have my own definition of residence time. I have IPCC’s, and I provided it in writing. Should other definitions exist, that would be quite irrelevant to exposing IPCC’s fraud.
My mind is well made up in a whole multitude of respects. What I have done though, and you chose to ignore, is to provide you with full support and evidence for my conclusions.
Jeff Glassman says:
June 25, 2010 at 9:44 am
Re Ferdinand Engelbeen on 6/24/10 at 4:44 pm,
I am only interested in IPCC’s model. I don’t care much anymore what the laboratory data says. IPCC did not present the laboratory data.
This – again – is a false accusation. The IPCC shows the “cleaned” monthly averages, exactly as supplied by the different laboratories.
An alternative to the terrestrial biology cause for the seasonal effects in CO2 concentration is the seasonal wind. Because of the laboratory failure to record wind data, the issue cannot be resolved.
If you really want to resolve such a question, simply ask for the data from the meteorological station at Mauna Loa, next door to the CO2 measurement station. The MLO lab needs to ask them too, as they have no wind speed/direction apparatus. But as the stations at Barrow and all other stations in the NH show the same (even more pronounced) seasonal pattern, including an inverse correlation with d13C variability, it is quite clear that vegetation growth in the NH (more land) is the cause.
Data reduction includes going from laboratory data to graphs like the MLO/South Pole and MLO/Baring Head overlays previously cited for you, but which you chose to ignore. These are reductions by IPCC.
Are you really so hard to convince that the IPCC didn’t invent or alter the CO2 data? The IPCC only used the data, already selected by NOAA, or in the case of Baring Head by CDIAC. The selected laboratory data are not altered by the IPCC in any way. You choose to ignore the measured data, because you don’t like them, in your belief that the data are “too smooth” and thus must be manipulated by the IPCC. Well, look at the raw data and the monthly averages from selected data from Mauna Loa and the South Pole. If you have any indication that the IPCC altered the monthly averages in any way, or that the averaged data don’t fit the raw data, well, then you have a point. Otherwise stop with your allegations. See:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_mlo_spo_raw_select_2008.jpg
The monthly averages are what the IPCC uses in all its graphs. Nothing else.
As IPCC has said, and you ignore, intercalibration is used to bring the stations into agreement. Also, the individual records have lost the expected real world variability. And IPCC conceals the calibration values and techniques applied.
Again false accusations. As repeatedly said and shown with references to the (inter)calibration procedures, which you obviously choose to ignore, intercalibration is common practice in any type of laboratory to assure the correct operation of equipment and the correct value of calibration gases. This has nothing to do with bringing the observations into agreement. The observations are what they are, if you like them or not.
And the IPCC has no business with the calibration and techniques used.
Are you suggesting that the surface layer is, as IPCC claims, in equilibrium?
Yes and no: locally, at the very thin skin, it is; deeper down depends on a lot of constraints like wind speed; and over the total ocean surface: no, as there is a pCO2 gradient which pushes some 2 GtC per year into the oceans.
The changes you suggest to the definition of S are unacceptable. The change of parameters and changes the concept of the formula, and its validity.
Not at all. If the inflow = throughput = outflow, it doesn’t make any difference whether you use the inflow or the outflow, as both represent the exchanges in mass with another reservoir, in this case the oceans. The people of the University of Colorado show that both may be used. That doesn’t change the concept, nor its validity.
Your defense against a 90 GtC or 92 GtC pulse is meaningless, since I made no such claim
Not directly, but if I may cite Jeff Glassman:
So IPCC attributes all the observed rise in CO2 that has accumulated in the atmospheric during the industrial era to ACO2. The 119.6 PgC/yr from terrestrial sources, the 90.6 PgC/yr from the ocean do not accumulate.
According to you, contrary to the IPCC, all these flows do accumulate, thus form a “pulse” of extra CO2, for which the 8 GtC/yr from humans is negligible?
A drop in “pCO2 pressure in the atmosphere” does not cause a decrease in the “inflow at the polar oceans”. It would cause an increase in uptake, in dissolution.
Novel physics here? If the pCO2 in the atmosphere drops, the pressure difference between the atmosphere and the oceans decreases, thus less goes into the oceans for the same cold temperature (I worked some time in a cola bottling plant, you know).
With “equilibrium” I mean an equilibrium between CO2 pressure in the atmosphere and release of CO2 in the tropics (according to Henry’s Law or not) and absorption near the poles. If the temperature drops near the equator (or oceanwide), do you agree that the consequence is that about 8 ppmv less CO2 will be left in the atmosphere for each K drop in temperature?
Contrary to your assertion, IPCC does have separate ACO2 and nCO2 cycles.
OK, this is the first time that I have looked at that graph. It looks identical to the NASA graph (which I thought it was), except that that one makes no differentiation between nCO2 and aCO2. It looks like a best guess of the partitioning between aCO2 and nCO2 in the total flows, not really separate cycles (except for the emissions of course). I suppose that the IPCC tried to show how much CO2 has increased (as mass) in different compartments as a result of the emissions, but there is certainly not that much aCO2 in the atmosphere (at maximum some 8%) and I don’t think that the flows between oceans and atmosphere increased by 20 GtC/yr due to the emissions…
This is simply bad work.
You say “a decay is also e-folding”. This is to get around your improper use of “decay time”, which I criticized. Now you introduce a new term, “decay”, which is neither “decay constant” or “decay time”, terms used in the previous dialog. Your newest response is meaningless. You cannot claim to be rational and change words in midcourse.
Sorry for the confusion. I only used the word “decay” because the decay of a pulse of extra CO2 and the residence of a 14C pulse (from the atomic bomb testing) both have an e-folding time, or half-life if you want. But they are quite different: the 14C pulse didn’t change the total mass of CO2 and its decrease only depends on the exchange flows (residence time), while the extra CO2 pulse only depends on the difference between in- and outflows. Both have quite different half-life times: the first about 5 years, the second about 40 years.
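The two time scales can be illustrated with a toy one-box integration. All numbers are illustrative: 800 GtC atmosphere, 150 GtC/yr gross exchange (giving a ~5.3-year residence time), and an assumed 40-year adjustment time for excess mass:

```python
M0 = 800.0        # GtC baseline atmospheric mass (illustrative)
exchange = 150.0  # GtC/yr gross exchange; residence time ~5.3 yr
adj_tau = 40.0    # yr, assumed adjustment time for an excess-mass pulse

pulse = 10.0      # GtC added at t = 0
label = excess = pulse
dt = 0.01
for _ in range(int(10 / dt)):               # integrate 10 years
    label -= (exchange / M0) * label * dt   # 14C label leaves with gross outflow
    excess -= (excess / adj_tau) * dt       # extra mass leaves via net sink only

print(round(label, 1), round(excess, 1))    # ~1.5 GtC vs ~7.8 GtC remaining
```

After a decade the labeled molecules are mostly swapped out, while most of the extra mass is still airborne: two different exponentials from two different mechanisms.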
I do not have my own definition of residence time.
A residence time has nothing to do with a decay time for an extra CO2 pulse. That is where you are confused.
What I have done though, and you chose to ignore, is to provide you with full support and evidence for my conclusions.
Of which several are simply wrong…
On 6/21/10 at 4:54 pm, Ferdinand Engelbeen defined laboratory data for us:
>>Before you accuse someone of manipulating the data, please have a look at the (raw) data yourself. These are available on line for four stations: Barrow, Mauna Loa, Samoa and South Pole: ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/ These are the calculated CO2 levels, based on 2 x 20 minutes 10-second snapshots voltages of the cell + a few minutes voltages measured from three calibration gases. Both the averages and stdv of the calculated snapshots are given. These data are not changed in any way and simply give the average CO2 level + stdv of the past hour.
I prefer to call raw data the output when the technician first calibrates transducer outputs, e.g. voltages, into the physical units being measured. That Engelbeen still might refer to hourly averages from the snapshots as raw data is OK for the purposes of discussion here.
But then on 6/25/10, he says,
>>The monthly averages is what the IPCC uses in all its graphs. Nothing else.
>>>>[quoting me] As IPCC has said, and you ignore, intercalibration is used to bring the stations into agreement. Also, the individual records have lost the expected real world variability. And IPCC conceals the calibration values and techniques applied.
>>Again false accusations. As repeatedly said and shown with references to the (inter)calibration procedures, which you obviously choose to ignore, intercalibration is common practice in any type of laboratory to assure the correct operation of equipment and the correct value of calibration gases.
Again, Engelbeen limits the meaning of calibration to the raw data level (his definition or mine) because he discusses calibration in the context of the equipment (transducers) and calibration gases. He ignores the other IPCC calibrations even after being given a baker’s dozen of examples (6/20/10, 1:42 pm).
IPCC says,
>> The longitudinal variations in CO2 concentration reflecting net surface sources and sinks are on annual average typically <1 ppm.
>> Because of the favorable site location, continuous monitoring, and careful selection and scrutiny of the data, the Mauna Loa record is considered to be a precise record and a reliable indicator of the REGIONAL TREND in the concentrations of atmospheric CO2 in the middle layers of the troposphere. Caps added.
Note that the authors consider MLO valid as regional data, and in the trend. They assert no validity with respect to either global concentrations or seasonal variations. IPCC shows how the data plot on top of one another, matching in trends. How did that happen, and who did it? Where is it all published? Is Engelbeen alleging that laboratory technicians did it all on their own?
IPCC says,
>>The high-accuracy measurements of atmospheric CO2 concentration, initiated by Charles David Keeling in 1958, constitute the master time series documenting the changing composition of the atmosphere. These data have iconic status in climate change science as evidence of the effect of human activities on the chemical composition of the global atmosphere. KEELING’S MEASUREMENTS ON MAUNA LOA IN HAWAII PROVIDE A TRUE MEASURE OF THE GLOBAL CARBON CYCLE, AN EFFECTIVELY CONTINUOUS RECORD OF THE BURNING OF FOSSIL FUEL. They also maintain an accuracy and precision that allow scientists to separate fossil fuel emissions from those due to the natural annual cycle of the biosphere, demonstrating a long-term change in the seasonal exchange of CO2 between the atmosphere, biosphere and ocean. Later observations of parallel trends in the atmospheric abundances of the 13CO2 isotope and molecular oxygen (O2) uniquely identified this rise in CO2 with fossil fuel burning. Caps added, citations deleted, AR4, ¶1.3.1 The Human Fingerprint on Greenhouse Gases, p. 100.
IPCC claims in 2007 that C. D. Keeling’s measurement program at MLO is global, while his son, R. F. Keeling, in 2009 says the data are regional. Physics is not on IPCC’s side.
CDIAC provides the following note on its MLO data sheet:
>> Values above represent monthly concentrations adjusted to represent 2400 hours on the 15th day of each month. Units are parts per million by volume (ppmv) expressed in the 2003A SIO manometric mole fraction scale. The “annual average” is the arithmetic mean of the twelve monthly values where no monthly values are missing.
However, the note on its South Pole and Baring Head data, reads
>> Values above are taken from a curve consisting of 4 harmonics plus a stiff spline and a linear gain factor, fit to monthly concentration values adjusted to represent 2400 hours on the 15th day of each month. Data used to derive this curve are shown in the accompanying graph. Units are parts per million by volume (ppmv) expressed in the 2003A SIO manometric mole fraction scale. The “annual average” is the arithmetic mean of the twelve monthly values.
Why are MLO data reduced differently than South Pole and Baring Head data? How can similar data be compared under different rules of data reduction? What exactly is the “curve consisting of 4 harmonics”? And the “stiff spline”? But especially note the “linear gain factor”. This is exactly the factor by which one could “calibrate” the stations to look alike. Is it different for the two stations? Is it a constant or a variable?
R. F. Keeling and S. Piper were IPCC contributing authors for both the TAR and AR4.
To be continued.
Continuing, on 6/25/10, Ferdinand Engelbeen says,
>>>> (quoting me) Are you suggesting that the surface layer is, as IPCC claims, in equilibrium?
>>Yes and no: locally at the very thin skin it is, deeper depends of a lot of constraints like wind speed and over the total ocean surface: no, as there is a pCO2 gradient which pushes some 2 GtC per year into the oceans.
and
>>>>(quoting me) A drop in “pCO2 pressure in the atmosphere” does not cause a decrease in the “inflow at the polar oceans”. It would cause an increase in uptake, in dissolution.
>>Novel physics here? If the pCO2 in the atmosphere drops, the pressure difference between the atmosphere and the oceans decreases, thus less is going into the oceans for the same cold temperature (worked some time in a cola bottling plant, you know).
The yes part of his two-way answer is wrong. Nowhere is the ocean in equilibrium, which is the ultimate state of stagnation. In equilibrium, there are no currents and no heat transfer (to use the redundant term). One cannot even say that something is close to equilibrium. A system either is or it is not in equilibrium. The surface layer is in turmoil, including all thin slices of it.
IPCC urges, and Engelbeen it seems would agree,
>> The air-sea exchange of CO2 is determined largely by the air-sea gradient in pCO2 between atmosphere and ocean. Equilibration of surface ocean and atmosphere occurs on a time scale of roughly one year. Gas exchange rates increase with wind speed and depend on other factors such as precipitation, heat flux, sea ice and surfactants. The magnitudes and uncertainties in local gas exchange rates are maximal at high wind speeds. In contrast, the equilibrium values for partitioning of CO2 between air and seawater and associated seawater pH values are well established (Zeebe and Wolf-Gladrow, 2001; see Box 7.3). Citation deleted, AR4, ¶7.3.4.1 Overview of the Ocean Carbon Cycle, p. 528.
This is not correct. The conclusion from Zeebe, et al., is for a fictional surface layer perpetually restrained to be in equilibrium. That conclusion relies on the stoichiometric equations of equilibrium, and the solution given graphically in the Bjerrum plot. The uptake and outgassing of CO2 is governed by Henry’s Law. Dissolution does not depend on the pressure difference or pressure gradient. Except for the fact that this air-sea exchange model is crucial to justifying AGW, it is a surprising error from a prominent, contributing, PhD professor of geophysics, and from someone who claims credentials as a chemist.
On 6/22/10 at 2:00 pm Engelbeen wrote,
>>>>(quoting me) Actually the partial pressure of a gas in water is a fiction. It is taken to be the partial pressure of the gas in the gas state in contact with the water and in equilibrium with it.
>>The partial pressure of CO2 in water may be a fiction (I don’t think so), but the equilibrium with the air above is measured (for many decades now, continuously, on ships at sea) and is the driving force for uptake or release of CO2 from/to the air above it. Much more realistic than some theoretical calculation from Henry’s Law which doesn’t take into account other factors than temperature.
Just for a moment, he seemed to recognize the fiction of partial pressure of a gas dissolved in solvent. Because of that fiction, and what is taken as the meaning of that partial pressure, the pressure difference and the pressure gradient do not exist.
Engelbeen dismisses solubility, also known as dissolution and Henry’s Law, repeatedly. Here, he dismisses it as if someone had made a “theoretical calculation”. Once again, he refers to something no one said.
More important is that Henry’s Law informs us of the physics involved in a qualitative way, as fundamental as the recognition that balls roll down hill. Dissolution depends on the partial pressure of the gas above the water, and the temperature of the water, and not the reverse of either. Variations in atmospheric pressure over the ocean are rather insignificant. What counts is the temperature of the ocean, which varies greatly from the tropics to the poles.
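The temperature dependence invoked here can be put in numbers. The sketch below uses the Weiss (1974) solubility fit for CO2 in seawater; the coefficients are from that fit, and the printed values are only illustrative of the tropics-to-poles contrast:

```python
import math

def co2_solubility(temp_k, salinity):
    """CO2 solubility K0 in mol/(L*atm), Weiss (1974) fit for seawater."""
    t = temp_k / 100.0
    ln_k0 = (-58.0931 + 90.5069 / t + 22.2940 * math.log(t)
             + salinity * (0.027766 - 0.025888 * t + 0.0050578 * t * t))
    return math.exp(ln_k0)

# Cold polar water dissolves roughly twice as much CO2, at the same
# atmospheric partial pressure, as warm tropical water.
k0_polar = co2_solubility(273.15, 35.0)   # ~0.065 mol/(L*atm) at 0 C
k0_tropic = co2_solubility(298.15, 35.0)  # ~0.029 mol/(L*atm) at 25 C
print(k0_polar, k0_tropic)
```

This is the sense in which ocean temperature, varying greatly from tropics to poles, dominates over the small variations in atmospheric pressure.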
Jeff Glassman,
“The yes part of his two-way answer is wrong. Nowhere is the ocean in equilibrium, which is the ultimate state of stagnation. In equilibrium, there are no currents and no heat transfer (to use the redundant term). One cannot even say that something is close to equilibrium. A system either is or it is not in equilibrium. The surface layer is in turmoil, including all thin slices of it.”
I don’t think you actually understand the definition of equilibrium if you think an equilibrium is a state of stagnation, with no currents and no heat transfer. A system in equilibrium can have currents and heat transfer without being stagnant. What equilibrium means is that all flows balance out and all inputs equal outputs. Rocket scientists should know their thermodynamics…
Re mikelorrey’s misunderstanding of equilibrium, 6/26/10 at 1:39 pm:
>>[W]e shall use the symbols Y and X for [a] pair of independent coordinates. … A state of a system in which Y and X have definite values which remain constant so long as the external conditions are unchanged is called an equilibrium state. Zemansky, M. W., “Heat and Thermodynamics”, McGraw-Hill, Fourth Ed., 1957, p. 5.
Or,
>>When there is no unbalanced force in the interior of a system and also none between a system and its surroundings, the system is said to be in a state of mechanical equilibrium. … When a system in mechanical equilibrium does not tend to undergo a spontaneous change of internal structure, such as a chemical reaction, or a transfer of matter from one part of the system to another, such as diffusion or solution, however slow, then it is said to be in a state of chemical equilibrium … Thermal equilibrium exists when there is no spontaneous change in the coordinates in mechanical and chemical equilibrium when it is separated from its surroundings by a diathermic [“like a thin metal sheet”] wall. In thermal equilibrium, all parts of a system are at the same temperature, and this temperature is the same as that of the surroundings. … When the conditions for all three types of equilibrium are satisfied, the system is said to be in a state of thermodynamic equilibrium; in this condition, it is apparent that there will be no tendency whatever for any change of state either of the system or of the surroundings to occur. States of thermodynamic equilibrium can be described in terms of macroscopic coordinates that do not involve time … . Id., pp. 24-25.
Short form: stagnation.
Zemansky earned an international reputation as the creator of the foundations for teaching thermodynamics, and the work cited is a classic.
Glassman,
I note you misdefined mechanical equilibrium because you are ignoring the system as a system of particles:
“The necessary conditions for mechanical equilibrium for a system of particles are:
(i)The vector sum of all external forces is zero;
(ii) The sum of the moments of all external forces about any line is zero.”
– John L. Synge & Byron A. Griffith (1949). Principles of Mechanics (2nd ed.). McGraw-Hill. pp. 45–46.
I also point out that you completely ignored the concept of dynamic equilibrium:
“A dynamic equilibrium exists when a reversible reaction ceases to change its ratio of reactants/products, but substances move between the chemicals at an equal rate, meaning there is no net change. It is a particular example of a system in a steady state. In thermodynamics a closed system is in thermodynamic equilibrium when reactions occur at such rates that the composition of the mixture does not change with time. Reactions do in fact occur, sometimes vigorously, but to such an extent that changes in composition cannot be observed. Equilibrium constants can be expressed in terms of the rate constants for elementary reactions.”
http://en.wikipedia.org/wiki/Dynamic_equilibrium, citing Atkins, P.W.; de Paula, J. (2006). Physical Chemistry (8th ed.). Oxford University Press. ISBN 0198700725.
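The dynamic-equilibrium definition quoted above can be made concrete with a toy reversible reaction A ⇌ B: both reactions keep running, yet the composition stops changing once forward and reverse rates match, and the product/reactant ratio settles at kf/kr. A minimal sketch (the rate constants here are arbitrary illustration values, not tied to anything in this thread):

```python
def relax_to_equilibrium(a0, b0, kf, kr, dt=0.001, steps=20000):
    """Integrate dA/dt = -kf*A + kr*B for the reversible reaction A <-> B."""
    a, b = a0, b0
    for _ in range(steps):
        net = kf * a - kr * b      # net forward rate; zero at equilibrium
        a -= net * dt
        b += net * dt
    return a, b

a, b = relax_to_equilibrium(a0=1.0, b0=0.0, kf=2.0, kr=1.0)
# At equilibrium both reactions still run, but the NET rate is zero:
# B/A converges to kf/kr = 2, while total matter A+B is conserved.
print(b / a)
```

Whether such a "no net change" state deserves the unqualified word "equilibrium" is exactly the point the two commenters dispute.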
Re mikelorrey’s continuing misunderstanding of equilibrium, 6/26/10 at 2:15 pm:
You misread my post. What I provided was not some personal definition of equilibrium, nor of mechanical equilibrium, that led me astray. They were Mark Zemansky’s. And Zemansky surely subsumed your elaborate vector definition; he just said it simply as “no unbalanced force”. Unbalanced means the vector sum is other than zero, and “no” means in no way, about no line, plane, surface, etc.
As to “dynamic equilibrium”, you have introduced a new term, defined in a special way, for some other application. It is also rather worthless here because it applies only to reversible reactions, which are another idealization altogether. Thermodynamics, oceanography, and climate are not reversible.
Nice try. If you want to win at hip-shooting disses, at quick draw ad hominems, aim at the man.
Zemansky is still standing.
Glassman,
No, he isn’t. Stagnation is zero movement or change, equilibrium is zero NET movement or change. They are completely different things.
Jeff Glassman says:
June 26, 2010 at 1:30 pm
>>I prefer to call raw data the output when the technician first calibrates transducer outputs, e.g. voltages, into the physical units being measured.
Agreed. These are available on simple request, but as they represent many millions of 10-second snapshots per year, they are not directly available online. I have checked a few days of calculations from these raw data, and the hourly averages and stdv are as made available online.
>>He ignores the other IPCC calibrations even after being given a baker’s dozen of examples (6/20/10, 1:42 pm).
The dozen examples have nothing to do with CO2 levels. The IPCC isn’t involved in calibrations or procedures around CO2. I understand that this is difficult to believe, because of the other dozen examples, but it is the truth.
>>How did that happen, and who did it? Where is it all published? Is Engelbeen alleging that laboratory technicians did it all on their own?
With very little effort, the Internet is a great source. It shows that CDIAC/Keeling Sr. was the master brain, beginning it all, and they have published (and still publish) their own independent results on the net and in different papers:
http://cdiac.ornl.gov/trends/co2/contents.htm
Nowadays, NOAA is the leading organisation where the central preparation and checking of calibration gases is done. An interactive plotting site of a lot of data can be found at:
http://www.esrl.noaa.gov/gmd/ccgg/iadv/ with some background, but ftp sites exist to download the (mostly cleaned) data from a lot of sites.
I already gave the link to the raw hourly averages of four stations. Although a heavy load, Excel can handle it.
>>the Mauna Loa record is considered to be a precise record and a reliable indicator of the REGIONAL TREND in the concentrations of atmospheric CO2 in the middle layers of the troposphere.
Yes, all CO2 measurements are local and some are regional, but most important: those far away from direct sources and sinks (in the middle of the oceans, in deserts, including ice deserts, on mountain tops, and at coastal sites with seaside wind) all show very similar averages and trends, differing less than 1% within one hemisphere and less than 2% between the NH and the SH for yearly averages. That represents 95% of the atmosphere. Thus it doesn’t matter whether the IPCC uses Mauna Loa or Barrow or South Pole data, or the average of them. These are near equally “global”; Mauna Loa simply has the longest continuous record, so it is mostly used. See:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends.jpg
All series show near the same trend but with a NH-SH lag.
>>Why are MLO data reduced differently than South Pole and Baring Head data? How can similar data be compared under different rules of data reduction? What exactly is the “curve consisting of 4 harmonics”? And the “stiff spline”? But especially note the “linear gain factor”. This is exactly the factor by which one could “calibrate” the stations to look alike.
That may be a matter of timing: the MLO sheet may be of an older date, but in fact it doesn’t matter. I have plotted the raw data, the unsplined, but selected daily averages and the splined and selected monthly averages from MLO and SPO in one graph:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_mlo_spo_raw_select_2008.jpg
If you disagree with the way that the monthly averages represent the real measurements, I (and probably NOAA) am interested to hear of an alternative method of data reduction.
The “linear gain factor” simply is the year-by-year increase in level: while the seasonal variability is more or less constant, there is more variability in the increase. It is used together with the average curvature of previous years to adjust monthly averages to the middle of the month when only a limited number of days is available for averaging at the beginning or end of the month.
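For what it is worth, the “4 harmonics plus a linear gain” part of that recipe is ordinary least-squares curve fitting, not a station-matching knob. A minimal sketch on synthetic monthly data (trend and seasonal amplitudes are invented for illustration, and the stiff spline is simplified to a straight trend line):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(240) / 12.0                      # 20 years of monthly samples
true_trend = 315.0 + 1.5 * t                   # ppmv; illustrative values
seasonal = 3.0 * np.sin(2 * np.pi * t) + 0.8 * np.cos(4 * np.pi * t)
y = true_trend + seasonal + rng.normal(0.0, 0.2, t.size)

# Design matrix: intercept, linear gain, and 4 annual harmonics.
cols = [np.ones_like(t), t]
for k in range(1, 5):
    cols += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(coef[1])  # recovered linear gain, close to the true 1.5 ppmv/yr
```

The fit simply separates a smooth trend from a repeating seasonal cycle; the “gain” coefficient is whatever the data at that station imply, not a free tuning parameter.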
More tomorrow, need some sleep now…
Re mikelorrey’s continuing misunderstanding of equilibrium, 6/26/10 at 2:15 pm:
You left out one important word when you wrote,
“Stagnation is zero movement or change, equilibrium is zero NET movement or change. They are completely different things.”
That’s according to the definition you cited. You should have written,
“Stagnation is zero movement or change, dynamic equilibrium is zero NET movement or change. They are completely different things.”
(This is assuming in each instance that you are talking about changes in macroscopic variables.)
Imagine two systems in radiation balance, that is, in dynamic equilibrium. Take away one of the systems. The other will lose its thermal energy through radiation. Neither system was in thermodynamic equilibrium at any time.
(a) Dynamic equilibrium is not thermodynamic equilibrium.
(b) Dynamic equilibrium applies to reversible processes, which certainly excludes climate.
Ferdinand.
“It shows that CDIAC/Keeling Sr. was the master brain, beginning it all.”
Keeling knew nothing whatsoever about measuring CO2 when he was recruited for the job, and took it because he wanted to spend time in the open air rather than in a stuffy office. There is no doubt he was greatly influenced by Callendar, who had his own reasons for his selection of historic CO2 data.
It is instructive to read Keeling’s autobiography, where he confirms his lack of knowledge, and to go through Callendar’s archives.
Both were great men in their own way but it is stretching a point to say Keeling was the master brain.
Tonyb
tonyb says:
June 26, 2010 at 3:52 pm
>>Neither system was in thermodynamic equilibrium at any time.
>>(a) Dynamic equilibrium is not thermodynamic equilibrium.
>>(b) Dynamic equilibrium applies to reversible processes, which certainly excludes climate.
Like on many items, you have your own ideas about definitions, which differ from what others mean. In the case of two systems in radiation balance, the whole process is in thermodynamic equilibrium, and each of them is in thermodynamic equilibrium, as their state doesn’t change: for each of them, and for both together, all the inputs equal the outputs. For nearly everyone in the world with some technical or scientific knowledge, the word “dynamic” is implied without being mentioned, except for you.
Climate never is in dynamic equilibrium, as the inputs and outputs continuously change and mostly not in an equal way. But it certainly is reversible.
tonyb says:
June 26, 2010 at 3:52 pm
Hi Tony, some time ago…
The discussion mentioning Keeling was about the Mauna Loa data, where Keeling indeed was the master brain of discovering the reason why over land there was such high variability (vegetation uptake/breathing), choosing a better location (South Pole first, Mauna Loa second), and inventing a continuous sampling method + calibration which was about 100 times more accurate than most chemical methods used before him…
Jeff Glassman says:
June 26, 2010 at 1:30 pm
>>R. F. Keeling and S. Piper were IPCC contributing authors for both the TAR and AR4.
Thus every contributing author of the IPCC (including Spencer, McIntyre,…) is on your personal blacklist of IPCC fraudsters?
>>The yes part of his two-way answer is wrong. Nowhere is the ocean in equilibrium, which is the ultimate state of stagnation. In equilibrium, there are no currents and no heat transfer (to use the redundant term). One cannot even say that something is close to equilibrium. A system either is or it is not in equilibrium. The surface layer is in turmoil, including all thin slices of it.
OK, if you insist: read into every mention of “equilibrium” that 99% of all engineers in the world talk about a dynamic equilibrium, never a static one, as the latter doesn’t exist in the real world. Thus the ultimate surface layer of the oceans is always in dynamic equilibrium with the atmosphere, although a lot of molecules are transferred both ways… But as the layers below it aren’t in equilibrium with the atmosphere (at most places), there is always a difference in transfer rates. This results in a lot of CO2 degassing at the equator and a lot of absorption near the poles. But for at least the past 420,000 years, the whole CO2 system was in dynamic equilibrium, where the level in the atmosphere was influenced only by temperature changes. That changed 150 years ago with the human emissions.
>>This is not correct. The conclusion from Zeebe, et al., is for a fictional surface layer perpetually restrained to be in equilibrium. That conclusion relies on the stoichiometric equations of equilibrium, and the solution given graphically in the Bjerrum plot. The uptake and outgassing of CO2 is governed by Henry’s Law. Dissolution does not depend on the pressure difference or pressure gradient. Except for the fact that this air-sea exchange model is crucial to justifying AGW, it is a surprising error from a prominent, contributing, PhD professor of geophysics, and from someone who claims credentials as a chemist.
If you think that “Dissolution does not depend on the pressure difference or pressure gradient”, you simply demonstrate that you don’t understand the physics and chemistry involved. If there were no pressure gradient between free CO2 in the water and in the atmosphere (that means equal transfer of molecules from water to air and in reverse), then there would be zero (net) flux (or a dynamic equilibrium).
What you are saying is that only temperature (via Henry’s Law) is involved in the amount of (free) CO2 in seawater, and hence the flux in either direction between ocean and atmosphere. This is completely wrong. The amount of free CO2 in seawater depends on a lot of factors other than temperature alone: pH, salt content, DIC content. From
http://www.soest.hawaii.edu/oceanography/faculty/zeebe_files/Publications/WolfGladrowMarChem07.pdf
you can learn (page 289) that pH and DIC have a direct influence on (dissolved free) CO2 concentration at a constant temperature and salt content. Thus any use of temperature alone doesn’t show what is really happening in solution, thus not what will happen in reality. pCO2, measured or calculated, is the only realistic parameter which may give the difference between oceanic and atmospheric CO2 pressure, thus the direction and to a certain extent the quantity of the flux.
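The pH/DIC dependence described above follows from carbonate speciation: at fixed DIC, the free CO2 concentration is DIC / (1 + K1/[H+] + K1·K2/[H+]²), so lowering pH (raising [H+]) raises free CO2 even at constant temperature. A minimal sketch, with K1 and K2 set to illustrative values near seawater at 25°C:

```python
def free_co2(dic, ph, k1=1.4e-6, k2=1.1e-9):
    """Dissolved free CO2 (mol/kg) from DIC and pH via carbonate speciation.

    k1, k2 are illustrative stoichiometric dissociation constants roughly
    appropriate for seawater at 25 C; real values vary with T and salinity.
    """
    h = 10.0 ** (-ph)
    return dic / (1.0 + k1 / h + k1 * k2 / (h * h))

dic = 2.0e-3  # mol/kg, roughly modern surface-ocean DIC
# At fixed DIC and temperature, lowering pH raises the free CO2,
# and with it the pCO2 the water exerts on the air above.
print(free_co2(dic, 8.2), free_co2(dic, 8.0))
```

This is why a single Henry's-Law curve against temperature alone cannot predict the oceanic pCO2 on its own: the same DIC at the same temperature yields different free CO2 at different pH.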
This is your error, not somebody else’s (including the many who have published, measured and calculated pCO2 from the oceans).
>>Engelbeen dismisses solubility, also known as dissolution and Henry’s Law, repeatedly. Here, he dismisses it as if someone had made a “theoretical calculation”. Once again, he refers to something no one said.
Henry’s law holds for one level of pH, DIC and salt content of seawater. Change one of these parameters and the curve according to Henry’s Law moves, as the concentration of free CO2 (thus the pressure to go in/out solution) changes. Using one curve of Henry’s Law for one level of the others is completely wrong.
The pCO2 of seawater is measured routinely on seaships by simply spraying seawater in a closed air system at the temperature of the seawater and measuring the CO2 level of that air. Thus pCO2 of seawater is what the atmosphere would get if both seawater and air were in dynamic equilibrium at that temperature. Any partial pressure of CO2 in the real atmosphere above that would give a flux into the oceans and vv. Temperature is important, but only one of the parameters involved. pCO2 gives the right answer…
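The ΔpCO2-driven flux described in words here is commonly parameterized as F = k · K0 · (pCO2,sea − pCO2,air), with a gas-transfer velocity k that rises roughly with the square of wind speed. A minimal sketch under those assumptions (the quadratic coefficient and solubility value are Wanninkhof-style illustrations, not figures taken from this thread):

```python
def air_sea_co2_flux(pco2_sea_uatm, pco2_air_uatm, wind_ms, k0_mol_per_l_atm=0.03):
    """Net sea-to-air CO2 flux, mol m^-2 yr^-1 (positive = outgassing).

    Quadratic wind-speed gas-transfer velocity, Wanninkhof-style:
    k [cm/hr] = 0.31 * U10^2 (Schmidt-number factor omitted for brevity).
    """
    k_cm_per_hr = 0.31 * wind_ms ** 2
    k_m_per_yr = k_cm_per_hr * 1e-2 * 24 * 365          # cm/hr -> m/yr
    k0_mol_per_m3_atm = k0_mol_per_l_atm * 1000.0       # mol/(L*atm) -> mol/(m3*atm)
    dpco2_atm = (pco2_sea_uatm - pco2_air_uatm) * 1e-6  # uatm -> atm
    return k_m_per_yr * k0_mol_per_m3_atm * dpco2_atm

# Water oversaturated relative to the air outgasses (positive flux);
# undersaturated water takes CO2 up (negative flux).
print(air_sea_co2_flux(450.0, 385.0, 7.0))   # positive, a few mol/m2/yr
print(air_sea_co2_flux(330.0, 385.0, 7.0))   # negative
```

The sign and rough size of the flux follow directly from the measured pCO2 difference, which is the point being argued: temperature enters through K0 and through the seawater pCO2, but the gradient drives the exchange.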
Ferdinand
Your 12.38 message was presumably aimed at Jeff Glassman-I never said anything about the subject 🙂
With regard to your 12.46: perhaps Keeling was the master brain EVENTUALLY, but when he first joined up he knew nothing of the subject and took considerable advice from Callendar, whom he greatly admired, and took his historic readings from him.
Callendar’s archives are instructive, as he clearly took the historic concentration levels that suited his theory (that man was causing climate change) and discarded others.
Keeling later says in his autobiography that the 19th Century scientists were more accurate than he had initially believed (as a young, untried, inexperienced PhD) in measuring historic CO2 concentrations.
Keeling undoubtedly did a lot of interesting work, but we shall continue to agree to differ as to whether his records are 100 times more accurate than those of the 19th Century scientists he subsequently came to admire 🙂
All the best
Tonyb