Some people claim that there's a human to blame …

Guest Post by Willis Eschenbach

There seem to be a host of people out there who want to discuss whether humanoids are responsible for the post ~1850 rise in the amount of CO2. People seem madly passionate about this question. So I figure I’ll deal with it by employing the method I used in the 1960s to fire off dynamite shots when I was in the road-building game … light the fuse, and run like hell …

First, the data, as far as it is known. What we have to play with are several lines of evidence, some of which are solid, and some not so solid. These break into three groups: data about the atmospheric levels, data about the emissions, and data about the isotopes.

The most solid of the atmospheric data, as we have been discussing, is the Mauna Loa CO2 data. This in turn is well supported by the ice core data. Here’s what they look like for the last thousand years:

Figure 1. Mauna Loa CO2 data (orange circles), and CO2 data from 8 separate ice cores. Fuji ice core data is analyzed by two methods (wet and dry). Siple ice core data is analyzed by two different groups (Friedli et al., and Neftel et al.). You can see why Michael Mann is madly desirous of establishing the temperature hockeystick … otherwise, he has to explain the Medieval Warm Period without recourse to CO2. Photo shows the outside of the WAIS ice core drilling shed.

So here’s the battle plan:

I’m going to lay out and discuss the data and the major issues as I understand them, and tell you what I think. Then y’all can pick it all apart. Let me preface this by saying that I do think that the recent increase in CO2 levels is due to human activities.

Issue 1. The shape of the historical record.

I will start with Figure 1. As you can see, there is excellent agreement between the eight different ice cores, including the different methods and different analysts for two of the cores. There is also excellent agreement between the ice cores and the Mauna Loa data. Perhaps the agreement is coincidence. Perhaps it is conspiracy. Perhaps it is simple error. Me, I think it represents a good estimate of the historical background CO2 record.

So if you are going to believe that this is not a result of human activities, it would help to answer the question of what else might have that effect. It is not necessary to provide an alternative hypothesis if you disbelieve that humans are the cause … but it would help your case. Me, I can’t think of any obvious other explanation for that precipitous recent rise.

Issue 2. Emissions versus Atmospheric Levels and Sequestration

There are a couple of datasets that give us amounts of CO2 emissions from human activities. The first is the CDIAC emissions dataset. This gives the annual emissions (as tonnes of carbon, not CO2) separately for fossil fuel gas, liquids, and solids. It also gives the amounts for cement production and gas flaring.

The second dataset is much less accurate. It is an estimate of the emissions from changes in land use and land cover, or “LU/LC” as it is known … what is a science if it doesn’t have acronyms? The most comprehensive dataset I’ve found for this is the Houghton dataset. Here are the emissions as shown by those two datasets:

Figure 2. Anthropogenic (human-caused) emissions from fossil fuel burning and cement manufacture (blue line), land use/land cover (LU/LC) changes (white line), and the total of the two (red line).

While this is informative, and looks somewhat like the change in atmospheric CO2, we need something to compare the two directly. The magic number to do this is the number of gigatonnes (billions of tonnes, 1 × 10^9) of carbon that it takes to change the atmospheric CO2 concentration by 1 ppmv. This turns out to be 2.13 gigatonnes of carbon (C) per 1 ppmv.
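For those who want to follow along with the arithmetic, here's a minimal sketch of that conversion, using only the 2.13 GtC per ppmv figure from the text (the 100 ppmv example in the comment is my own round number, not from the datasets):

```python
# Conversion between a change in atmospheric CO2 concentration (ppmv)
# and the equivalent mass of carbon (GtC), using 2.13 GtC per ppmv.
GTC_PER_PPMV = 2.13

def ppmv_to_gtc(ppmv):
    """Carbon mass (GtC) equivalent to a given CO2 concentration change (ppmv)."""
    return ppmv * GTC_PER_PPMV

def gtc_to_ppmv(gtc):
    """CO2 concentration change (ppmv) equivalent to a given carbon mass (GtC)."""
    return gtc / GTC_PER_PPMV

# Example: a ~100 ppmv rise (roughly 280 -> 380 ppmv) corresponds to
# about 100 * 2.13 = 213 GtC remaining in the atmosphere.
```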

Using that relationship, we can compare emissions and atmospheric CO2 directly. Figure 3 looks at the cumulative emissions since 1850, along with the atmospheric changes (converted from ppmv to gigatonnes C). When we do so, we see an interesting relationship. Not all of the emitted CO2 ends up in the atmosphere. Some is sequestered (absorbed) by the natural systems of the earth.

Figure 3. Total emissions (fossil, cement, & LU/LC), amount remaining in the atmosphere, and amount sequestered.

Here we see that not all of the carbon that is emitted (in the form of CO2) remains in the atmosphere. Some is absorbed by some combination of the ocean, the biosphere, and the land. How are we to understand this?

To do so, we need to consider a couple of often conflated measurements. One is the residence time of CO2. This is the amount of time that the average CO2 molecule stays in the atmosphere. It can be calculated in a couple of ways, and is likely about 6–8 years.
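One common way to calculate residence time is to divide the atmospheric carbon stock by the gross flux through the atmosphere. The figures below (~800 GtC stock, ~110 GtC/yr gross flux) are round illustrative values I've assumed, not numbers from the post, but they land in the 6–8 year range it cites:

```python
# Residence time of the average CO2 molecule: atmospheric carbon stock
# divided by the gross annual flux through the atmosphere.
def residence_time(stock_gtc, gross_flux_gtc_per_yr):
    """Mean residence time in years."""
    return stock_gtc / gross_flux_gtc_per_yr

print(residence_time(800.0, 110.0))  # ~7.3 years
```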

The other measure, often confused with the first, is the half-life, or alternatively the e-folding time of CO2. Suppose we put a pulse of CO2 into an atmospheric system which is at some kind of equilibrium. The pulse will slowly decay, and after a certain time, the system will return to equilibrium. This is called “exponential decay”, since a certain percentage of the excess is removed each year. The strength of the exponential decay is usually measured as the amount of time it takes for the pulse to decay to half its original value (half-life) or to 1/e (0.37) of its original value (e-folding time). The length of this decay (half-life or e-folding time) is much more difficult to calculate than the residence time. The IPCC says it is somewhere between 90 and 200 years. I say it is much less, as does Jacobson.
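The two measures of decay strength are related by a fixed factor, since the decay is exponential: half-life = e-folding time × ln(2). A quick sketch:

```python
import math

# For exponential decay, pulse(t) = pulse(0) * exp(-t / tau),
# where tau is the e-folding time. The half-life follows directly.
def half_life(tau):
    """Half-life (years) for a given e-folding time tau (years)."""
    return tau * math.log(2)

def remaining_fraction(t, tau):
    """Fraction of the original pulse remaining after t years."""
    return math.exp(-t / tau)

# The IPCC's 90-200 year e-folding range corresponds to
# half-lives of roughly 62 to 139 years.
```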

Now, how can we determine if it is actually the case that we are looking at exponential decay of the added CO2? One way is to compare it to what a calculated exponential decay would look like. Here’s the result, using an e-folding time of 31 years:
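The comparison can be sketched as a one-box model: each year's emissions are added to the atmospheric excess, and the whole excess decays exponentially with a 31-year e-folding time. The emissions series below is a crude illustrative exponential I've assumed, not the actual CDIAC/Houghton data:

```python
import math

TAU = 31.0  # e-folding time in years, as used in the post

def run_box_model(emissions_gtc):
    """Return (airborne, sequestered) cumulative GtC for an emissions series."""
    excess = 0.0   # excess atmospheric carbon above the pre-industrial baseline
    emitted = 0.0  # running total of emissions
    for e in emissions_gtc:
        # add this year's emissions, then apply one year of exponential decay
        excess = (excess + e) * math.exp(-1.0 / TAU)
        emitted += e
    return excess, emitted - excess

# Illustrative emissions growing ~3%/yr from 0.5 GtC over 150 years:
emissions = [0.5 * 1.03 ** y for y in range(150)]
airborne, sequestered = run_box_model(emissions)
```

With any growing emissions series, this model reproduces the qualitative picture in Figure 4: a roughly constant fraction of cumulative emissions stays airborne, with the remainder sequestered.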

Figure 4. Total cumulative emissions (fossil, cement, & LU/LC), cumulative amount remaining in the atmosphere, and cumulative amount sequestered. Calculated sequestered amount (yellow line) and calculated airborne amount (black) are shown as well.

As you can see, the assumption of exponential decay fits the observed data quite well, supporting the idea that the excess atmospheric carbon is indeed from human activities.

Issue 3. 12C and 13C carbon isotopes

Carbon has a couple of natural isotopes, 12C and 13C. 12C is lighter than 13C. Plants preferentially use the lighter isotope (12C). As a result, plant derived materials (including fossil fuels) have a lower amount of 13C with respect to 12C (a lower 13C/12C ratio).

It is claimed (I have not looked very deeply into this) that since about 1850 the amount of 12C in the atmosphere has been increasing. There are several lines of evidence for this: 13C/12C ratios in tree rings, 13C/12C ratios in the ocean, and 13C/12C ratios in sponges. Together, they suggest that the cause of the post 1850 CO2 rise is fossil fuel burning.

However, there are problems with this. For example, here is a Nature article called “Problems in interpreting tree-ring δ 13C records”. The abstract says (emphasis mine):

THE stable carbon isotopic (13C/12C) record of twentieth-century tree rings has been examined1-3 for evidence of the effects of the input of isotopically lighter fossil fuel CO2 (δ13C ~ -25‰ relative to the primary PDB standard4), since the onset of major fossil fuel combustion during the mid-nineteenth century, on the 13C/12C ratio of atmospheric CO2 (δ13C ~ -7‰), which is assimilated by trees by photosynthesis. The decline in δ13C up to 1930 observed in several series of tree-ring measurements has exceeded that anticipated from the input of fossil fuel CO2 to the atmosphere, leading to suggestions of an additional input of biospheric CO2 (δ13C ~ -25‰) during the late nineteenth/early twentieth century. Stuiver has suggested that a lowering of atmospheric δ13C of 0.7‰ from 1860 to 1930, over and above that due to fossil fuel CO2, can be attributed to a net biospheric CO2 (δ13C ~ -25‰) release comparable, in fact, to the total fossil fuel CO2 flux from 1850 to 1970. If information about the role of the biosphere as a source of or a sink for CO2 in the recent past can be derived from tree-ring 13C/12C data, it could prove useful in evaluating the response of the whole dynamic carbon cycle to increasing input of fossil fuel CO2 and thus in predicting potential climatic change through the greenhouse effect of resultant atmospheric CO2 concentrations. I report here the trend (Fig. 1a) in whole wood δ13C from 1883 to 1968 for tree rings of an American elm, grown in a non-forest environment at sea level in Falmouth, Cape Cod, Massachusetts (41°34’N, 70°38’W) on the northeastern coast of the US. Examination of the δ13C trends in the light of various potential influences demonstrates the difficulty of attributing fluctuations in 13C/12C ratios to a unique cause and suggests that comparison of pre-1850 ratios with temperature records could aid resolution of perturbatory parameters in the twentieth century.

This isotopic line of argument seems like the weakest one to me. The total flux of carbon through the atmosphere is about 211 gigatonnes per year, plus the human contribution. This means that the human contribution to the atmospheric flux ranged from ~2.7% in 1978 to 4% in 2008. During that time, the average value of the 13C/12C ratio across the 11 NOAA measuring stations decreased by 0.7 per mil.

Now, the atmosphere has a 13C/12C ratio of ~ -7 per mil. Given that, for the amount of CO2 added to the atmosphere to cause a 0.7 per mil drop, the added CO2 would need to have had a 13C/12C of around -60 per mil.

But fossil fuels in the current mix have a 13C/12C ratio of ~ -28 per mil, only about half of that required to make such a change. So it is clear that fossil fuel burning is not the sole cause of the change in the atmospheric 13C/12C ratio. Note that this is the same finding as in the Nature article.
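The mass balance behind this can be sketched as simple two-component mixing: adding a fraction f of carbon with isotopic signature delta_add to an atmosphere at delta_atm shifts the ratio by roughly f × (delta_add − delta_atm). The fraction f = 0.013 below is an assumed illustrative value, chosen only to show how the post's −60 per mil figure can arise; the post itself does not specify the exact fraction:

```python
# Two-component mixing for the delta-13C mass balance: given an observed
# drop in the atmospheric ratio and the fraction of added carbon, back out
# the signature the added carbon must have had.
def required_delta(delta_atm, observed_drop, f):
    """delta-13C (per mil) the added carbon needs to explain the drop.

    Uses the small-f approximation: drop ~= f * (delta_atm - delta_add).
    """
    return delta_atm - observed_drop / f

print(required_delta(-7.0, 0.7, 0.013))  # approx -60.8 per mil
```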

In addition, from an examination of the year-by-year changes it is obvious that there are other large scale effects on the global 13C/12C ratio. From 1984 to 1986, it increased by 0.03 per mil. From ’86 to ’89, it decreased by 0.2 per mil. And from ’89 to ’92, it didn’t change at all. Why?

However, at least the sign of the change in atmospheric 13C/12C ratio (decreasing) is in agreement with the theory that at least part of it is from anthropogenic CO2 production from fossil fuel burning.

CONCLUSION

As I said, I think that the preponderance of evidence shows that humans are the main cause of the increase in atmospheric CO2. It is unlikely that the change in CO2 is from the overall temperature increase. During the ice age to interglacial transitions, on average a change of 7°C led to a doubling of CO2. We have seen about a tenth of that change (0.7°C) since 1850, so we’d expect a CO2 change from temperature alone of only about 20 ppmv.
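The arithmetic in that last sentence can be checked directly. If a 7°C warming roughly doubles CO2, then a warming of dT scales a 280 ppmv baseline by 2^(dT/7):

```python
# Rough check of the conclusion's temperature-to-CO2 arithmetic:
# if 7 C of warming roughly doubles CO2 on glacial-interglacial timescales,
# scale the baseline by 2**(dT/7) for a smaller warming dT.
def co2_change_from_warming(dt_c, baseline_ppmv=280.0, doubling_per_c=7.0):
    """Expected CO2 change (ppmv) from warming dt_c alone."""
    return baseline_ppmv * (2.0 ** (dt_c / doubling_per_c) - 1.0)

print(co2_change_from_warming(0.7))  # ~20 ppmv, matching the post
```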

Given all of the issues discussed above, I say humans are responsible for the change in atmospheric CO2 … but obviously, for lots of people, YMMV. Also, please be aware that I don’t think that the change in CO2 will make any meaningful difference to the temperature, for reasons that I explain here.

So having taken a look at the data, we have finally arrived at …

RULES FOR THE DISCUSSION OF ATTRIBUTION OF THE CO2 RISE

1. Numbers trump assertions. If you don’t provide numbers, you won’t get much traction.

2. Ad hominems are meaningless. Saying that some scientist is funded by big oil, or is a member of Greenpeace, or is a geologist rather than an atmospheric physicist, is meaningless. What is important is whether what they say is true or not. Focus on the claims and their veracity, not on the sources of the claims. Sources mean nothing.

3. Appeals to authority are equally meaningless. Who cares what the 12-member Board of the National Academy of Sciences says? Science isn’t run by a vote … thank goodness.

4. Make your cites specific. “The IPCC says …” is useless. “Chapter 7 of the IPCC AR4 says …” is useless. Cite us chapter and verse, specify page and paragraph. I don’t want to have to dig through an entire paper or an IPCC chapter to guess at which one line you are talking about.

5. QUOTE WHAT YOU DISAGREE WITH!!! I can’t stress this enough. Far too often, people attack something that another person hasn’t said. Quote their words, the exact words you think are mistaken, so we can all see if you have understood what they are saying.

6. NO PERSONAL ATTACKS!!! Repeat after me. No personal attacks. No “only a fool would believe …”. No “Are you crazy?”. No speculation about a person’s motives. No “deniers”, no “warmists”, no “econazis”, none of the above. Play nice.

OK, countdown to mayhem in 3, 2, 1 … I’m outta here.

michel
June 7, 2010 12:47 am

Yes, entirely reasonable. It seems most likely that human activities, not confined to fossil fuel burning, are indeed raising the CO2 ppm in the atmosphere.
It is also clear that this will contribute a modest warming effect, that is just physics.
The debate is what, if anything, happens next. Does the effect get amplified by positive feedbacks, or reduced by negative feedbacks, or overwhelmed by other factors?

Larry Huldén
June 7, 2010 1:01 am

Dear Willis!
Out of topic but text for Figure 2 includes … land use/land cover (LU/LC) changes (green line), … It looks white to me (unless I am white/green colour blind).
Good luck with your work!

Darkinbad the Brightdayler
June 7, 2010 1:05 am

“So if you are going to believe that this is not a result of human activities”
I’m not comfortable with the use of the words “Believe” or “Disbelieve” in a scientific context. These words are more appropriate to discussions about religion and concepts which are not open to a process of proof.
To pull them into a scientific debate is to allow participants to think and respond in a less rigorous way than they ought.

Manfred
June 7, 2010 1:11 am

I don’t have an issue with CO2 concentrations, however regarding your first and “most solid” argument, I wonder if the ice core data has not been “calibrated” or “adjusted” deliberately to match the Mauna Loa record.

Alex
June 7, 2010 1:13 am

Very convincing!
Question: when drawing Fig. 4, which lifetime have you assumed for CO2?

HR
June 7, 2010 1:15 am

BANG!!!!!!!

Steveta_uk
June 7, 2010 1:16 am

The very flat CO2 records pre 1800 may be in part due to CO2 diffusion in ice, which potentially smooths variations that may have been present during the MWP, for example.
http://catalogue.nla.gov.au/Record/3773250
Not sure this changes any of your post-1800 arguments, tho.

Harry
June 7, 2010 1:16 am

RULES FOR THE DISCUSSION OF ATTRIBUTION OF THE CO2 RISE
Amen!

Baa Humbug
June 7, 2010 1:23 am

Was it something we said Willis?

John Finn
June 7, 2010 1:34 am

So if you are going to believe that this is not a result of human activities, it would help to answer the question of what else might have that effect….
…and to bear in mind that the effect is not a ‘bump’ i.e. it’s unlikely to be a one-off event or ‘shift’.
PS I’m not a supporter of the ‘catastrophic’ AGW argument. I’ve argued on RC with Michael Mann about the validity of his HS reconstruction, for example.

richard telford
June 7, 2010 1:35 am

In addition to the evidence presented above, there are at least two further lines of evidence supporting the hypothesis that the CO2 increase is caused by humans:
– the decline in atmospheric O2 concentrations, measured by Ralph Keeling’s group. See https://bluemoon.ucsd.edu/images/ALLo.pdf Such declines are expected if the CO2 rise is due to combustion, but not if it were due to volcanism or ocean outgassing.
- the ocean surface is on average undersaturated in CO2 and there is net uptake of CO2. Hence the rise in CO2 cannot be due to ocean outgassing, or submarine volcanoes. This uptake of CO2 will cause the ocean to become more acidic (==less alkaline). See for example https://bora.uib.no/handle/1956/2090?language=no

Mooloo
June 7, 2010 1:36 am

I’ve never used warmist as a term of abuse. What are we meant to call people who believe the “CO2 causes warming” theory?
My only objection to being called a denier is its lack of specificity. Especially since I don’t deny that the world is warming – although I do not believe some of the claims of the rate of warming. But as a term, per se, I don’t find denier offensive.

Stephen Wilde
June 7, 2010 1:39 am

I’m inclined to accept that there is prima facie evidence for human activity being the cause.
However I would prefer to exclude all other possibilities before accepting that as definitive. Bear in mind that it matters not if the climate effect is negligible as seems likely for various reasons.
Areas where I have doubts are as follows:
i) How accurate are the historical methods of measurement on short timescales of say less than 500 years ? The MWP and LIA are not shown by historical CO2 records but current methods do pick up even seasonal variability at Mauna Loa. Perhaps the historical records pre 1850 are just too coarse ?
ii) Mauna Loa shows rapid seasonally related movements in the amount of CO2 recorded so the suggested 800 year lag does not seem to apply on shorter timescales.
iii) How variable is oceanic uptake in global as opposed to local terms ? Could it be that the oceans can provoke substantial changes in the atmospheric content of CO2 over certain timescales with a natural 500/1000 year cycling amounting to as much as say 50 % of the current background level ? The period 1850 to date covers a period of recovery from the LIA and the current ongoing methods of CO2 measurement show a corresponding trend but the historical methods pre 1850 show no such corresponding CO2 and temperature trends either up or down.
iv) We saw a slowdown in the CO2 upward trend in the mid 20th century when there was slight atmospheric cooling yet no corresponding changes with ongoing temperature changes appear in the historical record.
The evidence suggests a significant disjunction between the accuracy of the pre 1850 historical CO2 proxies as against post 1850 instrumental methods similar to the famous disjunction (the hockey stick) from the mid 20th century between tree ring based historical temperature proxies and the late 20th century thermometer recordings. In both cases one gets a hockey stick pattern which should not be apparent in light of what we know about the MWP and LIA from multiple other sources.
I wonder if there is also significance in both temperatures and CO2 levels being involved in tree growth. Could a similar problem with proxy methods be upsetting all of the pre 1850 non thermometer temperature records, pre 1850 CO2 and pre 1950 (that is when they seem to have started to go awry) tree growth proxies ?
There are enough questions to give doubt to the significance of the prima facie evidence of anthropogenic causation.
Certainly modern measuring methods clearly reflect a CO2 link with recent temperature changes but the proxy methods seem to lose the temperature signal altogether apart from what may be a separate longer term signal in the form of that 800 year lag in the much older samples.

Xi Chin
June 7, 2010 1:41 am

I agree with you.
But there is an argument that increased temperature can cause increased CO2 levels. I am not saying the temperature has increased and that is what has caused the CO2 concentration “hockey stick”. I am asking, what is the sensitivity of CO2 concentration to temperature… i.e. what kind of temperature increase would be required to produce that change in CO2? Presumably they would be massive temperature changes? Just wondering if anyone knows the figures.
Please let me stress the hypothetical nature of my point. I do not support it as a reason for the increase in CO2. I agree that the most likely reason (and the only sensible one I know of) is anthropogenic emissions.

tonyb
Editor
June 7, 2010 1:42 am

I would like to give some historic context to the CO2 debate in as much according to Willis’ graph the constancy of CO2 at 280ppm but the variability of temperatures over thousands of years appears to show that CO2 is a weak climate driver.
Graph 1 http://www.ourcivilisation.com/aginatur/cycles/fig3.htm
The above shows reconstructed temperatures to 1400. Many periods within the LIA were surprisingly warm as well as extremely cold -all of this apparently happening with a constant level of CO2.
Graph 2 Shows the temperature diagram used in the IPCC assessment 1990 (figure 7c page 202 assessment 1) This is at the top of page.
http://climateaudit.org/2008/05/09/where-did-ipcc-1990-figure-7c-come-from-httpwwwclimateauditorgp3072previewtrue/
Graph 3 The above was based on a number of graphs from Hubert Lamb (shown lower down the article linked above). The one below shows Winter severity in Europe, 1000 – 1900. Note two cold periods in the 15th and 17th centuries. Based on Lamb, 1969 / Schneider and Mass, 1975.1
Graph 4 Ice cores show constant levels of co2 on which Michael Mann based his hockey stick illustrating constant levels of temperature until the modern era. However when actual real world temperatures (CET) are graphed against total CO2 emissions we see that temperatures are not constant-in fact they are highly variable.
http://c3headlines.typepad.com/.a/6a010536b58035970c0120a7c87805970b-pi
Graph 5 If CET data to 1659 could be extended back in time from 1300AD to around 800AD (Lamb) it would cover the Medieval Warm Period with temperature levels somewhat higher than today, but again with its peaks and troughs. The Roman optimum warm period-around 300 BC to 400AD would also show temperatures at similar levels to the MWP but again with peaks and troughs. (Few extended climatic periods are unremittingly warm or cold).
Temperatures have trended up slowly since the low point of the LIA in the 1690’s. The following link contains a graph showing CET again.
http://cadenzapress.co.uk/download/beck_mencken_hadley.jpg
Looking at the climatic peaks and troughs illustrated in the graph stretching back from the modern era-and extending it with the various graphs through the LIA- it is a reasonable conclusion to draw that at a constant 280ppm that either CO2 is a weak climate driver, or that history has erased higher CO2 measurements that might explain those variations prior to the last half century, when our emissions are thought to be of such a significance that they are changing our climate.
This latter supposition was the approach I took in plotting a fraction of Beck’s records (shown as green dots) against CET records back to 1660 which appear on the graph linked above. Total cumulative man made CO2 emissions throughout this period are represented by the blue line along the bottom and come from CDIAC. In this respect it can be seen in context against all emissions plotted in graph 4 above.
The temperature spikes make much more sense with these additional CO2 measurement points, and bearing in mind the well documented temperatures back to Roman times and beyond-to levels greater than and less than today- it is reasonable to conclude that in as much CO2 is a contributor to the climate driver mechanism, it is as part of natural CO2 variability within the overall carbon cycle whereby nature makes a far greater contribution than man.
If there are no CO2 spikes (high and low) to match the temperature spikes (high and low) either;
a) The temperature spikes did not exist and Dr Mann is correct, or;
b) CO2 levels have had little or no effect on temperature in the past and it needs to be argued why they have suddenly become such a driving force today (despite temperatures today being unremarkable in a historic context).
Tonyb

RobinL
June 7, 2010 1:42 am

‘Rules for discussion’, what a piece of work. Should be chiselled on the wall of every academic establishment everywhere. IMO.

Steve Schapel
June 7, 2010 1:53 am

Thank you once again, Willis, for the incredible amount of time and thought that goes into your articles, and for your clear exposition of the topic.
I guess it is an important question. If the increase of atmospheric CO2 is attributable to human activity, then it follows that it is possible for changes in human activity to reduce the rate of increase.
But of course, that is only relevant to those who think that reducing the rate of increase is important or desirable.

MikeC
June 7, 2010 2:04 am

Okay Willis… you econo-freaka-nature you!

June 7, 2010 2:04 am

My brain always seems to find tangents to a topic that keep me entertained for hours. I am happy to accept that Willis has, as usual, done his homework properly and that he has made a properly reasoned and validated case for us humints being the cause of the rise of the quantity of CO2 in the atmosphere – but if CO2 is mostly plant food, is that not a GOOD THING that will help our food crops grow? And why does everyone bang on about fossil fuels? As I see it, the burning of non-fossil fuels such as dried cow dung, trees etc is also a problem due to soots etc given off, deforestation, etc.

Telboy
June 7, 2010 2:12 am

Mr.Eschenbach, are you crazy? No, you’re not; you’ve given me a lot to think about. Thank you.

charles nelson
June 7, 2010 2:12 am

looks like the ‘sequestration curve’ is chasing the emissions curve. Is the difference going into oceans or biomass?

George Tetley
June 7, 2010 2:13 am

BOOM !!!

geronimo
June 7, 2010 2:19 am

Is anyone seriously suggesting that the CO2 increase hasn’t, at least in part, been due to humans burning fossil fuels? Can we differentiate natural burning of fossil fuels from the data, and what contribution do they make to the overall CO2 source?

Phillip Bratby
June 7, 2010 2:21 am

You have Willis’ surname spelt incorrectly at the top.

Griz
June 7, 2010 2:21 am

Willis,
Thanks for another informative post.
I really don’t care what camp someone is from, as long as their numbers add up.

JoeH
June 7, 2010 2:30 am

It’s probably me being stupid on a Monday morning, but I couldn’t find a link to the actual data that the anthropogenic emissions are based on. Is there one?

Alexander Vissers
June 7, 2010 2:31 am

An interesting summary of atmospheric CO2 trend. Moreover the recognition of our relative ignorance on the fluctuations is putting us back on our feet. Maybe another good advice: don’t claim what you don’t know.

JoeH
June 7, 2010 2:32 am

Sorry, meant to say – a link to the data on which the “ESTIMATIONS” of anthropogenic emissions are based on.

Slioch
June 7, 2010 2:41 am

Willis
Another informative post.
How nice to see the mother (I’m speaking literally, of course) of all hockey sticks (Fig.1) displayed so prominently on WUWT!!!
However, with respect to your treatment of the rate of absorption of CO2 by the oceans, you say:
“Suppose we put a pulse of CO2 into an atmospheric system which is at some kind of equilibrium. The pulse will slowly decay, and after a certain time, the system will return to equilibrium.”
You suggest the rate of decay is exponential. However, for a truly exponential decay, the instantaneous rate of absorption must be proportional only to the concentration of CO2 in the atmosphere. This requires that the absorbing agent – the oceans – are unaffected by the process (ie. are essentially infinite) and hence do not affect the rate of absorption. But that is not the case.
The oceans are limited in their capacity to absorb CO2 for (at least) two reasons:
Firstly, the volume of ocean available to the atmosphere is relatively small compared to the total volume of the oceans. For CO2 to be absorbed into the bulk of the oceans, and removed from contact with the atmosphere, it needs to be absorbed in cold polar regions with downwelling currents. Elsewhere the CO2 tends to remain in the surface layers of the ocean.
Secondly, the reaction whereby CO2 is most readily absorbed is NOT by simple reaction with water, ie:
CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3- ⇌ 2H+ + CO3^2-
Rather, it is by reaction with carbonate ion (CO3^2-), which is itself largely derived from weathering of terrestrial rocks, and is present in limited quantities, thus:
CO2 + CO3^2- + H2O ⇌ 2HCO3- [the HCO3- can also interact as above]
This limited amount of CO3^2- present in the oceans further ensures that the oceanic sink does not behave as if it is infinite, and therefore further removes the situation from that of exponential decay of atmospheric CO2.
So, what should we expect? In the early decades of a pulse of CO2 being added to the atmosphere, with a “fresh” ocean awaiting, the near exponential decay of CO2 is possible. But as the surface layers of the ocean become more saturated with CO2, its ability to absorb more CO2 declines, and the removal of CO2 from the atmosphere departs from the exponential, and becomes much slower. A number of published studies suggest that between about one fifth and one third of a pulse of CO2 would remain in the atmosphere for long periods, only being eventually removed over millennia as the slow weathering of rocks delivers more CO3– to the oceans.
[I may not be able to respond further – I have to go elsewhere]

Richard S Courtney
June 7, 2010 2:43 am

Willis:
Thank you for this. Perhaps now some rational debate can occur on this subject.
You have made clear that you “think that the preponderance of evidence shows that humans are the main cause of the increase in atmospheric CO2”.
But it is important to understand that there is no evidence which could be said to prove the matter.
I do not know if the cause of the increase is in part or in whole either anthropogenic or natural, but I want to know. And I am frequently offended by assertions of people that they do know. Such assertions hinder both the obtaining and the evaluation of empirical data that pertains to the issue.
But it seems that there are people who want to believe in an anthropogenic cause of the rise, so they assert that the cause must be anthropogenic. Their argument was repeatedly stated in another thread on WUWT and – as demonstration – I quote one such assertion of that type from there.
“XXXX. says:
June 5, 2010 at 4:29 pm
Dr XXXX says:
June 5, 2010 at 4:15 pm
“I think that Mauna Loa CO2 measurements are valid. However, I haven’t seen any evidence that man is responsible for the increase. Given that the fossil fuel derived percentage of atmospheric CO2 is estimated at 1-4%, it seems doubtful that burning fossil fuels is the cause of the increase.”
Since the measured annual accumulation in the atmosphere is about half the amount released into the atmosphere by fossil fuel combustion it’s impossible for it to be otherwise!”
The flaw in such assertions is that they assume the only addition of CO2 to the carbon cycle is anthropogenic. But this is not the case. The rapid changes to atmospheric CO2 concentration during each year show that the system of the carbon cycle very rapidly adjusts to seasonal changes in atmospheric CO2 concentration that are an order of magnitude greater than the anthropogenic emission each year. The anthropogenic emission is to the air, but the rapid changes in seasonal atmospheric CO2 concentration do not suggest that the system is near to saturation that would prevent the system from sequestering the anthropogenic emission from the air.
CO2 is emitted to the atmosphere from various sources and is sequestered from the atmosphere by various sinks. Hence, there is a turnover of CO2 in the atmosphere. An imbalance between the amounts emitted and sequestered will result in a change to the amount of CO2 in the atmosphere, but no subset of the emitted molecules accumulates in the atmosphere (all the molecules are subjected to the exchanges between the sources and sinks). In one of our 2005 papers
(ref. Rorsch A, Courtney RS & Thoenes D, ‘The Interaction of Climate Change and the Carbon Dioxide Cycle’ E&E v16no2 (2005) )
we used very conservative estimates that exaggerate any effect on the carbon cycle of the anthropogenic emission, and we reported:
“At present the yearly increase of the anthropogenic emissions is approximately 0.1 GtC/year. The natural fluctuation of the excess consumption (i.e. consumption processes 1 and 3 minus production processes 2 and 4) is at least 6 ppmv (which corresponds to 12 GtC) in 4 months. This is more than 100 times the yearly increase of human production, which strongly suggests that the dynamics of the natural processes here listed 1-5 can cope easily with the human production of CO2.”
The system is easily capable of sequestering all the emission (both ‘natural’ and anthropogenic).
Simply, the anthropogenic emission is observed to be so trivial a proportion of the total emission that it cannot overcome the ability of the sinks to sequester all the emission (including the anthropogenic proportion). At issue is why – according to the Mauna Loa data – the system does not sequester all the emission in each year since 1958, and our paper considered that issue.
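As a back-of-envelope check on the magnitudes quoted above, the comparison can be sketched in a few lines of Python. The 2 GtC-per-ppmv factor is simply the rounded conversion implied by the quoted "6 ppmv (which corresponds to 12 GtC)"; this is an illustration of the quoted figures, not the paper's own calculation:

```python
# Comparing the magnitudes quoted above. The 2 GtC-per-ppmv factor is
# the rounded conversion implied by "6 ppmv (which corresponds to
# 12 GtC)" in the quoted passage.

GTC_PER_PPMV = 2.0

seasonal_swing_ppmv = 6.0    # natural excess consumption in ~4 months
seasonal_swing_gtc = seasonal_swing_ppmv * GTC_PER_PPMV      # 12 GtC

annual_emission_gtc = 6.5    # anthropogenic emission per year
emission_growth_gtc = 0.1    # yearly increase of that emission

# The seasonal swing versus the yearly increase of the emission:
print(seasonal_swing_gtc / emission_growth_gtc)   # 120 -- "more than 100 times"
# ... and versus the whole annual anthropogenic emission:
print(round(seasonal_swing_gtc / annual_emission_gtc, 2))  # about 1.85
```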
As an aside, I address your point concerning the ice-core data because I think it is a distraction. There are two pertinent issues with the ice core results; viz. validation and interpretation.
Stomata data consistently show much higher (about 15%) and much more variable atmospheric CO2 concentration than ice core data.
(ref. e.g. Lenny L. R. Kouwenberg, Jennifer C. McElwain, Wolfram M. Kürschner, Friederike Wagner, David J. Beerling, Francis E. Mayle and Henk Visscher, ‘Stomatal frequency adjustment of four conifer species to historical changes in atmospheric CO2’ American Journal of Botany (2003) )/
A good – but one-sided – consideration of this subject in a form accessible to laymen is at
http://www.geocraft.com/WVFossils/stomata.html
Hence, the ice-core data are shown to be wanting when validated against stomata data.
As Kouwenberg et al. (Laboratory of Palaeobotany and Palynology, Utrecht University, Netherlands) reported in 2005:
“Stomatal data increasingly substantiate a much more dynamic Holocene CO2 evolution than suggested by ice core data.”
It should be noted that ice core data are inherently incapable of revealing high and low atmospheric concentrations of the gases. There are several reasons for this with the most notable being that gases diffuse from regions of high concentration in unsealed firn in the decades before the ice sealed, and high values of the gas concentrations measured in the ice cores are deleted from the data sets using the assumption that high values are ‘biogenic artefacts’. The diffusion also reduces the observed rates of change to gas concentrations indicated by the ice core data. Stomata data do not suffer from these problems and indicate that the recent rates of change to atmospheric concentration of carbon dioxide have repeatedly occurred in recent millennia and during transition from the last ice age.
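The diffusion argument can be illustrated with a deliberately crude numerical sketch: a running mean standing in for decades of gas diffusion in unsealed firn. This is a toy illustration under assumed numbers (a flat 280 ppmv baseline, a 20 ppmv excursion, a ~70-year smoothing window), not a physical firn model:

```python
# Toy illustration (not a physical firn model): diffusion in unsealed
# firn acts roughly like a multi-decade moving average, so a short
# CO2 excursion is attenuated in the archived ice-core record.

def moving_average(series, window):
    """Centred running mean, truncated at the edges."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

# 300 years of a flat 280 ppmv baseline with a 20 ppmv excursion
# lasting 20 years in the middle.
record = [280.0] * 300
for year in range(140, 160):
    record[year] = 300.0

smoothed = moving_average(record, window=71)  # assumed ~70-yr smoothing

print(max(record))              # 300.0 in the "true" atmosphere
print(round(max(smoothed), 1))  # about 285.6 in the "archived" record
```

The short excursion survives in the smoothed record only as a small bump, which is the sense in which a diffusion-smoothed archive cannot reveal high and low excursions.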
So, there is – at very least – adequate reason to assess the recent changes in atmospheric CO2 concentration as indicated at Mauna Loa, Barrow, etc. on the basis of the behaviour of the carbon cycle since 1958 (when measurements began at Mauna Loa).
Comparison of the recent rise in atmospheric CO2 concentration with paleo data merely provides a debate as to
(a) the validity of the ice-core data (which provides the ‘hockey stick’ graph you reproduce above)
and
(b) the validity of the stomata data that shows the recent rise in atmospheric CO2 concentration is similar to rises that have repeatedly happened previously.
Having said that, I copy below from the message that I posted on the other thread.
“Please note how trivial the anthropogenic emission is to the total CO2 flowing around the carbon cycle.
According to NASA estimates, the carbon in the air is less than 2% of the carbon flowing between parts of the carbon cycle. And the recent increase to the carbon in the atmosphere is less than a third of that less than 2%.
And NASA provides an estimate that the carbon in the ground as fossil fuels is 5,000 GtC and humans are transferring it to the carbon cycle at a rate of ~7 GtC per year.
In other words, the annual flow of carbon into the atmosphere from the burning of fossil fuels is less than 0.02% of the carbon flowing around the carbon cycle.
It is not obvious that so small an addition to the carbon cycle is certain to disrupt the system because no other activity in nature is so constant that it only varies by less than +/- 0.02% per year.
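For what it is worth, the quoted proportions can be checked in a couple of lines. The 40,000 GtC total for carbon in the active cycle and the 750 GtC for the atmosphere are assumed round figures chosen to be consistent with the "less than 2%" and "less than 0.02%" statements; they are not stated in the text above:

```python
# Checking the quoted proportions. The totals below are assumed round
# figures consistent with the "less than 2%" and "less than 0.02%"
# statements; only the 7 GtC/year flux is quoted in the text.

total_cycle_gtc = 40_000
atmosphere_gtc = 750
fossil_flux_gtc_per_year = 7     # quoted human transfer rate

print(atmosphere_gtc / total_cycle_gtc)            # 0.01875 -> under 2%
print(fossil_flux_gtc_per_year / total_cycle_gtc)  # 0.000175 -> under 0.02%
```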
In one of our papers
(ref. Rorsch A, Courtney RS & Thoenes D, ‘The Interaction of Climate Change and the Carbon Dioxide Cycle’ E&E v16no2 (2005) )
we considered the most important processes in the carbon cycle to be:
SHORT-TERM PROCESSES
1. Consumption of CO2 by photosynthesis that takes place in green plants on land. CO2 from the air and water from the soil are coupled to form carbohydrates. Oxygen is liberated. This process takes place mostly in spring and summer. A rough distinction can be made:
1a. The formation of leaves that are short lived (less than a year).
1b. The formation of tree branches and trunks, that are long lived (decades).
2. Production of CO2 by the metabolism of animals, and by the decomposition of vegetable matter by micro-organisms including those in the intestines of animals, whereby oxygen is consumed and water and CO2 (and some carbon monoxide and methane that will eventually be oxidised to CO2) are liberated. Again distinctions can be made:
2a. The decomposition of leaves, that takes place in autumn and continues well into the next winter, spring and summer.
2b. The decomposition of branches, trunks, etc. that typically has a delay of some decades after their formation.
2c. The metabolism of animals that goes on throughout the year.
3. Consumption of CO2 by absorption in cold ocean waters. Part of this is consumed by marine vegetation through photosynthesis.
4. Production of CO2 by desorption from warm ocean waters. Part of this may be the result of decomposition of organic debris.
5. Circulation of ocean waters from warm to cold zones, and vice versa, thus promoting processes 3 and 4.
LONGER-TERM PROCESSES
6. Formation of peat from dead leaves and branches (eventually leading to lignite and coal).
7. Erosion of silicate rocks, whereby carbonates are formed and silica is liberated.
8. Precipitation of calcium carbonate in the ocean, that sinks to the bottom, together with formation of corals and shells.
NATURAL PROCESSES THAT ADD CO2 TO THE SYSTEM
9. Production of CO2 from volcanoes (by eruption and gas leakage).
10. Natural forest fires, coal seam fires and peat fires.
ANTHROPOGENIC PROCESSES THAT ADD CO2 TO THE SYSTEM
11. Production of CO2 by burning of vegetation (“biomass”).
12. Production of CO2 by burning of fossil fuels (and by lime kilns).
Several of these processes are rate dependent and several of them interact.
At higher air temperatures, the rates of processes 1, 2, 4 and 5 will increase and the rate of process 3 will decrease. Process 1 is strongly dependent on temperature, so its rate will vary strongly (maybe by a factor of 10) throughout the changing seasons.
The rates of processes 1, 3 and 4 are dependent on the CO2 concentration in the atmosphere. The rates of processes 1 and 3 will increase with higher CO2 concentration, but the rate of process 4 will decrease.
The rate of process 1 has a complicated dependence on the atmospheric CO2 concentration. At higher concentrations at first there will be an increase that will probably be less than linear (with an “order” <1). But after some time, when more vegetation (more biomass) has been formed, the capacity for photosynthesis will have increased, resulting in a progressive increase of the consumption rate.
Processes 1 to 5 are obviously coupled by mass balances.
Our paper assessed the steady-state situation to be an oversimplification because there are two factors that will never be “steady”:
I. The removal of CO2 from the system, or its addition to the system.
II. External factors that are not constant and may influence the process rates, such as varying solar activity.
Modeling this system is difficult because so little is known concerning the rate equations. However, some things can be stated from the empirical data.
At present the yearly increase of the anthropogenic emissions is approximately 0.1 GtC/year. The natural fluctuation of the excess consumption (i.e. consumption processes 1 and 3 minus production processes 2 and 4) is at least 6 ppmv (which corresponds to 12 GtC) in 4 months. This is more than 100 times the yearly increase of human production, which strongly suggests that the dynamics of the natural processes here listed 1-5 can cope easily with the human production of CO2.
A serious disruption of the system may be expected when the rate of increase of the anthropogenic emissions becomes larger than the natural variations of CO2. But the above data indicates this is not possible.
The accumulation rate of CO2 in the atmosphere (1.5 ppmv/year which corresponds to 3 GtC/year) is equal to almost half the human emission (6.5 GtC/year). However, this does not mean that half the human emission accumulates in the atmosphere, as is often stated. There are several other and much larger CO2 flows in and out of the atmosphere. The total CO2 flow into the atmosphere is at least 156.5 GtC/year with 150 GtC/year of this being from natural origin and 6.5 GtC/year from human origin. So, on the average, 3/156.5 = 2% of all emissions accumulate.
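The arithmetic in that paragraph is easy to reproduce (a sketch of the quoted figures only):

```python
# Reproducing the airborne-share arithmetic from the quoted figures.

accumulation_gtc = 3.0      # 1.5 ppmv/year corresponds to ~3 GtC/year
human_emission_gtc = 6.5
natural_emission_gtc = 150.0
total_emission_gtc = natural_emission_gtc + human_emission_gtc  # 156.5

print(round(accumulation_gtc / human_emission_gtc, 2))  # 0.46: "almost half"
print(round(accumulation_gtc / total_emission_gtc, 3))  # 0.019: the quoted ~2%
```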
The above qualitative considerations suggest the carbon cycle cannot be very sensitive to relatively small disturbances such as the present anthropogenic emissions of CO2. However, the system could be quite sensitive to temperature. So, our paper considered how the carbon cycle would be disturbed if – for some reason – the temperature of the atmosphere were to rise, as it almost certainly did between 1880 and 1940 (an estimated rise of 0.5 °C in average surface temperature).
Please note that the figures I use above are very conservative estimates that tend to exaggerate any effect of the anthropogenic emission.
Our paper then used attribution studies to model the system response. Those attribution studies used three different basic models to emulate the causes of the rise of CO2 concentration in the atmosphere in the twentieth century. They each assumed
(a) a significant effect of the anthropogenic emission
and
(b) no discernible effect of the anthropogenic emission.
Thus we assessed six models.
These numerical exercises are a caution to estimates of future changes to the atmospheric CO2 concentration. The three basic models used in these exercises each emulate different physical processes and each agrees with the observed recent rise of atmospheric CO2 concentration. They each demonstrate that the observed recent rise of atmospheric CO2 concentration may be solely a consequence of the anthropogenic emission or may be solely a result of, for example, desorption from the oceans induced by the temperature rise that preceded it. Furthermore, extrapolation using these models gives very different predictions of future atmospheric CO2 concentration whatever the cause of the recent rise in atmospheric CO2 concentration.
Each of the models in our paper matches the available empirical data without use of any ‘fiddle-factor’ such as the ‘5-year smoothing’ the UN Intergovernmental Panel on Climate Change (IPCC) uses to get its model to agree with the empirical data. Please note this:
the ‘budget’ model uses unjustifiable smoothing of the empirical data to get the model to fit the data, but each of our models fits the empirical data that is not adjusted in any way.
So, if one of the six models of our paper is adopted then there is a 5:1 probability that the choice is wrong. And other models are probably also possible. And the six models each give a different indication of future atmospheric CO2 concentration for the same future anthropogenic emission of carbon dioxide.
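The underdetermination point – that quite different models can fit the same short record yet diverge on extrapolation – can be illustrated generically. The following toy example is not one of the paper's six models; it simply fits two different curve shapes exactly through three invented, Mauna-Loa-like points and extrapolates both:

```python
import math

# Invented, Mauna-Loa-like "observations" (ppmv) at t = 0, 25, 50
# years (think 1958, 1983, 2008). Illustrative numbers only.
t_obs = [0, 25, 50]
c_obs = [315.0, 340.0, 370.0]

# Model 1: quadratic c(t) = a*t^2 + b*t + c0 through the three points.
c0 = c_obs[0]
a = (c_obs[2] - 2 * c_obs[1] + c_obs[0]) / (2 * 25 ** 2)
b = (c_obs[1] - c0) / 25 - a * 25

def quadratic(t):
    return a * t * t + b * t + c0

# Model 2: shifted exponential c(t) = C + A*exp(k*t), also exact.
ratio = (c_obs[2] - c_obs[1]) / (c_obs[1] - c_obs[0])  # equals e^(25k)
k = math.log(ratio) / 25
A = (c_obs[1] - c_obs[0]) / (ratio - 1)
C = c_obs[0] - A

def shifted_exp(t):
    return C + A * math.exp(k * t)

# Both models reproduce the "record" exactly ...
for t in t_obs:
    assert abs(quadratic(t) - shifted_exp(t)) < 1e-9

# ... but disagree substantially 150 years past the record.
print(round(quadratic(200), 1), round(shifted_exp(200), 1))  # 655.0 727.5
```

Agreement with the observed record is therefore no discriminator between the two shapes, which is exactly the situation described for the six models.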
Data that fits all the possible causes is not evidence for the true cause.
Data that only fits the true cause would be evidence of the true cause.
But the above findings demonstrate that there is no data that only fits either an anthropogenic or a natural cause of the recent rise in atmospheric CO2 concentration.
Hence, the only factual statements that can be made on the true cause of the recent rise in atmospheric CO2 concentration are
(a) the recent rise in atmospheric CO2 concentration may have an anthropogenic cause, or a natural cause, or some combination of anthropogenic and natural causes,
but
(b) there is no evidence that the recent rise in atmospheric CO2 concentration has a mostly anthropogenic cause or a mostly natural cause.
Hence, using the available data it cannot be known what if any effect altering the anthropogenic emission of CO2 will have on the future atmospheric CO2 concentration. This finding agrees with the statement in Chapter 2 from Working Group 3 in the IPCC’s Third Assessment Report (2001) that says; “no systematic analysis has published on the relationship between mitigation and baseline scenarios”.”
Richard

Ben M
June 7, 2010 2:45 am

Are you sure the CDIAC dataset is accurate?
I thought a recent paper threw a bucket of cold water on it.
http://www.sciencemag.org/cgi/content/full/328/5983/1241 (subscription req’d)
http://theresilientearth.com/?q=content/guessing-co2-emissions (summary)

June 7, 2010 2:52 am

Willis,
A good explanation of many things, especially the e-folding time. But I don’t agree with your calculation of the 31 year period. I think you have calculated as if each added ton of CO2 then decays exponentially back to the 1850 equilibrium level. But the sea has changed. It is no longer in equilibrium with 280 ppm CO2. Of course it has its own diffusion timescale, and lags behind the air in pCO2. You could think of the decay as being back to some level at each stage intermediate between 1850 and present.
If you apply that process to the emission curve, you’ll match the airborne fraction with a slower decay (longer time constant) where the decay has less far to go.
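A minimal numerical sketch of this point (an assumed toy relaxation model with invented levels, not Nick Stokes's own calculation): with the same e-folding time, a pulse decaying toward an equilibrium that has itself drifted upward stays higher than one decaying all the way back to 280 ppmv, so matching the observed airborne fraction then requires a longer time constant.

```python
# Toy relaxation model (assumed for illustration). Concentration moves
# 1/tau of its distance toward the equilibrium level each year.

def decay(c0, baseline, tau, years):
    c = c0
    for _ in range(years):
        c += (baseline - c) / tau
    return c

start = 390.0   # ppmv just after an added pulse (invented)
tau = 31        # the e-folding time discussed in the post (years)

# Decay toward the pre-industrial 280 ppmv equilibrium ...
fixed = decay(start, baseline=280.0, tau=tau, years=50)
# ... versus decay toward an assumed intermediate equilibrium, since
# the sea is no longer at 1850 conditions.
moving = decay(start, baseline=335.0, tau=tau, years=50)

print(round(fixed, 1), round(moving, 1))  # roughly 301.3 and 345.7
```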

June 7, 2010 2:53 am

Just one thing puzzles me – the ocean is the biggest store of CO2. The solubility of CO2 follows Henry’s law. There were the MWP and LIA, with circa 2 deg C difference between them, and 2 deg C causes some 10% change in CO2 solubility.
http://www.rocketscientistsjournal.com/2006/10/_res/CO2-06.jpg
Why is there no sign of the MWP/LIA in the ice core CO2 data? If we consider only a surface layer of a certain thickness (not the whole ocean volume) and calculate the 10% degassing, it should have been visible in the ice core record.
Today, the rate of CO2 rise plays well with SST data.
http://climate4you.com/images/CO2%20MaunaLoa%20Last12months-previous12monthsGrowthRateSince1958.gif
The 1998 El Nino is clearly visible, as are La Nina episodes and volcanic eruptions. But strangely, the 2007 La Nina is not visible. Moreover, as the oceans start to cool, the rate of rise stabilizes.
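The Henry's-law sensitivity quoted here can be rough-checked with the usual van 't Hoff temperature dependence for CO2 solubility (the ~2400 K coefficient is a typical textbook value, an assumption on my part). It gives roughly 3% per deg C, i.e. ~6% for 2 deg C – the same order as the ~10% quoted:

```python
import math

# Van 't Hoff temperature dependence of CO2 solubility in water.
# The ~2400 K coefficient is an assumed, commonly tabulated value.

def co2_solubility_ratio(t_cold_c, t_warm_c, coeff_k=2400.0):
    """Ratio of CO2 solubility at the colder vs the warmer temperature."""
    t_cold = t_cold_c + 273.15
    t_warm = t_warm_c + 273.15
    return math.exp(coeff_k * (1.0 / t_cold - 1.0 / t_warm))

# A 2 deg C warming of ~12 deg C surface water:
ratio = co2_solubility_ratio(12.0, 14.0)
print(round((ratio - 1.0) * 100, 1))  # percent change, about 6.0
```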

crosspatch
June 7, 2010 2:53 am

“As I said, I think that the preponderance of evidence shows that humans are the main cause of the increase in atmospheric CO2. ”
Ok, fine. Is there anything that would lead anyone to believe that the increase in CO2 is harmful in any way?

Slioch
June 7, 2010 2:57 am

Re my previous post: unfortunately the equilibrium signs between the various chemical species have not shown up, making understanding of them a little difficult. Also, carbonate ion is shown with only one minus sign – it should have two.
Oh well. Those familiar with chemistry will have to make their own adjustments. Sorry about that.

Slabadang
June 7, 2010 3:11 am

Willy!
I’m thinking about the rollercoaster graph of historical temp/CO2 that Mr Al Gore/Gavin Smith presented.
If warmth has had a restricted impact of 20 ppm since 1850, things surely don’t add up in any respect. And the cause/effect relation between CO2 and temperature will still be written with question marks for a long time.
To me the work on climate science describes how little we know, rather than how much.
There is a big potential that many science articles are right in some respect: Spencer, Lindzen, Miskolczi, Scafetta, the IPCC, and Svensmark too. The big tragedy is that so much focus has been on the CO2 hype, which has certainly crippled the science.
Blaming CO2 without understanding the negative and positive feedbacks is like eating dinner before you have cooked it! CO2 is a gas for life. It’s very possible that we are increasing the chances for life on the planet by increasing CO2 turnover.
But no one is calculating the benefits.

tonyb
Editor
June 7, 2010 3:12 am

I have added this thread- plus the previous one- to my own thread carried at air vent.
http://noconsensus.wordpress.com/2010/03/06/historic-variations-in-co2-measurements/#comment-29876
Collectively the articles/links/comments provide a huge reservoir of information on the subject. However the controversy remains as the science is not settled.
I think the comments from Richard S Courtney above warrant very close reading and I for one remain very suspicious of the ice core data after reading numerous articles on the subject.
It would be very useful to have a companion article on ice core data here at WUWT that uses the very latest research.
Tonyb

June 7, 2010 3:13 am

Good job, Willis. Takes guts to put stuff out there these days. For me, it is always curious what the facts are. Truth versus opinion: Can we distinguish the two?
While I am way out of my realm, commenting about origins of carbon dioxide, climate science or even responding to your article, I have been fully engaged in advising government and industry on environment, resource conservation, pollution prevention for over 30 years. Still scratch my head at the final priorities established and decisions made.
We have to be open to examine man’s role on the planet more holistically. Our world is much more complex than its response to any one chemical or molecule or organism present. Yet, our minds tend to fixate on the one thing that is the source and cause of all eco-problems, then seek its eradication. Governments and large institutions have become very skilled at chemical (flu organism) demonization, crisis creation and urgent-reactive solution, through the mass media. This is why I find resources such as WUWT so interesting and instructive. We are prompted to think first.
The average person, if there is such a person, is much more inquisitive and open to ideas, presentations, facts, data and its interpretation than institutions are willing to believe or want to accept. Computer simulations are illusive to them. You would have to have worked a government risk assessment, for example, to know how large an impact assumptions, rolled together, can have, especially on the decided course of action. Even then, we assume adopting no effect level standards for multiple materials equate to a clean soup.
Is the planet warming or cooling? It just makes no intuitive sense that lowly man could have such an effect on a trend that could be happening over hundreds if not thousands of years. One would have to assume earth is the solar system and universe.
Isn’t it the tornado that ripped through our yard last night that matters? Confining a massive oil leak after it happens to the smallest area? Planning and protecting for disruptive events is key to the survival of the human race. Ending fossil fuel use helps that how exactly? Oil will still leak naturally into oceans.
Like it or not, we are caretakers on this planet. Caretakers of each other and the environment we necessarily access. Our goal should always be the next generation. Otherwise, why procreate?
Before we obsess on fossil fuel combustion, we might want to examine the way we settle ourselves. If we did, we might reconsider our utilization of resources. This is not a federal government or world order level of intention, it is much more at the individual, family, or local community levels. Economic collapse is testing our response.
Personally, I have always had more faith in individual people than institutions. Those who come here, read your article today, think about the carbon dioxide and your suggested rules, might take a moment to reflect on context before they leap. Thank you.

Andrew W
June 7, 2010 3:19 am

Of course, those nasty warmists have been trying to explain for years just how solid the evidence is that it is indeed human activity that’s causing the CO2 rise, it’s laughable that many “skeptics” are only capable of accepting the reasoning when it’s explained to them by one of the good people at WUWT.

William Gray
June 7, 2010 3:19 am

Willis how do you manage it.
From CO2 Science there’s a graph showing the amplification of the seasonal CO2 cycle, with the claim that “about one fifth is due to human contributions.” And from WUWT an article stating that soil fauna emit the same carbon isotopes (13C, 14C) as fossil fuels do. Please forgive me for not providing the links, sorry. Now a simple observation if I may: plants take up this isotope more readily than 12C, and coupled with warming this has produced the current state.

Richard S Courtney
June 7, 2010 3:23 am

richard telford:
At June 7, 2010 at 1:35 am you assert:
“In addition to the evidence presented above, there are at least two further lines of evidence supporting the hypothesis that the CO2 increase is caused by humans:
– the decline in atmospheric O2 concentrations, measured by Ralph Keeling’s group. See https://bluemoon.ucsd.edu/images/ALLo.pdf Such declines are expected if the CO2 rise is due to combustion, but not if it were due to volcanism or ocean outgassing.
– the ocean surface is on average undersaturated in CO2 and there is net uptake CO2. Hence the rise in CO2 cannot be use to ocean outgassing, or submarine volcanoes. This uptake of CO2 will cause the ocean to become more acidic (==less alkaline). See for example https://bora.uib.no/handle/1956/2090?language=no”
I address each of these “lines of evidence” in turn.
The cause of the O2 decline may or may not be related to the burning of fossil fuels. And the O2 decline is certainly NOT “evidence supporting the hypothesis that the CO2 increase is caused by humans”.
Both O2 and CO2 concentrations in the atmosphere are affected by biological activity (all the O2 is in the air because it is released by plants). Consumption of CO2 by photosynthesis takes place in green plants. CO2 from the air and water are coupled to form carbohydrates and O2 is liberated.
Hence, your first point merely introduces a debate about variation of the oxygen cycle and, therefore, adds confusion to discussion of the cause(s) of recent rise to atmospheric CO2 concentration.
Your point about “uptake of CO2” by the oceans cuts both ways. The great bulk of carbon flowing around the carbon cycle is in the oceans. An equilibrium state exists between the atmospheric CO2 concentration and the carbon concentration in the ocean surface layer. So, all other things being equal, if the atmospheric CO2 concentration increases then – as you say – the ocean surface layer will dissolve additional CO2 and alkalinity of the layer will reduce. However, the opposite is also true.
If the alkalinity of the ocean surface layer reduces then the equilibrium state will alter to increase the atmospheric CO2 concentration and to reduce the carbon in the ocean surface layer. The pH change required to achieve all of the recent rise in atmospheric CO2 concentration (i.e. since 1958 when measurements began at Mauna Loa) is less than 0.1, which is much, much too small for it to be detectable. And changes of this magnitude can be expected to occur.
Surface waters sink to the ocean bottom, travel around the globe for ~800 years, then return to the ocean surface. They can be expected to dissolve S and Cl from exposure to undersea volcanism during their travels. So, the return of these waters to the surface will convey the S and Cl ions to the surface layer centuries after their exposure to the volcanism, and this could easily reduce the surface layer pH by more than 0.1. Hence, variations in undersea volcanism centuries ago could be completely responsible for the recent rise in atmospheric CO2 concentration.
Please note that the fact that these volcanic variations could be responsible for the recent rise does not mean they are responsible (which is the same logic as the fact that the anthropogenic emissions could be responsible does not mean that they are).
However, Tom Quirk observes that the geographical distribution of atmospheric carbon isotopes provides a better fit to the undersea volcanism hypothesis than to the anthropogenic hypothesis as a cause of the rise: see
http://climaterealists.com/attachments/database/A%20Clean%20Demonstration%20of%20Carbon%2013%20Isotope%20Depletion.pdf
There are many possible causes of the recent rise in atmospheric CO2 concentration. They each warrant investigation, and there is not sufficient evidence to champion any one of them.
Richard

William Gray
June 7, 2010 3:27 am

Use the LOVE

John Trigge
June 7, 2010 3:27 am

Thanks, Willis, for the several hours of reading and cogitation ahead.
In your ‘Anthropogenic Emission, 1850 – 2005’ graph, why are there no spikes covering the two World Wars? I would have expected the scales to be fine enough to show at least a noticeable upward ‘blip’, given the enormous activity in fossil fuel use (and, I expect, explosives would also contribute) during these periods.
Back to reading/cogitating.

June 7, 2010 3:33 am

Baa Humbug says: (June 7, 2010 at 1:23 am) “Was it something we said Willis?”
Nice, Barr. Needed that twist of wit to break the clasp of the furrowed brow muscles right at about that point.

William Gray
June 7, 2010 3:35 am

If our influence wasn’t so controversial we could use it to advantage. Ego and greed tisk tisk tisk.

Peter Miller
June 7, 2010 3:42 am

I am always intrigued by any discussion which involves carbon dioxide levels in the oceans, as simple maths exposes the ‘problem’ to be completely insignificant.
Volume of oceans: 1.35 billion cubic kilometres.
Human production of carbon dioxide: ~27 billion tonnes per year.
Therefore: If all humanity’s production of carbon dioxide was absorbed by the oceans, their concentration of carbon dioxide would increase by one part in 50 million per year.
However, the oceans only absorb around half our carbon dioxide production, so the actual increase (before use by marine organisms) would be one part in 100 million per year.
The present average carbon dioxide levels in the ocean are ~90 parts per million. To increase this by one part per million (about 1%) would therefore take around 100 years. In reality, most of this carbon dioxide would be absorbed in the upper levels of the oceans.
Also of interest is that carbon dioxide makes up 15.1% of all gases absorbed in the oceans, versus 0.03% of all gases in the atmosphere.
I would suggest the oceans’ ability to absorb additional carbon dioxide is enormous, even at much higher temperatures than those prevailing today.
Reference: http://www.seafriends.org.nz/oceano/seawater.htm
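The dilution arithmetic above is easy to verify, taking seawater as roughly 1 tonne per cubic metre (an assumption for the mass conversion):

```python
# Reproducing the dilution arithmetic, with seawater taken as
# ~1 tonne per cubic metre (an assumption for the mass conversion).

ocean_volume_km3 = 1.35e9
ocean_mass_tonnes = ocean_volume_km3 * 1e9   # 1 km^3 of water ~ 1e9 t

co2_emitted_tonnes_per_year = 27e9   # human CO2 output per year
absorbed_fraction = 0.5              # roughly half taken up by the sea

added_fraction = (co2_emitted_tonnes_per_year * absorbed_fraction
                  / ocean_mass_tonnes)
print(1 / added_fraction)  # parts of ocean per part of CO2 added: ~1e8
```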

Britannic no-see-um
June 7, 2010 3:49 am

How trivial or significant is the direct additional respiratory and food production CO2 emission produced by increases in human longevity and population density since medieval times?

gilbert
June 7, 2010 3:51 am

And again you tell me all this after I’ve had to figure it out for myself.
I’m curious what you think of the following analysis of CO2 attribution by Ferdinand Engelbeen:
http://www.ferdinand-engelbeen.be/klimaat/co2_measurements.html
I agree that it doesn’t make any substantial difference, although I’m having a bit of difficulty resolving some of the differences between your thermostat theory and Ferenc Miskolczi’s.

Oslo
June 7, 2010 3:53 am

Well, as you say – your first graph resembles the Mann hockey stick, and perhaps for good reason, as it seems to utilize the good old “trick” – splicing the instrumental record onto the proxy data.
Here is another graph, clearly showing the instrumental data (red) disjointed from the proxies:
http://www.geo.cornell.edu/eas/energy/_Media/ice_core_co2.png
So the question is, as with the hockey stick: do the figures from the two methods even belong on the same graph?

Richard S Courtney
June 7, 2010 3:58 am

I write to support a point made by tonyb at June 7, 2010 at 1:42 am .
Either the ice core data are right or they are wrong.
If the ice core data are right then climate variability is not discernibly affected by atmospheric CO2 concentration: i.e. the ice core data show no variation in atmospheric CO2 concentration during the Roman Warm Period (RWP), the Dark Age Cool Period (DACP), the Medieval Warm Period (MWP) and the Little Ice Age (LIA). These climate variations must have been caused by something other than atmospheric CO2 concentration, and there is no reason to suppose that recent variations in climate are not a result of the “something other”.
Alternatively, the ice core data are wrong, so they should be ignored.
In either case, more research is needed before any definitive statements can be made concerning the causes of atmospheric CO2 concentration and climate variability on the basis of ice core data.
Richard

anna v
June 7, 2010 3:59 am

I would like to comment on Fig. 1.
1) The ice core records, evidently, come from regions largely depleted of fauna. The CO2 there is what the winds carry. It is not surprising that they show such a stable value, ignoring little ice ages and medieval warm periods (Henry’s law). This argues that the values measured are homogenized, at least over centuries.
The rise in the recent data is notable, but in a diffusion model not enough time and overburden pressure from the covering ice has passed for the recent years, where the trick is so smartly played with the Mauna Loa data.
2) The Mauna Loa data is also depleted by construction, so making the hockey stick shape is not hard in the overlap region. The real question is: if one had better resolution in the ice core data, for example for the medieval warm period, would it show values of the order of 350 ppm even in this depleted region? The whole anthropogenic CO2 argument rests on this assumption, that the rise in recent years is unprecedented. Beck’s data – and, I now learn from Courtney’s post above, stomata data – speak differently.
I have no doubt that the temperatures have been increasing since the little ice age, and therefore expect, by Henry’s law the CO2 to be increasing . It is possible that part of the increase is due to anthropogenic causes, but I am not convinced by the data presented.
Tons do not mean much as global numbers. For example: pollution remains close to the source, and I do not see why CO2 would be different. Most pollution sources are close to cities. It rains more over cities (http://earthobservatory.nasa.gov/Features/UrbanRain/urbanrain3.php) because of the pollution, and rain washes down CO2 too, mixing it with the water that ends up in the seas. How can this process be quantified? i.e. how much of the anthropogenic CO2 ends up in the “pure” background of the Antarctic and Arctic and Mauna Loa?
The whole field is rife with speculations and assumptions served as certainties.

Ken Hall
June 7, 2010 4:01 am

“Yes, entirely reasonable. It seems most likely that human activities, not confined to fossil fuel burning, are indeed raising the CO2 ppm in the atmosphere.
It is also clear that this will contribute a modest warming effect, that is just physics.
The debate is what, if anything, happens next. Does the effect get amplified by positive feedback, or reduced by negative feedbacks, or overwhelmed by other factors.”
Agreed, and this is where the models featured in IPCC reports and where “scientific consensus” appears to break down.
“Man produces CO2 and CO2 has a warming effect on the atmosphere.” That is about as far as the “scientific consensus” goes. How much warming, what feedbacks they trip, whether this is likely to be catastrophic, or whether it will be moderate and hidden by natural variability, or whether this anthropogenic CO2 will trigger negative feedbacks greater than positive ones… There are many scientists who disagree on all of these things.
Anyone who claims scientific consensus is selling something without a warranty.

Daniel H
June 7, 2010 4:02 am

As Richard Telford mentioned above, another line of evidence implicating fossil fuels as the source of the atmospheric CO2 increase is the change in the atmospheric O2:N2 ratio. This is discussed in AR4 WG1, The Physical Science Basis, Chapter 2, Section 2.3.1 and illustrated here. My only problem with this record is that it’s impossible to verify, since the raw data are “protected” behind a firewall at the Scripps Institution of Oceanography web site and therefore cannot be accessed by the general public. It would be great if someone at WUWT like Willis Eschenbach or Anthony Watts could convince Keeling to release his data, since it was funded by your tax dollars and mine, via the NSF and NOAA, and therefore ought to be in the public domain.
For more information, click the link “Lab Data”, located here:
https://bluemoon.ucsd.edu/data.html
Note: the Scripps/UCSD web site inexplicably uses an untrusted security certificate for their SSL connection which might trigger a browser security warning. However, this can be safely ignored.

Espen
June 7, 2010 4:06 am

Juraj V. says:
Why there is no sign of MWP/LIA in the ice core CO2 data?
I have the same question, though there seems to be a faint sign: have a look at the graph at the top of the article; at least the LIA seems to be visible. But the signal is weaker than would be expected, which strengthens the hypothesis that ice core data are not suitable for measuring CO2 at this resolution some hundreds of years back, but rather represent a (multi-)centennial moving average. In this regard, Richard S Courtney’s links to stomata estimates above are highly interesting.

BBk
June 7, 2010 4:11 am

“So, what should we expect? In the early decades of a pulse of CO2 being added to the atmosphere, with a “fresh” ocean awaiting, the near exponential decay of CO2 is possible. But as the surface layers of the ocean become more saturated with CO2, its ability to absorb more CO2 declines, and the removal of CO2 from the atmosphere departs from the exponential, and becomes much slower. ”
This assertion ignores diffusion of CO2 from the surface to the lower levels of the ocean. If diffusion (removal of CO2 from the surface to the lower volume) happens at a rate faster than or equal to the absorption of CO2 from the atmosphere, then the ocean can be considered “fresh” until the entire volume “fills”. While, in theory, the ocean would eventually saturate, the rate would be very slow.
Have there been any studies about the rate of diffusion of CO2 through the ocean layers?
My gut feeling is that since we’re dealing with Volume vs Area, that diffusion would, indeed, be a much larger value.
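BBk's point can be made concrete with a toy three-box model (atmosphere, surface ocean, deep ocean). This is a sketch with made-up rate constants, not a calibrated carbon-cycle model; it only illustrates that when surface-to-deep transport keeps pace with air-to-sea uptake, the surface layer never "fills" and the atmosphere ends up sharing the pulse with the whole ocean volume:

```python
# Toy three-box model (atmosphere / surface ocean / deep ocean), Euler
# steps of one year. The rate constants are illustrative guesses chosen
# only to show the qualitative effect, not measured values.
def run(k_as=0.05, k_sd=0.05, years=200):
    atm, surf, deep = 100.0, 0.0, 0.0      # arbitrary units of "excess" CO2
    for _ in range(years):
        f_air_sea = k_as * (atm - surf)    # net air -> surface-ocean flux
        f_down = k_sd * (surf - deep)      # surface -> deep transport
        atm -= f_air_sea
        surf += f_air_sea - f_down
        deep += f_down
    return atm, surf, deep
```

With `k_sd` comparable to `k_as`, the atmosphere ends up keeping only about a third of the pulse (all three boxes equilibrate); with `k_sd = 0` it keeps half, because only the surface layer is available as a sink.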

Stephen Wilde
June 7, 2010 4:13 am

Would it not be the case that ANY system of measurement that failed to reflect the climate events of the MWP and LIA would also fail to reflect the modern warming and so would ALWAYS produce a ‘hockey stick’ shape when grafted onto modern more sensitive systems of measurement ?

Ken Hall
June 7, 2010 4:15 am

“Of course, those nasty warmists have been trying to explain for years just how solid the evidence is that it is indeed human activity that’s causing the CO2 rise, it’s laughable that many “skeptics” are only capable of accepting the reasoning when it’s explained to them by one of the good people at WUWT.”
Nonsense. I do not know of anyone within the climate realist community that doubts (or ever doubted) that atmospheric CO2 concentration has increased, nor that the increase is largely caused by man.
What realists believe is what is proven scientifically using the full scientific method, and that is that there is NO PROOF that current warming (such as it is, and that depends very much on when you start and end the measurements and whether you have faith in the veracity and accuracy of those measurements and the analysis of those measurements) is entirely or mostly caused by those emissions, or that the outcome of those emissions will be catastrophic.
The real actual earth upon which we all live and rely on for our life is NOT a greenhouse, nor is it a computer model. It behaves a little bit like both, but crucially, it behaves a lot like neither and we simply do not know how the climate works in enough detail to be able to predict with any level of certainty what will happen next.
The UN IPCC has become far too reliant on former scientists who abandoned parts of the scientific method to push a theory when the scientific method failed to provide proof.
To wit: altering, amending or omitting data to make it fit the theory, closing down peer review to an “incestuous” in-group, bullying publishers and using threats against “sceptical” scientists are all methods that not only fail the scientific method but contradict it, and render the scientists involved in such dishonest practices advocates rather than scientists.

Stephan
June 7, 2010 4:20 am

Arctic ice is either melting or dissipating like mad
http://ocean.dmi.dk/arctic/icecover.uk.php
or it’s a mistake once again lol

Joe Lalonde
June 7, 2010 4:23 am

Willis,
I enjoy these mind manipulation response games so, here goes.
One thing I am finding is that trace elements attached to molecules such as O2 or CO2, or even H2O-16 or H2O-18, have a bearing on their mass weight and can also change our perception of how they interact in a magnetic field setting.
Think hard, my young scholar, on what EXACTLY gravity is. What one element is most involved, even on a small level?
No question that we are the major contributors of CO2, but trying to tie this to temperatures is foolhardy. We have not included the rotation of the planet, the elasticity of the atmosphere it pulls, or the pressure buildup we have caused.
Science has turned physical evidence into theories, and theories are almighty as long as math (not science!) is involved.

Curiousgeorge
June 7, 2010 4:35 am

A philosophical question, Willis. Assume for the moment that we lacked the capability to measure CO2 ( or temperature other than what we feel on our skin ). Would we then perceive our current environment as beneficial or detrimental ?

D.A. Neill
June 7, 2010 4:35 am

The anthropogenic emissions line in your figure 2 tracks closely with the world fuel consumptions statistics cited by Klashtorin and Lyubushin (in L.B. Klyashtorin and A.A. Lyubushin, “On the coherence between the dynamics of the world fuel consumption and global temperature anomaly”, Energy & Environment, Vol. 14, No. 6 (2003), Figure 1). K&L take the argument to the next logical step, however, and add in the global temperature anomaly line, and then check the correlation between delta T and WFC. The coefficient of correlation runs at +0.92 from 1861-1875; -0.71 from 1875-1910; +0.28 from 1910-1940; -.088 from 1940-1975; +0.94 from 1975-2000. In other words, there is no linear correlation between global temperature anomalies and world fuel consumption. The lack of a linear correlation falsifies the AGW thesis, the heart of which is the IPCC’s contentions (a) that anthropogenic additions to atmospheric carbon dioxide concentrations “have caused the largest forcing of the industrial [post-1950] period” (4th AR WG1, Chapter 2, 136), and (b) that the amplitude of the large-scale pattern of response will scale linearly with the forcing (4th AR WG1, Chapter 2, 670). If they pick their analytical end-points right, they can just barely make it work. Of course, if you pick your end-points right, you can make ANY argument work.
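For what it's worth, the piecewise-correlation check K&L describe is straightforward to reproduce in principle. The sketch below uses synthetic placeholder series (I do not have K&L's actual data), but it shows the method: Pearson correlation of temperature anomaly against world fuel consumption, computed separately over each of their sub-periods:

```python
import numpy as np

# Synthetic stand-ins: fuel use rises monotonically, temperature follows a
# slow oscillation. Replace with the real series from K&L's Figure 1.
years = np.arange(1861, 2001)
wfc = np.linspace(0.1, 10.0, years.size)
temp = np.sin((years - 1861) / 22.0)

periods = [(1861, 1875), (1875, 1910), (1910, 1940), (1940, 1975), (1975, 2000)]
results = {}
for start, end in periods:
    m = (years >= start) & (years <= end)
    # Off-diagonal entry of the 2x2 correlation matrix is Pearson's r.
    results[(start, end)] = float(np.corrcoef(temp[m], wfc[m])[0, 1])
    print(f"{start}-{end}: r = {results[(start, end)]:+.2f}")
```

The point of the exercise is that a monotonically rising driver correlated against an oscillating response yields sub-period correlations of wildly varying sign and size, which is exactly the pattern K&L report.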
If your figure 1 is accurate (and I have no reason to doubt your numbers), then the flat line from 1000 to 1850 or so, in the context of the MWP, the LIA, and the modern warming, demonstrates that there is no correlation whatsoever between atmospheric CO2 concentrations and average global temperatures. Yet ice core data over the last four glaciations demonstrates that there is a relationship between delta T and delta CO2, with the latter lagging the former (J.R. Petit, et al., “Climate and atmospheric history of the past 420,000 years from the Vostok ice core, Antarctica”, Nature 399 (1999), 429-436.). If there is, in fact, an 800-1200 year lag between average global temperature increase and atmospheric CO2 increase (as the Vostok cores seem to demonstrate, with CO2 concentrations varying by as much as 100 ppm in response to a multi-degree swing in average temperature), then seeing as how we’re about 1200 years past the onset of the significant temperature change of the MWP, might not the current rise in CO2 concentrations that began 200 years ago be a lagging artefact of the MWP, with human CO2 emissions a contributing factor perhaps, but a largely inconsequential one?
Just wondering; after all, INACS.*
It’s hard to argue with your numbers, and I certainly have none better than yours to offer. But I can’t help but wonder if we’re missing something. Humans, our technological hubris notwithstanding, really are bit players on a planetary scale.
* Obligatory self-abnegation: “I’m Not A Climate Scientist”
P.S. I work in a scientific organization, and I’ve cut’n’pasted your rules for discussion to all my colleagues. Every scientist should have them tattooed over his heart.

899
June 7, 2010 4:36 am

Willis,
CONCLUSION
As I said, I think that the preponderance of evidence shows that humans are the main cause of the increase in atmospheric CO2.

But that doesn’t square with your prior comment:
In addition, from an examination of the year-by-year changes it is obvious that there are other large scale effects on the global 13C/12C ratio. From 1984 to 1986, it increased by 0.03 per mil. From ’86 to ’89, it decreased by -0.2. And from ’89 to ’92, it didn’t change at all. Why?
If humans are to be seen as the major contributors to atmospheric CO2, then how is it that the second statement quoted above shows quite the opposite? Did not the use of mineral crude actually increase in those time spans?
Aside from that, I’m not going to worry much over the matter, inasmuch as CO2 has been effectively debunked as a so-called ‘greenhouse’ gas, what with NASA alluding to such, albeit not directly.
I might start worrying when I may no longer lay on the sand at the beach without having to wear an oxygen mask due to the low lying CO2 …

anna v
June 7, 2010 4:37 am

of course that should be flora, not fauna in my
anna v says:
June 7, 2010 at 3:59 am
Although humans as fauna contribute something like half a ton of CO2 a year each, and as we are 6 billion, that is 3 billion tons a year from our respiration cycle. To be compared with 8 or so gigatonnes of carbon from fossil fuels etc.; once the units are matched (3 Gt CO2 ≈ 0.8 GtC), that is roughly a factor of ten less. (Somebody asked.)
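Running the comment's own inputs through the unit conversion (the "8 or so gigatonnes" from fossil fuels is conventionally quoted as carbon, while the respiration figure is CO2) gives a ratio of roughly ten, not 1000. A quick sketch, with the 8 GtC/yr figure assumed as stated:

```python
# Back-of-envelope check of the respiration comparison, inputs as given
# in the comment (0.5 t CO2 per person per year, 6 billion people) plus
# an assumed ~8 GtC/yr for fossil-fuel emissions.
PEOPLE = 6e9
CO2_PER_PERSON_T = 0.5                                # t CO2 / person / yr

respiration_gt_co2 = PEOPLE * CO2_PER_PERSON_T / 1e9  # 3.0 Gt CO2 / yr

fossil_gt_c = 8.0                                     # GtC / yr (assumed)
fossil_gt_co2 = fossil_gt_c * 44.0 / 12.0             # ~29.3 Gt CO2 / yr

ratio = fossil_gt_co2 / respiration_gt_co2            # ~10x, not 1000x
```

Note also that respired carbon comes from recently grown food, so respiration recycles atmospheric CO2 rather than adding fossil carbon to it.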

June 7, 2010 4:45 am

All I know is that I have read comments from people who are offended by it. I use “AGW supporter”
The problem with “AGW supporter” is that it first presupposes that the case for AGW is “settled” (which it may or may not be, to a greater or lesser extent, distinct from CAGW which is anything but), and then subsequently implies that the person thinks it’s good – i.e. “supports” it. It all breaks down, literally, in too many ways to fairly represent the views of the person or the thing they believe in.
I propose “AGW believer”.

June 7, 2010 4:49 am

If the ice core data shows a lag between warming and CO2 rise, surely the last hundred years are the start of the normal CO2 rise from the MWP. That may explain the change in isotope ratios.

Douglas Cohen
June 7, 2010 4:49 am

Neither you nor most of the people commenting here seem to be thinking about the approximately 800-year delay between an increase in temperature and the corresponding increase in CO2 that is said to be revealed by the ice core measurements. Is that delay still supported by the latest data? If it is, then the recent rise in CO2 could be mostly due to the Medieval Warm Period — 800 years ago — indeed, it could be taken as a proxy measurement for exactly how much warmer the world climate was back then (using the rule that 7 deg C leads to a doubling of atmospheric CO2).

Grumbler
June 7, 2010 4:50 am

“Curiousgeorge says:
June 7, 2010 at 4:35 am
A philosophical question, Willis. Assume for the moment that we lacked the capability to measure CO2 ( or temperature other than what we feel on our skin ). Would we then perceive our current environment as beneficial or detrimental ?”
It’s not philosophical but practical. It’s fortuitous how all this happened just as we developed satellites and supercomputers. What great luck that our worst climate disaster coincided with huge advances in measuring and modelling!!
cheers David

FrankS
June 7, 2010 4:51 am

Thanks, Willis, for the mention of “half life”; that makes it easy to understand why the IPCC estimates must always be larger than churn rates.
Missing from your analysis is any allowance for natural change, beyond the conclusion that humans are the main cause of the increase in atmospheric CO2.
For instance, the ice cores show an 800-year lag between temperature and CO2, so should an estimate for this type of change be factored in? If non-human CO2 was naturally rising during this period, then the sequestered amount (orange) and the remaining CO2 (red) would each include a portion of increased non-human CO2 as well. The effect here would be to lengthen the e-folding time of CO2 beyond 31 years.

Det
June 7, 2010 4:53 am

I would also like to point out that deforestation really took off with the invention of the steam engine in the UK and the need to burn wood for it.
But also consider deforestation to gain farmland and, later, to expand settlements.
Even now, it still happens in the Amazon region and in Africa.
Aren’t we losing the CO2 storage capacity of the world’s forests?
Considering the fastest method today is still to burn everything down!
A working forest holds moisture and water, creates O2, removes dust and other pollutants from the air, and creates shade (local cooling).
Big cities with lots of concrete and asphalt become hot spots, use up water, funnel wind, and get dust airborne!
Why is this not considered as a cause in warming or CO2-concentration-rise models?

Grumbler
June 7, 2010 4:54 am

Hold on – CO2 straight line for 800 years and climate fluctuates dramatically over that period? And CO2 drives climate? What am I missing?
cheers David

Malaga View
June 7, 2010 4:58 am

Mmmmmmm… thanks for the excellent food for thought…
I have trawled through my mental archives after reading the article and comments…
Now if I remember rightly there is a TRICK somewhere… now what is it?
A TRICK that helps me understand Hockey Stick graphs….
A TRICK that helps me create Hockey Stick graphs…
Ahhhhhhh… I remember now 🙂

Take historic proxy data (of imprecise worth – like tree ring or ice core data) and then splice on some modern day observations. This recipe seems to work every time. Then flavour the latest data with a bit of Tabasco to ensure the end result is red and hot!!!!!

Gail Combs
June 7, 2010 4:59 am

Willis,
Fred H. Haynie, the retired EPA research scientist, mentioned to you a PDF covering this subject that he spent the last four years researching. He addresses the problems with the ice core CO2 data (it is too low), the carbon isotope issue, and others.
My question is: have you read the paper, and can you refute his points? Especially the point on the ice core data being too low and the differential absorption rate of the carbon isotopes.
The PDF: http://www.kidswincom.net/climate.pdf

Hoppy
June 7, 2010 5:00 am

Does the CO2 level in the trapped ice represent the composition of the original air, or is it the final equilibrium concentration between the trapped air and compressed snow? If it is an equilibrium, then it would be a low and very constant level, like that shown in Figure 1.
http://www.igsoc.org/journal/21/85/igs_journal_vol21_issue085_pg291-300.pdf
CO2 in Natural Ice
Stauffer, B | Berner, W
Symposium on the Physics and Chemistry of Ice; Proceedings of the Third International Symposium, Cambridge (England) September 12-16, 1977. Journal of Glaciology, Vol. 21, No. 85, p 291-300, 1978. 3 fig, 5 tab, 18 ref.
Natural ice contains approximately 100 ppm (by weight) of enclosed air. This air is mainly located in bubbles. Carbon dioxide is an exception. The fraction of CO2 present in bubbles was estimated to be only about 20%. The remaining part is dissolved in the ice. Measurements of the CO2 content of ice samples from temperate and cold glacier ice as well as of freshly fallen snow and of a laboratory-grown single crystal were presented. It is probable that a local equilibrium is reached between the CO2 dissolved in the ice and the CO2 of the surroundings and of the air bubbles. The CO2 content of ancient air is directly preserved neither in the total CO2 concentration nor in the CO2 concentration in the bubbles. Possibly the CO2 content of ancient air may at least be estimated if the solubility and the diffusion constant of CO2 in ice are known as a function of temperature. (See also W79-09342) (Humphreys-ISWS)

Po
June 7, 2010 5:04 am

This graph is too ‘smooth’.
I’d like to see the graph of each site individually and with the x axis stretched out a little. The reason is that elsewhere (Geophysical Research Letters 33) Law Dome CO2 level is reported as being relatively flat from 1940-1955 while general emissions were rising.
Variations within individual sites which may cast doubt upon the relentlessly increasing CO2 hypothesis are masked by this ‘conglomerate graph’ and by compression of the axes. This graph gives the impression that there is little or no variation within or between site records which could well be false, though masked by the presentation.

Bob
June 7, 2010 5:05 am

Off Topic, but I’ve been following the evolution of David Hathaway’s “Solar Cycle Predictions” of the sunspot cycle for a while. I’ve noticed that when he posts the current month’s real data, he quietly adjusts his predicted curve to fit the real data.
This month he has the cycle peaking in 2013 and having a peak of about 65. Last month, the modeled peak was 70. November 2009 it was about 78. And back in July 2007 his model predicted a peak of 150 in mid 2010.

Jack T
June 7, 2010 5:06 am

One thing that has always struck me about atmospheric CO2, as measured by ice cores, is how stable it appears to be over thousands of years, pre-modern times. Certainly, the climate has not been stable during all those years, yet there is CO2 basically doing nothing. Based on the volatility of climate and Earth in general, I got this gut instinct that something is wrong with the ice core CO2 records – they’re just too damn stable when it comes to CO2.
Here’s a short-term chart I ran across that adds to the thought that old ice core data for the long-term is not accurately reflecting CO2 levels existing during past climate conditions. Obviously, CO2 ppm level growth is influenced by climate changes, at least in the short run.
http://www.c3headlines.com/2010/02/hold.html
Are the ice core data just too coarse to reflect accurate CO2 levels from natural phenomena, such as the major ocean oscillations? Or do the ice cores “lose” the majority of the CO2 signal over time?

June 7, 2010 5:13 am

Daniel H says:
June 7, 2010 at 4:02 am

It would be great if someone at WUWT like Willis Eschenbach or Anthony Watts could convince Keeling to release his data since it was funded by mine and your tax dollars the NSF and NOAA and therefore ought to be in the public domain.

That would be a bit difficult, as Keeling died in 2005, refer to my recent links back to Anthony’s discussion with Pieter Tans, the current MLO director.
MLO does release a lot of data, can you be more explicit about what you are looking for that you can’t get from them?
Your comment wasn’t very explicit. One starting point is:
http://www.esrl.noaa.gov/gmd/obop/mlo/livedata/livedata.html

Steve in SC
June 7, 2010 5:17 am

Willis, you owe Jimmy Buffett big time.
Be ashamed!

Slioch
June 7, 2010 5:25 am

Douglas Cohen
June 7, 2010 at 4:49 am
The several hundred year delay between an increase in temperature and the corresponding increase in CO2 revealed by the ice core measurements, (eg Vostok) during glacial to interglacial transitions requires a net transfer of CO2 from the oceans and terrestrial biosphere to the atmosphere.
In contrast, between 1850 and 2000, human caused emissions of CO2 from burning fossil fuels equalled 1620 billion tons CO2, whereas the amount of CO2 in the atmosphere increased by only 640 billion tons. (data from Carbon Dioxide Information Analysis Centre). There is no way in which that can be explained other than by a net transfer of CO2 from the atmosphere to the oceans/biosphere, the opposite of the former process.
Human emissions of CO2 are more than able to explain recent increases in atmospheric CO2.
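Slioch's mass-balance argument is simple enough to check directly, taking the two figures exactly as given:

```python
# Slioch's mass balance, 1850-2000, figures as quoted (CDIAC-based
# cumulative emissions and the observed rise in the airborne stock).
emitted_gt_co2 = 1620.0       # Gt CO2 emitted by fossil-fuel burning
airborne_gt_co2 = 640.0       # Gt CO2 added to the atmosphere

absorbed_gt_co2 = emitted_gt_co2 - airborne_gt_co2    # 980 Gt into sinks
airborne_fraction = airborne_gt_co2 / emitted_gt_co2  # ~0.40
```

Since the oceans/biosphere took up 980 Gt more than they released over the period, they were a net sink, and so cannot simultaneously have been the source of the atmospheric rise.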

June 7, 2010 5:26 am

Ben M says:
Are you sure the CDIAC dataset is accurate
I think the CDIAC data are reasonable but might be exaggerated, as the real conversion to CO2 is not as efficient as calculated. Maybe the sequestered CO2 is about the same as the remaining CO2. If that’s true, it is easy to remember and tell others.
Where is all that sequestered CO2 going?
The oceans yes, but things sure are looking green here in Georgia. When we moved from drought two years ago to more rain than we know what to do with, this part of the planet has blossomed like a peach tree.

June 7, 2010 5:26 am

Slabadang says:
June 7, 2010 at 3:11 am

Blaming CO2 without understanding the negative and positive feedbacks is like eating dinner before you have cooked it! CO2 is a gas for life. It’s very possible that we are increasing the chances for life on the planet by increasing CO2 turnover.
But no one is calculating the benefits.

Asking someone to prove a negative is a bit of a dirty trick, but it’s easy to disprove your “no one” claim. Just ask Google. Very little is all good or all bad, and more research is warranted and I’m sure in progress.
http://news.nationalgeographic.com/news/2002/11/1122_021125_CropYields.html
http://www.co2science.org/articles/V13/N10/B2.php

June 7, 2010 5:37 am

Oslo says:
June 7, 2010 at 3:53 am

Well, as you say – your first graph resembles the Mann hockey stick, and perhaps for good reason, as it seems to utilize the good old “trick” – splicing the instrumental record onto the proxy data.
Here is another graph, clearly showing the instrumental data (red) disjointed from the proxies:
http://www.geo.cornell.edu/eas/energy/_Media/ice_core_co2.png
So the question is, as with the hockey stick: do the figures from the two methods even belong on the same graph?

I don’t understand your complaint – in Willis’ graph, the MLO record is displayed as the lowest level and with large dots. That allows the icecore data to stand out on top of the MLO data where there is overlap. Mann discarded data he didn’t like, used data he liked, and obscured what he did. Willis’ graph is above board on all three accounts.
The graph that you offer is interesting, but since it covers 650,000 years, the MLO record is only about one pixel wide and the time aspect is compressed into essentially no information.
The instrumental record on Willis’ graph is plenty disjoint from the proxy data by style, color, and number of samples. At least, that’s my reading of it.

Enneagram
June 7, 2010 5:38 am

The trouble is that “trapped heat” is one thing and the supposed “greenhouse effect” quite another, and:
CO2 follows temperature, not the other way around. Open a coke and you’ll see it: the longer you hold it in your warm hand, the more gas will come out when you open it.
CO2 is the transparent gas we all exhale (SOOT is black = carbon dust) and plants breathe with delight, to give us back what they exhale instead: the oxygen we breathe in.
CO2 is a TRACE GAS in the atmosphere: 0.038% of it.
There is no such thing as a “greenhouse effect”; “greenhouse gases” are gases IN a greenhouse, where heated gases are trapped and relatively isolated so as not to lose their heat so rapidly. If the greenhouse effect were true, with CO2 acting, as Svante Arrhenius figured it, “like the window panes in a greenhouse”, the trouble is that those panes would be only 3.8 panes out of 10,000; there would be 9,996.2 HOLES.
See:
http://www.scribd.com/doc/28018819/Greenhouse-Niels-Bohr
CO2 is a gas essential to life. All carbohydrates are made of it. The sugar you eat, the bread you ate at breakfast this morning, even the jeans you wear (these are made from 100% cotton, a polymer of glucose, made of CO2… you didn’t know it, did you?)
You and I, we are made of CARBON and WATER.
CO2 is heavier than air, so it cannot go up, up and away to cover the earth.
The atmosphere, the air, cannot hold heat: its volumetric heat capacity is 0.00192 joules per cubic centimetre, while water’s is 4.186, i.e., about 2,200 times greater.
This is the reason why people used hot water bottles to warm their feet, and not hot air bottles.
Global warmers’ models (à la Hansen) expected a kind of heated CO2 piggy bank to form in the tropical atmosphere; it never happened, simply because it cannot.
If global warmers were to succeed in achieving their SUPPOSED goal of lowering CO2 level to nothing, life would disappear from the face of the earth.
So, if no CO2 NO YOU!

Bill Marsh
June 7, 2010 5:44 am

“During the ice age to interglacial transitions, on average a change of 7°C led to a doubling of CO2. We have seen about a tenth of that change (0.7°C) since 1850, so we’d expect a CO2 change from temperature alone of only about 20 ppmv.”
Is the relationship linear? It appears that the assumption here is that it is. Although even if it is exponential, the CO2 attributed to temperature rise at this stage (0.7 °C) would not be substantially different than if the relationship were linear.
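Taking the 7 °C-per-doubling rule both ways answers the linearity question numerically. The 280 ppmv pre-industrial baseline here is my assumption, not a figure from the post:

```python
# The glacial "7 degC per doubling of CO2" rule, read two ways, applied
# to the observed 0.7 degC rise since 1850. Baseline of 280 ppmv assumed.
BASE_PPMV = 280.0
DT = 0.7           # observed warming, degC
DOUBLING_DT = 7.0  # degC per doubling of CO2

exp_rise = BASE_PPMV * (2.0 ** (DT / DOUBLING_DT) - 1.0)  # ~20 ppmv
lin_rise = BASE_PPMV * (DT / DOUBLING_DT)                 # 28 ppmv
```

The exponential reading gives about 20 ppmv (matching Willis's figure) and the linear reading 28 ppmv; as Bill Marsh suggests, at only 0.7 °C the two readings are of the same order.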

Edward Boyle
June 7, 2010 5:51 am

This valuable post mentions a large source of CO2 which has not been given wide publicity – cement manufacture. The warmers will now start lobbying for reduction in thickness of roads, cobblestone pavements, smaller buildings, no new bridges or other large concrete construction. With the new smaller cars which will give greater gas mileage, weaker roadways should be adequate, and the future of the earth should be assured.

Steve Fitzpatrick
June 7, 2010 6:00 am

Willis,
“I’m outta here.”
A very wise decision.


June 7, 2010 6:09 am

Claims that cement manufacture introduces significant CO2 are the work of people who don’t understand the science. Sadly, that includes the US Government. When the cement is mixed with water, it absorbs the CO2 back from the atmosphere. Without CO2, the cement would never harden.
http://en.wikipedia.org/wiki/Portland_cement
“Carbon dioxide is slowly absorbed to convert the portlandite (Ca(OH)2) into insoluble calcium carbonate. “

Pyromancer76
June 7, 2010 6:10 am

Willis, I am concerned about your frustration level with AGWers coupled with your great intelligence and precise math. You wrote, “People seem madly passionate about this question. So I figure I’ll deal with it by employing the method I used in the 1960s to fire off dynamite shots when I was in the road-building game … light the fuse, and run like hell …” Yes, but….The dynamite is carefully placed in a surveyed and planned area for the roadway — and, of course, human activities and the geology around that future road have been carefully taken into consideration.
Yes, sometimes you can take the “opposition’s” figures and research for granted and work out the science. However, if those figures are not accurate or do not take the complexity of conditions into account, I am not sure they can be valid. As I first read the essay, a number of red flags went up in my mind:
1, 2, and 3. Ice cores, ice cores, ice cores. Too much depends on CO2 registration in ice core data, which has been neatly fitted to Mann’s Hockey Stick without enough checking and cross-checking. Next, are you taking the CDIAC dataset at face value when we know how much, dare I say it, dishonest fudging of the data upward has been done with the temperature (thermometer) data? Can you assert that this data can be fully trusted in its current “official” state? Then there is the chemistry, perhaps more complex than you suggest (Slioch 2:41 am). Then there is the rather amazing history of climate and human activities during warm periods (tonyb 1:42 am). Are you really sure that there was no rise in CO2 because of (at least) outgassing from the oceans during those times — like 1850 to today? Is the flat line from ice cores reasonable to a reasonable mind?
Slabadang 3:11 seems appropriate: “To me the work on climate science more describes how little we know, rather than how much.” I think you give the “warmists” (I wish it were warming) too much credit for accurate data and interpretation. Richard Courtney (2:43 am) gives a long exposition on the complexities you dismiss: “But it is important to understand that there is no evidence which could be said to prove the matter.” Your argument is too close, too similar, to that of the pseudo-scientists who published in Nature: since there was a mass migration of humans to the New World before the Holocene, and because we know they killed off the megafauna, and because the warmth of the earth depended on the methane in megafauna burps, we know that humans were the cause of the Younger Dryas — just like we are the cause of warming today. Watts Up With That argument?
Anyway, I like your ending. Thanks for your continuing efforts to keep the discussion-debate lively. Next time, don’t run like hell.

June 7, 2010 6:13 am

Statement written for the Hearing before the US Senate Committee on Commerce, Science, and Transportation
Climate Change: Incorrect information on pre-industrial CO2
March 19, 2004
Statement of Prof. Zbigniew Jaworowski
Chairman, Scientific Council of Central Laboratory for Radiological Protection
Warsaw, Poland
Figures 1A and 1B
The data from shallow ice cores, such as those from Siple, Antarctica[5, 6], are widely used as a proof of man-made increase of CO2 content in the global atmosphere, notably by IPCC[7]. These data show a clear inverse correlation between the decreasing CO2 concentrations and the load-pressure increasing with depth (Figure 1 A). The problem with the Siple data (and with other shallow cores) is that the CO2 concentration found in pre-industrial ice from a depth of 68 meters (i.e. above the depth of clathrate formation) was “too high”. This ice was deposited in 1890 AD, and the CO2 concentration was 328 ppmv, not about 290 ppmv, as needed by the man-made warming hypothesis. A CO2 atmospheric concentration of about 328 ppmv was not measured at Mauna Loa, Hawaii until as late as 1973[8], i.e. 83 years after the ice was deposited at Siple.
Sorry the figure did not copy. However, for Willis to be correct Prof. Jaworowski must be wrong. I doubt he intentionally lied to Congress.

Steve Keohane
June 7, 2010 6:18 am

Good analysis Willis. My only concern is the relative amount of CO2, anthropogenic vs. not. I have the following stored from a few years ago, without the source, which seems to be the basis for the 3% anthro-CO2 contribution to the annual CO2 budget that is often quoted.
CO2 EMISSIONS :
1. Respiration Humans, Animals, Phytoplankton 43.5 – 52 Gt C/ year
2. Ocean Outgassing (Tropical Areas) 90 – 100 Gt C/year
3. Volcanoes, Soil degassing 0.5 – 2 Gt C/ year
4. Soil Bacteria, Decomposition 50 – 60 Gt C/ year
5. Forest cutting, Forest fires 0.6 – 2.6 Gt C/year
Anthropogenic emissions (2005) 7.5 – 7.5 Gt C/year
TOTAL 192 to 224 Gt C/ year
The table shows the range of estimates of natural CO2 and human production in 2005 (Gt C/year is Gigatons of Carbon per year). Accuracy has not improved since. Notice the human contribution is within the error range of three (1, 2, & 4) of the natural sources. The total error range is almost 5 times the amount of total human production.
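The quoted totals and the "almost 5 times" claim can be checked by summing the table as listed:

```python
# Summing Steve Keohane's table: (low, high) estimates in GtC/yr, as given.
sources = {
    "respiration (humans, animals, phytoplankton)": (43.5, 52.0),
    "ocean outgassing (tropical areas)": (90.0, 100.0),
    "volcanoes, soil degassing": (0.5, 2.0),
    "soil bacteria, decomposition": (50.0, 60.0),
    "forest cutting, forest fires": (0.6, 2.6),
    "anthropogenic emissions (2005)": (7.5, 7.5),
}
total_lo = sum(lo for lo, hi in sources.values())     # 192.1 GtC/yr
total_hi = sum(hi for lo, hi in sources.values())     # 224.1 GtC/yr
spread = sum(hi - lo for lo, hi in sources.values())  # 32 GtC/yr uncertainty
```

The spread of the natural-source estimates (32 GtC/yr) is indeed a bit over four times the 7.5 GtC/yr human figure, consistent with the "almost 5 times" characterization.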

Further, I have read that the amount of CO2 emitted by termites is enormous, 50 gigatons/year. In thinking about termites, it occurs to me that what they are consuming is wood that is probably at least 40 years old, often older, considering the life cycle of trees. Would their diet of old wood skew the C12/13 ratios by releasing the C that was sequestered in the wood decades earlier?
Source for termite CO2 production: http://www.sciencemag.org/cgi/content/short/218/4572/563

tallbloke
June 7, 2010 6:23 am

Harry says:
June 7, 2010 at 1:16 am
RULES FOR THE DISCUSSION OF ATTRIBUTION OF THE CO2 RISE
Amen!

Didn’t work too well on the debate about Ravetz’ theory, where the main protagonist of ad hominem attacks and extreme language was…. Willis Eschenbach.
😉

Suzanne
June 7, 2010 6:25 am

The Stauffer/Berner paper addresses the same points made by Jaworowski about the validity of the ice cores as a measurement of previous atmospheric CO2 levels. Jaworowski maintained that the ice was not a closed system as pertains to CO2.
When one looks at the Vostok ice core record, it is apparent that the older the core, the more obvious the lag between the change in temperature and the change in CO2. The point has already been made that the stomatal measurements of CO2 show much more variability and do not track the ice core records. There has been no mention of Beck’s compilation of >90,000 historical CO2 measurements, which showed a post-war rise in CO2 to about 400 ppm, which would make the peak just after the warm period of the ’30s and during WWII. Callendar showed similar variability but then selected those values he thought fit the theory. It is surprising that no one is actually publishing research on the question of movement of CO2 through glacial ice, in light of the evidence that ice may not give an accurate picture of CO2 levels.

Martin Brumby
June 7, 2010 6:31 am

Willis
I’ll make a deal.
When we’re talking serious science I’ll happily call ’em “AGW Supporters” “AGW Believers” or even “Nervous Climate Scientists”, if you prefer.
But when they are arguing for Trillions of Pounds / Dollars to be spent NOW screwing up the sources of energy on which the economy depends, on the basis of junk science scare stories, then I’m gonna keep on calling ’em “eco-fascist nut jobs”.
Just put it down to Tourette’s.
Sorry, can’t help myself!

Britannic no-see-um
June 7, 2010 6:31 am

anna v June 7, 2010 at 4:37 am
Thanks for that.

Martin Brumby
June 7, 2010 6:37 am

S Courtney says: June 7, 2010 at 2:43 am
You left out (13) Manufacturing bread and booze.
Don’t think that affects your argument too much, though.
Although it would be interesting to know how much (globally) fermentation produces. Probably a piece more than we save by using those pesky “energy saving” light bulbs.

Mr Lynn
June 7, 2010 6:37 am

Darkinbad the Brightdayler says:
June 7, 2010 at 1:05 am
“So if you are going to believe that this is not a result of human activities”
I’m not comfortable with the use of the words “Believe” or “Disbelieve” in a scientific context. These words are more appropriate to discussions about religion and concepts which are not open to a process of proof.
To pull them into a scientific debate is to allow participants to think and respond in a less rigorous way than they ought.

I agree, but many, many folks, including scientists, use “I believe” to mean, inter alia, “I think,” “I suppose,” “In my opinion,” “I am more or less convinced,” etc., etc. Some time ago I suggested to Willis that he eschew the terms “believe” and “belief,” in favor of more specific language, but he dismissed the idea.
Back on topic, it will be interesting to see how Willis responds to the comments above that raise doubts about the validity of the ‘observed’ rise in CO2 (that tags recent atmospheric measurements on to old ice-core ones) and about the role of the oceans in the carbon cycle. Do these affect his belief (= confidence?) that the CO2 spike (if it is real) is mostly anthropogenic?
BTW, what contribution do forest/wildfires make to atmospheric CO2? Will they not mimic fossil-fuel combustion?
/Mr Lynn

tallbloke
June 7, 2010 6:40 am

FrankS says:
June 7, 2010 at 4:51 am
For instance the ice cores show a 800 year lag between temp and CO2. So should there be an estimate for this type of change factored in.

The fact that it’s been around 800 years since the Medieval Warm Period should be considered too.
I notice Beck didn’t get a mention either.

Paul Linsay
June 7, 2010 6:40 am

“I will start with Figure 1. As you can see, there is excellent agreement between the eight different ice cores, including the different methods and different analysts for two of the cores. There is also excellent agreement between the ice cores and the Mauna Loa data. Perhaps the agreement is coincidence. Perhaps it is conspiracy. Perhaps it is simple error. Me, I think it represents a good estimate of the historical background CO2 record.”
No conspiracy is required, it’s just human psychology. There’s a famous example in physics of just this kind of behavior. In the early 1900s R.A. Millikan discovered that the charge of the electron is quantized, but he got the actual value wrong. It took a long time and many experiments to finally arrive at the right value. If someone measured a value that differed from the original one, they would look for errors until they got agreement with Millikan and stop there, instead of looking for all possible sources of error.

Geoff Smith
June 7, 2010 6:43 am

Get yer butt back here, no lighting this fuse then running for the hills…. LOL
Sorry but pretty graphs and nice numbers mean nothing when they only talk about the tiniest percentage of the history of something….anything.
Have we not been over that fact on this site time and time again? One thousand years out of 4.5 billion; what is that!
You must start at the beginning.
The first step in understanding must be to say has this happened before. If yes then can we determine why. If no, this is the first time, then we start from the beginning and move forward to see what changed to bring about this increase.
In the case of our little blue marble we have seen from the ice cores that indeed there has been much higher levels of CO2 in the past.
There was a cause then, and we know it was not our ancestors running around in their Flintstone cars. Have we determined this cause? Have we looked for this cause in our last one thousand year time period?
My god man how would you feel if you were present at a murder scene along with the rest of the crowd and the police suddenly turn to you as the guilty party?
You ask them why and they simply say you are here.
Now gather up your notes, take them outside and as a true offering for forgiveness burn them and add your bit of CO2 to the skies.
Once you’ve warmed yourself by the fire as your ancestors once did, seen the immenseness of the sky, felt your insignificance on this giant marble of which we occupy very little space, come back in and start at the beginning.
What was going on in the past that caused the gas levels to change? WHICH gas levels changed the most? Did the warmth come before the gas change or after? …..while you’re pondering that one, have a beer or two, maybe one in a glass in the fridge and one in a glass on a hot patio.
Oh yes and in an attempt to provide reading material with yet more pretty graphs but at least on a grander scale take a peek at this.( probably 50 articles to refute it as the debate continues)
Nasif Nahle. 2007. Cycles of Global Climate Change. Biology Cabinet Journal Online. Article no. 295.

June 7, 2010 6:50 am

In the end I agree with Mr. Willis Eschenbach, as the CO2 level has little or nothing to do with the temperature. The debate over CO2 levels is minor in nature and only proves what Einstein said “Not everything that can be measured is important. And not everything that is important can be measured.” At least I think he said this.

June 7, 2010 6:55 am

Willis said: “So if you are going to believe that this is not a result of human activities, it would help to answer the question of what else might have that effect. It is not necessary to provide an alternative hypothesis if you disbelieve that humans are the cause … but it would help your case. Me, I can’t think of any obvious other explanation for that precipitous recent rise.”
Could not the answer be “the same thing, or things, that caused similar atmospheric CO2 rises in the past”?
If you would say anthropogenic CO2 emissions “may” be causing, or causing a portion, of the recent rise, then you and Richard S. Courtney would appear to be in agreement, would you not?

1DandyTroll
June 7, 2010 6:56 am

Isn’t it a bit silly, and IPCCian, to state that if a person truly doesn’t have another viable hypothesis that can explain the increase in carbon emissions, one that fairly fits the world increase in tomato use or population or the number of new rice fields or et cetera, then people can’t speak their mind, lest they wanna pretty much be ignored?
But essentially in the year 1850, and before, even though the whole world was in a coal-burning frenzy, and not only from industrialization but also from private coal-fired heaters and boilers, they somehow only managed to emit about 0.5 gigatonne of carbon.
Personally I just don’t see it.

June 7, 2010 6:59 am

Humans are most certainly the cause of the recent CO2 increase. A simple graph comparing CO2 with the population should offer an important hint:
http://voksenlia.net/met/co2/pop.jpg
I find it a bit interesting that the correlation between population and CO2 is roughly the same for the past centuries. Does it mean that the level of technology is not that relevant? That a pre-industrial society causes roughly the same CO2 increase per person as the modern society?
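The population/CO2 comparison in the linked chart can be sketched in a few lines. The decadal values below are rough, round-number approximations (population in billions; Mauna Loa annual means in ppm) and are meant only to illustrate the computation, not as authoritative data:

```python
from math import sqrt

years = [1960, 1970, 1980, 1990, 2000]
pop   = [3.0, 3.7, 4.5, 5.3, 6.1]    # world population, billions (approximate)
co2   = [317, 326, 339, 354, 370]    # Mauna Loa annual mean, ppm (approximate)

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(pop, co2)
print(f"correlation(population, CO2) = {r:.3f}")  # very close to 1
```

Two monotonically rising series will always correlate strongly, so a high r here is suggestive rather than probative; it is consistent with, but does not by itself establish, the per-person emission point raised above.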

Gail Combs
June 7, 2010 7:01 am

Juraj V. says:
June 7, 2010 at 2:53 am
“….Today, the rate of CO2 rise plays well with SST data.
http://climate4you.com/images/CO2%20MaunaLoa%20Last12months-previous12monthsGrowthRateSince1958.gif1998 El Nino is clearly visible, also La Nina and volcanic eruptions. But strange that 2007 La Nina is not visible. More, as oceans start to cool, the rate of rise stabilizes.”

____________________________________________________________________
I would like to look at that but the link seems to be broken.
[Reply: It may be this:
http://www.climate4you.com/images/GISS%20GlobalMonthlyTempSince1958%20AndCO2.gif
or this:
http://www.climate4you.com/images/CO2%20MaunaLoa%20Last12months-previous12monthsGrowthRateSince1958.gif
~dbs]

Brad
June 7, 2010 7:15 am

Agree completely, I think this post is the right answer. The CO2 increase is human caused, but the increase in CO2 is responsible for only part of, and maybe a very small part of, the temp rise. Just wait, if the sunspots don’t come back and we have a very weak solar cycle, well…

Ernesto Araujo
June 7, 2010 7:15 am

The debate about what causes CO2 concentration in the atmosphere is pointless. What matters is: does the increase in CO2 concentration cause warming? The whole thing is about Global Warming, not about CO2 concentration. To indicate that CO2 increase causes warming, you would need to present a curve where temperature oscillations match CO2 concentration, and that curve clearly does not exist for the last 1000 years, nor for the last 150 years.

Enneagram
June 7, 2010 7:22 am

There is a rule for salesmen: Never, never mention the name of your competition, because it would immediately increase its sales. That is what this post is, consciously or unconsciously, doing: paving the way to Cancun.

Phil.
June 7, 2010 7:32 am

Gail Combs says:
June 7, 2010 at 7:01 am
Juraj V. says:
June 7, 2010 at 2:53 am
“….Today, the rate of CO2 rise plays well with SST data.
http://climate4you.com/images/CO2%20MaunaLoa%20Last12months-previous12monthsGrowthRateSince1958.gif1998 El Nino is clearly visible, also La Nina and volcanic eruptions. But strange that 2007 La Nina is not visible. More, as oceans start to cool, the rate of rise stabilizes.”
____________________________________________________________________
I would like to look at that but the link seems to be broken.

Delete the terminal ‘1998’ and it works.

Enneagram
June 7, 2010 7:33 am

Geoff Smith says:
June 7, 2010 at 6:43 am
Now gather up your notes, take them outside and as a true offering for forgiveness burn them and add your bit of CO2 to the skies.
http://biocab.org/carbon_dioxide_geological_timescale.html

June 7, 2010 7:40 am

You make a good case for a human-caused increase in atmospheric CO2. The rise in CO2 levels since 1945 is unprecedented in many thousands of years of geologic history, and no natural cause (including volcanic activity) is known to be capable of producing the rise in CO2 over the past 65 years.
So let’s assume that humans have caused the rise in CO2. That still begs the question of whether or not the increase in CO2 is the cause of global warming. CO2 makes up 0.038% of the atmosphere, accounts for only 3.6% of the greenhouse effect, and has increased only 0.008% since ~1945.
‘CO2-pushers’ (for lack of a better term) claim this is enough to increase atmospheric water vapor (which accounts for ~95% of the greenhouse effect) and cause warming. The problem with this is that they haven’t demonstrated that water vapor has increased at all, and in fact, some data suggest decreases in water vapor over past decades.
We have had four climate shifts in the past century (cool, 1880-1915; warm, 1915-1945; cool, 1945-1977; warm, 1977-1999; cool, 1999-2010), and Greenland ice core data show that we have had 40 similar warm/cool oscillations in the past 500 years, with temperature increases of up to 15°F in as little as 40 years, none of which could possibly be caused by human-made CO2 because they all occurred before CO2 levels rose. Conclusion: naturally caused climatic fluctuations have been commonplace for tens of thousands of years without any relationship to CO2 (other than that CO2 goes up after warming occurs).
So my question for you, Willis, is how about doing the same kind of analysis of atmospheric water vapor changes as you did with CO2, and calculating the total maximum effect of CO2 by itself… then light the fuse and run like hell!
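A quick unit check on the percentages above: “percent of the atmosphere” and ppmv are the same quantity in different units, and the quoted 0.038% and 0.008% correspond to roughly 380 ppmv today and an 80 ppmv rise (i.e. from about 300 ppmv). A minimal sketch, with the ppmv values assumed to match the quoted figures:

```python
# Convert between ppmv and percent-of-atmosphere: 1% = 10,000 ppmv.
def ppmv_to_percent(ppmv):
    return ppmv / 1e4

co2_then = 300  # ppmv, ~1945 (assumed, to reproduce the quoted figures)
co2_now  = 380  # ppmv, recent (assumed, to reproduce the quoted figures)

print(f"CO2 now: {ppmv_to_percent(co2_now):.4f}% of the atmosphere")
print(f"rise since ~1945: {ppmv_to_percent(co2_now - co2_then):.4f} percentage points")
```

Note that the ice cores usually put ~1945 at closer to 310 ppmv, so the quoted 0.008% rise may be slightly on the high side.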

RockyRoad
June 7, 2010 7:42 am

The discussion about CO2 is incomplete without a long-range historical graph showing CO2 concentrations over the geologic record so the current trend/amount can be put into perspective. Then of equal interest would be the cause of these CO2 concentrations that far exceed current levels. And while there is little doubt that (some/much/most) of the current up-tick is anthropogenic, the causes that propelled CO2 in the past to much higher levels than we currently see should be discussed, as those were certainly not anthropogenic.

wayne
June 7, 2010 7:42 am

Thanks Hopper for the Stauffer & Berner paper, that clears many things. If I open my mouth further on this subject I’m afraid I might cross one of those lines, so, like Willis, I’m outta here.
(Oh, stick to the numbers… 2.18 seems closer than 2.13)

Julian Flood
June 7, 2010 7:43 am

Re Fig 1. Levels of CO2 do not start to rise in 1850; the rise begins in 1750 or slightly earlier. This ties in nicely with Ferdinand Engelbeen’s graph of 12C/13C ratios, which also shows divergence beginning in the middle of the 18th century.
Explanations of human interference with the atmosphere should begin around 1750.
JF

June 7, 2010 7:49 am

With respect to residence time in the atmosphere, I understand that the A-bomb tests of the 1950s showed the half life to be five years. This means that the atmosphere is in equilibrium with part of the oceans with a lag of only a few years. My calculations say that on average it is the top 100 metres.
The CDIAC is the nuclear industry’s contribution to global warming hysteria. If you look at their experiments on the growth response of plants to increased CO2, they added ozone to their artificial atmospheres in order to get a negative response.
The cooling of the next 20 years should result in a flat atmospheric CO2 trend.

Shevva
June 7, 2010 7:52 am

Can I just point out that if your study doesn’t support the earth-is-turning-into-a-giant-fan-assisted-oven hypothesis, then you’re not getting a grant.
Great work Mr Willis as direct and clear for us novices as ever.

HankHenry
June 7, 2010 7:53 am

The title is misleading. What’s there to blame anyone for in this piece?
The Keeling curve is so damn perfect it makes one wonder.

T.C.
June 7, 2010 7:54 am

Where is Beck’s curve in the summary? Without it you don’t have the full story. How do we know that the ice core data is accurate? Maybe glacial scientists are just making the same systematic error; there seem to be a lot of assumptions built into the gas analysis of ice cores. A lot of things have not been taken into consideration, for example the biology of the snow that creates the ice:
http://www.aad.gov.au/default.asp?casid=2437
Again, if this is correlation = cause, as promulgated by the CAGW, then why don’t we see decreases in CO2 at ML along with decreased fossil fuel consumption and industrial activity by humans during the early 70’s, early 80’s (especially the early 80’s), early 90’s, late 90’s, etc.? Even if there is a lag in response, the decrease should show up? I don’t see it.
And as
Steve Keohane says:
June 7, 2010 at 6:18 am
the contribution of anthropogenic CO2 is utterly overwhelmed by natural sources of CO2 emission, particularly that fluxing in and out of the oceans. The idea that anyone can track anthropogenic CO2 in the midst of these fluxes is just silly, but I suppose careers certainly have been built on far less.

Phil.
June 7, 2010 7:55 am

Paul Linsay says:
June 7, 2010 at 6:40 am
No conspiracy is required, it’s just human psychology. There’s a famous example in physics of just this kind of behavior. In the early 1900s R.A. Millikan discovered that the charge of the electron is quantized, but he got the actual value wrong. It took a long time and many experiments to finally arrive at the right value. If someone measured a value that differed from the original one, they would look for errors until they got agreement with Millikan and stop there, instead of looking for all possible sources of error.

This result was actually controversial, and Ehrenhaft and his collaborators in particular contested it; consequently there is no chance that deviations from the Millikan value would be ignored. The modern value for e differs from Millikan’s by less than 1%, which is mostly due to the modern value for the viscosity of air.

June 7, 2010 7:59 am

One thing/question I would like to insert about the Mauna Loa CO2 data; http://www.esrl.noaa.gov/gmd/ccgg/trends/
shouldn’t there be some reflection of the large volcanic eruptions during that period, mainly Mt. St. Helens (1980), El Chichon (1982), Mt. Pinatubo (1991), etc.? Looking at the full Mauna Loa CO2 record and the annual mean growth rate for Mauna Loa for those years and shortly after, there is nothing to suggest anything volcanic happened, as opposed to the SO2 levels during those same times.
Does it take a Yellowstone-type eruption to make a mark on those measurements or is something else up?
Just wondering…Jeff

Enneagram
June 7, 2010 8:06 am

Stockholm syndrome anyone?
In psychology, Stockholm syndrome is a term used to describe a paradoxical psychological phenomenon wherein hostages express adulation and have positive feelings towards their captors that appear irrational in light of the danger or risk endured by the victims
http://en.wikipedia.org/wiki/Stockholm_syndrome

June 7, 2010 8:10 am

RockyRoad says:
“The discussion about CO2 is incomplete without a long-range historical graph showing CO2 concentrations over the geologic record so the current trend/amount can be put into perspective.”
Here’s one chart of the geological record. [click on the chart to embiggen]

G. Karst
June 7, 2010 8:11 am

SimonH:
“I propose “AGW believer”.”
I agree but “AGW convinced” or “AGW unconvinced” avoids religiosity, while retaining correct meaning. GK

Steve Hempell
June 7, 2010 8:22 am

Willis,
Care to comment on this?
http://www.rocketscientistsjournal.com/2007/06/on_why_co2_is_known_not_to_hav.html
Just like to have your opinion. I get lost in the mathematics.

John Hounslow
June 7, 2010 8:24 am

A couple of points to provide food for thought:
1. Add to figure 2 a total global population line.
2. Something odd about CO2 seems to have started 10,000 years ago – from the ice core results it has been on an upward trend, even though temperature has been on a downward trend. Contrast with what happened during 10,000-year periods at comparable points in previous cycles – 120,000 to 130,000 years ago, 230,000 to 240,000 years ago, 325,000 to 335,000 years ago, 400,000 to 410,000 years ago. In each of those periods CO2 followed temperature down. What caused the change? Total human population was pretty puny 10,000 years ago.

A C Osborn
June 7, 2010 8:25 am

I have not read all the posts on here so I apologise in advance if this has been mentioned before.
I have to disagree with the accuracy of this remark.
“It is unlikely that the change in CO2 is from the overall temperature increase. During the ice age to interglacial transitions, on average a change of 7°C led to a doubling of CO2. We have seen about a tenth of that change (0.7°C) since 1850, so we’d expect a CO2 change from temperature alone of only about 20 ppmv.”
On the grounds of the 600-800 year time lag that is supposed to be shown in ice core evaluation: the changes we see would relate to what happened to temperatures in the past, not what is happening now.
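For what it’s worth, the scaling in the quoted passage is easy to verify. A minimal sketch, assuming a ~280 ppmv pre-industrial baseline and that CO2 scales as one doubling per 7 °C:

```python
# If 7 degC of glacial-interglacial warming doubles CO2, then a rise of
# dT scales CO2 by 2**(dT / 7). Apply that to the ~0.7 degC observed
# since 1850 on an assumed ~280 ppmv pre-industrial baseline.
baseline = 280.0            # ppmv, pre-industrial (assumed)
dT = 0.7                    # degC since 1850
factor = 2 ** (dT / 7.0)    # about 1.072
expected = baseline * factor

print(f"CO2 expected from temperature alone: {expected:.0f} ppmv "
      f"(a rise of about {expected - baseline:.0f} ppmv)")
```

That reproduces the post’s “about 20 ppmv”. It is a check of the arithmetic only; it does not address the lag point above, which concerns when any temperature-driven response would appear.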

AndrewS
June 7, 2010 8:27 am

One is the residence time of CO2. This is the amount of time that the average CO2 molecule stays in the atmosphere. It can be calculated in a couple of ways, and is likely about 6–8 years.
The other measure, often confused with the first, is the half-life, or alternately the e-folding time of CO2. Suppose we put a pulse of CO2 into an atmospheric system which is at some kind of equilibrium. The pulse will slowly decay, and after a certain time, the system will return to equilibrium.

That I don’t get. Assuming a residence time of 6 years, we get ~16.7% of the pulse absorbed in the first year. ~30.5% is gone after two years. And after only four years more than half of the original pulse will be absorbed (~51.77%). That means that a 6-year residence time results in less than 4 years of half-life time. The same analysis for 8 years of residence time results in just above 5 years of half-life time. Is anything wrong with my reasoning?
Andrew
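Andrew’s figures check out under his assumptions. A minimal sketch comparing his discrete year-by-year removal (a fraction 1/tau absorbed each year) with the continuous e-folding version; note that it treats the residence time as the pulse’s decay constant, which is exactly the conflation of the two measures the post warns about:

```python
from math import log

tau = 6.0  # assumed residence time in years, as in the question

# Discrete model: each year a fraction 1/tau of what remains is absorbed,
# so the fraction remaining after n years is (1 - 1/tau)**n.
remaining_4yr = (1 - 1 / tau) ** 4                  # ~0.482, i.e. ~51.8% absorbed
half_life_discrete = log(0.5) / log(1 - 1 / tau)    # ~3.80 years

# Continuous model: fraction remaining after t years is exp(-t/tau),
# giving the familiar half-life = tau * ln(2).
half_life_continuous = tau * log(2)                 # ~4.16 years

print(f"absorbed after 4 years (discrete): {1 - remaining_4yr:.2%}")
print(f"half-life, discrete model:   {half_life_discrete:.2f} yr")
print(f"half-life, continuous model: {half_life_continuous:.2f} yr")
```

Either way the half-life comes out below the 6-year time constant, as Andrew says. The catch is that the e-folding time of a pulse is governed by net uptake into slower reservoirs, not by the gross residence time, so in the real atmosphere the two numbers need not be this close.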

Dave Springer
June 7, 2010 8:29 am

Willis asks what else other than human CO2 emissions might have caused recent rise.
Land use changes – primarily taking out old-growth forests. Remember how the Kyoto protocol was originally supposed to credit countries for reforestation efforts, until it was discovered that the US had planted so many trees it would get a huge credit? The Europeans then balked at giving a credit for reforestation, and Clinton and Bush both decided not to sign when the reforestation credits were removed.
Warming oceans. Natural warming of the ocean releases dissolved CO2 like a glass of beer.
Question for Willis: throughout the vast majority of earth’s history, atmospheric CO2 levels were at least several times, and as much as an order of magnitude, higher than today. It’s only during the relatively recent period of glacial-interglacial cycles that the earth’s temperature and the CO2 content of the atmosphere have been this low. The whole damn planet was tropical for most of its history. How do you explain that, given the sun was 10% cooler in the distant past and fossil fuel reserves were being built up instead of being burned up?

PJB
June 7, 2010 8:31 am

From the Vostok data, CO2 lags the temperature from 50 to 600 years or so….
We are at the “end” of a warming period with a gradual lowering of temperatures and despite the recent “increase”, shouldn’t the natural trend of CO2 content be somewhat downward? That being the case, are we “diverting” the temperature drop into the next ice-age by our carbonaceous ways?
Just askin’.

Stop Global Dumbing Now
June 7, 2010 8:32 am

This is a fun exercise for us climate laymen. Not much time to play this morning.
1) Ice core data is a little too level for my taste. Could the newer ice be the “saturation” state (for lack of a better word) while the older ice reflects leaching (outgassing) to a more stable concentration?
2) We didn’t have a way of measuring CO2 until the 1890s, but I have never seen a comprehensive pre-industrial estimate (guesses don’t count). Does one exist? That table going back to the 1700s is quite suspect, as recent papers show that even hunter-gatherers participated in burning forests to change the landscape to improve hunting conditions. The poor neglected anthropologists finally have their chance to get in on this.

June 7, 2010 8:38 am

“1. Numbers trump assertions. If you don’t provide numbers, you won’t get much traction.”
I love numbers, I really do. I once wanted to be the statistician that threw out the various statistics at sporting events…….such as Team A wins 90% of the time when Joe Blow runs for over a hundred yards for the game, or, up next, here’s Jose Golpe, he’s batting .431 with runners in scoring position against left-handed pitchers on Wednesdays in a dome this year!(My apologies to the people not familiar with American sports).
My point is, numbers are often meaningless. For anyone that cares, every team that has a runner rushing for over 100 yards in a game should win at least 90% of the time, so attaching a meaning for a particular team to a specific runner going over 100 yards is silly. In my other example, while batting .431 is a lofty goal, this example is bereft of “ifs” and “buts”. First, typically, a batter performs better against a pitcher throwing from the opposite hand. (Righty vs. lefty; I assume it is the same with cricket, but I don’t know.) Also, averages are skewed by low numbers of incidents. Even if it is at the end of a season and Mr. Golpe was a full-time player, how many times would Mr. Golpe have batted against a lefty on a Wednesday, in a dome, with runners in scoring position for the year? Not often enough for the number to hold any meaning. For those that don’t know, the last hitter to bat over .400, regardless of day of the week, right or left, etc., was Ted Williams in 1941.
Most people here, will say “so what?”
Numbers and averages skew a perspective. Like the team winning with the hundred-yard rusher, atmospheric CO2 should increase. Why? For the obvious reason: man’s advancement! Anthropogenic CO2 is simply a by-product of economic growth and societal progress. My latter scenario, too, can have parallels with the CAGW discussion. In the CAGW discussion we use terms such as El Nino and La Nina. Almost always, they are accompanied by terms such as albedo and currents and solar phases etc. In other words, if solar activity X is accompanied by Nino or conversely Nina, along with other astral convergences (a recent discussion here) combined with volcanic activity, apparently other than the one by Mauna (Mauna’s CO2 emissions are the good kind, which we use as a CO2 ruler for the world), then we’re likely to see Y in regard to Arctic ice extent in May (apparently meaningless in relation to ice extent in Sept). All that to mean……..nothing. Don’t get me wrong, man’s knowledge of our climate has increased significantly……sort of. As with many of the great questions of man, we’re often left with more questions than answers. I believe this will be the case for many years to come. Sadly, agenda-driven science has altered the course of climate science. To what extent and what degree, we won’t know for many years to come.
Numbers are great when put in proper perspective, but useless unless the proper logic and critical thinking have been applied. Figures lie and liars figure. I’m not sure what the exact number is, but my verbosity increases exponentially when I see a hockey stick graph.
Cheers

Malaga View
June 7, 2010 8:44 am

Enneagram says:
June 7, 2010 at 7:33 am

Thanks for the link to http://biocab.org/carbon_dioxide_geological_timescale.html
which includes following diagram showing the “Area of Continents Flooded” vs “Change of Tropospheric Temperature” http://biocab.org/Geological_TS_Sea_Level_op_713x534.jpg
I am left wondering did someone pull the plug out of their bath 500 mya?
Or perhaps the earth is expanding after all….

Steve Fitzpatrick
June 7, 2010 8:46 am

stevengoddard says:
June 7, 2010 at 6:09 am
“Carbon dioxide is slowly absorbed to convert the portlandite (Ca(OH)2) into insoluble calcium carbonate. “
Well yes, in theory at least, and this is certainly true near exposed surfaces of newly cast concrete. But most Portland cement based concrete is cast in slabs thick enough to preclude the easy entrance and diffusion of CO2, once the hydration process gets underway and the cement has a firm “set”. Indeed, reinforcing steel is normally set at a depth in the concrete sufficient to ensure that carbon dioxide does not diffuse enough to reach the steel at any time during the expected useful lifetime of the concrete (since having the carbonation front reach the steel would lead to corrosion of the steel and possible failure of the concrete). So, while CO2 absorption certainly takes place, the “carbonation” process is extremely slow (taking hundreds of years) unless the concrete is demolished and broken into small pieces to expose more surface area to the air. In very thick sections (concrete in dams, large structural members, etc.), for all practical purposes (less than many hundreds of years) there is no significant CO2 absorption.
Cement manufacture is a net CO2 emissions source over less than geologic time scales, but is in fact a small fraction of other CO2 sources.

Bruce Cobb
June 7, 2010 8:46 am

According to Jaworowski:
“More than a decade ago, it was demonstrated that…the ice cores cannot be regarded as a closed system, and that low pre-industrial concentrations of CO2, and of other trace greenhouse gases are an artifact, caused by more than 20 physical-chemical processes operating in situ in the polar snow and ice, and in the ice cores. Drilling the cores is a brutal and polluting procedure, drastically disturbing the ice samples.”
“Liquid water is commonly present in the polar snow and ice, even at the eutectic temperature of −73°C. Therefore, the conclusions on low pre-industrial atmospheric levels of greenhouse gases cannot be regarded as valid, before experimental studies exclude the existence of these fractionation processes.”
“Recently, Brooks Hurd, a high-purity-gas analyst, confirmed the previous criticism of ice core CO2 studies. He noted that the Knudsen diffusion effect, combined with inward diffusion, is depleting CO2 in ice cores exposed to drastic pressure changes (up to 320 bars—more than 300 times normal atmospheric pressure), and that it minimizes variations and reduces the maximums (Hurd 2006).”
It seems very plausible that the carbon dioxide ice core records are highly questionable, meaning that the hockey stick chart (where have we seen that before?) is probably wrong, and that CO2 levels have indeed been at least as high as today’s over the past millennium. I doubt the rise in CO2 is nearly as great as is supposed, and therefore the % attributable to man is probably considerably less than what is supposed. It would be nice if we were responsible for more of the beneficial gas, though.

Tom Jones
June 7, 2010 8:48 am

I went through a lot of the same material, when I first got interested in AGW. Having come to the same conclusion then, I would be hard-put to disagree with it now. And, the insights of the Thermostat Hypothesis are really quite good. Nice work, Willis. It seems like a pretty good model for a piece of the puzzle. But, the question that bedevils is “what is changing the set-point of the thermostat?” There is a lot of chaotic short-term variation, which is not surprising given the feedback mechanisms, but there also seems to be an underlying long-term movement in the center value of the whole thing. There is obviously a large and loud school of thought that the atmospheric concentration of CO2 is what is moving the set-point. Perhaps it is, but correlation and causation are different things, and that theory seems to keep needing patching, which is not exactly reassuring. Nor am I convinced by any of the alternatives. Does anyone have any thoughts on this?

Anton
June 7, 2010 8:57 am

Bob says:
June 7, 2010 at 5:05 am
“Off Topic, but I’ve been following the evolution of David Hathaway’s “Solar Cycle Predictions” of the sunspot cycle for a while. I’ve noticed that when he posts the current month’s real data, he quietly adjusts his predicted curve to fit the real data.”
This is what MSN Weather does with its WTD/iMAP weather module on MSN homepages. Towards the end of the day, it goes back and changes incorrect forecasts to match what actually happened. It’s fraud, but what else is new in the climate community? Doesn’t Michael Mann spend much of his time rewriting temperature records to make them match his theories?
Does anybody ever admit to being wrong?

KevinUK
June 7, 2010 8:57 am

David Archibald
“The CDIAC is the nuclear industry’s contribution to global warming hysteria. If you look at their experiments on the growth response of plants to increased CO2, they added ozone to their artificial atmospheres in order to get a negative response.”
Do you have a citation for your statement above?
I once had a late-night (in the UK time zone) conversation with Steve M on CA about the possible link between the nuclear industry and CAGW. From my personal experience of working within the UK nuclear industry in the past, I told Steve M that I had never come across this link, i.e. that IMO CAGW propaganda was not emanating from the UK nuclear industry in order to justify building new nuclear plants – in fact far from it!
Just because ORNL is a former nuclear R&D site, now largely an ex-nuclear research institution, doesn't mean that CDIAC is part of the CAGW propaganda industry. Because of Steve M's concerns I've spent quite a few days researching for evidence of this possible link between the US (and UK) nuclear industry (it definitely doesn't exist in the UK) and the CAGW propaganda industry. The closest I've come to any evidence of it is a fairly tenuous link between certain DOE-funded staff at Pacific Nuclear Labs and some of their CAGW pronouncements, and that's about it.
On the other hand there is a clear demonstrable link between the UK universities that form part of the Tyndall Centre and BADC, the UK Met Office and other DECC/DEFRA funded institutions/organisations like the Centre for Hydrology and Ecology, and CAGW propaganda. None of these organisations/institutions have any connections with any of the UK's former research and development sites like AERE Harwell or Winfrith, despite their proximity (Harwell to Oxford and Winfrith to Exeter).
In fact I fully expected to see some familiar names (from my days in UKAEA) on the staff complement at the UK Hadley Centre, given my past involvement in modelling with the 'severe nuclear accident catastrophe modellers' at Harwell and Winfrith, but I have been sadly disappointed to find that not one of them has ended up at Exeter, as I was looking forward to contacting them and having some heated debates on the usefulness (NOT!) of GCMs.

June 7, 2010 9:00 am

@ G. Karst
June 7, 2010 at 8:11 am
SimonH:
“I propose “AGW believer”.”
“I agree but “AGW convinced” or “AGW unconvinced” avoids religiosity, while retaining correct meaning.” GK
Hmm, perhaps for many, but for others it would seem including religiosity would be appropriate.
Religion
From Latin religiō (“‘moral obligation, worship’”)
1. A collection of practices, based on beliefs and teachings that are highly valued or sacred.
2. Any practice that someone or some group is seriously devoted to.
3. Any ongoing practice one engages in, in order to shape their character or improve traits of their personality.
4. An ideological and traditional heritage.
Numbers 1 and 2 seem to hit some CAGW convinced people dead on.

Ben
June 7, 2010 9:00 am

Side notes that I think might be relevant. For one, I saw a study posted about 3 years ago by NASA that showed the Earth overall was 5% greener than 10 years before. I tried to find that study and can't now, so as much as I hate to say it, you might need to take my word for it.
Our ecosystem may be able to counter the rising tide of CO2 given enough time, but this is speculation, as is the question of how much CO2 our oceans can hold and sequester. There are no facts really known about this, and studies of those two relationships would have been much more beneficial than studies of, say, mammoth farts… but I digress.

Gail Combs
June 7, 2010 9:00 am

Andrew W says:
June 7, 2010 at 3:19 am
Of course, those nasty warmists have been trying to explain for years just how solid the evidence is that it is indeed human activity that’s causing the CO2 rise, it’s laughable that many “skeptics” are only capable of accepting the reasoning when it’s explained to them by one of the good people at WUWT.
__________________________________________________________________________
I am more open to Willis and Anthony because I know they are more interested in science than in their next pay check or the next big financial bubble.
There have been enough people who have run afoul of the establishment and lost their jobs or whatever to make me examine ANYTHING that comes out of the government these days.
A neutral example: since the international HACCP system replaced the old US meat inspection system, there were ninety-four meat recalls in just one year, over 1000 food inspection non-conformance reports were found as a result of a Freedom of Information Act request from Public Citizen, and the Food Inspection Union's chairman, Stan Painter, accused the USDA of ignoring the problems with HACCP during a Congressional investigation.
The official “Answer. …. The FSIS investigation has been completed and the allegations concerning improper enforcement of SRM regulations were not substantiated. In addition, the OIG independently sent an investigator and an audit team to examine the allegations concerning SRM regulatory compliance. Their observations also concluded that the chairman’s allegations were unsubstantiated.”
GRRRRrrrrr PEOPLE died horribly and the US government covered it up! http://www.marlerblog.com/tags/john-munsell/
http://www.foodsafetynews.com/contributors/nicole-johnson/
Now, tell me again why I should take the word of any scientist who is on the government payroll or gets government grants without checking his work closely.

Philip T. Downman
June 7, 2010 9:01 am

Excellent reasoning, perhaps with the exception of those sorry "tree rings" again. "The Problems in interpreting tree-ring δ13C records"
There seem to be considerable difficulties in using tree rings as a proxy for the 13C content of the atmosphere too. The presumption is that plants prefer 12C, so if the concentration of 13C increases, this ought not to be directly mirrored in the 13C content of wood. It ought to be less, wouldn't it? The extent to which 13C is accepted by different plants might vary with concentration and perhaps other factors, such as temperature, water and nourishment.
My bet is that ice cores are better proxies for atmospheric CO2 content. I just guess that the difference in diffusion rate between 13CO2 and 12CO2 is negligible even over thousands of years.

P. Berkin
June 7, 2010 9:04 am

I was starting to worry a bit . . . then I re-drew the top graph with the y-axis going from 0 – 1,000,000 and I stopped worrying.
I am not a climate scientist, by the way.

Ian H
June 7, 2010 9:08 am

Anyone who doesn’t think the CO_2 rise is due to human beings should explain where all that CO_2 we have emitted actually went if not into the atmosphere.
I find it interesting that the sequestration rate seems to be getting larger – it is trending above the exponential line. If we were using up the capacity of the sea to absorb CO_2 we’d expect to see the opposite. Of course the sea has a huge capacity to absorb CO_2 and so far we’ve barely made a dent in it. Nevertheless you wouldn’t expect the sea to actually be getting more efficient at absorbing CO_2 … or would you?
In related news … my lawn needs mowing yet again. Perhaps the increased rate of sequestration is due to increased plant growth in the higher CO_2 environment.

June 7, 2010 9:09 am

Interesting CO2/Temp chart.

Timothy Chase
June 7, 2010 9:10 am

The paper is:
Farmer, John G. (1979) Problems in interpreting tree-ring δ 13C records, Nature, Volume 279, Issue 5710, pp. 229-231.
http://adsabs.harvard.edu/abs/1979Natur.279..229F
When I was arguing with young earth creationists, they would often trot out old papers to suggest that there were major unresolved issues. One such paper even showed the sun to be shrinking at such a rate that, if it had been shrinking as quickly in the past, the sun couldn't be very old. That was from the 1970s. Older papers are more difficult to track down, or to get PDFs of off the web.
Used to be that if you were going back to the early 1990s things were tough. Nowadays, however, it is possible to get PDFs of all the Sol Spiegelman and Manfred Eigen papers from the 1960-70s on different strains of Spiegelman’s Monster — the shortest of which is in the neighborhood of 50 nucleotides long — about the same length that linear polyribonucleotides will spontaneously form in the presence of montmorillonite. But I can’t find an actual copy of the paper you are referring to — although I can see that there are a number of papers that dealt with the same topic in the following years.
Have you checked to see if there was any progress made on this problem?

Enneagram
June 7, 2010 9:10 am

The return of the LOST SHEEP:
There were ninety and nine that safely lay
In the shelter of the fold.
But one was out on the hills away,
Far off from the gates of gold.
Away on the mountains wild and bare.
Away from the tender AL BABY Shepherd’s care.
Away from the tender AL BABY Shepherd’s care.

Ian H
June 7, 2010 9:12 am

In related news … my lawn needs mowing yet again. Perhaps the increased rate of sequestration is due to increased plant growth in the higher CO_2 environment.

Alternatively perhaps precipitation has increased and is washing the CO_2 out of the atmosphere at an increased rate.

R. Craigen
June 7, 2010 9:17 am

Rules for Discussion: A masterstroke! It would change this whole field if adherence to this could be guaranteed. How about enshrining that somewhere on the main WUWT page or making it an editorial policy, or something? It should be nailed up by moderators at the beginning of climate debates everywhere — and STRICTLY enforced.
As for your article, Willis, I find little that is controversial in it, but I would have liked to see more discussion of the empirical evidence for the high proportion of human emissions in the CO2 resident in the atmosphere today. From what I read of isotope studies, emissions comprise a small fraction. This, of course, does not mean that they have not CAUSED the rise in CO2, as the gas is continually being consumed, absorbed and re-emitted by the seas, plant life and geological system, none of which care which CO2 molecules they are absorbing. With the flux in and out of the natural system more-or-less in balance, the addition of human emissions should have the effect of increasing the total, but it will be distributed into the various sinks and replaced with naturally occurring CO2; thus it should comprise a smaller proportion of the whole than one would think. I imagine one could use the emission numbers versus the empirical values of resident emitted CO2 to infer something about actual (as opposed to theoretical) absorption rates. I wonder if this has been done.

June 7, 2010 9:18 am

P Berkin,
Here’s another CO2 chart to keep you from worrying. And another.

Dave Springer
June 7, 2010 9:22 am

I left out forest fire suppression as a reason for the bump in CO2 in recent years. As was discovered a few decades ago, forest fire suppression actually has the opposite effect of what Smokey the Bear wanted – preservation of our forests. Suppressing forest fires allows combustibles to accumulate near ground level, setting the stage for truly massive fires that can't be controlled and are so hot that they destroy old growth trees which would otherwise have survived the lesser, more frequent fires. When those old growth trees go up in smoke, you get a century's worth of CO2 locked up in the wood released into the atmosphere.
While land use changes and forest fire suppression are still anthropogenic in origin, it's not the fossil fuel bogeyman that everyone is focused on and wants to control, and it's not the US that is the big offender. The political drive behind this whole CAGW hoax is aimed at two things:
1) Taking over control of fossil fuel use and thus taking over the factor that literally fuels the economic growth of industrialized nations. It's a power play.
2) Putting a damper on the US economic and military superpower status. Anything that can't be blamed on the US is of no interest to the rest of the world or even US domestic self-loathers. Thus we hear next to nothing about the effects of black carbon (soot) produced by dirty diesel engines, slash & burn agriculture, heating of homes with wood and even dung, lack of particulate filters on coal burning power plants, and other assorted soot sources. You see, starting with the Clean Air Act of 1963 the US has dramatically reduced the amount of soot it pumps into the atmosphere. No other country in the world has come close to matching that effort. Moreover we regulate our logging industry such that their harvest methods either don't denude the land or require planting of young trees to replace the old ones removed, and we pretty much no longer use wood as a source of fuel; instead much of it becomes lumber used in construction of durable structures, where the carbon in the wood remains locked up for a hundred years after harvest. On top of that we now use controlled burns that don't destroy old growth forest but rather serve to remove fuel before it accumulates into disastrously hot fires that kill big trees.

40 shades of green
June 7, 2010 9:26 am

Willis,
You have to go and publish that book.
Note: There is no need to write it, as I think you have most of it written already.
40 shades

Steve Oregon
June 7, 2010 9:30 am

The worst kind of worry is worrying about what to worry about.
Once one reaches that point it’s too late to really worry about anything.
Unless you’re worried about not worrying enough.
But then there’s a risk of worrying about worrying too much and it becomes a worry about being lost in a circle of worry.

With AGW, worry has become the devil’s elixir.

J. Bob
June 7, 2010 9:34 am

Willis
I will echo the comment Steve Hempell made on the rocket science site. In my case I like the math included at that site, but the downloads are long, and I long since gave my analysis programs such as MATLAB away. So I’m limited to EXCEL and VB.
Looking at your initial 1000 year plot, one can make a case for man’s increase in CO2, “eyeballing” the apparent rise since the inception of the “industrial revolution” ~1750.
I'm in the process of completing a book, most of which is irrelevant to this discussion, except for the section discussing mass migration of people from 500 B.C. to 500 A.D., and what effect weather/climate had on it. That has led me to try to better understand temperatures prior to 1850. In looking at the temperature records, there seems to be very little data, if any, outside of central and western Europe.
One of my thoughts was: if the industrial age started in Europe, emitting all that soot and CO2, might one see a rise there first?
The following are averaged temperature anomaly charts reflecting the earliest records from Central England, DeBilt and http://www.rimfrost.no/
sources:
http://www.imagenerd.com/show.php?_img=lt-temp-1650-2008-1-Rxrdy.gif
http://www.imagenerd.com/show.php?_img=lt-temp-1750-2008-4-EyvXd.gif
http://www.imagenerd.com/show.php?_img=lt-temp-1800-2008-14-9ZSv8.gif
http://www.imagenerd.com/show.php?_img=lt-temp-1850-2008-27a-UtBGD.gif
http://www.imagenerd.com/show.php?_img=lt-temp-1900-2008-50a-PhLn0.gif
I used a Fourier convolution, or spectral low-pass filter, with a cut-off at a 40-year period. From my rough view, I don't see anything happening until more global temperatures start to be included from 1900 on, with any warming showing up only after about 1975.
I liked the Fourier approach as it includes the end points, if used correctly, and allows correlation with other physical inputs which have periodic, or almost periodic, secular cycles.
Any thoughts?
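For anyone wanting to reproduce this kind of spectral low-pass filter without MATLAB, here is a minimal sketch in Python/NumPy. The 40-year cutoff matches J. Bob's description, but the synthetic series and every other number here are illustrative assumptions, not his actual data:

```python
import numpy as np

def fft_lowpass(series, dt_years=1.0, cutoff_period=40.0):
    """Zero all Fourier components with periods shorter than
    cutoff_period (in years), then invert the transform."""
    n = len(series)
    spectrum = np.fft.rfft(series)
    freqs = np.fft.rfftfreq(n, d=dt_years)       # cycles per year
    spectrum[freqs > 1.0 / cutoff_period] = 0.0  # drop periods < cutoff
    return np.fft.irfft(spectrum, n=n)

# Synthetic demo: slow trend + 11-year cycle + noise
rng = np.random.default_rng(0)
t = np.arange(1650, 2009)
raw = (0.002 * (t - 1650)
       + 0.3 * np.sin(2 * np.pi * t / 11.0)
       + 0.05 * rng.standard_normal(t.size))
smoothed = fft_lowpass(raw)
# The 11-year cycle and noise are largely removed; the trend survives,
# apart from some ringing near the end points (a known FFT-filter caveat).
```

On real station data, the end-point behaviour J. Bob mentions is worth checking explicitly, e.g. by detrending or padding before transforming.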

Gail Combs
June 7, 2010 9:37 am

Britannic no-see-um says:
June 7, 2010 at 3:49 am
How trivial or significant is the direct additional respiratory and food production CO2 emission produced by increases in human longevity and population density since medieval times?
____________________________________________________________________________
The termites beat us hands down.
“According to the journal Science (Nov. 5, 1982), termites alone emit ten times more carbon dioxide than all the factories and automobiles in the world. Natural wetlands emit more greenhouse gases than all human activities combined. (If greenhouse warming is such a problem, why are we trying to save all the wetlands?)”
Termites emit ten times more CO2 than humans. Should we cap-and-tax them?
“The 0.03% CO2 content of the atmosphere is minimal and has less impact and is lower in volume than, for example, methane given off by termites (who outweigh humans by 30x). ” http://globalwarminghoax.wordpress.com/2006/08/01/global-warming-is-a-hoax-invented-in-1988/#comment-9266

MW
June 7, 2010 9:37 am

Correct me if I'm wrong, but doesn't thawing permafrost release the same carbon isotope as fossil fuel (12C)? Could thawing permafrost, as part of a natural warming cycle, account for part of the CO2 increase?

toho
June 7, 2010 9:43 am

“Nick Stokes says:
June 7, 2010 at 2:52 am
Willis,
A good explanation of many things, especially the e-folding time. But I don’t agree with your calculation of the 31 year period. I think you have calculated as if each added ton of CO2 then decays exponentially back to the 1850 equilibrium level. But the sea has changed. It is no longer in equilibrium with 280 ppm CO2. Of course it has its own diffusion timescale, and lags behind the air in pCO2. You could think of the decay as being back to some level at each stage intermediate between 1850 and present.
If you apply that process to the emission curve, you’ll match the airborne fraction with a slower decay (longer time constant) where the decay has less far to go.”
No, Nick. Exponential decay towards an increasing equilibrium level would need a smaller time constant for the observed rate of sequestration, not a larger one.
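The algebra behind toho's point can be sketched in a few lines. All the numbers below are illustrative round figures, not Willis's or Nick's actual fits:

```python
# One-box picture: excess CO2 decays toward an equilibrium level C_eq,
# with sequestration flux = (C - C_eq) / tau.  If the equilibrium has
# risen (the sea is no longer at the 1850 level), the gap C - C_eq is
# smaller, so matching the SAME observed flux forces tau to be smaller.

C_now = 390.0      # current atmospheric CO2, ppm (round figure)
flux_obs = 1.6     # observed net sequestration, ppm/yr (illustrative)

tau_fixed = (C_now - 280.0) / flux_obs   # decay toward 1850 level: ~69 yr
tau_moved = (C_now - 310.0) / flux_obs   # toward a raised equilibrium: 50 yr

assert tau_moved < tau_fixed   # smaller time constant, as toho says
```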

kwik
June 7, 2010 9:45 am

Richard S Courtney says:
June 7, 2010 at 3:58 am
That is what I call using your little grey cells.
Come on Willis! That prehistoric CO2 proxy is low-pass filtered!
From Segalstad;
http://folk.uio.no/tomvs/esef/esef3.htm
And;
http://folk.uio.no/tomvs/esef/esef5.htm

GeoFlynx
June 7, 2010 9:46 am

stevengoddard says:
June 7, 2010 at 6:09 am
Claims that cement manufacture introduce significant CO2 are the work of people who don’t understand science. Sadly that includes the US Government. When the cement is mixed with water, it absorbs the CO2 back from the atmosphere. Without CO2 the cement would never harden.
http://en.wikipedia.org/wiki/Portland_cement
“Carbon dioxide is slowly absorbed to convert the portlandite (Ca(OH)2) into insoluble calcium carbonate. “
GeoFlynx-
The cement industry is thought to contribute about 5% of human CO2 emissions. CO2 is released in the manufacture of cement by the calcination of lime and the combustion of fuels in a kiln process. Portland and other hydraulic cements will continue to react with CO2 in the air and further "cure". The reaction with atmospheric CO2 continues after initial hardening and is very slow, often continuing for hundreds or thousands of years. In no way does this continued uptake of CO2 equal the amount of CO2 released in the cement's manufacture. Hydraulic cements will harden quite well without absorbing CO2 from the atmosphere. Cements of this type are well suited to applications underground and underwater.

Doug Proctor
June 7, 2010 9:47 am

If CO2 is the primary driver of temperature changes today, what would the CO2 content be in the atmosphere historically under a reverse prediction of the IPCC temp-CO2 connection?
Note: I recognize that this argument has a limited application, as it is of an "if a, then b; if b, not necessarily a" type. However, is it not reasonable that for at least the post-1850 period this argument could be applied? Warmists say that most of the post-industrial warming is AG-CO2 related. Certainly the date of AGW showing up is variable, depending on how much change is desired for alarm purposes. 1850 is used as one reference point, though 1945 and 1975 are also used. The warmist argument also holds that recent changes in astrophysical input are not significant to this warming. A reversal of the prediction would show when solar influences MUST have become significant. CO2 as a significant forcing cannot be JUST in the current era. Or it will show that CO2 measurements by ice core are unreliable … which goes against the warmist view of how global temperatures are moderated even recently.

G. Karst
June 7, 2010 9:47 am

James Sexton:
“Hmm, perhaps for many, but for others it would seem including religiosity would be appropriate.”
Again, I agree, but my suggestion was for those people searching for a term, which does not imply further connotation. Reality sometimes has to move over for politeness! Besides, faith and belief are terms which can be thrown back at us. That is all I have to say. GK

June 7, 2010 9:49 am

Willis:
I’m very impressed by your work.
But to all the people playing "average temperature", and in the spirit of trying to do GOOD ENGINEERING WORK… "average temperature" is a FICTION and MEANINGLESS. Here is why: go to any online psychrometric calculator. Put in 105 F and 15% R.H. (Heh, heh, I use the old English units; if you are fixated on Metric, get a calculator!) That's Phoenix on a typical June day.
Then put in 85 F and 70% RH. That’s MN on many spring/summer days.
What’s the ENERGY CONTENT per cubic foot of air? 33 BTU for the PHX sample and 38 BTU for the MN sample.
So the LOWER TEMPERATURE has the higher amount of energy.
Thus, without knowledge of HUMIDITY we have NO CLUE as to atmospheric energy balances. "Average temperature" discussions, EVEN IF THE PROXIES ARE VALID (something I strongly doubt… take the O18 proxy: geologists use it to trace coastal outfalls from tropic areas, where tropic thunderstorms enrich it. That's it, period… INVALID as a temperature proxy. A KING'S NEW CLOTHES argument: just because it's been repeated long enough and loud enough does not mean it is true!), are, in terms of atmospheric energy, MEANINGLESS.
Max
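Max's ordering can be checked with a rough psychrometric sketch. This uses the Magnus approximation for saturation vapour pressure and reports specific enthalpy in SI units (kJ per kg of dry air) rather than BTU per cubic foot, so the absolute numbers differ from his, but the cooler, more humid sample still carries more energy:

```python
import math

def moist_air_enthalpy(T_c, rh, pressure=101325.0):
    """Specific enthalpy of moist air, kJ per kg of dry air,
    using the Magnus approximation for saturation vapour pressure."""
    p_ws = 610.94 * math.exp(17.625 * T_c / (T_c + 243.04))  # Pa
    p_w = rh * p_ws                       # actual vapour pressure
    w = 0.622 * p_w / (pressure - p_w)    # humidity ratio, kg/kg dry air
    return 1.006 * T_c + w * (2501.0 + 1.86 * T_c)

phoenix = moist_air_enthalpy(40.6, 0.15)    # 105 F, 15% RH: ~59 kJ/kg
minnesota = moist_air_enthalpy(29.4, 0.70)  # 85 F, 70% RH:  ~76 kJ/kg
assert minnesota > phoenix   # the cooler sample holds more energy
```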

latitude
June 7, 2010 9:51 am

“”Smokey says:
June 7, 2010 at 9:09 am
Interesting CO2/Temp chart.
Smokey says:
June 7, 2010 at 9:18 am
P Berkin,
Here’s another CO2 chart to keep you from worrying. And another.””
Thanks Smokey, it’s nice to have a reality check.
Measurements like ‘tons’ etc, are meant to fool.
Percentages are all that matter, and out of that very small percentage,
how much is ‘man made’ and how much of that can we really do about it?

janama
June 7, 2010 10:03 am

I'm currently interested in the work of Dr Christine Jones, a retired soil scientist. She recently made the following statement.
“This year Australia will emit just over 600 million tonnes of carbon. We can sequester 685 million tonnes of carbon by increasing soil carbon by half a per cent on only two per cent of the farms. If we increased it on all of the farms, we could sequester the whole world’s emissions of carbon.”
http://www.abc.net.au/landline/content/2008/s2490568.htm
Perhaps our agriculture methods are having more effect on CO2 levels than we realise.
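A back-of-envelope check of that claim is easy to run. Every input below (farmland area, soil depth, bulk density) is an assumption chosen for illustration, not Dr Jones's actual figures:

```python
# Rough sanity check: how much carbon does a 0.5 percentage-point soil
# carbon increase on 2% of farms store, under some assumed soil numbers?
farmland_ha = 440e6     # assumed Australian agricultural land, hectares
share = 0.02            # "two per cent of the farms"
depth_m = 0.3           # assumed depth over which soil carbon rises
bulk_density = 1400.0   # assumed soil bulk density, kg per m3
delta_c = 0.005         # +0.5 percentage points of carbon by mass

soil_kg_per_ha = depth_m * 10_000 * bulk_density   # kg of soil per ha
carbon_mt = farmland_ha * share * soil_kg_per_ha * delta_c / 1e9
# ~185 Mt C under these assumptions: well short of 685 Mt unless the
# carbon gain extends much deeper than 30 cm or covers more land.
```

The point of the sketch is only that the headline figure is very sensitive to the assumed depth and area, not that it is wrong.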

Lee Klinger
June 7, 2010 10:05 am

Interesting piece, Willis. Your conclusions on post-1850 CO2 trends seem fair, but I don't think that the pre-1850 CO2 data that you show in Figure 1 tell the whole story. Back in 1996 I did an analysis of the available CO2 proxy data, most of which were from ice cores, and the results differed from what you show. The relevant figure from this paper is shown in my blog (http://suddenoaklifeorg.wordpress.com) here:
http://suddenoaklifeorg.wordpress.com/2010/01/10/the-potential-role-of-peatland-dynamics-in-ice-age-initiation/
Note that I found a slight but significant downward trend in CO2 concentrations during the few thousand years leading up to 1850. So I'm curious: why the difference between these data sets?

Enneagram
June 7, 2010 10:12 am

P. Berkin says:
June 7, 2010 at 9:04 am
I was starting to worry a bit . . . then I re-drew the top graph with the y-axis going from 0 – 1,000,000 and I stopped worrying.
I am not a climate scientist, by the way.

However YOU ALREADY KNOW THE TRICK: by making "convenient graphs" you'll scare housewives and politicians will praise you, and, what is more important these days, you'll eat tomorrow and you won't lose your house the day after tomorrow.

Steven mosher
June 7, 2010 10:13 am

Nice work, Willis. It's notable how many of the responses are not on point. Perhaps you should add a 7th rule about being germane, or about admitting when one is wrong. Anyway, I should catalogue the various ways in which people avoided simple agreement with your hypothesis. Also, I want to reserve special criticism for those people who still do not understand what the TRICK is. The trick has been covered in detail many times, but I will do it one more time. The TRICK consists of this:
1. truncation of a proxy series. {not always required}
2. Splicing a temperature series onto that.
3. smoothing the result.
4. failing to indicate that this is what you did by NOT distinguishing the data sources.
The KEY ELEMENTS are these
1. performing a mathematical operation on the two datasets that regards them as measures of the same thing. The smooth.
2. presenting the result AS IF it were data from one source.
Folks, Willis has not performed the trick. I give him an F on climate science trickery.
Anyway, to respond simply: I agree with you, Willis. There is more CO2 in the atmosphere today than in 1850. The best explanation for that is man's activities.
I’ve seen nothing in the way of argument that would lead me to seriously question either of those.

Arno Arrak
June 7, 2010 10:14 am

The perfect linearity of the pre-eighteenth century curve is a surprise. Against that background, the more recent trend stands out like a sore thumb. I am not greatly concerned with that, however, because you cannot jump from there to argue that it warms the world if you cannot explain the mechanism. The IPCC starts out with the Svante Arrhenius formulation, but this does not give as much warming as they want, so they fudge it by adding positive feedback from water vapor. This is how they get their alarming forecasts. But now we have two reasons to doubt them: Willis has come out with a "Thermostat Hypothesis" whereby tropical clouds and thunderstorms actively regulate the temperature of the earth. And now Ferenc Miskolczi (E&E, vol. 21, no. 4, 2010) has a theory according to which water vapor feedback is not positive, as the IPCC would have it, but strongly negative, which prevents any temperature excursions caused by excess greenhouse gas concentrations. He also points out that for all practical purposes the oceans are an essentially infinite source of water vapor. Which means that AGW simply cannot happen. And fretting about an increase in the partial pressure of atmospheric CO2 is nothing more than an unnecessary distraction designed to make us fear the coming warming it is supposed to bring.

Rhoda R
June 7, 2010 10:14 am

“The termites beat us hands down.
“According to the journal Science (Nov. 5, 1982), termites alone emit ten times more carbon dioxide than all the factories and automobiles in the world.” Ha! That’s where all the extra CO2 is coming from — the termites in the south having a field day since air conditioning made living there viable.
Actually, I have a bit of a problem: all this CO2 rise seems to be post-1950, but the industrial age started in the early 1800s. Parts of England, France, Poland and Germany were covered in soot from coke and steel production by the 1870s, and the US was ramping up to speed in Pennsylvania, but the increases shown in the charts don’t reflect this. Since the 1960s, manufacturing has become cleaner with scrubbers, etc. – even in China and India. The biggest change that I can see in human fuel usage that follows the CO2 charts is the automobile.

Steve Fitzpatrick
June 7, 2010 10:14 am

toho says:
June 7, 2010 at 9:43 am
“No, Nick. Exponential decay towards an increasing equilibrium level would need a smaller time constant for the observed rate of sequestration, not a larger one.”
Yup, that is exactly right. Depending on what level of CO2 from land use change you believe (Willis is taking a high value), the decay constant ranges from about 31 years to about 43 years (if you believe a much lower land use contribution, as several recent studies suggest). In any case, the IPCC projections of CO2 decay rates are way too low.

Chris G
June 7, 2010 10:19 am

Overall, a reasonable posting.
A few thoughts:
“Now, how can we determine if it is actually the case that we are looking at exponential decay of the added CO2? One way is to compare it to what a calculated exponential decay would look like.”
This presumes there is only one sink, and that it is infinite, as another commenter noted. There are multiple sinks, of limited capacity, acting on different timescales. What Willis has captured is most likely the upper 700m of the ocean, a fast acting sink, as is well known by the already apparent rise in carbon content of this volume. If a sink is fast acting, it is likely to reach an equilibrium point quickly; so, expecting the same rate of decay into the future is problematic.
under “Issue 3. 12C and 13C carbon isotopes”
I think Willis is making the mistake of thinking of CO2 absorption by the ocean as a one-way trip. A decent analogy is to imagine a host of tennis players knocking green balls to each other; the balls are CO2 molecules and the net is the boundary between water and air. The court is littered with balls. If you dump a bunch of orange balls onto one side, they don’t simply ‘decay’ to the other side; in a little while, an equilibrium is reached where there are orange and green balls on both sides.
@Slioch:
“the reaction whereby CO2 is most readily absorbed is NOT by simple reaction with water”
I would expect that CO2 would have to be absorbed by the water before it could react with the other chemicals in the water. How much gas is absorbed in water is largely a function of temperature and relative concentrations; no chemical reaction is required there. What you have shown is better labeled as how an increase in CO2 content leads to a decrease in pH, rather than as how CO2 is absorbed.
Ian H,
“I find it interesting that the sequestration rate seems to be getting larger – it is trending above the exponential line. ”
That's about the opposite of how I see the graph. Looking at the end points and the rate of change of the slopes, calculated airborne is less than actual airborne, and calculated absorbed is higher than actual absorbed. I suspect Willis had to tweak the decay rate very carefully in order to overlap the curves without the differences between the rates of change (and the rates of change of the rates of change, i.e. the derivative and second derivative for those familiar with calculus) being readily apparent.
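Chris G's limited-capacity point can be illustrated with a toy two-box exchange; all rates and sizes here are invented for illustration:

```python
# Atmosphere exchanging excess carbon with ONE finite fast sink (think
# upper ocean).  Unlike decay toward a fixed level, the sink fills up,
# and the airborne excess settles at a nonzero partition, not at zero.
atm, sink = 100.0, 0.0     # excess carbon, arbitrary units
k_in, k_out = 0.05, 0.02   # exchange rates per year (illustrative)

for _ in range(200):
    net = k_in * atm - k_out * sink   # net transfer into the sink
    atm -= net
    sink += net

# Equilibrium partition: atm -> total * k_out / (k_in + k_out) ~ 28.6,
# so roughly 29% of the pulse stays airborne in this toy setup.
```

With these invented rates the decay looks exponential at first but flattens as the sink approaches its equilibrium share, which is why extrapolating a single fitted time constant forward is risky.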

JPeden
June 7, 2010 10:19 am

“Richard S Courtney says:
June 7, 2010 at 2:43 am”
Many thanks for your detailed schema!

Phil's Dad
June 7, 2010 10:19 am

Double or quits!
Someone mentioned that there was not enough economically accessible fossil fuel left in the world to produce a doubling of CO2 levels. Maybe not even enough to double it against pre-industrial levels. That is a pretty important statement as CAGW scenarios pretty much rely on that doubling. A response to that was that out gassing had plenty to spare. This post seems to say don’t hold your breath. What are the facts?
Oil
Global proved oil reserves in 2008 fell by 3 billion barrels to 1,258 billion barrels, 0r 52.836 trillion US gallons with an R/P (reserves-to-production) ratio of 42 years.
(http://green-energysaving.com/carbon-emissions/how-much-oil-is-left-in-the-world-when-will-oil-run-out/)
(http://www.bp.com/sectiongenericarticle.do?categoryId=9023769&contentId=7044915)
Gas
Global proved reserves of natural gas in 2008 were 185.02 trillion cubic meters or roughly 6,500 trillion cubic feet with an R/P ratio of 63.1 years.
(http://www.carboncounted.co.uk/when-will-fossil-fuels-run-out.html)
(http://green-energysaving.com/carbon-emissions/fossil-fuels/how-much-natural-gas-is-left-in-the-world-when-will-natural-gas-run-out/)
(http://www.bp.com/sectiongenericarticle.do?categoryId=9023779&contentId=7044843)
Coal
World Energy Council 2007 global coal reserves were 847 billion tonnes. BP’s 2008 total estimate was 826 billion tonnes with an R/P ratio of 122*. Roughly half of this is sub-bituminous and lignite.
(But Prof. David Rutledge, chair of Engineering and Applied Science at the California Institute of Technology applied the “Hubbert linearization method” to today’s major coal-producing countries, including the US, China, Russia, India, Australia and South Africa. Hubbert linearisation suggests that future coal production will amount to around 450 billion tonnes – little more than half the official reserves.
Although most academics and officials reject the idea out of hand, over the past 20 years official reserves have fallen by more than 170 billion tonnes, even though production, at around 6 billion tonnes per year, accounts for only about 120 billion tonnes of that. Some 50 billion tonnes has “disappeared” from the official estimates – 20 billion between 2007 and 2008.
In February 2007, the European Commission’s Institute for Energy reported that the reserves-to-production (R/P) ratio had dropped by more than a third between 2000 and 2005, from 277 year’s worth to just 155. By 2008 this was 122.
*The world coal institute notes however that recent falls in the R/P ratio can be attributed to the lack of incentives to prove up reserves, rather than a lack of coal resources. Exploration activity is typically carried out by mining companies with short planning horizons rather than state-funded geological surveys. There is no economic need for companies to prove long-term reserves.)

(http://green-energysaving.com/carbon-emissions/fossil-fuels/how-much-coal-is-left-in-the-world-when-will-coal-run-out/)
(http://www.davidstrahan.com/blog/?p=116)
(http://www.worldcoal.org/coal/where-is-coal-found/)
(http://www.bp.com/sectiongenericarticle.do?categoryId=9023784&contentId=7044480)
Summary of world fossil fuel resources
For our purposes I will take the highest figures to arrive at the highest achievable levels of CO2. So 1,258 billion barrels of oil, 6,500 trillion cubic feet of gas, 413 billion tonnes of “black” coal and 413 billion of “brown” coal.
The combined liquid fuels from an average barrel of crude oil will produce a minimum of 317kg of CO2 when consumed. (http://numero57.net/2008/03/20/carbon-dioxide-emissions-per-barrel-of-crude/)
So we get 398,786 billion kg of CO2 from all known oil reserves.
1,000 cubic feet of gas will result in between 115 lb and 120 lb of carbon dioxide, depending on the temperature at which the cubic feet were measured.
(http://cdiac.ornl.gov/pns/faq.html)
(http://www.eia.doe.gov/oiaf/1605/coefficients.html)
Let’s stick to our rule of taking the worst case. 6,500 trillion cubic feet is 6.5 trillion lots of 1,000 cubic feet, so we get 6.5 trillion X 120lbs, or 780 trillion lbs of CO2. That is roughly 354,000 billion kg from gas.
Best quality anthracite produces 2.84 times its own weight in CO2, falling to only 1.4 times its own weight for lignite.
(http://www.eia.doe.gov/cneaf/coal/quarterly/co2_article/co2.html)
Coal type CO2 lbs per short ton
Anthracite AC 5685.00
Bituminous BC 4931.30
Subbituminous SB 3715.90
Lignite LC 2791.60
(http://www.eia.doe.gov/oiaf/1605/coefficients.html)
(The short ton is a U.S. unit of weight equal to 2,000 pounds. For the most part this post uses metric tonnes = 1,000 kg – it is noted when otherwise.)
For our calculation we will again apply our worst case rule and say all black coal is anthracite and all brown coal is sub-bituminous. 413 billion X 2.84 = 1,173 billion tonnes, plus 413 billion X 1.86 = 768 billion tonnes. 1,941 billion tonnes in all, or (big number alert!) 1,941,000 billion kg of CO2 from all known coal reserves.
The coal number makes the oil and gas figures look like a minor problem. “Oil and gas by themselves don’t have enough carbon to keep us in the dangerous zone [of global warming] for very long,” said Pushker Kharecha, a scientist and colleague of Hansen at NASA GISS. http://www.wired.com/wiredscience/2008/12/oil-not-the-cli/#ixzz0q7dazJcf You can kind of see where J Hansen’s death trains are coming from. Sorry –sorry, wash my mouth out etc. Anyhoo!
Total CO2 from burning all known fossil fuel reserves would be about 2,694,000 billion kg. Loads! – but what is that in p.p.m. of the total atmosphere? Well…
The weight of the Earth’s atmosphere is 441,000 billion x 10 = 4.41 million billion (long) tons. Or 4,480,000 billion metric tonnes.
(http://www.hydrogen.co.uk/h2_now/journal/articles/2_global_warming.htm)
2,694,000 billion kg is 2,694 billion tonnes, and 2,694 / 4,480,000 = 601 p.p.m. by mass. Since CO2 (molecular weight 44) is heavier than air (about 29), that is roughly 601 x 29/44 = 396 p.p.m. by volume – the unit in which atmospheric CO2 is usually quoted.
So yes, it could just about double current CO2 levels.
Except that only 40% of that will stay in the atmosphere for any meaningful length of time. A constant unchanging 40% by the way. Knorr, W. (2009), Is the airborne fraction of anthropogenic CO2 emissions increasing?, Geophys. Res. Lett., 36, L21710, doi:10.1029 / 2009GL040613.
(http://wattsupwiththat.files.wordpress.com/2009/11/knorr2009_co2_sequestration.pdf)
This reduces the potential CO2 from burning all known reserves of fossil fuel to roughly 158 p.p.m. beyond where we are now.
Given what Willis Eschenbach tells us here about natural out gassing, that is about another 45 p.p.m. to come. Around 203 p.p.m. in all?
Even if the IPCC are right about 3degC per doubling that means we would struggle to find 2degC of warming left in fossil fuels and “feedback” out gassing combined.
And…relax
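Since several unit conversions are stacked in that comment (barrels to kg, cubic feet to lb to kg, short tons, mass p.p.m. versus volume p.p.m.), a short script makes the bookkeeping auditable. The inputs are the reserve figures and worst-case emission factors quoted above; results may differ from figures elsewhere in the thread depending on whether the gas reserve is read in billions or trillions of cubic feet.

```python
# Back-of-envelope check of the reserves-to-ppm arithmetic above.
# All inputs are the figures quoted in the comment; treat the result
# as a rough upper bound, not a projection.
oil_bbl = 1.258e12            # barrels of proved oil reserves
gas_cf  = 6.5e15              # cubic feet of proved gas reserves (6,500 trillion)

co2_kg  = oil_bbl * 317                     # 317 kg CO2 per barrel
co2_kg += (gas_cf / 1000) * 120 * 0.4536    # 120 lb CO2 per 1,000 cubic feet
co2_kg += 413e9 * 2.84 * 1000               # "black" coal at the anthracite factor
co2_kg += 413e9 * 1.86 * 1000               # "brown" coal at the sub-bituminous factor

atmosphere_kg = 4.48e18                     # mass of the atmosphere, as quoted
ppm_mass = co2_kg / atmosphere_kg * 1e6
ppm_vol  = ppm_mass * 28.97 / 44.01         # mass ppm -> volume ppm (molar masses)

print(round(ppm_mass), round(ppm_vol))
```

Note that if the gas reserve is (mis)read as 6,500 billion rather than trillion cubic feet, the gas term collapses to a rounding error, which is one way figures in threads like this drift apart.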

Ian W
June 7, 2010 10:23 am

Here is a Null Hypothesis:
When air is trapped in snow which over decades becomes trapped by more snow and buried in ice below the firn, the CO2 in the trapped air diffuses initially into the snow layer then into the ice below the firn such that a steady state concentration balance of CO2 in the ‘bubbles’ is reached after several centuries of diffusion in both directions.
Until this hypothesis is disproved, all ice core proxies should be treated with extreme caution; especially as the results from ice cores run so counter to the results from stomata counts (after all, fossilized plants are already used as temperature proxies).
(Some studies of diffusion have been done; see “CO2 diffusion in polar ice: observations from naturally formed CO2 spikes in the Siple Dome (Antarctica) ice core” by Jinho Ahn, Melissa Headly, Martin Wahlen, Edward J. Brook, Paul A. Mayewski and Kendrick C. Taylor, which shows diffusion in the firn as well as in the ice.)
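The smoothing this null hypothesis implies can be illustrated with a toy finite-difference diffusion model; every parameter below is arbitrary, chosen only to show the shape of the effect, not its real timescale.

```python
# A toy illustration of the diffusion null hypothesis: a sharp CO2 spike
# recorded in successive ice layers is low-pass filtered if CO2 can
# diffuse between layers.
n = 100
co2 = [280.0] * n
co2[50] = 380.0              # a one-layer, 100 ppm spike in the record

D = 0.2                      # dimensionless diffusion number (D*dt/dx^2), assumed
for _ in range(500):         # explicit finite-difference diffusion steps
    nxt = co2[:]
    for i in range(1, n - 1):
        nxt[i] = co2[i] + D * (co2[i-1] - 2*co2[i] + co2[i+1])
    co2 = nxt

# the spike's peak is strongly damped while its total CO2 spreads out,
# so the record shows only a broad, shallow bump centred on the spike
print(round(max(co2), 1))
```

Whether real firn and ice diffuse enough to matter is the empirical question; this only shows what the signature of such smoothing would look like.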

Allen63
June 7, 2010 10:23 am

My issue is with figure one ice core data. I am very skeptical that all the core data readings gave exactly the same CO2 measurement (plus or minus a few ppm). I feel assumptions have been made leading to the result shown in the figure. That is, the ice core data as interpreted by current science may not be an accurate representation of ancient CO2.
1. I imagine the readings were averaged, and I imagine that the estimated dates ascribed to each reading were off by decades — centuries for more ancient data. This could lead to a seeming constant CO2 — when actually there was considerable variability.
2. CO2 trapped in ice may not be entirely trapped. It could diffuse in ice over the centuries — again averaging out large gains and losses with the same effect as item 1.
3. I imagine the ice core CO2 “measurement” was not precisely 270ppm (or whatever) in all 8 cores — possibly in no core. Rather, the readings were quite different. Then, “Kentucky windage” was applied by adding or subtracting an amount from the “level” CO2 ppm to make ice core readings coincide with recent historical atmospheric measurements. This is circular reasoning — “we can adjust old values to today’s values because CO2 has not changed until recently”. Thus, the “perfect hockey stick shape” — is based on assumption and circular reasoning.
4. I have never seen proof that actual CO2 levels in ice cores are literally invariant over centuries or on a millennial scale. That is, the amount initially trapped may not be the amount currently in the ice. This is not the same concern as item 2 (though it may seem similar). So, a measurement taken today from 1000 or 10000 year old ice may not be representative of the CO2 amount originally trapped in the ice. No, there is just an amount of CO2 present today — its correlation to the amount initially present a thousand years ago is not a proven fact — it is an assumption.
Hence, if we do not really know the past history of CO2 (some folks make assumptions about it, but do not really know), then we can’t be confident about what processes are raising CO2 now.
If we are being asked to change our lifestyles and pay out trillions in new taxes to prevent CO2, we need much stronger evidence.
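Point 1 above is easy to demonstrate numerically: a moving average with a window longer than the variability's period flattens most of the swing. A synthetic example (all numbers hypothetical):

```python
# If each ice-core reading is effectively an average over many decades,
# genuine variability is flattened. Here a synthetic CO2 history swings
# by +/-15 ppm on a ~60-year cycle, but a 100-year smoothing window
# reduces the apparent swing to a few ppm.
import math

years = range(1000)
true_co2 = [280 + 15 * math.sin(2 * math.pi * y / 60) for y in years]

window = 100
smoothed = [
    sum(true_co2[y - window // 2 : y + window // 2]) / window
    for y in range(window // 2, len(true_co2) - window // 2)
]

swing_true = max(true_co2) - min(true_co2)       # 30 ppm in the "real" history
swing_smoothed = max(smoothed) - min(smoothed)   # much smaller in the "record"
print(round(swing_true, 1), round(swing_smoothed, 1))
```

A flat-looking proxy record is therefore consistent either with a genuinely flat history or with a variable history seen through a long averaging window; the record alone cannot distinguish them.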

Pofarmer
June 7, 2010 10:24 am

A number of published studies suggest that between about one fifth and one third of a pulse of CO2 would remain in the atmosphere for long periods, only being eventually removed over millennia as the slow weathering of rocks delivers more CO3^2– to the oceans.
Those studies completely discount the rest of the biosphere.

DirkH
June 7, 2010 10:29 am

” Joe Lalonde says:
June 7, 2010 at 4:23 am
Willis,
I enjoy these mind manipulation response games so, here goes.[…]”
Somehow this turned me off from reading the rest of whatever Joe Lalonde had to say…

Chris G
June 7, 2010 10:37 am

Lastly, regarding the abstract of the Nature article,
http://www.nature.com/nature/journal/v279/n5710/abs/279229a0.html
Farmer 1979.
(Think we may have learned a bit more since then?)
The mismatch between the amounts predicted by a simple calculation of the anthro sources and the amounts observed would be consistent with the idea that CO2 tends to be released by the biosphere when the temperature increases. This would fit the historical records where CO2 appears to be a positive feedback on Milankovitch cycles. This fits with the swings in climate being larger than the forcings of the cycles. So, in the present day, CO2 could be its own positive feedback. (No, there is no run-away, but that’s too long to explain here.)

Charles Higley
June 7, 2010 10:38 am

1) To be thorough, Beck’s research and data have to be included here. Geocarb’s 600 million year graph should also be included. The impression given is that only man can change CO2.
2) It is highly unlikely that CO2 would be so consistent over time; volcanoes and warm/cold spells would definitely have an influence – man might, too. As with climate, CO2 would be constantly varying with the conditions.
3) Why is it ignored totally that the Mauna Loa data and ice core only “agree” because the ice core data was time shifted (artificially/fraudulently?) 84 years into the future? In my book, this horrendously poisons the resulting graph which melds two vastly different data sets.
4) As mentioned by others, Jaworowski has pointed out that ice core CO2 measurements CANNOT be interpreted as absolute—there is just too much trauma involved in taking cores. Although I usually agree with most articles presented here, I think that taking ice cores as sources of absolute CO2 data is a major failing of the discussion in this article.

June 7, 2010 10:52 am

@ Doug Proctor
“1850 is used as one reference point, though 1945 and 1975 are also used.”
Yep! I use this as a “tell”! It’s bad enough that the referenced time periods are arbitrary, with no real reasoning behind their selection, but they’re entirely different time periods! The 1850 is for CO2 or industrialization, even though man started to industrialize well before 1850 and made greater CO2 emissions well after 1850. The ’45-’75 seems to be somebody’s idea of an ideal climate. I’ve never been allowed to vote for which time period I preferred. Even if CO2 and temps had a direct correlation, it wouldn’t be appropriate to include them in the same references unless the posit is that the lag between CO2 emissions and their effect on temps is about 100 years. Which, the lag thing never made much sense to me. CO2 emissions are immediate. So, too, are the re-radiative properties of CO2, unless someone can tell me how it takes x amount of time to train CO2 to become a greenhouse gas. Or perhaps it takes so long for the CO2 to bounce the energy back to earth from the enormous altitude of 10 km.
Pop quiz!! Does anyone know how much of the energy absorbed by tropospheric CO2 is radiated back down to the earth by percent? I’ve always assumed 50%, but I don’t know that either.

Richard S Courtney
June 7, 2010 11:06 am

Friends:
I write to dispute an assertion made by several people here that determination of the cause of the recent rise in atmospheric CO2 is a trivial matter not worthy of investigation. Perhaps this assertion was best stated by Ernesto Araujo (June 7, 2010 at 7:15 am) who wrote:
“The debate about what causes CO2 concentration in the atmosphere is pointless. What matters is: does the increase in CO2 concentration cause warming? The whole thing is about Global Warming, not about CO2 concentration. To indicate that CO2 increase causes warming, you would need to present a curve where temperature oscillations match CO2 concentration, and that curve clearly does not exist for the last 1000 years, nor for the last 150 years.”
This assertion is wrong on two counts; viz. theoretical and practical.
Michael Faraday gave a clear answer to the theoretical point when the then UK Prime Minister asked him what use Faraday had for his work on electrical induction. Faraday replied that he knew of no use for his work but he was confident that somebody would find a use for it someday. Well, “somebody” did, and nobody could be reading this if “somebody” had not.
And the practical point is directly pertinent to the argument put by Ernesto Araujo and others. The hypothesis of anthropogenic global warming (AGW) is being used as justification for amending energy, industrial and economic policies world-wide. But that hypothesis is founded on three assumptions: viz
(1) It is assumed that the anthropogenic CO2 emission is the major cause of the increasing atmospheric CO2 concentration
and
(2) It is assumed that the increasing atmospheric CO2 concentration is significantly increasing radiative forcing
and
(3) It is assumed that the increasing radiative forcing will significantly increase mean global temperature.
There are reasons to doubt each of these assumptions. But if any one of them were known to be false then the entire AGW hypothesis would be known to be false.
Perhaps the recent rise in atmospheric CO2 concentration is anthropogenic, but it would seem injudicious to disrupt energy, industrial and economic policies on the basis of an assumption that the rise is anthropogenic.
Richard

barry moore
June 7, 2010 11:06 am

An interesting article, Willis; unfortunately it is full of errors and contradictions. These may have already been pointed out, but I have not had time to read all the posts.
First, the history of CO2 concentration in the atmosphere. The ice core samples are not the only proxy, but all other proxies are noticeably ignored. Regarding the ice core data I recommend J J Drake’s paper “A Simple Method to in ice core data.” A few of the key points: below a certain level all the gases are dissolved in the ice due to the pressure; there are no gas bubbles. The solubilities of the gases are very different, thus CO2 is absorbed more quickly than the other gases, even in the firn. The CO2 migrates in the ice and some gets locked into clathrates; this CO2 does not get released during the crushing process. From the isotopic analysis of the O18, the H2O and the CO2 can have a difference in age of up to 7,000 years, and there is a correlation between the age difference and the measured concentration of CO2.
I would also like to recommend Dr. Tim Ball’s paper “Measurement of Pre Industrial CO2 Levels”. This shows the 500 million year history of CO2 and global temperatures which clearly indicates zero correlation between CO2 and temperature.
Most ice cores will cover from 100 years to 1000 years in a single core, so they are at best a general average; thus they cannot be compared to daily samples taken with state of the art analytical equipment. The leaf stomata proxy however is a much more precise indicator of the CO2 content in the atmosphere in the year the leaf grew, and carbon dating is quite accurate since the half life of C14 is around 5,730 years. The leaf stomata evidence shows how erroneous and misleading the ice cores are.
Now we turn to the carbon cycle, reference IPCC AR4 section 7.3.1. In engineering there is a basic calculation called a mass balance; now try to apply some of these basic principles to Fig 7.3. If the air contains 597 GT of natural and 165 GT of anthropogenic CO2, then the ratio dissolving in the oceans and being absorbed by the biota should be in this ratio. What do we have? A ratio of 70 to 22.2 for the ocean (close) and 120 to 2.6 for the biota? Now, when the water at the surface of the oceans evaporates we have 900 to 18 in the liquid water but 70.6 to 20 when it evaporates; now this is really getting strange.
Then we have 244 GT of fossil fuel carbon released all time by human activity, but 100 GT is already sequestered in the deep ocean, leaving 144 GT between the air, ocean and biota – yet there is 169 GT in the air alone. This is what I would term political math. The make-up is the minus 140 GT anthropogenic in the land, which I assume is CO2 released by land use; if so, it must be shown as a flux, since this carbon is not anthropogenically created but is released by human activity. There is a 1.6 GT flux shown, but how can this account for 140 GT? And there is a 119.6 GT flux caused by respiration, but where is the anthropogenic component?
I created a simple mass balance program using the figure’s values for carbon content where possible, and the iteration with all ratios balanced in both directions leads to a 2.4 year residence time for all CO2 in the atmosphere. The total CO2 content was adjusted to 860 GT to bring the figure up to date, of which 79.9 GT is anthropogenic, which gives 34.3 ppm as the human impact; even taking the generous number of 0.004 deg C per ppm, this yields an impact of 0.14 deg C on the global climate.
Let us now consider your evaluation of residence time and half life here I am afraid you get very confused “The strength of the exponential decay is usually measured as the amount of time it takes for the pulse to decay to half its original value (half-life) or to 1/e (0.37) of its original value (e-folding time).” How on earth can half its value be 0.37 of the same value? more political math?
There is only one half life or residence time, and if you cut something in half enough times it becomes insignificant but it never reaches zero, and that is where the IPCC nonsense comes from. The estimate of 6 to 8 years for the half life is high and does not agree with the most recent research. I think the average is about 5 years, but by mass balance calculation it is 2.4. Your number of 211 GT per year of carbon flux yields a half life of approximately 4 years for an atmospheric content of 860 GT. I believe the cycle is much higher than 211 GT per year; check the seasonal variation in the NH levels of CO2 due to just the annual vegetation and you start to get a picture of how large the biota uptake is.
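On the “How on earth can half its value be 0.37 of the same value?” point: the two figures are different markers on the same exponential decay curve, not a contradiction. A two-line check (tau here is an arbitrary example value):

```python
# Half-life and e-folding time are two measures of the same exponential
# decay: N(t) = N0 * exp(-t / tau). After one e-folding time tau the
# pulse is at 1/e (about 0.37) of its start; after tau * ln(2), roughly
# 0.693 * tau, it is at exactly half.
import math

tau = 8.0                      # an example e-folding time, in years
half_life = tau * math.log(2)  # about 5.5 years for tau = 8

N0 = 100.0
at_tau  = N0 * math.exp(-tau / tau)        # value after one e-folding time
at_half = N0 * math.exp(-half_life / tau)  # value after one half-life
print(round(at_tau, 1), round(at_half, 1))  # 36.8 50.0
```

So a 6 to 8 year e-folding time and a roughly 4 to 5.5 year half-life describe the same decay; quoting one convention against the other is not “political math”, just a units mismatch.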

June 7, 2010 11:08 am

Are you crazy? Only a fool would believe these deniers and warmists and econazis with their base motives.
Seriously though, great post, really elucidated the topic nicely for me.

June 7, 2010 11:23 am

J. Bob @9:34 a.m.,
Interestingly, your charts look like this one. They generally seem to begin rising around 1900.

Will F
June 7, 2010 11:25 am

Willis
are you saying that there is 2,460.15 billion tons of CO2 in the atmosphere?
I have not been able to find a figure above 750 billion tons for CO2.

Richard S Courtney
June 7, 2010 11:33 am

Ian H:
At (June 7, 2010 at 9:08 am) you ask:
“Anyone who doesn’t think the CO_2 rise is due to human beings should explain where all that CO_2 we have emitted actually went if not into the atmosphere.”
But your question has been answered in two different ways above.
I answered it at (June 7, 2010 at 2:43 am) where I explained that the anthropogenic emission enters the carbon cycle.
And at (June 7, 2010 at 6:18 am) Steve Keohane explained that the anthropogenic emission gets lost in the error terms of the estimates of natural emissions.
These two different answers amount to alternative views of the same effect.
Until somebody can provide a sufficiently accurate measure of all the sources and sinks for the anthropogenic emission then those who “think” the CO2 rise has an anthropogenic cause need to answer the question as to why they “think” that.
I want to know if the cause of the recent rise is anthropogenic in part or in whole. The anthropogenic emission could be the cause of the rise, but consideration of the carbon cycle suggests that the anthropogenic emission is irrelevant to the cause of the recent rise (please see my post at June 7, 2010 at 2:43 am for an explanation of this).
Richard

Bart
June 7, 2010 11:34 am

The problem with this chart is that you are arbitrarily assuming one type of exponential decay for the anthropogenic component of CO2, and implicitly an entirely other, and faster, one for natural CO2. Aside from extremely minor isotopic distribution differences, nature essentially cannot tell the two molecules apart.
If you apply the same time constant to the much larger portion of naturally generated CO2, you will quickly find the atmosphere almost entirely composed of CO2. The only way to square this would be to hypothesize that the CO2 reservoir is saturating, and becoming less able to absorb the new, anthropogenically generated CO2. But, if that were the case, the yearly variations in CO2 you see in the MLO and other records would be progressively increasing in amplitude and warped. Instead, they are very regular for the past 50 years.
The isotopic ratio question is entirely speculative and, as others have noted and as has been dramatically demonstrated in the Mann-made temperature hockey stick, the grafting of proxy data onto measured data is highly questionable. So, overall, you have questionable evidence supporting an implausible hypothesis. There can be no doubt that the CO2 rise is mostly natural, it will eventually falter, and then researchers will actually start to look for what really caused it.
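Bart’s point about time constants can be explored with a toy two-box atmosphere/ocean model (all numbers are round illustrative figures, not measurements). It shows that a short molecular residence time, set by the large gross fluxes, is compatible with a net pulse that never fully drains, because the finite reservoirs simply re-equilibrate:

```python
# Illustrative two-box atmosphere/ocean carbon model. Gross exchange is
# fast in BOTH directions, so an individual molecule's residence time in
# the air is short; but a net pulse decays only toward the new
# equilibrium between the two finite boxes.
atm, ocean = 600.0, 2400.0      # GtC, assumed pre-pulse equilibrium
k_ao, k_oa = 0.15, 0.0375       # per year; at equilibrium 0.15*600 == 0.0375*2400

atm += 100.0                    # inject a 100 GtC pulse into the air

for year in range(50):
    flux_down = k_ao * atm      # gross air-to-ocean flux (~90+ GtC/yr)
    flux_up   = k_oa * ocean    # gross ocean-to-air flux (~90 GtC/yr)
    atm   += flux_up - flux_down
    ocean += flux_down - flux_up

# residence time of a molecule: atm / gross flux, only a few years here;
# yet the boxes re-equilibrate in the same 1:4 ratio, leaving a
# permanently higher atmosphere (one fifth of the pulse, 20 GtC)
print(round(atm - 600.0, 1))
```

In this sketch the residence time is atm / gross flux, roughly 600/90 or about 7 years, yet a fifth of the pulse stays airborne indefinitely; whether the real carbon cycle behaves like this is exactly what is being argued in this thread.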

June 7, 2010 11:35 am

@ Steven mosher
Well, he did splice……:-)
As far as germane, if the question was simply do we have more CO2 today than 150 yrs ago, then no, they aren’t very germane. However, today, any mention of CO2 has an implicit connection to a myriad of topics while I’m left to wonder how a discussion on CO2 is germane to anything other than photosynthesis.

Bart
June 7, 2010 11:48 am

One other thing that I feel a need to vent on: the question of whether to call AGW advocates “warmists”. “Warmist” is awfully tame compared to “denier”. “Denier” is clearly an intentional reference to Holocaust deniers, and should be offensive to every thinking man and woman on Earth.
The History channel and other cable channels have been showcasing retrospectives on WWII for the past couple of weeks, what with Memorial Day and D-Day remembrances. My wife and I watched the episode of “The World at War” focusing on the concentration camps just the other night. I can barely see straight when I contemplate these jerks popping off about “deniers” and recall the unimaginable barbarity of the Nazis which we were reminded of again that night. I would urge you to delete any posts snarking off about “deniers” immediately and without qualification.

FrankS
June 7, 2010 11:49 am

Not quite answering Willis’s question, but James’s comment on Roy Spencer’s blog entry sums up extremely well the degree of control on volumes of CO2 that AGW believers want to implement.
Blog here – http://www.drroyspencer.com/2010/06/warming-in-last-50-years-predicted-by-natural-climate-cycles

James Davidson says:
June 6, 2010 at 11:01 AM
CO2 levels in 1890 were 290 ppmv. ( Siple ice core.) Current levels ( Mauna Loa ) are 388 ppmv ( round it up to 390 ppmv ) for an increase of 100 ppmv over the last 120 years. The mistake a lot of people seem to make is to treat these as whole numbers, instead of what they are- the numerators of fractions. In 1890 CO2 constituted 290 millionths of the atmosphere, and this has now risen to 390 millionths of the atmosphere – an increase of 100 millionths. To express this as a fraction, multiply by 100, for an answer of 0.01%. If someone in 1890 had decided that CO2 levels should be kept constant and had succeeded 120 years later to within one hundredth of one percent, they would think they had done pretty darn well. I really cannot believe that such a small increase can have had ANY effect on global warming, and as Dr Spencer has shown, natural variation is a more likely candidate.

Willis’s calculations assume that, with the exception of sequestration, the effect from the natural world is largely static, or at least much smaller. With man’s fossil fuel contribution being only 1/24 of the volume produced by nature (Ian Plimer, p. 180: man nearly 4%, oceans nearly 60% and animals nearly 40%), it would only take a small change there to swamp and invalidate Willis’s calculations. And we know from ice core data that in the past CO2 has generally changed with an 800 year time lag on temperature, so it is extremely unlikely to have been static during this period.
So to propose controlling CO2 levels “to within one hundredth of one percent” against nature’s vastly larger variable input – how reasonable is that?
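Both numbers in the quoted comment can be right at once because they answer different questions: the change as a share of all air versus the change relative to the CO2 that was already there. A two-liner separates them (290 and 390 ppmv as quoted):

```python
# Two framings of the same change: the rise expressed as a share of the
# whole atmosphere, versus the rise relative to the CO2 itself.
old_ppm, new_ppm = 290, 390
share_of_atmosphere = (new_ppm - old_ppm) / 1e6 * 100   # percent of all air
relative_increase   = (new_ppm - old_ppm) / old_ppm * 100  # percent of the CO2
print(round(share_of_atmosphere, 2), round(relative_increase, 1))  # 0.01 34.5
```

A 0.01% change in the bulk atmosphere is simultaneously a 34.5% change in its CO2 content; which framing matters depends on whether the physics responds to air in general or to CO2 in particular.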

Honest ABE
June 7, 2010 11:58 am

My understanding is that most CO2 is produced from the decay of organic matter. Would increasing temperatures accelerate this process? Would the millions of miles of roads, often concrete or gravel, create local heating effects to do the same? These local heating effects, producers of CO2 (my assumption), would have a greater effect from locations that are rather cold like cities in Russia and such land use changes may produce isotopic signatures similar to fossil fuel use due to old/frozen plant matter finally having a chance to decay and release their ancient CO2.
Just some thoughts. Cheers.

DirkH
June 7, 2010 12:00 pm

” Steinar Midtskogen says:
June 7, 2010 at 6:59 am
Humans are most certainly the cause of the recent CO2 increase. A simple graph comparing CO2 with the population should offer an important hint:
http://voksenlia.net/met/co2/pop.jpg

Why should the number of humans be proportional to the CO2 concentration? If anything, the number of humans would have to be proportional to the rate of change of the CO2 concentration. Or do you posit a magical amount of CO2 in the air per living human individual? How should that work?

Al Gored
June 7, 2010 12:03 pm

“Given all of the issues discussed above, I say humans are responsible for the change in atmospheric CO2 … but… please be aware that I don’t think that the change in CO2 will make any meaningful difference to the temperature.”
OK. I would say it slightly differently: Given all of the issues discussed above, I say humans are responsible for some of the change in atmospheric CO2 … but… please be aware that I don’t think that the change in CO2 will make any meaningful difference to the temperature.
Love your rules Willis!

DirkH
June 7, 2010 12:04 pm

” barry moore says:
[…]
Let us now consider your evaluation of residence time and half life here I am afraid you get very confused “The strength of the exponential decay is usually measured as the amount of time it takes for the pulse to decay to half its original value (half-life) or to 1/e (0.37) of its original value (e-folding time).” How on earth can half its value be 0.37 of the same value? more political math?”
http://en.wikipedia.org/wiki/E-folding

phlogiston
June 7, 2010 12:07 pm

Bart says:
June 7, 2010 at 11:34 am
… Aside from extremely minor isotopic distribution differences, nature essentially cannot tell the two molecules apart.
Are you sure – where are your figures? As the Iranian nuclear industry is learning, isotopes of different mass do behave differently on the basis of that difference – it is not impossible for heavier C13 to settle faster than C12. This would put a spanner in the works. (“Extremely minor” – 8.3% mass difference, not quite insignificant.)

Tim
June 7, 2010 12:10 pm

Are there any ocean floor core samples that show CO2? I ask this because there are 50+ sites for those and only 8 for ice cores. It would be nice to see a larger group if we are talking about worldwide levels as opposed to localized ones. The CO2 satellite recently brought online does show that CO2 is not evenly distributed.

Arfur
June 7, 2010 12:11 pm

Thank you Willis for a balanced and informative article.
I agree that the balance of evidence suggests that mankind is responsible for most of the measured rise in CO2 since 1850, although I also agree with other posters that the proxy measurements prior to 1850 are not necessarily accurate for comparison with modern measurements.
But I do have a problem with the connection between increased CO2 and increased warming (particularly of the catastrophic kind). Can someone help me out here, because my scientific knowledge is seemingly lacking on the following reasoning:
CO2 exists at less than 400ppmv in the atmosphere (in fact, I believe ALL the GGs which have a ‘radiative forcing’ factor as described by the IPCC exist at less than 400ppmv). To my logic, therefore, each GG molecule capable of radiating is surrounded by approximately 2,500 molecules of ‘greenhouse-inert’ N2 or O2 molecules or Ar atoms. The fact that CO2 can re-radiate having absorbed LW radiation is well known, but, and here is the main point of my confusion, simply re-radiating does not directly imply significant warming. The radiation has to be absorbed by another molecule capable of absorbing radiation, and then that molecule must transfer heat by conduction to its neighbouring molecules. Given that air is a poor conductor, how is it possible that one GG molecule can conduct enough heat to enough neighbouring molecules to create a catastrophic warming? I agree a very small amount of warming is likely, but the mechanism surely has to be conduction, not radiation, and the number of neighbouring molecules which can warm by conduction must be small.
I am probably missing something important here, so could someone please advise?
Obviously I have ignored water vapour but then it is not included as a GG when it comes to radiative forcing, and I still find it hard to believe that a GG concentration of 400ppm will have that much effect even with the wv feedback.

tonyb
Editor
June 7, 2010 12:13 pm

The history of the last 10,000 years clearly tells us that – despite Dr Mann’s work – the climate goes up and down like a yo-yo. It has at times been rather warmer than today and at times much colder. If CO2 has been constant throughout this period it has surely been a very weak climate driver. We need to be told why it has now changed its characteristics and become a much more powerful climate driver in the modern era compared to the past.
The only other credible scenario is that-as would be expected- CO2 does fluctuate in some sort of reflection of the warm and cold periods the Earth experiences as oceans outgas or absorb it. What seems likely, if this alternative scenario is correct, is that the proxies-primarily ice cores-are not an accurate reflection of real world CO2 levels.
There is far too much evidence of warmer and colder times to believe that Dr Mann is correct and that his barely fluctuating temperature scenario can be set neatly besides a similarly flat co2 concentration.
So in conclusion either CO2 is, at best, a weak climate component, or that past levels have not been accurately recorded by ice cores. In either scenario surely CO2 is merely responding to the climate that preceded it, not causing it.
Tonyb

Gail Combs
June 7, 2010 12:17 pm

Oslo says:
June 7, 2010 at 3:53 am
Well, as you say – your first graph resembles the Mann hockey stick, and perhaps for good reason, as it seems to utilize the good old “trick” – splicing the instrumental record onto the proxy data.
Here is another graph, clearly showing the instrumental data (red) disjointed from the proxies:
http://www.geo.cornell.edu/eas/energy/_Media/ice_core_co2.png
So the question is, as with the hockey stick: do the figures from the two methods even belong on the same graph?
_________________________________________________________________________
Thank you for that graph. What I see as interesting is the many data points below 200 ppm, some as low as about 170 or 180 ppm. At these levels some types of plants have great difficulty growing and reproducing. Do we have evidence of massive plant die-offs to substantiate the ice core numbers?
At 180 ppm to 200 ppm, C3 plants (trees) would have a very difficult time competing with C4 plants (grasses) and could not complete their life cycles, especially in warmer or drought (ice age) conditions. Low levels of CO2 also lead to leaf loss to conserve moisture and slow rates of growth in C3 and C4 plants. A history of atmospheric CO2 and its effects on plants, animals, and ecosystems
By J. R. Ehleringer,

Elephants eat trees, that is why they developed trunks. So what did mastodons and woolly mammoths eat?
elephants:
* Spend 16 to 18 hours a day either feeding or moving toward a source of food or water.
* Consume between 130 to 660 pounds (60 to 300 kg) of food each day.
* Drink between 16 to 40 gallons (60 to 160 l) of water per day.
* Produce between 310 to 400 pounds (140 to 180 kg) of dung per day.
mastodons and mammoths:
…mammoths had ridged molars, primarily for grazing on grasses, mastodon molars had blunt, cone-shaped cusps for browsing on trees and shrubs. Mastodons were smaller than mammoths, reaching about ten feet at the shoulder, and their tusks were straighter and more parallel. Mastodons were about the size of modern elephants,
From the preserved dung of Columbian mammoths found in a Utah cave, a mammoth’s diet consisted primarily of grasses, sedges, and rushes. Just 5% included saltbush wood and fruits, cactus fragments, sagebrush wood, water birch, and blue spruce. So, though primarily a grazer, the Columbian mammoth did a bit of browsing as well. Mammoths
And how about the Giant Sloth
“The giant ground sloth was one of the enormous creatures that thrived during the ice ages. Looking a little bit like an oversized hamster it probably fed on leaves found on the lower branches of trees or bushes. The largest of these ground sloths was Megatherium which grew to the size of a modern elephant with a weight over five tons.
Giant Sloths had very large, dangerous-looking claws. Despite their size they were probably only used to strip leaves or bark from plants. Their teeth were small and blunt in keeping with their herbivore diet. Examinations of their hip bones suggests that they could stand on their hind legs to extend their grazing as high as twenty feet.

How could these creatures be eating trees when the CO2 levels were at 180 ppm and trees were all but extinct? For that matter, given browsing pressure and problems with growing enough to produce seed, why didn’t trees become completely extinct?

DirkH
June 7, 2010 12:22 pm

” Juraj V. says:
[…]
Today, the rate of CO2 rise plays well with SST data.
http://climate4you.com/images/CO2%20MaunaLoa%20Last12months-previous12monthsGrowthRateSince1958.gif
The 1998 El Nino is clearly visible, also La Nina and volcanic eruptions. But strange that the 2007 La Nina is not visible. More, as the oceans start to cool, the rate of rise stabilizes.

That’s a beautiful illustration of what Beenstock & Reingewertz found out – that the temperature anomaly cannot be (Granger-)caused by the level of CO2, but only by the differential of the CO2 level, if at all.
The thread about their paper is here:
http://wattsupwiththat.com/2010/02/14/new-paper-on/
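The level-versus-difference distinction can be illustrated with a toy series in Python (synthetic data, not the actual Beenstock & Reingewertz econometrics): a temperature anomaly driven by the annual *change* in CO2 correlates strongly with that change, and barely at all with the CO2 level itself.

```python
import numpy as np

# Toy illustration: a CO2 series whose growth *rate* oscillates, and a
# temperature anomaly driven by that rate. Correlating temperature against
# the CO2 *level* then fails, while its first difference works.
rng = np.random.default_rng(0)
t = np.arange(600)
co2 = 280.0 + 0.4 * t + 2.0 * np.sin(t / 5.0)  # level: trend + wiggle
dco2 = np.diff(co2)                             # annual change
temp = 10.0 * (dco2 - dco2.mean()) + 0.1 * rng.standard_normal(dco2.size)

corr_with_diff = np.corrcoef(temp, dco2)[0, 1]    # strong
corr_with_level = np.corrcoef(temp, co2[1:])[0, 1]  # near zero
```

This is only a correlation sketch, of course; the actual paper uses formal Granger-causality tests rather than raw correlations.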

Björn
June 7, 2010 12:23 pm

There is no doubt in my mind that the CO2 measurements from Mauna Loa and other stations of the same type around the world are solid, and that the steady exponential rise they show is directly related to human activity one way or another, the most likely causes being land use change and the fossil fuel burning used in our somewhat antique power generation technology. I also accept the ice core data as a good proxy for the baseline ~280 ppmv pre-industrial atmospheric CO2 concentration. I am just a little puzzled by the small variation (+/- 2 ppmv, I believe) it shows for the last 2000 years or so; beforehand I would have thought we should see greater variations, so there is a wisp of a suspicion that maybe we are missing something essential there, but it is not very strong or well founded.
But what prompted my comment here was this part of a comment from Ian Hayes
…” I find it interesting that the sequestration rate seems to be getting larger – it is trending above the exponential line.” ….
It reminded me that I had some time back come across a (now slightly aged: it is a critique of part of the IPCC AR3 report rather than the latest and now much ridiculed AR4) paper by Dr. Jarl Ahlbeck at the (late) John Daly’s website.
The link is here, Ahlbeck: CO2 Sink 1970-2000…
http://www.john-daly.com/ahlbeck/ahlbeck.htm
It’s an analysis of the recent past and a future forecasting exercise for the atmospheric CO2 concentration attributable to human activity; the math is nothing exotic and is easy to work through, and I at least can find no fault with it.
What struck me was his result that the fraction of human CO2 emissions ending up in the atmosphere each year had gone down from either 52% to 39% or 69% to 49%, depending on whether the estimate for land use change is included in total emissions or not.
If he is right, and this is not a spurious signal somehow related to the 1970-2000 data he is using, then there is a large CO2 sink unaccounted for in the UN dream models, eating up an ever larger slice of anthropogenic emissions, and he projects that the airborne fraction would go down below 20% by the year 2100.
I just wonder what this sink could be, if it exists.
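The airborne-fraction arithmetic behind figures like Ahlbeck’s is simple to sketch (the 2.13 GtC per ppmv conversion is the standard one; the sample numbers below are illustrative round figures, not his actual series):

```python
PPM_TO_GTC = 2.13  # gigatonnes of carbon per 1 ppmv of atmospheric CO2

def airborne_fraction(delta_ppm: float, emissions_gtc: float) -> float:
    """Fraction of a year's emitted carbon that stays in the atmosphere."""
    return (delta_ppm * PPM_TO_GTC) / emissions_gtc

# e.g. a 2.0 ppmv annual rise against 8.0 GtC of annual emissions:
f = airborne_fraction(2.0, 8.0)  # ~0.53, i.e. ~53% airborne, ~47% sunk
```

Note how sensitive the fraction is to the emissions denominator, which is exactly why including or excluding land-use-change estimates moves the answer so much.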

DirkH
June 7, 2010 12:27 pm

” Bart says:
June 7, 2010 at 11:34 am
The problem with this chart is that you are arbitrarily assuming one type of exponential decay for the anthropogenic component of CO2, and implicitly an entirely other, and faster, one for natural CO2. Aside from extremely minor isotopic distribution differences, nature essentially cannot tell the two molecules apart. ”
Plants can: “Carbon 13 is also a stable isotope, but plants prefer Carbon 12 ”
from
http://environmentalchemistry.com/yogi/environmental/200611CO2globalwarming.html

Brian D
June 7, 2010 12:28 pm

What is the estimated uptake versus outgassing of carbon in the oceans at varying SSTs? From 0°C to 30°C, what are the rates in tonnes for every 1°C?

KDK
June 7, 2010 12:31 pm

Martin says:
“Although it would be interesting to know how much (globally) fermentation produces.”….
I challenge warmists to actually live their nonsense by giving up ALL, yes ALL, carbonated beverages. Why? Because they are a major source of useless CO2 being ‘freed’ into our system. Millions of bottles an hour being opened must have some effect, and since they are a LUXURY, not a necessity of life, those most concerned would gladly give up their drink – not for a day, or a week, but forever. If not, then they are just bandwagoneers along for the ride with little belief in their own ‘belief’.
Try it. Figure it out. To warmists, ALL contributing luxury items must shut down. I guarantee you with all that I am that very few would give up a beer, or coke, or spritzer for their cause. You will know them by their actions.
Just how much CO2 is released? I don’t know, but EVERY bottle opened is a needless addition in their mind… or should be.

Hockeystickler
June 7, 2010 12:35 pm

Willis – AGW supporter sounds like a jock strap and AGW believer sounds like a cult member; I personally prefer Alarmist. It has been my experience that people who consistently preach doom and gloom are consistently wrong: e.g. Paul Ehrlich.

Billy Liar
June 7, 2010 12:39 pm

Dave Springer says:
June 7, 2010 at 9:22 am
‘You see, starting with the Clean Air Act of 1963 the US has dramatically reduced the amount of soot it pumps into the atmosphere. No other country in the world has come close to matching that effort. ‘
I disagree. The UK’s Clean Air Act dates from 1956. The fall of the Berlin Wall and the transformation of the former USSR did much the same for vast swathes of eastern Europe and former Soviet countries.
Other commenters have suggested that this mass removal of aerosols from the atmosphere might have contributed to whatever warming we saw in the latter part of the 20th century.

Bart
June 7, 2010 12:42 pm

phlogiston says:
June 7, 2010 at 12:07 pm
You are arguing an entirely different matter. I am not speaking of two molecules specifically defined on a technical level. I am speaking of a naturally occurring molecule and an anthropogenically released one. One sample may be ever so slightly more likely to have a different number of neutrons than the other, but across the entire ensemble of natural and anthropogenically released molecules, there is little difference.

June 7, 2010 12:44 pm

KDK,
Don’t forget toilet paper, which is made from beneficent CO2-sequestering trees. According to Laurie David, one sheet is enough. Anything more is a luxury.
I wouldn’t want to be downwind from Ms David.

Bart
June 7, 2010 12:45 pm

DirkH says:
June 7, 2010 at 12:27 pm
You did it, too. Sorry I confused the matter with an unfortunate turn of phrase. I am not separating molecules into 12C and 13C. The difference in isotopic distribution in naturally and anthropogenically generated CO2 is negligible in terms of how rapidly they should be reabsorbed.

Malaga View
June 7, 2010 12:52 pm

The Real Co2 site by Ernst-Georg Beck is very interesting:
http://www.biomind.de/realCO2/realCO2-1.htm
Especially the Atmospheric CO2 Background 1826-1960 diagram:
http://www.biomind.de/realCO2/bilder/CO2back1826-1960eorevk.jpg
Seems a lot more natural and believable than the ice core flat lining….

Editor
June 7, 2010 12:53 pm

Willis,
Another excellent essay. It would have been a perfect essay if you had been able to integrate the plant stomata and GeoCarb data with the ice core data. I think you’d find that the resolution of the ice core data are insufficient and that they consistently run 15 to 40 ppmv lower than the global average. Van Hoof et al., 2005 demonstrated that the ice core CO2 data essentially represent a low-frequency, century to multi-century moving average of past atmospheric CO2 levels.
This blog post might be of some interest: CO2: Ice Cores vs. Plant Stomata.
Based on the stomata data, I think that century-scale variations of CO2 from 275 to 360 ppmv have not been uncommon during the Holocene. I just don’t think the ice cores can resolve those variations.
The West Antarctic Ice Sheet Divide Ice Core Project may yield some much higher resolution Antarctic ice core data than has thus far been available.
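Van Hoof et al.’s moving-average point can be sketched numerically (the excursion size and smoothing width below are illustrative assumptions, not their measured values): a 60-year, 40 ppmv excursion seen through a 150-year average survives only as a 16 ppmv bump.

```python
import numpy as np

# Sketch: if an ice core records a century-scale moving average, a real
# multi-decadal CO2 excursion is flattened in the record.
t = np.arange(1000)
co2 = np.full(t.size, 280.0)   # baseline 280 ppmv
co2[500:560] += 40.0           # a 60-year, +40 ppmv excursion

window = 150                   # assumed smoothing width in years
kernel = np.ones(window) / window
recorded = np.convolve(co2, kernel, mode="same")  # "what the core sees"

peak_real = co2.max() - 280.0          # 40 ppmv
peak_recorded = recorded.max() - 280.0  # 40 * 60/150 = 16 ppmv
```

The attenuation is just the ratio of excursion length to smoothing window, which is why short, sharp events cannot be ruled out by a smoothed record.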

Bart
June 7, 2010 12:59 pm

Bart says:
June 7, 2010 at 11:34 am
One way I may have led others astray is that, in the first sentence of my third paragraph, I did talk about the 13C/12C ratio question. But this sentence was unrelated to the previous two paragraphs. In the first two, I explain why the anthropogenic attribution hypothesis is implausible. In the third, I explain the reasons why I believe the evidence is questionable. I sum these together in the penultimate sentence:
“So, overall, you have questionable evidence supporting an implausible hypothesis.”

Malaga View
June 7, 2010 1:00 pm

Willis: It would be very interesting to see you splice the Mauna Loa CO2 data onto Beck’s CO2 data in one of your nice graphs… I am not holding my breath… but I will keep my fingers crossed 🙂

kwik
June 7, 2010 1:12 pm

” Steinar Midtskogen says:
June 7, 2010 at 6:59 am
“Humans are most certainly the cause of the recent CO2 increase. A simple graph comparing CO2 with the population should offer an important hint:”
Have you checked against the population of termites? Because if all humans disappeared, we would for sure be replaced by just as many kilograms of insects as there are kilograms of humans.

Gail Combs
June 7, 2010 1:15 pm

BBk says:
June 7, 2010 at 4:11 am
“So, what should we expect? In the early decades of a pulse of CO2 being added to the atmosphere, with a “fresh” ocean awaiting, the near exponential decay of CO2 is possible. But as the surface layers of the ocean become more saturated with CO2, its ability to absorb more CO2 declines, and the removal of CO2 from the atmosphere departs from the exponential, and becomes much slower. ”
This assertion ignores diffusion of CO2 from the surface to the lower levels of the ocean. If diffusion (removal of CO2 from the surface to the lower volume) happens at a rate faster than or equal to the absorption of CO2 from the atmosphere, then the ocean can be considered “fresh” until the entire volume “fills.” While, in theory, eventually the ocean would saturate, the rate would be very slow.
Have there been any studies about the rate of diffusion of CO2 through the ocean layers?
My gut feeling is that since we’re dealing with Volume vs Area, that diffusion would, indeed, be a much larger value.
______________________________________________________________________
Your synopsis of the Carbon Cycle completely ignores the fact that CO2 is taken out of the system as a solid and deposited at the bottom of the ocean.
“Calcium occurs in water naturally. Seawater contains approximately 400 ppm calcium…..
The reaction mechanism for carbon weathering is:
H2O + CO2 -> H2CO3 and CaCO3 + H2CO3 -> Ca(HCO3)2
And the total reaction mechanism:
CaCO3 (s) + CO2 (g) + 2H2O (l) -> Ca2+ (aq) + 2 HCO3- (aq)
The product is calcium hydrogen carbonate.”

See: http://www.lenntech.com/periodic/water/calcium/calcium-and-water.htm#ixzz0qCQwTEHV
“…CO2 is also removed from solution to a small extent when proteins are present, by direct combination with amino side groups to form carbamino compounds….” http://www.acidbase.org/index.php?show=sb&action=explode&id=63&sid=66
Then there is the formation of shells and coral that is then deposited on the bottom of the sea, later to become limestone. Not to mention all the ocean plant life utilizing CO2 to grow and then become fish food…..
There is a heck of a lot of information about the carbon/CO2 cycle with the math here from an EPA research scientist: http://www.kidswincom.net/climate.pdf
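The total weathering reaction quoted above consumes one mole of atmospheric CO2 per mole of calcite dissolved; a minimal check of the mass arithmetic (the molar masses are standard values, the calculation my own illustration):

```python
# Molar masses in g/mol (standard values):
M_CO2 = 44.01
M_CACO3 = 100.09

# Total weathering reaction quoted above:
#   CaCO3 + CO2 + 2 H2O -> Ca2+ + 2 HCO3-
# One mole of atmospheric CO2 is consumed per mole of calcite weathered,
# so each gram of CaCO3 dissolved draws down ~0.44 g of CO2.
co2_per_gram_calcite = M_CO2 / M_CACO3
```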

June 7, 2010 1:16 pm

Statement written for the Hearing before the US Senate Committee on Commerce, Science, and Transportation
Climate Change: Incorrect information on pre-industrial CO2
March 19, 2004
Statement of Prof. Zbigniew Jaworowski
Chairman, Scientific Council of Central Laboratory for Radiological Protection
Warsaw, Poland
The notion of low pre-industrial CO2 atmospheric level, based on such poor knowledge, became a widely accepted Holy Grail of climate warming models. The modelers ignored the evidence from direct measurements of CO2 in atmospheric air indicating that in 19th century its average concentration was 335 ppmv[11] (Figure 2). In Figure 2 encircled values show a biased selection of data used to demonstrate that in 19th century atmosphere the CO2 level was 292 ppmv[12]. A study of stomatal frequency in fossil leaves from Holocene lake deposits in Denmark, showing that 9400 years ago CO2 atmospheric level was 333 ppmv, and 9600 years ago 348 ppmv, falsify the concept of stabilized and low CO2 air concentration until the advent of industrial revolution [13]. ”
11. Slocum, G., Has the amount of carbon dioxide in the atmosphere changed significantly since the beginning of the twentieth century? Month. Weather Rev., 1955(October): p. 225-231.
12. Callendar, G.S., On the amount of carbon dioxide in the atmosphere. Tellus, 1958. 10: p. 243-248.
13. Wagner, F., et al., Century-scale shifts in Early Holocene atmospheric CO2 concentration. Science, 1999. 284: p. 1971-1973.
Sorry, again Figure 2 would not copy. Please note the last sentence, about stomata. I have included references (11, 12, & 13).

1DandyTroll
June 7, 2010 1:21 pm

Today they can apparently create 1,500 pounds of coke from 1 ton of coal. But it still takes coal to create more efficient coal.
But back in the day, to which it seems every greenie wants to take us, what they called efficiency was utterly terrible by today’s standards; of course, it would’ve been pretty pissy otherwise.
Then it takes “coal”, so as not to confuse some people, to make iron; of course it took coal to mine the iron. It took coal to shape the iron that required coal to mine the iron that required coal to make the coal to melt and shape and separate.
It takes coals and “coals” to make steel. It takes a lot of coal just to make the tools you need. So it takes more coal to shape that steel into the tools you use to make and shape the steel with; then it takes more to do useful crap you can sell.
It takes an enormous amount of coal to be industrious today; it took even more in 1850, and even more before that, due to less efficiency. And that’s just iron and steel; add to that copper and bronze, lead, tin, et cetera. Heh, how about 500 years of gunpowder use in Europe between the 14th and 19th centuries? How much carbon, I wonder, was emitted in the whole process of creating the cheapest, lightest, some 60-70 pound iron stove, which pretty much every family had?
The more I try to wrap my head around it, the concept of 0.5 GT of carbon at 1850 and before, as depicted by one of the graphs, just gets more and more silly. If something’s been constant, or thereabouts, for a very long time in statistics, chances are pretty darn good it probably didn’t start at zero or anywhere close.

Malaga View
June 7, 2010 1:24 pm


Willis Eschenbach says:
June 7, 2010 at 1:13 pm
On the Mauna Loa thread this was just discussed a couple days ago, including a comment from Beck himself. My conclusion was that the Beck records do not represent the background CO2 level … and Beck agreed.

And the ice core does a better job for the period 1826-1960!!!

June 7, 2010 1:26 pm

Thanks I had not heard of the fallacy of the excluded middle.

June 7, 2010 1:29 pm

By the way, I do not disagree with the idea that some of the increase in CO2 is from human activity; I just think the amount is too high.

Gail Combs
June 7, 2010 1:37 pm

Hoppy says:
June 7, 2010 at 5:00 am
“Does the CO2 level in the trapped ice represent the composition of the original air or is it the final equilibrium concentration between the trapped air and compressed snow. If it is an equilibrium then it would be a low level and very constant like that shown in Figure 1.
http://www.igsoc.org/journal/21/85/igs_journal_vol21_issue085_pg291-300.pdf
CO2 in Natural Ice
Stauffer, B | Berner, W
Symposium on the Physics and Chemistry of Ice; Proceedings of the Third International Symposium, Cambridge (England) September 12-16, 1977. Journal of Glaciology, Vol. 21, No. 85, p 291-300, 1978. 3 fig, 5 tab, 18 ref.
Natural ice contains approximately 100 ppm (by weight) of enclosed air. This air is mainly located in bubbles. Carbon dioxide is an exception. The fraction of CO2 present in bubbles was estimated to be only about 20%. The remaining part is dissolved in the ice…..”

________________________________________________________________________
Thank you very much for this bit of research. Note the DATE: September 12-16, 1977. This was written before skeptical scientists were muzzled.
As a chemist who graduated in 1972, I can tell you that there were Gas Chromatographs, Infrared Spectrophotometers, Atomic Absorption, Mass Spec and other modern analytical tools available at that time, and analysis to ppm levels was routine.

June 7, 2010 1:54 pm

DirkH says:
June 7, 2010 at 12:00 pm
“Why should the number of humans be proportional to the CO2 concentration? If anything, the number of humans would have to be proportional to the differential of the CO2 concentration. Or do you posit a magical amount of CO2 in the air per living human individual? How should that work?”
kwik says:
June 7, 2010 at 1:12 pm
“Have you checked against the population of termites? Because if all humans disappeared, we would for sure be replaced by just as many kilograms of insects as there are kilograms of humans.”
I do not posit an exact correlation, but at least it’s closer than CO2 and temperature. The amount of CO2 in the lungs or the mass of humans is of course insignificant. Human activity is more than breathing. And somehow the impact on CO2 seems to be the same for a pre-industrial society as a modern one. I think it will be difficult to find ways to reduce CO2, if that is what we want to do, unless this observation is explained.

Bart
June 7, 2010 2:07 pm

Willis Eschenbach says:
June 7, 2010 at 1:52 pm
“Run a low pass filter on the Mauna Loa Observatory data and what do you get? You basically get the original data back, because there is so little high frequency variation in the MLO data.”
Depends on the order of the filter, and how low you set the corner frequency, and whether any dominant modes are located at a zero or notch of the filter. Put it through a 12th order Butterworth with 200 year corner frequency, and you will see virtually no recent rise at all.
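Bart’s claim is easy to check on synthetic data; a minimal SciPy sketch (the flat-then-ramp series and the corner choice are illustrative assumptions, not the MLO record):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Synthetic annual "CO2-like" record: flat for 850 years, then a
# 150-year modern rise of 0.6 ppm/yr.
years = np.arange(1000)
co2 = np.where(years < 850, 280.0, 280.0 + 0.6 * (years - 850))

# 12th-order Butterworth low-pass with a 200-year corner period.
# Annual sampling -> Nyquist = 0.5 cycles/yr, so Wn = (1/200)/0.5.
# Second-order sections keep a filter this sharp numerically stable.
sos = butter(12, (1.0 / 200.0) / 0.5, output="sos")
smoothed = sosfiltfilt(sos, co2)  # zero-phase (forward-backward) filtering

# The filter caps how fast the output can change: the steep modern
# ramp comes out flattened and smeared over centuries.
max_raw_slope = np.abs(np.diff(co2)).max()
max_smoothed_slope = np.abs(np.diff(smoothed)).max()
```

This is the general point about filter choice: whether the "recent rise" survives depends on the order and corner you pick, which is why low-pass arguments about the MLO record need their parameters stated.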

Neil
June 7, 2010 2:07 pm

This is another excellent thread – thank you, Anthony and all. And I particularly value Richard Courtney’s contributions to it.
Richard S Courtney says:
June 7, 2010 at 2:43 am
I do not know if the cause of the increase is in part or in whole either anthropogenic or natural, but I want to know.
Spoken as a true scientist, Sir.
And Richard S Courtney says:
June 7, 2010 at 11:06 am
(My response). You talk of AGW, but for me the accusation is CAGW. So, for me, you left out of your argument:
(4) It is assumed that an increase in mean global temperature, of the magnitude predicted by those who accept the first three assumptions, will have negative consequences for human civilization.
Cheers,
Neil

phlogiston
June 7, 2010 2:13 pm

Willis Eschenbach says
June 7, 1:20 pm
You are again conflating residence time (6-8 years or so) with the half life (much longer)
Not so – residence time is half life / ln 2 (which is 0.693). Thus t1/2 is a bit shorter than residence time (tau).

June 7, 2010 2:38 pm

Steve Fitzpatrick says:
June 7, 2010 at 8:46 am
So, while CO2 absorption certainly takes place, the “carbonation” process is extremely slow …

My understanding of the way concrete/cement works is that over time calcium silicate is formed, which is the hardening process. That is why sand is added. So the net effect of manufacture and use is the conversion of CaCO3 to CaSiO3, with permanent displacement of CO2.

DirkH
June 7, 2010 2:46 pm

“Steinar Midtskogen says:
[…]
I do not posit an exact correlation, but at least it’s closer than CO2 and temperature. […] I think it will be difficult to find ways to reduce CO2, if that is what we want to do, unless this observation is explained.”
It’s a spurious correlation. Replace the number of humans with total world GDP or anything else that grows roughly exponentially… say the total of freeway kilometers globally. Without a physical mechanism or hypothesis, there are too many things you can correlate.
(Now some people will say, “But what Beenstock and Reingewertz did was analyze possible causality without looking at a possible physical mechanism, so that’s unphysical as well and thus not important.” The answer to that would be: Beenstock and Reingewertz have shown that there cannot be a direct causation of the temperature anomaly by the total CO2 level, because the statistical properties don’t match. They have EXCLUDED a possibility, not suggested a new correlation as you do here.)

Z
June 7, 2010 3:01 pm

Actually, I have a bit of a problem: all this CO2 rising seems to be post-1950, but the industrial age started in the early 1800s. Parts of England, France, Poland and Germany were covered in soot from coke and steel production by the 1870s, and the US was ramping up to speed in Pennsylvania, but the increases shown in the charts don’t reflect this. From the 1960s on, manufacturing has become cleaner with scrubbers, etc. – even with China and India. The biggest change I can see in human fuel usage that follows the CO2 charts is the automobile.
It is my personal theory that the increase in CO2 is attributable to the Green Revolution.
Although productivity has increased due to the use of fertilisers etc., this is only the productivity of things we *want* to grow. If you look at a field full of lettuce, yes, there are plenty of lettuce, but there are also literally acres of bare soil where nothing is growing. Useful productivity is up, total productivity is down.
Farming is not the art of growing things. I can clear an area of soil and within 2 weeks it will be choked with plants of one sort or another. Farming is the art of killing stuff you don’t want growing, whether they be plants, insects or animals.
The rise in carbon dioxide is all the weeds that don’t grow every year now. That’s a relief for the other plants, which aren’t now choking for air.

Bart
June 7, 2010 3:04 pm

Willis Eschenbach says:
June 7, 2010 at 2:44 pm
“No, I’m not. The ice core data indicates a system at general equilibrium. Exponential decay relates to the decay of pulses of input to a system at equilibrium, not to the flows that make up that equilibrium.”
The system has to be continuous. You cannot separate the dynamics of the equilibrium from the dynamics in a neighborhood of the equilibrium like that. It does not describe any physical system in this universe.

kadaka (KD Knoebel)
June 7, 2010 3:06 pm

Graph of Vostok ice core data found lying around Wikipedia (Wikimedia). It is listed as public domain and sourced as US Government work, but the wiki version is colorized while the source is black and white; some may find the B&W one easier to read.
Note at the bottom where they try to match up insolation peaks with 18O concentrations. What exactly are they trying to show there?
Other interesting things:
This graph shows (by eyeball estimation) the atmospheric CO2 levels dropped as low as around 180 ppm about 12,000 years ago. However other research concerning plants (link) shows that during the Last Glacial Maximum the plants experienced CO2 concentrations as low as 110 ppm. From this, as opposed to the normal line of inquiry, the question arises as to why the ice cores show such a high level of CO2.
The past 10,000 or so years look very “noisy” with regards to temperature. It is noticeable though how the temperatures seem confined to a certain range while CO2 concentrations shot up from about 260 to about 285 ppm without an accompanying rise in temperature.
This graph may be compared to this one which was found used at the “Simple English Wikipedia.” It is sourced from Petit et al (1999) using data found at this NOAA-NCDC site. It also shows recorded dust levels (Question, is soot included?) with a note on the linked wiki page saying “Higher dust levels are believed to be caused by cold, dry periods.” As seen by the peaks, the levels of dust have risen considerably since 350,000 years ago.
I am requesting help on “translating” that last wiki page. By the listed edit summary, there was a version championed by and edited by William Connolley, last change on 12 Feb 2006. File history though only lists one version, the current one, dated 15 Feb 2006, from a user not appearing in the edit summary. Was there an earlier version used, the mention of it removed from the File history?
(Side note: For those upset with Wikipedia’s engineered pro-(C)AGW bias, you do not want to see what was done to the “Simple English” version likely to be used by young schoolchildren. Found in Carbon Dioxide, a short and simple “Intro to Science” article: “Overall, this climate change causes global warming, but it can also make winters much longer and colder in some areas.” Ugh.)

Z
June 7, 2010 3:06 pm

Willis,
IMHO you need to divide the atmosphere into two parts. The part where liquid water exists, and the part where it does not. The dwell time of a soluble gas in a rain storm is going to be radically different from its dwell time over a frozen pole.

barry moore
June 7, 2010 3:11 pm

Dirk H – Willis’s wording was very poor. I see that the residence time is a different length of time from the e-folding time, but the plot of the quantity of CO2 with respect to time for a released pulse of CO2 is the same, so why confuse the issue? The concept of the time required for the pulse to reduce to 50% (residence time) is much more comprehensible. Half-life in nuclear terminology is the same value, i.e. the time to reach 50% of the original activity level, so either we have a conflict of terms between different technologies or Willis is confusing half-life with e-folding, which are different values.
Phlogiston – gases are very different from liquids; the law of partial pressures applies, so gases do not separate according to density.
Arfur – you did acknowledge H2O at the end of your post. H2O is by far the most powerful so-called GG, but it varies from 35 ppm at –60°C in the polar regions to 40,000 ppm in the tropics. Where is the tipping point, you might ask? Good question: there is not one.
Bjorn – take a look at the CO2 record. “Exponential increase”? Please check your dictionary.
For those interested in actual science, not manipulated, cherry-picked statistics from dubious proxies, try John Nicol’s paper, which is a true scientific evaluation, based on the laws of physics, of the so-called greenhouse gas effect.
The stomata proxy is very robust and has been verified by data from 1952 to 1995 correlating stomata configurations with actual measured CO2 levels. Unlike the tree ring farce.

John Finn
June 7, 2010 3:28 pm

Malaga View says:
June 7, 2010 at 12:52 pm
The Real Co2 site by Ernst-Georg Beck is very interesting:
http://www.biomind.de/realCO2/realCO2-1.htm
Especially the Atmospheric CO2 Background 1826-1960 diagram:
http://www.biomind.de/realCO2/bilder/CO2back1826-1960eorevk.jpg
Seems a lot more natural and believable than the ice core flat lining….

So a drop of ~80 ppm in a few years around 1940 is believable, is it?

Stephen Wilde
June 7, 2010 3:36 pm

An earlier poster said:
“Warming oceans. Natural warming of the ocean releases dissolved CO2 like a glass of beer.”
and Willis replied:
“Again, I discuss this in the head post, and it is not large enough to cause the recent rise either.”
Willis,
On a year-to-year basis the Mauna Loa changes are pretty small and could well be explained by variations in oceanic CO2 absorption rates.
Only if one looks at the entire post-1850 rise as a single block does it seem unlikely that the oceans are a big enough contributor to the whole observed change.
However, that could be explained by the discontinuity issue, whereby a proxy that fails to register the signals of the MWP and LIA is grafted onto far more sensitive modern methods, which clearly would have been sensitive enough to register them.
Thus the oceans could indeed be responsible.

John Finn
June 7, 2010 3:40 pm

phlogiston says:
June 7, 2010 at 2:13 pm

Willis Eschenbach says
June 7, 1:20 pm
You are again conflating residence time (6-8 years or so) with the half life (much longer)


Not so: residence time is half-life / ln 2 (ln 2 ≈ 0.693), i.e. t½ = τ · ln 2, so t½ is a bit shorter than the residence time (τ).
There might be some confusion over the definition of ‘residence time’ here. I think Willis is referring to the average lifetime (residence time) of a CO2 molecule in the atmosphere. This is only a few years. However, this is not the same thing as the time taken for a pulse of CO2 to be removed from the atmosphere. In this case, both the half-life and e-folding time are much longer.
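The relation between half-life and e-folding time that John describes can be made concrete. A minimal sketch, assuming pure exponential decay; the 7-year and 31-year values are illustrative stand-ins for the two quantities being conflated (molecular residence time versus pulse decay), not measured constants:

```python
import math

# For a pure exponential decay exp(-t / tau), the half-life and the
# e-folding time tau are related by t_half = tau * ln(2) ~= 0.693 * tau,
# so the half-life is always a bit shorter than the e-folding time.
def half_life(tau):
    return tau * math.log(2)

residence_tau = 7.0   # years: turnover time of an individual CO2 molecule
pulse_tau = 31.0      # years: decay time of an excess CO2 pulse

print(half_life(residence_tau))   # a bit under 7 years
print(half_life(pulse_tau))       # a bit under 31 years
```

Whichever pair of numbers one prefers, the point stands: the two time constants describe different processes, and converting between half-life and e-folding time is just a factor of ln 2.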

bubbagyro
June 7, 2010 3:45 pm

Gail:
Here is the direct answer to your question from the good scientists at Scripps Oceanographic:
AHN Jinho (1, 2); HEADLY Melissa (1); WAHLEN Martin (1); BROOK Edward J. (2); MAYEWSKI Paul A. (3); TAYLOR Kendrick C. (4)
Author affiliations:
(1) Scripps Institution of Oceanography, University of California-San Diego, La Jolla, California 92093-0225, USA
(2) Department of Geosciences, Oregon State University, Corvallis, Oregon 97331-5506, USA
(3) Climate Change Institute, University of Maine, 303 Bryand Global Sciences Center, Orono, Maine 04469-5790, USA
(4) Desert Research Institute, University of Nevada, 2215 Raggio Parkway, Reno, Nevada 89512-1095, USA
Abstract
One common assumption in interpreting ice-core CO2 records is that diffusion in the ice does not affect the concentration profile. However, this assumption remains untested because the extremely small CO2 diffusion coefficient in ice has not been accurately determined in the laboratory. In this study we take advantage of high levels of CO2 associated with refrozen layers in an ice core from Siple Dome, Antarctica, to study CO2 diffusion rates. We use noble gases (Xe/Ar and Kr/Ar), electrical conductivity and Ca2+ ion concentrations to show that substantial CO2 diffusion may occur in ice on timescales of thousands of years. We estimate the permeation coefficient for CO2 in ice is ~4 × 10^-21 mol m^-1 s^-1 Pa^-1 at -23°C in the top 287 m (corresponding to 2.74 kyr). Smoothing of the CO2 record by diffusion at this depth/age is one or two orders of magnitude smaller than the smoothing in the firn. However, simulations for depths of ~930-950 m (~60-70 kyr) indicate that smoothing of the CO2 record by diffusion in deep ice is comparable to smoothing in the firn. Other types of diffusion (e.g. via liquid in ice grain boundaries or veins) may also be important but their influence has not been quantified.

Gail, your instincts are correct:
1) All CO2 analytical methods, wet or dry, give close results; they are precise methods.
2) Of course, the diffusion calculated by Scripps for CO2 shows that all results are “smoothed”, another word for equilibrated.
3) Over hundreds or thousands of years the results will arrive, by Fick’s laws, at the same average ambient level for that time period.
Given enough time (from the Scripps results, probably in less than a hundred years), ice under a thousand meters thick will behave as if on the surface, and will equilibrate to the atmospheric level in an asymptotic fashion.
Bottom line: if CO2 has been constant in the atmosphere at 300-400 ppm over the last few hundred years, all of the cores of the last 10,000 years will show a similar quantitative result for CO2 concentration, regardless of what the concentration was at the time the CO2 was incorporated, approaching 300-400 ppm asymptotically.
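The asymptotic equilibration bubbagyro describes can be illustrated with a toy relaxation model (this is a schematic sketch, not the Scripps permeation calculation): each ice layer drifts toward the mean of its neighbours, so a trapped high-CO2 layer flattens toward the ambient level:

```python
# Toy diffusive smoothing: each interior layer relaxes toward the mean of
# its neighbours each step. A single high-CO2 layer flattens out over many
# steps, approaching the surrounding level asymptotically.
def smooth_step(layers, rate=0.1):
    out = layers[:]
    for i in range(1, len(layers) - 1):
        neighbour_mean = 0.5 * (layers[i - 1] + layers[i + 1])
        out[i] += rate * (neighbour_mean - layers[i])
    return out

profile = [290.0] * 9   # ppm in nine notional layers
profile[4] = 420.0      # one anomalously high layer
for _ in range(2000):   # many steps, standing in for thousands of years
    profile = smooth_step(profile)
print(round(max(profile) - min(profile), 2))   # spread collapses toward 0
```

The qualitative point is the one at issue: if diffusion operates, an original spike (or dip) in trapped CO2 is smeared toward the long-term average, which is exactly why the magnitude of the permeation coefficient matters.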

June 7, 2010 3:47 pm

I have a few questions for Willis regarding CO2 and temperature.
1) Is he aware of any other CO2 measurement stations besides Mauna Loa (which sits atop the world’s largest active volcano surrounded by the world’s largest CO2 emitting ocean)?
2) Shouldn’t the summary findings of Beck 2007 of 90,000 CO2 measurements taken by hundreds of scientists during the last 200 years be taken into consideration?
3) What is the explanation for CO2 increases lagging by about 800 years the temperature increases documented in ice core records?
4) Why is there no correlation between the Scotese temperature data and Berner CO2 data for the last 600 million years?

Gail Combs
June 7, 2010 3:55 pm

“Statement of Prof. Zbigniew Jaworowski
Chairman, Scientific Council of Central Laboratory for Radiological Protection
Warsaw, Poland
….The problem with Siple data (and with other shallow cores) is that the CO2 concentration found in pre-industrial ice from a depth of 68 meters (i.e. above the depth of clathrate formation) was “too high”. This ice was deposited in 1890 AD, and the CO2 concentration was 328 ppmv, not about 290 ppmv, as needed by man-made warming hypothesis…….”

______________________________________________________________________
HMMMmmmm 328ppm CO2 for the year 1890.
I cross checked that against the detailed scientific examination of historic CO2 measurements by Ernst Beck and it is within his margin of error. http://www.biomind.de/realCO2/
I then went to Beck’s site: http://www.biokurs.de/eike/daten/leiden26607/leiden9e.htm
1883: 266 samples with an average of 335 ppm
What is really really interesting is Barrow 1947-1948 data at 420 ppm! (average of 330 samples) It is noted that the Keeling samples (1972 to 2004) are transported from Barrow Alaska to California before they are analysed. http://www.biokurs.de/eike/daten/leiden26607/leiden6e.htm

Bart
June 7, 2010 4:02 pm

Willis,
I would like to ask how you performed the calculation to produce the Calculated Airborne line Figure 4. The proper way would be to perform a convolution of the emissions with an exponential weighting with time constant of 31 years. To get the endpoint, you would then have to extrapolate the emissions data forward. Or, did you just scale it somehow?
I would also like to know more about the source of the “emissions” data. I have an extremely tough time believing we have anything like an accurate measure of worldwide emissions stretching back to 1850. Is this really just tabulated data of things like fuel consumption and cement production scaled for their theoretical partial CO2 production based on the combustion technology of the time, or were there other assumptions or observation data blended in? You might consider showing a plot of the yearly differences (delta-production year-to-year) as this might more clearly show some blips which most of us would expect for, e.g., the WWII years.
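The convolution Bart describes can be sketched as follows. The emissions series here is purely illustrative (smooth 3%-per-year growth, not the actual 1850-onward dataset), and τ = 31 years is the time constant under discussion:

```python
import math

def airborne_from_emissions(emissions, tau=31.0):
    """Convolve yearly emissions with an exponential impulse response:
    each year's emission decays as exp(-age / tau) in later years."""
    airborne = []
    total = 0.0
    decay = math.exp(-1.0 / tau)
    for e in emissions:
        total = total * decay + e   # one convolution step per year
        airborne.append(total)
    return airborne

# Illustrative emissions growing ~3%/year from 1 unit (NOT the CDIAC record)
emissions = [1.03 ** n for n in range(160)]   # nominally 1850..2009
result = airborne_from_emissions(emissions)
print(result[-1])
```

This is the “proper way” in Bart’s sense: the endpoint reflects the weighted history of all prior emissions rather than a simple rescaling of the current year.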

tallbloke
June 7, 2010 4:05 pm

Willis Eschenbach says:
June 7, 2010 at 12:59 pm
I thought my response to Dr. Ravetz was free of ad hominem attacks, but of course, YMMV.
Pick a post, any post…
Willis Eschenbach says:
February 15, 2010 at 7:15 pm
Ooooh, poor widdle baby Jerry, we insulted his cerebral sensitivities by calling him a big bad Marxist so he took his toys and went home …
Now, you claim that he “resigned from politics”, but he sure hasn’t resigned from Marxism …

Scientific debate is so edificatory innit?
😉

barry moore
June 7, 2010 4:05 pm

For all those people who are taken in by the Willis smoke and mirrors dog and pony show I sincerely hope you will remember these days when Obama inflicts his carbon tax on you indirectly via the energy companies and has poured billions into totally uneconomical white elephants called renewable energy projects. This will cost jobs and cripple an already staggering economy.
The alternative is quite simple: assume the worst, that global warming is an unstoppable natural cycle, and mitigate against it. Rising sea levels at 25 cm per century? We can build sea walls to protect low-lying areas; the Dutch have been doing it for centuries. Droughts and floods? Computer models show that in a warmer world 10% will have less rain and soil moisture and 90% more, so let us build irrigation systems and flood control systems.
This will create useful jobs and, whatever happens, will increase our productivity.
The current CO2 reduction proposals will have absolutely no effect on CO2 levels and will cause a negative economic impact on the average taxpayer of between $2,500 and $3,000 per year.
People talk glibly of doubling the CO2 level. Where on earth is the 860 Gt of carbon going to come from to double the CO2 level? Not from humans, that’s for sure.
In the last 500 million years CO2 levels have been in excess of 1000 ppm for 80% of the time, and there were no tipping points or disasters; this talk is absolute nonsense.
Realistically, what does the future hold for the climate? We are in the worst solar minimum for over 100 years, with no end in sight; the PDO and AMO have swung into a cold phase which, according to historical records, could last 40 years; according to the Argo float buoy data the oceans are cooling down; and according to satellite data the troposphere is not performing anything like the IPCC computer programs predicted. All the indications are that we have peaked out of the 20th century warm period and are heading into a down period, but there is no guarantee of this, so the mitigation actions, as well as research into an economical, sustainable energy supply, should proceed.
So please let us stop running around like chickens with our heads cut off and get down to some logical, cost-effective, long-term planning.

J. Bob
June 7, 2010 4:31 pm

Smokey @11:23
I haven’t got to it yet, but I would like to look at the ~11-year cycles in the charts I posted. I would add an 8-12 year band-pass filter to the ~40-year low-pass, and see how it compares with the irradiance graph.
Prior to 1800, all the temperature records on the graphs were from Europe. Only around 1850 do a few non-European records begin to appear, so from 1900 onward the shape begins to approach the HadCET shape.

Steve Garcia
June 7, 2010 4:33 pm

I can’t think of any obvious other explanation for that precipitous recent rise.

I am going to lob a hand grenade into this, just to see what comes of it:
Nixon. China. 1972.
1976: Mao and Chou En Lai both die, opening the door to some level of capitalism in China. This will take a few years to take seed.
Early 1980s: US starts trade missions in China. This accelerates through the 1980s. The US begins the era of the maquiladoras, US companies operating just across the Rio Grande in northern Mexico.
1990s: Full-fledged flow of manufacturing out of the US to Mexico and increasingly to China. Jobs off-shored to Mexico begin moving to China. China’s population begins to become consumers, vastly increasing the market for Chinese goods, many of which used to be US goods.
What is the tie-in with the CO2 hockey stick?
Poorer regulation of emissions of all sorts in Mexico and China versus the US. What used to be produced in a country with emissions standards was, in not much more than a decade, produced in countries with much lower standards.
In addition: I traveled in Europe in the early to mid-1970s, and the level of prosperity there was a long way short of the US. By the 1990s, that had leveled off, and now Europe has passed the US in many ways. Translation: A much larger market for goods from all sources – just as the Chinese were assuming the role as the manufacturing for the world.
Add in India, to boot, whose emissions standards were quite a bit below US and European standards.
Larger markets (don’t forget the non-Chinese Asian market, either), manufacturing under less stringent regulations – it was all happening at the same time the CO2 began climbing.
We all know that since the 1970 Clean Air & Water Act the US’s air has been cleaned up immensely. We don’t seem to credit the degree that manufacturing is NOT done here anymore. Off the top of my head, I would say our air is cleaner than it was in 1990, and 1990 was cleaner than 1980. America IS cleaning up its act – by shipping so many jobs overseas, mostly to Mexico, China and India.
Is it coincidence that China became the industrial park of the world, with few controls, just when all those curves started climbing? Or is there linkage there?
I suspect linkage.

bubbagyro
June 7, 2010 4:35 pm

Willis has done an admirable job.
He has effectively summarized that CO2 concentration is decoupled from temperature, at least in a positive sense. Many studies have shown that CO2 rises lag temperature increases. Of course, they are all by proxies.
However, he has insisted that CO2 measurements are rising. This is certainly equivocal, but it gives support to one facet of the warm-earthers’ argument.
But remember, all facets of the alarmists’ argument must be true for any action to be warranted:
1) CO2 current levels and rate of rise must be unprecedented in recent historic times
2) CO2 rise, if unprecedented, must be anthropogenic, due to industrial burning of “fossil fuels”
3) CO2 anthropogenic rise must dwarf other natural sources
4) The earth’s average temperature must be rising without precedent in historic times
5) CO2 rise must be causative of current higher earth temperatures, correlating strongly with higher temperatures as CO2 goes higher, not the other way around
6) The temperature record must be accurate and free from introduced or systematic bias, and show an inexorable rise correlating with human CO2 emissions
So let’s not get caught up in one facet, CO2. For the alarmist cult to survive, all of the above have to be unequivocal and proven. The burden of proof is on the alarmist side, since they are advocating the Draconian measures.

barry moore
June 7, 2010 4:43 pm

Charles, a long time ago I did an analysis of CO2 readings from around the world; NOAA has an excellent web site for this (iadv). By 1980 there were about 20 modern stations covering the entire globe, and the number has increased considerably since then. Net result: the annual average of CO2 is very close (within 2 ppm) between the northern and southern hemispheres. The north exhibits a significant annual cycle due to seasonal vegetation, the south virtually none, indicating very little mixing of the air masses between the hemispheres’ tropospheres. In actual fact the SH has a higher level of CO2 for 5 months, so mixing is academic; however, 94% of anthropogenic CO2 is released in the NH and 6% in the SH, so even with a 3-year mixing time the NH should be at least 20 ppm higher than the south.

Dr A Burns
June 7, 2010 4:46 pm

It seems Michael Mann-erisms are catching … mixing data from markedly different sources to create hockey sticks.
What is the basis for the assumption that air surrounded by ice is any more a closed system than air in a balloon ? Why should air in ice be immune from diffusion and dissolution ?
We know that oceans emit and absorb 20 times as much CO2 as man. We know that rising temperatures cause CO2 in the atmosphere to increase. We would hence expect ice core records to show rising atmospheric CO2 levels during the MWP and falling levels during the LIA … however we get a horizontal line !
I think that data from nature showing horizontal lines with no variation is suspect, as it was with Mann’s hockey stick. In this case ice core data showing no variation, indicates some form of long term averaging, such as by gas in ice diffusion. We would expect more recent ice core data to curve up to meet current levels, again through diffusion.

Z
June 7, 2010 5:04 pm

barry moore says:
June 7, 2010 at 4:05 pm
The alternative is quite simple assume the worst, global warming is an unstoppable natural cycle, so let us mitigate against it.

But that isn’t the worst. The worst is that we are 18,000 years into an interglacial (there’s some sediment in Ireland that says 19,000 years, but 18,000 is the most popular number), and interglacials have lasted somewhere between 15,000 and 20,000 years, at least since the Isthmus of Panama closed up. (Would the Panama Canal help?) Since that’s a range, and the 20,000-year upper limit is not a guarantee, we may have a problem.
Of course, if the various Dryas stadials have “reset the clock”, then we’re home dry; but if Milankovitch is right and it’s all due to orbital dancing, then the stadials won’t have. At which point, ground level is going to be approximately a mile above my head in a lesser timeframe than the Bible covers.
*That* is assuming the worst…

June 7, 2010 5:06 pm

Willis Eschenbach says:
June 7, 2010 at 4:48 pm
“So if year zero has an airborne C of X gigatonnes, and an emission of E gigatonnes, then year one has an airborne C of (X + E) * P.”

Willis,
This can’t work. If there were no manmade emissions, your model would have airborne CO2 down to 31 ppm within a century. And down to 2.5 ppm within 200 years.
It assumes CO2 is absorbed by a large sink into which it disappears without any return flux. No wonder you get an odd time constant.
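Nick’s objection is easy to check numerically. Below is a minimal sketch (not Willis’s actual code) of the stated recursion, with P chosen for a 31-year time constant. With no background term the level decays toward zero; adding a background level (285 ppm, a commonly cited pre-industrial value) changes the asymptote:

```python
import math

P = math.exp(-1.0 / 31.0)   # yearly retention factor, 31-year time constant

def run(years, x0, emission, background=0.0):
    """Willis-style recursion next = (X + E) * P when background == 0;
    otherwise the excess over `background` decays instead of the total."""
    x = x0
    for _ in range(years):
        x = background + (x - background + emission) * P
    return x

# No emissions, no background term: the level collapses far below reality.
print(run(100, 390.0, 0.0))
# With a 285 ppm background it relaxes toward pre-industrial instead.
print(run(100, 390.0, 0.0, 285.0))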

wayne
June 7, 2010 5:09 pm

barry moore says:
June 7, 2010 at 3:11 pm
… For those interested in actual science not manipulated cherry picked statistics from dubious proxies try John Nicols paper which is a true scientific evaluation based on the laws of physics of the so called greenhouse gas effect.
Very good paper, Barry, thank you for pointing that out. I finally found it at:
http://www.middlebury.net/nicol-08.doc
Will take a while to absorb this one completely.

Adpack
June 7, 2010 5:12 pm

If the increase in CO2 follows temperature increases by 400 to 800 years or so, as seen in past data, then could the increase we see now have been caused by a temperature increase in the period of around 1000 to 1200?

Gail Combs
June 7, 2010 5:15 pm

kadaka (KD Knoebel) says:
June 7, 2010 at 3:06 pm
“…..Other interesting things:
This graph shows (by eyeball estimation) the atmospheric CO2 levels dropped as low as around 180 ppm about 12,000 years ago. However other research concerning plants (link) shows that during the Last Glacial Maximum the plants experienced CO2 concentrations as low as 110 ppm. From this, as opposed to the normal line of inquiry, the question arises as to why the ice cores show such a high level of CO2…..”

__________________________________________________________________________
As Willis and many others have pointed out CO2 is not uniform in the atmosphere near the ground because of the sinks and sources.
For example:
Greenhouse Information and measurements:
“Plant photosynthetic activity can reduce the CO2 within the plant canopy to between 200 and 250 ppm… I observed a 50 ppm drop within a tomato plant canopy just a few minutes after direct sunlight at dawn entered a green house (Harper et al 1979) … photosynthesis can be halted when CO2 concentration aproaches 200 ppm… (Morgan 2003) Carbon dioxide is heavier than air and does not easily mix into the greenhouse atmosphere by diffusion… click
I also remember a study with trees in the open where they found the CO2 in the micro-environment around the leaves dropped to 200 ppm during daylight. Unfortunately I can not find the source.
I did find in my notes “200 pm CO2 trees starve” http://biblioteca.universia.net/ficha.do?id=912067
but the link no longer works… Now all the searches turn up papers showing 180 ppm or lower….HMMMmmmm
I just checked a couple of studies: the 180-200 ppm CO2 figure for trees is now based on “models” derived from the ice cores. GRRRrrrr

barry moore
June 7, 2010 5:15 pm

Willis
“Let me preface this by saying that I do think that the recent increase in CO2 levels is due to human activities.”
If you had included the caveat “partly” I could agree with you, but as stated in the preamble above, I disagree, and I think you presented a great many confusing statements. I still think your definition of half-life is incorrect. As I said, if you cut something in half again and again it never reaches zero, but it does become insignificant, like 1/1024 after 10 half-lives; this point is lost on most people and is a major source of propaganda and misinformation by the IPCC. Your article definitely did nothing to clarify this issue. Also, your singular reference to the ice core data as the “gold standard” is not even close, as other posts have demonstrated. Sorry, but your article was confusing and misleading to those of us who have spent many years and thousands of hours studying this very complex subject.

Graham Dick
June 7, 2010 5:22 pm

“not all of the carbon that is emitted (in the form of CO2) remains in the atmosphere. Some is absorbed by some combination of the ocean, the biosphere, and the land ….
the excess atmospheric carbon is indeed from human activities”
Does this mean that, when the AGW meme finally bites the dust (as it surely must), the next prop for the carbon-based financial and industrial complex will be anthropogenic ocean acidification (AOA)?

Steve from Rockwood
June 7, 2010 5:23 pm

Grumbler says:
“Hold on – CO2 straight line for 800 years and climate fluctuates dramatically over that period? And CO2 drives climate? What am I missing? ”
There is something wrong with Figure 1. It defies common sense that CO2 would remain constant for 800 years and then suddenly increase 70 years before AGW.

barry moore
June 7, 2010 5:23 pm

Z, point well taken, but I was limiting my frame of reference to the next 200 years, not 2,000. I hope the next ice age will not creep up on us that quickly, but it is coming, I agree.

Daniel H
June 7, 2010 5:29 pm

Ric Werme said:

That would be a bit difficult, as Keeling died in 2005… MLO does release a lot of data, can you be more explicit about what you are looking for that you can’t get from them? Your comment wasn’t very explicit. One starting point is:
http://www.esrl.noaa.gov/gmd/obop/mlo/livedata/livedata.html

Let me see if I can be more explicit. I’ll start by reiterating my previous comment:

…another line of evidence implicating fossil fuels as the source of atmospheric CO2 increase is the change in the atmospheric O2:N2 ratio. This is discussed in AR4 WG1, The Physical Science Basis, Chapter 2, Section 2.3.1 and illustrated here. My only problem with this record is that it’s impossible to verify since the raw data are “protected” behind a firewall at the Scripps Institution of Oceanography web site… For more information, click the link “Lab Data”, located here: https://bluemoon.ucsd.edu/data.html

As stated, I’m looking for the raw data behind the O2:N2 atmospheric ratio plot that was used in AR4 WG1, The Physical Science Basis, Chapter 2, Section 2.3.1. The Scripps Atmospheric Oxygen Research Group, which actively collects and manages the O2:N2 raw data, is presently headed by Ralph F. Keeling, who is still very much alive (his father, Charles D. Keeling, passed away in 2005). The data is protected behind a firewall. It cannot be accessed. As stated in my previous comment, this can be verified by clicking on the “Lab Data” link at the bottom of the following web site: https://bluemoon.ucsd.edu/data.html
I don’t know how much more explicit I can get than that. Do I need to put it up on the Jumbotron for you? The “starting point” you provided does not help. It simply points back to the same Scripps web site which leads back to the same Scripps firewall which leads to a dead end. See here: http://www.esrl.noaa.gov/gmd/obop/mlo/programs/coop/scripps/o2/o2.html
If anyone else has any idea of where I might be able to download the raw atmospheric O2:N2 ratio data, I’m all eyes.

wayne
June 7, 2010 5:30 pm

Steve Garcia says:
Is it coincidence that China becoming the industrial park of the world, with little controls, just when all those curves start climbing? Or is there linkage there?
I suspect linkage.

I so agree. And not only that: having exported our jobs, they now want us to pay, via taxes, for the lack of emission controls in the very countries our jobs were exported to. No! Tax the people you have given our jobs to, or bring our jobs back; only then will a bit of justice be served.

bubbagyro
June 7, 2010 5:35 pm

Dr A Burns says:
June 7, 2010 at 4:46 pm:
See my post at 3:45, which shows empirically that CO2 diffuses within ice over the years, reducing initially high concentrations.

Z
June 7, 2010 5:39 pm

barry moore says:
June 7, 2010 at 5:23 pm
Z – point well taken but I was limiting my frame of reference to the next 200 not 2000 years I hope the next ice age will not come up on us that quickly, but it is comming I agree.

The 2000 years is not a guarantee – it *could* start at Christmas. This Christmas.
In fact it could happen in a couple of months – I’m being very northern hemisphere biased, and assuming it’ll happen during a northern winter.
The simple fact is that no one knows. It’s one of the reasons why I’m firmly against geo-engineering to cool or warm things. Given a 50/50 chance, people will get it wrong 90% of the time.
Now Mann, Hansen et al are promising a few degrees warmer – that’s good news. I wish they were right.

barry moore
June 7, 2010 5:42 pm

Willis
See my post of 4.43 pm this is only the tip of the iceberg but there is a lot of evidence to show that human emissions are a minor contributor to global CO2 concentrations so after many years of research I will stand by my comments and I will not cap my electronic pen. I note how selective you have been in your responses, answer the easy questions do not touch the tough ones.

Steve Fitzpatrick
June 7, 2010 5:51 pm

Nick Stokes says:
June 7, 2010 at 2:38 pm
“My understanding of the way concrete/cement works is that over time calcium silicate is formed, which is the hardening process. ”
While Portland cement does not have a precisely defined composition, in all cases the primary reaction is one of hydration of anhydrous calcium silicates, which are formed from calcium carbonate and silicate minerals in the cement kiln, with CO2 being driven off in the process. The dry calcium silicate solids initially dissolve in the added water, then gradually precipitate from solution in hard (rock-like) hydrated structures. Here is a very nice PDF presentation that lays out all the basic chemistry:
Portland Cement Hydration
Dr. Kimberly Kurtis
School of Civil Engineering
Georgia Institute of Technology
Atlanta, Georgia
I didn’t get the web address, but a Google search with title and author will bring it up on the first page of results. Anyhow, the setting of Portland cement is a hydration/dissolution/precipitation process, and CO2 is not involved at all.
By the way, too much water in the concrete results in soft/fragile precipitated hydrates and very weak concrete, so if you are ever getting concrete poured, don’t let the truck driver add water to make the wet concrete easier to work with!

Z
June 7, 2010 5:56 pm

Dr A Burns says:
June 7, 2010 at 4:46 pm
What is the basis for the assumption that air surrounded by ice is any more a closed system than air in a balloon ? Why should air in ice be immune from diffusion and dissolution ?

Especially given that ice can freeze leaving tiny “tunnels” of water where microbes and algae can live and CO2 can dissolve. Also, at the bottom of an ice core the pressure must be incredible.
Any dissolved CO2 must outgas like a smashed bottle of champagne when the core is brought up to ambient pressure.
I too find straight lines in nature a source of suspicion, rather than reassurance.

Steve Fitzpatrick
June 7, 2010 6:07 pm

Nick Stokes says:
June 7, 2010 at 5:06 pm
“Willis,
This can’t work. If there were no manmade emissions, your model would have airborne CO2 down to 31 ppm within a century. ”
No, the level will only fall to the natural background level of the ocean (that is, the vast majority of the deep ocean, which has never seen the recently elevated CO2, and so has a concentration roughly in equilibrium with 285 ppm in the atmosphere). The decay is from the current atmospheric CO2 level down to the pre-industrial level; see http://wattsupwiththat.com/2009/05/22/a-look-at-human-co2-emissions-vs-ocean-absorption/ for further discussion and description of the process.

Michael Larkin
June 7, 2010 6:17 pm

This is coming from left field, but if no one has mentioned it, I wondered if it might have relevance. If not all “fossil fuel” is in fact biogenic, then wouldn’t that affect C12/C13 ratios?
I have no opinion either way, incidentally, about abiogenic oil or anthopogenic CO2 increase. The more I read pro- and contra, the more I come to realise that agnosticism is the only sane position for me.

June 7, 2010 6:20 pm

A more informative exercise is to use the Scripps seasonally adjusted monthly CO2 averages, convert to global gigatons/year (the annual difference, or accumulation rate), and compare the cyclic behavior to the relatively straight line for anthropogenic emissions. They average about the same, but the natural cycles vary by orders of magnitude. This does not play well for cause and effect.
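That conversion can be sketched as follows. The factor of roughly 2.13 GtC per ppm of atmospheric CO2 is the commonly used approximation, and the yearly ppm values here are illustrative, not the actual Scripps file:

```python
PPM_TO_GTC = 2.13   # approx. gigatonnes of carbon per ppm of atmospheric CO2

def annual_accumulation_gtc(yearly_ppm):
    """Year-on-year CO2 differences, converted from ppm to GtC/yr."""
    return [(b - a) * PPM_TO_GTC for a, b in zip(yearly_ppm, yearly_ppm[1:])]

# Illustrative annual means (ppm), not the actual Scripps record:
ppm = [385.6, 387.4, 389.9, 391.7]
print(annual_accumulation_gtc(ppm))
```

Plotting such a series against the tabulated emissions is the comparison suggested above: similar means, very different year-to-year variability.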

June 7, 2010 6:34 pm

Steve Fitzpatrick says:
June 7, 2010 at 6:07 pm
“No, the level will only fall to the natural background level of the ocean”

But not in Willis’ formula – no such level appears. Try it: with repeated multiplication by P, the level goes to zero.
I had assumed in my earlier comment that Willis was doing that, and said the background level needed to be updated. But it seems that he’s been using zero.
On concrete, I think we’re agreeing that the end product is a calcium silicate, and the CO2 can’t come back as Steve Goddard suggested.

Steve Fitzpatrick
June 7, 2010 6:44 pm

“Nick Stokes says:
June 7, 2010 at 6:34 pm
Steve Fitzpatrick says:
June 7, 2010 at 6:07 pm
“No, the level will only fall to the natural background level of the ocean”
But not in Willis’ formula – no such level appears. Try it – P goes to zero.
I had assumed Willis was doing that in my earlier comment, and said the background level needed to be updated. But it seems that he’s been using zero.”
OK Willis, what say you? I don’t think it can fall below the long-term background level.

Gail Combs
June 7, 2010 6:46 pm

John Finn says:
June 7, 2010 at 3:28 pm
Malaga View says:
June 7, 2010 at 12:52 pm
The Real Co2 site by Ernst-Georg Beck is very interesting:
http://www.biomind.de/realCO2/realCO2-1.htm
Especially the Atmospheric CO2 Background 1826-1960 diagram:
http://www.biomind.de/realCO2/bilder/CO2back1826-1960eorevk.jpg
Seems a lot more natural and believable than the ice core flat lining….
So a drop of ~80 ppm in a few years around 194o is believable is it?
_______________________________________________________________________
Yes, I find it believable, especially when you go from Bern, Switzerland (Duerst) to Barrow, Alaska (Scholander) to Chapman, Nebraska. Heck, you see more than 80 ppm variation in Harvard Forest: from 320 ppm to around 420 ppm, with a set of outliers to 500 ppm, as Beck shows.
The only reason “official source” data does not have that type of variation is because it is not RAW DATA. And I quote:
“At Mauna Loa we use the following data selection criteria:
3. There is often a diurnal wind flow pattern on Mauna Loa ….. The upslope air may have CO2 that has been lowered by plants removing CO2 through photosynthesis at lower elevations on the island,…. Hours that are likely affected by local photosynthesis are indicated by a “U” flag in the hourly data file, and by the blue color in Figure 2. The selection to minimize this potential non-background bias takes place as part of step 4. At night the flow is often downslope, bringing background air. However, that air is sometimes contaminated by CO2 emissions from the crater of Mauna Loa. As the air meanders down the slope that situation is characterized by high variability of the CO2 mole fraction…..
4. In keeping with the requirement that CO2 in background air should be steady, we apply a general “outlier rejection” step, in which we fit a curve to the preliminary daily means for each day calculated from the hours surviving step 1 and 2, and not including times with upslope winds. All hourly averages that are further than two standard deviations, calculated for every day, away from the fitted curve (“outliers”) are rejected. This step is iterated until no more rejections occur…..”

If any data not within 2 standard deviations is rejected, then of course you will never see a swing of 80 ppm; it has already been edited out of the final “product”.
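The iterated 2-standard-deviation rejection Gail quotes can be sketched like this. It is a simplification (a single mean rather than NOAA’s fitted daily curve), with made-up hourly values:

```python
import statistics

def reject_outliers(values, n_sigma=2.0):
    """Iteratively drop points more than n_sigma standard deviations from
    the mean until no more rejections occur (simplified: the documented
    procedure fits a curve to daily means rather than a single mean)."""
    data = list(values)
    while True:
        mean = statistics.mean(data)
        sd = statistics.pstdev(data)
        kept = [v for v in data if abs(v - mean) <= n_sigma * sd]
        if len(kept) == len(data):
            return kept
        data = kept

# Hypothetical hourly readings (ppm) with two excursions:
hours = [388.1, 388.3, 388.2, 388.4, 402.0, 388.2, 371.5, 388.3]
print(reject_outliers(hours))
```

Note what the iteration does: once the largest excursion is removed, the standard deviation shrinks, so a second pass can reject values that survived the first, which is exactly the behaviour at issue in the comment above.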

barry moore
June 7, 2010 6:50 pm

Michael
C12 and C13 are not radioisotopes, so their concentrations do not decay; the only radioisotope is C14, which is created in the upper atmosphere by the action of cosmic rays on N14. Thus the abundance of C14 varies with solar activity; it has a half-life of about 5,730 years, a property used in carbon dating. Isotopes have the same chemical properties, but they do have different physical properties, such as absorption through a semipermeable membrane (i.e. leaves) and the plant’s ability to take up CO2 with different isotopes, so a plant will have a different C12/C13 ratio than normally found at the surface. Fossil fuels are the product of plants and have been buried for millions of years, so their C14 content is very low and their C12/C13 ratio differs from surface conditions. By tracking the C12/C13 and C14 trends in atmospheric CO2, certain conclusions are drawn as to the source of the CO2. Unfortunately, like most things, conclusions drawn from these observations are not as simple as they appear, so the data are subject to considerable error.

Steve Fitzpatrick
June 7, 2010 6:53 pm

Gail Combs says:
June 7, 2010 at 6:46 pm ,
Whoa. Too much speculation there. Please note that air samples collected by airplanes match the Mauna Loa (and Barrow, Alaska, and Antarctica) data stations almost perfectly. There is very good reason to believe the procedures used at Mauna Loa lead to CO2 data that is representative of the upper troposphere… which is what matters.

barry moore
June 7, 2010 7:05 pm

Daniel
Ch. 1, page 1 of IPCC AR4 has an anecdote concerning Albert Einstein which, I must admit, sets the gold standard for scientific proof. Per AE: “Why 100? It only takes one, providing he has testable results.” The secrecy surrounding the data and computer programs negates a lot of the IPCC data on the grounds that it is not testable.

Gail Combs
June 7, 2010 7:09 pm

bubbagyro says:
June 7, 2010 at 3:45 pm
Gail:
Here is the direct answer to your question from the good scientists at Scripps Oceanographic:….
________________________________________________________________________
Thank you. I will add it to my notes. The whole set of data is just too smooth and that includes present day data. Beck’s historic data and the Harvard Forest data show swings of up to 200 ppm, yet we are to believe the levels hardly varied for millions of years while the earth went through ice ages and major changes in plant and animal life??? That the present day data shows a straight line trend over decades with little variation from place to place on the earth’s surface???
I sure wish my production analysis data was that nice and neat from day to day.
Actually the ice core CO2 data looks more like a “process” that has reached equilibrium than a geologic record.

dr.bill
June 7, 2010 7:09 pm

Richard S Courtney: June 7, 2010 at 11:06 am
Richard, here’s another version of Faraday’s reply:

It is said that the then British Prime Minister Sir Robert Peel (1788-1850) after seeing a demonstration of the dynamo effect asked Faraday what use the discovery was. Faraday replied, “I know not, but I wager that one day your government will tax it.” Faraday himself did not try to develop the practical applications of his discoveries. Rather he became deeply interested in understanding how electricity and magnetism are related to each other. (Quote from here.)

It would seem that Faraday wasn’t just smart, but also prescient. The extension of what he said is utterly germane to what is happening today.
/dr.bill

June 7, 2010 7:29 pm

I believe they can determine the cycling time of carbon in the atmosphere pretty precisely based on radioactive carbon-14. Basically, IIRC, each of the big atmospheric fusion bomb explosions in the 1960s generated a spike of C-14. This soon becomes globally distributed in the atmosphere and gets incorporated into plants, mud layers, etc., and then its fate can be tracked as it ages and as the spike works its way into various systems.

Phil's Dad
June 7, 2010 7:42 pm

A correction to my earlier comment. (June 7, 2010 at 10:19 am)
1/ I undersized the atmosphere and
2/ I used ppmw without converting to ppmv.
Sorry.
To pick up from where I have made the corrections.
Total CO2 from burning all known fossil fuel reserves would be 2,340,140 billion kg. Loads! – but what is that in ppmv of the total atmosphere? Well…
The total mass of the atmosphere is 5.3E18 kg
(http://scipp.ucsc.edu/outreach/balloon/atmos/The%20Earth.htm)
Total mass of atmosphere: 5.1E18 kg
(http://nssdc.gsfc.nasa.gov/planetary/factsheet/earthfact.html)
Atmospheric mass is between 5 and 5.5E18 kg
(http://hypertextbook.com/facts/1999/LouiseLiu.shtml)
The difference seems to be mainly the extent to which the mountainous regions are taken into account (where pressure is lower).
Using our golden rule of trying for the highest possible CO2 level, we will use the smallest given figure of 5E18 kg for atmospheric mass. 2,340,140 billion kg / 5,000,000,000 billion kg = 468 ppmw, which is about 308 ppmv of CO2 to be produced by burning all the known reserves of fossil fuel.
(My ppmw to ppmv conversion above uses an air to CO2 molecular mass ratio of 28.97/44.01)
(http://www.engineeringtoolbox.com/molecular-mass-air-d_679.html)
Except that only 40% of that will stay in the atmosphere for any meaningful length of time. A constant unchanging 40% by the way. Knorr, W. (2009), Is the airborne fraction of anthropogenic CO2 emissions increasing?, Geophys. Res. Lett., 36, L21710, doi:10.1029 / 2009GL040613.
(http://wattsupwiththat.files.wordpress.com/2009/11/knorr2009_co2_sequestration.pdf)
This reduces the potential CO2 from burning all known reserves of fossil fuel to just 123 ppmv beyond where we are now. Given what Willis Eschenbach tells us here about natural out-gassing, however much is in the oceans, most of it seems to be staying there. There may be another 20 ppmv to come from “feed-back” out-gassing, giving us 143 ppmv in all?
Even if the IPCC are right about 3°C per doubling of CO2, that means we would struggle to find much more than 1°C of warming left in all remaining fossil fuels with feed-backs. Added to post-industrial warming to date, that still sees us struggling to reach 2°C above pre-industrial levels.
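As a sketch, Phil’s Dad’s back-of-envelope conversion above can be written out in a few lines. The figures are the ones assumed in the comment (rough estimates, not authoritative values):

```python
# Sketch of the ppmw-to-ppmv conversion above; all figures are the
# comment's own rough assumptions, not authoritative values.

M_AIR = 28.97   # mean molecular mass of dry air, g/mol
M_CO2 = 44.01   # molecular mass of CO2, g/mol

co2_from_reserves_kg = 2_340_140e9   # assumed CO2 from burning all known reserves
atmosphere_kg = 5.0e18               # smallest quoted atmospheric mass

ppmw = co2_from_reserves_kg / atmosphere_kg * 1e6   # parts per million by weight
ppmv = ppmw * M_AIR / M_CO2                         # parts per million by volume

airborne_fraction = 0.40             # Knorr (2009) constant airborne fraction
ppmv_airborne = ppmv * airborne_fraction

print(round(ppmw), round(ppmv), round(ppmv_airborne))  # 468 308 123
```

This reproduces the 468 ppmw, ~308 ppmv, and ~123 ppmv figures in the comment.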
And…relax.

barry moore
June 7, 2010 8:19 pm

So the contribution of human emissions to global atmospheric CO2 is not a substantive issue. The subject which you are consistently ducking is that you claim in your opening statement that human emissions are responsible for the increase in CO2. This you clearly stated, and I take issue with it, since I think it gives aid and comfort to the enemy. Please stop hiding behind the “insult” strategy; that is an alarmist tactic and not worthy of you.

Bart
June 7, 2010 8:59 pm

Willis Eschenbach says:
June 7, 2010 at 4:48 pm
This calculation is not clear to me. Try putting it in terms of a difference equation.
For example, let “k” denote the year index, i.e., k = 1,2,3… for year 1, year 2, year 3… Let airborne C at year k be denoted C(k), and cumulative emissions similarly E(k). If deltaE(k) = E(k+1) – E(k) is the incremental emissions added in year k, then I would expect you to have an equation of the form
C(k+1) = P*C(k) + (1-P)*Co + deltaE(k)
where Co = the presumed “equilibrium” level. Each year, you add an amount deltaE(k) to the atmosphere from fossil fuel burning. Each year, natural sources add (1-P)*Co and natural sinks take out (1-P)*C(k). As you can see, if deltaE(k) is zero every year, the equation settles out to C(k) = Co. If you want only the difference between the current concentration and the equilibrium, then let D(k) = C(k) – Co, and find
D(k+1) = P*D(k) + deltaE(k)
This would be the “excess CO2 that is not sequestered every year”, i.e., every year, one would sequester (1-P)*D(k). You can see, if P = 1, then D(k) = E(k), i.e., there is a straight accumulation (no sequestration). The solution for D(k) is convolution of deltaE(k) with the exponential sequence P^k (P to the kth power).
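Bart’s recursion can be sketched numerically. The emission increments below are made up purely for illustration, and the closed-form check confirms his point that the solution for D(k) is the convolution of deltaE(k) with powers of P:

```python
# Minimal sketch of Bart's difference equation D(k+1) = P*D(k) + deltaE(k),
# with hypothetical emission numbers purely for illustration.

def simulate(P, dE):
    """Iterate D(k+1) = P*D(k) + dE(k), starting from D(0) = 0."""
    D = [0.0]
    for e in dE:
        D.append(P * D[-1] + e)
    return D

def convolve(P, dE):
    """Closed form: D(n+1) = sum over k of P**(n-k) * dE(k)."""
    return [0.0] + [sum(P**(n - k) * e for k, e in enumerate(dE[:n + 1]))
                    for n in range(len(dE))]

P = 0.98                       # assumed yearly retention factor
dE = [1.0, 1.2, 1.5, 1.7]      # hypothetical yearly emission increments

# The iterated recursion and the convolution give identical sequences,
# and with P = 1 the result is straight accumulation (no sequestration).
assert all(abs(a - b) < 1e-12 for a, b in zip(simulate(P, dE), convolve(P, dE)))
assert simulate(1.0, dE)[-1] == sum(dE)
```

With P = 1 the last assertion shows the straight-accumulation case Bart mentions.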

Keith Minto
June 7, 2010 9:23 pm

Wayne,
June 7, 2010, 5:09.
Interesting paper by John Nicol in that link. Yes, it is heavy going but I found this conclusion to be pertinent.

The findings clearly show that any gas with an absorption line or band lying within the spectral range of the radiation field from the warmed earth, will be capable of contributing towards raising the temperature of the earth. However, it is equally clear that after reaching a fixed threshold of so-called Greenhouse gas density, which is much lower than that currently found in the atmosphere, there will be no further increase in temperature from this source, no matter how large the increase in the atmospheric density of such gases.

cicero
June 7, 2010 9:25 pm

Willis –
Thank you for your post.
You (and others) might be interested in the much briefer explanation under “Q. What percentage of the CO2 in the atmosphere has been produced by human beings through the burning of fossil fuels?” posted on the with a similar conclusion as yours although they state 14% ACO2 by 2000 and your result appears to be around 25% by 2005. Maybe I’m missing something though.
Again, thanks for posting on this topic.

June 7, 2010 11:03 pm

I did a similar analysis several years ago. I got an atmospheric half-life for anthropogenic carbon dioxide of about 20 years, which agrees well with an e-folding time of about 30 years (20/30 ≈ ln(2)).
Anybody using real data from the real world must get about the same result. Human emissions remain in the atmosphere for decades, not centuries. The claim from IPCC of an e-folding time of 90-200 years is probably based on models. It surely cannot be based on real-world data.
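The half-life / e-folding relationship used above is simply tau = t_half / ln 2. A minimal check, taking the comment’s assumed 20-year half-life:

```python
import math

# For exponential decay exp(-t/tau), the half-life is t_half = tau * ln(2).
t_half = 20.0               # assumed half-life in years (from the comment above)
tau = t_half / math.log(2)  # corresponding e-folding time

print(round(tau, 1))  # 28.9
```

So a ~20-year half-life and a ~30-year e-folding time describe the same decay, since 20/30 ≈ 0.667 is close to ln 2 ≈ 0.693.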

anna v
June 7, 2010 11:04 pm

This is an experiment that shows that CO2 can be poured like a liquid, demonstrating that it is heavier than air.
http://www.metacafe.com/watch/930860/see_how_co2_is_heavier_than_air/
Everybody who believes in the “well mixed” hypothesis should watch this.
From what I have found, the “well mixed” hypothesis is an assumption that it holds for CO2, the logic being that since it is a gas and not that much heavier than the main gases of the atmosphere, turbulence will mix it well.
I need experimental proof, i.e., measurements taken in columns of air from the surface up to 5000 meters in logical steps, in a wide sampling of the globe: oceans, forests, fields, deserts, icecaps… etc.
Does anybody have a link?
Missing that, we only have Beck’s compilations, and I am free to accept them as evidence that “well mixed” is another myth. A myth promulgated by climate scientists half-educated in physics, or by very smart operators wanting to close a deal.
If “well mixed” is nonsense, then the Keeling curves are counting angels on the head of a pin.
The ideal way to measure the CO2 global content and distributions is with satellites that measure ppms from down to the ground to the top of the troposphere.
The satellite that was supposed to do that failed to reach orbit a year or so ago.
There exists this:
http://www.jaxa.jp/projects/sat/gosat/index_e.html
and the most recent preliminary data is here:
http://www.jaxa.jp/press/2009/05/20090528_ibuki_e.html#at1

(Note 1) The analysis showed Northern Hemisphere results to be on average around 10 ppm higher than Southern Hemisphere results. An atmospheric transport model calculation predicts the difference between north and south at this time to be 2-4 ppm.
(Note 2) Southern Hemisphere values were on average approximately 17 ppm lower than the model calculation, while Northern Hemisphere latitude band average values were approximately 7-12 ppm lower.

Nothing for over a year now. Maybe it has to do with the cap-and-trade that Japan wants adopted? After all, this is a government project. Maybe they are frantically trying to generate “pure” Keeling curves.
Watch the range, 360 to 390 ppm, with 390 at a point in Australia. Nothing over the oceans.

stumpy
June 7, 2010 11:12 pm

I think Henry’s law needs some consideration. According to this law of physics, as the ocean warms it should increase the level of CO2 in the atmosphere. I believe this is a linear function of SST; I have not done the calcs, but others have claimed this law alone could account for much of the rise. This may account for some of the rise from the LIA, though I am not sure how much. It most likely accounts for the variations observed in the ice core records over longer periods, and for why CO2 levels follow temperature the majority of the time in the ice cores.
I have seen work by others where they have calculated an empirical curve for SST anomaly vs CO2 growth rate, and it follows a nice linear relationship; the deviation in growth rate matches the deviation in SST very well too (visually). Plus, one paper I read observed that CO2 changes simultaneously in both hemispheres, despite most of the man-made CO2 being produced in the northern hemisphere. This points to the ocean as at least a factor.
In regards to the C12/C13 relationship, this is subject to other factors, like whether hydrocarbons are abiological or biological in origin (the balance of evidence seems to point to abiological origins), how changes in the biosphere change the atmospheric C12/C13 ratio, etc. Bacteria (including poorly understood extremophiles) can also change the ratio to appear biological in origin (the fossil fuel signature). This whole area seems a little contentious to me, as we don’t really understand the factors involved, or even whether fossil fuels are biological or not. (Another dogma of AGW that cannot be questioned!)
My last point would be that the ice core record cannot be compared to the current Mauna Loa record, which is on a monthly timestep, whereas the ice core data is averaged over long periods (say 50–500 years) due to the processes involved in snowfall accumulating, forming firn, and then eventually becoming solid ice. This can take long periods of time (and would vary subject to accumulation etc.) and would result in an “average” CO2 value over the formation period involved.
If, say, CO2 levels fluctuate in a linear way subject to SST and have a relatively short lifetime (approx. 5 yrs according to Segalstad), they could fluctuate considerably over, say, a 500-year period and the ice cores would miss this. Also, according to Jaworowski, no one has yet demonstrated that CO2 levels extracted from ice cores represent atmospheric CO2 at the time, or the global CO2 level.
Another issue is that the latest satellite CO2 data shows levels are lower over the Antarctic, as would be expected with Henry’s law; this means that ice cores from Antarctica cannot be compared to the “global” level. I fear we could be comparing apples with oranges!
Whilst I don’t deny that humans have contributed to current levels, I think there is still room for natural variation in CO2, and if we don’t understand that natural change we cannot attribute a % change to humans.
Also, much of the IPCC understanding of the CO2 cycle is based on a four- or five-reservoir model that assumes there is a constant CO2 level that never changes and that is in balance with the sinks. This means that any extra CO2 will accumulate as the sinks are 100% utilised. This is a questionable assumption, as many sinks could increase capacity to accommodate the extra CO2 (for example plants, due to CO2 enrichment) and reach some new level of equilibrium (as most long-term stable systems do).
I think CO2 levels and their changes are still a poorly understood area that should receive more attention. Hopefully this discussion can iron out some of these issues, and put others to rest. Of course, relying on the work of others is always a risk!

Bart
June 7, 2010 11:16 pm

Bart says:
June 7, 2010 at 8:59 pm
My example model could as easily be
D(k+1) = P*(D(k) + deltaE(k))
The difference equation is actually a discretized approximation to a differential equation. In my previous model, the convolution integral was approximated as a forward Euler integration. In this one, it is a backward Euler integration approximation. So, if that is essentially what you are implementing, I don’t have a significant problem with it.
The thing you need to do now is evaluate your conclusion for consistency. If Co is the pre-industrial equilibrium level, and (1-P)*Co is the amount of natural yearly input, what is the value of P? Does it agree with the value you have calculated here? If not, how do you reconcile the difference?

tallbloke
June 7, 2010 11:19 pm

Willis Eschenbach says:
June 7, 2010 at 5:10 pm
I will say ten “Hail Marx’s” and I vow to go and sin no more.
You happy now? Can we return to the science?

Sure, and apologies if you already answered these points, I’ve been concentrating on another thread the last couple of days.
1) Why no account taken of the work done by Ernst-Georg Beck in gathering empirical records which show that CO2 levels were much more upsy-downsy in the C19th and early C20th than the warmists would like us to know?
2) Why no account taken of the studies which show isotope ratios and CO2 levels in ice cores might be misleading?
3) Why no consideration of the 800-year lag, which means the current rise might be down to the Medieval Warm Period rather than my Range Rover?
4) Why hand the attribution to the warmists on a platter when there is so much uncertainty?
It’s not my area of speciality and I haven’t been keeping up, maybe these issues are settled now, so don’t bite my head off for asking please.
Cheers
Rog

John Finn
June 7, 2010 11:21 pm

Gail Combs says:
June 7, 2010 at 6:46 pm

John Finn says:
June 7, 2010 at 3:28 pm
So a drop of ~80 ppm in a few years around 1940 is believable is it?


_______________________________________________________________________
Yes, I find it believable, especially when you go from Bern, Switzerland with Duerst to Barrow, Alaska with Scholander to Chapman in Nebraska. Heck, you see more than 80 ppm variation in Harvard Forest: from 320 ppm to around 420 ppm, with a set of outliers to 500 ppm, as Beck shows.
You seem to have missed the point as to why remote locations, such as Barrow and Mauna Loa, were chosen, and why Beck’s measurements are of no use. I’m quite sure you can get higher (much higher) readings if you take measurements in Piccadilly Circus or at the Arc de Triomphe, but these would not be in any way representative of global CO2 concentrations. CO2 maps show that “well-mixed” concentrations vary by only a few ppm across the world.
I do not envy you, Willis.

wayne
June 7, 2010 11:42 pm

Keith Minto says:
June 7, 2010 at 9:23 pm
Wayne,
June 7, 2010, 5:09.
Interesting paper by John Nicol in that link. Yes, it is heavy going but I found this conclusion to be pertinent.
The findings clearly show that any gas with an absorption line or band lying within the spectral range of the radiation field from the warmed earth, will be capable of contributing towards raising the temperature of the earth. However, it is equally clear that after reaching a fixed threshold of so-called Greenhouse gas density, which is much lower than that currently found in the atmosphere, there will be no further increase in temperature from this source, no matter how large the increase in the atmospheric density of such gases.

A bit off topic: that is very interesting, isn’t it? If you have ever followed the anti-logic of the IPCC Free Energy Oven (thanks, anna v), you’ll agree that in the purest net sense you can never really re-radiate energy back to warm the surface, but only retard the exit of energy that was already there to begin with; it seems like warming when in reality you are only retarding the cooling.
John Nicol seems to be saying, through the physics outlined in his paper, that each absorption band in any GHG will retard the passage of energy through that band, but that the rate of retardation has an upper limit, and we have already passed the surface temperature at which these bands can retard that amount of energy exiting the surface. The surface radiates at a rate fixed by its temperature, and any increase in GHG molecules with the same absorption bands just widens the path (a bigger pipe) for energy to escape by exactly the same amount that the additional molecules add the ability to retard it; therefore, no further increase in temperature. That would apply to water vapor as well as CO2.
Maybe that’s why a huge 40,000-foot thundercloud never toasts the people underneath it to a crisp, for at that very moment under the cloud there is an incredible number of GHG molecules above you. Thank God that oven doesn’t really work! @;-)
Interesting. His work will have to be checked, but I can now clearly see his point. Is that the basic point you got from his paper?

anna v
June 7, 2010 11:47 pm

Sorry, it is in China that we get red. Australia has a very low blue point.
We had discussed it here at the time it appeared, http://wattsupwiththat.com/2009/09/13/some-results-from-gosat-co2-hot-spots-in-interesting-places/

June 7, 2010 11:53 pm

DirkH says:
June 7, 2010 at 2:46 pm
[On CO2 and human population]
“It’s a spurious correlation. Replace the number of humans with World GDP total or anything else that grows roughly exponential… say the total of freeway kilometers globally. Without a physical mechanism or hypothesis, there are too many things you can correlate.”
You need to replace with something quite independent of humans, so not world GDP, not freeway kilometers. You’re saying that A -> B must be wrong because there are spurious C -> B connections, but that’s flawed logic when there are A -> C connections.
The physical mechanism? Surely complex. The impact of land use (agriculture, cattle, etc.), natural resources, could be some areas to look into. My point, however, is that the whole story including the physical mechanisms might be easier to see if we start the nesting from a different starting point, such as the human population. That’s meaningless, though, if you’ve already decided that there is connection between human activity and CO2. That I doubt. You might also say that it doesn’t really matter much, which may be true.

Malaga View
June 8, 2010 12:45 am


CONCLUSION
As I said, I think that the preponderance of evidence shows that humans are the main cause of the increase in atmospheric CO2. It is unlikely that the change in CO2 is from the overall temperature increase. During the ice age to interglacial transitions, on average a change of 7°C led to a doubling of CO2. We have seen about a tenth of that change (0.7°C) since 1850, so we’d expect a CO2 change from temperature alone of only about 20 ppmv.
Given all of the issues discussed above, I say humans are responsible for the change in atmospheric CO2 … but obviously, for lots of people, YMMV.
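The arithmetic in the quoted conclusion can be checked. One plausible reading of the scaling (a tenth of a doubling applied to a ~280 ppmv pre-industrial baseline, a figure assumed here rather than stated in the quote) reproduces the ~20 ppmv estimate:

```python
# Check of the quoted conclusion's arithmetic: if ~7 degC of
# glacial-interglacial warming corresponds to a doubling of CO2,
# then ~0.7 degC is a tenth of a doubling. The 280 ppmv
# pre-industrial baseline is an assumption for this sketch.

preindustrial_ppmv = 280.0
doublings = 0.7 / 7.0                           # fraction of a doubling observed
rise = preindustrial_ppmv * (2**doublings - 1)  # expected temperature-driven rise

print(round(rise))  # 20
```

So temperature alone would account for only about 20 ppmv of the observed rise, as the conclusion states.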

So let’s have a look at the NOAA global view of CO2 weather for 2008.
In winter the Arctic seems to be turned OFF as a CO2 sink, and all that human-produced CO2 has accumulated as a positive atmospheric anomaly around the North Pole by the end of February:
http://www.esrl.noaa.gov/gmd/webdata/ccgg/CT2009/co2wx/glb/co2wx_hammer-glb_20080229.png
However, in summer the Arctic seems to be turned ON as a CO2 sink, and all that human-produced CO2 (and more) has become a negative atmospheric anomaly around the North Pole by the end of August:
http://www.esrl.noaa.gov/gmd/webdata/ccgg/CT2009/co2wx/glb/co2wx_hammer-glb_20080831.png
So a quick blink test of these two CO2 weather maps seems to easily undermine your conclusions… unless you want to start arguing with the observations 🙂

anna v
June 8, 2010 12:49 am

Willis Eschenbach says:
The AIRS data is not measuring a column of air from the surface; it plots conditions at the 5000-to-7000-meter level.
Looking at concentrations up there, if the hypothesis of “well mixed” is wrong, is like smelling the air in the Himalayas and saying, “What forest fire? The air is clean here.”
The failed satellite was planned in order to do that, and it seems that the Japanese one is in a position to do that, but they are slow in trickling out data.

Malaga View
June 8, 2010 1:00 am


John Finn says:
June 7, 2010 at 11:21 pm
CO2 maps show that “well-mixed” concentrations vary by only a few ppm across the world.

Provided your definition of “vary by only a few ppm” means “vary by up to 20 ppm”
http://www.esrl.noaa.gov/gmd/webdata/ccgg/CT2009/co2wx/glb/co2wx_hammer-glb_20080831.png
http://www.esrl.noaa.gov/gmd/webdata/ccgg/CT2009/co2wx/glb/co2wx_hammer-glb_20080229.png
Perhaps calling it “plus or minus 10 ppm” gets us a bit closer to “a few” 🙂

anna v
June 8, 2010 1:04 am

Here is what AIRS itself is saying:
Significant Findings from AIRS Data
1. Carbon dioxide is not homogeneous in the mid-troposphere; previously it was thought to be well-mixed.
2. The distribution of carbon dioxide in the mid-troposphere is strongly influenced by large-scale circulations such as the mid-latitude jet streams and by synoptic weather systems, most notably in the summer hemisphere.
3. There are significant differences between simulated and observed CO2 abundance outside of the tropics, raising questions about the transport pathways between the lower and upper troposphere in current models.
4. Zonal transport in the southern hemisphere shows the complexity of its carbon cycle and needs further study.

http://airs.jpl.nasa.gov/AIRS_CO2_Data/About_AIRS_CO2_Data/
They talk of the middle troposphere for their results, not column averages as in the Japanese data I linked to. Even at this level, AIRS does not see a well-mixed distribution.
Not well mixed means one needs 3D plots.
Did you watch the experiment with pouring CO2?

phlogiston
June 8, 2010 1:20 am

John Finn says:
June 7, 2010 at 3:40 pm
Willis Eschenbach says
June 7, 1:20 pm
You are again conflating residence time (6-8 years or so) with the half life (much longer)
Not so – residence time is half-life / ln 2 (ln 2 = 0.693). Thus t1/2 is a bit shorter than the residence time (tau).
There might be some confusion over the definition of ‘residence time’ here. I think Willis is referring to the average lifetime (residence time) of a CO2 molecule in the atmosphere.

That’s what I’m talking about too – the residence time tau. If we choose to define CO2 clearance from the atmosphere as exponential (rather than immortalise CO2 in the atmosphere as a CAGW deity), then we have to play by the rules of exponential decay. Again, residence time tau = half-life / 0.693.
This is only a few years. However, this is not the same thing as the time taken for a pulse of CO2 to be removed from the atmosphere. In this case, both the half-life and e-folding time are much longer.
How long does a pulse of CO2 stay in the atmosphere? Forever, until judgement day, or until CO2 starts causing global warming, however you choose to define an infinite time. Exponential decay never reaches zero.
This discussion on exponential CO2 removal is very important; the CAGW establishment try to wriggle out of discussing CO2 removal from the atmosphere and thus tacitly deify CO2 as an immortal angry sky god. This discussion will make them uncomfortable, so it is to be applauded; indeed, it will seem blasphemous to them to think that CO2 can leave the atmosphere.
But why introduce the esoteric term “e-folding time” when all that is needed is the exponential decay term or the half-life?
In fact, CO2 removal from the atmosphere would seem likely to be multi-term or multi-compartmental, with several exponential terms (although I looked in Google Scholar and could not find any reference). There may be short-, medium- and long-term compartments. This could explain the anomalously smooth curve of CO2 in air with time. However, I share the suspicion of some posters about the credibility of such a smooth CO2 curve. If human CO2 emission is indeed now a dominant signal, then changes in CO2 output, caused for instance by an economic recession, should leave a visible signal in the CO2 curve. But they don’t.

tallbloke
June 8, 2010 1:27 am

Willis Eschenbach says:
June 8, 2010 at 12:52 am
… the main issue is not whether humans are raising the CO2. It is whether that CO2 rise will affect the temperature. I see no evidence that it will, and lots of evidence that it won’t. So I don’t want to dispute what I see as an inconsequential issue for which, as I said, I think the preponderance of evidence is large in favor of the idea that humans caused the CO2 rise.

Fair enough, and I agree with you that it’s a side issue. Still one that tax raisers will use to beat us with, though. If the human part of the atmospheric CO2 flux increased 1.3% over the modern warming period, we are only talking about 0.00049% of the atmosphere. I won’t be losing any sleep.
Cheers

wayne
June 8, 2010 1:50 am

Keith Minto, sorry I posted an inaccessible link, which might have left you wondering what I was talking about. Here’s the correct one, and correctly labeled: NASA’s, not the IPCC’s, they just approved it! 🙂 (it’s a spoof!)
http://www.vermonttiger.com/content/2008/07/nasa-free-energ.html

C.W. Schoneveld
June 8, 2010 1:51 am

I always read about the 800-year timelag of a CO2 increase following a rise in temperatures, but I hardly ever find anybody arguing that the present level of CO2 was caused 800 years ago. Strange!

Richard S Courtney
June 8, 2010 1:55 am

Willis:
At (June 7, 2010 at 1:13 pm) you write:
“My impression (haven’t read too much on the stomatal measurements) is that there are some problems with the stomatal data. One that I haven’t seen discussed much is this: …”
Previously, at (June 7, 2010 at 2:43 am) I wrote:
“As an aside, I address your point concerning the ice-core data because I think it is a distraction. …
[snip]
There is – at very least – adequate reason to assess the recent changes in atmospheric CO2 concentration as indicated at Mauna Loa, Barrow, etc. on the basis of the behaviour of the carbon cycle since 1958 (when measurements began at Mauna Loa).
Comparison of the recent rise in atmospheric CO2 concentration with paleo data merely provides a debate as to
(a) the validity of the ice-core data (which provides the ‘hockey stick’ graph you reproduce above)
and
(b) the validity of the stomata data that shows the recent rise in atmospheric CO2 concentration is similar to rises that have repeatedly happened previously.”
Quod erat demonstrandum.
Richard

Steve Garcia
June 8, 2010 2:44 am

Z says June 7, 2010 at 5:56 pm:

Also at the bottom of an ice core, the pressure must be incredible.
Any dissolved CO2 must outgas like a smashed bottle of champagne when the core is pulled into the ambient pressure.

Yes, this is a very cogent point. The lack of pressure is a big problem when oceanographers and biologists try to study deep-sea creatures brought up into our 1 bar atmosphere.
As soon as the core is exposed – unless it is put into a pressure chamber and kept there until the sampling tests are completed – the “steady state” is erased and all bets are off as to what was really there when it was at depth. No matter HOW careful they are with procedures, the baseline has shifted in unknown ways, throwing serious doubt on any data.
If they then USE that data, biff! GIGO.

Richard S Courtney
June 8, 2010 2:50 am

John Finn:
At (June 7, 2010 at 3:28 pm ) pertaining to Beck’s data, you ask:
“So a drop of ~80 ppm in a few years around 1940 is believable is it?”
I answer:
Yes, it is.
Please see my above comment at (June 7, 2010 at 3:23 am). I wrote there:
“Your point about “uptake of CO2” by the oceans cuts both ways. The great bulk of carbon flowing around the carbon cycle is in the oceans. An equilibrium state exists between the atmospheric CO2 concentration and the carbon concentration in the ocean surface layer. So, all other things being equal, if the atmospheric CO2 concentration increases then – as you say – the ocean surface layer will dissolve additional CO2 and alkalinity of the layer will reduce. However, the opposite is also true.
If the alkalinity of the ocean surface layer reduces then the equilibrium state will alter to increase the atmospheric CO2 concentration and to reduce the carbon in the ocean surface layer. The pH change required to achieve all of the recent rise in atmospheric CO2 concentration (i.e. since 1958 when measurements began at Mauna Loa) is less than 0.1, which is much, much too small for it to be detectable. And changes of this magnitude can be expected to occur.
Surface waters sink to the ocean bottom, travel around the globe for ~800 years, then return to the ocean surface. They can be expected to dissolve S and Cl from exposure to undersea volcanism during their travels. So the return of these waters to the surface will convey the S and Cl ions to the surface layer centuries after their exposure to the volcanism, and this could easily reduce the surface layer pH by more than 0.1. Hence, variations in undersea volcanism centuries ago could be completely responsible for the recent rise in atmospheric CO2 concentration.
Please note that the fact that these volcanic variations could be responsible for the recent rise does not mean they are responsible (which is the same logic as the fact that the anthropogenic emissions could be responsible does not mean that they are).
However, Tom Quirk observes that the geographical distribution of atmospheric carbon isotopes provides a better fit to the undersea volcanism hypothesis than to the anthropogenic hypothesis as a cause of the rise: see
http://climaterealists.com/attachments/database/A%20Clean%20Demonstration%20of%20Carbon%2013%20Isotope%20Depletion.pdf
There are many possible causes of the recent rise in atmospheric CO2 concentration. They each warrant investigation, and there is not sufficient evidence to champion any one of them.”
I hope you will take note of my final paragraph in my above quotation: I am not championing the ‘volcanic variation hypothesis’ but am using it here to demonstrate that “a drop of ~80 ppm in a few years around 1940” is physically possible.
Seasonal variation differs greatly from place to place. It is minimal at Mauna Loa (which is why Keeling chose to monitor atmospheric CO2 concentration there), yet even there it is typically more than 8 ppmv each year. And, at that location, this variation is attributed to oceanic emission and sequestration.
So, a change to the pH of the ocean surface layer that severely inhibited oceanic emission (but not sequestration) could easily induce “a drop of ~80 ppm” in less than a decade.
This is yet another example of how closed minds inhibit investigation of the true behaviours of the carbon cycle by asserting that known possibilities are not “believable”.
Richard
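[Editor's note: the pH-to-pCO2 arithmetic behind the quoted claim above can be checked to first order. This is a minimal sketch, assuming the surface-layer bicarbonate concentration stays fixed – so equilibrium pCO2 is proportional to [H+] via pCO2 = [H+][HCO3-]/(K1·K0), with the constants cancelling in the ratio. The function name is hypothetical, not from any cited work, and real carbonate chemistry (DIC re-speciation, the Revelle factor) is deliberately ignored.]

```python
def pco2_after_ph_drop(pco2_ppm, dph):
    """Equilibrium pCO2 after the surface-layer pH falls by dph,
    holding [HCO3-] fixed (a simplification that ignores DIC
    re-speciation). At fixed bicarbonate, pCO2 is proportional to
    [H+], so a pH drop of dph multiplies pCO2 by 10**dph."""
    return pco2_ppm * 10.0 ** dph

# A 0.1 pH drop scales pCO2 by 10**0.1 (about 1.26), i.e. ~315 ppm
# (the 1958 value) becomes ~397 ppm – roughly the size of the
# post-1958 rise under discussion.
shifted = pco2_after_ph_drop(315.0, 0.1)
```

Under this (very rough) approximation, the claim that a sub-0.1 pH change in the surface layer could account for the whole modern rise is at least dimensionally plausible.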

Steve Garcia
June 8, 2010 2:52 am

Gail Combs says June 7, 2010 at 6:46 pm:

“At Mauna Loa we use the following data selection criteria:
3. There is often a diurnal wind flow pattern on Mauna Loa ….. The upslope air may have CO2 that has been lowered by plants removing CO2 through photosynthesis at lower elevations on the island,…. Hours that are likely affected by local photosynthesis are indicated by a “U” flag in the hourly data file, and by the blue color in Figure 2. The selection to minimize this potential non-background bias takes place as part of step 4. At night the flow is often downslope, bringing background air. However, that air is sometimes contaminated by CO2 emissions from the crater of Mauna Loa. As the air meanders down the slope that situation is characterized by high variability of the CO2 mole fraction…..

Wow. I have always furrowed my brow at why they would put a CO2 station on top of a CO2 spewing volcano and try to convince us all that the data is pristine. Out in the middle of the ocean makes a lot of sense, but couldn’t they find an island WITHOUT a simmering volcano?
THANK YOU for this bit of information. I don’t trust their data at all anymore. Even the CO2 data coming from Mauna Loa has to be “adjusted” – and guess who gets to do the adjusting?

John Finn
June 8, 2010 2:56 am

Unless I’ve missed it, I notice no-one has mentioned the AIRS data from satellite observations. These are observations of CO2 in the mid-troposphere. From what I recall they provide strong independent support that the Mauna Loa readings are accurate and representative of the “well-mixed” atmosphere.
This link
http://airs.jpl.nasa.gov/AIRS_CO2_Data/
shows the CO2 map for July 2009 which shows the average concentration at roughly 387 ppm. The July 2009 reading at ML was 387.74 ppm.
If we can establish that ML and the ice core data are reliable, then it will become a relatively straightforward task to show that human emissions are mainly responsible for the rise.

Keith Minto
June 8, 2010 3:10 am

anna v says:
June 7, 2010 at 11:47 pm
Sorry, it is in China that we get red. Australia has a very low blue point.

That low blue point (360ppm) on the coast of western Australia contrasts with a solitary ‘spot’ in the desert of central Australia and one near Brisbane at 385ppm. 385ppm is also recorded by Cape Grim on the west coast of Tasmania in 2009.
This is more than “a percent or two” of variation across a continent and worthy of more attention, perhaps in a new thread.
I could be wrong, but I think that anna v, like me, is interested in a vertical profile of CO2 mixing or, more likely, layering.
Wayne, I will digest that Nicol article a little more and not comment on it now so as not to go OT, thanks anyway for the link.

June 8, 2010 3:33 am

Willis
“I’ve never been able to find a physical explanation of how this works in the real world.”
The reason is that exponential decay just isn’t the right model for diffusion of CO2 into the sea. If you want to think of it as a set of layers of equal thickness and diffusivity responding to a rise of CO2 in the air, then the top layer absorbs CO2 fairly quickly, since there’s not much resistance. When it approaches capacity, the next layer comes into play. But to get there, CO2 has to pass through the top layer, so it fills more slowly, with a longer time constant. And so on down.
This is solving the diffusion equation, and instead of convolving with a decaying exponential, which as Bart says is what you are doing, you should be convolving with a function of the form t^(-3/2). This decays much more slowly than an exponential, but you could consider it approximately made up as a sum of exponentials of varying time constants. That’s where the Bern model comes from.
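[Editor's note: the layered-diffusion argument above can be made concrete with a toy finite-difference model. All parameters (layer count, diffusivity, time step) are illustrative, not calibrated to the real ocean; this is a sketch of the mathematics, not ocean physics.]

```python
# Toy model: CO2 diffusing into a stack of identical "ocean layers"
# after a step rise in the fixed atmospheric concentration on top.
N = 50           # number of layers
D = 1.0          # diffusivity (arbitrary units)
dt = 0.01        # time step (D*dt must stay well below 0.5 for stability)
c = [0.0] * N    # concentration in each layer, initially zero
c_top = 1.0      # step rise in atmospheric concentration (fixed boundary)

uptake = []      # total CO2 held by the column after each step
for step in range(20000):
    new = c[:]
    # top layer exchanges with the atmosphere and with the layer below
    new[0] = c[0] + D * dt * (c_top - 2.0 * c[0] + c[1])
    for i in range(1, N - 1):
        new[i] = c[i] + D * dt * (c[i - 1] - 2.0 * c[i] + c[i + 1])
    new[N - 1] = c[N - 1] + D * dt * (c[N - 2] - c[N - 1])  # no-flux bottom
    c = new
    uptake.append(sum(c))

# Compare uptake over the first 1000 steps with an equal-length late
# interval: early uptake is several times larger, because later CO2 must
# traverse the already-filled upper layers. The uptake rate decays far
# more slowly than any single exponential, which is why a sum of
# exponentials (the Bern model) is used to approximate it.
early = uptake[999]
late = uptake[19999] - uptake[18999]
```

Running this shows `early` exceeding `late` by roughly an order of magnitude, with uptake still growing at the end – no single e-folding time fits the whole curve.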

Malaga View
June 8, 2010 3:44 am

anna v says:
June 7, 2010 at 11:47 pm
We had discussed it here at the time it appeared, http://wattsupwiththat.com/2009/09/13/some-results-from-gosat-co2-hot-spots-in-interesting-places/

Thanks for that reference…. which includes some interesting comments regarding the validity of the MLO data (besides their data processing procedures)…
OsandZsChemist says:
September 14, 2009 at 5:01 am
It is interesting to correlate this with historical pollution tracking and jetstream patterns http://www.cgd.ucar.edu/cms/pjr/pubs/2000JD900842.pdf which imply a dumpout between Hong Kong (lat. 22N) and Hawaii (lat. 21N). Although an early reach, it is possible to speculate that the Mauna Loa CO2 measurements are biased by the huge CO2 emissions from China and the Middle East. Anyone tried a correlation between ramp up of fossil fuel use in China and the Mauna Loa http://www.esrl.noaa.gov/gmd/ccgg/trends/co2_data_mlo.html measurements?
John says:
September 14, 2009 at 11:58 am
Concerning Mauna Loa, Jeffrey A. Glassman (http://www.rocketscientistsjournal.com/) has for years insisted that Mauna Loa was a very poor choice for monitoring CO2. The island lies in the middle of a discharge plume as cool North Pacific water warming on its trip southward swings westward. The warming results in a steady elevation in atmospheric CO2. This map seems to confirm that Mauna Loa overestimates global atmospheric CO2 levels.
=====================================================
So it looks like the MLO and Ice Core data have all sorts of associated issues….
So this CO2 analysis seems to have very questionable foundations… to say the least.

Richard S Courtney
June 8, 2010 4:24 am

John Finn:
At (June 8, 2010 at 2:56 am) you say:
“If we can establish that ML and the ice core data are reliable, then it will become a relatively straightforward task to show that human emissions are mainly responsible for the rise.”
I agree it is a pity that we cannot determine the reliability of either of them.
However, in the light of his comment at (June 7, 2010 at 7:09 pm), dr.bill would probably be interested to know that in 1859 Michael Faraday discovered the major problem with ice cores as reliable stores of atmospheric CO2 concentrations: i.e. all ice surfaces (including surfaces of ice crystals) are coated in a liquid water phase at all temperatures down to about -40 deg.C.
(ref. Rosenberg R, ‘Why is ice slippery?’, Physics Today, Dec. 2005).
Liquid water dissolves CO2 and ionic diffusion occurs through liquid water.
Richard

kadaka (KD Knoebel)
June 8, 2010 4:41 am

Re: Gail Combs on June 7, 2010 at 5:15 pm
Gail, I tossed that out there for general consideration without speculating myself, but there was something about the ice core low CO2 levels that was tickling my brain. The plant data show much lower levels, and yes I agree that could be a local source/sink issue but there it is. Well, for a long time now I’ve read speculation here about CO2 diffusing out of high-concentration air bubbles in the ice cores, yielding too-low readings. Where does it go? If there was an “averaging” occurring in the ice in all those layers over all that time, wouldn’t the low-concentration bubbles gain some CO2? And there it is, the plant data showing much lower levels than what the ice cores do.
Of course in some ways it sounded silly, as with that graph (link to US-B&W version) not getting smoother further back in time, and sudden swings from low to high being preserved. So it doesn’t look like such “averaging” is a long-term effect, given the lack of long-term smoothing, and swing preservation says it’s not short term acting on “young” ice. However, I can’t rule out that the high and low points of the swings were clipped, especially with the lower plant numbers, thus can’t rule out a short-term issue when the young ice wasn’t “set” and under so much pressure… So I posted it without speculation to see if someone else might get that same tickle.
Looking at that graph again though, I’ve noticed a major irritation. The CO2, temperature, and CH4 lines match up well, and this graph has too coarse a time scale to really see any CO2/temperature lag. But what is noticeable is that insolation line. Going by the peaks, first temperature and the other two peak, then insolation peaks. It’s not a perfect match, and they don’t always line up that way, especially on minor peaks where other factors might be in play, but that certain pattern is far more likely to be seen than otherwise. Further, quite often insolation will be rising, then the others peak, then insolation peaks. Around 160,000 BP there’s an oddball, temp and CO2 have a minor peak while insolation is going down, but methane is also going down. Nearly always the pattern is there, insolation rises, the three peak, then insolation peaks.
Which brings about the following question: Is there some negative feedback mechanism keeping insolation from raising the other readings past a certain point? Looks to me like an albedo change is indicated, namely increased cloud cover. Past a certain temperature, increased cloud cover due to increased evaporation acts to dampen any further increases in temperature. (One may choose to argue it is CO2 and/or methane, both GHG’s, that are somehow triggering some sort of temperature-limiting mechanism. Which would be interesting.)

Ah crud, and here I was, I really was, trying to keep near the “attribution” topic while simply mentioning something about the ice cores I had found that makes me question them somewhat, and noting the huge pre-SUV rises. Oh well. Around here there tends to be a love/hate relationship to ice cores, we like the CO2/temp lag and the higher temps, don’t like the presumably-low CO2 measurements. Guess we’ll have to treat them like we do the historical temperature records, we have reasons to (strongly) suspect they could be a mess but they are what we have to work with, so let’s see what real info we can tease out of them. And look there, possible evidence of a global temperature-limiting mechanism!

Gail Combs
June 8, 2010 4:42 am

Michael Larkin says:
June 7, 2010 at 6:17 pm
This is coming from left field, but if no one has mentioned it, I wondered if it might have relevance. If not all “fossil fuel” is in fact biogenic, then wouldn’t that affect C12/C13 ratios?
I have no opinion either way, incidentally, about abiogenic oil or anthropogenic CO2 increase. The more I read pro and contra, the more I come to realise that agnosticism is the only sane position for me.
________________________________________________________________________
I looked into that a while ago and you run into a confounding factor – microbes living in the coal
Carbon-14 in Coal Deposits
“…it would also seem to represent a problem for the established geologic timescale, as conventional thought holds that coal deposits were largely if not entirely formed during the Carboniferous period approximately 300 million years ago. Since the halflife of carbon-14 is 5,730 years, any that was present in the coal at the time of formation should have long since decayed to stable daughter products. The presence of 14C in coal therefore is an anomaly that requires explanation….”
bacteria/fungi hypothesis: Lowe then makes a reasonable case for fungi and bacteria – there are fungi that can degrade lignite (Polyporus versicolor and Poria montiola), as well as autotrophic “thiobacillus-like” bacteria that oxidize pyrites in coal, and he points out that bacteria have been found 3km underground apparently living on granite. Lowe states that fungal and bacterial activity is particularly likely in warm, damp coal exposed to air, and he points out that microbial action only has to result in the deposition of ~0.1% by weight of modern carbon in the coal to produce an apparent age of 45,000 years for the specimen.
Since Lowe’s paper, there have been many more reports of deep subterranean bacteria, which apparently form a heretofore unrecognized ecosystem deep below the earth in rocks and in oils (abstracts below). Presumably most of these bacteria never interact with the “modern” 14C of the atmosphere. But some deep bacterial activity apparently can result in increased concentrations of 13C….”

On the next Ice Age: Everyone seems to forget the default setting for the Earth is a deep freeze.
About the onset of an ice age, this comment from E.M.Smith:
From “Ice Age” by John and Mary Gribbin (a wonderful read, gives the richness of the characters in the discovery of the ice ages, the history of the process, and a gentle introduction to some of the science involved.):
Pg.53: […]the single most important thing to emerge from these discussions was Koppen’s realization of the key season in the Ice Age saga. Adhemar and Croll had thought that the decisive factor in encouraging Ice to spread across the Northern Hemisphere must be the occurrence of extremely cold winters, resulting in increased snowfall. At first, Milankovitch had shared this view. But it was Koppen who pointed out that it is always cold enough for snow to fall in the Arctic winter, even today, and that the reason that the Northern Hemisphere is not in the grip of a full Ice Age is because the ‘extra’ snow melts away again in the summer.
[EMS: Note that the Southern Hemisphere is similarly irrelevant to the ice age cycle since it is always cold enough for snow to stay frozen. It just doesn’t change enough to matter.]
Pg 54: He reasoned that the way to encourage the ice to spread would be to have a reduction in summer warmth, because then less of the winter snowfall would melt. If less snow melted in summer than fell in winter, the ice sheets would grow – and once they had started to grow, the feedback effect of the way the ice and snow reflect away incoming solar energy would enhance the process.
Pg 57: It isn’t so much that Ice Ages occur when the astronomical influences conspire to produce particularly cool summers, rather what matters is that Interglacials only occur when the astronomical influences conspire to produce unusually warm summers, encouraging the ice to retreat. Without all three of the astronomical rhythms working in step this way, the Earth stays in a deep freeze.
End Quote.
So, to summarize:
1) The south pole doesn’t matter to the process, it’s always frozen.
2) We are normally in a long ice age and only pop out for short intervals when conditions are just right.
3) The ‘just right’ is Northern Hemisphere summers warm enough to melt the snow and ice.
4) Warm enough is when the Northern Hemisphere is pointed at the sun in summer while at close approach to the sun, with the right elliptical shape and with the pole tilted over far enough… or we freeze.
I would add a note that I think it is particularly illuminating that we are near the end of an Interglacial (next stop is an ice age), the only thing that keeps it away is the summer Arctic ice and snow melt. So what is the AGW crowd in histrionics about? That the Arctic ice and snow are not sticking through the summer… Think about it…

Right now Earth is nearest the sun during northern winters, farthest from the sun during northern summers.
E.M.Smith sums up my concerns very nicely. Thank you Mr. Smith, and also Willis for giving us a chance to thrash out the pros and cons of the ‘increase in CO2 is caused by man’ hypothesis.

tonyb
Editor
June 8, 2010 5:04 am

Richard Courtney (June 8, 2010 at 2:50 am) made the point about the possibility of CO2 concentration dropping substantially – and, by implication of course, the other way round.
This was discussed at great length on my thread on this subject over at Air Vent when Becks figures around 1940 were dissected.
The Oceans are a giant source or sink dependent on temperatures. I believe Richard himself gave a very good indicator of the Billions of Tonnes that could theoretically cycle between ocean and atmosphere on another thread a few months ago.
So a considerable and rapid change in atmospheric concentration is theoretically possible IF the right conditions prevail. It does happen sometimes that the Arctic and Antarctic can be warming or cooling in unison.
http://i43.tinypic.com/a4wiu8.png
If this should occur at the same time there and in most other parts of the world, the absorption or outgassing can be very considerable. Of course the time scale of cause and effect needs to be taken into account – 1 year? 1000 years?
I have asked before if anyone knew the amount of outgassing or absorption of CO2 per one tenth of a degree of oceanic temperature change, in order to determine whether the temperature changes we witness in the graph could significantly affect the atmospheric concentration. However, to date no one has come up with an appropriate formula.
Tonyb

tallbloke
June 8, 2010 5:13 am

Willis Eschenbach says:
June 8, 2010 at 12:52 am
tallbloke says:
June 7, 2010 at 11:19 pm
1) Why no account taken of the work done by Ernst-Georg Beck in gathering empirical records which show that CO2 levels were much more upsy-downsy in the C19th and early C20th than the warmists would like to have us know?
I discussed Beck’s work (and Beck himself commented) on my previous thread.

I think your reasons for rejecting his work don’t hold up in the light of his latest paper:
http://www.biokurs.de/treibhaus/CO2_versus_windspeed-review-1-FM.pdf
It shows the levels of this “well mixed” gas were at around 332 back in the 1800s and as high as today in 1940. I trust direct air measurements more than ice core proxies from the South Pole, where CO2 levels are generally lower anyway.

John Finn
June 8, 2010 6:02 am

Richard S Courtney says:
June 8, 2010 at 4:24 am

John Finn:
At (June 8, 2010 at 2:56 am) you say:
“If we can establish that ML and the ice core data are reliable, then it will become a relatively straightforward task to show that human emissions are mainly responsible for the rise.”


I agree it is a pity that we cannot determine the reliability of either of them.
1. Do you think that the agreement between ML, Barrow, South Pole and AIRS data is a coincidence, or do you believe they are all wrong for the same reason?
2. Do you think the same argument applies to all 8 ice core datasets?

dr.bill
June 8, 2010 6:11 am

I have learned quite a lot from the many commenters on this thread, most notably (but not exclusively) Richard S Courtney and tonyb. I also share the concerns of anna v regarding both energy and the mixing issue, and am not nearly as confident about the procedures at Mauna Loa (and elsewhere) as Willis seems to be, but I am grateful for the clear way in which he has outlined exactly what they do with the initial measurements.
In tonyb’s first post yesterday, he included a link to a graph that I think sums up a lot regarding the effects of CO2 on ‘global warming’.
I have long had a poster-sized version of the equivalent of this graph hung up on my office wall, except that the curves have been labelled f(t) and g(t), and no reference is made to climate issues. As people come and go, I ask them what they think of the connection between the two functions. So far, the most common responses have been:
– No connection.
– Very little connection.
– Is this a trick question?
– You’re joking, right?

/dr.bill

Richard S Courtney
June 8, 2010 6:11 am

Fred H. Haynie:
At June 7, 2010 at 6:20 pm you say:
“A more informative exercise is to use the Scripps seasonally adjusted monthly CO2 averages, convert to global gigatons/year (annual difference or accumulation rate) and compare the cyclic behavior to the relatively straight line for anthropogenic emissions. They average about the same but the natural cycles vary by orders of magnitude. This does not play well for cause and effect.”
With respect it is not true that “This does not play well for cause and effect”, but it does deny the CO2 ‘budget’ analyses used by e.g. the IPCC to determine “accumulation” of anthropogenic CO2.
Your comment pertains to the fact that the annual pulse of anthropogenic CO2 into the atmosphere should relate to the annual increase of CO2 in the atmosphere if one is directly causal of the other, but their variations greatly differ from year to year. Such a necessary relationship would require that the carbon cycle were so near to saturation that the system could not sequester all the anthropogenic addition.
However, the rates of the seasonal variations to atmospheric CO2 concentration demonstrate that the system is not near such saturation.
(ref. Rorsch A, Courtney RS & Thoenes D, ‘The Interaction of Climate Change and the Carbon Dioxide Cycle’ E&E v16no2 (2005) ).
A caveat is that the use of annual data for anthropogenic CO2 may be an error. Some data on e.g. fuel consumption may not be collated in time so may be misallocated to an adjacent year, so 2-year smoothing of the data is justifiable. And some countries may use different 12-month periods for their accounting years which – together with the reason for 2-year smoothing – provides justification for 3-year smoothing. But smoothing of the data over 4 or more years is not justifiable.
The IPCC uses 5-year smoothing to ‘fit’ its model of ‘accumulation’ of anthropogenic emissions to the observed rise in atmospheric CO2 concentration as determined at Mauna Loa.
However, in our paper that I cite here and outline at June 7, 2010 at 2:43 am above, we provided six models that each match the annual data for the anthropogenic emission to the observed rise in atmospheric CO2 concentration as determined at Mauna Loa. None of these models used any smoothing or other adjustment to any of the data. As I explained above (at June 7, 2010 at 2:43 am):
“Our paper then used attribution studies to model the system response. Those attribution studies used three different basic models to emulate the causes of the rise of CO2 concentration in the atmosphere in the twentieth century. They each assumed
(a) a significant effect of the anthropogenic emission
and
(b) no discernible effect of the anthropogenic emission.
Thus we assessed six models.
These numerical exercises are a caution to estimates of future changes to the atmospheric CO2 concentration. The three basic models used in these exercises each emulate different physical processes and each agrees with the observed recent rise of atmospheric CO2 concentration. They each demonstrate that the observed recent rise of atmospheric CO2 concentration may be solely a consequence of the anthropogenic emission or may be solely a result of, for example, desorption from the oceans induced by the temperature rise that preceded it. Furthermore, extrapolation using these models gives very different predictions of future atmospheric CO2 concentration whatever the cause of the recent rise in atmospheric CO2 concentration.”
This provides an apparent paradox. The annual anthropogenic emission of CO2 should relate to the annual increase of CO2 in the atmosphere if one is directly causal of the other, but these two parameters do not correlate. Yet – using each of our different models – we were able to model the increase of CO2 in the atmosphere as a function solely of the annual anthropogenic emission of CO2. And we did not use any ‘fiddle factors’ such as the 5-year averaging used by the IPCC to get a ‘fit’. (Adoption of that smoothing really is a disgrace. There can be no justification for it because there is no known physical mechanism that would have that effect.)
The apparent paradox is resolved by considering the calculated equilibrium CO2 concentration values. These show an important difference between the three models. They diverge.
But each model indicates that, for each year, the calculated CO2 concentration for the equilibrium state is considerably above the value of the observed CO2 concentration in the air. This demonstrates that each model indicates there is a considerable time lag required to reach the equilibrium state when there is no accumulation of CO2 in the atmosphere.
The short term sequestration processes can easily adapt to sequester the anthropogenic emission in that year. But, according to our models, the total emission of any year affects the equilibrium state of the entire system. Some processes of the system are very slow with rate constants of years and decades. Hence, the system takes decades to fully adjust to the new equilibrium. And the models predict the atmospheric CO2 concentration slowly rising in response to the changing equilibrium condition.
Simply, we demonstrated that it is possible that the total natural flux of CO2 from the Earth to the air may increase over time as a response to increasing anthropogenic emission. And this provides an explanation of why the apparent accumulation of CO2 in the atmosphere continued when in two subsequent years the anthropogenic flux into the atmosphere decreased (this happened, for example, in the years 1973-1974, 1987-1988, and 1998-1999).
So, in summation, your observation does disprove the IPCC model of “accumulating” anthropogenic CO2 in the air, but it does not negate the possibility that the anthropogenic emission is responsible for the recent (i.e. since 1958) rise in atmospheric CO2 concentration. And our models demonstrate that the cause of the recent rise may be entirely natural, or entirely anthropogenic, or some combination of anthropogenic and natural causes.
So, a question:
Is the cause of the rise in atmospheric CO2 concentration natural or anthropogenic, in part or in whole?
Answer:
God alone knows.
Richard
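[Editor's note: the relaxation-toward-a-shifting-equilibrium behaviour described in the comment above can be sketched numerically. All parameters (`k_eq`, `tau`, the emission series, the 315 ppm starting point) are hypothetical illustrations, not values from the cited Rorsch, Courtney & Thoenes paper.]

```python
def simulate(emissions_ppm, c0=315.0, k_eq=0.6, tau=40.0):
    """Toy carbon-cycle model: each year's emission shifts the system's
    equilibrium CO2 level, and the atmosphere relaxes toward that
    equilibrium with a multi-decade time constant.

    emissions_ppm: annual anthropogenic emissions in ppm-equivalent.
    k_eq: fraction of cumulative emission reflected in the equilibrium
          concentration (hypothetical).
    tau:  relaxation time constant in years (hypothetical).
    Returns the yearly atmospheric concentration."""
    c = c0
    cumulative = 0.0
    path = []
    for e in emissions_ppm:
        cumulative += e
        c_eq = c0 + k_eq * cumulative   # equilibrium set by total emission
        c += (c_eq - c) / tau           # slow relaxation toward equilibrium
        path.append(c)
    return path

# Even when the emission dips in one year (the 1.2 below), the
# concentration keeps rising, because the equilibrium set by cumulative
# emissions remains above the current concentration – matching the
# 1973-74 / 1987-88 / 1998-99 observation in the comment.
path = simulate([1.5] * 10 + [1.2] + [1.5] * 10)
```

The point of the sketch is only that a lagged-equilibrium model reproduces the qualitative behaviour; it says nothing about which physical process sets the equilibrium.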

John Finn
June 8, 2010 6:14 am

Malaga View says:
June 8, 2010 at 3:44 am
……
of a discharge plume as cool North Pacific water warming on its trip southward swings westward. The warming results in a steady elevation in atmospheric CO2. This map seems to confirm that Mauna Loa overestimates global atmospheric CO2 levels.

Then so does the AIRS satellite data from the mid-troposphere – and ‘coincidentally’ – by almost exactly the same amount.
=====================================================
So it looks like the MLO and Ice Core data have all sorts of associated issues….
So this CO2 analysis seems to have very questionable foundations… to say the least.

ML data is fine. It is validated by a host of other data. Ice core data is consistent across all datasets. Ice-age concentrations show there is a lot of variability – just not so much in the inter-glacial periods, i.e. +/-20ppm.

John Finn
June 8, 2010 6:18 am

Malaga View says:
June 8, 2010 at 12:45 am

Have you actually looked at the scale on your linked maps? We do know about the seasonal cycle, by the way.

Phil.
June 8, 2010 6:38 am

tonyb says:
June 8, 2010 at 5:04 am
I have asked before if anyone knew of the amount of outgassing or absorption of CO2 per one tenth of a degree of oceanic temperature change, in order to try to determine if the temperature changes we witness in the graph can possibly have the effect of significantly affecting the atmospheric concentration. However to date no one has come up with an appropriate formula.

The most recent evaluation of this in a Nature paper came out with a median value of ~8 ppm CO2/ºC (upper limit ~20).
David C. Frank, Jan Esper, Christoph C. Raible, Ulf Büntgen, Valerie Trouet, Benjamin Stocker, & Fortunat Joos. Ensemble reconstruction constraints on the global carbon cycle sensitivity to climate. Nature, 2010; 463 (7280): 527 DOI: 10.1038/nature08769
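[Editor's note: applied to tonyb's question, the quoted sensitivity gives a simple back-of-envelope calculation. The linear, constant sensitivity is a simplification; the function name is the editor's, not from the Frank et al. paper.]

```python
def co2_response_ppm(delta_t_c, sensitivity_ppm_per_c=8.0):
    """CO2 change (ppm) implied by a temperature change, assuming a
    constant linear carbon-cycle sensitivity (Frank et al. 2010:
    median ~8 ppm/deg C, upper limit ~20)."""
    return sensitivity_ppm_per_c * delta_t_c

# A 1 deg C ocean warming implies ~8 ppm at the median sensitivity,
# or ~20 ppm at the upper limit:
median_rise = co2_response_ppm(1.0)         # 8.0 ppm
upper_rise = co2_response_ppm(1.0, 20.0)    # 20.0 ppm

# Conversely, an 80 ppm swing would require ~10 deg C at the median
# sensitivity, or ~4 deg C at the upper limit:
dt_for_80ppm_median = 80.0 / 8.0            # 10.0 deg C
dt_for_80ppm_upper = 80.0 / 20.0            # 4.0 deg C
```

So, at the quoted median figure, temperature-driven outgassing alone is an order of magnitude short of an 80 ppm change for a 1 °C warming.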

June 8, 2010 6:58 am

The term “well mixed” has in some cases been misused. Concerning the Scripps data, they tend to define it as when there is enough turbulence in the surface boundary layer that concentrations are relatively uniform and representative of a background concentration. That layer is roughly a kilometer deep, between the surface and the base of clouds. They only include measurements in their monthly averages that meet these criteria. “Well mixed” and “e-fold” should not be used to try to explain the global uniformity of these measurements. There are other physical processes controlling the atmospheric concentration of CO2. For example, how long does it take for CO2 “out-gassing” from the tropical ocean to be absorbed into clouds and then be returned to the ocean as rain? What fraction is returned to the ocean and what fraction is transported to the top of the atmosphere in towering clouds? How do wind patterns carry CO2 from its tropical source to its Arctic sink? How and why do both source and sink change with time? Until these processes are better understood, it is not much better than speculation to attribute some fraction of the rise in background CO2 levels to anthropogenic emissions.

phlogiston
June 8, 2010 7:13 am

A nice paper in PNAS by Beerling and Berner (2005) from Sheffield (PDF available without paywall) shows how the evolution of trees resulted in a sharp drop in atmospheric CO2:
http://www.pnas.org/content/102/5/1302.full
In particular increasing height and leaf size, plus weathering action of roots building soils, all transformed the global carbon budget and cycle.
It’s a good illustration of the close interplay between the earth’s biota and its atmosphere and environment – showing there is substance to Lovelock’s Gaia hypothesis. Also, if the biosphere coped with CO2 levels of several thousand ppm and adapted in such a way as to change the atmospheric composition, it is not too far-fetched to imagine that the present biosphere will have no difficulty adapting to, and even benefiting from, a modest anthropogenic increase in the rate of CO2 emission to the atmosphere.
The authors ended with a throw-away line about present-day feedbacks possibly being positive and harmful, but this contradicted the main message of the paper – the biosphere’s optimising adaptability – and seems half-hearted, perhaps a perfunctory bone thrown to AGW mandarins in the publication process.

tonyb
Editor
June 8, 2010 7:33 am

In my post 5.04 am
I linked to a graph and asked a question;
http://i43.tinypic.com/a4wiu8.png
So that I can properly understand Phil’s helpful reply to me at 6.38 am, I confirm he said:
“The most recent evaluation of this in a Nature paper came out with a median value of ~8 ppm CO2/ºC (upper limit ~20).”
So do I correctly understand that IF there were a general warming of the oceans by around 1 degree C it would outgas the equivalent of 80 ppm (presumably a similar cooling would have the opposite effect)?
This amount is not far off the 80 ppm ‘outlier’ that Beck recorded in the 1940s. The world certainly warmed from 1920 to 1940, so a considerable amount of outgassing should theoretically have happened. I don’t know if most of it disappeared back into the oceans when it cooled again or went into plant growth.
My thanks to Dr Bill for his kind comments. An ‘unlabelled’ graph sounds an excellent idea to show the lack of cause and effect.
Tonyb

David C
June 8, 2010 8:00 am

Willis
It is generally accepted that there is a correlation between CO2 levels and temperature, derived from ice-core evidence, over many thousands of years. Cause or effect, lag or lead, are of course disputed. If we think increased CO2 in past epochs was an effect of warming, we should be seeing some effect of recent warming in CO2 levels now. Could the analysis be extended to take such an effect into account?

Malaga View
June 8, 2010 8:01 am

Warming in Last 50 Years Predicted by Natural Climate Cycles
by Roy W. Spencer, Ph. D.

http://wattsupwiththat.com/2010/06/07/minority-report-50-year-warming-due-to-natural-causes/
From this record I computed the yearly change rates in temperature. I then linearly regressed these 1-year temperature change rates against the yearly average values of the PDO, AMO, and SOI.

Compare the 1900 through 1960 period in Roy W. Spencer’s article:
http://www.drroyspencer.com/wp-content/uploads/20th-Century-NH-Tsfc-model-based-on-PDO-AMO-SOI.gif
With Ernst-Georg Beck’s background CO2 for the same period:
http://www.biomind.de/realCO2/bilder/CO2back1826-1960eorevk.jpg
It begins to look like the background CO2 levels are driven by the oceans… when the oceans warm they emit more CO2… when they cool they absorb more CO2… now there’s a surprise…

Slioch
June 8, 2010 8:08 am

Fred H. Haynie says:
June 8, 2010 at 6:58 am
“it is not much better than speculation to attribute some fraction of the rise in background CO2 levels to anthropogenic emissions.”
Even though those anthropogenic emissions to the atmosphere since 1850 are more than twice the rise in those background (atmospheric) CO2 levels during the same period … ?
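Slioch’s mass-balance point can be checked with round numbers. A minimal sketch, assuming circa-2010 figures of ~500 GtC cumulative fossil-plus-land-use emissions and a ~110 ppm rise, with the standard ~2.13 GtC-per-ppm conversion (these inputs are illustrative assumptions, not figures from the thread):

```python
# Mass-balance sketch: cumulative human emissions vs. the atmospheric rise.
GT_C_PER_PPM = 2.13          # ~2.13 GtC of airborne carbon per 1 ppm CO2
emitted_gt = 500.0           # cumulative fossil + land-use carbon, GtC (assumed)
rise_ppm = 110.0             # atmospheric rise, ~280 -> ~390 ppm (assumed)

retained_gt = rise_ppm * GT_C_PER_PPM       # carbon that stayed airborne
ratio = emitted_gt / retained_gt            # emitted vs. retained
airborne_fraction = retained_gt / emitted_gt

print(f"retained: {retained_gt:.0f} GtC, emitted/retained: {ratio:.2f}, "
      f"airborne fraction: {airborne_fraction:.0%}")
```

Since the emitted total is roughly double what stayed airborne, the natural system must have been a net absorber, not a net source, over the period.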

Steve Fitzpatrick
June 8, 2010 8:34 am

Nick Stokes says:
June 8, 2010 at 3:33 am
“The reason is that exponential decay just isn’t the right model for diffusion of CO2 into the sea. ”
I’m not so sure about this. There is little reason to think that a diffusion-based model is reasonable. The problems I see with the Bern model are twofold:
1. The Bern model appears to ignore the importance of thermohaline circulation, which is (I think) mainly responsible for net CO2 absorption. Net CO2 absorption is driven mainly by the difference between absorption by sinking (very cold) high latitude water and the out-gassing from upwelling deep waters as they warm at low latitudes, and yields an expected response to rising CO2 emissions which almost perfectly matches the historical data. It also predicts a continuing net absorption rate (for hundreds of years) that depends almost exclusively on the CO2 concentration in the atmosphere; that is, it would predict an exactly exponential decay if all CO2 emissions were suddenly to stop.
2. To match historical atmospheric data, the Bern model assumes preposterously high mixing rates between surface waters in the tropics and sub-tropics and deeper underlying waters (essentially matching the preposterously high mixing rates that are used to justify very long ocean heat lags).
The Bern model predicted a falling capacity for CO2 uptake, which should long ago have been evident in the data. Recent publications, showing that the uptake capacity has in fact not fallen at all, suggest the model is not a good representation of reality. The almost constant Argo ocean heat content since ~2004 – 2005 simultaneously casts doubt on long ocean heat lag values and on the Bern model, since CO2 mixing and thermal mixing are essentially identical.
There must be (of course) some downward mixing due to turbulent eddies where the thermocline contacts the well mixed layer, but I suspect the Bern model grossly overestimates the importance of that mixing in order to maintain agreement with expected long ocean heat lags. Both appear to me to be quite wrong.
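For readers weighing the two pictures, here is a minimal sketch contrasting a single-time-constant exponential decay with the multi-exponential Bern-style impulse response. The coefficients are the widely quoted Bern2.5CC fit (my insertion for illustration, not numbers from Steve’s comment):

```python
import math

def bern_fraction_remaining(t):
    """Bern2.5CC impulse-response fit: fraction of a CO2 pulse still
    airborne after t years. Note the 0.217 term that never decays."""
    return (0.217
            + 0.259 * math.exp(-t / 172.9)
            + 0.338 * math.exp(-t / 18.51)
            + 0.186 * math.exp(-t / 1.186))

def single_exp_fraction_remaining(t, tau=50.0):
    """Simple one-time-constant decay (tau = 50 yr is an arbitrary choice)."""
    return math.exp(-t / tau)

for t in (0, 10, 50, 100, 500):
    print(f"t={t:>3} yr  Bern: {bern_fraction_remaining(t):.3f}  "
          f"single-exp: {single_exp_fraction_remaining(t):.3f}")
```

The single exponential eventually decays to zero; the Bern fit levels off near 22%, and that long tail is exactly the behaviour Steve disputes.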

Malaga View
June 8, 2010 8:36 am

Thanks for this great WUWT site….
Thanks for this thought provoking post…
Thanks for the thought provoking comments….
Thanks to James Taylor for Fire and Rain

I’ve seen fire and I’ve seen rain
I’ve seen sunny days that I thought would never end
I’ve seen lonely times when I could not find a friend
But I always thought that I’d see you again

For FIRE read CHERRY PICKING
For RAIN read TRICKING
For YOU read SENSE
I’m outta here……

Spector
June 8, 2010 8:44 am

I am suspicious of the pre-1800 constant CO2 concentration indicated by this data, as I would expect the CO2 concentration to mirror global average ocean temperature changes. These temperature changes should affect the static quantity of CO2 that can remain dissolved in the ocean. Perhaps this information might have been lost as a result of some long-term gas-diffusion process. I think we may need a second, unrelated proxy to confirm this result.

Steve Fitzpatrick
June 8, 2010 8:56 am

Willis,
This post seems to have made a little progress with a few (which is in fact amazing), but I’m not sure the result justifies the effort. Where do you find the time?

thethinkingman
June 8, 2010 9:05 am

I am sorry if this sounds a bit, well, simple, but what is the resolution in years, decades, centuries etc. of the CO2 level revealed by ice cores?
I ask because I am a civil engineer and I design and build dams which means I rely on rainfall probabilities. One in a hundred, thousand etc. floods are my bread and butter but these are averaged out trends and the resolution we use is poor so we pad the numbers a lot.
Now back to the ice cores. Is it possible that the decade-on-decade variations are high, but that the methodology used in CO2 estimates smooths out peaks and troughs with, for instance, hundred-year averages? I have tried to find some definitive answer by researching this online but I can’t find anything useful.
Any links someone may have would be welcome so I can go off and read up before I even have the hubris to think I can contribute to Willis’ challenge.
Thanks.
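thethinkingman’s smoothing question can be illustrated numerically. This sketch pushes a hypothetical 20-year, +50 ppm CO2 spike through a 71-year boxcar average standing in for firn gas-age smoothing (spike size and window width are illustrative assumptions, not measured ice-core properties):

```python
import numpy as np

WINDOW = 71                      # assumed gas-age smoothing width, years
years = 300
co2 = np.full(years, 280.0)      # flat 280 ppm baseline
co2[100:120] += 50.0             # hypothetical 20-year spike of +50 ppm

kernel = np.ones(WINDOW) / WINDOW
smoothed = np.convolve(co2, kernel, mode="same")

peak_excess = smoothed.max() - 280.0
print(f"true spike: +50.0 ppm, after smoothing: +{peak_excess:.1f} ppm")
# The boxcar attenuates the spike to 50 * 20/71, i.e. about +14 ppm, so a
# short-lived excursion could hide inside the ice-core record's scatter.
```

The general answer to the resolution question is core-dependent (firn close-off times range from decades to centuries), so the sketch only shows the mechanism, not any particular core’s smoothing.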

Richard S Courtney
June 8, 2010 9:30 am

John Finn:
At June 8, 2010 at 6:02 am you ask me:
“1. Do you think that the agreement between ML, Barrow, South Pole and AIRS data is a coincidence, or do you believe they are all wrong for the same reason?
2. Do you think the same argument applies to all 8 ice core datasets?”
I would answer your questions if I knew the answers, but I don’t.
I wish I were omniscient but I regret that there is much, much more that I do not know than I do.
Richard

Fred H. Haynie
June 8, 2010 9:32 am

To TheThinkingMan
Try http://www.kidswincom.net/climate/pdf in the section on ice cores.

thethinkingman
June 8, 2010 10:40 am

To . .
Fred H. Haynie . ..
Thanks for the link but it 404’s out 🙁
I will keep on searching.

Steve Fitzpatrick
June 8, 2010 10:44 am

Richard S Courtney says:
June 8, 2010 at 9:30 am
““1. Do you think that the agreement between ML, Barrow, South Pole and AIRS data is a coincidence, or do you believe they are all wrong for the same reason?
2. Do you think the same argument applies to all 8 ice core datasets?”
I would answer your questions if I knew the answers, but I don’t.”
If you really do not know, then it might be a good time to apply Occam’s razor. The simplest and most probable explanation is that the data agree because they are measuring the same thing.

kadaka (KD Knoebel)
June 8, 2010 10:57 am

Fred H. Haynie said on June 8, 2010 at 9:32 am

To TheThinkingMan
Try http://www.kidswincom.net/climate/pdf in the section on ice cores.

404 Error!
I backed the URL up to http://kidswincom.net/climate/ which shows http://www.kidswincom.net/climate.pdf as a file, which downloads as 1.0 MB… and is a 59-slide presentation. Fred H. Haynie has, as the first sentence of slide 59 (titled “Finally, the Bottom Line!”):

Anthropogenic emissions of carbon dioxide have not caused a rise in background levels of carbon dioxide.

I will take this to mean you disagree with Willis.
From slides 52 to 56 I see doubts about the ice core data as I’ve seen expressed before here on WUWT from several commentators.
Note: I realize graphs with a black/dark blue background with white grid and light-colored lines do look “stylish” and may work well with an actual projection screen and darkened room presentation, but here on my monitor they are hiding detail. Slide 46 is rather bad, and to me looks like the squirting of mustard, ketchup, and BBQ sauce onto a BBQ grill, I can’t make out much of anything.

anna v
June 8, 2010 11:33 am

Willis Eschenbach says:
June 8, 2010 at 10:37 am
My point is
1) that 8% (in the Japanese column-averaged CO2 numbers) is not a few percent, and in the radial dimension, if taken before averaging, the percentages would probably be higher.
2) “Well mixed” is being defended by hand waving, not by measurements. Where measurements exist, there are at least 10% differences not only in phi and theta but also in r. BTW, H2O is lighter than CO2 in molecular weight: one is 18 and the other is 44.
3) The Mauna Loa measurements and the Keeling-organized measurements are limited measurements, typical of the location and the height at which they are taken.
4) The ice core measurements are by construction depleted and long-period-averaged, so the statement of unprecedented CO2 increase that your fig 1 shows is not proven by the data in fig 1, any more than the hockey stick graph proved the non-existence of the Medieval Warm Period.
Because of 1, 2, 3 and 4, Beck’s data and the stomata data must be taken more seriously, and a great effort must be made from now on toward a three-dimensional map (r, theta, phi) of the concentrations of CO2.
I think the climate community is in no way ready to present proof that the increase of CO2 is unprecedented, let alone anthropogenic.

thethinkingman
June 8, 2010 11:36 am

Got this off of American Thinker . . .
Posted by: tmead new
Jun 08, 06:23 AM
Definitive evidence already exists to prove/disprove global warming. The GPS Master Control System has been measuring average atmospheric drag on each of the GPS satellites for 35 years. This is needed to predict satellite positions to NINE decimal places. If the Atmosphere is warming, it expands. If the atmosphere expands, the amount of gas encountered by the satellites increases. Thus the average drag over an orbit increases. The USAF Space Command has this drag data recorded from 1975 onward. If the trend is up, warming is occurring, if the trend is level or down, it is NOT. The measurements are accurate to at least 7 decimal places (far more than any temperature measurements). I was one of the original developers of the Kalman Filter Estimator software in GPS and have a Ph.D in Engineering. Every citizen with a GPS knows that it is the most accurate and verified system in existence. Someone like Roy Spencer should ask Space Command for the data. I am retired and no longer have the contacts or tools to do the work, or I would.
Link . . http://comments.americanthinker.com/read/42323/611689.html

thethinkingman
June 8, 2010 11:51 am

Thanks Kadaka I have got it now.

AnonyMoose
June 8, 2010 12:03 pm

A simpler explanation for the flat ice core graph is that the number reflects a common level for CO2 in ice, that it’s just a characteristic of the ice rather than the atmosphere. The non-flat data counters that idea. But there are a lot of questionable things about ice cores and their processing. I do admire the lab skill in teasing out the data, but the interpretations of the data are peculiar in several ways.

thethinkingman
June 8, 2010 12:52 pm

Thanks Fred, that is quite a read.
Now for a second time to see how much more I can get out of it. The concept of cycles upon cycles over a 10 000 year trend is familiar to me.
I intend sending this on to a number of my friends, with attribution of course.

John Finn
June 8, 2010 1:37 pm

anna v says:
June 8, 2010 at 11:33 am
because of 1,2,3,4 Beck’s data, and the stomata data must be taken more seriously, and great effort must be made for a three dimensional map, r theta phi of the concentrations of CO2 from now on.

Which “Beck’s data” would this be? There’s a spreadsheet with Beck measurements that shows a reading of 308 ppm in 1843, 400 ppm in 1844 and 359 ppm in 1847. Well – that’s me convinced. That dodgy old Siple record only shows a change of ~0.6 ppm over the same period, which can’t be right. Of course Mauna Loa, Barrow, South Pole and AIRS satellite readings all suggest annual changes of similar magnitude to the ice core – so we can ditch them as well. The Beck data must be correct because, as we all know, earth’s biosphere regularly pumps an extra ~200 GtC (more than the normal annual cycle) into the atmosphere every so often. And a couple of years later – not a trace of it. Gone – as though it had never been there.
We apparently have a situation whereby
1. All 8 ice core records are wrong – by exactly the same amount.
2. All CO2 measurements from Mauna Loa and dozens of other sites around the world are also wrong – again by remarkably similar amounts.
3. The AIRS satellite data from the mid-troposphere is also wrong – simply because it agrees with ML and other surface based observations.
Sheesh!
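The ~200 GtC figure follows directly from the Beck readings quoted above, using the standard ~2.13 GtC-per-ppm conversion (the conversion factor is my insertion; the ppm values are from the spreadsheet John Finn cites):

```python
GT_C_PER_PPM = 2.13   # ~2.13 GtC of airborne carbon per 1 ppm of CO2

co2_1843 = 308.0      # ppm, Beck spreadsheet value quoted above
co2_1844 = 400.0      # ppm, one year later

implied_flux = (co2_1844 - co2_1843) * GT_C_PER_PPM
print(f"implied net source in one year: {implied_flux:.0f} GtC")
# ~196 GtC appearing in a single year, then vanishing again within a
# couple of years - far more than the normal annual carbon cycle.
```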

Jay Davis
June 8, 2010 2:02 pm

The number of people populating the planet has risen dramatically in the past 160 years, almost corresponding to your CO2 graphs. Since each person exhales approximately 2 metric tons of CO2 annually, how much do these human emissions of CO2 contribute to the overall increase in CO2 concentration? The Chinese alone must exhale at least 2 billion metric tons annually!

Bart
June 8, 2010 2:03 pm

thethinkingman says:
June 8, 2010 at 11:36 am
I’m a bit doubtful – atmospheric density is not generally considered significant above maybe 800 km altitude, and the GPS satellites are at “1/2 GEO” (about 20,000 km altitude for a 12 hr period). It is difficult for me to imagine that atmospheric drag is anything but infinitesimal up there – we tend to discount it entirely above 800 km so I do not recall how quickly it falls off, and am feeling too lazy right now to look it up.
However, so are the effects of General Relativity, and I do know that these accumulate fast enough that the onboard clocks have to be compensated for them. But that is really mostly an effect on local time, and there are no other influences, whereas you would have to tease out the effects of any tiny atmospheric drag from much larger solar pressure and Earth and Moon gravity, and possibly even Coulomb force effects. I will not render a final verdict without knowing more of the author’s thinking – assuming his stated qualifications are valid, he may have something valid and very specific in mind. But, well… maybe he jumped to a conclusion without thinking it through.
LEO satellites are, however, markedly influenced by atmospheric drag, and the solar cycle has a huge effect. Some recent missions would not have been feasible if the sun had been more active.
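Bart’s point about LEO drag and the solar cycle can be sketched with a single-species isothermal density profile (atomic oxygen and representative thermosphere temperatures; all numbers are illustrative, and this simple barometric picture says nothing reliable about GPS altitude):

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
M_O = 16 * 1.6605e-27     # mass of atomic oxygen, kg
G_400 = 8.7               # gravitational acceleration near 400 km, m/s^2

def density_ratio(height_m, temp_k, base_m=200e3):
    """Density at height_m relative to a 200 km base, isothermal profile."""
    scale_height = K_B * temp_k / (M_O * G_400)
    return math.exp(-(height_m - base_m) / scale_height)

quiet = density_ratio(400e3, 700.0)     # quiet-sun thermosphere (assumed T)
active = density_ratio(400e3, 1400.0)   # active-sun thermosphere (assumed T)
print(f"density at 400 km varies by ~{active / quiet:.0f}x over a solar cycle")
# Doubling the temperature roughly doubles the scale height, which 200 km
# above the reference changes density by an order of magnitude - hence
# LEO drag (and mission feasibility) tracks solar activity.
```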

John Finn
June 8, 2010 2:12 pm

Richard S Courtney says:
June 8, 2010 at 9:30 am
John Finn:
At June 8, 2010 at 6:02 am you ask me:
“1. Do you think that the agreement between ML, Barrow, South Pole and AIRS data is a coincidence, or do you believe they are all wrong for the same reason?
2. Do you think the same argument applies to all 8 ice core datasets?”
I would answer your questions if I knew the answers, but I don’t.
I wish I were omniscient but I regret that there is much, much more that I do not know than I do.

Richard
Richard
I bet every so often – quite successfully as it happens. Almost always on football – I know nothing about horse racing. But I effectively use statistical probability and select “value for money” bets. I know you don’t know for certain the answer to my questions – just as I don’t know if Man Utd will definitely beat Hull City. However, I don’t think it’s likely that Hull will beat United, and I don’t think it’s likely that the agreement between the datasets is due to coincidence.

Phil.
June 8, 2010 2:14 pm

thethinkingman says:
June 8, 2010 at 11:36 am
Got this off of American Thinker . . .
Posted by: tmead new
Jun 08, 06:23 AM
Definitive evidence already exists to prove/disprove global warming. The GPS Master Control System has been measuring average atmospheric drag on each of the GPS satellites for 35 years. This is needed to predict satellite positions to NINE decimal places. If the Atmosphere is warming, it expands. If the atmosphere expands, the amount of gas encountered by the satellites increases. Thus the average drag over an orbit increases. The USAF Space Command has this drag data recorded from 1975 onward. If the trend is up, warming is occurring, if the trend is level or down, it is NOT.

True but the GPS orbits are so high (20,000 km) that they’re well up in the exosphere (mainly Helium/H). The connection between the temperature/density up there and in the troposphere is limited, probably correlates well with the 10.7 flux.

sdcougar
June 8, 2010 2:14 pm

Comments on Dr. Martin Hertzburg’s presentation re: atmospheric CO2 ?

June 8, 2010 2:15 pm

Willis, thanks for this post which inspired a wonderful scientific discussion.
Richard S Courtney, thank you for your reasoned views and independence. The points you brought out in the 800-year CO2-lag discussion are interesting.
Anthony, as always, WUWT is a truly great venue.
John

sdcougar
June 8, 2010 2:16 pm

sorry, “Hertzberg”

Richard S Courtney
June 8, 2010 2:20 pm

Steve Fitzpatrick:
At June 8, 2010 at 10:44 am you assert to me:
“If you really do not know, then it might be a good time to apply Occam’s razor. The simplest and most probable explanation is that the data agree because they are measuring the same thing.”
No!
The simplest and most probable explanation is that the data agree because they have been adjusted such that they agree.
Personally, I prefer to admit my ignorance as to the true explanation for their agreement and not to assume anything.
Richard

Joel Shore
June 8, 2010 2:34 pm

barry moore says:

The subject which you are consistently ducking is that you claim in your opening statement that human emissions are responsible for the increase in CO2, this you clearly stated and I take issue with it since I think it is giving aid and comfort to the enemy.

As one of “the enemy”, I suppose, I will give you a very different perspective: Continually denying scientific facts for which the evidence is overwhelming does little to help the “skeptic” cause within the scientific (and, likely, policymaking) communities. It really just discredits you and makes your views easy to dismiss. For example, whenever I begin to wonder if Roy Spencer might be on to something in regards to cloud feedbacks, I remind myself that this is the same person who made some particularly poor arguments for why the CO2 rise might not be primarily anthropogenic. That [along with his stated views on human origins] helps me calibrate the likelihood that his analysis is correct (and the analysis of many other scientists is wrong) in an area where I feel less competent to judge.
I have often offered the advice here that you guys should focus on the issue of feedbacks and climate sensitivity. While I may not believe that most of the scientific evidence is on your side in this realm either, I admit that there is at least legitimate scientific uncertainty that there is room for intelligent debate on the subject.
Alas, my advice seems to go unheeded by many…which, in a way I suppose, is okay since I disagree with you on the seriousness of AGW and whether actions should be taken to mitigate it and I think not heeding my advice probably does “your side” more harm than good. Still, the scientific part of me cringes at the poor arguments that pass for serious debate here on subjects such as the cause of the current rise in CO2 levels.

Steve Fitzpatrick
June 8, 2010 3:30 pm

Richard S Courtney says:
June 8, 2010 at 2:20 pm
“The simplest and most probable explanation is that the data agree because they have been adjusted such that they agree.”
If you really believe that, then there is not much more to discuss… good luck, I wish you well.

June 8, 2010 3:33 pm

Joel Shore says:
June 8, 2010 at 2:34 pm
Continually denying scientific facts for which the evidence is overwhelming does little to help the “skeptic” cause within the scientific (and, likely, policymaking) communities. It really just discredits you and makes your views easy to dismiss.

Joel,
Critical evaluation of “scientific facts” to ascertain if they are real is not unscientific, quite the contrary. You assume critical evaluation is wrong. Bad assumption.
John

Dr A Burns
June 8, 2010 3:48 pm

>> bubbagyro says:
>>June 7, 2010 at 3:45 pm
Excellent, bubbagyro! I was not aware of this work, and obviously Willis Eschenbach and many others aren’t either.
Those wishing to duplicate Eschenbach or Mann style hockey sticks need simply:
1. Plot long term smoothed, or data averaged over long term by some means (diffusion in this instance, or by averaging lots of data from different sources with various time shifts etc)
2. Splice on some recent instantaneous (eg daily) data that you know is rising.
3. Hey presto … a hockey stick. Instant alarm !

tonyb
Editor
June 8, 2010 3:51 pm

Joel Shore
Nice to see you here again, where have you been hiding? You’re a bit late coming to this particular CO2 party. Also I haven’t seen Ferdinand up to now – I hope he’s OK 🙂
Tonyb

Murray Duffin
June 8, 2010 3:51 pm

Willis, I am very late into this discussion, so I hope you find this contribution. I did all this about 5.5 years ago. It adds some insight to your observations. Hope it is useful. Note: I used “snip – snip” to indicate where I had clipped out part of the wording from a reference. This usage seems to have become inappropriate since then, but I don’t have time just now to edit all of this. Murray
1) http://public.ornl.gov/ameriflux/about-history.shtml
Just to set the stage:
snip “Yet, for many reasons our understanding of the global carbon budget is incomplete. At present, 40 to 60% of the anthropogenically-released CO2 remains in the atmosphere. We do not know, with confidence, whether the missing half of emitted CO2 is being sequestered in the deep oceans, in soils or in plant biomass. Uncertainties about flows of carbon into and out of major reservoirs also result in an inability to simulate year to year variations of the annual increment of CO2.” snip
This from a government site. Clearly the uncertainties in GCMs are much larger than the degree of certitude expressed by AGW advocates would suggest. We are not dealing with linear rates of change, and the results of analyses can change dramatically depending on the rate sensitivity of the factor being analyzed and the time period used.
2) http://cdiac.esd.ornl.gov/trends/co2/contents.htm
CO2 delta in the atmosphere from 1970 through 2004 averaged 1.5 ppm/yr. From 1958 to 1974 it averaged 0.9 ppm/yr. From 1994 through 2004 it has averaged 1.8 ppm/yr. Snip “On the basis of flask samples collected at La Jolla Pier, and analyzed by SIO, the annual-fitted average concentration of CO2 rose from 326.86 ppmv in 1970 to 377.83 ppmv in 2004. This represents an average annual growth rate of 1.5 ppmv per year in the fitted values at La Jolla. ” snip.
That’s the one site that can be seriously affected by nearby emissions. All eight regularly measured sites track precisely. The major measuring sites are widely spread from north to south, and the uniform measurement results indicate that CO2 emissions are quickly and well mixed in the atmosphere.
3) http://cdiac.esd.ornl.gov/ftp/ndp030/global.1751_2004.ems
From tables accessible at 2) and 3) we can do some decadal average annual analysis as:
Decade                                      1        2        3        4        5
Years                                     ’54-’63  ’64-’73  ’74-’83  ’84-’93  ’94-’03
Ave. annual fuel emissions (Gt/yr)          2.4      3.4      5.0      6.0      6.7
Percent change decade to decade                      42       47       20       12
Ave. annual atmos. conc’n delta (ppm/yr)    0.8      1.1      1.4      1.5      1.8
Atmos. conc’n delta per Gt emission (ppB)   333      324      280      250      270
Implied atmospheric retention (Gt)          1.7      2.3      2.9      3.1      3.7
Airborne fraction (%)                       71       68       58       52       55
Ocean uptake from fuel (Gt)                 0.7      1.1      2.1      2.9      3.0
Deforestation factor guesstimate*           1.03     1.06     1.09     1.12     1.15
Total emissions (Gt)                        2.5      3.6      5.5      6.7      7.7
Airborne fraction of total (%)              68       64       53       46       48
Ocean uptake total (Gt)                     0.8      1.3      2.6      3.6      4.0
*The above fuel emissions from 3) do not include any factor for deforestation/land use. Recent total emissions have been estimated by AGW advocates as slightly less than 8 Gt/yr total, giving about an additional 15% for deforestation/land use. As deforestation is to a degree linked to third world population, we can assume that factor was sequentially lower going back to prior decades. Using a higher factor for prior decades won’t change anything much. Column 3 fuel emissions data corresponds almost exactly with IPCC SAR figures.
While total average annual emissions have gone up by a factor of 3, ocean uptake has gone up by a factor of 5. That is hardly consistent with slow mixing or near saturation of surface waters. What seems to be happening is that increasing atmospheric partial pressure is increasing the rate of ocean uptake, with the rate of increase slowed by surface warming/acidification. We can expect a large emissions increase for the next decade, with a corresponding relatively large increase in partial pressure. It remains to be seen how much of that will be offset. The decade to decade rate of increase in fuel emissions has declined very rapidly, from mid 40s% to about 12%. Based on the last couple of years, one could expect the decade ’04-’13 to have total average annual emissions in the order of 9.0 Gt, with total fuel emissions near 7.6 Gt (a decadal increase of 13%), and with an airborne fraction near 45%. After that, with declining petroleum, CO2 sequestration for tertiary petroleum recovery, and rising fuel prices driving major accelerations of efficiency, nuclear and renewables, the annual emissions to the atmosphere are likely to begin declining, and to reach a very low level by 2060 or so. The IPCC 50% probability estimate (Wigley et al) is very close to 7.5 Gt near 2010, but goes to 15 Gt by 2060, requiring a compound growth rate of 15% per decade, which isn’t going to happen.
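Murray’s decadal bookkeeping can be reproduced from two rows of his table, assuming the standard ~2.13 GtC-per-ppm conversion (my insertion; small differences from his rounded figures are expected). The last lines also check the 15%-per-decade remark:

```python
GT_C_PER_PPM = 2.13  # GtC of airborne carbon per 1 ppm CO2 (assumed conversion)

# (decade label, avg fuel emissions Gt/yr, avg atmospheric delta ppm/yr)
decades = [("'54-'63", 2.4, 0.8), ("'64-'73", 3.4, 1.1), ("'74-'83", 5.0, 1.4),
           ("'84-'93", 6.0, 1.5), ("'94-'03", 6.7, 1.8)]

for label, fuel, delta_ppm in decades:
    retained = delta_ppm * GT_C_PER_PPM      # implied retention, Gt/yr
    airborne_fraction = retained / fuel
    print(f"{label}: retained {retained:.1f} Gt/yr, "
          f"airborne fraction {airborne_fraction:.0%}")

# IPCC projection check: 7.5 Gt near 2010 to 15 Gt by 2060 is 5 decades,
# so the implied compound growth per decade is:
growth = (15.0 / 7.5) ** (1 / 5) - 1
print(f"implied growth: {growth:.0%} per decade")
```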
4) http://cdiac.esd.ornl.gov/pns/faq.html
snip Q. How long does it take for the oceans and terrestrial biosphere to take up carbon after it is burned?
A. For a single molecule of CO2 released from the burning of a pound of carbon, say from burning coal, the time required is 3-4 years. This estimate is based on the carbon mass in the atmosphere and uptake rates for the oceans and terrestrial biosphere. Model estimates for the atmospheric lifetime of a large pulse of CO2 have been estimated to be 50-200 years (i.e., the time required for a large injection to be completely dampened from the atmosphere). snip
This range seems to be an actual range depending on time frame, rather than the uncertainty among models. [See (5) below].
5) http://www.accesstoenergy.com/view/atearchive/s76a2398.htm
For the above decades 1 through 5, we have now had 4, 3, 2, 1, and 0 half lives respectively. From 3) and 5) and using an average half life of 11 years, (based on real 14C measurement) we get a total remaining injection in 2004 from the prior 5 decades of 139 Gt, which equates to an increase in atmospheric concentration of 66 ppm. The actual increase from 1954 to 2004 was very near 63 ppm. This result lends some credibility to the 50 year atmospheric residence time estimate. [See (9) below]. A 200 year residence time gives an 81 ppm delta since 1954, which is much too high.
Surprisingly, if we go all the way back to 1750 and compute the residence time using fuel emissions only, we get a value very close to 200 years. (A 40 year ½ life gives a ppm delta of 99 vs an actual of 96, using 280 ppm as the correct value in 1750.) If we assume that terrestrial uptake closely matches land use emissions (this is essentially the IPCC assumption), and we know that the airborne fraction from 1964 through 2003 had a weighted average of 58%, then to shift to a long term 40 year ½ life from a near term 11 year ½ life we would have to have prior 40 year period weighted average airborne fractions like 80% for ’24-’63, and 90%-100% before that. Since emissions in the last 40 years have been 3 times higher than in the period from 1924 to 1963, and 30 times higher than 1844 to 1883, it is not too hard to believe that the rapid growth in atmospheric partial pressure has forced such a change in airborne fraction. With rising SSTs we can expect the partial pressure forced rate of ocean uptake to be offset to a growing degree. (Of course we now know that since 2003 we have not had rising SSTs, rather a slight cooling.) As emission rates decline in the future, and with the delayed impact of ocean warming, the half life can be expected to begin growing again, but it seems very unlikely that the residence time for a pulse of CO2 would get back to 200 years.
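Murray’s remaining-injection estimate can be sketched from his stated half-life counts. A crude sketch, assuming the decadal totals from the table and integer numbers of 11-year half-lives; it lands near 130 Gt (~61 ppm), in the same range as the ~139 Gt / 66 ppm he quotes:

```python
GT_C_PER_PPM = 2.13                      # assumed GtC-per-ppm conversion

total_emissions = [25, 36, 55, 67, 77]   # Gt per decade, from the table above
half_lives_elapsed = [4, 3, 2, 1, 0]     # 11-yr half-lives elapsed by 2004

remaining = sum(e * 0.5 ** n
                for e, n in zip(total_emissions, half_lives_elapsed))
print(f"remaining: {remaining:.0f} Gt (~{remaining / GT_C_PER_PPM:.0f} ppm)")
# With negligible decay over this window (a ~200-year residence time),
# nearly all 260 Gt emitted over the five decades would still be airborne.
```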
6) http://www.hamburger-bildungsserver.de/welcome.phtml?unten=/klima/klimawandel/treibhausgase/carbondioxid/surfaceocean.html
Here we find a nice description of atmosphere/ocean interchange mechanisms, with the major fault that it gives the impression that the exchange magnitudes are well known. While this was published sometime after 2001, the net ocean uptake from the atmosphere shown would be roughly correct for about the mid ’70s, and has since well more than doubled (see above), despite surface warming. This would suggest that a near surface increase in ocean carbon concentration considerably upsets the exchange between the surface and deeper ocean waters. It seems possible that carbon fertilization plus warming considerably accelerate growth of ocean biota. The IPCC downplay this possibility, but do not outright deny it, which suggests a fairly high degree of probability to me.
7) http://www.grida.no/climate/ipcc_tar/wg1/105.htm
From the IPCC TAR we read: [snip] In principle, there is sufficient uptake capacity (see Box 3.3) in the ocean to incorporate 70 to 80% of anthropogenic CO2 emissions to the atmosphere, even when total emissions of up to 4,500 PgC (4500 Gt) are considered (Archer et al., 1997). [snip] That’s a 3400 Gt sink capacity, and we are talking about sinking less than another 1000 Gt, at a rate of about 4 Gt/yr at peak, for only a very few years at the peak rate. However, the 3400 Gt additional capacity, which would add less than 10% to the ocean inventory, seems like a very low value for three reasons. First, the equilibrium concentration [see 9) below] is more than 3x the present concentration. Second, atmospheric concentrations were at least 5 times higher 100 million years ago, so seawater concentrations can be that much higher also. Third, experiments on CO2 clathrate hydrate formation see formation at dissolved CO2 concentrations two orders of magnitude higher than the present concentration. Since 1900, total anthropogenic carbon emission has been about 300 Gt (about 83% since 1945), of which about 170 Gt is still in the atmosphere. In the next century, net emissions to the atmosphere may be no more than another 400 Gt, which would likely add less than another 90 ppm of atmospheric concentration. The idea that we are saturating the ocean sink is not even remotely consistent with the available numbers.
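The bookkeeping in that paragraph can be checked directly. A quick sketch; the ~2.13 Gt C per ppm of CO2 conversion is my assumption, not a figure from the comment:

```python
ocean_inventory = 38100 + 1020   # deep + surface ocean, Gt C (SAR figures below)
capacity        = 3400           # remaining uptake capacity quoted from the TAR, Gt C
airborne        = 170            # anthropogenic carbon still in the air, Gt C
gt_per_ppm      = 2.13           # assumed: ~2.13 Gt C per ppm of atmospheric CO2

print(100 * capacity / ocean_inventory)  # ~8.7%: "less than 10%" checks out
print(airborne / gt_per_ppm)             # ~80 ppm, consistent with the rise since 1900
```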
The IPCC goes on to say: [snip] The finite rate of ocean mixing, however, means that it takes several hundred years to access this capacity (Maier-Reimer and Hasselmann, 1987; Enting et al., 1994; Archer et al., 1997). Chemical neutralization of added CO2 through reaction with CaCO3 contained in deep ocean sediments could potentially absorb a further 9 to 15% of the total emitted amount, reducing the airborne fraction of cumulative emissions by about a factor of 2; however the response time of deep ocean sediments is in the order of 5,000 years (Archer et al., 1997). [snip] They then show a CO2 system diagram with sediment uptake of 0.2 Gt/yr. The present airborne fraction of 170 Gt would be taken up by the total system in only about 850 years at that rate.
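At the diagram’s sediment rate the arithmetic is one line (note that 170 / 0.2 gives 850 years, not 800):

```python
airborne_gt   = 170   # Gt C currently airborne, from above
sediment_rate = 0.2   # Gt C/yr to deep-ocean sediments, from the TAR diagram
print(airborne_gt / sediment_rate)  # ~850 years
```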
The SAR shows a net sink from atmosphere to ocean of about 2.2 Gt/yr. The problem here is that the level of uncertainty in the rate of ocean mixing, and in how that rate might change, is greater than the rate at which we are injecting carbon [see 1) above]. The IPCC doesn’t discuss this uncertainty. The increase we have already seen in the rate of ocean uptake [see 3) above] is 2x this number, but the difference is only 1% of the estimated round-trip exchange.
For reference, also from the IPCC SAR, we can find the following carbon inventory and exchange estimates. These were finalized in 1994, but some data may be based on mid-'70s estimates.
a) Inventory
Intermediate and deep ocean – 38,100 Gt; terrestrial soil, biota and detritus – 2,190 Gt; surface ocean (down to about 400 m max) – 1,020 Gt; atmosphere – 750 Gt; ocean sediments – 150 Gt; marine biomass – 3 Gt. That’s a total of 42,213 Gt, excluding carbonaceous rock. I find the level of precision amusing.
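The amusingly precise total does at least add up, as a quick tally of the listed figures shows:

```python
inventory_gt = {
    "intermediate and deep ocean": 38100,
    "soil, biota and detritus":     2190,
    "surface ocean":                1020,
    "atmosphere":                    750,
    "ocean sediments":               150,
    "marine biomass":                  3,
}
print(sum(inventory_gt.values()))  # 42213
```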
b) Annual Exchanges
Anthropogenic emissions to atmosphere – 5.5 Gt.
Atmosphere to surface ocean – 92 Gt; surface ocean to atmosphere – 90 Gt; net to ocean – 2 Gt.
Surface ocean to marine biota – 50 Gt; reverse – 40 Gt; marine biota to deep ocean – 9 Gt; marine biota to DOC – 1 Gt.
Surface ocean to deep ocean – 92 Gt; reverse – 100 Gt; deep ocean to sediments – 0.2 Gt; net ocean uptake – 2.2 Gt.
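As a rough sanity check, the SAR fluxes listed above very nearly close for each reservoir. A sketch using only the figures quoted (land fluxes other than those listed are ignored):

```python
# Net annual change per reservoir, Gt C/yr, from the SAR exchange figures above.
atmosphere = 5.5 + 90 - 92                       # anthropogenic in; ocean exchange
surface    = (92 - 90) + (40 - 50) + (100 - 92)  # atmosphere, biota, deep ocean
biota      = 50 - 40 - 9 - 1                     # uptake minus return, sinking, DOC
deep       = (92 + 9) - 100 - 0.2                # inflow minus upwelling and sediments

print(atmosphere, surface, biota, deep)  # roughly 3.5, 0, 0, 0.8
```

The implied atmospheric gain of about 3.5 Gt/yr is close to the observed annual rise, the surface and biota budgets close exactly, and the ocean gains the stated ~2 Gt/yr once the DOC and sediment terms are included.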
8) http://cdiac.esd.ornl.gov/oceans/ndp_065/appendix065.html
There is a huge volume of data on the concentration of CO2 in seawater, including variability with both depth and latitude. The above reference is for the South Pacific. Data for the South Atlantic, showing variability with depth but not with latitude, is also available. The present concentration is about 25 mg/kg (2100 umol/kg). The variation in concentration, by both depth and latitude, is similar in both bodies, varying about ±7% around the mean, with localized excursions up to ±13%. Since atmospheric concentration has increased about 32% in the last 150 years, and about 25% in the last 50 years, one would expect much greater variation in oceanic concentration if the take-up by the deep ocean is slow. CO2 concentration varies directly with salinity, and inversely with temperature. The greatest concentrations are at depth (1500 to 2500 m) and at higher latitudes. The equatorial regions are spoken of as a source of CO2, which must be a function of temperature, as are the slightly lower surface concentrations there. Heavy rainfall in the tropics may also contribute to reduced concentration. High latitudes are spoken of in the TAR as having “CO2 rich upwellings”, which is consistent with the observed data, but not consistent with the claim of slow mixing between surface and deep water. In deep, dark, cold waters one would expect very slow local oxidation, so the likely source of deep-water concentration would seem to be rapid transport from the surface, by the likes of the Atlantic Conveyor. Concentration would increase with both increasing salinity and decreasing temperature as the Conveyor moves north.
There is essentially no variation with longitude, except for the depth of the isolines in deeper waters. Curiously, the partial pressure reaches a maximum at mid depths. Are currents near the bottom carrying relatively younger mixed surface water, with its lower partial pressure?
9) http://ijolite.geology.uiuc.edu/02SprgClass/geo117/lectures/Lect18.html
Atmospheric gases in sea water
— saturation = equilibrium
Molecule   Percent in atmosphere   Equilibrium concentration in seawater (mg per kg seawater)
N2         78%                     12.5
O2         21%                     7
Ar         1%                      0.4
CO2        0.03%                   90
In surface sea water, atmospheric gases are close to their “saturation” concentration (or equilibrium concentration). But note that CO2 has a much higher solubility (equilibrium
concentration) than the other gases.
10) http://stommel.tamu.edu/~baum/paleo/ocean/node37.html
[snip] Thermocline – Specifically the depth at which the temperature gradient is a maximum. Generally a layer of water with a more intensive vertical gradient in temperature than in the layers either above or below it. When measurements do not allow a specific depth to be pinpointed as a thermocline a depth range is specified and referred to as the thermocline zone. The depth and thickness of these layers vary with season, latitude and longitude, and local environmental conditions. In the midlatitude ocean there is a permanent thermocline residing between 150-900 meters below the surface, a seasonal thermocline that varies with the seasons (developing in spring, becoming stronger in summer, and disappearing in fall and winter), and a diurnal thermocline that forms very near the surface during the day and disappears at night. There is no permanent thermocline present in polar waters, although a seasonal thermocline can usually be identified. The basic dynamic balance that maintains the permanent thermocline is thought to be one between the downward diffusive transport of heat and the upward convective transport of cold water from great depths. [snip]

There is a lot of variability evident in that quote, which makes giving firm single-figure values pretty questionable. The mid-latitude permanent thermocline has a maximum extent from about 40 degrees north to 40 degrees south. At latitudes above about 60 degrees there is usually no thermocline. The depth of the top of the thermocline can be from about 20 meters to about 400 meters, and the thickness can vary from less than 100 meters to about 400 meters. The depth of the bottom of the thermocline varies from less than 100 meters to about 900-1000 meters. The IPCC gives an average depth of the thermocline of 400 meters, but does not define whether it means the top, middle or bottom. It seems to be taking the average depth of the top as 200 meters and the average thickness as 400 meters, but these would be very rough estimates at best, and could hardly justify the 3 significant digits it uses.

Depth and thickness vary quite rapidly with both areal location and time, with time generally from hours to seasons, or, in the case of ENSO, years. In a given location the thermocline depth can move up and down by tens of meters diurnally and hundreds of meters in a season, or, as noted above, can disappear altogether. Also, in the near-equatorial Pacific, where the thermocline is normally well established, there is the well-established “equatorial cold tongue”, a huge upwelling of cold water far from the high latitudes. The average depth of the oceans is generally taken as 4000 meters. The IPCC estimates the upper mixed layer as holding 2.6% of the total ocean CO2 (1020 of 39,120 Gt), which implies near 2.6% of the water, or an average depth of 200 meters if it is taken to exist under 50% of the ocean surface. The IPCC refers to the water above the thermocline as the “mixed layer” and considers the thermocline a barrier that severely limits mixing between the intermediate and deep ocean. The intermediate layer is the thermocline zone. The deep ocean contains 90% of the total water, so the intermediate zone must be assumed to hold about 2.5% also, where it exists. The other 5% of the water is in the upper 10% of the depth, where there is no thermocline.
There are major mixing mechanisms between surface and deep ocean other than diffusion through the thermocline. These include wave motion in the “furious fifties and screaming sixties” of the southern oceans, the giant delayed oscillator of the equatorial Pacific, major sinks and upwellings, the Atlantic Conveyor and the Antarctic Circumpolar Current. The surge at depth from passing swells can be felt clearly at a depth where the swell causes a 10% depth change, and can be detected at 5%. In the screaming sixties, where winter can find 1000-mile wavetrains of 20-meter waves, mixing can be expected to 400 meters. [This is consistent with Fig. 4 in 11) below.] The ACC alone moves water at the rate of 130 million cubic meters per second, which is enough to exchange the entire Atlantic Ocean in about 100 years. The IPCC says the thermocline is the cause of slow mixing between surface and deep waters. With its degree of variability in depth, extent and time, it is more likely a mechanism of fairly rapid mixing. The total surface layer, to about 200 meters depth, must hold near 5% of the total CO2 vs. the 2.6% represented by the IPCC, and near half of that 5% must mix much more rapidly than the estimate the IPCC uses for the 2.6% it considers. Its exchange rate of about 100 Gt/yr between surface and intermediate/deep ocean is probably underestimated by a minimum factor of 2, and maybe by as much as 4 or 5. The differential between up and down transfer can easily be understated by an even larger factor. This would account for the observed ocean uptake rate of CO2 from the atmosphere, which is already 2x the IPCC estimate.
Wherever there is a great range of uncertainty in estimates, the IPCC seems to choose the extreme that will paint the most perilous picture. AGW advocates seem prone to this selective behavior.
11) http://www.aip.org/pt/vol-55/iss-8/captions/p30cap4.html See fig. 4
The first thing to note about Fig. 4 is that there is no evidence at all of a thermocline barrier at near 200 m depth. At 30 degrees S in the Pacific, the 50 umol/kg concentration extends beyond 400 m, and at about 20 degrees N in the N. Pacific the 40 umol/kg concentration gets to 400 m. The mid-latitude Pacific is relatively warm, has relatively low salinity, and can therefore be expected to have relatively low total CO2 concentration. Forty umol/kg would be about 2% anthropogenic CO2. The surface share of anthropogenic CO2 is about 2.5% in this region. Even though this is the zone that should have the strongest permanent thermocline, the anthropogenic concentration is well mixed to far below the expected thermocline depth. In the colder and saltier N. Atlantic, in the region which should at least have seasonal thermoclines (30 to 60 degrees N), we find the anthropogenic share at 1.7% (65% of the surface share) at a depth of 1200 m.
Annual ocean uptake did not reach 10% of the last decade’s rate until about 1900, and yet we find the anthropogenic share equal to 10% of the surface share at a depth of >5000 m in the N. Atlantic. The Atlantic Conveyor is certainly sinking surface anthropogenic CO2 emissions to the ocean bottom in less than a century. Since we have no longitudinal distribution, it may seem questionable to try to estimate the total Gt of anthropogenic CO2 in the oceans from Fig. 4. However, we know that there is little longitudinal variation in the Pacific, and probably the S. Atlantic is similar. In the N. Atlantic the share would be lower than shown to the west, but given that the N. Atlantic is much more saline than the N. Pacific, still higher than in the N. Pacific. A rough estimate would be 120-140 Gt. Since we have emitted about 310 Gt since 1750, and >170 Gt is still in the atmosphere, the implied total ocean uptake is about 130-140 Gt, so this figure looks pretty realistic. If we accept Fig. 4, which is based on measurement, then we have to conclude that the IPCC contention of slow mixing to the deep ocean because of the thermocline barrier is simply wrong.
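The closing mass balance is simple enough to check (land uptake is ignored here, as in the comment):

```python
emitted_gt  = 310   # anthropogenic emissions since 1750, Gt C
airborne_gt = 170   # still in the atmosphere, Gt C

implied_ocean_uptake = emitted_gt - airborne_gt
print(implied_ocean_uptake)  # 140, at the top of the 120-140 Gt read off Fig. 4
```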
12) http://www.aoml.noaa.gov/ocd/gcc/co2research
The key quote from this url is: “The global oceanic CO2 uptake using different wind speed/gas transfer velocity parameterizations differs by a factor of three (Table 1)”. The IPCC seems to have used the lowest transfer rate. The actual current transfer rate is 2x the IPCC figure, and evidently some models support a rate of 3x the IPCC figure, which seems consistent with the above observations.
13) http://www.surfnewquay.co.uk/knowledge/articles.php?
A problem here, and probably in much of the IPCC work, is the tendency to use averages. It is best to distrust averages. For example, this reference says it takes 1000 to 2000 years to turn over ocean bottom water.
Here http://calspace.ucsd.edu/virtualmuseum/climatechange1/10_5.shtml we find:
snip It takes, on the whole, one thousand years to renew the deep waters of the world’s ocean. This estimate is based on radiocarbon measurements from the CO2 dissolved within the ocean snip.
So we have already gone from “one thousand to two thousand” to “one thousand”. For the purposes of CO2 take-up we are not concerned with the whole ocean bottom, or with long averages. The relatively minuscule amount of C we are generating can be taken up by a very small fraction of the ocean. The present ocean inventory of carbon is about 39,000 Gt, which gives a concentration of 25 mg C per kg of seawater. (There are 1 million mg in a kg.) Now, we are going to add about 600 Gt by 2100, which if evenly distributed would raise the concentration by about 1.5%, to 25.4 mg/kg. Since the variability of the distribution of C in the ocean is ±7%, this addition isn’t even noticeable.

But what if we just penetrate 10% of the ocean in the short run? Then the concentration in that portion goes temporarily to about 29 mg/kg. And we do it in 100 years, not 1000 years. Then the ocean can spend the next 900 years equalizing the concentration. Thus it takes 1000 years to distribute our C injection averaged throughout the ocean, but that has no bearing on the time required to take up the “pulse”.
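The dilution figures work out as stated; a quick check:

```python
ocean_c_gt = 39000.0   # present ocean carbon inventory, Gt
conc_mg_kg = 25.0      # present concentration, mg C per kg seawater
added_gt   = 600.0     # carbon added by 2100, the comment's figure

evenly_mixed = conc_mg_kg * (1 + added_gt / ocean_c_gt)          # whole ocean
top_tenth    = conc_mg_kg * (1 + added_gt / (0.1 * ocean_c_gt))  # only 10% penetrated

print(round(evenly_mixed, 1), round(top_tenth, 1))  # 25.4 and 28.8 mg/kg
```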
Now let’s look at the big currents. The first url in 13) above gives a speed of 0.5 km/hr in some of the trenches. (The Gulf Stream moving past South Carolina recently moved some stranded boaters 150 miles in 5 days. That’s 2 km/hr.) If we assume an average speed of 0.2 km/hr, the current makes 4 complete circuits in 100 years. Given all the eddies along the way, it probably touches much more than 10% of the ocean and can transport a lot of C.
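The four-circuits figure follows from the assumed speed. A sketch; the ~40,000 km length for one full global loop is my assumption:

```python
speed_km_h   = 0.2       # assumed average current speed
circuit_km   = 40000.0   # assumed length of one global circuit
hours_100_yr = 24 * 365.25 * 100

distance_km = speed_km_h * hours_100_yr   # ~175,000 km traveled in a century
print(distance_km / circuit_km)           # ~4.4 circuits in 100 years
```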

June 8, 2010 3:51 pm

Steve Fitzpatrick says:
June 8, 2010 at 3:30 pm

Richard S Courtney says:
June 8, 2010 at 2:20 pm
“The simplest and most probable explanation is that the data agree because they have been adjusted such that they agree.”

If you really believe that, then there is not much more to discuss… good luck, I wish you well.
Steve,
After what I have seen come out since November 2009 about data “adjustments” in some “consensus climate science” communities, the possibility of “adjustments” seems much more likely to me than it used to. Now I think we should critically evaluate that possibility when looking at the product of the “consensus climate science” community.
John

Gail Combs
June 8, 2010 4:02 pm

You seem to have missed the point as to why remote locations, such as Barrow and Mauna Loa, were chosen, and why Beck’s measurements are of no use. I’m quite sure you can get higher (much higher) readings if you take measurements in Piccadilly Circus or at the Arc de Triomphe, but these would not be in any way representative of global CO2 concentrations. CO2 maps show that “well-mixed” concentrations vary by only a few ppm across the world.
________________________________________________________________________
Beck did have a series of measurements made at Barrow, but not by a CAGW scientist, and that is what I referred to.
Please note I was a chemist working in industry; that is why I do not understand how anyone can believe CO2 is “well mixed” in the atmosphere and at equilibrium. I cannot believe anyone could think it would be within a couple of ppm from location to location. Here is why I came to that conclusion.
First: do you really trust the scientists? It seems the temperature readings were adjusted six times after analysis in July 1999 indicated that the temperature anomaly for 1934 was nearly 60% higher than for 1998. And this is just from one email.
“At Mauna Loa we use the following data selection criteria:
4. In keeping with the requirement that CO2 in background air should be steady, we apply a general “outlier rejection” step, in which we fit a curve to the preliminary daily means for each day calculated from the hours surviving step 1 and 2, and not including times with upslope winds. All hourly averages that are further than two standard deviations, calculated for every day, away from the fitted curve (“outliers”) are rejected. This step is iterated until no more rejections occur…..”
Do you not understand? The assumption is made that there is NO VARIABILITY, and the data are adjusted to reflect that.
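For reference, the quoted procedure amounts to an iterated two-sigma rejection against a fitted curve. Here is a minimal sketch of that kind of filter; this is my own illustration, not NOAA’s code, and the dataset is invented:

```python
import numpy as np

def reject_outliers(t, y, degree=2, nsigma=2.0, max_iter=10):
    """Iteratively fit a curve and drop points more than nsigma standard
    deviations from it, until no further rejections occur."""
    keep = np.ones(len(y), dtype=bool)
    for _ in range(max_iter):
        coeffs = np.polyfit(t[keep], y[keep], degree)  # curve fit to surviving hours
        resid = y - np.polyval(coeffs, t)
        sigma = resid[keep].std()
        new_keep = keep & (np.abs(resid) <= nsigma * sigma)
        if new_keep.sum() == keep.sum():               # iterate until stable
            break
        keep = new_keep
    return keep

# Invented example: 24 "hourly averages" with one contaminated hour.
t = np.arange(24.0)
y = 315.0 + 0.01 * t + 0.1 * np.sin(t)   # smooth background air
y[12] += 5.0                             # a local contamination spike
keep = reject_outliers(t, y)             # keep[12] comes back False
```

Whether this amounts to assuming “NO VARIABILITY” or is routine quality control is exactly the point under dispute; note the procedure only removes points far from that day’s own fitted curve.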
Second: WHY are the results from the various sites so close to each other?
In the paper by Tom Quirk, “Sources and Sinks of Carbon Dioxide”, the isotopic balance in the atmosphere is shown to be far more complex, with many more variables, than most think. Consider that 94% of all anthropogenic CO2 is released into the Northern Hemisphere. Next, the CO2 is not as well mixed as the IPCC states. The nuclear tests of the ’60s showed that mixing north to south is very slow, on the order of several years, so (another rhetorical question) why is the average Northern Hemisphere CO2 not higher than the Southern Hemisphere’s?
As J. A. Glassman so aptly put it in one of his replies:
Excerpt:
“So why are the graphs so unscientifically pat? One reason is provided by the IPCC:
The longitudinal variations in CO2 concentration reflecting net surface sources and sinks are on annual average typically calibration procedures within and between monitoring networks (Keeling et al., 1989; Conway et al., 1994). Bold added, TAR, p. 211.
So what the Consensus has done is to “calibrate” the various records into agreement. And there can be no other meaning for “calibration procedures … between monitoring networks”. It accounts for coincidence in simultaneous records and it accounts for continuity between adjacent records. The most interesting information in this procedure would be the exact amount of calibration necessary to achieve the objective of nearly flawless measuring with the modern record dominating. The IPCC’s method is unacceptable in science. It is akin to the IPCC practice of making “flux adjustments” to make its various models agree. See TAR for 87 references to “flux adjustment”, and see 4AR for its excuse, condemnation, and abandonment. 4AR p. 117. ”
End of excerpt.
In other words, there is agreement between sites because they were ADJUSTED, just like the temperature records.
Now let us look at the “pristine site” Mauna Loa.
1. Volcano outgassing
2. Land-based photosynthesis
3. Ocean-based photosynthesis
4. Diurnal warming/cooling of the sea surface, as well as longer-term cycles, and their effect on CO2 – not to mention calm vs. turbulent seas (CO2 absorption rate depends on surface area)
5. Soil microbes
6. Rain: “Promotion effects of falling droplets on carbon dioxide absorption across the air-water interface of the ocean” – In addition to CO2 transfer by impinging raindrops, there is CO2 absorption during the fall of raindrops. CO2 absorption by rain alone is going to keep the CO2 in the atmosphere from ever being uniform.
If you go to Barrow, there are microbes and the oceans mucking up the works there too.
Temperature dependence of metabolic rates for microbial growth, maintenance, and survival
Here is Becks information from Barrow:
Date        CO2 ppm   Latitude   Longitude   Author       Location
1947.7500   407.9     71.00      -156.80     Scholander   Barrow
1947.8334   420.6     71.00      -156.80     Scholander   Barrow
1947.9166   412.1     71.00      -156.80     Scholander   Barrow
1948.0000   385.7     71.00      -156.80     Scholander   Barrow
1948.0834   424.4     71.00      -156.80     Scholander   Barrow
1948.1666   452.3     71.00      -156.80     Scholander   Barrow
1948.2500   448.3     71.00      -156.80     Scholander   Barrow
1948.3334   429.3     71.00      -156.80     Scholander   Barrow
1948.4166   394.3     71.00      -156.80     Scholander   Barrow
1948.5000   386.7     71.00      -156.80     Scholander   Barrow
1948.5834   398.3     71.00      -156.80     Scholander   Barrow
1948.6667   414.5     71.00      -156.80     Scholander   Barrow
1948.9166   500.0     71.00      -156.80     Scholander   Barrow
These data must not be used for commercial purposes or gain in any way, you should observe the conventions of academic citation in a version of the following form: [Ernst-Georg Beck, real history of CO2 gas analysis, http://www.biomind.de/realCO2/data.htm ]
Scholander got more than a 100 ppm swing at Barrow over a year’s time. This type of variation makes more sense to me, because I see CO2 in the atmosphere in terms of a huge mixing vessel, with some mixmen adding ingredients, others taking ingredients out, and a third group haphazardly flipping the switch on the mixer blades. If I took ten samples in different locations under these conditions, there is no way I would expect close agreement between the samples.
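The swing Gail cites can be read straight off the listed values:

```python
barrow_ppm = [407.9, 420.6, 412.1, 385.7, 424.4, 452.3, 448.3,
              429.3, 394.3, 386.7, 398.3, 414.5, 500.0]

swing = max(barrow_ppm) - min(barrow_ppm)
print(round(swing, 1))  # 114.3 ppm over roughly a year
```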

Z
June 8, 2010 5:32 pm

Is there a flight-path over Mauna Loa?

Steve Fitzpatrick
June 8, 2010 6:11 pm

Willis,
“At the moment I have time because I’m retired, although I could easily be called out of retirement by a great job offer. Or by increasing hunger.”
Well, that certainly answers my question. Keep up the good work, but I hope you don’t get too hungry.

Keith Minto
June 8, 2010 6:16 pm

Willis Eschenbach says:
June 8, 2010 at 10:3

I’ve seen the experiment with pouring CO2 before. But Brownian motion and turbulence keeps it well stirred in the atmosphere. Otherwise we’d end up with all the water vapor (lighter than air) way up in the stratosphere, and all the CO2 (heavier than air) in a layer next to the surface. Doesn’t happen that way.

But we have a phase change in one but not (at least on this planet) the other.

anna v
June 8, 2010 9:04 pm

John Finn says:
June 8, 2010 at 1:37 pm
We apparently have a situation whereby
1. All 8 ice core records are wrong – by exactly the same amount.
2. All CO2 measurements from Mauna Loa and dozens of other sites around the world are also wrong – again by remarkably similar amounts.
3. The AIRS satellite data from the mid-troposphere is also wrong – simply because it agrees with ML and other surface based observations.

Who has this situation? Maybe you are fast reading and not getting the point.
The point is that the problem is three dimensional and all the Keeling measurements you think I am saying are wrong are two dimensional extrapolated to three dimensions by the assumption of “well mixed” which is wrong in a gravitational field when the molecules have different atomic weights. The gases are mixed, because of turbulence but how much mixed is something that has to be measured experimentally not assumed by hand waving “mixed”. They could be absolutely correct for that altitude ( for all the objections raised ) and still not reflective of world data .
The ice core data are obviously integrators in slices as large as the fig1 slice and are in locations that have very few sources of CO2 and are measuring the wind born fraction anyway. Creating a hokey stick by joining two different method and altitude measurements is a No No, until there is experimental evidence from the rest of the altitudes and the rest of the longitudes and heights .
The japanese measurements that average over the column of air, the stomata data and Beck’s compilations give more details than the ice cores and imply that observed CO2 measurements are within the natural variations and not unprecedented.

anna v
June 8, 2010 9:38 pm

Gail Combs says:
June 8, 2010 at 4:02 pm
Thanks for the chemist POV.
I have been looking at this “well mixed” from a physicist POV and it seems basic science agrees 🙂 in the principles.

wayne
June 8, 2010 11:22 pm

Keith Minto
anna v
I spent an hour Monday trying to find the CO2 vs. altitude data you both seek. I read a paper months ago on flights by the U.S.A.F. around 1969-79 measuring CO2 up to 60 km, if I remember correctly, and can’t seem to relocate it. The per-altitude-band data is consistent with the gradients. Hope you will be able to find it; it’s there somewhere, or was.

Keith Minto
June 9, 2010 1:11 am

Wayne,
I could not see anything worthwhile; we desperately need a re-launch of the OCO. http://www.timesonline.co.uk/tol/news/world/us_and_americas/article5796656.ece

June 9, 2010 1:15 am

Willis Eschenbach says:
June 8, 2010 at 3:54 pm
“but how does that actually physically happen in the real world, where that physical setup is not happening?”

No, it’s the simplest version of the physical setup. A gas diffusing in a uniform semi-infinite medium. Do you have a better one? Which gives exponential decay?
It’s just standard diffusion theory – the solutions are set out in Carslaw and Jaeger’s “Conduction of heat in solids”, for example.
“The exponential decay has theoretical underpinnings (le Chatelier’s Principle). Does such a thing exist for your style of decay?”
Yes, it’s the Green’s function for diffusion in a semi-infinite region, with concentration prescribed at the surface. There’s a downloadable textbook on heat transfer by Lienhard here, and you’ll find a corresponding formula at Eq. 5.54. This is for a sustained temp rise, with power t^-1/2; you have to differentiate this to get the Green’s function.
As far as theoretical underpinning goes, you could have cited Newton’s Law of Cooling (not one of his better ones). But, as Wiki says:
“This form of heat loss principle is sometimes not very precise; an accurate formulation may require analysis of heat flow, based on the (transient) heat transfer equation in a nonhomogeneous, or else poorly conductive, medium.”
Heat transfer and diffusion of dissolved substances follow much the same rules.
As for why the Bern model approximates with exponentials, I think it’s just for the mechanics of convolution. In general you have to work out a summation for each point, which some people think is hard. But exponentials are especially easy – just a recurrence relation.
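Nick’s power-law point can be illustrated numerically. A sketch; the 50-year time constant chosen for the comparison exponential is arbitrary:

```python
import math

# For a pulse at the surface of a semi-infinite diffusive medium, the
# surface response falls off as a power law, t**-0.5, not exponentially.
def diffusive_tail(t):
    return t ** -0.5

def exponential_tail(t, tau=50.0):   # single-box exponential, tau = 50 "years"
    return math.exp(-t / tau)

# After 1000 "years" the power law retains vastly more of the pulse:
print(diffusive_tail(1000.0))    # ~0.032
print(exponential_tail(1000.0))  # ~2e-9
```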

Slioch
June 9, 2010 1:37 am

anna v says:
June 8, 2010 at 9:04 pm
“gases are mixed, because of turbulence”
I think you confuse kinetics with thermodynamics. From thermodynamics (i.e. the application of ΔG = ΔH − TΔS) it can be shown that even the EQUILIBRIUM distribution of the gases in the atmosphere, under the influence of gravity, would be such that there would be very little difference from the top to the bottom of the atmosphere; i.e. the proportion of CO2 molecules at ground level would be only very slightly greater than high up in the atmosphere. That is because of the importance of TΔS (the entropy factor) in the above equation. IF only enthalpy were involved, then, in the absence of turbulence (i.e. at equilibrium), the atmosphere would be a series of layers with the heavier molecules, like CO2, at the bottom (since in this case the enthalpy factor is concerned with gravitational potential energy): but because of the entropy factor that is indubitably not the case – entropy trumps enthalpy.
So thermodynamics dictates that the gases in the atmosphere should be well mixed at equilibrium, even in the absence of turbulence. Turbulence does two things: it increases the rate of mixing, and it produces a result (a well-mixed atmosphere) which is, by coincidence, very close to the thermodynamic equilibrium position.
An example of where turbulence can produce a result different from the thermodynamic equilibrium is provided by shaking a mixture of oil and water. The equilibrium position is two layers; turbulence produces a mixture without layers.
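The no-layering point can be roughly quantified with the isothermal barometric formula. A sketch with assumed values (T = 250 K is an arbitrary tropospheric figure): even if each gas settled into its own gravitational distribution, the CO2 fraction would thin by only a few percent per kilometer, a gentle gradient rather than a CO2 layer at the surface.

```python
import math

R, g, T = 8.314, 9.81, 250.0   # J/(mol K), m/s^2, assumed tropospheric temperature

def scale_height_km(molar_mass_g_mol):
    """Isothermal barometric scale height for a single gas."""
    return R * T / ((molar_mass_g_mol / 1000.0) * g) / 1000.0

H_n2  = scale_height_km(28.0)   # ~7.6 km
H_co2 = scale_height_km(44.0)   # ~4.8 km

def co2_n2_ratio(z_km):
    """CO2/N2 mixing ratio at altitude z, relative to the surface, if each
    gas followed its own barometric distribution (no turbulent mixing)."""
    return math.exp(-z_km / H_co2) / math.exp(-z_km / H_n2)

print(co2_n2_ratio(1.0))   # ~0.93: a few percent per km, not a surface layer
```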

Malaga View
June 9, 2010 2:35 am

anna v says:
June 8, 2010 at 9:04 pm
The point is that the problem is three dimensional and all the Keeling measurements you think I am saying are wrong are two dimensional extrapolated to three dimensions by the assumption of “well mixed” which is wrong in a gravitational field when the molecules have different atomic weights. The gases are mixed, because of turbulence but how much mixed is something that has to be measured experimentally not assumed by hand waving “mixed”. They could be absolutely correct for that altitude ( for all the objections raised ) and still not reflective of world data .

Have you looked at pages three and four in
http://www.biokurs.de/treibhaus/CO2_versus_windspeed-review-1-FM.pdf
Page 3:

Meanwhile vertical profiles of the atmospheric CO2 concentration are available at many environments; they usually show different mixing ratios, with location having a much greater influence than altitude.
The North Hemisphere (NH) mean near ground mixing ratio shown in fig.2B differs by only 1.13 ppm from the background level above 4 km altitude. Extrapolation to ground level results in a 2.56 ppm difference. Large seasonal variations of the order of 30 ppm are typical at continental environments e.g. Surgut (SUR, Wetland, Siberia); at marine locations (e.g. Cape Grim Baseline Air Pollution Station, Bass Strait, Cape Grim, Australia) the variations around the local background level are small.

Page 4:

Fig 2: Vertical CO2 profiles from 12 global stations derived from flask samples collected from aircraft during midday with records extending over periods from 4 to 27 years (Stephens, 2007).

But the problems with CO2 measurements are multiple:
Location / Environment issues
Altitude
Daily cycles
Seasonal cycles
Wind Speed
Wind speed seems to be the biggest fly in the ointment, as it seriously affects the level of CO2 mixing…
It would be very interesting to see the virgin MLO raw data… to say the least!!!

John Finn
June 9, 2010 3:00 am

Steve Fitzpatrick says:
June 8, 2010 at 3:30 pm
Richard S Courtney says:
June 8, 2010 at 2:20 pm
“The simplest and most probable explanation is that the data agree because they have been adjusted such that they agree.”
If you really believe that, then there is not much more to discuss… good luck, I wish you well.

Richard’s statement makes this whole debate pointless. Steve’s response sums it up. Eventually, no matter what evidence is presented, however compelling, the last resort will always be that the data have been adjusted.
Thanks for your efforts, Willis, but it seems that multiple data sources have supposedly been adjusted to produce some sort of worldwide conspiracy. Apparently only Beck’s chaotic measurements can be trusted. I’m seriously beginning to question where I’m positioned in the AGW debate.

June 9, 2010 3:01 am

Steve Fitzpatrick says:
June 8, 2010 at 8:34 am
“There is little reason to think that a diffusion-based model is reasonable.”

It’s where you have to start. It is diffusive. What realistic model do you think yields exponential decay?
“The Bern model appears to ignore the importance of thermohaline circulation, which is (I think) mainly responsible for net CO2 absorption.”
I understood Bern is empirical, so I don’t think it breaks down by factors. But I think net CO2 absorption involves the temperature difference, while the actual circulation has an effect on a much longer timescale.
“The almost constant Argo ocean heat content since ~2004 – 2005 simultaneously casts doubt on long ocean heat lag values and on the Bern model”
That’s an interesting one – if heat uptake by the ocean is slow, then so would be CO2. I’m still waiting for the Argo story to settle.
I’m not familiar with the implied downward mixing rates in the Bern model.

anna v
June 9, 2010 4:12 am

Slioch says:
June 9, 2010 at 1:37 am
anna v says:
June 8, 2010 at 9:04 pm
“gases are mixed, because of turbulence”
I think you confuse kinetics with thermodynamics. From thermodynamics (ie. the application of delG = delH -TdelS) it can be shown that even the EQUILIBRIUM distribution of the gases in the atmosphere, under the influence of gravity, would be such that there would be very little difference from the top to the bottom of the atmosphere,

I will check this, but if true, why is there an ozone layer? Why does the ozone not mix well and differ very little from stratosphere to surface? It is the opposite problem: ozone is created up high, whereas CO2 is created at the surface.
It varies by a factor of 2 over the globe, in the funny units that compress the column.
http://en.wikipedia.org/wiki/File:Future_ozone_layer_concentrations.jpg
We need experimental evidence of this well-mixed business, and all independent observations say there is no such thorough mixing. Maybe what you call a small percentage difference is of the order of 100 ppm.

tonyb
Editor
June 9, 2010 5:15 am

John Finn said
‘Thanks for your efforts, Willis, but it seems that multiple data sources have been adjusted to produce some sort of worldwide conspiracy. Apparently only Beck’s chaotic measurements can be trusted. I’m seriously beginning to question where I’m positioned in the AGW debate.’
These are not ‘Beck’s chaotic measurements.’ They were taken in thousands of locations by hundreds of scientists, many of them highly respected, at a time when the scientific method was considered meaningful, and within a field of science that had been practised since 1830 and over its 120-year existence had achieved a real degree of knowledge and accuracy.
That does not mean to say that ALL the measurements are by any means correct but equally it does not mean that ALL of them are incorrect. An independent audit would put the matter to bed. I certainly don’t claim to know definitively either way.
Tonyb

tonyb
Editor
June 9, 2010 5:20 am

Anna V
I would particularly value your comments on an earlier post I made, repeated here for your convenience. Do you think this theoretical 80ppm is possible?
“I linked to a graph and asked a question;
http://i43.tinypic.com/a4wiu8.png
So that I can properly understand Phils helpful reply to me at 6.38 am I confirm he said;
“The most recent evaluation of this in a Nature paper came out with a median value of ~8 ppm CO2/ºC (upper limit ~20).” (per tenth of a degree centigrade warming of the oceans)
So do I correctly understand that IF there were a general warming of the oceans by around 1 degree C, it would outgas the equivalent of 80 ppm (presumably a similar cooling would have the opposite effect)?
This amount is not far off the 80 ppm ‘outlier’ that Beck recorded in the 1940s. The world certainly warmed from 1920 to 1940, so a considerable amount of outgassing should theoretically have happened. I don’t know if most of it disappeared back into the oceans when it cooled again or went into plant growth.”
tonyb

Smokey
June 9, 2010 5:30 am

Tonyb is right, the 90,000+ CO2 measurements that Beck recorded were taken with great care by true scientists who did it for knowledge, not grant money.
Maybe some readings were influenced by local CO2 sources. But it is hard to believe that thousands of measurements taken on very sparsely populated, isolated sea coasts, and in mid-ocean transits between continents, on the windward side of ships traveling across the Arctic, Beaufort, South Pacific, Antarctic and Atlantic oceans, were all in error.

Fred H. Haynie
June 9, 2010 6:03 am

To Slioch,
I think you are a little mixed up on your thermodynamics and kinetics. Gravity is a force which tends to separate heavy from light. It is included in the free energy term and the entropy term but not the enthalpy term. If there were no turbulence, gravity and Brownian motion would equilibrate to create a vertical concentration gradient. The lighter oil goes to the top.

Malaga View
June 9, 2010 6:08 am

tonyb says:
June 9, 2010 at 5:20 am
So do I correctly understand that IF there were a general warming of the oceans by around 1 Degree C that it would outgass the equivalent of 80ppm (presumably a similar cooling would have the opposite effect.)

I think you are on the right track…
As Beck states:
The main globally effective controllers for CO2 flux in the lithosphere/atmosphere system are the oceans and the biomass.

http://www.biomind.de/realCO2/realCO2-1.htm

Spector
June 9, 2010 6:16 am

I believe that the data presented in figure 1 of this article, if valid, would tend to support Dr. Mann’s hypothesis that overall global temperatures remained constant from Y1000 to Y1800 and that the Medieval Warm Period and Little Ice Age were probably local North Atlantic regional weather anomalies.
It is my understanding that Henry’s Law causes CO2 to be out-gassed from the ocean during warm periods and absorbed when the ocean cools. Henry’s Law is, I believe, a linear, one-for-one relationship, whereas the CO2 greenhouse effect is a logarithmic, one-per-doubling relationship. I believe critics have suggested that thermal out-gassing and re-absorption is the primary explanation for the huge temperature-lagging CO2 fluctuations depicted in “An Inconvenient Truth.”
I find it hard to believe that all this data might have been ‘concocted’ just to support Dr. Mann’s hypothesis, but there may be uncompensated natural processes that degrade or smear out this information over long periods of time. Perhaps the data derived from gases trapped in 1000-year old ice is no more reliable as a temperature indicator than tree-ring data. In any case, I think it’s best to present this data rather than ‘hide’ it.
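The linear-versus-logarithmic contrast in the comment above can be made concrete. The sketch below uses the widely cited simplified forcing expression, roughly 5.35 ln(C/C0) W/m²; the numbers are illustrative and make no claim about climate sensitivity:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Widely used simplified CO2 radiative forcing approximation, in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Logarithmic: every doubling adds the same forcing increment, regardless of
# the starting level, whereas Henry's-law outgassing is roughly linear in
# temperature (a similar ppm shift per degree at any level).
first_doubling = co2_forcing(560) - co2_forcing(280)     # 280 -> 560 ppm
second_doubling = co2_forcing(1120) - co2_forcing(560)   # 560 -> 1120 ppm
print(round(first_doubling, 2), round(second_doubling, 2))  # both ~3.71
```

So each doubling yields the same increment (about 3.7 W/m²), which is the “one-per-doubling” behaviour the comment describes.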

Phil.
June 9, 2010 6:17 am

tonyb says:
June 9, 2010 at 5:20 am
Anna V
I would particularly value your comments on an earlier post I made, repeated here for your convenience. Do you think this theoretical 80ppm is possible?
“I linked to a graph and asked a question;
http://i43.tinypic.com/a4wiu8.png
So that I can properly understand Phils helpful reply to me at 6.38 am I confirm he said;
“The most recent evaluation of this in a Nature paper came out with a median value of ~8 ppm CO2/ºC (upper limit ~20).”

That is correct, note that it is /ºC not per tenth of a degree.
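For scale, the ~8 ppm/°C figure is roughly what a bare Henry’s-law estimate gives. The sketch below applies a van ’t Hoff temperature correction to CO2 solubility; the constants are standard textbook values quoted from memory, and the calculation ignores carbonate chemistry and the Revelle factor, so it is only an order-of-magnitude check, not the Nature paper’s method:

```python
import math

def henry_kh(T_kelvin, kh_298=0.034, c=2400.0):
    """CO2 solubility in water, mol/(L atm), van 't Hoff scaling from 25 C."""
    return kh_298 * math.exp(c * (1.0 / T_kelvin - 1.0 / 298.15))

T0, T1 = 288.15, 289.15              # a 15 C ocean surface warming by 1 C
ratio = henry_kh(T0) / henry_kh(T1)  # > 1: warmer water holds less CO2
# At fixed dissolved CO2, the equilibrium pCO2 scales inversely with solubility:
delta_ppm = 280.0 * (ratio - 1.0)
print(round(delta_ppm, 1))           # on the order of 8 ppm per degree C
```

A whole degree of warming thus shifts the equilibrium by only a handful of ppm, which is why the per-degree (not per-tenth-of-a-degree) reading matters for the 80 ppm question.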

tonyb
Editor
June 9, 2010 6:37 am

Thank you Smokey
I find it extraordinary that the tens of thousands of measurements taken by skilled and diligent scientists over a period of 120 years of improving technology could ALL be wrong. These were taken in a huge variety of places, many far away from contamination, a concept they understood well and enshrined in law in the 1889 Factories Act.
Tonyb

Fred H. Haynie
June 9, 2010 6:38 am

To Anna v,
I suspect both the Mauna Loa and South Pole Scripps CO2 data have been adjusted to sea level to compensate for the gravitational effect. Mauna Loa and Cape Kumukahi are almost identical. The actual measured differences may have been used to make the adjustments. In any case, I think the Scripps data is our best estimate of natural background levels into which plumes of both natural and anthropogenic emissions e-fold. Pick your fudge factor to fit this model.

Phil.
June 9, 2010 6:39 am

anna v says:
June 9, 2010 at 4:12 am
Slioch says:
June 9, 2010 at 1:37 am
anna v says:
June 8, 2010 at 9:04 pm
I will check this, but if true, why is there an ozone level? Why does not the ozon mix well and differ very little from stratosphere to surface? It is the opposite problem, created up high, whereas CO2 is created at the surface.

Because the O3 is created locally in the stratosphere as a result of the photodissociation of O2 by UV. The relevant UV frequencies are completely absorbed in the stratosphere, so the O3 precursors are not produced at lower altitudes. The lifetime of the O3 molecule in the atmosphere depends on altitude; it is an extremely reactive molecule, with a lifetime in the troposphere of the order of hours to days, and even in the stratosphere it lasts only about a week. So it is necessarily confined to the region where it is created. The timescale for atmospheric distribution around the world is a few years, as the following graph clearly shows:
http://en.wikipedia.org/wiki/File:Radiocarbon_bomb_spike.svg

anna v
June 9, 2010 7:00 am

Hi Tony
I had noted your comment, and I thought Phil. would reply.
From the abstract in his refered Nature article:http://www.nature.com/nature/journal/v463/n7280/full/nature08769.html#B7
Here we quantify the median γ as 7.7 p.p.m.v. CO2 per °C warming, with a likely range of 1.7–21.4 p.p.m.v. CO2 per °C.
Maybe you were aware of the quoted also in that abstract:
Our results are incompatibly lower (P < 0.05) than recent pre-industrial empirical estimates of ~40 p.p.m.v. CO2 per °C (refs 6, 7)
So I would question your quote of Phil’s numbers:
“The most recent evaluation of this in a Nature paper came out with a median value of ~8 ppm CO2/ºC (upper limit ~20).” (per tenth of a degree centigrade warming of the oceans) Since you are careful with your quotes, maybe you could explain this “tenth of a degree” instead of “degree”.
I do not have access to Nature, and assume you do and that the “per tenth of a degree centigrade” is what you found in the main paper, but it does not seem probable from the abstract.
If it is per tenth of a degree rather than per degree, then your figure of 80 ppm is fine, and so are the conclusions you draw.
Look at this:
http://brneurosci.org/co2.html
When all these equations are put together, we can calculate that an increase of 0.6°C would increase atmospheric CO2 from 288 to 292.4 ppmv, an increase of about 4.4 ppm, far short of the 80.4 needed to produce today’s levels.

Phil.
June 9, 2010 7:11 am

Willis Eschenbach says:
June 9, 2010 at 1:52 am
Finally, the implication that I have done what Mann did is childish, uninformed, nasty and untrue. I have made it very clear where each dataset came from, and have cited where you can go get them. I have painted the Mauna Loa data bright orange to keep people from thinking that they were the same as the other data. I have not averaged the data as Mann did. I am just showing exactly what the observations show. You don’t like it? Tough. You think it is wrong? Fine, but take your snide accusations elsewhere. I’m showing what is known about the history of CO2. It may be wrong, but your claim that I am somehow trying to deceive people as Mann did is baseless, desperate, and unpleasant. If you can show that there is something wrong with the eight different ice core records that I’ve shown, bring it on. That’s science. If not, go somewhere else to make your puerile claims.

I agree that the accusations that you are trying to deceive by the graphic you show are indeed “baseless, desperate and unpleasant”. However, you did do exactly what Mann did; in his case he used blue and red to distinguish between proxy and direct measurement:
http://en.wikipedia.org/wiki/File:Hockey_stick_chart_ipcc_large.jpg
Mann’s ‘trick’ is not what some say it is!

anna v
June 9, 2010 7:22 am

Slioch says:
June 9, 2010 at 1:37 am
OK, I looked up my thermodynamics book (Sears, 1959, when I took the course), and though
G = H − TS,
for infinitesimal processes
dG = dH − T dS − S dT.
In the atmosphere the temperature is not constant in the column.
In any case, I do not know how the equilibrium distribution of masses in the height of the columns would be derived from these equations. Maybe you have a link of the derivation?

anna v
June 9, 2010 7:24 am

Phil. says:
June 9, 2010 at 6:39 am
And are you not saying it is not well mixed, in other words?

Steve Fitzpatrick
June 9, 2010 7:27 am

Nick Stokes says:
June 9, 2010 at 3:01 am
“I understood Bern is empirical, so I don’t think it breaks down by factors.”
I believe it incorporates an earlier “four-box” ocean mixing model (which describes both CO2 and heat fluxes) as well as some modeling of land based CO2 sinks. The ocean model seems OK in general form, but as usual, the devil is in the details. The big uncertainties are 1) the rate of physical down-mixing through the thermocline, which is said to be based mostly on “tracer studies”, and 2) assumed overall circulation rates. When you dig a little deeper, you find that there’s lots of apparently conflicting data, and the down-mixing rate turns out to be (apparently) based on yet another model…. all seems very uncertain to me. The total circulation rate and the efficiency of CO2 absorption by cold (sinking) surface waters are critically important at all time scales, not just at long scales.
“That’s an interesting one – if heat uptake by the ocean is slow, then so would be CO2.”
Yes, that would be true if the Bern model accurately describes the ocean uptake, but not true if CO2 uptake is primarily driven by thermohaline circulation rather than down-mixing in the thermocline. Some climate scientists are unhappy with the ARGO data because it shows that the heat uptake isn’t as fast as expected, and so calls into question the ocean circulation models and rates of ocean heat accumulation that are needed (long ocean lags) for high climate sensitivity to be correct. They are also not happy with published reports showing no decline in ocean CO2 uptake capacity, since this also casts some doubt on ocean circulation models.
There appears to be a bit of arm-waving going on now, with at least one model group saying that even though the true sensitivity has to be near 3 C or 3.5 C for a doubling, the total warming up to the time of that doubling is more like 1.5 C, and that it will then take several hundred years at a constant doubled CO2 level to approach the “ultimate response” of 3 C or more. In other words, they are saying that after an initial 1.5 C warming over ~80-90 years, an additional 1.5 C warming will take place over 300+ years, at a constant atmospheric CO2 level of 560 ppm! In light of the available fossil fuel reserves, it would appear that maintaining 560 ppm for 300+ years is very unlikely, so the “ultimate response” would almost certainly never be reached.
I think we can safely bet over the next few years on more arm-waving and a gradual decline in model projections of rapid temperature increase.

Phil.
June 9, 2010 7:46 am

Fred H. Haynie says:
June 9, 2010 at 6:03 am
To Slioch,
I think you are a little mixed up on your thermodynamics and kinetics. Gravity is a force which tends to separate heavy from light. It is included in the free energy term and the entropy term but not the enthalpy term. If there were no turbulence, gravity and Brownian movement would equilibrate to create a vertical concentration gradiant. The lighter oil goes to the top.

Our atmosphere is a gas not a liquid and is mixed by turbulent diffusion, separation of gas molecules by molecular weight doesn’t occur until the mean free path becomes long compared with the mixing length. This doesn’t occur until an altitude of ~100km, below that the atmosphere is referred to as the ‘homosphere’, reactive gases such as O, O3 etc which are created locally and have insufficient lifetime to mix can form layers in the homosphere, i.e. the ozone layer.
Most of the sources and sinks for CO2 are at or near the surface so [CO2] there depends strongly on the proximity of such sources and sinks. Above the mixing layer the CO2 becomes well mixed.

Phil.
June 9, 2010 7:48 am

anna v says:
June 9, 2010 at 7:00 am
Hi Tony
I had noted your comment, and I thought Phil. would reply.

I did this morning (I do sleep at night). Also bear in mind that my replies are delayed and so appear higher up in the thread.

Steve Fitzpatrick
June 9, 2010 7:52 am

Nick Stokes,
“It’s where you have to start. It is diffusive. What realistic model do you think yields exponential decay?”
I forgot: If CO2 absorption by the ocean is dominated by thermohaline circulation, then that “model” does in fact hind cast the historical trend very well, and does project an exponential decay if CO2 emissions were to stop immediately, as Willis suggested. The diffusive model predicts a very different response to a sudden stop in CO2 emissions, much more ocean heat accumulation than ARGO data suggests, and predicts a gradually declining ocean CO2 uptake capacity…. which does not appear to be happening. The Bern model appears to be in clear conflict with the data.

tonyb
Editor
June 9, 2010 7:59 am

Anna
My question, which Phil answered, was posed on the basis of the potential ppm increase per tenth of a degree C rise in temperature, so I assumed that was what the reply referred to. Clearly if the amount quoted is per whole degree C (which seems very small), it is difficult to see how there could be sufficient outgassing in a short time to support the 80 ppm rise that we see in Beck’s figures.
I will reread all the material and try to cross-reference it elsewhere. Thanks for your help
tonyb

anna v
June 9, 2010 7:59 am

Willis Eschenbach says:
June 9, 2010 at 2:12 am
Anna, I truly don’t understand your point. Suppose for the moment that your idea is true, and that the CO2 is not “well mixed”, whatever that might mean to you.
So what? What does that have to do with the question of whether humans are responsible for the recent increase in CO2, well mixed or not?

The association is made that, since the CO2 curve (your figure 1) is going up in an unprecedented way, the cause is anthropogenic. You draw this conclusion yourself.
If CO2 is not well mixed, and there are equally high values over the globe over the last two centuries, it has no meaning to take the measurements at one longitude in the middle of the ocean, at 2000 meters, and call them “global CO2”. To call it global CO2 you would need to integrate over the globe, and if that curve showed an unprecedented rise there would be an argument for the anthropogenic part.
So even if the measurements of the Keeling curves are not doctored to show this “unprecedented” rise, still, unless a full global picture is measured we cannot know. We have indications from Beck’s compilations and from the Japanese preliminaries in any case.
There is variation going down in latitude too, as the plots of F. H. Haynie show.
I think “well mixed” is the linchpin on which “anthropogenic” depends, so it is not a red herring. It is how they integrate from a few locations to all over the world.
Fred H. Haynie says:
June 9, 2010 at 6:38 am
I suspect both the Mauna Loa and Southpole Scripps CO2 data have been adjusted to sea level to compensate for the gravitational effect.
It is quite possible considering the mentality of the “scientists” working in the climate field: “this is how nature is, better find it or else”.

tonyb
Editor
June 9, 2010 8:01 am

Phil
Just saw your clarification above Anna’s reply, thanks.
Tonyb

Murray Duffin
June 9, 2010 8:08 am

Willis, wrt your 8 sets of data – see Lundgardh, Van Slyke, Scholander and Law Dome DE08-2. Given the long closing time of the ice, DE08-2 largely represents concentration in air, not ice, and it continues a pretty smooth curve with the other three sources that is about 15 ppm above the ice core data in the first year of your curve. You can extrapolate back farther using the high-confidence-level Beck data, and you have air 20 to 30 ppm above the ice core in the early to mid 19th century. I think we can safely say that the ice core loses some atmospheric concentration through at least three different mechanisms as the ice closes and as samples are taken and analyzed. Hence the hockey stick is not quite so bad as presented, with concentration up about 70 ppm rather than 100 ppm since preindustrial times.
Certainly with 70 to 180 years for the ice to close (depending on location and rate of snow accumulation), short-term peaks get smoothed (clearly visible in the ice at WW2). It is not unlikely that depressurization at the time of coring leads to a fairly constant level near 270 ppm, eliminating longer-term peaks. And there is some contribution from fractionation during closing that reduces firn-bottom concentration at least a bit.
I strongly suspect that ice core data is worthless in terms of displaying a useful historic record, but that’s just my opinion. Murray

anna v
June 9, 2010 8:51 am

Phil. says:
June 9, 2010 at 7:46 am
I really would like to see some measurements and calculations of this homosphere business. Do you have a link? To me it sounds like an assumption. Mixing length? How is that measured or calculated?
The atmosphere is not always turbulent, and its turbulence is not uniform. There are circulation patterns, complete lulls, etc., while the sources and sinks of CO2 keep going on and on.

Smokey
June 9, 2010 8:58 am

Phil. says @7:11 am:
“Mann’s ‘trick’ is not what some say it is!”
Mann’s trick is not what you say it is, true. But that is a red herring. He used a couple of other tricks to misrepresent.
Spector says @6:16 am:
“I believe that the data presented in figure 1 of this article, if valid, would tend to support the Dr. Mann’s hypothesis that overall global temperatures have remained constant from Y1000 to Y1800 and that the medieval warm period and little ice-age periods were probably local North Atlantic regional weather anomalies.”
Every once in a while someone comes back and tries to flog the dead horse of a regional, rather than a global MWP.
Do a search of the WUWT archives using the keyword: MWP. You can get up to speed, but it may take a while; the amount of information is voluminous. Or you can use this interactive graphic to search the globe.

Bart
June 9, 2010 10:03 am

Murray Duffin says:
June 9, 2010 at 8:08 am
“I strongly suspect that ice core data is worthless in terms of displaying a useful historic record, but that’s just my opinion.”
I want to point out that, without the ice core data, the “smoking gun” fit in Willis’ Figure 4 is nothing more than an arbitrary scaling to fit one increasing slope to another.
To those who dwell on superficial resemblances, the inflections around 1950 in both the emissions estimates and the CO2 data are convincers. But, the inflection in the CO2 data is surely an artifact of the splice between proxy and direct measurement data, and has even less justification if the ice core data are bad. I have a strong suspicion that the inflection in the emissions data is no accident – it would not be difficult for the keepers of records to tweak the data to produce it, if one desired to do so, and if one knew where one wanted to place it.
My final word to Willis: I urge you to examine the requirements for consistency in your modeling which I advised at June 7, 2010 at 11:16 pm.

Phil.
June 9, 2010 10:06 am

anna v says:
June 9, 2010 at 8:51 am
Phil. says:
June 9, 2010 at 7:46 am
I really would like to see some measurements and calculations of this homosphere business. Do you have a link? To me it sounds like an assumption. Mixing length? How is that measured or calculated?

Any textbook on the physics of atmospheres should cover it, the following is well written but someone with a physics background such as yourself should be able to handle any of them.
“Chemistry of Atmospheres: An Introduction to the Chemistry of the Atmospheres of Earth, the Planets, and Their Satellites”, Richard P. Wayne, Oxford Science Publications.
The atmosphere is not always turbulent, and its turbulence is not uniform. There are circulation patterns, complete lulls, etc. etc. while the sources and sinks of CO2 keep going on and on .
Over the lifetime of the CO2 it’s plenty turbulent enough (don’t forget the diffusion part). Once you get out of the boundary layer the air is usually moving in any case.
In your living room with all fans, AC etc. switched off have someone open a bottle of perfume in the opposite corner of the room, how long is it before you smell it?
In the homosphere the N2/O2 ratio remains constant with altitude, also Ar (which has a MW of 40) has a constant mixing fraction with altitude. In contrast in the heterosphere molecules are sorted by mass.

June 9, 2010 10:23 am

Phil,
The air isn’t always mixing. CO2 will hug the surface on a clear, windless night when the surface is cooling by radiation. I know the difference between water and air and have worked with turbulent diffusion in my research. The Scripps data is considered well mixed because the daily flask readings are only used in the monthly averages when there is enough wind to produce turbulence.

Spector
June 9, 2010 10:31 am

RE: Smokey (June 9, 2010 at 8:58 am) “dead horse”
Figure 1, above, would seem to indicate that CO2 concentrations were flat, perhaps contained within a narrow 265 to 285 ppm band over the whole period from Y1000 to Y1750. If we accept this as true and valid data indicating that the average global climate was constant during the whole interval — I am not ready to do that yet — then we may be forced to admit that the regional climate horse is not as dead as we thought it was.

June 9, 2010 10:41 am

Phil said.
“Over the lifetime of the CO2 it’s plenty turbulent enough (don’t forget the diffusion part). Once you get out of the boundary layer the air is usually moving in any case.
In your living room with all fans, AC etc. switched off have someone open a bottle of perfume in the opposite corner of the room, how long is it before you smell it?
In the homosphere the N2/O2 ratio remains constant with altitude, also Ar (which has a MW of 40) has a constant mixing fraction with altitude. In contrast in the heterosphere molecules are sorted by mass.”
What do you think the half-life is of CO2 hugging a cooling ocean surface on a cold clear night? In the same way it concentrates in forest canopies, where the half-life can be a matter of hours. There could be a whole lot of CO2 that never gets out of this layer into turbulent conditions.

tonyb
Editor
June 9, 2010 11:59 am

Willis/Anna/Phil
Any idea what period this is over? I.e., would a constant 1 degree C rise take a year to outgas 8 ppm, or a month, or a week?
tonyb

Phil.
June 9, 2010 12:47 pm

Willis Eschenbach says:
June 9, 2010 at 12:39 pm
Spector says:
June 9, 2010 at 10:31 am
RE: Smokey (June 9, 2010 at 8:58 am) “dead horse”
“Figure 1, above, would seem to indicate that CO2 concentrations were flat, perhaps contained within a narrow 265 to 285 ppm band over the whole period from Y1000 to Y1750. If we accept this as true and valid data indicating that the average global climate was constant during the whole interval — I am not ready to do that yet — then we may be forced to admit that the regional climate horse is not as dead as we thought it was.”
Say what? This is global CO2 data, not global climate data. It says very little about the climate. We would expect a change in global background CO2 of ~ 8 ppmv/°C, which is smaller than the margin of error in the ice core measurements. So all we can conclude from this about the climate is … nothing.

In fact a range of ~20ppmv in the core implies a range of global temperature of ~3ºC over that timespan (hardly constant!) which is exactly the opposite of Smokey’s statement.

anna v
June 9, 2010 12:48 pm

tonyb says:
June 9, 2010 at 11:59 am

Willis/Anna/Phil
Any idea what period this is over i.e would it take a constant 1 degree C rise a year to outgas 8ppm, or a month or week?
tonyb

The value is the equilibrium value for that temperature of the ocean, at least according to the calculation I gave you a link to. In terms of time, it would be the same timescale over which the temperature difference is observed. Given the temperature of the ocean, the ppm figure is the equilibrium one.

anna v
June 9, 2010 12:55 pm

Phil. says:
June 9, 2010 at 10:06 am
Thanks, I will have to go to a library, if I do not find any links.

tonyb
Editor
June 9, 2010 12:55 pm

Anna
Just read your link; it’s very good. I believe sea temperatures are even more suspect than global temperatures, and taking a very limited number of measuring points and then averaging them does not give a realistic picture of what is happening at the ocean/air interchange.
For example, our piece of ocean (the English Channel), 200 yards from my house, will range from 7 degrees C in winter to (if we are lucky) around 20 C in a warm summer. So in theory it is outgassing madly from now on in, then will be sucking it all back in as it cools again in the autumn. But does that mean it is outgassing at 15 C on the way up yet absorbing at 15 C on the way back down? Is the huge variability of temperatures in mid-latitude oceans countered by the limited variability of temperature in tropical oceans, while at the same time the Arctic and Antarctic exactly counter each other, so that we can then come up with an ‘average’ ocean temperature that barely changes, and in consequence CO2 outgassing that is barely noticeable?
Surely the net result of all this is that temperatures have risen and fallen dramatically throughout our history with a supposedly constant level of CO2 at 280 ppm, and that this gas really doesn’t seem to have much to do with anything, let alone CAGW.
http://c3headlines.typepad.com/.a/6a010536b58035970c0120a7c87805970b-pi
Tonyb

Spector
June 9, 2010 1:03 pm

RE Willis Eschenbach: (June 9, 2010 at 11:50 am ) “You believe wrong..” — I hope so — “… The logical conclusion is not that Mann is correct. It is that CO2 has almost no effect on global temperature.”
The issue that bothers me here is not the effect of CO2 on temperature; rather it is the absolute absence of any effect of temperature on the observed atmospheric CO2 levels. During cold global intervals I would expect to see more CO2 dissolved in the ocean, and during warm global periods more CO2 in the atmosphere, because the ocean’s CO2 carrying capacity (solubility) is temperature dependent.

Ryan
June 9, 2010 1:10 pm

“During the ice age to interglacial transitions, on average a change of 7°C led to a doubling of CO2.”
We don’t know that. You are presenting that as fact when it is not. The ice core data is the only useful source of such information and it presents CO2 concentrations that are averaged over periods of hundreds of years. We don’t know what the natural variation in CO2 might be over much shorter periods.
Note that the Mauna Loa data shows a gradual, uninterrupted rise in CO2, even though the output of human CO2 is not smooth and is interrupted by recessions that significantly reduce energy consumption.

Bart
June 9, 2010 1:12 pm

Willis Eschenbach says:
June 9, 2010 at 11:52 am
“And my claim that the change in CO2 was not due to global temperature change was correct.”
Er,… I don’t think so.

anna v
June 9, 2010 1:18 pm

Googling “physics of the homosphere,” I found a book open on the net by Gerd W. Prölss.
It has the mathematics, it will take me some time to wade through. Have not found data yet.

Gail Combs
June 9, 2010 1:33 pm

anna v says:
June 8, 2010 at 9:38 pm
Thanks for the chemist POV.
I have been looking at this “well mixed” from a physicist POV, and it seems basic science agrees on the principles 🙂
________________________________________________________________________
“Well mixed” is a fallacy because rain (and fog and dew) is constantly removing CO2 from parts of the atmosphere. “Carbonic acid even appears as a normal occurrence in rain. As rainwater falls through the air, it absorbs carbon dioxide, producing carbonic acid. Thus, when it reaches the ground, it has a pH of about 5.5.”
One of the more interesting bits of info I noticed is the time line.
Mauna Loa CO2 measurements were started in 1959
Club of Rome: Founded in 1968 at David Rockefeller’s estate in Bellagio, Italy
Club of Rome member Henry Kissinger in 1970 states “Control oil and you control nations; control food and you control the people; control money and you control the world.”
The concept of ‘environmental sustainability’ is first brought to the general public’s attention in 1972 by the Club of Rome in their book entitled The Limits to Growth.
At the same time CoR affiliate Maurice Strong chaired the UN first Earth Summit:
“It is instructive to read Strong’s 1972 Stockholm speech and compare it with the issues of Earth Summit 1992. Strong warned urgently about global warming, the devastation of forests, the loss of biodiversity, polluted oceans, the population time bomb. Then as now, he invited to the conference the brand-new environmental NGOs [non-governmental organizations]: he gave them money to come; they were invited to raise hell at home. After Stockholm, environment issues became part of the administrative framework in Canada, the U.S., Britain, and Europe. “ Source
Twenty years later the CoR published The First Global Revolution in which they state:
“The common enemy of humanity is man.
In searching for a new enemy to unite us, we came up
with the idea that pollution, the threat of global warming,
water shortages, famine and the like would fit the bill. All these
dangers are caused by human intervention, and it is only through
changed attitudes and behavior that they can be overcome.
The real enemy then, is humanity itself.”

A listing of who is who in the Club of Rome will give you an idea of just how powerful this organization is. We have already seen the temperature data has been “manipulated” to support AGW. We have seen them try to rewrite history by removing the Little Ice Age and the Medieval Warm Period, so why the heck do people here suddenly believe the CO2 data, gathered by the US government, is pristine?
OK so I do not trust the US government, on the other hand I have caught the US gov’t lying too many times to ever trust it again.

Steve Fitzpatrick
June 9, 2010 1:44 pm

Willis,
“Ah, well. I can’t complain, I lit the fuse and I failed to run …”
True, but you did maybe make a tiny bit of progress with a few… was it worth the effort?

Malaga View
June 9, 2010 2:00 pm

A study: The temperature rise has caused the CO2 Increase, not the other way around
http://wattsupwiththat.com/2010/06/09/a-study-the-temperature-rise-has-caused-the-co2-increase-not-the-other-way-around/

Using two well accepted data sets, a simple model can be used to show that the rise in CO2 is a result of the temperature anomaly, not the other way around. This is the exact opposite of the IPCC model that claims that rising CO2 causes the temperature anomaly.

Reckon we can put the ice cores back to bed as not appropriate for splicing onto MLO data… and let’s draw a veil over the conclusion that the preponderance of evidence shows that humans are the main cause of the increase in atmospheric CO2.

Bart
June 9, 2010 2:25 pm

Willis Eschenbach says:
June 9, 2010 at 1:17 pm
“In either case, you are making AGW supporters like Joel Shore happy, which should give you some pause …”
I am sure Joel Shore is competent in whatever it is that is his area of speciality, but simply put, dynamics of continuous systems is clearly not his bailiwick. I do not really care what he has to say on the subject.

Gail Combs
June 9, 2010 5:41 pm

On the subject of a “well mixed” atmosphere:
“Scientists have found a temporary “chemical equator” that separates the heavily polluted air of the Northern Hemisphere from the cleaner air of the Southern Hemisphere over the Western Pacific — only it isn’t where they expected to find it.
…These boundaries, or chemical equators, can typically be found at a “wall” created by global air circulation patterns that separates Northern and Southern hemispheric air. Called the Intertropical Convergence Zone (ITCZ), it is a belt of low pressure that circles the Earth roughly at the equator…
But this schematic is an “over-simplification,” said Geraint Vaughan of the University of Manchester in England,…
Over parts of the Pacific Ocean, the clear band of the ITCZ visible over other oceans gives way to a “big blob of convection,” Vaughan told LiveScience. Around Northern Australia, this convection is dominated by the Australian-Indonesian monsoon (a reversal in the usual surface wind direction) in the Southern Hemisphere summer…
So, they used a specially-equipped plane to fly north of Darwin to “find some dirty air,” as Vaughan put it, when they happened upon a steep gradient in carbon monoxide levels — an indicator of a chemical equator of sorts. Carbon monoxide is a toxic gas found in polluted air and therefore more strongly associated with the Northern Hemisphere…
Earth’s Air Divided by Chemical Equator
This is where the “well mixed” idiocy comes from: An explanation of CO2’s thorough mixing in the atmosphere given on page 8 of the EPA’s Response to Public Comments, Volume 2, in the section “Response 2-8”:
“…turbulent mixing (e.g., through wind and convection) dominates the distribution of gases throughout the atmosphere (below 100 kilometers in altitude). The mixing of substances in a gas or fluid is only dependent on mass when the gas or fluid is perfectly still, or when the pressure of the gas is low enough that there is not much interaction between the molecules. Therefore, all long-lived gases become well-mixed at large distances from their sources or sinks…”
“Well mixed” was never based on sound science; it was just assumed because of the assumed long residence time of CO2. That is why the NASA scientist was surprised the CO2 distribution in the upper atmosphere was “lumpy” (his word, Willis).
“The real atmospheric CO2 residence time (lifetime) is only about 5 years,….
13-C/12-C isotope mass balance calculations show that IPCC’s atmospheric residence time of 50-200 years make the atmosphere too light (50% of its current CO2 mass) to fit its measured 13-C/12-C isotope ratio. This proves why IPCC’s wrong model creates its artificial 50% “missing sink”. IPCC’s 50% inexplicable “missing sink” of about 3 giga-tonnes carbon annually should have led all governments to reject IPCC’s model.
…. Callendar (1940, 1958) selected atmospheric CO2 data from the 19th and 20th centuries. Fonselius et al. (1956) showed that the raw data ranged randomly between about 250 and 550 ppmv (parts per million by volume) during this time period, but by selecting the data carefully Callendar was able to present a steadily rising trend from about 290 ppmv for the period 1866 – 1900, to 325 ppmv in 1956.
Callendar was strongly criticized by Slocum (1955), who pointed out a strong bias
in Callendar’s data selection method. Slocum pointed out that it was statistically
impossible to find a trend in the raw data set, and that the total data set showed a
constant average of about 335 ppmv over this period from the 19th to the 20th century.
Bray (1959) also criticized the selection method of Callendar, who rejected values 10%
or more different from the “general average”, and even more so when Callendar’s
“general average” was neither defined nor given….
Craig (1957) pointed out from the natural (by cosmic rays) radiocarbon (14-C) production rate that atmospheric CO2 is in active exchange with very large CO2 reservoirs in the ocean and biosphere.
During the same period atmospheric CO2 measurements were started near the top of the strongly CO2-emitting (e.g., Ryan, 1995) Hawaiian Mauna Loa volcano. The reason for the choice of location was that it should be far away from CO2-emitting industrial areas. At the Mauna Loa Observatory the measurements were taken with a new infra-red (IR) absorbing instrumental method, never validated versus the accurate wet chemical techniques. Critique has also been directed to the analytical methodology and sampling error problems (Jaworowski et al., 1992 a; and Segalstad, 1996, for further references), and the fact that the results of the measurements were “edited” (Bacastow et al., 1985); large portions of raw data were rejected, leaving just a small fraction of the raw data subjected to averaging techniques (Pales & Keeling, 1965)….”

Source: http://www.co2web.info/

dr.bill
June 9, 2010 7:19 pm

The issues raised by anna v and particularly by Gail Combs regarding CO2 levels and the mixing of CO2 in the atmosphere are somewhat removed from my area of day-to-day familiarity, but it seems increasingly difficult and imprudent to simply brush aside the points that they have made ( and supplemented with references that seem just as compelling as their logic). They have certainly given me the incentive to look more deeply into these matters. I find it particularly bothersome to note that so many scientists have apparently disregarded any measurements that fall outside of some range that they feel is the ‘right one’.
/dr.bill

AJ
June 9, 2010 7:46 pm

Nice work Willis!
I did something similar and found a half-life of ~35 yrs.
My regression model was:
c[x]=c[x-1]+ax+b-r(c[x-1]-280)
Where:
c[n] – atmospheric co2 in ppm for year n
a – slope of emissions in ppm
b – intercept of emissions in ppm
r – rate that excess co2 is absorbed
I used excel’s Solver add-in to find a, b, and r.
The regression also gave a year 2100 co2 concentration of ~600ppm.
Here’s a google version of the spreadsheet:
https://spreadsheets.google.com/ccc?key=0AiP3g3LokjjZdFlvZVdMT0dCTDR4OHozSzFYTzNQYXc&hl=en
Note that Google’s Solver won’t work on the non-linear model, but you can download to Excel.
Thanks, AJ
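AJ's discrete model above can be sketched in code. The parameter values in the example are illustrative guesses, not the values AJ's Solver run produced:

```python
import math

# Sketch of the discrete-time model quoted above,
#     c[x] = c[x-1] + a*x + b - r*(c[x-1] - 280),
# i.e. linear emissions a*x + b plus first-order uptake of the
# excess over a 280 ppm baseline.

def simulate(a, b, r, c0, years):
    """Iterate the model forward; returns the CO2 series in ppm."""
    series = [c0]
    for x in range(1, years + 1):
        prev = series[-1]
        series.append(prev + a * x + b - r * (prev - 280.0))
    return series

def half_life(r):
    """Half-life (years) of an excess pulse under uptake rate r with
    emissions switched off: solve (1 - r)**t = 0.5 for t."""
    return math.log(2.0) / -math.log(1.0 - r)

# For example, r = 0.02 per year corresponds to a half-life of ~34 years,
# in the neighbourhood of the ~35 yr quoted above.
```

In a real fit, a, b, and r would be chosen to minimize the squared error against the Mauna Loa series, which is what the Solver add-in does.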

Gail Combs
June 9, 2010 8:06 pm

Ok back on the topic of the Mauna Loa data.
(see quote at bottom)
THE deal breaker is: “At the Mauna Loa Observatory the measurements were taken [in the 1950’s] with a new infra-red (IR) absorbing instrumental method, never validated versus the accurate wet chemical techniques….”
The first FTIRs were not made commercially until the late 60’s, so the new infra-red (IR) absorbing instrument would have been a dispersive IR spectrophotometer, probably a Perkin Elmer. A dispersive IR spectrophotometer, especially one without a computer, is not a particularly good tool for precise measurements in the PPM range, in my experience. In the 1970’s my lab tried to develop an IR analytical method and gave up and went back to wet chemical methods because we could never get the precision and accuracy we needed. Back in the seventies a dispersive IR spectrophotometer was considered a qualitative tool used for identifying samples, not a quantitative tool for measuring the amounts of components in a sample.
Reading the history of the measurement of CO2 in “Carbon cycle modelling and the residence time of natural and anthropogenic atmospheric CO2: on the construction of the ‘Greenhouse Effect Global Warming’ dogma,” by Tom V. Segalstad of the Mineralogical-Geological Museum, University of Oslo, Norway, convinced me my original instinctive distrust of the CO2 data you presented was correct.
Thank you Willis, before I looked into it I also thought the CO2 data was solid. I never realized the “rot” of dishonest “science” went all the way back to Callendar in the forties and Revelle, Pales & Keeling in the sixties. Now I understand it was an idea floating around that got promoted to the front page because it was a convenient tool for limiting access to energy and keeping the masses in poverty.
THE QUOTE:
“…North-European stations measured atmospheric CO2 over a 5 year period from 1955 to 1959. Measuring with a wet-chemical technique the atmospheric CO2 level was found to vary between approximately 270 and 380 ppmv, with annual means of 315 – 331 ppmv, and there was no tendency of rising or falling atmospheric CO2 level at any of the 19 stations during this 5 year period (Bischof, 1960). The data are particularly important because they are unselected and therefore free of potential biases from selection procedures, unlike the CO2 measurements based on the procedures at Mauna Loa …. During the same period atmospheric CO2 measurements were started near the top of the strongly CO2-emitting (e.g., Ryan, 1995) Hawaiian Mauna Loa volcano. The reason for the choice of location was that it should be far away from CO2-emitting industrial areas. At the Mauna Loa Observatory the measurements were taken with a new infra-red (IR) absorbing instrumental method, never validated versus the accurate wet chemical techniques….”

Phil.
June 9, 2010 9:11 pm

anna v says:
June 9, 2010 at 1:18 pm
googling “Physics of the homosphere,” I found a book open on the net by Gerd.W.Prolss ( needs an umlaut).
It has the mathematics, it will take me some time to wade through. Have not found data yet.

I should think that will be fine; I recall reading something by Prölss regarding satellites. LEO satellites are why these things have been studied so intensely, because of the importance of drag effects.

anna v
June 9, 2010 9:57 pm

Gail Combs says:
June 9, 2010 at 1:33 pm
Thanks for the links and exposition.
Yesterday I skimmed through the pages I was allowed of the Prölss book. He has a lot of differential equations accounting for eddies etc. (I am not forgetting that all the IPCC models are just that, differential equations). One interesting tidbit from the discussion of O in the “well mixed” homosphere: it seems he will explain trace gases in section 4.3; they (monatomic oxygen in this case) are not well mixed but do not matter in the density calculations. I will see if I am allowed to read what section 4.3 says today, with a new IP address :).
Thinking about experimental proof of well mixing, I thought of the different trace elements measured by the Aqua satellite in AIRS. If the atmosphere is well mixed, regardless of the sources, then patterns appearing at 500mb of all trace elements should be the same.
They are not, and maybe that is why AIRS makes the statement of “not well mixed” from this experience more than from the few ppms differences plotted of CO2.
http://airs.jpl.nasa.gov/multimedia/geophysical_products_multimedia/carbon_monoxide/
http://airs.jpl.nasa.gov/multimedia/geophysical_products_multimedia/methane/
Unfortunately no world map of SO2.
Water vapor is not a trace element in the strict sense, and it is not well mixed either, even though there are so many sources (75% of surface)
http://airs.jpl.nasa.gov/multimedia/geophysical_products_multimedia/water_vapor/, and a different pattern than the others.
Note, for the sake of my argument it is the similarity of patterns that is the experimental check not the amount of ppms or ppbs of difference. The difference is useful as showing up patterns.
So this is another experimental evidence that the atmosphere does not “well mix”

John Galt II, RA
June 10, 2010 12:11 am

This is a very good discussion.
I recall that the discussion of the Vostok ice cores described chemical migration over time within the composition of the ice cores, with a disclaimer that the data may not be entirely accurate.
Along these same lines of thought, a 155 year ‘history’, when considered in terms of the earth’s history of some 4 billion years +/-, is rather insignificant.
However, considering only one gas, CO2, while ignoring the effects of the chemical soup mankind has been increasingly adding to the atmosphere during this same period leaves the hypothesis needing much more substance.
Or, let’s just stop focusing on CO2 and look at the entire atmosphere for whatever we may think is happening.
BTW, the ‘warming’ is so very insignificant it can be considered non-existent; as it is, no one really knows, because the data we have has been ‘adjusted’, interpolated, smoothed, and otherwise faked.

tallbloke
June 10, 2010 2:33 am

The increased biomass due to higher co2 levels is absorbing proportionately more of the isotope it prefers. This leaves proportionately more of the fossil fuel isotope in the atmosphere and so gives the impression the human emitted fraction has increased more than it really has.

toho
June 10, 2010 5:21 am

Steve Fitzpatrick says:
June 8, 2010 at 8:34 am
“1. The Bern model appears to ignore the importance of thermohaline circulation, which is (I think) mainly responsible for net CO2 absorption. Net CO2 absorption is driven mainly by the difference between absorption by sinking (very cold) high latitude water and the out-gassing from upwelling deep waters as they warm at low latitudes, and yields an expected response to rising CO2 emissions which almost perfectly matches the historical data. It also predicts a continuing net absorption rate (for hundreds of years) that depends almost exclusively on the CO2 concentration in the atmosphere; that is, it would predict an exactly exponential decay if all CO2 emissions were suddenly to stop.”
Nick Stokes says:
June 9, 2010 at 3:01 am
“I understood Bern is empirical, so I don’t think it breaks down by factors. But I think net CO2 absorption involves the temperature difference, but the actual circulation has effect on a much longer timescale.”
Well, the THC overturns the top 100 m of the ocean in 30-40 years (which happens to be roughly the time constant Willis found). I agree with Steve – it appears that the Bern model ignores the THC. Your claim that the circulation has effect on much longer time scales is simply not correct.
A diffusion model for CO2 in the ocean is clearly not realistic. The diffusion is dominated by the THC (and by mixing in the top layer).

anna v
June 10, 2010 7:11 am

🙁 The interesting pages are protected in Prölss book glimpse on the net.
Still need a physical library.

anna v
June 10, 2010 8:40 am

I have not delved into ocean dynamics, but what about the tides?
They mix whole blobs from the bottom up, no matter how deep, and create turbulences breaking against the lands. Are they in the standard calculations?

AJ
June 10, 2010 9:03 am

If anyone cares, I’ve created another model similar to the one that I posted above, but this time using a continuous formula (i.e. calculus).
I’ve assumed the following:
– net co2 emissions rate = ax+b (i.e. linear, x = time)
– excess co2 absorption rate = e^(-rx) (-r=continuous rate, x = time)
– excess co2 = concentration over 280 ppm.
So the atmospheric accumulation should be the summation (from 0 to t) of Integral[(a*x+b)*e^(-r*x), dx]
To this summation, however, I will also account for the excess co2 already in the atmosphere. To do this I will simply add the starting excess co2 (d) multiplied by the absorption formula, giving the unabsorbed amount: d*e^(-r*x).
So, being lazy, I plugged Integral[(a*x+b)*e^(-r*x), dx] into Wolfram’s Online Integrator to get:
-(e^(-r*x)*(a + b*r + a*r*x)/(r^2))
I then sum the above between 0 and t and add the unabsorbed starting excess co2 giving:
Excess CO2 = -(e^(-r*t)*(a + b*r + a*r*t)/(r^2)) +((a + b*r)/(r^2)) +(d*e^(-r*t))
The link below is a copy of the excel spreadsheet that I uploaded to Google. This spreadsheet shows the results of the non-linear regression that I did of the above formula to the ppm values from Mauna Loa for the years 1959-2009. Interesting, to me anyway, is that the half-life result doubled to 73 years from what I posted earlier. The year 2100 projection, however, dropped to ~575ppm from ~600ppm.
Feel free to have a look and let me know where I messed up. My calculus skills are iffy, so I wouldn’t be surprised if it’s flawed.
https://spreadsheets.google.com/ccc?key=0AiP3g3LokjjZdHFaNV85b1BrX0hEazFsLUJKWjZ1Qnc&hl=en
Thanks, AJ
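As a sanity check on the integral quoted above (the antiderivative, not the model itself), the Wolfram result can be compared against a simple numerical integration; a, b, r, and t below are arbitrary test values:

```python
import math

def F(x, a, b, r):
    # Claimed antiderivative of (a*x + b) * e^(-r*x) from the comment above
    return -(math.exp(-r * x) * (a + b * r + a * r * x)) / r**2

def integrand(x, a, b, r):
    return (a * x + b) * math.exp(-r * x)

# Compare F(t) - F(0) against a fine trapezoidal integration.
a, b, r, t = 0.03, 1.0, 0.01, 50.0
n = 100000
h = t / n
trap = 0.5 * (integrand(0.0, a, b, r) + integrand(t, a, b, r))
trap += sum(integrand(i * h, a, b, r) for i in range(1, n))
trap *= h
assert abs((F(t, a, b, r) - F(0, a, b, r)) - trap) < 1e-6
```

The antiderivative checks out; differentiating -(e^(-r*x)*(a + b*r + a*r*x))/r^2 does recover (a*x + b)*e^(-r*x).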

Joel Shore
June 10, 2010 2:48 pm

anna v says:

Thinking about experimental proof of well mixing, I thought of the different trace elements measured by the Aqua satellite in AIRS. If the atmosphere is well mixed, regardless of the sources, then patterns appearing at 500mb of all trace elements should be the same.

No…It depends on the details of the gas, e.g., how long-lived it is in the atmosphere. I don’t think anybody has ever claimed that carbon monoxide is well-mixed. And, SO2 and other aerosols are known not to be well-mixed.

Water vapor is not a trace element in the strict sense, and it is not well mixed either, even though there are so many sources (75% of surface)
http://airs.jpl.nasa.gov/multimedia/geophysical_products_multimedia/water_vapor/, and a different pattern than the others.

Its lifetime in the atmosphere is short and it is condensable, with a very strong dependence on temperature of the amount that can be present. You are whacking down lots of strawmen here.

So this is another experimental evidence that the atmosphere does not “well mix”

If you actually read a textbook on the subject, you might discover that there is no claim made that the atmosphere in general is well-mixed. The claim is that certain gases in the atmosphere are well-mixed (except very close to strong sources or sinks, e.g., for CO2 close to ground sources), a claim well-verified by empirical data for the gases in question.

anna v
June 11, 2010 1:08 am

Joel Shore says:
June 10, 2010 at 2:48 pm
If you actually read a textbook on the subject, you might discover that there is no claim made that the atmosphere in general is well-mixed. The claim is that certain gases in the atmosphere are well-mixed (except very close to strong sources or sinks, e.g., for CO2 close to ground sources), a claim well-verified by empirical data for the gases in question.
I am 100 kilometers away at the moment from the nearest library that might have atmospheric physics textbooks, so I have to rely on what I can find on the net.
The well mixed claim is made all over the place.
There is no clear evidence that CO2 is well mixed, as you claim. The main measurements are at one longitude in the middle of an ocean. Stomata measurements say there have been large variations over time. AIRS notes say it isn’t well mixed.
The same is seen in the GOSSAT measurements that have been posted, to which I linked above.
The claim that the CO2 rise is unprecedented is crucial in order to tie it to anthropogenic sources. It has not been proven and it rests on the assumption of “well mixed” to extrapolate from one longitude in the middle of the ocean to the whole globe.
I did find in Prölss’ book in the part that was open to scrutiny that the homosphere is called so because the gasses are well mixed, so he is one that is calling the homosphere generally well mixed. He discussed monatomic Oxygen and said that one is not well mixed and the process will be described in paragraph 4.3, except that paragraph is not available to the public scrutiny.
So I do not know about straw men, but I do know when it sounds like cherry picking to make a point:
“CO2 is well mixed by fiat, go construct it”; otherwise how could one control and tax the whole world through the guilt principle? We have to get them to say: “mea culpa, mea culpa, mea maxima culpa”.
/end sarcasm

anna v
June 11, 2010 1:16 am

Joel Shore says:
June 10, 2010 at 2:48 pm
No…It depends on the details of the gas, e.g., how long-lived it is in the atmosphere. I don’t think anybody has ever claimed that carbon monoxide is well-mixed.
Lets look at this straw man.
Why would carbon monoxide not be well mixed if the dioxide is? Are there not the same eddy currents and turbulences entering in the differential equations?
If the turbulence in the atmosphere is supposed to make it a nice big mixer, like the ones mixing cement I suppose, why would CO mix differently than CO2?
That is what my check on the patterns of different gases means. If the mixing is kinematic, up there where it is measured by AIRS both should mix the same way.

Phil.
June 11, 2010 6:55 am

anna v says:
June 11, 2010 at 1:16 am
Joel Shore says:
June 10, 2010 at 2:48 pm
“No…It depends on the details of the gas, e.g., how long-lived it is in the atmosphere. I don’t think anybody has ever claimed that carbon monoxide is well-mixed.”
Lets look at this straw man.
Why would carbon monoxide not be well mixed if the dioxide is? Are there not the same eddy currents and turbulences entering in the differential equations?

CO reacts in the atmosphere with OH to form CO2, which means it has a lifetime of a few months. Hence it isn’t long-lived enough to be completely mixed around the planet; if you look at the AIRS results it’s easy to see in which months of the year the forest burning takes place in the Amazon.

Joel Shore
June 11, 2010 7:29 am

anna v says:

There is no clear evidence that CO2 is well mixed, as you claim. The main measurements are at one longitude in the middle of an ocean.

There are many sites around the world where CO2 is measured, not just Mauna Loa.

AIRS notes say it isn’t well mixed.

Obviously, the term “well mixed” is in the eyes of the beholder. If you want variability of only a tenth of a percent, say, then it is not that uniform. However, the AIRS data demonstrate that it is uniform within about +/- 1%. (See here http://airs.jpl.nasa.gov/AIRS_CO2_Data/ and note that the entire range of the color scale is 8 ppm, which is about 2%…hence it varies about +/-1%.) And, this is for a case when CO2 levels are changing at rates that are likely unprecedented, at least over the ice core record. For the historical variations in CO2, the rate of change was slower and thus the uniformity was, if anything, greater.
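The percentage arithmetic above can be spelled out; the ~385 ppm background used here is an assumed typical value for the period:

```python
# Quick check of the arithmetic above: an 8 ppm colour-scale range on a
# background of roughly 385 ppm (an assumed typical late-2000s value)
# spans about 2%, i.e. roughly +/- 1% about the mean.
span_percent = 8.0 / 385.0 * 100.0
assert 1.9 < span_percent < 2.2
```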

Why would carbon monoxide not be well mixed if dioxide is? Are there not the same eddie currents and turbulences entering in the differential equations?

Mainly because they have different lifetimes. Carbon monoxide is reactive. How much something gets mixed depends not just on the processes of the mixing but on the amount of time that you have to mix it. Think about making a batter with an electric mixer: when you only let the mixer run for a short time, the batter will not get very well mixed; if you let it run for a longer time then it will. Carbon monoxide has considerably less time to get mixed than CO2 does.

June 11, 2010 11:21 am

For those of you who are having problems with the term “well mixed”, I suggest that it can be applied to the Scripps data for specific locations but should not be used to explain the global consistency of the data. The data used to calculate monthly averages were taken when the local atmosphere was considered well mixed, and are probably our best estimate of natural background levels near sea level. The relatively spatially constant concentrations are not caused by turbulence but by the effects of water (gas, liquid, and solid) in controlling the level of CO2. It is these processes, for which we don’t have rates, that I expect cause CO2 to follow temperature and give the illusion of long e-folding times.

anna v
June 11, 2010 12:08 pm

Joel,
thanks for your response but I need numbers and links. What are the other CO2 measurements that are not Keeling et al?
I do not trust the lifetimes of CO2.
Example: it dissolves in water vapor and comes down in rain, let alone the uptake by the biosphere; this holds particularly for anthropogenic CO2, which by definition is over polluting sources, and pollution increases rain, for example.
The Japanese measurements have a 10% difference in the column-averaged numbers, not 1%, and this means that lower in the columns the values could be much higher.
I need to see more detailed three-dimensional measurements, not opinions turned into models, to be convinced that of all the trace elements it is CO2 that is well mixed.

Joel Shore
June 11, 2010 2:36 pm

anna v says:

thanks for your response but I need numbers and links. What are the other CO2 measurements that are not Keeling et al?

See here: http://cdiac.ornl.gov/trends/co2/contents.htm

Scott
June 11, 2010 3:30 pm

Hot of the press and just released ASAP today in Analytical Chemistry:
http://pubs.acs.org/doi/abs/10.1021/ac1001492
I think you need a subscription to read the whole thing, but it obviously pertains to this posting and should make for an interesting read.
-Scott

Scott
June 11, 2010 3:31 pm

Obviously, I meant hot off the press!

anna v
June 11, 2010 10:04 pm

Willis Eschenbach says:
June 11, 2010 at 3:02 pm
and Joel,
thank you for the globalview-co2 link. I probably missed it before, though I always check if my nickname is quoted. I will check again later to see what other links have been provided that I missed.
http://www.esrl.noaa.gov/gmd/ccgg/globalview/co2/co2_intro.html
from their introduction:
To facilitate use with carbon cycle modeling studies, the measurements have been processed (smoothed, interpolated, and extrapolated) resulting in extended records that are evenly incremented in time. Be aware that information contained in the actual data may be lost in this process. Users are encouraged to review the actual data in the literature, in data archives (CDIAC, WDCGG), or by contacting the participating laboratories.
OK, and particularly after climate gate, I am allowed my doubts on the ability of climate scientists to be objective with the data, no?
I suppose I could find links for the publications entering in the squiggly lines to extract height of measurement and method of calibration and whether the Keeling values are somewhere implicitly assumed. I will hit pay walls and am too old to start spending days in libraries. 75% from 5 to 14 kilometers, if “well mixed” is nonsense, cannot be a show case for global trends.
1. The stomatal data agree very well with the Mauna Loa data that you say you don’t believe. You can’t have it both ways.

It is not a matter of belief. It is a matter of questioning the statistical methodology at Mauna Loa as they set it forth, and a question on the validity of the concept of “well mixed” that is necessary for the splicing of two independent measurement methods to create “unprecedented”.
It is also a question of whether the whole field of CO2 measurements has been following the “throw away data that does not fit” methodology to agree with the guru’s directions, as has happened with temperatures.
2. The recent rise is unprecedented, whether you believe the stomatal data, the ice core data, or both.
The rise is not great, either in the stomatal records or in the ice records. It is the splicing that does the trick.
And not to forget Beck’s compilations.
At the moment “CO2 is well mixed” seems to be a belief not supported by three-dimensional data. The very concept of going where the air is “pure” and throwing away 2-sigma outliers is scientifically invalid, and I suspect it is not reflective of a real measurement on a real world. Satellite measurements should very soon clear up the three-dimensional picture.
I have had these discussions with other people before, so let’s leave it at that. You will not convince me unless there is a 3D representation that shows unprecedented CO2 to the extent Mauna Loa does, and I will not convince you. Everybody can do their own thinking and delving into data, to the extent that they can.

anna v
June 12, 2010 3:57 am

continued:
Have to take back what I said about the stomata record, because one publication, W. Finsinger et al., and Wagner et al. (nos. 1 and 2 in the figure), take off precipitously. I now have to decide how much time I want to spend chasing stomata publications to see if the methodology is Keeling-biased, in the throw-away-some-data-and-include-other-data manner.
Anyway, my ending stands: I am waiting for more and better three-dimensional satellite data.

dr.bill
June 12, 2010 5:02 am

re anna v: June 11, 2010 at 10:04 pm
She makes good points, Willis. Two sigmas and the false positive issue with things that occur infrequently or in small quantities is a recipe for self-deception (to say nothing of the potential for purposeful deception). It needs more digging.
/dr.bill

June 12, 2010 2:30 pm

Willis Eschenbach, on 6/7/10 at 1:25 pm, responding to a request by Steve Hempell to comment on “On Why CO2 is Known Not To Have Accumulated in the Atmosphere, etc.”, said,
>>Yes. Like many others, he is conflating e-folding time and residence time.
My paper dealt with IPCC’s crucial and often repeated claim that CO2 was a Long-Lived Greenhouse Gas. Neither IPCC’s reports nor my paper used the term or concept of e-folding time to have conflated it with anything.
Specifically I quoted from the following by IPCC:
>>[Aerosols] have a much shorter lifetime (days to weeks) than most greenhouse gases (decades to centuries) … . TAR, pp. 24-25.
And from the following,
>>Turnover time (T) (also called global atmospheric lifetime) is the ratio of the mass M of a reservoir (e.g., a gaseous compound in the atmosphere) and the total rate of removal S from the reservoir: T = M / S. For each removal process, separate turnover times can be defined. In soil carbon biology, this is referred to as Mean Residence Time. AR4, Glossary, p. 948.
IPCC explicitly refers to the lifetime of CO2 in the atmosphere, not its e-folding time, and makes Lifetime equivalent to Turnover time and to Mean Residence Time, which IPCC explicitly defines.
I reported the following in my paper:
>>Regardless of which way one poses the problem, the existing CO2 in the atmosphere has a mean residence time of 1.5 years using IPCC data, 3.2 years using University of Colorado data, or 4.9 years using Texas A&M data. The half lives are 0.65 years, 1.83 years, and 3.0 years, respectively.
I used the identical term used and defined by IPCC, amplified by the half life figures, to show that IPCC was wrong and inconsistent about the Lifetime/Turnover Time/MRT of CO2 in the atmosphere. Furthermore, even had I converted to e-folding times, it would have effected no more than a scaling by 0.69, immaterial to whether CO2 persists a few years or “decades to centuries”, or why natural CO2 and anthropogenic CO2 might have, as IPCC implies, different lifetimes.
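The arithmetic behind figures like these reduces to a few lines. As a sketch only, with illustrative mass and flux values rather than the cited IPCC or university data:

```python
import math

def turnover_time(mass, removal_rate):
    """IPCC's definition: T = M / S, reservoir mass over total removal flux."""
    return mass / removal_rate

def half_life_from_turnover(turnover):
    """For a single first-order removal process, half-life = T * ln(2)."""
    return turnover * math.log(2)

# Illustrative numbers only (GtC and GtC/yr), not the cited data sets:
M = 780.0   # assumed atmospheric CO2 mass, as carbon
S = 520.0   # assumed total removal flux, all sinks combined
T = turnover_time(M, S)
print(T)                                      # 1.5 (years)
print(round(half_life_from_turnover(T), 2))   # 1.04 (years)
```

For a single first-order process the two numbers differ only by the ln(2) scale factor discussed in this thread.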
I conflated nothing, and Mr. Eschenbach’s dismissive criticism, resting on that single allegation, was disingenuous and unresponsive.

Joel Shore
June 12, 2010 7:59 pm

Jeff: Willis’s criticism is exactly right and what you are saying is confused and incorrect. The mean lifetime of a CO2 molecule in the atmosphere is short because of the large exchanges that occur between the atmosphere, the biosphere + soils, and the mixed layer of the ocean. However, the decay time for a pulse of CO2 such as produced by the burning of fossil fuels is governed by the much slower rate at which CO2 can be removed from these reservoirs into the deep ocean.
So, what happens when you add in some new CO2 from burning of fossil fuels is that it rapidly partitions itself between the atmosphere, biosphere + soils, and the mixed layer…and now all three of these reservoirs have an elevated level of CO2. And, the decay of this elevated level is very slow.
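The distinction between a molecule's residence time and a pulse's decay time can be illustrated with a toy box model: fast exchange among atmosphere, land, and ocean mixed layer, plus a much slower one-way leak into the deep ocean. Every rate constant below is a made-up illustrative value, not a measured flux:

```python
# Toy box model: fast exchange among atmosphere, land, and ocean mixed
# layer, plus a much slower one-way leak into the deep ocean.
k_fast = 0.25   # 1/yr, exchange rate between atmosphere and each sink (invented)
k_slow = 0.01   # 1/yr, mixed layer -> deep ocean (invented)

atm, land, mix = 1.0, 0.0, 0.0   # a unit pulse added to the atmosphere
dt, t = 0.01, 0.0
while atm + land + mix > 0.5:    # run until half the pulse is in the deep ocean
    ex_al = k_fast * (atm - land) * dt   # atmosphere <-> land
    ex_am = k_fast * (atm - mix) * dt    # atmosphere <-> mixed layer
    leak = k_slow * mix * dt             # mixed layer -> deep ocean
    atm -= ex_al + ex_am
    land += ex_al
    mix += ex_am - leak
    t += dt

# An individual molecule leaves the atmosphere box at rate 2 * k_fast:
print("molecule mean residence time:", 1.0 / (2 * k_fast), "yr")
print("pulse half-life ~", round(t), "yr")
```

With these invented rates an individual molecule stays in the atmosphere box only ~2 years on average, yet the pulse takes on the order of two centuries to halve, which is the qualitative point.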

Spector
June 13, 2010 8:41 pm

RE: Willis Eschenbach: (June 11, 2010 at 3:02 pm) Stomatal Data
The primary inference I draw from the included stomatal data plot is that it seems to confirm that CO2 levels in the past were probably much more variable than indicated by ice-core data. It seems reasonable to suppose that ice-core data may have been subject to gradual in situ diffusion or annealing processes that would progressively degrade the temporal resolution of the data over long periods of time.
I do think that modern CO2 levels may well be unprecedented as maintained in this article, but perhaps a more balanced approach might be to say there also could be significant physiogenic (natural) as well as anthropogenic effects contributing to this change.
I know that it is possible to ‘force-fit’ a filtered SST data curve to account for most of the recent CO2 variation, but it appears there is no readily available historical data from BT drops over the years to show just how much past and recent thermal forcing has actually penetrated into the depths of the ocean. Without this data, I do not think we can make an intelligent estimate of the amount of CO2 that could have been forced out of the sea as a result of the recent warming.

anna v
June 13, 2010 9:26 pm

Spector says:
June 13, 2010 at 8:41 pm
I do think that modern CO2 levels may well be unprecedented as maintained in this article, but perhaps a more balanced approach might be to say there also could be significant physiogenic (natural) as well as anthropogenic effects contributing to this change.
I will add to your comments volcanic activity as an unknown possible source of sudden increase. There is no reason to assume that the volcanoes, of which over 200,000 vents on the ocean floors have been estimated, are in a steady state as far as CO2 exhausts go. That they are not in a steady state is evident from the occasional explosions. Maybe, if the Mauna Loa etc. data are not doctored out of reality, we are measuring an upswing in the chaotic mechanisms of magma circulation and venting.

June 14, 2010 7:34 am

Willis Eschenbach says:
June 7, 2010 at 12:59 pm
My language was extreme because, as a result of Dr. Ravetz’s inanities, I have suffered innumerable personal attacks because I don’t buy into his nonsense. In addition, his “post-normal science” theory is responsible in part for the destruction of good scientific practice we see in too much of climate science.

Just a quick comment; I hope that it’s not too far down the tail to be read.
First, greetings to my old sparring partner Willis. I’m glad that you didn’t consider your remarks to be ad hominem.
Then, I do regret that you were exposed to innumerable personal attacks, all because of me. As to my ‘inanities’ and ‘nonsense’, I suggest that you consult ‘Uncertainty and Quality in Science for Policy’ (co-authored with Silvio Funtowicz), and also the many publications of Jeroen van der Sluijs, including the ‘Guidance’ for uncertainty management adopted by the Dutch environmental agency. You will find there that I do not airily substitute a politically-correct ‘quality’ for Truth; but with others I try to develop standards and methods whereby quality can be assessed and ensured in those sciences where simple Truth is not so easily obtained. If you still believe that Truth is just a simple matter of applying Scientific Method in such areas as the evaluation of the CO2 record, then we still have some fundamental disagreements. If that be corruption, so be it.
I am now in dialogue, in terms of mutual respect, with some other severe critics. It’s a pity that you still consider me to be some mixture of villain and idiot.
All best wishes – Jerry Ravetz

JPeden
June 15, 2010 11:49 pm

Jerome Ravetz says:
June 14, 2010 at 7:34 am:
If you still believe that Truth is just a simple matter of applying Scientific Method in such areas as the evaluation of the CO2 record, then we still have some fundamental disagreements.
Dr. Ravetz, if you think you can approach Truth in the case of CO2CAGW without applying the Scientific Method – a defect which characterizes ipcc Climate Science – then you are not even interested in the search for Truth to begin with, nor in the wellbeing of Humanity. It’s as simple as that.

tonyb
Editor
June 16, 2010 1:36 am

Jerome Ravetz said;
“If you still believe that Truth is just a simple matter of applying Scientific Method in such areas as the evaluation of the CO2 record, then we still have some fundamental disagreements.”
I wonder: if, when you were working towards your PhD, your lecturer had said that the scientific method could be set aside as irrelevant in some instances, would you have protested against this?
Would you have said that if a theory can not stand up to the scientific method it should be discarded and another one examined?
The notion that the scientific method is irrelevant in some cases is extraordinary to many of us. I wonder if Willis could do an article enumerating the other areas of science where this proven method has been set aside, after vast resources have been thrown at it, in order to follow the much less time-honoured system of post-normality? A short article, I think, Willis.
What I find concerning is the obfuscation that surrounds climate science, in particular the rewriting of history to demonstrate that there was only the merest hiccup to represent the Little Ice Age or the Medieval Warm Period. No doubt the melting of the glaciers that allowed the Romans to march their armies across the High Alps was also an illusion, and the well-trodden path I attempted to take to follow in their footsteps did not exist?
If the case for radiative physics is so strong, why does it need to be shored up with the nonsense of temperature manipulation as our climate history is rewritten to suit the new post-normal ‘facts’? Why has sea level history been allowed to be based on absurd reconstructions from three highly disjointed Northern Hemisphere tide gauges, which are then taken to represent the whole globe? The pillars of present climate science are very shaky. Temperatures have been rising since the depths of the LIA around 1690. Whoever would have thought it?
Cast your mind back to your student days, Mr Ravetz, and see for yourself how far the path you have now chosen to tread in pursuing post-normal science lies from the methods you would have learnt at your university.
As Willis says it would be good if you stuck around for a normal dialogue as surely we all have much to learn from each other?
Tonyb

June 16, 2010 2:25 pm

100612 ESCHENBACH WUWT 100616
Willis Eschenbach says on June 12, 2010 at 8:31 pm
>>>>Adjustment time or response time (Ta) is the time-scale characterising the decay of an instantaneous pulse input into the reservoir. The term adjustment time is also used to characterise the adjustment of the mass of a reservoir following a step change in the source strength. Half-life or decay constant is used to quantify a first-order exponential decay process. See: →Response time, for a different definition pertinent to climate variations. The term lifetime is sometimes used, for simplicity, as a surrogate for adjustment time. [Quoting from IPCC glossaries]
>>Note that they say “half-life or decay constant”. “Decay constant” refers to using something other than 0.5 (half, for half-life) as the constant for measuring the decay. The number “e” (2.71828) is the other commonly used decay constant in the form of 1/e, or ~ 0.37, hence “e-folding time”.
>> So as I said, you have conflated the two.



Mr. Eschenbach reaches into IPCC’s glossaries for a term IPCC explicitly excludes from use in climate. IPCC does not violate its edict. It never uses e-folding time with respect to CO2. As a result, and combined with the fact that I didn’t use the term e-folding time either, I could not have conflated e-folding time, or any synonym of it, with anything as Mr. Eschenbach imagines. Mr. Eschenbach reads “e-folding time” where it is not written to make a false accusation.
Next, Mr. Eschenbach conflates “e”, a number, with a “decay constant”, a parameter. Even Wikipedia manages to get this straight where it says “(lambda) is a positive number called the decay constant: … N(t) = N_0*e^(-lambda*t). Accord: HyperPhysics, Britannica Online, Weisstein’s World of Physics, etc., etc. The number e, appropriately dimensionless, and the decay constant, which has the dimension of time, are quite different, and should not have been conflated.
Mr. Eschenbach never responds to the fact that using half-life vs. e-folding time is a matter of a small scale factor (ln(2) ≈ 0.69). Even if I had conflated the terms, as he wrongly alleges, it would have caused no substantive difference in my argument that CO2 persists a few years, supported by IPCC’s formula, vs. “decades to centuries” as IPCC concludes. His accusation, in which he persists, that I conflated terms is not only false, but immaterial.
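The scale factor both sides keep mentioning can be checked in two lines: for a first-order decay N(t) = N0·e^(−λt), the half-life is ln(2)/λ and the e-folding time is 1/λ, so their ratio is ln(2) ≈ 0.693 no matter what λ is. A minimal check, with an arbitrary λ:

```python
import math

lam = 0.2                      # arbitrary decay constant, units 1/yr
e_folding = 1.0 / lam          # time for N to fall to N0/e
t_half = math.log(2) / lam     # time for N to fall to N0/2

print(round(t_half / e_folding, 3))   # 0.693, i.e. ln(2), for any lam
```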

June 16, 2010 2:39 pm

Correction: lambda has the units of reciprocal time.

Joel Shore
June 16, 2010 7:23 pm

Jeff,
I doubt your sophistry in the above post will convince anyone (except perhaps yourself). Frankly, it’s embarrassing. You write a whole post about half life vs. e-folding time as if that is the primary subject of the argument. It is not. You are the king of trouncing strawmen.
The primary point is your inability to distinguish between the characteristic time (whatever you want to define it as, e-folding, half-life, or whatnot) that a CO2 molecule spends in the atmosphere vs. the characteristic time it takes for a pulse of CO2 to decay. The two are very different because there are fast processes of exchange between the atmosphere, the land sink (biosphere + soils), and the mixed layer of the oceans. So, when you add some additional CO2 to the atmosphere, it rapidly partitions between these three reservoirs. However, the slow rate-limiting process is the transfer from the ocean mixed layer to the deep oceans. So, what you get well after the CO2 has equilibrated within these three reservoirs is a rise in the “height” (i.e., CO2 level) in all three reservoirs and it is this that then takes a long time to decay.
It’s easy to make an analogy with reservoirs containing water: Take two reservoirs that each contain 10 gallons of water and two pumps, one that pumps from the reservoir A to reservoir B and one that pumps from reservoir B to reservoir A, each at 1 gallon / minute. Now, imagine adding 1 gallon of water to reservoir A. The half-life of an individual molecule in that added gallon will be on the order of 5 minutes or so. (Actually, I think the precise answer is ~6.93 minutes, assuming the reservoirs are perfectly well-mixed, but we won’t quibble on the details.) However, it is wrong to conclude that after, say, 1 hour (when nearly all…~99.75%…of the molecules from the added gallon would have gone through the pump from reservoir A to B) that the amount of water in reservoir A will be essentially 10 gallons. In fact, as the problem is strictly stated, Reservoir A would remain with 11 gallons in it forever. (A better analogy would have the pump from A to B pump at a higher rate than the pump from B to A when there is more water in reservoir A than B and vice versa. In that case, the long term limit would have 10.5 gallons in each reservoir. So, the additional gallon has been divided between the two reservoirs but then remains in them forever.)
In this simple analog, the half-life of a water molecule added to reservoir A is only about 7 minutes but a “pulse” of water persists in that reservoir forever (after, possibly, dividing itself equally between the two reservoirs); in the real system, the pulse of CO2 does not persist in the atmosphere forever (because there are processes that remove it from the reservoirs) but it does persist for a lot longer than the characteristic time for a molecule of CO2 to be in the atmosphere.
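The figures in this water-tank analogy can be checked directly. Treating the first passage through the A→B pump as removal from a well-mixed 10-gallon tank (the stated simplification), the fraction of the added gallon not yet pumped after time t is e^(−t·rate/V):

```python
import math

V = 10.0      # gallons in reservoir A (the analogy's simplification)
rate = 1.0    # pump rate, gallons per minute

tau = V / rate                       # mean time before a molecule is pumped out
t_half = tau * math.log(2)           # half-life of a molecule's first passage
left_after_hour = math.exp(-60.0 / tau)   # fraction never pumped after 60 min

print(round(t_half, 2), "min half-life;",
      round(100 * (1 - left_after_hour), 2), "% pumped after an hour")
# -> 6.93 min half-life; 99.75 % pumped after an hour
```

These match the ~6.93 minutes and ~99.75% quoted in the comment above.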

Spector
June 17, 2010 3:16 pm

RE: Willis Eschenbach: (June 15, 2010 at 8:23 pm) “Thanks, Spector. Anyone espousing your idea would have to explain why the historical stomata data vary so quickly and so far, while both the modern stomata and Mauna Loa data vary so little. What are your ideas on that question?”
I think the lack of variation in the two modern data sets is because that data was taken during a single monotonic CO2 increase interval and we probably have a better handle on the dates represented by the stomata. Perhaps we need a third proxy or a more robust collection of stomata data to get a better picture of what was going on in the past.
It might be interesting to take several CO2-free ice-cores and expose the open end-faces on one side to a pure CO2 atmosphere for several months, say at -5, -15, and -40 degrees C and see just how much and how far CO2 is absorbed into each core.

June 17, 2010 4:26 pm

Spector,
I think your diffusion experiment could put some numbers to what some of us postulate to explain why “trapped” gas bubble ages are always younger than the ice containing them, with the age differences increasing with age. Also, C14 has been observed in ice older than 5,000 years. The mechanism that has been postulated is that as pressure is released on the ice cores, the highly compressed gases expand, creating microcracks in the ice into which modern air diffuses.

June 18, 2010 11:12 am

Re Willis Eschenbach on 6/16/10 at 7:53 pm and Joel Shore on 6/12/10 at 7:59 and 6/16/10 at 7:23 pm:
Mr. Eschenbach continues not to respond to Steve Hempell’s request for his comments on my paper, “On Why CO2 Is Known Not To Have Accumulated in the Atmosphere, etc.”. Clearly Mr. Eschenbach did not read it, yet was able to conclude that I had conflated two bits of physics. He quoted the following paragraph from my email,
>>>> Regardless of which way one poses the problem, the existing CO2 in the atmosphere has a mean residence time of 1.5 years using IPCC data, 3.2 years using University of Colorado data, or 4.9 years using Texas A&M data. The half lives are 0.65 years, 1.83 years, and 3.0 years, respectively.


omitting the following introductory part,
>>I reported the following in my paper.
Then he says,
>> First, you gave no citations for those figures.
He shows no evidence of reading emails either.
>>You go so far as to calculate (on some unexplained basis) what you refer to as the half-life of the residence time, a number which makes no sense … 

My paper says,
>>>>Turnover time (T) (also called global atmospheric lifetime) is the ratio of the mass M of a reservoir (e.g., a gaseous compound in the atmosphere) and the total rate of removal S from the reservoir: T = M / S. For each removal process, separate turnover times can be defined. In soil carbon biology, this is referred to as Mean Residence Time. AR4, Glossary, p. 948.
>>Now throw in approximately 100% replenishment, and you have an eleventh grade physics or chemistry problem where the level in the bucket is only slowly changed but the solution is quickly diluted. This is a different question from residence time, elevated to a mass balance problem.
>> Regardless of which way one poses the problem, the existing CO2 in the atmosphere has a mean residence time of 1.5 years using IPCC data, 3.2 years using University of Colorado data, or 4.9 years using Texas A&M data. The half lives are 0.65 years, 1.83 years, and 3.0 years, respectively. This is not “decades to centuries” as proclaimed by the Consensus. Climate Change 2001, Technical Summary of the Working Group I Report, p. 25. See The Carbon Cycle: past and present, http://www.colorado.edu/GeolSci/courses/GEOL3520/Topic16/Topic16.html & Introduction to Biogeochemical Cycles Chapter 4, http://www.colorado.edu/GeolSci/courses/GEOL1070/chap04/chapter4.html, UColo Biogeochem cycles.pdf; The Carbon Cycle, the Ocean, and the Iron Hypothesis, http://oceanworld.tamu.edu/resources/oceanography-book/carboncycle.htm
The “unexplained basis” is IPCC’s formula, quoted and cited. The sources for all the data are cited specifically.
That the number makes no sense is Mr. Eschenbach’s personal limitation.
He quotes from IPCC, omitting the first sentence, included next:
>> In more complicated cases, where several reservoirs are involved or where the removal is not proportional to the total mass, the equality T = T_A no longer holds. Carbon dioxide (CO2) is an extreme example. Its turnover time is only about four years because of the rapid exchange between the atmosphere and the ocean and terrestrial biota.
This is a violation of Henry’s Law, the law of solubility, and a law which IPCC not only never uses but suppressed when it arose in an AR4 draft. The uptake of CO2 is proportional to its partial pressure in the atmosphere, and its partial pressure is the equivalent of its total mass. (See for example AR4, ¶7.3.4.3, p. 532, where IPCC gives a change in pCO2 in units of ppm.) Furthermore, IPCC makes leaf water the largest flux in that uptake at 270 Gtons/year (TAR, ¶3.2.2.1, p. 191), and then never uses leaf water in its carbon cycle (see for example TAR, Figure 3.1, p. 188; AR4, Figure 7.3, p. 515). By omitting that 270 Gtons/year, IPCC’s data yield a turnover time, which is the same thing (and according to IPCC specifically for “soil carbon biology” and SO2 aerosol), of 3.55 years (“about four”) instead of the 1.5 years IPCC data and formula yield.
Mr. Eschenbach says,
>>More to the point, you are still conflating residence (turnover) time, and half-life. One is how long the average CO2 molecule remains in the air. The other is how long it takes a pulse of CO2 to decay back to equilibrium. They are very different things, as seen in the IPCC definitions.
To the contrary, neither the glossary in the Third nor the Fourth Assessment Report refers to the lifetime of a CO2 molecule in any respect.
Similarly, Mr. Shore says,
>> The primary point is your inability to distinguish between the characteristic time (whatever you want to define it as, e-folding, half-life, or whatnot) that a CO2 molecule spends in the atmosphere vs. the characteristic time it takes for a pulse of CO2 to decay. The two are very different …
and
>>in the real system, the pulse of CO2 does not persist in the atmosphere forever (because there are processes that remove it from the reservoirs) but it does persist for a lot longer than the characteristic time for a molecule of CO2 to be in the atmosphere.


The model under discussion here is the exponential decay of a pulse, slug, or mass of CO2 in the atmosphere by its uptake into other reservoirs. The pulse, slug, or mass is given by the number of molecules. Computing the average lifetime of a molecule from the decay of the pulse is trivial. It is precisely the mean residence time, which is identically the half-life divided by ln(2), and the reciprocal of the decay constant, for the pulse. One wonders how Eschenbach and Shore might think the mean lifetime of a molecule could be defined and calculated if not by observing a large mass of them being absorbed.
The notion that the characteristic time for a CO2 molecule is different than the characteristic time for a pulse, slug, or mass of them is foolishness. It is an invention, repeated here and there, and now by Eschenbach and Shore, to salvage IPCC’s justification for and reliance on anthropogenic CO2, but not natural CO2, accumulating in the atmosphere. It is related to IPCC claims that MLO data are global, that atmospheric CO2 is well-mixed, and that CO2 is a Long Lived GreenHouse Gas (LLGHG), and to IPCC’s implication that the ocean uptake of natural and manmade CO2 are significantly different. These are all false.

June 19, 2010 10:33 pm

Willis Eschenbach on 6/19/10 at 1:40 pm said,
>> My friend, if after all of this you continue to claim that the residence (or turnover) time is the same thing as the pulse decay time (half-life), I’m afraid that you need more help than either Joel or I can give you.
I never made the claim asserted, and consequently could not have continued to make it. To the contrary, I quoted from my paper specific pairs of values for residence time and half-life, and the numbers were not the same. To be helpful to anyone, a teacher must cite precisely and accurately.
Mr. Eschenbach says,
>> The half-life of a pulse of CO2, however, is variously estimated between 30 and over a hundred years. My figures put it in the 30′s.

Mr. E. has made no point, and the use of the passive voice here doesn’t help his argument. I had already cited, in my posts here and on my blog, IPCC’s claim that the characteristic time of CO2 is in the range of “decades to centuries” (TAR, p. 25). This contradicts IPCC’s own formula. The fact that Mr. E. adopts IPCC’s figures is not to his credit.
Mr. Eschenbach says,
>> Neither I nor Joel “invented” the difference between residence time and half-life. It is found in the textbooks, in the scientific papers, and in the popular press.

I’m sure neither of them did. Half-life is mean residence time multiplied by ln(2), a fact about as old as Euler (1707-1783) and the Euler number, e. If Mr. E. is responding to my post, he misunderstands what I wrote. I actually said,
>> The notion that the characteristic time for a CO2 molecule is different than the characteristic time for a pulse, slug, or mass of them is foolishness. It is an invention, repeated here and there, and now by Eschenbach and Shore, to salvage IPCC’s justification for and reliance on anthropogenic CO2, but not natural CO2, accumulating in the atmosphere.
What I claimed was invented is the silly notion that the half-life of a molecule was different than the half-life of a pulse of CO2. And I did not claim that either Eschenbach or Shore invented it, only that they repeated it.
Mr. Eschenbach says,
>> Finally, the IPCC does not say that only anthropogenic CO2 “accumulates” in the atmosphere. That’s a misunderstanding, fairly common to be sure, but wrong none the less.
Quite to the contrary, IPCC refers to CO2 accumulation when it says,
>>Assuming that accumulation of CO2 in the ocean follows a curve similar to the (better known) accumulation in the atmosphere, the value for the ocean-atmosphere flux for 1980 to 1989 would be between −1.6 and −2.7 PgC/yr. TAR, ¶3.5.1 Atmospheric Measurements and Global CO2 Budgets, p. 207.
More specifically, IPCC makes the following analysis and attribution:
>>From 10 kyr before present up to the year 1750, CO2 abundances stayed within the range 280 ± 20 ppm. During the industrial era, CO2 abundance rose roughly exponentially to 367 ppm in 1999 and to 379 ppm in 2005. Citations deleted, AR4, ¶1.3.1 The Human Fingerprint on Greenhouse Gases, p. 100.
>>Cumulative carbon losses to the atmosphere due to land-use change during the past 1 to 2 centuries are estimated as 180 to 200 PgC and cumulative fossil fuel emissions to year 2000 as 280 PgC, giving cumulative anthropogenic emissions of 480 to 500 PgC. Atmospheric CO2 content has increased by 90 ppm (190 PgC). Approximately 40% of anthropogenic CO2 emissions has thus remained in the atmosphere; the rest has been taken up by the land and oceans in roughly equal proportions. Citations deleted, TAR, Box 3.2, p. 192.
So IPCC attributes all the observed rise in CO2 that has accumulated in the atmosphere during the industrial era to ACO2. The 119.6 PgC/yr from terrestrial sources and the 90.6 PgC/yr from the ocean do not accumulate. See AR4, Figure 7.3, p. 515. Nor does the 270 PgC/yr from leaf water. Op. cit. In summary, and contrary to Mr. Eschenbach’s guess, IPCC does indeed say that only ACO2 accumulates in the atmosphere.
A misunderstanding certainly exists. It lies in physics, and it belongs to IPCC and its disciples.

Ferdinand Engelbeen
June 20, 2010 2:56 am

Jeff Glassman:
So IPCC attributes all the observed rise in CO2 that has accumulated in the atmosphere during the industrial era to ACO2.
Yes, the IPCC does attribute the total of the rise to aCO2. That is a statement about the increase in the total amount. It doesn’t mean that all individual molecules of aCO2 emitted in the past 150 years or so are still in the atmosphere. Most of them were exchanged for “natural” CO2 during the seasonal exchanges, at a rate of near 20% per year, or about 150 GtC of the 800 GtC in the atmosphere (a half-life slightly over 5 years). But seasonal exchanges don’t change the total amount of CO2 (whatever the source) in the atmosphere when in equilibrium.
But we know from the CO2 emissions inventory and the CO2 measurements (at a lot of places) that nature absorbs about half the emissions (as mass!): nature as a whole is a net sink for CO2. Whatever the source of the extra CO2, the current sink capacity of nature is about 4 GtC/year out of the 800 GtC already present. That is only about half the human emissions, and it implies a long(er) decay time (near 40 years average half-life) if we should stop all emissions today.
It is like the difference between the turnover and the gain/loss of a bank or a factory or anything you want: the amount of turnover of natural CO2 is not important at all; only the difference between the natural contributions and the natural sinks matters, and that difference is negative. There is zero net contribution by nature to the total amount of CO2 in the atmosphere, and humans are (near) totally responsible for the increase.
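The “near 40 years” figure follows from a one-line model in which the net sink stays proportional to the CO2 excess above the pre-industrial equilibrium. The numbers below are rounded from this comment (800 GtC now, ~4 GtC/yr net sink) plus an assumed ~590 GtC pre-industrial equilibrium; a sketch, not a calibrated carbon-cycle model:

```python
import math

# Rough figures from the comment above (GtC); illustrative, not calibrated.
C_eq = 590.0     # assumed pre-industrial equilibrium carbon mass
C_now = 800.0    # current atmospheric carbon mass
sink = 4.0       # current net natural sink, GtC/yr

# If the sink scales with the excess, the decay is first-order:
#   d(excess)/dt = -excess / tau,  with tau = excess / sink today.
tau = (C_now - C_eq) / sink
t_half = tau * math.log(2)
print("e-folding ~", round(tau, 1), "yr; half-life ~", round(t_half, 1), "yr")
# -> e-folding ~ 52.5 yr; half-life ~ 36.4 yr
```

A half-life in the mid-30s of years is indeed “near 40 years”, and quite distinct from the ~5-year seasonal-exchange half-life mentioned earlier.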

Ferdinand Engelbeen
June 20, 2010 5:24 am

Gail Combs says:
June 8, 2010 at 4:02 pm
Beck did have a series of measurements made at Barrow but not by a CAGW scientist and that is what I referred to
Sorry for the late comment; I was away on a three-week vacation (without Internet!)…
The historical data from Barrow would have been of interest, had the instrument been accurate enough. Unfortunately, they used the micro-Scholander instrument, which was intended for CO2 control of exhaled air, which is about 2% (and higher), or 20,000 ppmv. The instrument was calibrated with… outside air. If the value was between 200 and 500, the instrument was deemed OK. The accuracy of the instrument thus was +/- 150 ppmv… These outside-air values were used by Beck as part of his curve (with the 1942 “peak”).
Another interesting place was Antarctica, but the measurements there show extreme CO2 levels at abnormally low oxygen concentrations, which points to local contamination.
Similar problems of local contamination can be found in many of the historical measurements, but Ernst Beck used them all without hesitation…
so why is the average northern hemisphere CO2 not higher than the south?
There is an increasing lag between the NH and the SH, which points to the NH as the general source of the increase.
So what the Consensus has done is to “calibrate” the various records into agreement.
Sorry, that is not right. The calibration gases are calibrated against each other and at one place on earth (formerly at Scripps, currently NOAA I suppose) against an original set which is measured with an extremely accurate manometric method. That has nothing to do with “adjusting” the results.

Ferdinand Engelbeen
June 20, 2010 6:38 am

About stomata data:
Stomata data are a proxy for current and past CO2 levels, but they have their problems, as does any other proxy.
The main problem is the same as for many of the historical measurements: plants with stomata by definition grow on land (sea creatures don’t need stomata), where CO2 levels are not very well mixed. Diurnal differences of over 200 ppmv in summer are not uncommon in vegetated areas. That alone gives a positive bias relative to “background” CO2 levels. That would not be a problem in itself if the bias were constant. Nevertheless, the stomata data show a reasonable fit (+/- 10 ppmv) if calibrated against MLO and ice core data over the past 100 years.
According to the stomata people, stomata (index) density is set by the average CO2 level of the preceding growing season. That fits reasonably well for the past 100 years. But the problem is in the past: how do we know that the landscape (and accordingly the local CO2 levels) didn’t change over time? For several countries we know there was a huge evolution in landscape: from marshes to forests to agriculture, all in the main wind direction of some of the stomata proxy sites.
The same for weather influences: a warmer/colder climate may influence local/regional flora and thus CO2 levels far more than the “background” CO2 levels.
Thus while stomata data have their pros (better resolution than ice cores), there are a lot of problems too, which mean that past absolute levels or variability mainly reflect local/regional CO2 levels and should be taken with a grain of salt as estimates of “global” CO2 levels…

Ferdinand Engelbeen
June 20, 2010 7:57 am

thethinkingman says:
June 8, 2010 at 9:05 am
I am sorry if this sounds a bit , well, simple but what is the resolution in years, decades, centuries etc. for the CO2 level revealed by ice cores?
That depends on the accumulation rate of the snow/ice at the ice core drilling site and on the average temperature, which together determine the depth (and time) needed to close the bubbles. The ice cores with the highest accumulation rate are two of the three cores at Law Dome (about 1.5 meters ice equivalent per year). These have the best resolution, about 8 years. That means any peak of about 20 ppmv over one year, or an extra 3 ppmv sustained over 8 years, would be measurable in these cores. Thus Beck’s “peak” of about 80 ppmv around 1942 would be visible, but is not.
The drawback of the high accumulation is that the complete core, down to (near) rock bottom, does not go far back in time: only about 150 years. The third Law Dome core has a lower accumulation rate (it was drilled downslope), a resolution of about 40 years, and goes back about 1,000 years. That one shows a 6 ppmv CO2 drop coinciding with the LIA.
The Law Dome project is very interesting, as it answers many of Jaworowski’s objections: they used three different drilling methods (wet and dry), measured CO2 in firn and in ice at several depths, no clathrates, no cracks,…
The accuracy of all three cores for similar gas age was +/- 1.3 ppmv (1 sigma); CO2 in firn with still-open pores was identical to CO2 in ice with already-closed pores; and there was an overlap of about 20 years with the South Pole direct measurements, all within the accuracy of the ice cores.
For more (paywall) details, see:
http://www.agu.org/pubs/crossref/1996/95JD03410.shtml
Other ice cores are progressively more inland, which means less precipitation and lower temperatures. That gives less resolution but far longer periods back in time; the latest, from Dome C, with only a few mm ice equivalent per year, has some 600 years resolution but goes 800,000 years back in time.
No matter the differences in resolution, temperature, dust inclusions, accumulation, if one plots all ice core CO2 levels over time, the CO2 levels of the same average age are within 5 ppmv for all different ice cores.

June 20, 2010 10:37 am

Ferdinand Engelbeen, on 6/20/10 at 4:02 pm, said:
>> Yes, the IPCC does attribute the total of the rise to aCO2. That is about the increase of the total amount. That doesn’t mean that all individual molecules of aCO2 emitted in the past 150 years or so are still in the atmosphere. Most of them were exchanged with “natural” CO2 during the seasonal exchanges, at a rate of near 20% per year, or about 150 GtC/800 GtC in the atmosphere (a half life slightly over 5 years). But seasonal exchanges don’t change the total amount of CO2 (whatever the source) in the atmosphere when in equilibrium.
I don’t believe anyone said all the “individual ACO2 molecules” are still in the atmosphere. That would be silly. The molecules from any source, if they could be tagged, would be absorbed randomly, some almost instantaneously, and some never because of the tail of the absorption probability curve. We can only deal profitably with means, and the mean for the molecule is the mean for the slug, and is derived from the mean of the slug.
When we speak of a molecule of ACO2 and nCO2, we are making an approximation because the two species of CO2 are indistinguishable in the first order process of dissolution in the waters, and they balance in the terrestrial flux. The two species are different mixes of 12CO2:13CO2:14CO2. Some plants are fractionating, favoring one molecule over the other, and an argument can be made that the mechanical process of dissolution should depend on molecular weight. But these are fifth order conjectures or less, well lost in the noise of estimating the primary effects of temperature, pressure, wind velocity, and salinity.
What IPCC needed to have done is publish its mass balance analysis on which it claims to have based its carbon cycle model of AR4, Figure 7.3, p. 515. In that model the flux from fossil fuels is about 6 Gt/y, mixed in with about 120 Gt/y from land, and 91 Gt/y from the ocean, but omits the 270 Gt/y from leaf water claimed in the TAR. That puts fossil fuel emissions at about 3% of the total without leaf water, and 1.3% with. Expect the mass balance analysis to show that that fraction is the amount that any increase in the atmosphere CO2 concentration could be attributed to ACO2. That should be the result even when the model accounts for 12CO2 and 13CO2 separately (14CO2 concentration being lost in the noise).
More importantly, IPCC needed to make the outgassing of CO2 from the ocean temperature dependent. It is a positive feedback that IPCC overlooked, and one that confounds its conjecture that ACO2 is the cause of global warming.
And as far as mistaken attribution is concerned, IPCC zeroed the natural rise in temperature occurring at the start of the industrial era. That natural temperature rise should have continued for another 3ºC or so, extrapolating from the Vostok record. By zeroing it, IPCC then attributed that on-going, natural temperature rise to the CO2 rise at MLO, which it wrongly attributed to a global rise and wrongly to ACO2.
By the way, the atmosphere is never in equilibrium, unless you have a definition of equilibrium that is different than thermodynamic equilibrium.
You say,
>> But we know from the CO2 emissions inventory and the CO2 measurements (at a lot of places) that nature absorbs about half the emissions (as mass!): nature as a whole is a net sink for CO2.
What you cite is IPCC dogma. It is but a coincidence of numbers, and not a cause and effect model. IPCC says that 70 PgC/yr of nCO2 is absorbed into the ocean from 597 PgC in the atmosphere, which is 11.83%. For ACO2, IPCC numbers are 22.2 PgC/yr absorbed from 165 PgC, which is 13.45%. One might consider these numbers close enough (a ratio of 1.138:1) for a carbon cycle budget, but that would be naïve. The difference between nCO2 and ACO2 is a delta 13C of -25‰ vs. -8‰, respectively. That’s a ratio of 13CO2 in the total CO2 of 1.0838% vs. 1.1924%, respectively (a ratio of 1.012:1). IPCC’s discrepancy in absorption is far greater than the delicate difference in mix between nCO2 and ACO2.
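The δ13C arithmetic above can be checked with the standard conversion from per-mil δ13C to the 13C fraction of total carbon. This is a minimal sketch assuming the VPDB reference ratio 0.0112372; it reproduces the 1.0838% figure for −25‰, while for −8‰ it gives ~1.1024% (slightly different from the 1.1924% quoted, which may reflect a different reference ratio).

```python
# d13C (per mil) -> fraction of 13C in total carbon, assuming the VPDB
# standard 13C/12C ratio. 14C is negligible at this precision.

R_STD = 0.0112372  # 13C/12C of the VPDB reference (assumed standard)

def c13_fraction(delta13C_permil):
    """Fraction 13C/(12C + 13C) for a given d13C in per mil."""
    r = R_STD * (1.0 + delta13C_permil / 1000.0)  # sample 13C/12C ratio
    return r / (1.0 + r)

print(f"{c13_fraction(-25):.4%}")  # 1.0838%
print(f"{c13_fraction(-8):.4%}")   # 1.1024%
```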
If the absorption of nCO2 and ACO2 were to be different, the only known physical difference is the mix ratio of 13CO2:12CO2. Suppose we postulate different solubility coefficients for 12CO2 and 13CO2, itself not implausible though likely unmeasurable, and solve so that the bulk of the ACO2 absorbed is IPCC’s 13.45% of its atmospheric concentration per year, and IPCC’s corresponding number is 11.83% for nCO2. The solubility coefficient for 12CO2 turns out to be -.0826 and for 13CO2, 86.33. Regardless of the units lost in percentages, the solution to IPCC’s model is that the ocean must outgas 12CO2. Ignoring that little difficulty, the conjecture is that ACO2 and nCO2 are dissolved in water, but that water fractionates, changing all the mixes in the atmosphere and the water.
In summary, IPCC’s model cannot be solved under the laws of solubility. This explains why IPCC does not use Henry’s Law and does not supply its mass balance analysis. The best explanation for IPCC’s irregular fluxes is that the concept of a molecule of ACO2, a mix, and of nCO2, a mix, has no more meaning than a molecule of the atmosphere.
Your model by analogy to banks and factories is as worthless as the other analogies suggested in this thread. But that is the usual fate of all scientific models by analogy.
You conclude,
>> there is zero net contribution by nature to the total amount of CO2 in the atmosphere and humans are (near) totally responsible for the increase.



The greenhouse gases of water vapor and CO2 come from surface waters, the latter principally in accord with the law of solubility (Henry’s Law is for equilibrium, and the atmosphere and ocean surface are never in equilibrium), and the concentrations of those two GHGs are dynamic feedbacks of the surface temperature. Man’s contribution is negligibly small, especially in consideration of the noise in the variables. The surface temperature in Earth’s warm state follows the radiation of the sun, filtered by ocean currents, amplified in the short term, most probably by cloud albedo, and regulated in the longer term by the negative feedback of cloud albedo. Humans are not involved.

Ferdinand Engelbeen
June 20, 2010 12:00 pm

Jeff Glassman says:
June 20, 2010 at 10:37 am
When we speak of a molecule of ACO2 and nCO2, we are making an approximation because the two species of CO2 are indistinguishable in the first order process of dissolution in the waters, and they balance in the terrestrial flux.
They are not, in fact, indistinguishable in practice: with 150 years of emissions, the effect can be measured both in the atmosphere and in the upper ocean waters:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/sponges.gif
Note the difference in d13C level between atmosphere and ocean water (sponges reflect the d13C level of surrounding (bi)carbonate in the water without fractionation). That is caused by fractionation at the surface (both ways).
That puts fossil fuel emissions at about 3% of the total without leaf water, and 1.3% with.
Here we disagree: you don’t treat the natural emissions as part of a full cycle. To make the full mass balance, one needs to include the other part of the cycle, the sinks. Taking your own figures (without leaf water, but that doesn’t change the essence):
After a full cycle the total balance is:
97% nCO2 + 3% aCO2 – x% (a+n)CO2 = 1.5% (measured increase in the atmosphere)
so x% in this case is 98.5%, or in other words: the natural sinks are larger than the natural sources, and nature doesn’t add anything to the mass balance, no matter how large the natural sources and sinks are, no matter any change in individual or total sinks or sources, no matter the partitioning between oceans and vegetation as net sinks.
Or put in another way: if there were no human emissions, would the amount of CO2 in the atmosphere increase, decrease or stay the same?
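The same balance can be restated in absolute fluxes. A minimal sketch, with assumed round numbers (roughly the magnitudes of the 2000s; the natural-source figure is arbitrary, which is the point):

```python
# Mass balance: increase = emissions + natural_sources - natural_sinks.
# All figures in GtC/yr and assumed for illustration only.

emissions = 8.0          # human emissions
increase = 4.0           # measured rise of atmospheric CO2
natural_sources = 150.0  # any value works; it cancels out below

natural_sinks = emissions + natural_sources - increase
net_natural = natural_sources - natural_sinks

print(f"natural sinks: {natural_sinks} GtC/yr")
print(f"net natural contribution: {net_natural} GtC/yr")  # -4.0: a net sink
```

Whatever value is assumed for the natural sources, the net natural contribution comes out as increase minus emissions, here −4 GtC/yr: nature as a whole removes CO2.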
More importantly, IPCC needed to make the outgassing of CO2 from the ocean temperature dependent.
It is temperature dependent, but so is the uptake by vegetation, in the opposite direction. The seasonal changes in CO2 level are therefore relatively small, and mainly in the NH (more land/vegetation). The 8 ppmv/K from the long-term CO2-temperature dependency becomes about 4 ppmv/K for short-term temperature changes, around the trend.
Of course that is a dynamic equilibrium, not a static one.
In summary, IPCC’s model cannot be solved under the laws of solubility. This explains why IPCC does not use Henry’s Law and does not supply its mass balance analysis.
You overestimate the role of Henry’s Law for seawater: it plays a very tiny part in the whole equation. Most of the CO2 in seawater is not CO2 in solution, but in the form of carbonate and bicarbonate. pH and DIC (total dissolved inorganic carbon) play a much larger role, and of course temperature, but also biolife. Seawater contains much more CO2, in different forms, than fresh water or Henry’s Law alone would show. For pCO2 differences between sea surface and atmosphere, see the excellent pages of Feely et al.:
http://www.pmel.noaa.gov/pubs/outstand/feel2331/exchange.shtml
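For the physical-dissolution part alone, the temperature dependence of Henry’s-law solubility for CO2 can be sketched as follows. The constants are fresh-water values of the usual compilation form (kH ≈ 0.034 mol/(L·atm) at 298 K, van ’t Hoff coefficient ≈ 2400 K), and, as noted above, this deliberately ignores the seawater carbonate system (pH, DIC, biolife), so it only bounds the solubility term.

```python
import math

# Assumed fresh-water constants (compilation-style values, not measurements
# from this discussion):
KH_298 = 0.034          # mol/(L*atm) at 298.15 K
VANT_HOFF = 2400.0      # K, d(ln kH)/d(1/T)

def henry_kh(temp_c):
    """Henry's-law solubility of CO2 in fresh water at temp_c (Celsius)."""
    t = temp_c + 273.15
    return KH_298 * math.exp(VANT_HOFF * (1.0 / t - 1.0 / 298.15))

for t in (0, 15, 25):
    print(f"{t:2d} C: kH = {henry_kh(t):.4f} mol/(L*atm)")
```

Cold water holds roughly twice as much dissolved CO2 as 25 °C water, which is the temperature lever both sides of this exchange are arguing about.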
Man’s contribution is negligibly small, especially in consideration of the noise in the variables
The (temperature induced) noise in the total mass balance at the end of the seasonal cycle is about half the emissions over the past 50 years:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/dco2_em.jpg

tonyb
Editor
June 20, 2010 12:21 pm

Jeff Glassman said
“And as far as mistaken attribution is concerned, IPCC zeroed the natural rise in temperature occurring at the start of the industrial era. That natural temperature rise should have continued for another 3ºC or so, extrapolating from the Vostok record. By zeroing it, IPCC then attributed that on-going, natural temperature rise to the CO2 rise at MLO, which it wrongly attributed to a global rise and wrongly to ACO2.”
Yes, exactly
In my view the IPCC (bereft of historians) took a temperature snapshot from 1850/1880 and assumed that the (gentle) rise from that date was due to CO2, without realising that they were merely recording the tail end of a much longer rise.
The LIA is much misunderstood, inasmuch as the second, and arguably most severe, phase started sporadically in 1601 (possibly the coldest year ever) and developed in fits and starts, with the real action between 1650 and 1698, when the Little Ice Age in the usual sense of the word (“age” meaning extended) came to an end. There followed a 40-year warm period in which temperatures rose at a faster rate than any since; in effect, the cold periods after that were very episodic and do not merit the name “little ice age” but rather “little ice age interludes”.
Ironically, the last concerted gasp of anything approximating the Little Ice Age was 1879, 1880 and 1881. So James Hansen commenced measurements from a distinct trough in temperatures. I have commented on this curiosity a number of times in articles I have written that use historic instrumental records from around the world, which I collect here.
http://climatereason.com/LittleIceAgeThermometers/
I think it would be very interesting if someone like Willis were to analyse Hansen’s highly influential 1987 paper, from which GISS was constructed, because his 1880 start date was, to me, completely illogical. I increasingly feel that 1880 was chosen for a reason other than the reasons usually stated.
Whether the depth of the LIA was 1601 or 1684, what is certain is that temperatures have been rising in fits and starts since at least the 1690s (that is, for some 320 years or more), and every decade since 1810 has been warmer than that decade.
The IPCC snapshot is too narrow to see this broader picture, which can be backed up by instrumental records. Indeed the Lamb graph pastiche of the LIA used in the first IPCC report was more accurate than the Hockey stick that replaced it.
The lack of correlation between CO2 and the start of the rise in temperatures can be seen in this graph.
http://c3headlines.typepad.com/.a/6a010536b58035970c0120a7c87805970b-pi
I am keeping out of this particular CO2 debate having crossed swords with most of the protagonists here when I ran my own thread on ‘Historic variations in CO2’ over at the Air Vent.
http://noconsensus.wordpress.com/2010/03/06/historic-variations-in-co2-measurements/
Tonyb

June 20, 2010 1:42 pm

Ferdinand Engelbeen, on 6/20/10 at 6:38 am, said:
>>>> So what the Consensus has done is to “calibrate” the various records into agreement.
>>Sorry, that is not right. The calibration gases are calibrated against each other and at one place on earth (formerly at Scripps, currently NOAA I suppose) against an original set which is measured with an extremely accurate manometric method. That has nothing to do with “adjusting” the results.
That all may be true enough, but that is neither the end of IPCC’s “calibration”, nor the critical part of it. For example,
>>Based on an ocean carbon cycle model used in the IPCC SAR, tuned to yield an ocean-atmosphere flux of 2.0 PgC/yr in the 1980s for consistency with the SAR. After re-calibration to match the mean behaviour of OCMIP models and taking account of the effect of observed changes in temperature aon [sic] CO2 and solubility, the same model yields an ocean-atmosphere flux of −1.7 PgC/yr for the 1980s and −1.9 PgC/yr for 1989 to 1998. Citations deleted, TAR, Table 3.3, p. 208.
>>The longitudinal variations in CO2 concentration reflecting net surface sources and sinks are on annual average typically […]
>>Such documentary data also need calibration against instrumental data to extend and reconstruct the instrumental record. Citations deleted, AR4. ¶1.4.2, p. 107.
>>In 2005, the global average abundance of CH4 measured at the network of 40 surface air flask sampling sites operated by NOAA/GMD in both hemispheres was 1,774.62 ± 1.22 ppb. This is the most geographically extensive network of sites operated by any laboratory and it is important to note that the calibration scale it uses has changed since the TAR. The new scale (known as NOAA04) increases all previously reported CH4 mixing ratios from NOAA/GMD by about 1%, bringing them into much closer agreement with the Advanced Global Atmospheric Gases Experiment (AGAGE) network. Footnote, citation deleted. AR4, ¶2.3.2, p. 140.
>>Note that the differences between AGAGE and NOAA/GMD calibration scales are determined through occasional intercomparisons. AR4, Figure 2.4, p. 142.
>>These differences are the result of different cross calibrations and drift adjustments applied to individual radiometric sensitivities when constructing the composites. Citation deleted, AR4, ¶2.7.1.1.2, p. 188.
>>The large spread in T2 trends stems from differences in the inter-satellite calibration and merging technique, and differences in the corrections for orbital drift, diurnal cycle change and the hot-point calibration temperature. Citations deleted, AR4, ¶3.4.1.2.1, p. 267.
>>These products rely on the merging of many different satellites to ensure uniform calibration. AR4, ¶3.4.2.2, p. 273.
>>For this reason, the proxies must be ‘calibrated’ empirically, by comparing their measured variability over a number of years with available instrumental records to identify some optimal climate association, and to quantify the statistical uncertainty associated with scaling proxies to represent this specific climate parameter. AR4, ¶6.6.1.1, pp. 472-3.
>>Evaluated on the scale typical of current AOGCMs, nearly all quantities simulated by high-resolution AGCMs agree better with observations, but the improvements vary significantly for different regions and specific variables, and extensive recalibration of parametrizations is often required. Citation deleted, AR4, ¶11.10.1.1, p. 918.
>>The set of GCM simulations of the observed period 1902 to 1998 are individually aggregated in area-averaged annual or seasonal time series and jointly calibrated through a linear model to the corresponding observed regional trend. AR4, ¶11.10.2.2.2, p. 923.
>>Additionally, the year by year (blue curve) and 50 year average (black curve) variations of the average surface temperature of the Northern Hemisphere for the past 1000 years have been reconstructed from “proxy” data calibrated against thermometer data (see list of the main proxy data in the diagram). TAR, Summary for Policymakers, Figure 1, p. 3.
>> Complex physically based climate models are the main tool for projecting future climate change. In order to explore the full range of scenarios, these are complemented by simple climate models calibrated to yield an equivalent response in temperature and sea level to complex climate models. These projections are obtained using a simple climate model whose climate sensitivity and ocean heat uptake are calibrated to each of seven complex climate models. TAR, Summary for Policymakers, p. 13.
>>CO2 has been measured at the Mauna Loa and South Pole stations since 1957, and through a global surface sampling network developed in the 1970s that is becoming progressively more extensive and better inter-calibrated. Citations deleted, TAR, ¶3.5.1, p. 205.
These clippings say nothing about the limitations and accuracy problems IPCC admitted to encountering with its calibrations.
You discussed the laboratory calibration of detectors. IPCC calibrates again: detector measurements from different locations and different times, taken with different instruments, so that the results look alike. It calibrates the data supplied to models, and its parameterizations, so that the model results look like the results of other models. These, and every mention of an intercalibration, raise the specter of data adjustments. It graphs such calibrated records by calibrating the traces once again so that they merge or overlap.
IPCC uses calibration in lieu of error analysis. It uses visual correlation in place of numerical correlation, and is not above adjusting the offset and scale factor of one record to make it look like another, and then to claim a relationship. It calibrates its models and the data fed into its models so that the model results will look like the data. Then it fails to see if its well-tuned model with well-tuned data has any predictive power. This is not the stuff of science.

June 20, 2010 2:16 pm

Ferdinand Engelbeen, on 6/20/10 at 12:00 pm, said:
>> You overestimate the role of Henry’s Law for seawater: that plays a very tiny part of the whole equation. Most of the CO2 of seawater is not CO2 in solution, but in form of carbonate and bicarbonate. pH, DIC (total dissolved inorganic carbon) play a much larger role and of course temperature, but also biolife. Seawater contains much more CO2 in different forms than fresh water or Henry’s Law will show.
IPCC needed CO2 to accumulate in the atmosphere for its AGW conjecture to work. It tried to resurrect the buffer factor, the Revelle & Suess conjecture that failed. When it tried to measure the buffer factor, it exhibited the temperature dependence of solubility. This appeared in a draft to AR4, which IPCC promptly removed and suppressed.
IPCC reinforced the buffer factor with the model that the surface water is in equilibrium. With that assumption, it applied the chemical equations of equilibrium. The solution to these equations is the Bjerrum plot. The equations showed that the surface layer would be a buffer against CO2 dissolution, and in the bargain, that what CO2 was absorbed would cause a nice, alarming increase in ocean acidity.
Of course, the surface layer is not in equilibrium. It is not in equilibrium even in the most stagnant pool.
If one made a solubility measurement in IPCC’s surface layer model, he would find that Henry’s Coefficient depends not just on temperature, pressure, and salinity, but also on the state of the surface layer according to the Bjerrum plot. This is novel physics, invented to make AGW work.
A far better model, one consistent with all physics but not AGW, is that the surface layer is not in equilibrium. Instead it contains a surplus of molecular CO2, sufficient to preserve Henry’s Law and to itself be a buffer for the ions in the chemical equations. The buffer is not in the atmosphere; it is in the surface layer.

Ferdinand Engelbeen
June 20, 2010 2:52 pm

Jeff Glassman says:
June 20, 2010 at 1:42 pm
That all may be true enough, but that is neither the end of IPCC’s “calibration”, nor the critical part of it.
I may agree with most of what you object to, but not about the CO2 data: the IPCC has nothing to do with the calibration or the end results of the CO2 levels at Mauna Loa or any other station on earth. That was the work of Keeling at Scripps until the end of the 1990s, and of NOAA today, but controlled by different methods (manometric, GC, mass spectrometry) by different labs from different organisations in different countries. That makes the CO2 data robust and far beyond the machinations used for the temperature records. The only moment that all the (worldwide) data had to be revised was when it was discovered that the original CO2-in-N2 calibration gases (used out of fear of corrosion of the inside of the containers) gave different results with the NDIR measurements than CO2 in air. This needed a worldwide recalibration of all instruments with new calibration gases. But as the raw (voltage) measurements were still available, that wasn’t a huge problem.
Those are the base CO2 data, which represent 95% of the atmosphere. The problems are in the remaining 5% of the atmosphere: the first 200-1000 m over land, where the fast emitters/sinks are (humans, bacteria, vegetation) and mixing is slow under low wind conditions. Thus never use these data for “global” CO2 levels. But the below-200 m data over land are used for flux measurements, to try to understand the uptake/release of CO2 over different areas. That is of interest for the detailed carbon cycle, but of no interest for the global CO2 mass balance.

Ferdinand Engelbeen
June 20, 2010 3:36 pm

Jeff Glassman says:
June 20, 2010 at 2:16 pm
If one made a solubility measurement in IPCC’s surface layer model, he would find that Henry’s Coefficient depends not just on temperature, pressure, and salinity, but also on the state of the surface layer according to the Bjerrum plot. This is novel physics, invented to make AGW work.
Jeff, one needs to differentiate between what is measured, what can be calculated, and what is speculation in climate science. Besides the temperature plots, most direct measurements are done by people interested in good data, whatever those data show. That the IPCC (and some intermediaries) manipulate the interpretation of the data is a different category (including the GCMs and other computer games).
Ocean pCO2 (the result of all factors in seawater: DIC, pH, salinity, biolife,…) was measured sporadically at a lot of places by ships, and is measured systematically nowadays, also at a few fixed places on earth. E.g. in Bermuda:
http://www.bios.edu/Labs/co2lab/research/IntDecVar_OCC.html
That gives (as sent in a previous message) that we know with reasonable accuracy where the outgassing from the oceans is and where the sinks are, and the intermediates (source in summer, sink in winter). The fluxes in and out are more difficult to estimate, as wind speed is a huge factor in the exchanges, since simple diffusion and surface crossing is quite slow (there is on average only a 0.000007 bar difference in pCO2 between the atmosphere and the upper oceans).
All measured pCO2 changes are positive in both air and oceans. The upper oceans track the air concentrations, or the reverse; but as any (deep) ocean burp would increase the d13C level of the atmosphere (even including the sea-air fractionation), and we see the reverse, that proves that the flow is from the atmosphere into the oceans, not the reverse. Moreover, the pH is getting lower while DIC increases. If a lower pH were the cause of more outgassing (converting (bi)carbonate into CO2, thus increasing oceanic pCO2), that would reduce the DIC content, but we see the reverse.
Thus there is overwhelming evidence that the oceans are not the source of the extra CO2 in the atmosphere, but a net sink.

Stephen Wilde
June 20, 2010 3:54 pm

“Thus there is overwhelming evidence that the oceans are not the source of the extra CO2 in the atmosphere, but a net sink.”
No matter. All that is necessary is for the effectiveness of the sink to vary. Net outgassing is unnecessary.
Warmer ocean surfaces will reduce the effectiveness of the sink.

June 20, 2010 5:26 pm

Ferdinand Engelbeen, on 6/20/10 at 2:16 pm, said:
>> Jeff, one need to make a differentiation between what is measured, what can be calculated and what is speculation in climate science. Besides temperature plots, most direct measurements are done by people interested in good data, whatever these show. That the IPCC (and some intermediates) manipulate the interpretation of the data is of a different category (including the GCM’s and other computer games).
We seem to be converging here, which is quite encouraging. My work in this area arises out of what I considered gross errors in the scientific method evident in IPCC reports. In particular, I find the use of an unvalidated model for public policy a breach of ethics for a scientist, and a fraud for personal gain. My observation is that there would be no climate crisis, and especially no CO2 crisis, but for IPCC. I am focused entirely on exposing the IPCC fraud. The rest of climatology can wander over the domain as it might choose, and I might look in on their work from time to time with amusement.
You said,
>>Ocean pCO2 (the result of all factors in seawater: DIC, pH, salinity, biolife,…) were sporadically measured at a lot of places by ships and systematically nowadays and on a few fixed places on earth. E.g. in Bermuda:
http://www.bios.edu/Labs/co2lab/research/IntDecVar_OCC.html
>>That gives (as sent in a previous message) that we know with reasonable accuracy where the outgassing is from the oceans and where the sinks are. And the intermediates (source in summer, sink in winter). The flux measurements in and out are more difficult to estimate, as wind speed is a huge factor in the exchanges, as simple diffusion and surface crossing is quite slow (there is average only 0.000007 bar difference in pCO2 pressure between the atmosphere and the upper oceans).
I think that Takahashi, et al., have done a commendable job in assembling those data into a beautiful model. It is AR4, Figure 7.8, p. 523. The sum of all the individual cells in the Takahashi diagram is correctly the net uptake of the ocean. However, Takahashi’s positive and negative partial sums are not faithful to the uptake and outgassing fluxes used by IPCC in Figure 7.3, p. 515. Therefore, the Takahashi model needs recalibration. I provided an example on my blog. See Rocket Scientist’s Journal, “On Why CO2 Is Known Not To Have Accumulated in the Atmosphere, etc.”, Figure 1A.
You mention “all the factors”, with examples, but excluding dissolved molecular CO2. In my model, I rely on Henry’s Law, with the ocean surface layer a compliant, instantly available buffer. CO2 rich water rises at the end of the subsurface THC in the Eastern Equatorial Pacific (EEP) to outgas at the warmest prevailing SST of the day. (In an equivalent view, the THC can be considered a continuous path, closed by a branch over the surface of the ocean.) A warm air mass, heavily laden with CO2, rises, divides north and south, to enter the Hadley cells, which then carry the gas into the trade winds. This creates a plume of CO2 in the atmosphere that descends across Hawaii. That plume I imagine to be a ridge that can wander with the prevailing wind at MLO, causing it to have a seasonal cycle. (It’s too bad that the MLO data do not seem to include wind measurements.) MLO data represent a major source of atmospheric CO2.
That CO2 then wanders across the surface of the globe with natural wind currents. Meanwhile, the ocean surface layer moves poleward, cooling and reabsorbing CO2. These currents flow to the poles, where the surface water, made dense by cooling (always to ice-water temperatures) and especially by CO2, and somewhat by salinity (the conventional model), descends to depth as the headwaters of the THC. The polar regions represent major sinks of atmospheric CO2.
The THC subsurface current has many branches by which it reemerges, but a dominant branch leads to the EEP about a millennium later. The lag and the shape of the solubility curve are measurable in the Vostok record. See http://www.rocketscientistsjournal.com, “The Acquittal of Carbon Dioxide”.
Some conclusions: surface waters are the major source of atmospheric CO2, and the mean trends of CO2 concentration measured at MLO and the South Pole should not match.

Stephen Wilde
June 20, 2010 10:53 pm

“A warm air mass, heavily laden with CO2, rises, divides north and south, to enter the Hadley cells, which then carry the gas into the trade winds. This creates a plume of CO2 in the atmosphere that descends across Hawaii. That plume I imagine to be a ridge that can wander with the prevailing wind at MLO, causing it to have a seasonal cycle.”
I like that description, but how does it square with the similar record at Barrow and other locations?

Ferdinand Engelbeen
June 21, 2010 8:36 am

Jeff Glassman says:
June 20, 2010 at 5:26 pm
This creates a plume of CO2 in the atmosphere that descends across Hawaii. That plume I imagine to be a ridge that can wander with the prevailing wind at MLO, causing it to have a seasonal cycle.
That there is a continuous CO2 flow from the warm equator to the poles is true, but it is not the cause of the seasonal cycle. The main cause is the terrestrial vegetation cycle of the mid-latitudes, mainly in the NH: leaf formation in spring and further photosynthesis use CO2, produce O2 and increase d13C levels from spring to fall. The opposite happens from fall to spring. That can be seen in the seasonal cycles of CO2, O2 and d13C levels.
Further, the pCO2 (measured or calculated) is directly proportional to the concentration of free CO2 in solution, and is thus more or less known for different parts of the oceans and for different seasons. Henry’s Law is still valid, but has very little to do with temperature (alone) in the case of seawater. Thus calculating fluxes in/out of seawater based only on Henry’s Law and temperature gives a completely wrong answer, the more so as the diffusion speed of CO2 through (sea)water is very low: even with very large differences in pCO2 between ocean surface and atmosphere, the speed of CO2 transfer is low, and wind speed is the dominant factor in the fluxes.
From the past near million years (Vostok, Dome C) we know that the (dynamic) equilibrium between temperature in the past and CO2 levels is about 8 ppmv/K.
That includes all (deep) ocean flows, ice sheet and vegetation expansion/retreat, etc. That means that temperature is responsible for at most about 8 ppmv of CO2 rise from the warming since the depth of the LIA. There is no reason to expect that this ratio is different now; to the contrary, the short-term influence of (ocean) temperature is about 4 ppmv/K around the trend.
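The 8 ppmv/K claim above is easy to put in perspective with back-of-envelope numbers. The warming since the LIA and the CO2 endpoints are assumed round figures, not measurements from this thread:

```python
# How much of the modern CO2 rise could temperature alone explain at the
# ice-core sensitivity of ~8 ppmv/K? All inputs are assumed round numbers.

SENSITIVITY = 8.0        # ppmv per K, long-term ratio (Vostok/Dome C)
warming_since_lia = 1.0  # K, assumed for illustration

temp_driven = SENSITIVITY * warming_since_lia
observed_rise = 390 - 280  # ppmv, roughly pre-industrial to ~2010

print(f"temperature-driven part: ~{temp_driven:.0f} ppmv")
print(f"observed rise:           ~{observed_rise} ppmv")
print(f"fraction explained:      ~{temp_driven / observed_rise:.0%}")
```

On these assumptions, temperature accounts for less than a tenth of the observed rise, which is the core of the attribution argument being made here.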

Ferdinand Engelbeen
June 21, 2010 9:03 am

Willis Eschenbach says:
June 20, 2010 at 6:18 pm
Mean residence time is the time an average molecule stays in a reservoir. It is calculated as mass divided by throughput. IT DOES NOT HAVE A HALF-LIFE.
While true for identical molecules, there still is a kind of half-life for a pulse of a different isotope (like 14C or 13C)…
For the sake of clarity I have plotted what happens if you add a pulse of “human” aCO2 of 100 GtC at once to the pre-industrial atmosphere containing 580 GtC:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/fract_level_pulse.jpg
Where FA is the fraction of “anthro” CO2 in the atmosphere, FL in the upper oceans, tCA the total amount of CO2 in the atmosphere and nCA the amount of “natural” CO2 in the atmosphere, the difference being anthropogenic.
It is easy to see that after the pulse the total amount is of course the old amount plus the pulse, which brings the fraction of aCO2 to about 14%. This fraction is rapidly reduced to near zero within 40-50 years, simply through the permanent and seasonal exchanges of CO2 with the (deep) oceans and vegetation. The reduction of the extra mass from the pulse toward zero takes much longer, but although almost no aCO2 is left in the atmosphere, the increase of the CO2 level above equilibrium is still 100% caused by aCO2.
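The pulse experiment in the linked plot can be approximated with a minimal two-box bookkeeping model. The 580 GtC atmosphere, 100 GtC pulse, and 150 GtC/yr gross exchange are taken from the comment; the 50-year e-folding time of the excess mass is an assumed round number, not a fitted value:

```python
# Minimal bookkeeping for the pulse experiment: gross exchange swaps
# "anthro" for "natural" molecules without changing total mass, while a
# separate net sink removes the excess mass above equilibrium.
EQUILIBRIUM = 580.0   # GtC, pre-industrial atmosphere (from the comment)
PULSE = 100.0         # GtC of labeled aCO2 added at t = 0
EXCHANGE = 150.0      # GtC/yr gross two-way exchange with other reservoirs
TAU_EXCESS = 50.0     # yr, ASSUMED e-folding time of the excess mass

def simulate(years, dt=0.1):
    total = EQUILIBRIUM + PULSE   # total atmospheric CO2, GtC
    labeled = PULSE               # labeled aCO2 remaining, GtC
    for _ in range(int(years / dt)):
        excess = total - EQUILIBRIUM
        sink = excess / TAU_EXCESS            # net removal of excess mass
        frac = labeled / total
        # Exchange replaces labeled molecules in proportion to their share;
        # the sink removes labeled and natural CO2 in the same proportion.
        labeled -= (EXCHANGE * frac + sink * frac) * dt
        total -= sink * dt
    return total - EQUILIBRIUM, labeled / total

excess_40, frac_40 = simulate(40)
print(f"after 40 yr: excess {excess_40:.1f} GtC, labeled fraction {frac_40:.5f}")
```

The labeled fraction starts near 14% and collapses to almost nothing within decades, while roughly half the excess mass is still airborne after 40 years, which is exactly the distinction being drawn.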

June 21, 2010 2:46 pm

Willis Eschenbach on 6/19/10 at 1:40 pm said, shouting at the end,
>>Mean residence time is the time an average molecule stays in a reservoir. It is calculated as mass divided by throughput. IT DOES NOT HAVE A HALF-LIFE. 

He does not explain what he means by an “average molecule”. The reader might think he’s talking about the average in some sense of 12CO2, 13CO2, and 14CO2 molecules. What happened is that he conflated the average of time into the time of the average. These are interchangeable only in a special circumstance (linearity), which happens not to be applicable here. Then when he moved the average from time to the molecule, he was left with an expression of an average with respect to nothing real.
What he needed to do was average the life time of a bunch of molecules, then he would have had a target for the averaging. This might have kept him from conflating the average life of a molecule and the average lifetime in a slug of molecules.
When in his second sentence cited, Mr. Eschenbach says, “It is calculated …”, the “it” pretty surely refers to “mean residence time”. But that is not what “it” references in the third sentence. He might intend for the second “it” to refer to his “average molecule”. This is the grammatical error of a faulty pronoun reference, and it contributes to his scientific and mathematical error.
In his opening piece on 6/7/10, Willis Eschenbach said,
>>Suppose we put a pulse of CO2 into an atmospheric system which is at some kind of equilibrium. The pulse will slowly decay, and after a certain time, the system will return to equilibrium. This is called “exponential decay”, since a certain percentage of the excess is removed each year. The strength of the exponential decay is usually measured as the amount of time it takes for the pulse to decay to half its original value (half-life) or to 1/e (0.37) of its original value (e-folding time). The length of this decay (half-life or e-folding time) is much more difficult to calculate than the residence time. The IPCC says it is somewhere between 90 and 200 years. I say it is much less, as does Jacobson.
We can excuse Mr. Eschenbach’s use of the word equilibrium, even though IPCC seriously bungles the concept repeatedly, because Mr. E. says it’s “some kind of equilibrium”. Out of kindness, we can read “some kind of equilibrium” to mean steady state. However, Mr. E. didn’t even need the assumption to explain the decay of a pulse. He need not have introduced the state of the system. Later, Mr. E. says, “there is no half-life of a pulse when a system is at equilibrium.” When a pulse is added to a system, it will, as described below, have a half-life regardless of the state of the system. What counts is the mass of the pulse and its vanishing at a rate proportional to its remaining mass.
Mr. E. claims that “a certain percentage of the excess is removed each year.” First, he doesn’t mean the “excess”. There is no excess in his problem, and he is left with a “certain percentage” of an uncertain thing. In many basic physical problems, the rate of increase or decrease is proportional to the instantaneous parameter value, whether mass, quantity, or size. Dissolution of a gas into a liquid and the emptying of a reservoir are relevant examples. This is known from physics and experiment, and is not a consequence of the problem as Mr. E. has described it. When the rate is proportional to the total remaining, the solution is unique and it is the exponential.
Mr. E. suggested I “Read the IPCC definition again.” Here it is, again, for all to read:
>>Turnover time (T) (also called global atmospheric lifetime) is the ratio of the mass M of a reservoir (e.g., a gaseous compound in the atmosphere) and the total rate of removal S from the reservoir: T = M / S.
IPCC goes on to equate T to mean residence time — sometimes. Now IPCC never uses Turnover time in the main body of either its 3rd or 4th Assessment Report, so it doesn’t matter to its writings that the definition is incomplete. In the case of CO2 being dissolved into water, the conventional model is that the rate of removal S is equal to a constant times the instantaneous mass remaining in the reservoir. So S = kM. This is the situation Mr. E. should have in mind, but can’t express, when he says “a certain percentage … is removed”. This has the effect of making T = 1/k.
Now the rate of change of the mass M is minus the removal rate S. So we use differential calculus to write dM/dt = -S = -kM, where k is positive and is called the decay constant. This equation is easily solved: separating variables, the integral of dM/M equals the integral of -k dt, and the solution is ln(M) = -kt + constant. Using the obvious value for the constant and obvious notation, the result is M(t) = M_0 exp(-kt).
We write M(t_1/2) = M_0/2, so that exp(-k t_1/2) = 1/2, and thus t_1/2, the half-life, equals ln(2)/k. It doesn’t matter how big the mass is, all the way down to one molecule.
Similarly, we write M(t_e) = M_0/e = M_0 exp(-k t_e). Thus the e-folding time t_e = 1/k.
Now the remarkable thing is that the e folding time is equal to the mean residence time as defined by IPCC. This is true for one molecule or for Avogadro’s number of molecules or for 800 PgC worth of CO2 in the atmosphere in some unknown isotopic mix. This is not an average molecule.
Just to be sure, we can compute the average lifetime of a molecule in the reservoir being dissolved. First we need the normalized distribution of lifetimes at time t, which is k exp(-kt). So the average is the integral of t · k exp(-kt) dt from 0 to ∞, which is demonstrated in first-year calculus to be 1/k. It is the e-folding time, the average lifetime of a molecule in a slug, the average lifetime of a hypothetical molecule in general, the average lifetime of the slug, the reciprocal of the decay constant (k), the mean residence time, and the (instantaneous) turnover time, T, all at the same time.
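The identities in this derivation are easy to confirm numerically; the decay constant and initial mass below are arbitrary illustrative values:

```python
import math

# Numeric check of the derivation: for M(t) = M0 * exp(-k t), the
# half-life is ln(2)/k, the e-folding time is 1/k, and the mean lifetime
# (integral of t * k*exp(-k t) dt from 0 to infinity) is also 1/k.
k = 0.25      # decay constant, 1/yr (arbitrary)
M0 = 800.0    # initial mass (arbitrary units)

def M(t):
    return M0 * math.exp(-k * t)

t_half = math.log(2) / k
t_e = 1.0 / k
assert abs(M(t_half) - M0 / 2) < 1e-9    # half the mass remains
assert abs(M(t_e) - M0 / math.e) < 1e-9  # 1/e of the mass remains

# Mean lifetime by brute-force integration of t * k * exp(-k t):
dt = 0.001
mean_life = sum(i * dt * k * math.exp(-k * i * dt) * dt
                for i in range(int(200 / dt)))
print(f"t_half = {t_half:.3f}, t_e = {t_e:.3f}, mean lifetime = {mean_life:.3f}")
```

The brute-force integral lands on 1/k, confirming that the mean lifetime and the e-folding time are one and the same quantity.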
I write a lot of words and provide a lot of references and cite a lot of sources because that’s what it takes to untangle what you write.

June 21, 2010 4:54 pm

Steven Wilde on 6/20/10 at 10:53 said,
>>I like that description but how does it square with the similar record at Barrow and other locations?
I rely almost exclusively on IPCC reports, and on papers cited there. I did not find Point Barrow CO2 there. My opinion about the Baring Head CO2 record is the same as that for the South Pole, below.
Re Ferdinand Engelbeen 6/21/10 at 8:36 am
I did not say that the CO2 flow originating at the Equator was the cause of the seasonal fluctuations. My model says that those fluctuations are due to the seasonal fluctuations in the prevailing wind at MLO, which modulate standing waves or gradients in the atmospheric CO2 concentration there.
The model that the CO2 seasonal fluctuations at MLO are due to terrestrial vegetation cycles is IPCC’s model, but it is a weak conjecture. Seasonal fluctuations are seen all over the globe, but they are not in sync. Investigators are still testing the terrestrial model for CO2 fluctuations. You might want to look at Keeling, CD, The Concentration and Isotopic Abundances of Carbon Dioxide in the Atmosphere, Tellus, v. 12, no. 2, 6/60, and Manning, AC and RF Keeling, Correlations in Short-Term Variations in Atmospheric Oxygen and Carbon Dioxide at Mauna Loa Observatory, a Scripps publication, 11/8/60 for four decades of uncertainty in the model.
That the modulation is terrestrial seems implausible because of the massive natural flow that should be dominating MLO concentrations.
At the same time, the MLO concentrations reported by IPCC and in the journals are simply too pat. The records there and at the South Pole look as though someone had deconstructed the data into a roughly exponential trend line plus a seasonal component, smoothed them both, put them through a recalibration, and then reassembled them. Each series is too perfect, and the comparisons too coincidental. Real data don’t do that.
A good argument can be made that the temperature reconstruction in the Vostok record is global. That is not true of the CO2 record, which is local, sampled inside the CO2 sink of the Antarctic. Both are heavily smoothed, low-pass filtered by the firn closure time. The fact that all ice core records exhibit a hockey stick effect when merged into the instrument record is likely due to the difference in filtering plus IPCC data doctoring. What varies with SST is the outgassing in the EEP, which then has a profound effect on MLO. The uptake at the poles is at a constant temperature of around 0ºC to 4ºC, which means the ice core records have low variance, independent of SST.
Henry’s Law must have a profound effect for two reasons. One is that it has not been repealed. The other is that the notion that the surface ocean is a bottleneck, making CO2 queue up in the atmosphere waiting for slow sequestration processes to make room for it, is based on the assumption that the stoichiometric equations of equilibrium apply. A far better model that requires repealing nothing is that the surface layer is a buffer for molecular CO2 that allows the flux with the atmosphere and the dissociation, currents, and sequestration all to proceed independently and at their own pace. Atmospheric CO2 is uncoupled with sequestration.
It’s also worth noting that IPCC’s model gives the absorption of CO2 three time constants. This is nonsense, tending to invalidate its model. It is equivalent to having three separate reservoirs and circuits for the CO2. The fastest time constant would dominate the emptying of the reservoir, and that time constant is much faster than IPCC’s fastest. Henry’s Law is a law of equilibrium, but we know from experience that the equilibration time for CO2 dissolution is instantaneous compared with even short term climate.
One more thing is worth stating. The replenishment of CO2 in the surface waters is quite slow because the cooling of the layer is slow as the waters find their way back to the poles. A reasonable model is that Henry’s Law is satisfied everywhere along the path.

Ferdinand Engelbeen
June 22, 2010 2:10 am

Jeff Glassman says:
June 21, 2010 at 2:46 pm
Turnover time (T) (also called global atmospheric lifetime) is the ratio of the mass M of a reservoir (e.g., a gaseous compound in the atmosphere) and the total rate of removal S from the reservoir: T = M / S.
Indeed, this is the right definition of turnover time. It is about the chance that any individual molecule (whatever its origin) in the atmosphere is caught by or exchanged with CO2 from another reservoir (oceans, vegetation). In the current world, there is an exchange of about 150 GtC/year between the different reservoirs (both ways), thus about 20% of the atmospheric CO2 content (800 GtC) is exchanged with the other reservoirs each year.
Of course that is not used by the IPCC, as the turnover, no matter how large, has very little influence on how much CO2 resides in the atmosphere. What matters is what happens if you add an extra amount of CO2 to the atmosphere, whatever the source. That is governed by the pCO2 difference between the oceans and the atmosphere, which is positive near the equator, negative near the poles, and seasonally (temperature) dependent in between. That currently gives a sink rate of about 2 GtC/year of CO2 into the upper oceans and about 1 GtC/year into vegetation. See:
http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf
Thus the turnover time (which is based on the 150 GtC exchange rate) has simply nothing to do with the decay time of an extra mass of CO2 brought into the atmosphere (which is based on the 3 GtC/year removal rate), which is much longer than the turnover time.
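The two timescales contrasted here can be computed directly from the numbers in this comment; the excess above the pre-industrial level is a rough figure:

```python
# Turnover vs excess-removal timescale, using this comment's numbers.
# The ~220 GtC excess above the ~580 GtC pre-industrial level is rough.
M = 800.0        # GtC, current atmospheric CO2
S_GROSS = 150.0  # GtC/yr, gross two-way exchange with oceans and vegetation
S_NET = 3.0      # GtC/yr, net sink (~2 oceans + ~1 vegetation)

turnover_time = M / S_GROSS        # IPCC's T = M / S, based on gross exchange
excess = M - 580.0                 # GtC above the old equilibrium
removal_scale = excess / S_NET     # crude timescale for removing the excess

print(f"turnover ≈ {turnover_time:.1f} yr, excess removal ≈ {removal_scale:.0f} yr")
```

The gross exchange gives a turnover of about five years, while removing the excess mass at the net sink rate takes on the order of decades — the two numbers answer different questions.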

Ferdinand Engelbeen
June 22, 2010 3:28 am

Jeff Glassman says:
June 21, 2010 at 4:54 pm
My model says that those fluctuations are due to the seasonal fluctuations in the prevailing wind at MLO, which modulate standing waves or gradients in the atmospheric CO2 concentration there.
As the data from a lot of stations show the same pattern (and a reversed pattern in the SH), the assumption that prevailing winds cause the seasonal variability at MLO is questionable:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/month_2002_2004_4s.jpg
As all exchanges are at the surface, it is normal that the largest seasonal changes are seen near ground and that there is some lag with altitude (here for the NH):
http://www.ferdinand-engelbeen.be/klimaat/klim_img/seasonal_height.jpg
That the modulation is terrestrial seems implausible because of the massive natural flow that should be dominating MLO concentrations.
While the flows are massive, they are composed of a relatively constant term: emissions from the warm equator and sinks near the poles are rather constant, since the SSTs near the equator and near the poles don’t change much over the seasons; only their position shifts somewhat. The main variability is in the mid-latitudes, where temperature and biolife govern the pCO2 of the oceans. But the seasonal trend is opposite to what one would expect from higher summer temperatures: CO2 is lower in summer than in winter. Thus the variability is caused by vegetation, not by the oceans, as the O2 and 13C trends also show. Even if these (still) have large margins of error, the mass trend is clear.
At the same time, the MLO concentrations reported by IPCC and in the journals is simply too pat.
Before you accuse someone of manipulating the data, please have a look at the (raw) data yourself. These are available on line for four stations: Barrow, Mauna Loa, Samoa and South Pole:
ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/
These are the calculated CO2 levels, based on two 20-minute series of 10-second voltage snapshots from the cell, plus a few minutes of voltages measured from three calibration gases. Both the averages and standard deviations of the calculated snapshots are given. These data are not changed in any way and simply give the average CO2 level + stdv of the past hour.
Some of the data are “flagged”: if the stdv within an hour is high, if the difference between subsequent hours is high, under upwind conditions, etc. These “flagged” data are excluded from daily, monthly and yearly averaging, because they represent local contamination, and only data deemed “background” are used for averaging. Does that influence the average and trend? Hardly. With or without outliers, the shape and trend are hardly different, only less variable around the (seasonal) trend:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/mlo2004_hr_raw.jpg
for 2004 including all data
http://www.ferdinand-engelbeen.be/klimaat/klim_img/mlo2004_hr_selected.gif
excluding “flagged” data.
Please check it yourself…
More later…

Ferdinand Engelbeen
June 22, 2010 4:11 am

In addition:
The detailed measurement, calibration and selection procedures for MLO (and other stations) are available at:
http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html

Ferdinand Engelbeen
June 22, 2010 5:03 am

Further reactions…
A good argument can be made that the temperature reconstruction in the Vostok record is global. That is not true of the CO2 record, which is local, sampled inside the CO2 sink of the Antarctic.
The CO2 record is even more global: today there is hardly any difference in CO2 levels between the North Pole and the South Pole for 95% of the atmosphere (only over land, up to about 1,000 m altitude, is there a lot of noise). There is only a small (14-month) lag between the NH and the SH if one looks at yearly averages; see the difference between the yearly averages of several stations:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends.jpg
Thus the Vostok CO2 record simply represents the global CO2 levels of that time, be it smoothed over about 600 years, as that is the time that all gas bubbles need to be fully closed.
The fact that all ice core records exhibit a hockey stick effect when merged into the instrument record is likely due to the difference in filtering plus IPCC data doctoring.
There is not the slightest role of the IPCC in this case, as the ice core data are sampled by different drilling and measurement teams from different countries. That the different ice cores show the same hockeystick, despite huge differences in accumulation rate, temperature, (coastal) salt inclusions, etc., only strengthens the case that there is a real hockeystick (confirmed by other proxies like d13C changes in sponges). The ice cores with the best resolution (8 years) show the same trend as those with the worst (600 years) for the same average gas age, and overlap by some 20 years with the direct data from the South Pole:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/antarctic_cores_001kyr_large.jpg
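The resolution point can be illustrated with a toy smoothing exercise: a synthetic flat-then-rising CO2 series passed through 8-year and 600-year moving averages, standing in for bubble close-off smoothing. The series itself is invented purely for illustration:

```python
# Toy demonstration: a slow "hockeystick" rise survives both an 8-year
# and a 600-year smoothing filter, though the longer filter attenuates
# its recent end. The synthetic series is an illustrative assumption.
def moving_average(series, window):
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

years = list(range(1000, 2011))  # AD 1000-2010, yearly values
# Flat ~280 ppmv until 1850, then rising ~0.7 ppmv/yr (stylized numbers):
co2 = [280.0 if y < 1850 else 280.0 + (y - 1850) * 0.7 for y in years]

smooth_8 = moving_average(co2, 8)      # high-resolution core
smooth_600 = moving_average(co2, 600)  # low-resolution core

print(f"2010: raw {co2[-1]:.0f}, 8-yr {smooth_8[-1]:.0f}, "
      f"600-yr {smooth_600[-1]:.0f} ppmv")
```

The 8-year filter tracks the rise almost exactly, while the 600-year filter flattens and delays it; both still end clearly above the flat pre-industrial level, so a genuine long rise is not an artifact of the filtering difference.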
Henry’s Law must have a profound effect for two reasons. One is that it has not been repealed. The other is that the notion that the surface ocean is a bottleneck, making CO2 queue up in the atmosphere waiting for slow sequestration processes to make room for it, is based on the assumption that the stoichiometric equations of equilibrium apply.
Again you are missing a lot of factors in the exchange of CO2 between oceans and atmosphere. Henry’s Law still works, but the amount of free CO2 at the surface is influenced not only by temperature but by a host of other factors. Ultimately it is the real partial pressure of free CO2 in the last few cm of water that decides which way CO2 will go: into or out of the water, depending on whether it is higher or lower than the pCO2 of the atmosphere. And even then, the flux involved is secondary to wind speed: even if the upper few cm are rapidly in equilibrium with the atmosphere, the diffusion speed to supply CO2 for more emission or to take in CO2 for more uptake is very, very slow. It is the wind and waves mixing the layers that rule the uptake/release speed.
It’s also worth noting that IPCC’s model gives the absorption of CO2 three time constants. This is nonsense, tending to invalidate its model. It is equivalent to having three separate reservoirs and circuits for the CO2.
In fact there are three reservoirs involved: the ocean surface (+ vegetation), the deep oceans (+ longer term coalification of vegetation) and (silicate) rock weathering. That doesn’t mean that I agree with the Bern model, as the second and third term are only of interest if we should burn all available oil and a lot of coal. Then one would see an important increase of even the deep ocean CO2 content, which shows up in the equatorial upwelling of the THC.
But your fastest time constant is wrong, as it is based on the residence time, not the decay time needed to reduce any excess CO2 above (dynamic) equilibrium.

June 22, 2010 7:42 am

Re Ferdinand Engelbeen 6/22/10 at 5:03 am said:
>>The CO2 record is even more global: today there is hardly any difference in CO2 levels between the North Pole and the South Pole, for 95% of the atmosphere (only over land up to up to 1,000 m there is a lot of noise). There is only a small (14 months) lag between the NH and the SH if one looks at yearly averages, see the difference between the yearly averages of several stations:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends.jpg
>>Thus the Vostok CO2 record simply represents the global CO2 levels of that time, be it smoothed over about 600 years, as that is the time that all gas bubbles need to be fully closed.
Even if what you say were true about North Pole and South Pole data, that does not make any South Pole data, today or in the paleo past global. That is not supported by logic. IPCC admits that a measurable east-west gradient exists in global CO2, and that the north-south gradient is 10 times as great as the east-west gradient. That observation is part of the data reduced by IPCC by which it concluded that a manmade catastrophe is looming due to man’s CO2 emissions. That conclusion is false in part because CO2, by IPCC’s own admissions, is not well-mixed.
As you have pointed out from time to time, local measurements can be quite different than what is assumed for the global CO2 concentration. MLO is local. It sits in the plume of by far the largest source of CO2, more than an order of magnitude greater than what man contributes. IPCC alters its data to make MLO look smooth, and then alters the records at the South Pole and Baring Head to overlap the MLO and to look indistinguishable in the trend line.
By the way, the smoothing in the ice core data ranges from 20 years according to one report, but more generally from 30 years up to a millennium and a half.
You say,
>>>> The fact that all ice core records exhibit a hockey stick effect when merged into the instrument record is likely due to the difference in filtering plus IPCC data doctoring.

>> There is not the slightest role of the IPCC in this case, as the ice core data are sampled by different drilling and measurement teams from different countries.
To the contrary, and as my remark stated, IPCC merged the ice core records into the instrument records. See TAR SPM, Figure 2, p. 6 (rocketscientistsjournal.com, “SGW”, Figure 34), AR4, Figure SPM.1, (“SGW”, Figure 35). This is unacceptable science. What the laboratories did in creating laboratory data is quite unimportant. The fraud starts with the IPCC.
You wrote,
>>Henry’s Law still is working, but the amount of free CO2 at the surface is not only influenced by temperature, but by a host of other factors. At last it is the real partial pressure of free CO2 in the last few cm of water which decides which way CO2 will go: in or out of the waters, if the difference with pCO2 of the atmosphere is higher or lower.
Actually the partial pressure of a gas in water is a fiction. It is taken to be the partial pressure of the gas in the gas state in contact with the water and in equilibrium with it. It is a laboratory concept, but good enough for climate work. What counts in dissolution is the partial pressure of the gas in the atmosphere, the solute, and the temperature of the water, the solvent.
The last few cm of water are relevant only to the extent that that microlayer might represent the surface layer. The surface layer, which is also known as the mixed layer, is a roiling turmoil running to a nominal depth of 50 m to 150 m or so. The wave action on the surface is the top of a larger overturning action, which is wind dependent. Of course, sometimes the wind is nil, and sometimes it is a storm. What counts is the average action over the surface as it migrates poleward, absorbing more and more CO2 along its path. The surface layer is thoroughly mixed with entrained air taken in from the surface. It is a perpetual, dynamic sampling mechanism by which surface air is captured and absorbed. The notion of a significant layer a few centimeters thick fits a stagnant pond.
You seem to agree when you write,
>>And even then, the flux involved is secondary to wind speed: even if the upper few cm are rapidely in equilibrium with the atmosphere, the diffusion speed to supply CO2 for more emissions or take in CO2 for more uptake is very, very slow. It is the wind/waves which mixes the layers which rules the uptake/release speed.

The surface layer is taking up CO2 as it cools, as Henry’s Law informs us. If you are talking about diffusion across the air-water boundary, it is extremely rapid – instantaneous on any climate scale. If you are talking about diffusion from the surface layer to deep water, it is irrelevant to and unsensed by the atmosphere. The ocean is, contrary to IPCC’s strictly equilibrium analysis, a buffer of surplus molecular CO2 that allows Henry’s Law to operate with the atmosphere, and the stoichiometric equations to operate along with the vertical ocean currents and the sequestration processes.
Ocean emissions occur when the water is heated. That occurs principally at the upwellings, and especially in the Eastern Equatorial Pacific. Small variations occur over the surface as shown by Takahashi. These are minor fluctuations in the larger pattern going from tropical temperatures to polar temperatures, spanning almost the full range of the solubility curve. As I said before, Takahashi got the net flux right. He did not get the 90 to 100 PgC/yr up and down fluxes right. The Takahashi diagram of AR4 Figure 7.8, p. 523 (rocketscientistsjournal.com, “On Why CO2 Is Known Not To Accumulate in the Atmosphere, etc.”, Figure 1), is not in accord with the carbon cycle of AR4 Figure 7.3, p. 515. One way to bring the Takahashi analysis into agreement with the carbon cycle is to recalibrate it as shown in Figure 1a, id.
You say,
>> But your fastest time constant is wrong, as it is based on the residence time, not the decay time needed to reduce any excess CO2 above (dynamic) equilibrium.

In the same spirit of repetitiveness, no decay time is involved. Dynamic and equilibrium are contradictory. You might be referring to a dynamic steady state, which is acceptable on climate scales as a description of the surface layer.
You are trying to defend IPCC’s model that ACO2 accumulates in the atmosphere to drive the climate, while nCO2 does not accumulate, but remains in a perpetual dynamic steady state. This model is indefensible.

Ferdinand Engelbeen
June 22, 2010 2:00 pm

Jeff Glassman says:
June 22, 2010 at 7:42 am
Even if what you say were true about North Pole and South Pole data, that does not make any South Pole data, today or in the paleo past global. That is not supported by logic. IPCC admits that a measurable east-west gradient exists in global CO2, and that the north-south gradient is 10 times as great as the east-west gradient.
The NH-SH gradient in yearly averages is less than 5 ppmv on a level of 390 ppmv or less than 2%. I call that well-mixed, which is true for 95% of the atmosphere, from the North Pole to the South Pole, including MLO. Well mixed doesn’t imply that at all places on earth at the same time one can find exactly the same levels. But away from huge sources and sinks, within a reasonable mixing time, the levels are within small boundaries.
Thus if the South Pole data are within 2% of the North Pole data within a year, with increasing emissions in the NH today, I may assume that the ice core data from Antarctica represent 95% of the ancient atmosphere, be it smoothed over a long(er) period.
IPCC alters its data to make MLO look smooth, and then alters the records at the South Pole and Baring Head to overlap the MLO and to look indistinguishable in the trend line.
Sorry, this is a bridge too far. The raw hourly data, unadulterated and unchanged in any way, including all outliers, are available for checking and comparison by anyone like you and me, at least for four stations, including MLO (I even received a few days of the raw 10-second voltage data for a check of the calculations, on simple request). If you have any proof that the data were changed by anyone, or even that the selection procedure has any significant influence on the levels, averages or trends, well then we can discuss that. If you have no proof, then this is simply slander.
To the contrary, and as my remark stated, IPCC merged the ice core records into the instrument records. This is unacceptable science. What the laboratories did in creating laboratory data is quite unimportant. The fraud starts with the IPCC.
The merging of the ice core data and the instrument record is acceptable, as that is based on a 20 year overlap of the ice cores from Law Dome with the South Pole direct measurements. There is no difference between CO2 at the South Pole and at the top layer of the firn at the drilling site. There is a small gradient (about 10 ppmv) in the firn from top to start closing depth, which means that the CO2 level at closing depth is about 7 years older than in the atmosphere and there is no difference in CO2 levels between open (firn) air and already closed bubbles in the ice at closing depth. The average closing time is 8 years. See:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/law_dome_overlap.jpg
The original article from Etheridge e.a. from 1996 (unfortunately after a paywall) is at:
http://www.agu.org/pubs/crossref/1996/95JD03410.shtml
Further, there are successive overlaps between ice cores, all within a few ppmv for the same gas age. That means that CO2 levels were (smoothed) between 180 and 310 ppmv for the past 800,000 years, except for the past 150 years, where they reached 335 ppmv in 1980 (ice cores) to 390 ppmv in 2010 (firn and air). Nothing to do with the (much) higher levels of many millions of years ago, when geological conditions were quite different.
As there is a rather linear correlation between temperature and CO2 (Vostok, Dome C) of about 8 ppmv/K, there is no reason to expect that the current increase is caused by temperature (which should imply a 12 K increase to explain the 100 ppmv increase of the past 150 years).
Actually the partial pressure of a gas in water is a fiction. It is taken to be the partial pressure of the gas in the gas state in contact with the water and in equilibrium with it.
The partial pressure of CO2 in water may be a fiction (I don’t think so), but the equilibrium with the air above has been measured for many decades, nowadays continuously from ships, and is the driving force for uptake or release of CO2 from/to the air above it. Much more realistic than some theoretical calculation from Henry’s Law which doesn’t take into account factors other than temperature. It can also be calculated from all other components, including temperature. See:
http://cat.inist.fr/?aModele=afficheN&cpsidt=1679548
As I said before, Takahashi got the net flux right. He did not get the 90 to 100 PgC/yr up and down fluxes right.
As Al Tekhasski said under a previous thread, the Feely e.a. calculation is wrong, as they used (local) averages of the pCO2 difference and wind speed for the calculation, while one needs the average of the momentary pCO2 difference times the momentary wind speed, which is quite different…
In the same spirit of repetitiveness, no decay time is involved. Dynamic and equilibrium are contradictory. You might be referring to a dynamic steady state, which is acceptable on climate scales as a description of the surface layer.
As a (retired) chemical engineer, I have seen hundreds of chemical reactions in dynamic equilibrium, where the equilibrium is shifted by changing the concentration of one of the components and/or temperature and/or pressure. The whole CO2 cycle behaves as a simple linear first-order process, where the “normal” equilibrium was ruled by temperature, now disturbed by the addition of more CO2 from outside the normal cycle. The fact that only half the amount added shows up in the atmosphere, in an extremely linear way for at least the past 100 years, supports that “model”:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_co2_1900_2004.jpg
Or do you think that any natural process would follow the human emissions in such an exact way?
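The linear first-order behaviour described here can be sketched numerically. This is a toy model, not a fit: the decay constant, starting emission and growth rate below are illustrative assumptions only.

```python
# Toy linear first-order CO2 model: the excess above a temperature-set
# equilibrium decays at a fixed fractional rate while emissions add to it.
# All numbers are illustrative assumptions, not fitted values.

def simulate(emissions, years, k=0.02):
    """Integrate C[t+1] = C[t] + E(t) - k*C[t], C = excess above equilibrium (GtC)."""
    c = 0.0
    for t in range(years):
        c += emissions(t) - k * c
    return c

# Exponentially growing emissions, roughly like the historical record:
emis = lambda t: 1.0 * 1.02 ** t        # GtC/yr, growing 2%/yr (assumed)

excess = simulate(emis, 150)
cumulative = sum(emis(t) for t in range(150))

# With steadily growing emissions, the airborne fraction
# (atmospheric increase / cumulative emissions) settles near a constant,
# here close to one half -- the observed linearity.
airborne_fraction = excess / cumulative
```

With these assumed rates the airborne fraction stays near one half throughout, which is the point being made: a constant fraction of the emissions "remaining" airborne is exactly what a linear first-order response to growing emissions looks like.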
You are trying to defend IPCC’s model that ACO2 accumulates in the atmosphere to drive the climate, while nCO2 does not accumulate, but remains in a perpetual dynamic steady state. This model is indefensible.
I never said that aCO2 is accumulating and nCO2 not; to the contrary. In fact aCO2 in the first instance adds to the total mass (thus is the cause of the increase), but is readily exchanged with nCO2 on short terms. The exchange itself doesn’t change the total mass, but about half of the initial increase (a mix of aCO2 and nCO2) is absorbed by oceans and vegetation, because a slightly higher pCO2 in the atmosphere decreases the outgassing at warm places (where the temperature/Henry’s Law/pCO2 didn’t change) and increases the uptake at colder places (something similar for vegetation alveoles).
You are trying to prove that the extra human CO2 added doesn’t accumulate in the atmosphere (as mass! not as individual molecules), even though all known observations support that it does, probably because that is one of the cornerstones of the AGW hypothesis. Without accumulation, the AGW hypothesis fails.
I try to see what can and cannot be supported on scientific grounds; in the case of accumulation, the IPCC is right. But for the other cornerstone, the effect of the accumulation, the IPCC is wrong (IMHO).

Ferdinand Engelbeen
June 22, 2010 2:19 pm

Jeff Glassman says:
I need to place the answer in two parts (too many links?)
Even if what you say were true about North Pole and South Pole data, that does not make any South Pole data, today or in the paleo past global. That is not supported by logic. IPCC admits that a measurable east-west gradient exists in global CO2, and that the north-south gradient is 10 times as great as the east-west gradient.
The NH-SH gradient in yearly averages is less than 5 ppmv on a level of 390 ppmv or less than 2%. I call that well-mixed, which is true for 95% of the atmosphere, from the North Pole to the South Pole, including MLO. Well mixed doesn’t imply that at all places on earth at the same time one can find exactly the same levels. But away from huge sources and sinks, within a reasonable mixing time, the levels are within small boundaries.
Thus if the South Pole data are within 2% of the North Pole data within a year, with increasing emissions in the NH today, I may assume that the ice core data from Antarctica represent 95% of the ancient atmosphere, be it smoothed over a long(er) period.
IPCC alters its data to make MLO look smooth, and then alters the records at the South Pole and Baring Head to overlap the MLO and to look indistinguishable in the trend line.
Sorry, this is a bridge too far. The raw hourly data, unadulterated and unchanged in any way, including all outliers, are available for checking and comparison by anyone like you and me, at least for four stations, including MLO (I even received a few days of the raw 10-second voltage data for a check of the calculations, on simple request). If you have any proof that the data were changed by anyone, or even that the selection procedure has any significant influence on the levels, averages or trends, well then we can discuss that. If you have no proof, then this is simply slander.
To the contrary, and as my remark stated, IPCC merged the ice core records into the instrument records. This is unacceptable science. What the laboratories did in creating laboratory data is quite unimportant. The fraud starts with the IPCC.
The merging of the ice core data and the instrument record is acceptable, as it is based on a 20-year overlap of the ice cores from Law Dome with the South Pole direct measurements. There is no difference between CO2 at the South Pole and at the top layer of the firn at the drilling site. There is a small gradient (about 10 ppmv) in the firn from the top down to the start of the closing depth, which means that the CO2 level at closing depth is about 7 years older than in the atmosphere. There is no difference in CO2 levels between open (firn) air and already-closed bubbles in the ice at closing depth. The average closing time is 8 years. See:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/law_dome_overlap.jpg
The original article from Etheridge e.a. from 1996 (unfortunately behind a paywall) is at:
http://www.agu.org/pubs/crossref/1996/95JD03410.shtml
Further, there are successive overlaps between ice cores, all within a few ppmv for the same gas age. That means that CO2 levels were (smoothed) between 180 and 310 ppmv for the past 800,000 years, except for the past 150 years, when they reached 335 ppmv in 1980 (ice cores) to 390 in 2010 (firn and air). Nothing to do with the (much) higher levels many millions of years ago, when geological conditions were quite different.
As there is a rather linear correlation between temperature and CO2 (Vostok, Dome C) of about 8 ppmv/K, there is no reason to expect that the current increase is caused by temperature (which should imply a 12 K increase to explain the 100 ppmv increase of the past 150 years).
More in a second message.

Ferdinand Engelbeen
June 23, 2010 2:45 am

Sorry for the repeated (parts of my) message; something went wrong at posting…

June 23, 2010 10:41 pm

Ferdinand Engelbeen on 6/22/10 at 2:29 pm said,
>> The NH-SH gradient in yearly averages is less than 5 ppmv on a level of 390 ppmv or less than 2%. I call that well-mixed, which is true for 95% of the atmosphere, from the North Pole to the South Pole, including MLO. Well mixed doesn’t imply that at all places on earth at the same time one can find exactly the same levels. But away from huge sources and sinks, within a reasonable mixing time, the levels are within small boundaries.
And
>>>>IPCC alters its data to make MLO look smooth, and then alters the records at the South Pole and Baring Head to overlap the MLO and to look indistinguishable in the trend line. 

>>Sorry, this is a bridge too far. The raw hourly data, unadulterated and unchanged in any way, …

IPCC graphed its processed MLO and Baring Head records in AR4, Figure 2.3. These are not the hourly data. The bridge is one you built.
So while your observations about the value of the hourly data are encouraging, limiting the depth of IPCC’s fraud, they are quite irrelevant. In Figure 2.3, the trends at Baring Head and MLO follow one another within about one line width of the plots. A line width is 0.404 ppm(v), not 5 ppmv. That is more than an order of magnitude less than your figure, and provides an indication of what well-mixed means to IPCC. While IPCC relies heavily on the well-mixed conjecture, it never defines it and only implies it by the graphs that it manufactures.
The overlap of the Baring Head and MLO records is not credible, nor is the smoothness of the MLO record all by itself. These kinds of results do not occur in nature; they are the product of heavy smoothing and “calibrations”, by which is meant data fudging.
You will find an analysis of Figure 2.3 at rocketscientistsjournal.com, SGW, Figure 27. Especially interesting with respect to fraud or slander is part (b) of that figure. IPCC scaled and shifted the graph of delta 13C to parallel the emissions record, and then concluded that it had found a human fingerprint of ACO2 in the CO2 measurements. That should be a criminal offense.
Earlier I brought various IPCC calibrations to your attention. They include interstation calibrations which are “becoming progressively more extensive and better intercalibrated.” IPCC reports do not include its calibration data, its smoothing algorithms for CO2 stations, or its CO2 data reconstruction methods. The CRU documents revealed an IPCC algorithm that adjusted data to look more like the instrument record. IPCC’s results are not credible on their face, and a little investigation reveals an effort to extract trillions of dollars from world governments and to cripple industry. There’s no slander here, but instead a record of a dozen abuses of science and dishonesty.
You wrote,
>> As there is a rather linear correlation between temperature and CO2 (Vostok, Dome C) of about 8 ppmv/K, there is no reason to expect that the current increase is caused by temperature (which should imply a 12 K increase to explain the 100 ppmv increase of the past 150 years).

We’ve had this discussion previously. See rocketscientistsjournal.com, “The Acquittal of Carbon Dioxide”. As shown there, a far better fit than linear or even quadratic to the relationship between CO2 and temperature in the Vostok record is a fit to the solubility curve.
You wrote,
>> The partial pressure of CO2 in water may be a fiction (I don’t thinks so), but the equilibrium with the air above is measured (since many decades) nowadays continuously on seaships and is the driving force for uptake or release of CO2 from/to the air above it. Much more realistic than some theoretical calculation from Henry’s Law which doesn’t take into account other factors than temperature.
To the contrary, Henry’s Law and Henry’s Coefficients also depend on pressure, wind, and salinity. A reasonable conjecture is that the coefficients might also depend on molecular weight, yet a fifth order dependence. And if IPCC’s conjecture were true, it would also depend on ionic concentrations in the solvent. But that is novel physics necessary to make AGW look feasible. You can’t escape from the reality of CO2 solubility in sea water by discarding it as theoretical.
Moreover, the solubility curve is evident in data used by IPCC. See rocketscientistsjournal.com, “The Acquittal of Carbon Dioxide”, Figure 21. Another example is IPCC’s Second Order Draft, Figure 7.3.10(a), where the solubility curve was uncovered in the Panel’s attempt to quantify the Revelle buffer factor. In the final version of AR4, IPCC concealed this relationship. See discussion, rocketscientistsjournal.com, Figures 3, 4, and 5. Solubility is not a conjecture to be discarded to justify AGW. It is the engine for the carbon cycle, which IPCC does not get right, any more than it does the hydrological cycle.
We have about 90 Gtons of CO2 coming out of the ocean each year. And if we used a mass balance equation, the 120 Gtons now attributed to terrestrial sources might prove to be a misattribution. It wouldn’t be the first IPCC misattribution. That 90 Gtons is not accounted for in the Takahashi diagram. Also, that diagram gives you support for your conjecture that not much is going on through the law of solubility. That natural emission comes out of the water because of the law of solubility. It’s around 15 times man’s emissions.
Solubility would be key in the climate model, if only CO2 were more significant as a greenhouse gas, and if only greenhouse gases were significant to global warming.

June 24, 2010 12:07 am

Ferdinand Engelbeen on 6/22/10 at 5:03 am said,
>> But your fastest time constant is wrong, as it is based on the residence time, not the decay time needed to reduce any excess CO2 above (dynamic) equilibrium.


I commented on this sentence earlier, but I would like to add a few items.
First, the decay time and the residence time are not directly comparable. Decay time refers to the time for a mass to reduce to a stated level. For our purposes here, the decay is exponential. Time in the exponent is made dimensionless by multiplying by what is called the decay constant, or equivalently by dividing by one of the characteristic times: the half-life (scaled by dividing by ln(2)), the e-folding time, the average lifetime, or the turnover time. The average lifetime is the average time before dissolution of the molecules, and it is equal to the e-folding time. Both are equal to the reciprocal of the decay constant. Decay time is a function of time, while all the other terms are constants, characteristic of the process.
Second, decay time is the time to reduce the mass to a stated level, and has nothing to do with the state of its surroundings. Neither the atmosphere nor the decaying CO2 is ever in equilibrium, nor is such an assumption needed.
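The relations among these characteristic times are easy to verify for a generic first-order decay; the decay constant below is an arbitrary illustrative value, not a number from the thread.

```python
import math

# Generic first-order decay M(t) = M0 * exp(-k*t); k is an arbitrary
# illustrative decay constant (1/yr).
k = 0.25
e_folding = 1.0 / k              # = average lifetime = turnover time
half_life = math.log(2) / k      # = e_folding * ln(2)

def mass(t, m0=1.0):
    return m0 * math.exp(-k * t)

# After one e-folding time the mass is 1/e of its start;
# after one half-life, exactly half.
assert abs(mass(e_folding) - 1.0 / math.e) < 1e-12
assert abs(mass(half_life) - 0.5) < 1e-12
```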
You wrote,
>> Thus the turnover time (which is based on the 150 GtC exchange rate) has simply nothing to do with the decay time of an extra mass of CO2 brought into the atmosphere (which is based on the 3 GtC/year removal rate), which is much longer than the turnover time.



This is not true. Turnover time, as you agreed, is defined as T = M/S, where M is the mass in the reservoir and S is the removal rate. For the process of dissolution, S = k*M, where k is the decay constant and M is the instantaneous mass. So if we write these parameters as functions of time, T(t) = M(t)/S(t) = M(t)/(k*M(t)), then T(t) = 1/k. Turnover time is a constant, and it’s equal to the residence time, etc. The key assumption in this model is that the rate of removal, S, is proportional to the instantaneous mass. This is the assumption that leads to the exponential as the unique functional solution.
Turnover time is not based on any numerical exchange rate or numerical removal rate. Also we should note that what is being discussed here is the rate of removal of a pulse of CO2 in the atmosphere. It is not the mass of CO2 in the atmosphere, because old pulses are being removed while new pulses are being added. To solve this problem, a mass balance analysis is required. Some writers have jumped from the mass of a pulse to the mass in the atmosphere, which is an unwarranted change in parameters that leads to the wrong conclusion about turnover time.
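The derivation above (T = M/S with S = k*M) can be checked with a few arbitrary reservoir sizes; the decay constant is an assumption for illustration.

```python
# If removal is proportional to the instantaneous mass, S(t) = k*M(t),
# then turnover time T = M/S collapses to the constant 1/k, whatever the
# reservoir size. The decay constant is an illustrative assumption.
k = 0.2                                 # 1/yr (assumed)
for m in (100.0, 800.0, 3000.0):        # GtC, arbitrary reservoir sizes
    s = k * m                           # removal rate, GtC/yr
    turnover = m / s
    assert abs(turnover - 1.0 / k) < 1e-12
```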
As I wrote above, the time constants, whether called residence times or average life times or e-folding times, that I computed were 1.5 years using IPCC data, 3.2 years using University of Colorado data, or 4.9 years using Texas A&M data. IPCC provides the following:
>> The CO2 response function used in this report is based on the revised version of the Bern Carbon cycle model used in Chapter 10 of this report (Bern2.5CC; Joos et al. 2001) using a background CO2 concentration value of 378 ppm. The decay of a pulse of CO2 with time t is given by
>> a(t) = a_0 + sum(a_i * exp(-t/tau_i), i = 1..3)
>> where a_0 = 0.217, a_1 = 0.259, a_2 = 0.338, a_3 = 0.186, tau_1 = 172.9 years, tau_2 = 18.51 years, and tau_3 = 1.186 years. AR4, Table 2.14, p. 213, fn. a.
IPCC’s fastest time constant is 1.186 years, even faster than mine.
This formula appears to have been from Archer (2005). AR4, ¶7.3.1.2, p. 514. The following associations seem reasonable: the fastest (1.186 years) refers to the surface layer as a reservoir in DIC form, the middle value (18.51 years) refers to intermediate water and carbon consumption by photosynthesis, and the slowest (172.9 years) refers to production of calcareous shells in the deep ocean. These are the solubility pump, the organic carbon pump, and the calcium carbonate pump. IPCC diagrams them in Figure 7.10, p. 530, a figure with several errors (e.g., backward arrows, “solution pump”). Functionally, however, the organic carbon pump and the calcium carbonate pump should not connect to the atmosphere, but instead to the surface layer. These chemical processes need to connect to ionized CO2, not gaseous CO2.
The formula is not feasible. It provides that 21.7% of atmospheric CO2 remains in the atmosphere forever. It then provides 18.6% to solubility, 33.8% to photosynthesis, and 25.9% to sequestration. No physical mechanism exists by which these four portions might be directed to the various pumps. What will happen is that the solubility pump will drain the atmosphere (neglecting, as always, replenishment) of any slug of CO2 at the rapid time constant until it is effectively reduced to zero. This won’t starve the other pumps, assuming they are correctly connected to the surface layer.
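The quoted Bern response function can be evaluated directly. This sketch uses the coefficients as cited from AR4 Table 2.14; only the helper name is mine.

```python
import math

# Bern2.5CC pulse-response coefficients as quoted from AR4 Table 2.14:
a0 = 0.217
a = (0.259, 0.338, 0.186)
tau = (172.9, 18.51, 1.186)             # years

def airborne(t):
    """Fraction of an initial CO2 pulse still airborne after t years."""
    return a0 + sum(ai * math.exp(-t / ti) for ai, ti in zip(a, tau))

# The fractions sum to 1 at t = 0; as t grows the remainder tends to a0,
# i.e. 21.7% of the pulse never decays in this formulation.
assert abs(airborne(0.0) - 1.0) < 1e-9
assert abs(airborne(10000.0) - a0) < 1e-6
```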

tonyb
Editor
June 24, 2010 4:24 am

Jeff Glassman
In my own thread about CO2 over at the Air Vent I added the various threads on the subject carried here more recently at WUWT.
http://noconsensus.wordpress.com/2010/03/06/historic-variations-in-co2-measurements/
You may find many of the comments of interest, but one in particular said that the CO2 outgassing (and presumably also the absorption) was some 8 ppmv per 1 degree C change in the temperature of the ocean.
Does that roughly equate to your 90Gton estimate?
tonyb

Ferdinand Engelbeen
June 24, 2010 12:16 pm

Jeff Glassman says:
June 23, 2010 at 10:41 pm
IPCC graphed its processed MLO and Baring Head records in AR4, Figure 2.3. These are not the hourly data. The bridge is one you built.
So while your observations about the value of the hourly data are encouraging, limiting the depth of IPCC’s fraud, they are quite irrelevant. In Figure 2.3, the trends at Baring Head and MLO follow one another within about one line width of the plots. A line width is 0.404 ppm(v), not 5 ppmv. That is more than an order of magnitude less than your figure, and provides an indication of what well-mixed means to IPCC.
The overlap of the Baring Head and MLO records is not credible, nor is the smoothness of the MLO record all by itself. These kinds of results do not occur in nature; they are the product of heavy smoothing and “calibrations”, by which is meant data fudging.

Again these are a lot of (false) accusations, for which you don’t give the slightest proof. The IPCC didn’t alter any data; the NOAA (and many others) sampled and filtered the data, rejecting any data which might be contaminated by local sources. If you are interested in volcanic outgassing, measure at the mouth of the gas vents. If you are interested in what vegetation takes up from the atmosphere, measure in the middle of the vegetation. If you are interested in background data, measure in the trade winds and don’t use the data which are contaminated by the former.
The criteria for rejecting data for inclusion in the averages are clear and predefined, not made after the results are known. But again, because you may not like it, in bold: the average and trend from all data, including all outliers, and the average and trend of the selected-only data, excluding all outliers, don’t differ by more than a few tenths of a ppmv. That includes the seasonal trend in the region where the station is located. So much for the “fraud of the IPCC”.
The difference between MLO and Baring Head is a few ppmv, if you take the MLO averages through the seasonal trend. The hourly data from Baring Head are not online, but these from Samoa and the South Pole are, which span the area in the SH from near the equator to the South Pole.
I have plotted the raw hourly data together with the selected (according to the pre-defined criteria) daily averages for Mauna Loa and Samoa:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_raw_select_2008.jpg
As you prefer the real scale: here the differences on full scale:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_raw_select_2008_fullscale.jpg
The 2008 average for the raw hourly data of Samoa is 384.00 ppmv; for the selected daily data it is 383.91. For Mauna Loa: raw 385.34, selected 385.49.
I suppose that I may say that at least for Mauna Loa and Samoa (but also for all other baseline stations) the atmosphere is very well mixed…
IPCC scaled and shifted the graph of delta 13C to parallel the emissions record, and then to conclude that it had a human fingerprint of ACO2 in the CO2 measurements. That should be a criminal offense.
I know your aversion to non-full-scale graphs, but even on full scale there is an extremely good correlation between the emissions, the increase in the atmosphere and ocean surface, and (inversely) the d13C levels. Which points to a human fingerprint…
Earlier I brought various IPCC calibrations to your attention. They include interstation calibrations which are “becoming progressively more extensive and better intercalibrated.” IPCC reports do not include its calibration data, its smoothing algorithms for CO2 stations, or its CO2 data reconstruction methods.
Again, you are accusing the IPCC of manipulation, although they have nothing to do with CO2 measurements. The calibration is done by NOAA and a lot of other laboratories from different organisations in different countries. That is about the calibration of the calibration gases and the equipment used. I know that it is scientific practice to do an intercalibration, so that the different equipment used in the world gives the same value for the same level of CO2 in a calibration gas. That has nothing to do with manipulation or biasing of the data to show similar results.
Further, as already delivered, the calibration and calculation procedures and the selection criteria for CO2 at MLO (and all baseline stations) are fully described in detail at:
http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html
More later…

Ferdinand Engelbeen
June 24, 2010 12:50 pm

Further discussion…
To the contrary, Henry’s Law and Henry’s Coefficients also depend on pressure, wind, and salinity. A reasonable conjecture is that the coefficients might also depend on molecular weight, yet a fifth order dependence. And if IPCC’s conjecture were true, it would also depend on ionic concentrations in the solvent. But that is novel physics necessary to make AGW look feasible. You can’t escape from the reality of CO2 solubility in sea water by discarding it as theoretical.
Wait a minute: the graphs you use show only the temperature-CO2 relationship, according to Henry’s Law. No trace of other influences. If you include the other factors, we simply agree. And have you never seen a pH-CO2 curve for seawater? A very small change in pH (whatever the source) has an enormous effect on CO2 solubility, which makes temperature a bleak substitute. That is even used by some fellow sceptics to prove that a fast release from the oceans is possible. Unfortunately for them, the uptake works the other way around…
We have about 90 Gtons of CO2 coming out of the ocean each year. And if we used a mass balance equation, the 120 Gtons now attributed to terrestrial sources might prove to be a misattribution. It wouldn’t be the first IPCC misattribution. That 90 Gtons is not accounted for in the Takahashi diagram. Also, that diagram gives you support for your conjecture that not much is going on through the law of solubility. That natural emission comes out of the water because of the law of solubility. It’s around 15 times man’s emissions.
Again you are confusing parts of a continuous or seasonal exchange with the net effect of an extra addition. The equator-to-poles circulation is a continuous stream of CO2, while the mid-latitudes of the oceans show a seasonal exchange. Thus whether there is a 9 or 90 or 900 GtC exchange over a year between oceans and atmosphere (and a similar one between vegetation and atmosphere) is not of the slightest interest for the mass balance. Only the difference at the end of the year is of interest, and that is some (measured) 3.5 +/- 3 GtC/year of sink capacity for all natural flows together, whatever the individual flows may be.
The increase of CO2 in the atmosphere is simply a direct function of the emissions (extremely linear!) and of temperature. The first adds CO2 to the total mass, increasing the difference with the “normal” equilibrium, the latter shifts the base equilibrium. That has nothing to do with the capacity of CO2 as a greenhouse gas, that is a complete different, unrelated discussion.
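The mass-balance point (only the net matters, not the gross fluxes) can be shown with round numbers; the figures below are illustrative, in the spirit of the quoted 3.5 +/- 3 GtC/yr net sink.

```python
# Mass balance for atmospheric CO2 (all GtC/yr, illustrative round numbers):
#   increase = emissions + natural_in - natural_out
# The observed increase and the emissions pin down the NET natural flux,
# whatever the size of the gross exchange fluxes.
emissions = 8.0                          # assumed human emissions
increase = 4.0                           # assumed observed atmospheric increase
net_natural = increase - emissions       # natural_in - natural_out = -4 (net sink)

# Any gross exchange is consistent with the same net:
for gross in (9.0, 90.0, 900.0):
    natural_in = gross
    natural_out = gross - net_natural
    assert abs((emissions + natural_in - natural_out) - increase) < 1e-12
```

With these assumed numbers nature as a whole is a net sink of 4 GtC/yr, and the balance closes identically whether the gross exchange is 9, 90 or 900 GtC/yr.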

Ferdinand Engelbeen
June 24, 2010 3:12 pm

Jeff Glassman says:
June 24, 2010 at 12:07 am
I am completely at a loss with your definitions of turnover and decay.
As far as my English goes, the turnover or residence time is about the chance that a single molecule (whatever its origin) in the atmosphere is captured or released by another reservoir, both ways. That is ruled by the exchange rate, in this case about 150 GtC on the 800 GtC present in the atmosphere. You say:
This is not true. Turnover time, as you agreed, is defined as T = M/S, where M is the mass in the reservoir and S is the removal rate.
But S is the exchange rate, both ways, not the “removal” rate.
Decay rate indeed is defined as you describe. And we agree that this refers to a return to a stated level (in this case the stated level is temperature dependent). I called that a “dynamic” equilibrium, but have no problem with a “stated” level.
Thus we are discussing the decay rate of an extra pulse (or continuous pulses) of human emissions of CO2 here. The decay rate according to you is somewhat less than 5 years, but have a better look at the IPCC decay rates you are using:
IPCC’s fastest time constant is 1.186 years, even faster than mine.
Yes, but that is only for 18.6% of the extra CO2, according to the Bern model. The rest is absorbed into other compartments at much slower rates:
33.8% with a time constant of 18.51 years
25.9% with a time constant of 172.9 years
and 21.7% of the initial pulse remains in the atmosphere forever!
Thus according to the Bern model, only 18.6% of the extra CO2 will be removed fast, via the fastest decay rate. That may be the upper ocean layer and/or vegetation; both are limited in capacity, so that part may be defensible. But the other limits don’t apply under current conditions, especially not to the deep ocean capacity.
I don’t support the IPCC model, as that only may be right for extreme amounts of emissions. At the current emission level, there is hardly any influence on deep ocean CO2 levels, thus the deep ocean uptake and return is not influenced at all. This leads to a much more realistic decay rate: see the paper of Peter Dietze at the late John Daly’s website: http://www.john-daly.com/carbon.htm
I haven’t looked at the other decay rates you mention, as I have no direct reference for them.

June 24, 2010 3:19 pm

Tonyb on 6/24/10 at 4:24 am asked,
>>You may find many of the comments of interest, but one in particular said that the CO2 outgassing (and presumably also the absorption) was some 8 ppmv per 1 degree C change in the temperature of the ocean.
>>Does that roughly equate to your 90Gton estimate?
No. As an initial point of order, the 90 GtC/yr is IPCC’s estimate, not mine.
90 GtC/yr = 90 PgC/yr
Stoichiometry: 12 gC = 44 gCO2
Units: 31557600 sec = 1 yr
Units: 10^15 = 1 Peta (P)
Uptake Temperature: 0ºC
Outgas Temperature: 30ºC (max, nominal)
Uptake solubility: 0.3346 gCO2/100gH2O (rocketscientistsjournal.com, “The Acquittal of CO2”, Figure 6)
Outgas solubility: 0.1257 gCO2/100gH2O (id.)
Solubility, net: 0.2089 gCO2/100gH2O
Units: 1000 g = 1 kg
Density H2O: 1000 kg/m^3
Units: 1 Sv = 10^6 m^3/sec
Result: 90 PgC/yr ~ 5.091 Sv at 30ºC, 5.006 at 29ºC.
This is in the ballpark of the wide range of values for the THC, aka the MOC. For example, an estimate for the bottom flow from the Antarctic is 4.3 Sv. Gent, P.R., “Will the North Atlantic Ocean thermohaline circulation weaken during the 21st century?”, Geophys. Res. Lett., vol. 28, no. 6, pp. 1023-1026, 3/15/01, p. 1024. Is there a similar number for the bottom flow from the Arctic?
Other numbers in Gent that pop out for different conditions are 29 ± 7 Sv, 20 Sv, 17-18 Sv, and 15 Sv. Those high numbers could reasonably be the result of minor branches of the THC surfacing around the globe at much lower temperatures than the Equatorial estimate of 30ºC. The figure of 5.2 Sv should be read as an effective THC for the purposes of estimating the atmosphere/ocean flux.
The outgassing should drop from 90 PgC/yr to 88.5 PgC/yr for a 1ºC drop in SST at the Equator. This is a change in flow rate of 1.68%. If that change applied to the nominal 385 ppm at MLO today, the drop would be 6.45 ppm, close enough to the 8 ppmv/ºC. A more direct comparison would be between the change in outgassing and anthropogenic emissions, ACO2.
Lowering the effective temperature at the Equator raises the inferred THC flow rate. At a flow rate of 10 Sv and 90 PgC/yr, the effective temperature is 10.3ºC and the sensitivity is about 7 PgC/ºC, the equivalent of the ACO2 estimate. The solution to the problem is robust, able to accommodate a wide range of initial conditions or operating points.
The conclusion is that IPCC’s estimate of 90 PgC/yr is consistent with Henry’s Law.
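The unit conversion laid out above can be reproduced step by step; this simply re-runs the quoted numbers (90 PgC/yr and the two solubilities) and lands near the stated ~5 Sv result.

```python
# Re-running the units exercise above with the quoted numbers:
# 90 PgC/yr exchanged between 0 C uptake water and ~30 C outgassing water.
carbon_g_per_yr = 90e15                        # 90 PgC/yr, as grams of C
co2_g_per_yr = carbon_g_per_yr * 44.0 / 12.0   # stoichiometry: 12 gC = 44 gCO2

sol_uptake = 0.3346 / 100.0        # gCO2 per gH2O at 0 C (quoted)
sol_outgas = 0.1257 / 100.0        # gCO2 per gH2O at ~30 C (quoted)
delta_sol = sol_uptake - sol_outgas  # net CO2 carried per gram of water

water_g_per_yr = co2_g_per_yr / delta_sol
# grams -> kg -> m^3 (density 1000 kg/m^3) -> per second (31557600 s/yr):
water_m3_per_s = water_g_per_yr / 1000.0 / 1000.0 / 31_557_600.0
sverdrup = water_m3_per_s / 1e6    # 1 Sv = 10^6 m^3/s
# sverdrup comes out near 5, in the ballpark of deep-branch THC estimates.
```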

Ferdinand Engelbeen
June 24, 2010 3:43 pm

Further…
As I wrote above, the time constants, whether called residence times or average life times or e-folding times, that I computed were 1.5 years using IPCC data, 3.2 years using University of Colorado data, or 4.9 years using Texas A&M data. IPCC provides the following:
Here again there is confusion: the 1.5 years of the IPCC is a decay rate of a pulse of CO2 in the atmosphere into the fastest (but limited) receiving reservoir. The Colorado and Texas A&M data clearly are about a residence time, which has nothing to do with a decay rate…

June 24, 2010 4:44 pm

Ferdinand Engelbeen says on 6/24/10 at 5:03 am said,
You are confusing what IPCC has done in its Reports with your experience with laboratory data. The two have nothing to do with one another. I have specifically referenced the troublesome data reductions by IPCC, and you ignore them to go back to data culling and calibrations in the laboratories around the world. Your observations about the laboratory data are irrelevant. I asked you to consider the calibration that occurs after the laboratory data are prepared, and you ignored the request, reopening immaterial considerations instead.
You invite me to re-examine laboratory data. At one time I tried, and gave up. I was especially interested in the wind corrections, but found no wind data. Also the volume of the data was immense, and not suitable for desktop operations. Regardless, the laboratory data are irrelevant to IPCC’s fraudulent data reductions. What might be of interest are the calibrations by which IPCC reduced the laboratory data to its charts. However, the fraud is evident even to a layman without such minuscule details, and a viable alternative for global temperatures is on the table.
The graphs show only temperature because it is the dominant, first order effect. That does not mean that minor, second order and lower effects do not exist.
I have indeed seen a pH CO2 curve for seawater. I believe you are referring to the Bjerrum plot. That plot is the solution to the stoichiometric equations of equilibrium. It appears in the Zeebe & Wolf-Gladrow papers, which IPCC relied upon without showing the Bjerrum plot. This reliance is one of IPCC’s fatal errors in its modeling. See rocketscientistsjournal.com, “IPCC’s Fatal Errors”. That the surface layer might be in equilibrium so that those equations would apply is preposterous. I discussed this in my post of 6/20/10 at 2:16 pm, above, which you seem to have ignored to persist in an invalid model. Modeling the surface layer as being in equilibrium is a fatal error.
You say, “S is the exchange rate, both ways, not the ‘removal’ rate.” I introduced the turnover time as defined by IPCC, quoting it in full on 6/21/10 at 2:46 pm. You accurately quoted both me and it in your response on 6/22/10 at 2:46 pm. The definition specifies S to be “the total rate of removal S”, not the exchange rate.
I suspect that in retracting the definition and inserting an exchange rate you might be thinking about the mass of CO2 in the atmosphere, subject to many additions and removals, and not the mass of a pulse of CO2 added to the atmosphere. This is a change of parameters that leads to an incorrect analysis. The decay of a pulse is modeled a priori as exponential. Nothing like that applies to the concentration of CO2, or any species of CO2, in the atmosphere.
I have not accused IPCC of manipulating data. I accuse it of outright fraud in its reports on Anthropogenic Global Warming. I rely not on a simple, single error, nor on an alternative data set, nor on an alternative global warming model. Instead I rely on a raft of abuses of honesty, of science, and of the scientific method in IPCC Third and Fourth Assessment Reports.
I disagree with your discussion about the “9 or 90 or 900 GtC exchange”. The outgassing of 90 GtC is ancient water, saturated with CO2 at the partial pressure about a millennium past and at a temperature of about 0ºC. It is outgassed dominantly at the current SST at the Equator, where the amount released depends on the solubility curve at that current SST. As I discussed above in response to Tonyb, the amount outgassed is dependent on the current temperature in an amount of the same order of magnitude as the estimated fossil fuel emissions.
One of IPCC’s fatal errors is to model what it admits is a “highly nonlinear” system (a meaningless qualifier: something is either linear or it is not, just as a system is either in equilibrium or it is not) by the sum of two parts, the natural carbon cycle and an anthropogenic carbon cycle. This is the radiative forcing paradigm. Because the system is nonlinear, the total response is not equal to the sum of its responses to individual forcings applied separately. This applies nowhere so much as in outgassing, which is nonlinear, being inversely proportional to the partial pressure of total CO2 in the atmosphere. This can be solved with a mass balance analysis, but the result cannot be assumed to be the response to natural outgassing plus the response to fossil fuel burning or an assumed ACO2 cycle. The mass balance analysis omitted by IPCC is necessary.
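The superposition failure described here can be illustrated with a toy flux law. The numbers are purely illustrative; the only feature borrowed from the paragraph is the inverse proportionality to pCO2.

```python
def outgassing(pco2_ppm, c=90.0 * 385.0):
    # Toy model: outgassing flux (PgC/yr) inversely proportional to
    # atmospheric pCO2, normalized so that 385 ppm gives 90 PgC/yr.
    return c / pco2_ppm

base = outgassing(385.0)
resp_natural = outgassing(385.0 - 10.0) - base   # natural drawdown alone
resp_anthro = outgassing(385.0 + 20.0) - base    # anthropogenic rise alone
resp_combined = outgassing(385.0 + 10.0) - base  # both applied together

# For a linear system the combined response would equal the sum of the
# separate responses; for this nonlinear flux law it does not.
print(resp_combined, resp_natural + resp_anthro)
```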
IPCC’s equation is cited to the Bern model and to Archer (2005), but it is not quoted. I haven’t bothered to see whether IPCC’s equation is faithful to any reference. That would be a magnificent waste of time because the equation is pure nonsense. There is no mechanism in the ocean by which the four separate processes have their separate reservoirs. I explained this in detail today, 6/24/10, at 12:07 am. You repeat the nonsense again in your post of 6/24/10 3:12 pm, citing my 12:07 post, but totally ignoring my criticism.
I also wrote in the 12:07 post that the terms resident time, e-folding time, turnover time, and average lifetime are all the same. Now today, 6/24/10, at 3:43 you ignore what I have written and plow ahead with the same distinction without a difference. What you call a “(limited) receiving reservoir” appears to be the non-existing partition in IPCC’s formula.
By the way, the IPCC, Colorado and Texas A&M data are about fluxes, not residence times or any of the other characteristic process constants. The computation of the two sets of three parameters for those sources is entirely mine, as is the name for my results.
A goodly number of posts back, dialog stopped working. Time can be better spent than responding to the nonresponsive.

Ferdinand Engelbeen
June 25, 2010 4:33 am

Jeff Glassman says:
June 24, 2010 at 4:44 pm
You are confusing what IPCC has done in its Reports with your experience with laboratory data. The two have nothing to do with one another.
As said repeatedly, the IPCC doesn’t hold, manipulate or reduce the CO2 data. The CO2 data are measured at some 10 “baseline” stations under NOAA, including Mauna Loa, some 60 other “background” stations at different places, under different organisations and some 400 stations over land, used to measure local/regional CO2 fluxes. Data selection, averaging over days, months and years is done by these organisations, not the IPCC. The IPCC only uses the (selected) data delivered by the other organisations, they don’t change them in any way.
No matter what the IPCC did do on other items, the CO2 data are not filtered, reduced or manipulated, fraudulent or not, by the IPCC. If you have proof of otherwise on this item only, please provide it.
After-the-fact recalculation is necessary only when an intercalibration of apparatus and/or calibration gases shows a deviation. Then all original voltage readings are reused with the new calibration values. Again, that has nothing to do with data manipulation to shift the data from different places/organisations towards each other. And the IPCC has nothing to do with any of this.
That the surface layer might be in equilibrium so that those equations would apply is preposterous.
We may discuss that at length, but the calculations of pCO2 and the real-time measurements do confirm that the calculations, based on temperature, DIC, pH,… are right. Thus while your calculation based on temperature only may be a good approximation, Feely’s, based on pCO2 and (non)equilibrium, is better.
From the IPCC:
Turnover time (T) (also called global atmospheric lifetime) is the ratio of the mass M of a reservoir (e.g., a gaseous compound in the atmosphere) and the total rate of removal S from the reservoir: T = M / S.
Indeed right, although “removal” is a tricky word in this discussion. One may just as well replace “total rate of removal S from the reservoir” in this definition with “total rate of emission S to the reservoir”, as both emissions to and removals from the reservoir are (near) equal. Or, simpler, “total rate of exchange with another reservoir”. That is the point: although the flow is huge, it has no effect on the total mass in the reservoir, except if there is a discrepancy between the inflow and outflow.
The Colorado university description makes it very clear:
note that since in-fluxes equal out-fluxes, the RT is the same relative to the sum of the out-fluxes or simply the through-flux.
There is no 90 GtC pulse going into the atmosphere, neither a 92 GtC negative pulse out of the atmosphere. There is only a continuous and seasonal (relative small) flow which emits CO2 at one side (or season) and absorbs CO2 at the other side (or season). Integrated over a year, that accounts for 90 GtC exchange, but only 2 GtC per year is the net amount which is really removed out of the atmosphere.
As I discussed above in response to Tonyb, the amount outgassed is dependent on the current temperature in an amount of the same order of magnitude as the estimated fossil fuel emissions.
That may be right for the first year: a drop of 1 K at the equator gives a decrease in outflow of around 8 GtC. Without human emissions, this would give a drop in atmospheric CO2 of about 1%, assuming that the poles remain the same sink. But as the pCO2 pressure in the atmosphere drops with 1% after a year, the outflow of the oceans at the equator increases and the inflow at the polar oceans decreases. Until a new equilibrium is found at about 8 ppmv less (for an average ocean temperature drop of 1 K), according to the Vostok ice core…
Thus after a few years, the new fluxes out and in are again equal to each other at a different level of CO2 in the atmosphere. Contrary to this, the human emissions go on, year by year, directly into the atmosphere. Part of the added mass is absorbed by oceans and vegetation, part remains in the atmosphere (again as mass, not as “anthro” CO2).
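Engelbeen’s re-equilibration argument can be sketched as a one-box relaxation. The feedback coefficients below are illustrative placeholders, chosen so that an initial 8 GtC/yr flux deficit settles 8 ppmv below the old equilibrium; only the qualitative behavior, fluxes rebalancing at a lower CO2 level, is the point.

```python
GTC_PER_PPM = 2.13   # atmospheric carbon mass per ppmv of CO2

ppm = ppm0 = 385.0   # starting atmospheric concentration
g_equator = 0.5      # GtC/yr extra outgassing per ppm of CO2 drop (illustrative)
h_poles = 0.5        # GtC/yr reduced polar uptake per ppm of CO2 drop (illustrative)

dt = 0.1             # years; integrate long enough to reach steady state
for _ in range(2000):
    drawdown = ppm0 - ppm
    # 1 K of equatorial cooling removes 8 GtC/yr of outgassing, but the
    # falling pCO2 feeds back on both the equatorial source and polar sink:
    net_flux = -8.0 + (g_equator + h_poles) * drawdown
    ppm += net_flux / GTC_PER_PPM * dt

print(round(ppm0 - ppm, 2))  # settles about 8 ppm below the old equilibrium
```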
It is nonlinear, being inversely proportional to the partial pressure of total CO2 in the atmosphere. This can be solved with a mass balance analysis, but the result cannot be assumed to be the response to natural outgassing plus the response to fossil fuel burning or an assumed ACO2 cycle.
There are several mass balances in use, including by the IPCC. The IPCC doesn’t assume a separate aCO2 cycle (as mass, but they do for isotope changes), as all emissions simply are mixed into the natural CO2 cycle. And of course, if one adds something extra to a cycle where inputs and outputs are (near) equal, that influences both the inputs and the outputs, as physics dictate.
About the Bern model: you can’t say that the model is wrong (as I do too), and at the same time use the fastest decay rate as proof for your thesis of a rapid decay… The IPCC also defines the fastest rate only for a portion of the pulse, not the whole pulse.
I also wrote in the 12:07 post that the terms resident time, e-folding time, turnover time, and average lifetime are all the same.
They are all the same and have nothing to do with a decay time (except that a decay is also e-folding) of an extra CO2 pulse in the atmosphere. That is what you don’t understand.
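The distinction Engelbeen draws can be made concrete with a toy two-number model. The figures (750 GtC airborne, 150 GtC/yr gross exchange, a 40-year adjustment constant) are illustrative, not sourced.

```python
# Residence time: how long an average CO2 molecule stays airborne,
# set by the GROSS exchange flux.
atmosphere_gtc = 750.0    # illustrative atmospheric carbon mass
gross_flux = 150.0        # illustrative total exchange, GtC/yr each way
residence_yr = atmosphere_gtc / gross_flux   # 5 years

# Pulse decay: the net sink responds to the EXCESS above equilibrium,
# not to the gross exchange.  Assume net uptake = excess / tau.
tau = 40.0                # illustrative adjustment time, years
excess = 100.0            # GtC pulse added to the atmosphere
dt, t = 0.1, 0.0
target = 100.0 / 2.718281828459045   # one e-folding of the pulse
while excess > target:
    excess -= (excess / tau) * dt
    t += dt

print(residence_yr, round(t))  # 5-year residence, ~40-year pulse e-folding
```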
By the way, the IPCC, Colorado and Texas A&M data are about fluxes, not residence times or any of the other characteristic process constants. The computation of the two sets of three parameters for those sources is entirely mine, as is the name for my results.
Either you have your own definition of residence time, or you haven’t read the Colorado course. They look at the through fluxes (in and out) to define the RT (residence time), not any word about decay rates. See:
http://www.colorado.edu/GeolSci/courses/GEOL1070/chap04/chapter4.html
Indeed it is difficult to have a good discussion with somebody who has already made up his mind…

June 25, 2010 9:44 am

Re Ferdinand Engelbeen on 6/24/10 at 4:44 pm,
I am only interested in IPCC’s model. I don’t care much anymore what the laboratory data says. IPCC did not present the laboratory data.
I became discouraged because the laboratory that discarded data for an unfavorable wind vector did not record the wind vector! An alternative to the terrestrial biology cause for the seasonal effects in CO2 concentration is the seasonal wind. Because of the laboratory failure to record wind data, the issue cannot be resolved.
Data reduction includes going from laboratory data to graphs like the MLO/South Pole and MLO/Baring Head overlays previously cited for you, but which you chose to ignore. These are reductions by IPCC. They do not have the appearance of legitimate data. As IPCC has said, and you ignore, intercalibration is used to bring the stations into agreement. Also, the individual records have lost the expected real world variability. And IPCC conceals the calibration values and techniques applied.
Are you suggesting that the surface layer is, as IPCC claims, in equilibrium?
The changes you suggest to the definition of S are unacceptable. The change of parameters alters the concept of the formula, and its validity.
Your defense against a 90 GtC or 92 GtC pulse is meaningless, since I made no such claim, and you give no reference where someone else made such a claim. The parameters are 90 GtC/yr and 92 GtC/yr. These are fluxes, not pulses. I admire you for your language skills, but you can’t hide behind hypothetically limited English for the argumentative willy-nilly changing of units and parameters. You need to stick to names and definitions.
A drop in “pCO2 pressure in the atmosphere” does not cause a decrease in the “inflow at the polar oceans”. It would cause an increase in uptake, in dissolution. This is a consequence of Henry’s Law, which you discount, ignore, or misunderstand.
You say “Until a new equilibrium is found”. You need to define what you mean by equilibrium. It is certainly not thermodynamic equilibrium.
Contrary to your assertion, IPCC does have separate ACO2 and nCO2 cycles. These it details in its carbon cycle figure. The ACO2 values are in red, the nCO2 values are in black. AR4, Figure 7.3, p. 515.
You say “a decay is also e-folding”. This is to get around your improper use of “decay time”, which I criticized. Now you introduce a new term, “decay”, which is neither “decay constant” nor “decay time”, the terms used in the previous dialog. Your newest response is meaningless. You cannot claim to be rational and change words in midcourse.
I do not have my own definition of residence time. I have IPCC’s, and I provided it in writing. Should other definitions exist, that would be quite irrelevant to exposing IPCC’s fraud.
My mind is well made up in a whole multitude of respects. What I have done though, and you chose to ignore, is to provide you with full support and evidence for my conclusions.

Ferdinand Engelbeen
June 25, 2010 3:10 pm

Jeff Glassman says:
June 25, 2010 at 9:44 am
Re Ferdinand Engelbeen on 6/24/10 at 4:44 pm,
I am only interested in IPCC’s model. I don’t care much anymore what the laboratory data says. IPCC did not present the laboratory data.
This – again – is a false accusation. The IPCC shows the “cleaned” monthly averages, exactly as supplied by the different laboratories.
An alternative to the terrestrial biology cause for the seasonal effects in CO2 concentration is the seasonal wind. Because of the laboratory failure to record wind data, the issue cannot be resolved.
If you really want to resolve such a question, simply ask for the data from the meteorological station at Mauna Loa, next door to the CO2 measurement station. The MLO lab needs to ask them too, as it has no wind speed/direction apparatus. But as the stations at Barrow and all other stations in the NH show the same (even more pronounced) seasonal pattern, including a reverse correlation with d13C variability, it is quite clear that vegetation growth in the NH (more land) is the cause.
Data reduction includes going from laboratory data to graphs like the MLO/South Pole and MLO/Baring Head overlays previously cited for you, but which you chose to ignore. These are reductions by IPCC.
Are you really so hard to convince that the IPCC didn’t invent or alter the CO2 data? The IPCC only used the data, already selected by NOAA or, in the case of Baring Head, by CDIAC. The selected laboratory data are not altered by the IPCC in any way. You choose to ignore the measured data because of your belief that data this smooth must have been manipulated by the IPCC. Well, look at the raw data and the monthly averages from selected data from Mauna Loa and the South Pole. If you have any indication that the IPCC didn’t use the monthly averages, or altered them in any way, or that the averaged data don’t fit the raw data, then you have a point. Otherwise stop with your allegations. See:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_mlo_spo_raw_select_2008.jpg
The monthly averages are what the IPCC uses in all its graphs. Nothing else.
As IPCC has said, and you ignore, intercalibration is used to bring the stations into agreement. Also, the individual records have lost the expected real world variability. And IPCC conceals the calibration values and techniques applied.
Again false accusations. As repeatedly said and shown with references to the (inter)calibration procedures, which you obviously choose to ignore, intercalibration is common practice in any type of laboratory to assure the correct operation of equipment and the correct value of calibration gases. This has nothing to do with bringing the observations into agreement. The observations are what they are, if you like them or not.
And the IPCC has no business with the calibration and techniques used.
Are you suggesting that the surface layer is, as IPCC claims, in equilibrium?
Yes and no: locally, at the very thin skin, it is; deeper down it depends on a lot of constraints like wind speed; and over the total ocean surface, no, as there is a pCO2 gradient which pushes some 2 GtC per year into the oceans.
The changes you suggest to the definition of S are unacceptable. The change of parameters and changes the concept of the formula, and its validity.
Not at all. If the inflow = throughput = outflow, it doesn’t make any difference whether you use the inflow or the outflow, as both represent the exchanges in mass with another reservoir, in this case the oceans. The people of the Colorado University show that both may be used. That doesn’t change the concept, nor its validity.
Your defense against a 90 GtC or 92 GtC pulse is meaningless, since I made no such claim
Not directly, but if I may cite Jeff Glassman:
So IPCC attributes all the observed rise in CO2 that has accumulated in the atmosphere during the industrial era to ACO2. The 119.6 PgC/yr from terrestrial sources, the 90.6 PgC/yr from the ocean do not accumulate.
According to you, contrary to the IPCC, all these flows do accumulate, thus form a “pulse” of extra CO2, for which the 8 GtC/yr from humans is negligible?
A drop in “pCO2 pressure in the atmosphere” does not cause a decrease in the “inflow at the polar oceans”. It would cause an increase in uptake, in dissolution.
Novel physics here? If the pCO2 in the atmosphere drops, the pressure difference between the atmosphere and the oceans decreases, thus less goes into the oceans at the same cold temperature (I worked for a time in a cola bottling plant, you know).
With “equilibrium” I mean an equilibrium between CO2 pressure in the atmosphere and release of CO2 in the tropics (according to Henry’s Law or not) and absorption near the poles. If the temperature drops near the equator (or oceanwide), do you agree that the consequence is that about 8 ppmv less CO2 will be left in the atmosphere for each K drop in temperature?
Contrary to your assertion, IPCC does have separate ACO2 and nCO2 cycles.
OK, this is the first time that I have looked at that graph. It looks identical to the NASA graph (which I thought it was), except that NASA makes no differentiation between nCO2 and aCO2. It looks like a best guess of the partitioning between aCO2 and nCO2 in the total flows, not really separate cycles (except for the emissions, of course). I suppose that the IPCC tried to show how much CO2 has increased (as mass) in different compartments as a result of the emissions, but there is certainly not that much aCO2 in the atmosphere (at maximum some 8%) and I don’t think that the flows between oceans and atmosphere increased by 20 GtC/yr due to the emissions…
This is simply bad work.
You say “a decay is also e-folding”. This is to get around your improper use of “decay time”, which I criticized. Now you introduce a new term, “decay”, which is neither “decay constant” or “decay time”, terms used in the previous dialog. Your newest response is meaningless. You cannot claim to be rational and change words in midcourse.
Sorry for the confusion. I only used the word “decay” because the decay of a pulse of extra CO2 and the residence of a 14C pulse (from the atomic bomb testing) both have an e-folding time, or half-life if you want. But they are quite different: the 14C pulse didn’t change the total mass of CO2, and its decrease depends only on the exchange flows (residence time), while the extra CO2 pulse depends only on the difference between in- and outflows. The two have quite different half-lives: the first about 5 years, the second about 40 years.
I do not have my own definition of residence time.
A residence time has nothing to do with a decay time for an extra CO2 pulse. That is where you are confused.
What I have done though, and you chose to ignore, is to provide you with full support and evidence for my conclusions.
Of which several are simply wrong…

June 26, 2010 1:30 pm

On 6/21/10 at 4:54 pm, Ferdinand Engelbeen defined laboratory data for us:
>>Before you accuse someone of manipulating the data, please have a look at the (raw) data yourself. These are available on line for four stations: Barrow, Mauna Loa, Samoa and South Pole:
ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/
These are the calculated CO2 levels, based on 2 x 20 minutes 10-second snapshots voltages of the cell + a few minutes voltages measured from three calibration gases. Both the averages and stdv of the calculated snapshots are given. These data are not changed in any way and simply give the average CO2 level + stdv of the past hour.

I prefer to call raw data the output when the technician first converts transducer outputs, e.g. voltages, into the physical units being measured. That Engelbeen still might refer to hourly averages from the snapshots as raw data is OK for the purposes of discussion here.
But then on 6/25/10, he says,
>>The monthly averages are what the IPCC uses in all its graphs. Nothing else.

>>>>[quoting me] As IPCC has said, and you ignore, intercalibration is used to bring the stations into agreement. Also, the individual records have lost the expected real world variability. And IPCC conceals the calibration values and techniques applied.

>>Again false accusations. As repeatedly said and shown with references to the (inter)calibration procedures, which you obviously choose to ignore, intercalibration is common practice in any type of laboratory to assure the correct operation of equipment and the correct value of calibration gases.
Again, Engelbeen limits the meaning of calibration to the raw data level (his definition or mine) because he discusses calibration in the context of the equipment (transducers) and calibration gases. He ignores the other IPCC calibrations even after being given a baker’s dozen of examples (6/20/10, 1:42 pm).
IPCC says,
>> The longitudinal variations in CO2 concentration reflecting net surface sources and sinks are on annual average typically …
>> Because of the favorable site location, continuous monitoring, and careful selection and scrutiny of the data, the Mauna Loa record is considered to be a precise record and a reliable indicator of the REGIONAL TREND in the concentrations of atmospheric CO2 in the middle layers of the troposphere. Caps added.
Note that the authors consider MLO valid as regional data, and in the trend. They assert no validity with respect to either global concentrations or seasonal variations. IPCC shows how the data plot on top of one another, matching in trends. How did that happen, and who did it? Where is it all published? Is Engelbeen alleging that laboratory technicians did it all on their own?
IPCC says,
>>The high-accuracy measurements of atmospheric CO2 concentration, initiated by Charles David Keeling in 1958, constitute the master time series documenting the changing composition of the atmosphere. These data have iconic status in climate change science as evidence of the effect of human activities on the chemical composition of the global atmosphere. KEELING’S MEASUREMENTS ON MAUNA LOA IN HAWAII PROVIDE A TRUE MEASURE OF THE GLOBAL CARBON CYCLE, AN EFFECTIVELY CONTINUOUS RECORD OF THE BURNING OF FOSSIL FUEL. They also maintain an accuracy and precision that allow scientists to separate fossil fuel emissions from those due to the natural annual cycle of the biosphere, demonstrating a long-term change in the seasonal exchange of CO2 between the atmosphere, biosphere and ocean. Later observations of parallel trends in the atmospheric abundances of the 13CO2 isotope and molecular oxygen (O2) uniquely identified this rise in CO2 with fossil fuel burning. Caps added, citations deleted, AR4, ¶1.3.1 The Human Fingerprint on Greenhouse Gases, p. 100.
IPCC claims in 2007 that C. D. Keeling’s measurement program at MLO is global, while his son, R. F. Keeling, in 2009 says the data are regional. Physics is not on IPCC’s side.
CDIAC provides the following note on its MLO data sheet:
>> Values above represent monthly concentrations adjusted to represent 2400 hours on the 15th day of each month. Units are parts per million by volume (ppmv) expressed in the 2003A SIO manometric mole fraction scale. The “annual average” is the arithmetic mean of the twelve monthly values where no monthly values are missing.
However, the note on its South Pole and Baring Head data, reads
>> Values above are taken from a curve consisting of 4 harmonics plus a stiff spline and a linear gain factor, fit to monthly concentration values adjusted to represent 2400 hours on the 15th day of each month. Data used to derive this curve are shown in the accompanying graph. Units are parts per million by volume (ppmv) expressed in the 2003A SIO manometric mole fraction scale. The “annual average” is the arithmetic mean of the twelve monthly values.
Why are MLO data reduced differently than South Pole and Baring Head data? How can similar data be compared under different rules of data reduction? What exactly is the “curve consisting of 4 harmonics”? And the “stiff spline”? But especially note the “linear gain factor”. This is exactly the factor by which one could “calibrate” the stations to look alike. Is it different for the two stations? Is it a constant or a variable?
R. F. Keeling and S. Piper were IPCC contributing authors for both the TAR and AR4.
To be continued.

June 26, 2010 1:30 pm

Continuing, on 6/25/10, Ferdinand Engelbeen says,
>>>> (quoting me) Are you suggesting that the surface layer is, as IPCC claims, in equilibrium? 

>>Yes and no: locally, at the very thin skin, it is; deeper down it depends on a lot of constraints like wind speed; and over the total ocean surface, no, as there is a pCO2 gradient which pushes some 2 GtC per year into the oceans.

and
>>>>(quoting me) A drop in “pCO2 pressure in the atmosphere” does not cause a decrease in the “inflow at the polar oceans”. It would cause an increase in uptake, in dissolution.

>>Novel physics here? If the pCO2 in the atmosphere drops, the pressure difference between the atmosphere and the oceans decreases, thus less goes into the oceans at the same cold temperature (I worked for a time in a cola bottling plant, you know).

The yes part of his two-way answer is wrong. Nowhere is the ocean in equilibrium, which is the ultimate state of stagnation. In equilibrium, there are no currents and no heat transfer (to use the redundant term). One cannot even say that something is close to equilibrium. A system either is or it is not in equilibrium. The surface layer is in turmoil, including all thin slices of it.
IPCC urges, and Engelbeen it seems would agree,
>> The air-sea exchange of CO2 is determined largely by the air-sea gradient in pCO2 between atmosphere and ocean. Equilibration of surface ocean and atmosphere occurs on a time scale of roughly one year. Gas exchange rates increase with wind speed and depend on other factors such as precipitation, heat flux, sea ice and surfactants. The magnitudes and uncertainties in local gas exchange rates are maximal at high wind speeds. In contrast, the equilibrium values for partitioning of CO2 between air and seawater and associated seawater pH values are well established (Zeebe and Wolf-Gladrow, 2001; see Box 7.3). Citation deleted, AR4, ¶7.3.4.1 Overview of the Ocean Carbon Cycle, p. 528.
This is not correct. The conclusion from Zeebe, et al., is for a fictional surface layer perpetually restrained to be in equilibrium. That conclusion relies on the stoichiometric equations of equilibrium, and the solution given graphically in the Bjerrum plot. The uptake and outgassing of CO2 are governed by Henry’s Law. Dissolution does not depend on the pressure difference or pressure gradient. Except for the fact that this air-sea exchange model is crucial to justifying AGW, it is a surprising error from a prominent, contributing, PhD professor of geophysics, and from someone who claims credentials as a chemist.
On 6/22/10 at 2:00 pm Engelbeen wrote,
>>>>(quoting me) Actually the partial pressure of a gas in water is a fiction. It is taken to be the partial pressure of the gas in the gas state in contact with the water and in equilibrium with it.
>>The partial pressure of CO2 in water may be a fiction (I don’t think so), but the equilibrium with the air above is measured (for many decades now, continuously, aboard ships) and is the driving force for uptake or release of CO2 from/to the air above it. Much more realistic than some theoretical calculation from Henry’s Law which doesn’t take into account any factors other than temperature.
Just for a moment, he seemed to recognize the fiction of partial pressure of a gas dissolved in solvent. Because of that fiction, and what is taken as the meaning of that partial pressure, the pressure difference and the pressure gradient do not exist.
Engelbeen dismisses solubility, also known as dissolution and Henry’s Law, repeatedly. Here, he dismisses it as if someone had made a “theoretical calculation”. Once again, he refers to something no one said.
More important is that Henry’s Law informs us of the physics involved in a qualitative way, as fundamental as the recognition that balls roll down hill. Dissolution depends on the partial pressure of the gas above the water, and the temperature of the water, and not the reverse of either. Variations in atmospheric pressure over the ocean are rather insignificant. What counts is the temperature of the ocean, which varies greatly from the tropics to the poles.
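The temperature dependence Glassman appeals to can be put in rough numbers with textbook-style constants for CO2 in water (kH ≈ 29.4 L·atm/mol at 25 °C with a van ’t Hoff factor near 2400 K); treat both values as illustrative rather than authoritative.

```python
import math

def henry_kH(T_kelvin, kH_298=29.4, van_t_hoff=2400.0):
    """Henry's constant for CO2 in water, L*atm/mol (larger = less soluble).
    Textbook-style values, used here only for illustration."""
    return kH_298 * math.exp(-van_t_hoff * (1.0 / T_kelvin - 1.0 / 298.15))

p_co2_atm = 385e-6                        # atmospheric CO2 partial pressure
c_tropics = p_co2_atm / henry_kH(303.15)  # ~30 C tropical surface water
c_polar = p_co2_atm / henry_kH(273.15)    # ~0 C polar surface water

# Cold polar water dissolves roughly 2.4x more CO2 than warm tropical water
print(c_polar / c_tropics)
```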

Editor
Reply to  Jeff Glassman
June 26, 2010 1:39 pm

Jeff Glassman,
“The yes part of his two-way answer is wrong. Nowhere is the ocean in equilibrium, which is the ultimate state of stagnation. In equilibrium, there are no currents and no heat transfer (to use the redundant term). One cannot even say that something is close to equilibrium. A system either is or it is not in equilibrium. The surface layer is in turmoil, including all thin slices of it.”
I don’t think you actually understand the definition of equilibrium if you think an equilibrium is a state of stagnation, with no currents and no heat transfer. A system in equilibrium can have currents and heat transfer, and need not be stagnant. What is in equilibrium is that all flows balance out and all inputs equal outputs. Rocket scientists should know their thermodynamics…

June 26, 2010 2:03 pm

Re mikelorrey’s misunderstanding of equilibrium, 6/26/10 at 1:39 pm:
>>[W]e shall use the symbols Y and X for [a] pair of independent coordinates. … A state of a system in which Y and X have definite values which remain constant so long as the external conditions are unchanged is called an equilibrium state. Zemansky, M. W., “Heat and Thermodynamics”, McGraw-Hill, Fourth Ed., 1957, p. 5.
Or,
>>When there is no unbalanced force in the interior of a system and also none between a system and its surroundings, the system is said to be in a state of mechanical equilibrium. … When a system in mechanical equilibrium does not tend to undergo a spontaneous change of internal structure, such as a chemical reaction, or a transfer of matter from one part of the system to another, such as diffusion or solution, however slow, then it is said to be in a state of chemical equilibrium … Thermal equilibrium exists when there is no spontaneous change in the coordinates of a system in mechanical and chemical equilibrium when it is separated from its surroundings by a diathermic [“like a thin metal sheet”] wall. In thermal equilibrium, all parts of a system are at the same temperature, and this temperature is the same as that of the surroundings. … When the conditions for all three types of equilibrium are satisfied, the system is said to be in a state of thermodynamic equilibrium; in this condition, it is apparent that there will be no tendency whatever for any change of state either of the system or of the surroundings to occur. States of thermodynamic equilibrium can be described in terms of macroscopic coordinates that do not involve time … . Id., pp. 24-25.
Short form: stagnation.
Zemansky earned an international reputation as the creator of the foundations for teaching thermodynamics, and the work cited is a classic.

Editor
Reply to  Jeff Glassman
June 26, 2010 2:15 pm

Glassman,
I note you misdefined mechanical equilibrium because you ignore that it applies to a system of particles:
“The necessary conditions for mechanical equilibrium for a system of particles are:
(i)The vector sum of all external forces is zero;
(ii) The sum of the moments of all external forces about any line is zero.”
– John L Synge & Byron A Griffith (1949). Principles of Mechanics (2nd ed.). McGraw-Hill. pp. 45–46.
I also point out that you completely ignored the concept of dynamic equilibrium:
“A dynamic equilibrium exists when a reversible reaction ceases to change its ratio of reactants/products, but substances move between the chemicals at an equal rate, meaning there is no net change. It is a particular example of a system in a steady state. In thermodynamics a closed system is in thermodynamic equilibrium when reactions occur at such rates that the composition of the mixture does not change with time. Reactions do in fact occur, sometimes vigorously, but to such an extent that changes in composition cannot be observed. Equilibrium constants can be expressed in terms of the rate constants for elementary reactions.”
http://en.wikipedia.org/wiki/Dynamic_equilibrium Atkins, P.W.; de Paula, J. (2006). Physical Chemistry (8th. ed.). Oxford University Press. ISBN 0198700725.

June 26, 2010 2:38 pm

Re mikelorrey’s continuing misunderstanding of equilibrium, 6/26/10 at 2:15 pm:
You misread my post. What I provided was not some personal definition of equilibrium, nor of mechanical equilibrium, by which I went wrong. They were Mark Zemansky’s. And as for Zemansky, he surely included your elaborate vector definition. Zemansky just said it simply as “no unbalanced force”. Unbalanced means the vector sum is other than zero, and “no” means in no way, about no line nor plane nor surface, etc.
As to “dynamic equilibrium”, you have introduced a new term, defined in a special way, for some other application. It is also rather worthless here because it applies only to reversible reactions, which are another idealization altogether. Thermodynamics, oceanography, and climate are not reversible.
Nice try. If you want to win at hip-shooting disses, at quick draw ad hominems, aim at the man.
Zemansky is still standing.

Editor
Reply to  Jeff Glassman
June 26, 2010 2:40 pm

Glassman,
No, he isn’t. Stagnation is zero movement or change, equilibrium is zero NET movement or change. They are completely different things.

Ferdinand Engelbeen
June 26, 2010 3:19 pm

Jeff Glassman says:
June 26, 2010 at 1:30 pm
I prefer to call raw data the output when the technician first calibrates transducer outputs, e.g. voltages, into the physical units being measured.
Agreed. These are available on simple request, but as they represent many millions of 10-second snapshots per year, they are not directly online. I have checked a few days of calculations from these raw data, and the hourly averages and standard deviations match those made available online.
He ignores the other IPCC calibrations even after being given a baker’s dozen of examples (6/20/10, 1:42 pm).
The dozen examples have nothing to do with CO2 levels. The IPCC isn’t involved in calibrations or procedures around CO2. I understand that this is difficult to believe, because of the other dozen examples, but it is the truth.
How did that happen, and who did it? Where is it all published? Is Engelbeen alleging that laboratory technicians did it all on their own?
With very little effort, the Internet proves a great source. It shows that CDIAC/Keeling Sr. was the master brain, beginning it all, and they have published (and still publish their own independent results) on the net and in different papers:
http://cdiac.ornl.gov/trends/co2/contents.htm
Nowadays, NOAA is the leading organisation where the central preparation and checking of calibration gases is done. An interactive plotting site of a lot of data can be found at:
http://www.esrl.noaa.gov/gmd/ccgg/iadv/ with some background, but ftp sites exist to download the (mostly cleaned) data from a lot of sites.
I already gave the link to the raw hourly averages of four stations. Although a heavy load, Excel can handle it.
the Mauna Loa record is considered to be a precise record and a reliable indicator of the REGIONAL TREND in the concentrations of atmospheric CO2 in the middle layers of the troposphere.
Yes, all CO2 measurements are local, some are regional, but most important: those far away from direct sources and sinks (in the middle of the oceans, in deserts including ice deserts, on mountain tops, and on coasts with seaside wind) all show very similar averages and trends, differing by less than 1% within one hemisphere and less than 2% between the NH and the SH for yearly averages. That represents 95% of the atmosphere. Thus it doesn’t matter whether the IPCC uses Mauna Loa or Barrow or South Pole data, or the average of them. These are nearly equally “global”; Mauna Loa simply has the longest continuous record, so it is mostly used. See:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends.jpg
All series show nearly the same trend, but with a NH-SH lag.
Why are MLO data reduced differently than South Pole and Baring Head data? How can similar data be compared under different rules of data reduction? What exactly is the “curve consisting of 4 harmonics”? And the “stiff spline”? But especially note the “linear gain factor”. This is exactly the factor by which one could “calibrate” the stations to look alike.
That may be a matter of timing: the MLO sheet may be of an older date, but in fact it doesn’t matter. I have plotted the raw data, the unsplined, but selected daily averages and the splined and selected monthly averages from MLO and SPO in one graph:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_mlo_spo_raw_select_2008.jpg
If you disagree with the way that the monthly averages represent the real measurements, I (and probably NOAA) am interested to hear of an alternative method of data reduction.
The “linear gain factor” is simply the year-by-year increase in level: while the seasonal variability is more or less constant, there is more variability in the increase. It is used together with the average curvature of previous years to adjust monthly averages to the middle of the month, when only a limited number of days is available for averaging at the beginning or end of the month.
More tomorrow, need some sleep now…

June 26, 2010 3:21 pm

Re mikelorrey’s continuing misunderstanding of equilibrium, 6/26/10 at 2:15 pm:
You left out one important word when you wrote,
“Stagnation is zero movement or change, equilibrium is zero NET movement or change. They are completely different things.”
That’s according to the definition you cited. You should have written,
“Stagnation is zero movement or change, dynamic equilibrium is zero NET movement or change. They are completely different things.”
(This is assuming in each instance that you are talking about changes in macroscopic variables.)
Imagine two systems in radiation balance, that is, in dynamic equilibrium. Take away one of the systems. The other will lose its thermal energy through radiation. Neither system was in thermodynamic equilibrium at any time.
(a) Dynamic equilibrium is not thermodynamic equilibrium.
(b) Dynamic equilibrium applies to reversible processes, which certainly excludes climate.

tonyb
Editor
June 26, 2010 3:52 pm

Ferdinand.
“It shows that CDIAC/Keeling Sr. was the master brain, beginning it all.”
Keeling knew nothing whatsoever about measuring CO2 when he was recruited for the job, and took it because he wanted to spend time in the open air rather than in a stuffy office. There is no doubt he was greatly influenced by Callendar, who had his own reasons for his selection of historic CO2 data.
It is instructive to read Keeling’s autobiography where he confirms his lack of knowledge and to go through Callendars archives.
Both were great men in their own way but it is stretching a point to say Keeling was the master brain.
Tonyb

Ferdinand Engelbeen
June 27, 2010 12:38 am

tonyb says:
June 26, 2010 at 3:52 pm
Neither system was in thermodynamic equilibrium at any time.
(a) Dynamic equilibrium is not thermodynamic equilibrium.
(b) Dynamic equilibrium applies to reversible processes, which certainly excludes climate.

As on many items, you have your own ideas about definitions, which differ from what others mean. In the case of two systems in radiation balance, the whole process is in thermodynamic equilibrium, and each of them is in thermodynamic equilibrium, as their state doesn’t change: for each of them, and for both together, all the inputs equal the outputs. For nearly everyone in the world with some technical/scientific knowledge, the word “dynamic” is implied without being mentioned, except by you.
Climate is never in dynamic equilibrium, as the inputs and outputs continuously change, and mostly not in an equal way. But it certainly is reversible.

Ferdinand Engelbeen
June 27, 2010 12:46 am

tonyb says:
June 26, 2010 at 3:52 pm
Hi Tony, some time ago…
The discussion mentioning Keeling was about the Mauna Loa data, where Keeling indeed was the master brain: he discovered the reason why there was such high variability over land (vegetation uptake/respiration), chose better locations (South Pole first, Mauna Loa second), and invented a continuous sampling method plus calibration which was about 100 times more accurate than most chemical methods used before him…

Ferdinand Engelbeen
June 27, 2010 1:57 am

Jeff Glassman says:
June 26, 2010 at 1:30 pm
R. F. Keeling and S. Piper were IPCC contributing authors for both the TAR and AR4.
Thus every contributing author of the IPCC (including Spencer, McIntyre,…) is on your personal blacklist of IPCC fraudsters?
The yes part of his two-way answer is wrong. Nowhere is the ocean in equilibrium, which is the ultimate state of stagnation. In equilibrium, there are no currents and no heat transfer (to use the redundant term). One cannot even say that something is close to equilibrium. A system either is or it is not in equilibrium. The surface layer is in turmoil, including all thin slices of it.
OK, if you insist: read into all mentions of “equilibrium” that 99% of all engineers in the world talk about a dynamic equilibrium, never a static one, as that doesn’t exist in the real world. Thus the ultimate surface layer of the oceans is always in dynamic equilibrium with the atmosphere, although a lot of molecules are transferred both ways… But as the layers below it aren’t in equilibrium with the atmosphere (at most places), there is always a difference in transfer rates. This results in a lot of CO2 degassing at the equator and a lot of absorption near the poles. But for at least the past 420,000 years, the whole CO2 system was in dynamic equilibrium, where the level in the atmosphere was influenced only by temperature changes. That changed 150 years ago with the human emissions.
This is not correct. The conclusion from Zeebe, et al., is for a fictional surface layer perpetually restrained to be in equilibrium. That conclusion relies on the stoichiometric equations of equilibrium, and the solution given graphically in the Bjerrum plot. The uptake and outgassing of CO2 is governed by Henry’s Law. Dissolution does not depend on the pressure difference or pressure gradient. Except for the fact that this air-sea exchange model is crucial to justifying AGW, it is a surprising error from a prominent, contributing, PhD professor of geophysics, and from someone who claims credentials as a chemist.
If you think that “Dissolution does not depend on the pressure difference or pressure gradient.”, you simply demonstrate that you don’t understand the physics and chemistry involved. If there was no pressure gradient between free CO2 in the water and in the atmosphere (that means equal transfer of molecules from water to air as reverse), then there was zero (net) flux (or a dynamic equilibrium).
What you are saying is that only temperature (via Henry’s Law) is involved in the amount of (free) CO2 in seawater, and hence the flux in either direction between ocean and atmosphere. This is completely wrong. The amount of free CO2 in seawater depends on many items other than temperature alone: pH, salt content, DIC content. From
http://www.soest.hawaii.edu/oceanography/faculty/zeebe_files/Publications/WolfGladrowMarChem07.pdf
you can learn (page 289) that pH and DIC have a direct influence on the (dissolved free) CO2 concentration at a constant temperature and salt content. Thus temperature alone doesn’t show what is really happening in solution, nor what will happen in reality. pCO2, measured or calculated, is the only realistic parameter which can give the difference between oceanic and atmospheric CO2 pressure, thus the direction and, to a certain extent, the quantity of the flux.
This is your error, not somebody else’s (including the many who have published, measured and calculated pCO2 from the oceans).
Engelbeen dismisses solubility, also known as dissolution and Henry’s Law, repeatedly. Here, he dismisses it as if someone had made a “theoretical calculation”. Once again, he refers to something no one said.
Henry’s law holds for one level of pH, DIC and salt content of seawater. Change one of these parameters and the curve according to Henry’s Law moves, as the concentration of free CO2 (thus the pressure to go in/out solution) changes. Using one curve of Henry’s Law for one level of the others is completely wrong.
The pCO2 of seawater is measured routinely on ships at sea by simply spraying seawater into a closed air system at the temperature of the seawater and measuring the CO2 level of that air. Thus the pCO2 of seawater is what the atmosphere would get if both seawater and air were in dynamic equilibrium at that temperature. Any partial pressure of CO2 in the real atmosphere above that would give a flux into the oceans, and vice versa. Temperature is important, but only one of the parameters involved. pCO2 gives the right answer…

tonyb
Editor
June 27, 2010 2:49 am

Ferdinand
Your 12.38 message was presumably aimed at Jeff Glassman; I never said anything about the subject 🙂
With regard to your 12.46: perhaps Keeling was the master brain EVENTUALLY, but when he first joined up he knew nothing of the subject, took considerable advice from Callendar, whom he greatly admired, and took his historic readings from him.
Callendar’s archives are instructive, as he clearly took historic concentration levels that suited his theory (that man was causing climate change) and discarded others.
Keeling later says in his autobiography that the 19th-century scientists were more accurate than he had initially believed (as a young, untried, inexperienced PhD) in measuring historic CO2 concentrations.
Keeling undoubtedly did a lot of interesting work, but we shall continue to agree to differ as to whether his records are 100 times more accurate than those of the 19th-century scientists he subsequently came to admire 🙂
All the best
Tonyb

Ferdinand Engelbeen
June 27, 2010 4:53 am

tonyb says:
June 27, 2010 at 2:49 am
Your 12.38 message was presumably aimed at Jeff Glassman-I never said anything about the subject :
Sorry, copied the wrong header…
Callendar and Suess and many others of his time (even Arrhenius before them) saw the greenhouse effect of CO2 as a positive item. The alarmists came later… Besides that, the criteria Callendar used resulted in a graph that 50 years later was confirmed by CO2 measurements in ice cores and firn, and even roughly in stomata data…
Most of the historical measurement methods were accurate to +/- 3% of the measurement or for a 300 ppmv level about +/- 10 ppmv. Not even accurate enough to detect the seasonal variability. The NDIR method, together with the calibration as developed by Keeling, is better than +/- 0.1 ppmv since the start of the measurements over 50 years ago.
Indeed, Keeling knew nothing about CO2 when he joined Scripps, but he was not only a fast learner but also had enormous analytical insight and practical skill, which made him one of the greatest scientists of the previous century, whatever he thought about global warming. As somebody else wrote: we could only hope that the temperature measurements were set up and controlled in an equal way…

June 27, 2010 7:56 am

On 6/27/10 at 12:38 am, Ferdinand Engelbeen said,
>>Like on many items, you have your own ideas about definitions, which differs from what others mean. In the case of two systems in radiation balance, the whole process is in thermodynamic equilibrium and each of them is in thermodynamic equilibrium as their state doesn’t change, as for each of them and both together, all the inputs equal the outputs. Where for near all people in the world with some technical/scientific knowledge is implied the word “dynamic”, without mentioning it, except for you.
>>Climate never is in dynamic equilibrium, as the inputs and outputs continuously change and mostly not in an equal way.
>>But it certainly is reversible.



First, rational dialog cannot exist without solid definitions. AGW as a political movement doesn’t require definitions, any more than other forms of politics do. Climate, though, is science, and in fact a branch of thermodynamics. Here, definitions are essential. I will assume all terms are drawn from the field of thermodynamics. However, if you want to introduce another term, I’m willing to accommodate your peculiar definitions.
IPCC said,
>>The air-sea exchange of CO2 is determined largely by the air-sea gradient in pCO2 between atmosphere and ocean. Equilibration of surface ocean and atmosphere occurs on a time scale of roughly one year. Gas exchange rates increase with wind speed (Wanninkhof and McGillis, 1999; Nightingale et al., 2000) and depend on other factors such as precipitation, heat flux, sea ice and surfactants. The magnitudes and uncertainties in local gas exchange rates are maximal at high wind speeds. In contrast, the equilibrium values for partitioning of CO2 between air and seawater and associated seawater pH values are well established (Zeebe and Wolf-Gladrow, 2001; see Box 7.3). AR4, ¶7.3.4.1 Overview of the Ocean Carbon Cycle, p. 528.
For openers, the first sentence is false.
>>Carbonate chemistry
>>In thermodynamic equilibrium, gaseous carbon dioxide (CO2(g)), and [CO2] are related by Henry’s law:
>>CO2(g) = [CO2] (at K_0), (1)
>>where K_0 is the temperature and salinity dependent solubility coefficient of CO2 in seawater (Weiss, 1974). The concentration of dissolved CO2 and the fugacity of gaseous CO2, fCO2, then obey the equation [CO2] = K_0 × fCO2, where the fugacity is virtually equal to the partial pressure, pCO2 (within ~1%). Zeebe, R. E., and D. A. Wolf-Gladrow, “Carbon dioxide, dissolved (ocean).” Encyclopedia of Paleoclimatology and Ancient Environments, Ed. V. Gornitz, Kluwer Academic Publishers, Earth Science Series, in press 2008, p. 1. http://www.soest.hawaii.edu/oceanography/faculty/zeebe_files/Publications/ZeebeWolfEnclp07.pdf
And
>> The pCO2 of a seawater sample refers to the pCO2 of a gas phase in equilibrium with that seawater sample. Id., p. 3.
Conclusions: Equilibrium, which IPCC does not define, should be thermodynamic equilibrium because climate is a thermodynamic problem. IPCC’s own authorities confirm that their work on which IPCC relies is for thermodynamic equilibrium. Furthermore, the thermodynamic relationships follow from Henry’s Law, which IPCC ignores and Engelbeen discounts. And those reactions of dissolution depend on the partial pressure of CO2 in the gas phase, and not on the fantastic “air-sea gradient in pCO2”.
Contrary to Engelbeen’s assertion, climate can be well modeled as being in dynamic equilibrium. As in all science, it’s a matter of the accuracy demanded of the model. For over half a million years, the global surface temperature has been about 14ºC (5ºC to +17ºC). Dynamic equilibrium may be exactly the same as steady state. IPCC uses the term “dynamic equilibrium” just once in its last two Assessment Reports, and that is with respect to ground water. AR4, ¶5.5.5.4, p. 418. IPCC uses the phrase “steady state” 47 times in those Reports.
Apparently Engelbeen has observed reversible processes in his experience.
>>[A] reversible process is one that is performed in such a way that, at the conclusion of the process, both the system and the local surroundings may be restored to their initial states, without producing any changes in the rest of the universe. A process that does not fulfill these stringent requirements is said to be irreversible.
>>The question immediately arises as to whether natural processes, i.e., the familiar processes of nature, are reversible or not. The purpose of this chapter is to show that it is a consequence of the second law of thermodynamics that all natural processes are irreversible. Zemansky, M. W., “Heat and Thermodynamics”, Ch. 8, Reversibility and Irreversibility, McGraw-Hill, Fourth Ed., 1957, p. 151-2.

Engelbeen’s problem is not so much, as he says, that English is not his first language, as it is that his experience is in a different universe.

tonyb
Editor
June 27, 2010 12:00 pm

Ferdinand
I merely wanted to establish the historic record accurately, as I disagreed with one of your earlier statements. We now seem to be agreed that whilst Keeling may have become one of the great brains over time, he was a complete novice when he started at Mauna Loa.
That he managed to immediately build a piece of equipment 100 times more accurate than the combined efforts of hundreds of great scientists during the 130 years of CO2 sampling prior to his appointment is something we will never agree on, nor the reliability of CO2 ice cores.
However, I think we must be agreed that IF CO2 was a constant 280 ppm before man’s intervention, and yet we recorded dramatic changes in the climate, then CO2 is a very weak climate driver and natural variability is a far more significant factor.
All the best
Tonyb

Ferdinand Engelbeen
June 27, 2010 1:42 pm

Jeff Glassman says:
June 27, 2010 at 7:56 am
Furthermore, the thermodynamic relationships follow from Henry’s Law, which IPCC ignores and Engelbeen discounts. And those reactions of dissolution depend on the partial pressure of CO2 in the gas phase, and not on the fantastic “air-sea gradient in pCO2”.
Neither the IPCC nor I ignore Henry’s Law; you ignore that temperature is only one of the parameters in CO2 solubility in seawater. In your own pages, you show only one graph of the solubility of CO2 in water with a fixed salt content, which fits part of the Vostok curve. But real seawater shows huge differences in salt content, DIC and pH. Each of them influences the pCO2 of seawater in equilibrium with the atmosphere.
Your own Hawaii university reference is clear about that:
The dissolved carbonate species react with water, hydrogen and hydroxyl ions and are related by the equilibria:
CO2 + H2O = HCO3(-) + H(+) = CO3(2-) + 2 H(+)

where the = sign denotes a bidirectional (equilibrium) reaction and the charges in parentheses are superscripts.
Thus if the pH is lower (for whatever reason), or the carbonate content is higher (for whatever reason), the reactions are pushed to the left side, and more CO2 is set free at identical salt content and temperature. That is what is lacking in all your reasoning about Henry’s Law and the solubility curve. The increase of plankton (blooms) in summer has a profound effect on CO2 in solution: their use of bicarbonate to build their shells reduces the pCO2 of the ocean surface, so less CO2 is released to the atmosphere.
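The pH effect described in this exchange can be sketched with the textbook two-step carbonate dissociation. The constants pK1 ≈ 5.9 and pK2 ≈ 8.9 below are round illustration values for seawater near 25 °C, not measured data:

```python
def carbonate_fractions(ph, pk1=5.9, pk2=8.9):
    """Fractions of DIC present as CO2(aq), HCO3(-) and CO3(2-) at a given pH."""
    h = 10.0 ** -ph
    k1, k2 = 10.0 ** -pk1, 10.0 ** -pk2
    # Ratios follow from CO2 + H2O = HCO3(-) + H(+) = CO3(2-) + 2 H(+)
    denom = 1.0 + k1 / h + k1 * k2 / (h * h)
    co2 = 1.0 / denom
    hco3 = (k1 / h) / denom
    co3 = (k1 * k2 / (h * h)) / denom
    return co2, hco3, co3

# Lowering pH pushes the equilibria to the left, freeing more CO2:
for ph in (8.2, 7.8):
    co2, hco3, co3 = carbonate_fractions(ph)
    print(f"pH {ph}: CO2 {co2:.3f}, HCO3 {hco3:.3f}, CO3 {co3:.3f}")
```

Lowering pH shifts DIC toward free CO2, which is the point being argued: Henry’s Law still holds, but the free-CO2 pool it acts on moves with pH and DIC.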
Further:
The pCO2 of a seawater sample refers to the pCO2 of a gas phase in equilibrium with that seawater sample.
Why the emphasis on seawater? This is the definition of the pCO2 of seawater, not of the gas phase above it… If the gas phase has a higher pCO2 than the pCO2 of seawater, then CO2 from the gas phase will be pushed into the seawater; if the pCO2 of the gas phase is lower, CO2 will come out of the water. If both are equal, the system is in equilibrium. The transfer rate in all cases is proportional to the difference in pCO2 between the water and the atmosphere above it, whether negative, positive or zero.
Please note that they don’t use “dynamic”, while in all three cases there are dynamics involved, as molecules are continuously transferred in both directions. Thus the IPCC and I are completely right that one needs to look at the difference between the pCO2 of seawater and the pCO2 of the atmosphere, because that is the driving force for uptake or release.
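The proportionality asserted here is the standard bulk parameterization of air-sea gas exchange: net flux = transfer velocity × solubility × (pCO2_air − pCO2_sea). A minimal sketch; the transfer velocity and solubility used below are illustrative orders of magnitude, not measured values:

```python
def air_sea_co2_flux(pco2_air, pco2_sea, k0, transfer_velocity):
    """Net CO2 flux into the ocean (positive = uptake).

    pCO2 in atm, k0 (solubility) in mol/(m^3*atm),
    transfer_velocity in m/s -> flux in mol/(m^2*s).
    """
    return transfer_velocity * k0 * (pco2_air - pco2_sea)

# Illustrative numbers only: tropical water outgasses (sea pCO2 > air),
# polar water takes CO2 up (sea pCO2 < air).
k0 = 30.0    # mol/(m^3*atm), an order of magnitude for seawater
kw = 5.0e-5  # m/s, a typical gas transfer velocity (~4 m/day)
print(air_sea_co2_flux(390e-6, 450e-6, k0, kw))  # negative: outgassing
print(air_sea_co2_flux(390e-6, 340e-6, k0, kw))  # positive: uptake
```

The sign of the flux flips with the sign of the pCO2 difference, which is exactly the “driving force” reading of the IPCC sentence under dispute.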
No further discussion about the “irreversible” climate, as that is not relevant here.

June 27, 2010 4:08 pm

For tonyb and Ferdinand Engelbeen, passim:
For a different historical perspective, I recommend Spencer Weart’s online work, “The Discovery of Global Warming”. He gives major credit to Roger Revelle, who hired Dave Keeling early in Dave’s career. Weart says,
>> Before scientists would take greenhouse effect warming seriously, they had to get past a counter-argument of long standing. It seemed certain that the immense mass of the oceans would quickly absorb whatever excess carbon dioxide might come from human activities. Roger Revelle discovered that the peculiar chemistry of sea water prevents that from happening. His 1957 paper with Hans Suess is now widely regarded as the opening shot in the global warming debates. This essay not only describes Revelle’s discovery in detail, but serves as an extended example of how research found essential material support and intellectual stimulus in the context of the Cold War.
Revelle discovered nothing. He realized that CO2 emissions were not nearly large enough to support the model that CO2 caused global warming. So he advanced a conjecture that a buffer existed in sea water to cause manmade(!) CO2 to accumulate in the atmosphere. He couldn’t quantify his model, so he engaged Hans Suess for the task. Together they produced the 1957 article. It was not a technical paper, but a pitch for a share of the funds for the upcoming (1958) International Geophysical Year that Revelle was promoting. In their pitch, Revelle and Suess confirmed that they could not set the parameters of the problem to show that CO2 accumulated in the atmosphere, hence more funds were required. Isn’t it always the case?
IPCC seized on Revelle’s buffer as if the Revelle & Suess article were a technical paper, and tried to resurrect Revelle’s conjecture in its Fourth Assessment Report. It said,
>>Carbon Cycle Feedbacks to Changes in Atmospheric Carbon Dioxide
>>Chemical buffering of anthropogenic CO2 is the quantitatively most important oceanic process acting as a carbon sink. Carbon dioxide entering the ocean is buffered due to scavenging by the CO3^(2–) ions and conversion to HCO3^(–), that is, the resulting increase in gaseous seawater CO2 concentration is smaller than the amount of CO2 added per unit of seawater volume. Carbon dioxide buffering in seawater is quantified by the Revelle factor (‘buffer factor’, Equation (7.3)), relating the fractional change in seawater pCO2 to the fractional change in total DIC after re-equilibration (Revelle and Suess, 1957; Zeebe and Wolf-Gladrow, 2001): … AR4, ¶7.3.4.2, p. 531.
In case this isn’t perfectly clear, what is most important is the Revelle conjecture: that seawater, while acting as a CO2 sink, must buffer against ACO2 absorption in order for AGW to work. Note also that IPCC again refers to “seawater pCO2”, the fictional parameter that contradicts their authorities, Zeebe and Wolf-Gladrow.
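Whatever one makes of the conjecture, the AR4 definition quoted above is plain arithmetic. A minimal sketch, using toy numbers rather than measured values:

```python
# Revelle ("buffer") factor as defined in the AR4 passage quoted above:
#   R = (d pCO2 / pCO2) / (d DIC / DIC)
# All numbers below are toy values for illustration, not measurements.

def revelle_factor(pco2_0, pco2_1, dic_0, dic_1):
    """Fractional change in seawater pCO2 divided by fractional change in DIC."""
    return ((pco2_1 - pco2_0) / pco2_0) / ((dic_1 - dic_0) / dic_0)

# A 10% rise in seawater pCO2 accompanied by only a 1% rise in DIC gives
# R = 10, i.e. the ocean takes up proportionally far less carbon than the
# rise in CO2 partial pressure would naively suggest.
R = revelle_factor(pco2_0=350.0, pco2_1=385.0, dic_0=2000.0, dic_1=2020.0)
print(round(R, 6))  # 10.0
```

A higher R means a stiffer buffer: for a given atmospheric increase, less carbon ends up in the water.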
IPCC’s effort failed, but one wouldn’t notice from a casual reading of its Report. When IPCC’s effort turned out to be a rediscovery of solubility, IPCC concealed the information. See discussion, rocketscientistsjournal.com, ” On Why CO2 Is Known Not To Have Accumulated in the Atmosphere, etc.”, Part 5, re. Figures 3 and 4.
Dave Keeling’s contribution was to help advance the state of the art in CO2 measurements, a pet project he was pursuing for fun at Big Sur, California, circa 1955. His long held ambition was to work outdoors, and to establish a global baseline for atmospheric CO2 concentration. Sadly that was a failure and irrelevant to climate.
In 1956, Keeling’s CO2 work came to the attention of Harry Wexler, director, Division of Meteorological Research, US Weather Bureau, whose pet project was the building and staffing of a new observatory at Mauna Loa. So while cautioning that measurements should not be made within the influence of sinks or sources, Keeling managed to collect data from a spot smack in the plume of the ocean’s massive, wind-modulated, CO2 outgassing.
Concurrently, Keeling’s work came to the attention of Roger Revelle, Director of Scripps, who was already championing CO2 as Callendar’s greenhouse agent causing global warming. Revelle hired Keeling. In September, 1956, Revelle with Hans Suess wrote a pitch for IGY funding for Keeling’s measurements. The ocean’s aversion to ACO2 was going to cause it to build up and produce global warming. What has withstood the tests of time and reason is their sentence turned slogan: “Thus human beings are now carrying out a large scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future.”
Keeling says that he wasn’t convinced until 1965 after working with Revelle on the President’s Science Advisory Committee Panel on Environmental Pollution.
An article published in the initial issue of Cosmos by Roger Revelle shortly before his death concluded, “The scientific basis for greenhouse warming is too uncertain to justify drastic action at this time.” The article seemed out of character and inconsistent with other recent pronouncements by Revelle. AGW proponents allege Roger Revelle, in failing health, was tricked into signing a paper drafted by others.
But now, 45 years later, we can say with confidence that CO2 has nothing to do with the climate. See rocketscientistsjournal.com. And the time for drastic action has arrived! Isn’t it also time for a couple of posthumous Nobel peace prizes?

Joel Shore
June 27, 2010 4:36 pm

tonyb says:

However I think we must be agreed that IF CO2 was a constant 280ppm before man’s intervention, and yet we recorded dramatic changes in the climate, then CO2 is a very weak climate driver and natural variability is a far more significant factor.

Hi, tonyb!
Actually, that doesn’t follow at all. How can you tell if CO2 is a strong or weak driver when it isn’t changing? If it changes significantly and climate doesn’t, then you have a logical argument. In your case, not so much. (And, of course, whether the changes in climate…especially on a global scale…were that dramatic over the last few thousand years is debatable, but even leaving that aside, your argument doesn’t follow.)

June 27, 2010 6:11 pm

Ferdinand Engelbeen, 6/27/10 at 7:14 am said,
>>Thus every contributing author of the IPCC (including Spencer, McIntyre,…) are on your personal blacklist of fraudsters of the IPCC?
I made no such accusations against Keeling and Piper. You had said,
>>The IPCC isn’t involved in calibrations or procedures around CO2.
To prove you wrong, I quoted the calibration techniques applied to those monthly records you say are pure laboratory data. Those techniques included an unquantified “linear gain factor”, applied not to MLO but to SP and BH. After that “linear gain factor”, which may be variable in time and station dependent, the trends in the SP, BH, and MLO data all agree. That is called “calibration” in the IPCC world. It involves CO2. It was reported by IPCC contributing authors. IPCC IS involved in calibrations and procedures around CO2.
You took my citation out of context, and supplied your own meaning with no justification, just to hurl an ad hominem attack. Shame.
You wrote,
>>If you think that “Dissolution does not depend on the pressure difference or pressure gradient.”, you simply demonstrate that you don’t understand the physics and chemistry involved. If there was no pressure gradient between free CO2 in the water and in the atmosphere (that means equal transfer of molecules from water to air as reverse), then there was zero (net) flux (or a dynamic equilibrium).
I gave you an encyclopedic treatment of the subject written by Zeebe and Wolf-Gladrow in support of the physics as I stated it. I quoted the misinterpretation by IPCC, referenced to the same writers. Regardless, you continue to hold with IPCC’s version.
In the spirit of science and accuracy, I will concede that there is indeed a pressure gradient between the partial pressure of CO2 in the atmosphere and the partial pressure of CO2 in the water. The latter is zero. When a gas is dissolved in a solvent, it exerts no more gas pressure. Therefore, the gradient you and IPCC endorse is equal to the partial pressure of CO2 in the atmosphere. Is that better?
You wrote,
>>What you are saying is that only temperature (via Henry’s Law) is involved in the amount of (free) CO2 in seawater and hence the flux in either direction between ocean and atmosphere. This is completely wrong. The amount of free CO2 in seawater depends of a lot of other items than temperature alone: pH, salt content, DIC content.
And
>>Henry’s law holds for one level of pH, DIC and salt content of seawater.
What do you mean by “free CO2”? Is that total CO2 = DIC + DOC + POC, or just one of them, or is it molecular (un-ionized) CO2? What I am saying is that Henry’s Law applies to total gas dissolved in a solvent, and that it depends on temperature, pressure, salinity, and by conjecture, on molecular weight. Further, it is an equilibrium property. With regard to the air-sea flux, the gas of concern is CO2 and the solvent is sea water. The rate of dissolution is also proportional to wind velocity. With regard to the air-sea flux, pressure and salinity don’t vary significantly, and wind velocity averages out, all with respect to temperature. Ocean acidity (pH) and relative DIC content are effects of dissolved CO2, and not contributors to dissolution. Henry’s Law is not known to depend on pH or DIC.
To the extent that your physics is different, I challenge you to provide unbiased evidence as I have done for my position. If you rely on IPCC, including any of its authors, you will fail my challenge.
With regard to your following observations,
>> The pCO2 of seawater is measured routinely on ships at sea by simply spraying seawater in a closed air system at the temperature of the seawater and measuring the CO2 level of that air. Thus pCO2 of seawater is what the atmosphere would get if both seawater and air were in dynamic equilibrium at that temperature. Any partial pressure of CO2 in the real atmosphere above that would give a flux into the oceans and vv. Temperature is important, but only one of the parameters involved. pCO2 gives the right answer…
Good enough. But you fail to see that in the procedure you describe, the partial pressure measured is that in the air, not in the seawater. When we speak of the partial pressure of CO2 in water, that is jargon, meaning we deem the water to have that partial pressure. Underlying the jargon, the reference is to the pCO2 which only exists in the air. Dissolved CO2 exerts no pressure.
And to the contrary of your last two sentences, temperature of seawater is the key parameter.
The partial pressure, pCO2, is important, too, because the ocean will dissolve CO2 out of the atmosphere in proportion to pCO2, that is, in proportion to the CO2 concentration in the atmosphere immediately above the water. Water exposed to the CO2 rich air emitted from the Equatorial outgassing will absorb more CO2 than would water at the same temperature not exposed to that outgassing.
Water exposed to CO2-depleted air in the polar regions will dissolve less CO2 than would water at the same temperature exposed to richer air. Water in transit from the tropics to the poles will adjust its CO2 content along its path, following the local partial pressure. At the time of the descent of what is the THC at the poles, seawater will be loaded with CO2 corresponding to its solubility at about 0ºC and at the local partial pressure in the polar atmosphere.
Your reference on 6/27/10 at 1:42 quoting U. of Hawaii is no different than what anyone else is saying, including IPCC and Zeebe & Wolf-Gladrow. Please note that in every case, the chemical equations relate to the state of equilibrium. Those equations will hold once the surface layer of the ocean reaches equilibrium, and not a moment before. Because equilibrium is a state and not a continuous measure, we can’t even say that the conditions in the surface layer approach the equilibrium conditions in any sense.
I have mentioned the solution to those equations in posts here, and you have not responded. As I said, it is solved graphically in the Bjerrum plot. You might want to read about it on rocketscientistsjournal.com, “On Why CO2 is Known Not To Have Accumulated in the Atmosphere, etc.”, which was the source of Steve Hempell’s 6/7/10 query to Willis Eschenbach that kicked off his baseless diatribe early in this thread. I would draw your attention first to Figures 7 and 8. I infer that you are aware of this graphical solution because you note the “reactions are pushed to the left side”.
Instead you talk about the solution as if you were introducing it for the first time on this thread. But then, you utterly confuse cause and effect. The pH and the concentration of ions in the surface layer do not regulate Henry’s Law, the dissolution of CO2. They do not create a bottleneck.
Nor is the reverse true. The dissolution of CO2 does not shift the Bjerrum plot one way or the other, that is, until equilibrium is reached. And that never happens.
You say,
>>If the gas phase has a higher pCO2 than the pCO2 of seawater, then CO2 from the gas phase will be pushed into the seawater, if the pCO2 of the gas phase is lower, CO2 will come out of the water. If both are equal, the system is in equilibrium. The transfer rate in all cases is proportional to the difference in pCO2 between water and the atmosphere above it. Negative, positive and zero.
This is pure balderdash. Even Henry’s Law tells us nothing about the rate of exchange. Since the stoichiometric equations of equilibria and the dissolution of CO2 in water are governed by the same Law, isn’t it gratifying that the physics tells us nothing about the trajectory of the state of the system?
And isn’t it ironic that so much time is spent on CO2, that it has been thoroughly misunderstood by IPCC, and in the end it has no measurable effect on climate?

tonyb
Editor
June 28, 2010 12:02 am

Joel Shore
Great to hear from you!
I think you missed the implied smiley after my comment. Ferdinand and I go back a long way…
I think it would be perfectly possible to make a logical argument that natural variability is by far the biggest factor in our climate change, but I wasn’t seriously attempting to do that here. The argument between Ferdinand and Jeff is far too entertaining for me to want to hijack the thread. Perhaps another time.
Tonyb
Ps I have missed you and Scott Mandia posting over here as regularly as you once did.

Ferdinand Engelbeen
June 28, 2010 3:51 am

Jeff Glassman says:
June 27, 2010 at 4:08 pm
Revelle discovered nothing. He realized that CO2 emissions were not near large enough to support the model that CO2 caused global warming. So he advanced a conjecture that a buffer existed in sea water to cause manmade(!) CO2 to accumulate in the atmosphere.
Well, if you can prove that Revelle was wrong in his “conjecture”, do it: there are hundreds of scientific works confirming it. The basics of buffering solutions are taught in first-year chemistry classes…
His long held ambition was to work outdoors, and to establish a global baseline for atmospheric CO2 concentration. Sadly that was a failure and irrelevant to climate.

It was Keeling who was smart enough to see why there were large CO2 variations over land, except in the afternoon, when atmospheric mixing shows similar values everywhere. It was his idea that one could find a “background” CO2 level if measured in remote places. And he did find that in 95% of the atmosphere. Far from a failure, he was and is proven right. For his very interesting story, read his autobiography:
http://scrippsco2.ucsd.edu/publications/keeling_autobiography.pdf
In case this isn’t perfectly clear, what is most important is the Revelle conjecture: that seawater, while acting as a CO2 sink, must buffer against ACO2 absorption in order for AGW to work. Note also that IPCC again refers to “seawater pCO2″, the fictional parameter that contradicts their authorities, Zeebe and Wolf-Gladrow.
Again you haven’t grasped even the basics of what pCO2 means: the partial pressure of CO2 from a seawater sample as can be measured in air above it when at (dynamic) equilibrium. It is routinely measured and can be calculated from the other parameters (including, but not solely, Henry’s Law). The pressure difference between the pCO2 (=volume ratio) in the atmosphere and the pCO2 of the ocean water is what drives the direction and strength of the CO2 flux. There is not the slightest contradiction with Zeebe or others, as also your own references show.
Keeling managed to collect data from a spot smack in the plume of the ocean’s massive, wind-modulated, CO2 outgassing.
Your own idea of the speed of outgassing/absorption… Not backed up by any real world observation. Near identical values were found at the South Pole, which was measured first, before Mauna Loa. I don’t think that there is much outgassing nor sinks in thousands meters of ice.
But now, 45 years later, we can say with confidence that CO2 has nothing to do with the climate.
You have a quite strong manner of categoric statements. I should say the “CO2 has little to do with climate”, much less than the GCM’s try to convince us of. But “nothing” can’t be said, as we simply have not enough data to show the effect of more CO2 in the atmosphere, one way or the other.

Ferdinand Engelbeen
June 28, 2010 7:34 am

Jeff Glassman says:
June 27, 2010 at 6:11 pm
To prove you wrong, I quoted the calibration techniques applied to those monthly records you say are pure laboratory data. Those techniques included an unquantified “linear gain factor”, applied not to MLO but to SP and BH.
I never said that the monthly average data are pure laboratory data. The raw hourly averages are pure (calculated) laboratory data. All further averages are based on a selection of these data by the different organisations, not the IPCC. The selection doesn’t change the average, curvature or trend of the raw data, nor the differences between different stations, as I have repeatedly shown in the combined graphs of Mauna Loa with Samoa and with South Pole data. The “linear gain factor” is the difference of the seasonal curves of the present year with the previous year, as can be seen here:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/mlo_co2_seasons.jpg
Nothing sinister here and again, the IPCC is not involved in any way.
Correction procedures for missing data used for the averages may change over time, for Mauna Loa as well as for other stations. See WUWT of some time ago:
http://wattsupwiththat.com/2008/08/06/post-mortem-on-the-mauna-loa-co2-data-eruption/
In all cases there is no difference in average or trend between the “pure” laboratory data and the monthly averaged data beyond a few tenths of a ppmv.
If you have any indication of a change in these factors between the raw data and the published averages, we can discuss this further. Otherwise stop your allegations; you only undermine your own credibility.
In the spirit of science and accuracy, I will concede that there is indeed a pressure gradient between the partial pressure of CO2 in the atmosphere and the partial pressure of CO2 in the water. The latter is zero. When a gas is dissolved in a solvent, it exerts no more gas pressure. Therefore, the gradient you and IPCC endorse is equal to the partial pressure of CO2 in the atmosphere. Is that better?
This should close the books for any first-year chemistry student: give him/her an F grade with the urgent request to look for another field of study, as far away as possible from chemistry (or even physics).
Even Henry’s Law shows something different:
[CO2] = K_0 · pCO2(g),  (1)
Where [CO2] is gaseous CO2 in the water, thus free, not dissolved, CO2 in the liquid phase. This exerts a pressure to get out of the water, which is indirectly measured by bringing it into equilibrium with a small amount of air above it. That is what Henry’s law says. For seawater it is never zero and is called the pCO2 of the water phase, which may differ from the pCO2 in the atmosphere above it. While measured indirectly, it is the real pressure of free CO2 dissolved in the water, otherwise it would never come out again. In physical terms, it is called the “fugacity” of the dissolved CO2.
From http://www.britannica.com/EBchecked/topic/221428/fugacity
a measure of the tendency of a component of a liquid mixture to escape, or vaporize, from the mixture. The composition of the vapour form of the mixture, above the liquid, is not the same as that of the liquid mixture; it is richer in the molecules of that component that has a greater tendency to escape from the liquid phase. The fugacity of a component in a mixture is essentially the pressure that it exerts in the vapour phase when in equilibrium with the liquid mixture.
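The Henry’s Law relationship described here can be sketched numerically. The solubility function below follows the functional form of Weiss (1974); the coefficients are quoted from memory and should be treated as illustrative only, but the qualitative point, that colder water dissolves more CO2 at the same partial pressure, does not depend on their exact values:

```python
import math

def co2_solubility(t_celsius, salinity):
    """CO2 solubility K0 in mol kg^-1 atm^-1, in the functional form of
    Weiss (1974).  Coefficients quoted from memory -- illustrative only."""
    t = t_celsius + 273.15
    ln_k0 = (-58.0931
             + 90.5069 * (100.0 / t)
             + 22.2940 * math.log(t / 100.0)
             + salinity * (0.027766
                           - 0.025888 * (t / 100.0)
                           + 0.0050578 * (t / 100.0) ** 2))
    return math.exp(ln_k0)

# Henry's Law: [CO2] = K0 * pCO2(g).  At the same pCO2, cold polar water
# holds more dissolved CO2 than warm tropical water:
k0_polar = co2_solubility(0.0, 35.0)
k0_tropic = co2_solubility(25.0, 35.0)
print(k0_polar > k0_tropic)  # True: solubility falls as temperature rises
```

This is the temperature dependence both sides of the thread agree on; the dispute is over what else, besides K0 and temperature, controls the flux.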
What I am saying is that Henry’s Law applies to total gas dissolved in a solvent
That is where you go wrong: Henry’s Law is not about “total” gas dissolved in a solvent; the right term [CO2] is the concentration of non-ionized CO2 in solution (including H2CO3). Bicarbonate and carbonate ions are not part of Henry’s Law, but part of the ionic buffer reactions, and all together form DIC. If for any reason more [CO2] is formed (lower pH from undersea volcanic HCl) or less (from algal blooms), then the result of Henry’s Law is influenced, as [CO2] is changed, even if the temperature and salinity didn’t change. That is the difference between a cola/fresh water/champagne bottle and seawater.
More details here:
http://www.eoearth.org/article/Marine_carbonate_chemistry#Dissolved_Carbon_Dioxide
Some excerpts:
The sum of [CO2(aq)] and [H2CO3] is denoted as [CO2].
At typical surface seawater pH of 8.2, the speciation between [CO2], [HCO3(-)], and [CO3(2-)] hence is 0.5%, 89%, and 10.5%, showing that most of the dissolved CO2 is in the form of HCO3- and not CO2
Thus a doubling of CO2 in the atmosphere will at first double [CO2] from 0.5% to 1%, and that will increase bicarbonate and carbonate somewhat, but will not double those two; thus seawater doesn’t double in CO2 content.
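The speciation arithmetic above can be sketched as follows. The 0.5% fraction is from the excerpt; the Revelle factor of ~10 is an assumed typical textbook-range value, not derived here:

```python
# Toy version of the speciation argument: doubling atmospheric pCO2 doubles
# dissolved [CO2] via Henry's Law, but because [CO2] is only ~0.5% of DIC
# and the ionic buffer reactions respond weakly, total DIC rises far less
# than 2x.  All numbers are illustrative.

dic = 2000.0                  # umol/kg, a representative surface value
co2_aq = 0.005 * dic          # ~0.5% of DIC is un-ionized CO2
revelle = 10.0                # assumed buffer factor

new_co2_aq = 2.0 * co2_aq              # [CO2] tracks pCO2 and doubles
new_dic = dic * (1.0 + 1.0 / revelle)  # 100% pCO2 rise / R -> 10% DIC rise

print(new_co2_aq / co2_aq)          # 2.0 -- dissolved CO2 doubles
print(round(new_dic / dic, 6))      # 1.1 -- total carbon rises only ~10%
```

The gap between the two printed ratios is exactly the point of the excerpt: the ocean’s total CO2 content does not track the atmosphere one-for-one.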
This shows where you are wrong and makes that several assumptions made with the wrong idea about Henry’s Law are wrong too. More about that later…

Ferdinand Engelbeen
June 28, 2010 7:47 am

Sorry, a mistake in the previous message:
Where [CO2] is gaseous CO2 in the water, thus free, not dissolved, CO2 in the liquid phase.
Of course it is dissolved in the liquid phase, be it still as CO2 (and for a small part as H2CO3), but not ionised.

June 28, 2010 8:56 am

Ferdinand Engelbeen, 6/28/10 at 3:51 am said,
>> Well, if you can prove that Revelle was wrong in his “conjecture”, do it: there are hundreds of scientific works confirming it. The basics of buffering solutions are taught in first-year chemistry classes…

Expect proofs in logic and mathematics, not science. You will find a thorough treatment in rocketscientistsjournal.com, “On Why CO2 is Known Not To Have Accumulated in the Atmosphere, etc.”, Part 5, especially the discussion around Figures 3 to 5.
If there are hundreds of scientific works confirming Revelle, how about one such citation that’s freely available on-line?
Here’s an interesting passage from Charles Keeling’s autobiography:
>>Because I could by then, in retrospect, see a seasonal variation in the carbon isotopic ratios of CO2 in my earlier afternoon data from Caltech, I proposed that the activity of plants growing on land was the cause of the seasonal cycle. This activity explained why maximum CO2 concentrations in both hemispheres were observed in the spring, when most plants begin to grow. The observed year by year rise in concentration was close to that expected if all of the industrial CO2 from combustion of fossil fuels remained in the air. Aware, however, of Revelle’s conviction that the oceans must be absorbing some of that CO2, I noted that longer records might cause a revision in the estimated rise. This was a good judgment call. In the 1970s, with much longer records of CO2, a coworker, Robert Bacastow, discovered that a transient release of CO2 from natural sources, associated with a powerful El Niño event in 1958, had exaggerated the average rise in these early data.
So at first, Keeling required 100% of ACO2 from fossil fuels to remain in the atmosphere. That could not be substantiated, and what he witnessed in his early data was attributed to El Niño. In 2006, A. C. Manning and Keeling’s son, R. F. Keeling, re-examined the seasonal variability Charles had attributed to terrestrial growing seasons. They said,
>>In conclusion, we have shown that a significant fraction of the short-term variability in our CO2 and O2/N2 data at MLO can be explained by real atmospheric variability rather than by artifacts of our flask sampling procedure or analysis. We have suggested that this variability may be related to seasonal north-south concentration gradients that exist in the tropics as a result of opposing seasonal variations at middle latitudes in either hemisphere and have given statistical evidence to support this.
Manning, A.C. and R. F. Keeling, “Correlations in Short-Term Variations in Atmospheric Oxygen and Carbon Dioxide at Mauna Loa Observatory”, 11/8/06.
Apparently the authors didn’t accept that the MLO variations were due to terrestrial biology. In fact, biological effects are not even a candidate source for the MLO variations. Also, note that Manning and Keeling restrict their study to MLO, and not some alleged global data set. The authors are not even explicit in identifying that the MLO short-term variations they address are on the seasonal scale. Regardless, Manning and Ralph Keeling leave the matter by saying that, based on the data available, the best estimate for the cause of the seasonal effects at MLO is north-south seasonal transport between the hemispheres, coupled with their differing CO2 concentrations. My suggestion to them is to examine the lie of the plume of the oceanic outgassing as it follows the seasonal trade winds across Hawaii.
You protest the physics of the partial pressure of a gas by repeating the same fiction. Your model does not agree with Zeebe and Wolf-Gladrow. As I quoted on 6/27/10, the latter state that the concentration of dissolved CO2 is proportional to the partial pressure of CO2 in the gas above the solvent, where the constant of proportionality depends on temperature and salinity. Your model is a contradiction because it involves a partial pressure gradient.
I made no claims about the “speed of outgassing/absorption” beyond saying that it is affected by the wind and is otherwise irrelevant because it is much faster than climate time scales and has no net global effect. Keeling’s data are backed by Keeling’s data. As I have painfully laid out for you in detail with full references, IPCC made the South Pole data look “near identical” to MLO by applying a “linear gain factor”, whose form and value it keeps secret.
How can one object to having more data? What we need before we spend any money on data collection is a review of climate based on an honest and complete model fitting the data we already have and contradicting none.

Ferdinand Engelbeen
June 28, 2010 9:27 am

Further discussion…
The partial pressure, pCO2, is important, too, because the ocean will dissolve CO2 out of the atmosphere in proportion to pCO2, that is, in proportion to the CO2 concentration in the atmosphere immediately above the water.
No, it is the difference between pCO2 of the atmosphere and pCO2 of the oceans which drives the local direction and speed (together with other factors like wind speed). As the (largely temperature driven) oceanic pCO2 values show, the pCO2 difference air-ocean surface is highly negative near the equator and positive near the poles. In the mid-latitudes, the difference is positive in summer and negative in winter.
At the time of the descent of what is the THC at the poles, seawater will be loaded with CO2 corresponding to its solubility at about 0ºC and at the local partial pressure in the polar atmosphere.
The measured pCO2 of seawater near the North Pole is about 230 microatm. The measured pCO2 of the atmosphere at 7 m height at Barrow is about 390 microatm. It is that difference which drives the uptake. There is little difference between the atmospheric pCO2 near Barrow (raw or selected data…) and near the equator and at Antarctica. Barrow is higher than at the equator and that is higher than near Antarctica. The yearly average difference is less than 5 ppmv.
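The sign convention in this paragraph can be sketched directly. The `k_ex` coefficient below is an arbitrary illustrative constant, not a measured gas-transfer velocity:

```python
# The air-sea CO2 flux is proportional to (pCO2_air - pCO2_sea); a positive
# value means uptake by the ocean.  k_ex is an arbitrary illustrative
# exchange coefficient, not a measured gas-transfer velocity.

def air_sea_flux(pco2_air_uatm, pco2_sea_uatm, k_ex=0.05):
    """Net flux into the ocean (arbitrary units), positive = uptake."""
    return k_ex * (pco2_air_uatm - pco2_sea_uatm)

# Near Barrow, with the values quoted above (air ~390, sea ~230 microatm),
# the difference drives uptake:
print(air_sea_flux(390.0, 230.0) > 0)  # True -> flux into the ocean

# In an equatorial upwelling zone (toy value: sea pCO2 above the air's),
# the same relation gives outgassing:
print(air_sea_flux(390.0, 450.0) < 0)  # True -> flux out of the ocean
```

With equal partial pressures the net flux is zero, which is the dynamic-equilibrium case both commenters describe.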
Please note that in every case, the chemical equations relate to the state of equilibrium. Those equations will hold once the surface layer of the ocean reaches equilibrium, and not a moment before.
Again you are completely lost: these equations are describing a dynamic equilibrium, not a static one. All reactions shift if the concentration of one of the constituents changes, including [H(+)] (pH).
The pH and the concentration of ions in the surface layer do not regulate Henry’s Law, the dissolution of CO2.
Yes, they do influence [CO2], thus the dissolution of CO2. Henry’s Law only regulates the next step between [CO2] and CO2(atm).
And isn’t it ironic that so much time is spent on CO2, that it has been thoroughly misunderstood by IPCC, and in the end it has no measurable affect on climate?
While one can have a lot of critique of the other cornerstone of the AGW theory, there is little doubt that humans are responsible for the recent increase in the atmosphere. All known observations support that. None contradict it. To counter the “consensus” on this, one needs very solid arguments. If these are based on wrong assumptions, that is counterproductive for the arguments used on the other points, where the “consensus” is on more shaky ground.

tonyb
Editor
June 28, 2010 12:46 pm

Ferdinand 9.27am
I wonder if we can agree on the cornerstones of AGW? This will help us to focus on countering the weak points
Hypothesis
1) Human-introduced CO2 causes substantial and alarming warming. (radiative physics)
This ’cause’ has had the following (claimed) effects;
2) Sea levels are rising through thermal expansion/glacier melt
3) Land temperatures have been rising since 1880
4) The Earth’s climate has been relatively static throughout history until the introduction of (1) caused temperature to rise.
I am not asking anyone to agree or disagree as to cause and effect, just whether this sums up the four cornerstones which need to be demolished.
Tonyb

Ferdinand Engelbeen
June 28, 2010 2:20 pm

Jeff Glassman says:
June 28, 2010 at 8:56 am
While looking at part 5 of your “why CO2 has not accumulated…”, I saw the Takahashi diagram, where you say:
For the minimum values of each Takahashi cell, the total outgassing, uptake, and net would be (-4.11, 6.64, -3.44) GgmC/yr, at average they are (-3.32, 1.20, -2.12) GgmC/yr, and at maximum, (-2.54, 1.73, -8.05) GgmC/yr. {Begin rev. 12/30/09}The values should be on the order of 90 PetagmC/yr, an unresolved discrepancy of 10^7.
What you haven’t seen is that the Takahashi diagram is about the net release/uptake of CO2 for each cell, that is, the fluxes plus and minus integrated for each cell over a year. The net total result is about 2 GtC net uptake by all ocean surfaces together.
While there are cells near the equator which show permanent release of CO2 and cells near the poles show permanent uptake, the bulk of the cells in the mid-latitudes show release in summer and uptake in winter. Thus a (large) part of the 90/92 GtC transfer is in/out the same cells within a year and some (smaller) part is released from the equator and absorbed near the poles. See the summer-winter difference of the same diagram at:
http://www.pmel.noaa.gov/pubs/outstand/feel2331/images/fig03.jpg
This is what governs the residence time, which (in the case of the ocean-atmosphere and vegetation-atmosphere exchanges) is largely bidirectional within the same cells; only a small part is unidirectional, and an even smaller part is really differential uptake/release. The latter is what governs the decay rate, not the residence time.
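The distinction between gross exchange and net uptake can be sketched with invented per-cell values (these are not Takahashi’s numbers):

```python
# Invented per-cell seasonal fluxes in GtC (positive = ocean uptake,
# negative = release), to show how large gross back-and-forth exchange
# can integrate to a small net global uptake:
cells = [
    (-1.0, -1.2),  # equatorial cell: outgasses in both seasons
    (-0.8,  1.1),  # mid-latitude cell: releases in summer, absorbs in winter
    ( 1.2,  1.7),  # polar cell: absorbs in both seasons
]

gross = sum(abs(summer) + abs(winter) for summer, winter in cells)
net = sum(summer + winter for summer, winter in cells)

print(round(gross, 2))  # 7.0 -- total CO2 moved across the surface
print(round(net, 2))    # 1.0 -- the net uptake is far smaller than the gross
```

The gross figure is what sets the short residence time of any individual molecule; the much smaller net figure is what matters for the decay of an atmospheric excess, which is the point being made above.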
If there are hundreds of scientific works confirming Revelle, about one such citation that’s freely available on-line?
See “Changes in the Carbon Dioxide Content of the Atmosphere and Sea due to Fossil Fuel Combustion” by Bert Bolin and Erik Eriksson, 1958. On line at:
http://onramp.nsdl.org/eserv/onramp:16573/n8._Bolin___Eriksson__1958corrected.pdf
One relevant quote:
An addition of CO2 to the water will change the pH and thereby decrease the dissociation resulting in a larger portion of CO2 and H2CO3 molecules. Since the pressure of CO2 in the gas phase being in equilibrium with CO2 dissolved in water is proportional to the number of CO2 and H2CO3 molecules in the water, an increase of the partial pressure occurs which is much larger (about 12.5 times) than the increase of the total content of CO2 in the water.
Also interesting, the history behind the Revelle factor:
http://www.aip.org/history/climate/Revelle.htm
With a reference to the ocean pCO2 measurements of Buch (1933!):
Observations in the 1930s had established the key data (such as how the partial pressure of CO2 in sea water varied as a function of acidity).(5*)
http://www.biokurs.de/treibhaus/literatur/buch/buch1939.pdf
I hope that you can understand some German…
———–
So at first, Keeling required 100% of ACO2 from fossil fuels to remain in the atmosphere. That could not be substantiated, and what he witnessed in his early data was attributed to El Niño. In 2006, A. C. Manning and Keeling’s son, R. F. Keeling, re-examined the seasonal variability Charles had attributed to terrestrial growing seasons.
Your interpretation of Manning and R.F. Keeling:
Regardless, Manning and Ralph Keeling leave the matter by saying that based on the data available, the best estimate for the cause of the seasonal effects at MLO is north-south seasonal transport between the hemispheres, coupled with their differing CO2 concentrations.
Again, it seems that you are a master of misinterpretation of what some others say: Manning and Keeling Jr. saw that there were disturbances to the “normal” NH seasonal variability at MLO and these were probably caused by the short term disturbances from opposing flows between the NH and SH. The latter don’t cause the seasonal variability at MLO, only the disturbances of the seasonal variability. Any other station in the NH shows a similar, but more pronounced seasonal variability at about twice the amplitude of MLO. And the maxima are in winter/spring, the minima in summer/fall, opposite to your supposed seawater temperature/outflow influence:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/month_2002_2004_4s.jpg
Where Barrow is at the edge of the cold Arctic Ocean, but receives air from the mid-latitudes via the Ferrel cells.
the latter state that the concentration of dissolved CO2 is proportional to the partial pressure of CO2 in the gas above the solvent, where the constant of proportionality depends on temperature and salinity. Your model is a contradiction because it involves a partial pressure gradient.
You still don’t get it. Let us try to put it another way: one establishes the “fictional” pCO2 of seawater by spraying seawater in a small volume of air and measuring the pCO2 of that small volume. The sea surface water used for this experiment thus has a theoretical pCO2 as found in the test.
Now assume that the ambient air at sea level has – surprisingly – the same pCO2. What will happen then with the CO2 flows between the atmosphere and the oceans? Nothing, no net flow at all, as both have the same pCO2. If there were zero CO2 in the ocean surface, you would be right. But there is already CO2 in the oceans. Depending on the pCO2 in the water, based on Henry’s Law, and the pCO2 in the atmosphere, there would be a CO2 flow whose direction and magnitude follow the difference between the partial pressures involved…
IPCC made the South Pole data look “near identical” to MLO by applying a “linear gain factor”, whose form and value it keeps secret.
A completely false allegation, based on misinterpretation of the meaning of “linear gain factor”, and one shown false repeatedly: the raw data show averages identical to the “manipulated” data within a few tenths of a ppmv…

Ferdinand Engelbeen
June 28, 2010 2:32 pm

tonyb says:
June 28, 2010 at 12:46 pm
I think that indeed these are the cornerstones. There are a lot more derivatives, some are widely accepted by the IPCC, others not widely accepted, but still are used by the more extreme alarmists:
Widely accepted:
– spread of vector-borne diseases
– more intense storms/hurricanes
– more intense drought in some parts of the globe
Not widely accepted
– THC shutdown and NH freezing
– runaway warming
I suppose one can find several more…

June 29, 2010 7:34 pm

Ferdinand Engelbeen, 6/28/10 at 8:56 am said,
>>What you haven’t seen is that the Takahashi diagram is about the net release/uptake of CO2 for each cell, that is, the integrated fluxes, plus and minus, for each cell over a year. The net total result is about 2 GtC net uptake by all ocean surfaces together.
I said nothing different, and you cite nothing from me to show that I overlooked that fact.
You said,
>>Thus a (large) part of the 90/92 GtC transfer is in/out the same cells within a year and some (smaller) part is released from the equator and absorbed near the poles.
Perhaps you would agree that the average output for a single cell is its net output for the interval, keeping track of positive being absorption and negative being outgassing. And perhaps you would agree that the average output for the sum of all cells is the sum of the averages for all cells, which is the net for the whole ocean. I took the sum over all net positive cells to be the net absorption, and the sum over all net negative cells to be the net outgassing. The yearly net absorption I took to be about 92 GtC, and the yearly net outgassing to be about 90 GtC, according to IPCC AR4 Figure 7.3, p. 515, and thus the net was about 2 GtC absorbed. Clearly you disagree in the details.
You seem to have determined that the 90 GtC is the sum of the output from all cells while they are outgassing, and 92 GtC to be the sum while they are absorbing, and so these are not apparent in Takahashi’s analysis. I would agree that you get the same net result for the ocean, preserving the signs. What leads you to believe that this is what IPCC’s figures of 90 and 92 mean?
If the Takahashi diagram is correct, what do the partial sums of all positive cells and all negative cells in the diagram mean?
What do you think causes cells to absorb and outgas, if it’s not the solubility effect?
IPCC says,
>>In winter, cold waters at high latitudes, heavy and enriched with CO2 (as DIC) because of their high solubility, sink from the surface layer to the depths of the ocean. This localised sinking, associated with the Meridional Overturning Circulation (MOC; Box 5.1) is termed the ‘solubility pump’. Over time, it is roughly balanced by a distributed diffuse upward transport of DIC primarily into warm surface waters. AR4, ¶7.3.1.1, The Natural Carbon Cycle, p. 514.
Are the cold waters at high latitudes much different in winter than in summer? Isn’t the polar water where it descends approximately ice water? This water in the MOC, also known as the THC, returns to the surface in warm waters. CO2 was absorbed because of solubility, it says. Do you believe that it was returned later to the atmosphere because of solubility, now in warm waters? Are you familiar with the flow in the THC? It’s in the range of a few to a couple of dozen Sv. Have you checked the solubility curve to see if the difference in temperature and the flow rate work out to about 90 GtC or so? I have, and they do. A temperature difference of 30ºC from the headwaters to the discharge fits a flow of 5 Sv, and a difference of 10ºC fits 10 Sv.
This is a continuous flow, a river of CO2, and the output of natural CO2 depends on the SST at venting.
The geographical distribution of the discharge is somewhat uncertain, but the Takahashi diagram provides a hint. The Takahashi diagram also has a strong resemblance to SST. Does he explain why? Some of his key papers are only for purchase or in larger university libraries.
You said,
>>The measured pCO2 of seawater near the North Pole is about 230 microatm.
Surely you jest! You can’t even show that seawater has a pCO2. It is deemed to exist, to be the pCO2 of the air above it when in equilibrium. This error contributes to your misunderstanding of Henry’s Law. Even Wikipedia, of all Internet sources, manages to get this right. It says,
>>In chemistry, Henry’s law is one of the gas laws, formulated by William Henry in 1803. It states that:
>>At a constant temperature, the amount of a given gas dissolved in a given type and volume of liquid is directly proportional to the partial pressure of that gas in equilibrium with that liquid.
>>An equivalent way of stating the law is that the solubility of a gas in a liquid at a particular temperature is proportional to the pressure of that gas above the liquid. Henry’s law has since been shown to apply for a wide range of dilute solutions, not merely those of gases.
It’s the pressure of “that gas above the liquid”, not your alleged pressure gradient. Accord: Zeebe & Wolf-Gladrow, quoted for you on 6/27/10 at 7:56 am.
You insist otherwise, repeatedly, as in:
>>Depending on the pCO2 in the water, based on Henry’s Law, and the pCO2 in the atmosphere, there would be a CO2 flow whose direction and magnitude follow the difference between the partial pressures involved…
I can see no reason for you to have inserted “based on Henry’s Law” where you did, but the only reasonable interpretation I can find is that you think Henry’s Law depends on both partial pressures. Can you provide a citation for that dual dependence?
You wrote,
>>>>(quoting me) What I am saying is that Henry’s Law applies to total gas dissolved in a solvent

>>That is where you go wrong: Henry’s Law is not about the “total” gas dissolved in a solvent; the right term [CO2] is the concentration of non-ionized CO2 in solution (including H2CO3).
You are quite right. I miswrote. And your correction is supported in Zeebe & Wolf-Gladrow just above my citation on 6/27/10. Do bear in mind that CO2 dissolves in water even without equilibrium. The mass balance of CO2 works using the solubility curve, and it fits the Vostok data, even though Zeebe et al.’s derivation applies only to “thermodynamic equilibrium”. That includes their statement that
>>At typical surface seawater pH of 8.2, the speciation between [CO2], [HCO3^−], and [CO3^2−] is 0.5%, 89%, and 10.5%, respectively, showing that most of the dissolved CO2 is in the form of HCO3− and not in the form of CO2 … . The low ratio of molecular CO2 is valid only in thermodynamic equilibrium. Before equilibrium, no theory tells us the ratio of the forms.
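As an aside, the quoted equilibrium speciation follows directly from the two dissociation constants of the carbonate system. The pK values in this sketch are representative seawater values, assumed here purely for illustration:

```python
# Equilibrium carbonate speciation at a given pH from the first and
# second dissociation constants (illustrative seawater pK values).
pH, pK1, pK2 = 8.2, 5.95, 9.13
h = 10.0 ** -pH                 # [H+]
r1 = 10.0 ** -pK1 / h           # [HCO3-] / [CO2]
r2 = r1 * 10.0 ** -pK2 / h      # [CO3--] / [CO2]
total = 1.0 + r1 + r2
f_co2, f_hco3, f_co3 = 1.0 / total, r1 / total, r2 / total
# the fractions come out near 0.5%, 89%, and 10.5%, matching the quote
```

With these assumed constants the three fractions reproduce the quoted equilibrium ratio; before equilibrium, of course, this calculation says nothing.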
You quote the identical ratio from a different source. Then you conclude,
>>Thus a doubling of CO2 in the atmosphere will double [CO2] from 0.5% to 1% in first instance, and that will increase bicarbonate and carbonate somewhat, but will not double these two, thus seawater doesn’t double in CO2 content.

This would be true — if only the system were in thermodynamic equilibrium. Too bad, for your model. Also, equilibrium is not a continuum, a measurable condition. There is no phase trajectory through the parameters leading to equilibrium. A small disturbance might precipitate a huge shift in the ratio, perhaps even making CO2(aq) the overwhelming majority in the ratio. No theory exists to guide us in disequilibrium.
Takahashi says,
>>Only about 0.5% of the total CO2 molecules dissolved in seawater communicate with air via gas exchange across the sea surface. This quantity is called the partial pressure of CO2 (pCO2), which represents the CO2 vapor pressure. The seawater pCO2 depends on the temperature, the total amount of CO2 dissolved in seawater and the pH of seawater. Takahashi, T and SC Sutherland, “CO2 Partial Pressure Data for Global Ocean Surface Waters”, v. 1.0, 10/20/06, p. 2.
This passage relies on Henry’s Law and equilibrium in the surface layer. It continues,
>>The rate of transfer of CO2 across the sea surface is estimated by: (sea-air CO2 flux) = (transfer coefficient) x (sea-air pCO2 difference). The transfer coefficient depends primarily on the degree of turbulence near the interface, and is commonly expressed as a function of wind speed.
In another online document I deduce was due to Takahashi, he says,
>> The regional and global net CO2 values have been computed using (wind speed)^2 and (wind speed)^3 formulations for the wind speed dependence on the gas transfer rate with wind speeds at 10 meters and 0.995 sigma level (about 40 meters above the sea surface). http://www.ldeo.columbia.edu/res/pi/CO2/carbondioxide/pages/air_sea_flux_1995.html
Several aspects of the Takahashi analysis are worth noting. (1) If the ocean equilibrated with the atmosphere instantaneously, Takahashi’s flux rate would be zero even if CO2 had crossed the interface instantaneously. (2) Takahashi uses SST, for example to reject biased samples, but not as a prime parameter for flux. Nevertheless, his diagram has a striking appearance as a measure of SST. This is evidence of Henry’s Law. (3) Most of the net outgassing is in the Equatorial regions, roughly 20º to 30º wide, skewed north in the Indian Ocean and south in the Atlantic. Outside these outgassing regions, a patch of seawater being carried on prevailing currents, which have a general poleward component, absorbs CO2 all along its path, and increasingly so. An interesting analysis would be to integrate the Takahashi cells along the paths of prevailing currents to arrive at a total outgassing and then re-uptake.
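The bulk formula quoted above is easy to sketch in code. The quadratic transfer velocity below follows the widely used Wanninkhof form; the 0.31 coefficient and the solubility value are illustrative assumptions, not Takahashi’s actual fitted numbers:

```python
# Bulk air-sea flux: flux = k(U) * solubility * (pCO2_sea - pCO2_air).
# k = 0.31 * U^2 (cm/hr) is a Wanninkhof-type parameterization; the
# solubility k0 is an assumed round value, not a measured one.
def co2_flux(u10, pco2_sea_uatm, pco2_air_uatm, k0=0.03):
    """Flux in mol CO2/(m^2 yr); positive means sea-to-air outgassing."""
    k_m_per_yr = 0.31 * u10**2 * 0.01 * 24 * 365   # cm/hr -> m/yr
    dpco2_atm = (pco2_sea_uatm - pco2_air_uatm) * 1e-6
    return k_m_per_yr * (k0 * 1000.0) * dpco2_atm  # mol/(L atm) -> mol/(m^3 atm)

outgassing = co2_flux(7.0, 440.0, 380.0)  # supersaturated tropical cell
uptake = co2_flux(7.0, 320.0, 380.0)      # undersaturated high-latitude cell
```

Only the sign and rough magnitude matter here: a cell’s yearly value in the Takahashi diagram is the integral of such instantaneous fluxes over the year.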
Your own reference trips you up. It says,
>>Gaseous carbon dioxide (CO2(g)), and [CO2] are related by Henry’s law in THERMODYNAMIC EQUILIBRIUM: … (Caps added)
and
>>The pCO2 of a seawater sample refers to the pCO2 of a gas phase in equilibrium with that seawater sample.
In other words, the pCO2 of a seawater sample does not refer to a real gas pressure in the solvent. The pressure gradient the experimenters are measuring may be a measure of disequilibrium, indicating that only the dissolution is incomplete.
On this point, you wrote,
>>Again you are completely lost: these equations are describing a DYNAMIC EQUILIBRIUM, not a static one. All reactions shift if the concentration of one of the constituents changes, including [H(+)] (pH).
(Caps added.)
This is false. You introduced “dynamic equilibrium”, and without reason or authority. As I took great care to explain on this thread, dynamic equilibrium is not thermodynamic equilibrium. IPCC, being unaware of the differences, uses just plain “equilibrium”, but its own authorities make clear that what is required for their analysis is thermodynamic equilibrium. Furthermore, where a pressure gradient appears to exist, the process is not even in dynamic equilibrium.
Climatology and oceanography are not unique in that steady-state or equilibrium assumptions make intractable problems tractable. This is all a priori modeling, and it requires validation by experiment before it can advance from hypothesis to theory.
You wrote,
>>>>(quoting me) If there are hundreds of scientific works confirming Revelle, how about one such citation that’s freely available on-line?

>>See “Changes in the Carbon Dioxide Content of the Atmosphere and Sea due to Fossil Fuel Combustion” by Bert Bolin and Erik Eriksson, 1958. On line at:
http://onramp.nsdl.org/eserv/onramp:16573/n8._Bolin___Eriksson__1958corrected.pdf
First, note that Bert Bolin was the first chairman of IPCC, from 1988 to 1997. He doesn’t qualify as a confirming source for anything IPCC might have written.
Bolin and Eriksson wrote:
>>Towards the end of their paper Revelle and Suess point out, however, that the sea has a buffer mechanism acting in such a way that a 10 % increase of the CO2-content of the atmosphere NEED MERELY BE BALANCED by an increase of about 1 % of the total CO2 content in sea water, TO REACH A NEW EQUILIBRIUM. The crude model of the sea they used assuming it to be one well-mixed reservoir of CO2, did not permit them to study the effect of this process more in detail. (Caps added.)
The last sentence is a tactful way of saying that R&S were not successful in their paper. B&E don’t exactly confirm R&S, but attempt to correct their buffer effort by providing a better model for the ocean.
Note, too, that R&S didn’t say that a 10% increase WAS in fact balanced by a 1% increase in TCO2, only that it need be so to reach a new equilibrium. Theirs was an analysis passing from equilibrium to equilibrium. So was this effect ever confirmed? B&E say,
>>The change of pH in the sea will shift the dissociation equilibrium also for the carbon dioxide containing C14. We may assume an equilibrium rapidly being established and have … . P. 137
A scientist may make any assumptions he wishes. He can throw in magic, or violate the laws of thermodynamics. However, whatever assumptions he does make put a caveat on his model. The Revelle buffer is a relationship between parameters that can be estimated, meaning that a number can be assigned. Without a predicted buffer factor with which to compare the empirical number, however, the model is not validated. The mere quantification of the factor constitutes no validation whatsoever. A measure of confirmation might also accrue to a model if it agrees with another, independent model. However, the ultimate confirmation is validation of a nontrivial prediction by some experimental method.
Bolin & Eriksson repaired the failed Revelle buffer factor, but they did not confirm it, even after it was fixed. The Revelle factor was an attempt to show how CO2, and in particular ACO2, would accumulate in the atmosphere to cause manmade global warming. The analysis required the assumption of thermodynamic equilibrium, and since that does not exist in the climate, neither B&E nor R&S have linked ACO2 to global warming.
Bolin & Eriksson is interesting because it links IPCC’s errors through one of its founding fathers going back 30 years before IPCC was founded, and two years before that to Revelle & Suess.
You wrote,
>>Manning and Keeling Jr. saw that there were disturbances to the “normal” NH seasonal variability at MLO and these were probably caused by the short term disturbances from opposing flows between the NH and SH. The latter don’t cause the seasonal variability at MLO, only the disturbances of the seasonal variability.
So you determined that Manning and RF Keeling were talking about the variability of the variability! Did you actually find that in their paper? Because it is not true. They talk about the “short-term variability”, appearing first in the title of their paper, and defined by them in the following:
>>As a measure of short-term variability, we have computed the residuals in O2/N2 and CO2 relative to smooth curves through the data. Manning & RF Keeling, “Correlations in Short-Term Variations etc.”, p. 1 of 3.
These residuals are the full, peak to peak, seasonal parts of the records. Their graphs confirm it. If they were examining the variability in the seasonal records, they would have had to subtract not just the “smooth curves”, but also an estimate of the seasonal variation. They never reported doing that.
>>This paper will try to establish whether the residuals of the flask data from the smooth curves fitted through the data are due to experimental artifacts or real atmospheric variability; that is, whether there is some problem with the sampling procedure used to collect the air samples at MLO or whether there are one or more natural processes affecting the air at MLO, and in a manner not seen at other SIO stations. Id.
>>This agreement [between the variability of the O2/N2 and CO2 residuals] suggests that the north-south transport may indeed be implicated as a source of variability at MLO. Id., p. 2 of 3.
What they have said is that “the north-south transport may indeed be implicated as a source of [the short-term variability, that is, the full seasonal cycle] at MLO”. There is no variability in the seasonal cycle compared to some idealized seasonal cycle.

Phil.
June 29, 2010 9:18 pm

>>In chemistry, Henry’s law is one of the gas laws, formulated by William Henry in 1803. It states that:
>>At a constant temperature, the amount of a given gas dissolved in a given type and volume of liquid is directly proportional to the partial pressure of that gas in equilibrium with that liquid.

Strictly speaking, Henry’s Law doesn’t apply to CO2 in water because CO2 reacts with water, but it’s a good approximation.
>>A small disturbance might precipitate a huge shift in the ratio, perhaps even making CO2(aq) the overwhelming majority in the ratio. No theory exists to guide us in disequilibrium.
Rubbish, Le Chatelier’s principle and reaction kinetics do just fine.
CO2 is constantly flowing in and out of solution; under constant conditions, the constant ratio between the gas-phase concentration and the liquid-phase concentration is the Henry’s Law coefficient. In the case of CO2/water you have the more complicated system CO2(g) ⇋ CO2(l); CO2(l) + H2O ⇋ H2CO3 ⇋ HCO3^− + H^+ ⇋ CO3^2− + H^+
The rate constants and equilibrium constants for all of these are known. Change the concentration of CO2(g), and the forward rate of production of CO2(l) goes up and exceeds the rate of flow back into the gas phase, so the concentration of CO2(l) goes up, and so the production of H2CO3 goes up, and so on until all the species are in equilibrium with each other. The time it takes to reach the new equilibrium state is determined by the rates of all the forward/backward reactions; of course, if the pCO2 changes too rapidly, the equilibrium will never actually be achieved. One way the rates can be measured is to add CO2 containing C13 or C14 and observe the rates of accumulation of the isotope in the various species. That is basically the way the experiment was conducted in the Earth’s atmosphere during the nuclear bomb tests in the 60s.
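The relaxation described above can be sketched as a first-order approach to the Henry’s-law value. The solubility and time constant below are assumptions for illustration only, not the measured rate constants of the carbonate system:

```python
# First-order relaxation of dissolved CO2 toward the Henry's-law
# equilibrium after a step change in pCO2 (illustrative constants).
k0 = 0.03                        # mol/(L atm), assumed solubility
tau = 1.0                        # yr, assumed relaxation time constant
p_old, p_new = 380e-6, 760e-6    # atm: an assumed doubling of pCO2
c = k0 * p_old                   # start at the old equilibrium concentration
dt = 0.01                        # yr, Euler time step
for _ in range(1000):            # integrate ten time constants
    c += dt * (k0 * p_new - c) / tau
# c has closed to within a few parts in 10^5 of the new equilibrium
```

If pCO2 kept changing during the integration, c would chase a moving target, which is exactly the point about the equilibrium never actually being achieved when the forcing changes too rapidly.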

Ferdinand Engelbeen
June 30, 2010 5:39 am

Jeff Glassman says:
June 29, 2010 at 7:34 pm
The yearly net absorption I took to be about 92 GtC, and the yearly net outgassing to be about 90 GtC, according to IPCC AR4 Figure 7.3, p. 515, and thus the net was about 2 GtC absorbed.
That is right. But you say:
For the minimum values of each Takahashi cell, the total outgassing, uptake, and net would be (-4.11, 6.64, -3.44) GgmC/yr, at average they are (-3.32, 1.20, -2.12) GgmC/yr, and at maximum, (-2.54, 1.73, -8.05) GgmC/yr. {Begin rev. 12/30/09} The values should be on the order of 90 PetagmC/yr, an unresolved discrepancy of 10^7.
There is no unresolved discrepancy. For every cell, Takahashi calculated the yearly average flux by integrating the positive and negative fluxes over a year. Simply compare the winter/summer and yearly plots. For a lot of cells in the mid-latitudes, that means out of the oceans in summer and into the oceans in winter. No matter whether you believe in a pCO2 influence or a temperature influence only, in all cases the positive fluxes are part of the 92 GtC uptake of the oceans and the negative fluxes are part of the 90 GtC outgassing. Thus not all of the 90/92 GtC is in the continuous flow between the equator and the poles; a lot is in the intermittent part. For the residence time, it doesn’t matter whether the exchange is locally intermittent or hemispherically continuous. It only shows up in the thinning of the isotope ratios, as deep-ocean d13C/d14C ratios are different from upper-ocean ratios and are not influenced by current-day changes.
If the Takahashi diagram is correct, what do the partial sums of all positive cells and all negative cells in the diagram mean?
The partial sums of all positive/negative cells show only that the net result over a year for those cells is positive or negative. That doesn’t give any clue whether an individual cell in that year was continuously positive, continuously negative, or intermittently positive and negative. Just look at the differences in each cell between winter and summer. The 90/92 GtC are the integrals of all positive and of all negative flows within a year, each taken separately, not the integral of the net fluxes over a year.
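The point about separate positive and negative integrals can be made with a toy grid of monthly fluxes. All the values below are invented for illustration (GtC/month, positive = ocean uptake):

```python
# Monthly air-sea fluxes per cell (positive = ocean uptake).
# The mid-latitude cell flips sign with the seasons, so it feeds both
# gross totals while its yearly net stays small.
cells = {
    "equatorial": [-0.5] * 12,                # outgasses all year
    "mid-latitude": [0.8] * 6 + [-0.7] * 6,   # uptake in winter, release in summer
    "polar": [0.6] * 12,                      # absorbs all year
}
gross_uptake = sum(f for fl in cells.values() for f in fl if f > 0)
gross_outgas = sum(f for fl in cells.values() for f in fl if f < 0)
yearly_net = {name: sum(fl) for name, fl in cells.items()}
net_total = gross_uptake + gross_outgas
```

A yearly-average diagram shows only `yearly_net` per cell; the 90/92 GtC figures correspond to the two gross sums, which is exactly the distinction at issue here.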
Have you checked the solubility curve to see if the difference in temperature and the flow rate work out to about 90 GtC or so? I have, and they do.
The solubility curve you use fits only one DIC and pH. As there are relevant differences in DIC and pH between the equator and the poles, there is no 90 GtC going directly from the atmosphere into the deep oceans, but much less. That is what Feely/Takahashi say and what can be deduced from d13C changes in the atmosphere and oceans due to human emissions. Here is a graph which shows the different d13C trends for different atmosphere – deep ocean exchanges, based on the influence of fossil fuel burning on d13C in the atmosphere:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/deep_ocean_air_zero.jpg
That includes indirect exchanges via the upper-deep ocean exchanges (not directly via the THC) and excludes vegetation exchanges (which are largely bidirectional), which may explain the discrepancy in the earlier years.
Thus my “best guess” is that the total atmosphere – deep ocean exchange is about 40 GtC/year, not the full 90/92 GtC you expect.
————–

The measured pCO2 of seawater near the North Pole is about 230 microatm.

Surely you jest! You can’t even show that seawater has a pCO2. It is deemed to exist, to be the pCO2 of the air above it when in equilibrium.
This is what makes discussions with you so difficult: nearly everybody in the world who is involved with CO2 levels uses the pCO2 of seawater, defined as the pCO2 of the atmosphere above seawater when both are in equilibrium, except you. Indeed it can’t be measured in the seawater itself (but it can be calculated), but it simply represents the fugacity of CO2, its tendency to get out of the water.
If the air above the seawater had no CO2 content at all, CO2 from the solution will get out of the water until the equilibrium is reached. That is a dynamic equilibrium, as the fluxes in and out at that moment are equal.
If the real atmosphere has a higher pCO2, that will push more CO2 into the water, and the reverse. These are all dynamic equilibria, which have time constraints, set partly by the pCO2 difference, partly by the mechanical mixing and molecular diffusion speed of the upper oceans. The average equilibrium time constant is about a year. That doesn’t play much of a role at the equator and poles, but it is important in the mid-latitudes, as the temperature and biological activity of the upper oceans there change (in opposite directions) over the seasons.
In other words, the pCO2 of a seawater sample does not refer to a real gas pressure in the solvent. The pressure gradient the experimenters are measuring may be a measure of disequilibrium, indicating that only the dissolution is incomplete.
This says it all: if there were no tendency of the CO2 in solution to come out, the pCO2 of seawater as defined would be zero. Henry’s Law is not unidirectional: if the amount of CO2 in solution is proportional to the pCO2 of the atmosphere at a given temperature and salt content, the opposite is true too. Thus if there is a certain amount of free [CO2] in the solution, it would come out until the pressure of CO2 in the atmosphere is in equilibrium with the tendency of the CO2 in solution to get out. Thus the pCO2 of water is well defined, describes the tendency of CO2 in water to escape, and has nothing to do with the actual pCO2 of the atmosphere. The difference between the two is what drives the fluxes, as the pCO2 of seawater is directly related to what is already dissolved as free CO2 in the liquid. The higher that is, the slower the flux.
This is false. You introduced “dynamic equilibrium”, and without reason or authority. As I took great care to explain on this thread, dynamic equilibrium is not thermodynamic equilibrium.
Again you are completely wrong. Nearly everybody says “equilibrium” while meaning “dynamic equilibrium”, as that is what happens in about every case in nature. In all cases, time constraints are active, which means that no natural system is momentarily in (dynamic) equilibrium; the equilibria shift continuously with the change in parameters. That is, e.g., the case for the solubility and fluxes of CO2 with temperature in the mid-latitudes.
Further, Henry’s Law covers only one thermodynamic part of the equilibrium. If the concentration of free CO2 in the liquid changes for any reason, that will influence the result of Henry’s Law in the (equilibrated) atmosphere above it. A simple experiment:
Add a small amount of a strong acid (HCl) to seawater and see what happens: CO2 will come out (in equilibrium) into the atmosphere above it, even if the temperature and salt content hardly changed. That is the chemical part of it, which together with Henry’s Law makes the pCO2 of seawater.
However, the ultimate confirmation is validation of a nontrivial prediction by some experimental method.
If Bolin may not be used as confirmation of the buffer effect of CO2, even though that was already established in the 1930’s, who can convince you?
Maybe Zeebe is good enough?
http://www.eoearth.org/article/Marine_carbonate_chemistry
While the increase in surface ocean dissolved CO2 is proportional to that in the atmosphere (upon equilibration after ~1 y), the increase in TCO2 is not. This is a result of the buffer capacity of seawater. The relative change of dissolved CO2 to the relative change of TCO2 in seawater in equilibrium with atmospheric CO2 is described by the so-called Revelle factor:
R = (d[CO2]/[CO2]) / (d[TCO2]/[TCO2]) (7)
which varies roughly between 8 and 15, depending on temperature and pCO2. As a consequence, the man-made increase of TCO2 in surface seawater (ocean acidification) occurs not in a 1:1 ratio to the increase of atmospheric CO2 (the latter being mainly caused by fossil fuel burning). Rather, a doubling of pCO2 only leads to an increase of TCO2 of the order of 10%.
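In numbers, taking the quoted Revelle factor at face value (R = 10 is chosen from the quoted 8–15 range for illustration):

```python
# Revelle factor: R = (d[CO2]/[CO2]) / (d[TCO2]/[TCO2]).
# For a given relative change in dissolved CO2 (which tracks
# atmospheric pCO2 at equilibrium), the relative change in total
# carbon (TCO2) is R times smaller. R = 10 is a representative value.
R = 10.0
d_pco2_rel = 1.0                 # a doubling: 100% relative increase
d_tco2_rel = d_pco2_rel / R      # -> an increase in TCO2 of order 10%
```

This is just the quoted statement rearranged: a doubling of pCO2 leads to an increase of TCO2 of the order of 10%, not 100%.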

Or take some lessons at Warwick University from Dr. G.P. King, who describes in detail the reactions of CO2 in seawater, including the Revelle factor:
http://www.eng.warwick.ac.uk/staff/gpk/Teaching-undergrad/es427/Exam%200405%20Revision/Ocean-chemistry.pdf
————-
These residuals are the full, peak to peak, seasonal parts of the records. Their graphs confirm it. If they were examining the variability in the seasonal records, they would have had to subtract not just the “smooth curves”, but also an estimate of the seasonal variation. They never reported doing that.
Please read carefully what they have written at
http://www.esrl.noaa.gov/gmd/publications/annrpt22/MANNING.pdf
In the first instance, the whole story is about the discrepancy between the O2/N2 variability and the CO2 variability. The seasonal variability is mainly in the NH and mainly from vegetation changes. That is largely confirmed by the seasonal O2/N2 and d13C changes, at MLO as well as at all other NH stations.
Second, the text shows the following sentences:
Keeling and Shertz [1991] pointed out that there appeared to be greater relative short-term variability in O2/N2 at MLO than at other sites, but were unsure of the cause.
If that was about the full seasonal cycle, this would be opposite to reality: the seasonal cycle at Barrow, Alert and other NH places is (much) larger for CO2 and O2/N2 and d13C than at Mauna Loa.
Then from Fig. 1:
The curves shown (from which all residuals are calculated) were calculated with a least-squares fit to a function of two harmonics (annual and semi-annual periodicity) and a stiff Reinsch spline. [my bold]
The two harmonics make up the (smoothed) seasonal variability, and the spline is the year-by-year increase. The residuals between the smoothed seasonal curve and the observations are the short-term variability for which they are seeking an explanation. Not the seasonal variability itself, of which the cause is largely known.
The seasonal variations are known from a lot of stations, aircraft flights, and nowadays satellite measurements. There is a huge north-south gradient, and the seasonal cycle is most pronounced in the mid-latitudes:
http://www.esrl.noaa.gov/gmd/ccgg/globalview/co2/co2_intro.html

June 30, 2010 7:42 am

Re Phil, 6/29/10 at 9:18 pm:
Phil’s like the kid who just woke up in the middle of the lecture and shouted out. He appears to have studied chemistry, at least through the lectures on equilibria, but he never grasped logic (e.g., if … then construction), argumentation (e.g., point – counterpoint), laboratory work (e.g., “the experiment was conducted” [period]), or analysis (e.g., “Le Chatelier’s principle and reaction kinetics do just fine” [period]).
Phil’s first two paragraphs are lifted verbatim out of Wikipedia, but without attribution or quotations. He strangely offsets his second paragraph (“>>”) as if it were a quote from somewhere on high. (A tip for the Phils of the world: Wikipedia is a great source for linking to research, but a risky one for quoting.)
Phil restates the stoichiometric equations as if that added to the dialog. The discussion has gone way beyond that stage. Those equations are explicit in IPCC’s Fourth Assessment Report, attributed to Zeebe & Wolf-Gladrow. That authority makes explicit that the equations apply to thermodynamic equilibrium. Even though climate is a fine example of a thermodynamic problem, IPCC not once mentions the term “thermodynamic equilibrium” (it does manage “equilibrium thermodynamics” once) in its last two Assessment Reports. IPCC uses “equilibrium” repeatedly, even implying the surface layer is in equilibrium by applying the equations Phil just repeated. But IPCC and some of the contributors to this thread, including Phil, do not grasp the implication.
Phil inserts Le Chatelier’s principle as if it applied. It is an axiom about systems saying they will transition between equilibrium states, given the chance. It says nothing about the trajectory between equilibrium states. I made the point that nothing is known about any trajectory between equilibrium states, to which Phil said “Rubbish” and for proof dropped the names “Le Chatelier’s principle and reaction kinetics” as if that meant something.
Phil interrupts the dialog to say,
>> Strictly Henry’s Law doesn’t apply to CO2 in water because CO2 reacts with water but it’s a good approximation.
To the contrary, Henry’s Law does indeed apply to CO2 in water. The application is analogous to applying blackbody radiation to Earth or the Moon. Of course it applies – just don’t leave out the empirical factor called emissivity. That adjusts the a priori model to fit the real world. In construction, it’s what is called a butch plate. A solubility curve exists for CO2. It is empirical in origin, and has undergone refinements for climate. So even though CO2 in solution undergoes a chemical reaction, forbidden in the reasoning behind Henry’s Law, the CO2 coefficient is a nice butch plate that brings the a priori into accord with experiment.
How well does solubility work? The test is in the application, and as I reported here recently, it works quite well for closure between THC flow rate and air-sea flux estimates. It also provides a best estimate for the relationship between CO2 and temperature in the Vostok record.
How well, then, do the stoichiometric equilibrium equations work for the surface layer? Answer: not very well at all. They lead to the Revelle buffer factor nonsense, and the build up of atmospheric ACO2 but not nCO2, and they make AGW work. But in the process, they violate Henry’s Law, which actually does work.
As I wrote here recently while Phil was asleep,
>>More important is that Henry’s Law informs us of the physics involved in a qualitative way, as fundamental as the recognition that balls roll down hill.
The issue here is not the equilibrium relationships repeated by a startled Phil. It is the application of those relationships where neither thermodynamic equilibrium nor even chemical equilibrium exists.
Could Phil be thinking that Le Chatelier’s principle is instead a law that proves equilibrium exists? He must do what others on this thread and IPCC have failed to do: first establish the existence of thermodynamic equilibrium; then they can rely on equilibrium relationships. Equilibrium first, then Le Chatelier’s principle. Equilibrium first, then the stoichiometric equations with the Bjerrum solution.

Ferdinand Engelbeen
June 30, 2010 8:57 am

In addition, about the last part of my previous message, the clear explanation of what was done by Manning and Keeling Jr., in the text of the first page:
As a measure of short-term variability, we have computed the residuals in O2/N2 and CO2 relative to smooth curves through the data.
Clear to me that they looked at the variability around the seasonal variability…

July 1, 2010 8:08 am

Re Ferdinand Engelbeen, 6/30/10 at 5:39 am:
Your explanation of the unresolved discrepancy made no sense to me. Sorry. I don’t want to be repetitive, but I have to fall back on the sum of the net positive cells is the net positive uptake for the ocean, and vice versa. This is based on the sum of the averages being the average of the sums. The sums for the Takahashi diagram provide the correct net difference between uptake and outgassing, but not the uptake and outgassing separate estimates.
You say,
>>The solubility curve you use fits only for one DIC and pH.
Excellent! A commitment to shoot at. So you must have data that show a change in the CO2 solubility curve for a change in water DIC or pH!
Not only is the dependence amazing, but so is the fact that measuring it was practical. The first and second order effects of dissolution are temperature and pressure (including partial pressure), the ranking depending on the application. The third order effect is salinity. A conjecture for a fourth order effect is molecular weight. We’re already into difficult to measure territory. Now you add two more parameters, DIC concentration and pH. How do you think these might rank in order of importance?
Yours is novel physics. Novel means never before known physics, this time developed from climatology. There should be a Nobel prize in here somewhere.
With this new knowledge, we could use CO2 solubility to measure water pH and to measure the operating point on the Bjerrum plot.
For a couple of other reasons, CO2 is not significant to climate. (E.g., climate follows solar activity, Earth’s temperature response is regulated by albedo, and greenhouse gas absorption does not follow the logarithm of the concentration, but instead saturates, following an S curve according to the Beer-Lambert Law. IPCC manages to butcher all of these.) We have GCMs that don’t work as even first order models, and you want to refine the physics by what? A fifth order dependence on surface pH or DIC concentration?
You say,
>>This is what makes discussions with you that difficult: near everybody in the world, who is involved in CO2 levels uses the pCO2 of seawater, as defined as the pCO2 of the atmosphere above seawater when both are in equilibrium, except you.
But that is exactly what I have been saying. You seem to have accepted my argument, and now feed my position back to me, saying I disagree with it. Perhaps I’ve not been clear enough.
The pCO2 of the atmosphere in equilibrium with the water is “taken to be”, “assumed to be”, “deemed to be”, or whatever synonym you want to use, the pCO2 for the water. The latter, of course, does not actually exist. My complaint is the novel model for Henry’s Law that you endorse in which solubility depends on the pCO2 gradient between gas and liquid. That is more novel physics.
As to “near everybody in the world”, (a) you need to get out more and (b) science is not about consensus. AGW is about consensus. Every new idea, every new direction comes from one person. Don’t expect a committee (a) to be correct or (b) to change course.
Takahashi used the pCO2 gradient to estimate the rate of flux. That is not the law of solubility. Henry’s Law says nothing about the rate of flux. It tells us how much CO2 is in the water after everything settles down, gms CO2 per 100 gms water usually, and that in climate, at least, what count most are the temperature of the water and pCO2 or, equivalently, the atmospheric CO2 concentration.
Takahashi computed a hypothetical uptake per cell, inferred from empirical relationships about rates. His results might have been measured and reported in PgC/sec, but he multiplied his results by 3.2*10^7 to report in PgC/year. This gives the impression that the rate is the amount of CO2 the water in the cell absorbed in a year. Instead, it is the incremental, additional CO2 for the period of cell measurements. The water in the cell absorbs instantaneously (for all practical purposes) and moves on to the next cell for the next incremental uptake (and vice versa for outgassing). The surface ocean is not a stagnant pool.
The difference between Takahashi’s model, being about rates, and solubility, being an integral, seems to be a constant of integration. That is why I suggested an interesting paper might integrate a patch of water as it moves with ocean current, taking up CO2 at the Takahashi rates, and see what the total uptake or outgassing was for the path.
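The suggested integration of a patch of water along its path might look like the following toy sketch; the flux densities and residence times are invented for illustration, not Takahashi's values:

```python
# Toy sketch (hypothetical numbers) of the suggested calculation: follow a
# patch of water through successive cells, each with its own instantaneous
# flux density, and integrate the uptake over the time actually spent in
# each cell, instead of multiplying one cell's rate by a full year.

# (flux density in mol C per m^2 per year, residence time in days)
path = [(+2.0, 30), (+1.2, 45), (-0.5, 60), (-1.8, 90)]  # + = uptake

total = 0.0
for flux_per_year, days in path:
    total += flux_per_year * (days / 365.0)  # incremental uptake, mol C/m^2

print(round(total, 3))  # net uptake integrated along the path, mol C per m^2
```

Multiplying any single cell's annual-rate figure by one year would give a very different (and misleading) number, which is the distinction between a rate and its integral made above.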
On 6/30/10 at 8:57, you said with respect to the Manning & RF Keeling paper,
>> Clear to me that they looked at the variability around the seasonal variability…
Your observation has some support in the paper. You have seen clearly, but not very far. The authors examined residuals with respect to smoothed curves fit to annual and semi-annual periodics, at least in the case of MLO. They provide neither formulas nor graphs for those residuals, but they talk about “the RESIDUALS of the flask data from the smooth curves fitted through the data”. Caps added. Their Figure 1 contains both flask data and the smoothed curves.
They go on to say
>>If variations in north-south transport are in fact causing the short-term variability at MLO, then we would expect the ratio of the instantaneous O2/N2 and CO2 GRADIENTS (shown in Figure 3) to be roughly equal to the ratio of the short-term covariations in O2/N2 and CO2. In other words, we would expect the ratio of the two vertical lines shown in Figure 3 to be roughly equal to the slope of the envelope of the flask residuals shown in Figure 2, for the same time period. In the period from December through March, when the ratio of the north-south gradient is at its most stable, the average absolute value of this north-south O2/N2 versus CO2 ratio is 15 ± 3 per meg ppmV-1, while a least squares fit to the flask residuals over this period results in a slope of 17 ± 4 per meg ppmV-1. This agreement suggests that the north-south transport may indeed be implicated as a source of variability at MLO. Caps added.
Figure 3 has no flask data, and the gradients are the differences between pairs of complete seasonal patterns. Two differences are involved, one for O2/N2, and the other for CO2. For each gas parameter, the differences are between La Jolla data and Cape Grim, Tasmania, data. The authors provide neither formula (fairly trivial) nor graphs for the gradients. The peak-to-peak seasonal variations in CO2 at Cape Grim are about 1.5 ppm, while at La Jolla, they are about 12.5 ppm. Consequently, La Jolla seasonal variations alone account for the CO2 gradient by a ratio of about 8:1.
So the authors employed two methods, one with residuals (your observation) and the other comparing only smoothed data (supporting my conclusion), and the methods agree. However, the methods don’t support the identical conclusion. The residual method supports a lesser included conclusion.
The residual method leads to the conclusion that the variations from the seasonal are likely due to “north-south transport”, leaving the door open for the CD Keeling’s conjecture that the seasonal cycles are due to terrestrial biology. The gradient method leads to the stronger conclusion that the total seasonal variations, smoothed fundamental plus residuals, are due to “north-south transport”. The gradient method is not consistent with CD Keeling’s conjecture.
In the authors’ formal conclusion, they generalize “north-south transport” to the parameter “real atmospheric variability”. This opens their conclusions to more than north-south transport, and in particular to include seasonal wind variations at the sites. The latter would be relatively unimportant if CO2 were well-mixed. The recognition that atmospheric CO2 is not well-mixed makes seasonal winds most significant.

Ferdinand Engelbeen
July 2, 2010 4:49 am

Jeff Glassman says:
July 1, 2010 at 8:08 am
I have to fall back on the sum of the net positive cells is the net positive uptake for the ocean, and vice versa. This is based on the sum of the averages being the average of the sums.
The last sentence is right, the first is not: the sum of all positive periods and the sum of all negative periods within a cell are not each equal to the average result of the cell; only the sum of both is equal to the average. The +90/−92 GtC outflows/inflows of the total ocean surface, added together, equal the sum of the averages of all cells at −2 GtC, but each of them (integrated separately) is much larger, as these represent all outflows and all inflows, as captured in the monthly averages, not the yearly averages.
Have a look at wintertime in the midlatitude cells:
http://www.pmel.noaa.gov/pubs/outstand/feel2331/images/fig03.jpg
A lot of them are relatively strong absorbers, down to 30N.
In summer, many of them are relatively strong emitters.
Excellent! A commitment to shoot at. So you must have data that show a change in the CO2 solubility curve for a change in water DIC or pH!
See Zeebe and Wolf Fig. 1:
http://www.soest.hawaii.edu/oceanography/faculty/zeebe_files/Publications/ZeebeWolfEnclp07.pdf
Where the horizontal axis is the pH influence on the concentrations of the different species, including [CO2]. That is the Bjerrum plot you don’t like…
In fig. 2 one can see what happens with total alkalinity (~pH) if DIC changes.
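For readers who want to see the Bjerrum relationships in numbers, here is a minimal speciation sketch. The dissociation constants are approximate stoichiometric seawater values (around 25 °C, salinity 35) chosen for illustration, not taken from the paper linked above:

```python
# Minimal sketch of the Bjerrum speciation: relative fractions of dissolved
# CO2, bicarbonate and carbonate as a function of pH. pK1*/pK2* are rough
# stoichiometric seawater values (25 C, S = 35); exact values vary with
# temperature and salinity.
K1 = 10 ** -5.86   # first dissociation constant (approximate)
K2 = 10 ** -8.92   # second dissociation constant (approximate)

def fractions(pH):
    h = 10 ** -pH
    denom = 1 + K1 / h + K1 * K2 / h**2
    a_co2 = 1 / denom                  # fraction as free CO2(aq)
    a_hco3 = (K1 / h) / denom          # fraction as bicarbonate
    a_co3 = (K1 * K2 / h**2) / denom   # fraction as carbonate
    return a_co2, a_hco3, a_co3

co2, hco3, co3 = fractions(8.1)        # typical surface seawater pH
print(round(co2, 3), round(hco3, 3), round(co3, 3))
```

At pH ≈ 8.1 the bicarbonate fraction dominates and free CO2 is well under one percent of DIC, which is exactly the regime the figure depicts.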
Yours is novel physics. Novel means never before known physics, this time developed from climatology. There should be a Nobel prize in here somewhere.
Well, Svante Arrhenius earned the Nobel prize in chemistry, for his work on the greenhouse effect of CO2, although he was wrong with a large factor…
The influence of pH on CO2 solubility in seawater was established in the 1920’s (or even before?), long before CO2 was thought to be increasing in the atmosphere or in any way linked to catastrophic global warming. That resulted in formulas to calculate the pCO2 of seawater from temperature, salinity and DIC/pH, together with practical methods to measure that even in (deep) ocean waters.
Here the table from the Wattenberg (deep) ocean measurements on board of the Meteor 1925-1927:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/wattenberg_ph_pco2.jpg
The Meteor trips are described here:
http://www.biokurs.de/treibhaus/literatur/wattenberg/meteor-reise.jpg
The theoretical calculations and real life measurements are here in three parts:
http://www.biokurs.de/treibhaus/literatur/wattenberg/wattp1.pdf
(the other parts at wattp2 and wattp3)
Sometimes it helps to know different languages…
That pH has such an influence is not a result of physics, it is the result of chemical reactions. Henry’s Law only describes the effect of gaseous CO2 concentration on free CO2 concentration in water and reverse, not on the other forms of CO2 in solution: bicarbonate and carbonate. pH influences the amount of free CO2 in solution, thus the ultimate effect of Henry’s Law.
Now back to basics.
The pCO2 of the atmosphere in equilibrium with the water is “taken to be”, “assumed to be”, “deemed to be”, or whatever synonym you want to use, the pCO2 for the water. The latter, of course, does not actually exist. My complaint is the novel model for Henry’s Law that you endorse in which solubility depends on the pCO2 gradient between gas and liquid. That is more novel physics.
The definition pCO2 as described here has nothing to do with a novel model of Henry’s Law, it is based on Henry’s Law. What you seem not to understand is that Henry’s Law works in both directions: The concentration of free CO2 in solution is in ratio with the concentration of free CO2 in the atmosphere when in equilibrium. With the same Law, the concentration of free CO2 in a small volume of air above a large amount of seawater is in ratio with the concentration of CO2 in the seawater, regardless of the initial amount of CO2 in the small volume of atmosphere.
Thus the pCO2 of seawater in any cell has nothing to do with the current pCO2 of the atmosphere above it, whatever that is; it simply reflects the tendency of CO2 in solution to come out. Thus it depends on the amount of CO2 already in the water.
Do you agree that if the pCO2 of seawater, as defined here, is higher than the real atmospheric pCO2 above it, that the seawater will release CO2 and reverse?
Indeed, the difference between pCO2 of the atmosphere and of the water doesn’t play much of a role in the ultimate (dynamic) equilibrium (that depends on the total quantities in both media involved), but it plays a role in the uptake/release speed, see next item.
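The "small volume of air above a large amount of seawater" argument can be checked with a toy mass balance that ignores the carbonate reactions and keeps only Henry's Law for free CO2; all numbers are hypothetical:

```python
# Simplified sketch (free CO2 only, no carbonate chemistry): equilibrate a
# small volume of air above a large volume of water under Henry's Law,
# starting from two very different initial air concentrations. Numbers are
# hypothetical; K0 ~ 0.034 mol/(L*atm) is an approximate CO2 solubility
# near 25 C.
R = 0.08206       # gas constant, L*atm/(mol*K)
T = 298.15        # K
K0 = 0.034        # mol/(L*atm), Henry solubility of CO2 (approximate)

V_air, V_water = 1.0, 1000.0       # litres: small air space, big water body
c_water0 = K0 * 390e-6             # water initially holds ~390 uatm worth

def equilibrium_pCO2(p_air0):
    """Conserve total CO2, then partition it by Henry's Law."""
    n_total = p_air0 * V_air / (R * T) + c_water0 * V_water
    return n_total / (V_air / (R * T) + K0 * V_water)   # atm

lo = equilibrium_pCO2(100e-6)    # air starts CO2-poor
hi = equilibrium_pCO2(2000e-6)   # air starts CO2-rich
print(round(lo * 1e6, 1), round(hi * 1e6, 1))  # both land near 390 uatm
```

Because the water reservoir dwarfs the air space, both runs end within a couple of µatm of the water-determined value, which is the sense in which the measured headspace pCO2 "belongs to" the seawater.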
The water in the cell absorbs instantaneously (for all practical purposes) and moves on to the next cell for the next incremental uptake (and vice versa for outgassing). The surface ocean is not a stagnant pool.
The water in the cell absorbs instantaneously for the skin of the surface, not the whole 100-200 m depth of the cell. That is where dpCO2, wind speed and diffusion speed (which is very low) are involved. The uptake/release rate for a given wind speed and dpCO2 needs about one year to get in full equilibrium for the full depth, thus in the mid-latitudes is never reached as the seasons change the same cells from emitters to absorbers and reverse within a year. The release – uptake is not only from cell to cell (where you have gyres like the North Atlantic gyre, which go from warm to cold and back to warm…), but within a year within several mid-latitude cells too. Several of these cells have no practical connection to the deep oceans, even though they add to the uptake and release quantities.
Thus all together in summary, where we seem to disagree:
– Henry’s Law works bidirectional.
– pCO2 of seawater shows the tendency of free CO2 in seawater to escape, measured as pCO2 in a small volume of air above the water, which by Henry’s Law is directly proportional to [CO2] in the seawater of interest.
– besides temperature and salinity, pH and DIC play a huge role in the changes of free CO2 (denoted as [CO2]) in seawater.
– thus the pCO2 of seawater changes with temperature, salinity, pH and DIC.
– the consequence is that an increase of 100% in atmospheric pCO2 results in a 100% increase of oceanic pCO2, when in dynamic equilibrium, but only 10% increase of total CO2 in seawater, due to the change in pH as result of increased CO2.
– dpCO2, the difference between seawater pCO2 and atmospheric pCO2 gives the direction and, together with wind speed, gives the transfer speed of CO2 (thus fluxes) between atmosphere and oceans.
– the 90/92 GtC as described in the literature (and adopted by the IPCC) is the sum of all monthly CO2 fluxes out of all cells separately the sum of all monthly CO2 fluxes into all cells, not the sum of yearly averages of the +/- cells, with or without a factor.
– the deep ocean – atmospheric CO2 exchanges are far less than the 90/92 GtC, as most exchanges are within a year in cells of the mid-latitudes which are emitters in summer and absorbers in winter.
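The claimed ~10:1 buffering in the summary above can be checked, at least to order of magnitude, with a toy carbonate-system calculation at fixed (carbonate-only) alkalinity. The constants are rough 25 °C seawater values and borate etc. are ignored, so this is a sketch of the mechanism, not a definitive ocean number; it yields a DIC rise of only a few percent for a doubled pCO2:

```python
import math

# Toy check of the "100% rise in pCO2 -> only ~10% rise in total CO2" claim
# (the Revelle buffering effect). Carbonate alkalinity is held fixed; borate
# and other minor species are ignored, and all constants are rough 25 C
# seawater values, so this is an order-of-magnitude sketch only.
K0 = 0.034          # mol/(kg*atm), CO2 solubility (approximate)
K1 = 10 ** -5.86    # first dissociation constant (approximate)
K2 = 10 ** -8.92    # second dissociation constant (approximate)
TA = 2.3e-3         # mol/kg, carbonate alkalinity (approximate)

def dic_at(pCO2_uatm):
    """DIC (mol/kg) in equilibrium with the given pCO2 at fixed alkalinity."""
    co2 = K0 * pCO2_uatm * 1e-6                # [CO2*] from Henry's Law
    # TA = K1*co2/h + 2*K1*K2*co2/h^2  is quadratic in x = 1/h:
    a, b, c = 2 * K1 * K2 * co2, K1 * co2, -TA
    x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return co2 * (1 + K1 * x + K1 * K2 * x * x)

d1, d2 = dic_at(400.0), dic_at(800.0)          # double the atmospheric pCO2
rel = (d2 - d1) / d1
print(round(100 * rel, 1))                     # DIC rises only a few percent
```

Even this crude model reproduces the qualitative point under dispute: doubling pCO2 changes total dissolved carbon by far less than a factor of two, because the pH shift pushes the speciation back toward free CO2.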
——————-
They provide neither formulas nor graphs for those residuals
The residuals can be seen in Fig. 2 of their report:
http://www.esrl.noaa.gov/gmd/publications/annrpt22/MANNING.pdf
CO2 residuals are between -0.7 to +0.7 ppmv against the smoothed seasonal curve, which shows an amplitude of about +/- 3-4 ppmv. The disturbance of the seasonal curve thus is about 20% of the amplitude.
The peak-to-peak seasonal variations in CO2 at Cape Grim are about 1.5 ppm, while at La Jolla, they are about 12.5 ppm. Consequently, La Jolla seasonal variations alone account for the CO2 gradient by a ratio of about 8:1
All NH stations show huge seasonal variations, all SH stations show small seasonal variations, where NH and SH stations are opposite in seasonality. MLO’s seasonal variability is the result of the seasonal variability within the NH, not the result of any NH-SH gradient. The latter only influences the disturbances in the MLO data, which are not seen in the other NH stations data. In all cases, they write about “short term” variability around the seasonal variability…

Ferdinand Engelbeen
July 2, 2010 4:53 am

In addition:
This opens their conclusions to more than north-south transport, and in particular to include seasonal wind variations at the sites. The latter would be relatively unimportant if CO2 were well-mixed. The recognition that atmospheric CO2 is not well-mixed makes seasonal winds most significant.

If you call a variability of less than 2% of the absolute value “not well-mixed”, including seasonal variability and NH-SH gradients, what in heaven (or earth) is then well-mixed?

Ferdinand Engelbeen
July 2, 2010 5:29 am

More addition:
As I suppose that not everybody reads German and the copy is hardly readable, here a translation of the important parts of the pH-pCO2 graph at
http://www.ferdinand-engelbeen.be/klimaat/klim_img/wattenberg_ph_pco2.jpg
Title: Investigations about the CO2 pressure and the hydrogen ion concentration of ocean waters.
Top and bottom axis: hydrogen ion concentration, pH
Scale: 7.6 – 8.3 pH units; gridlines at 0.1 pH units
Axis at the left and right side: pCO2
Scale: 100–1200 ppmv (indicated in 10^-4 atm, i.e. per 100 ppmv); gridlines per 100 ppmv
Measurements corrected for temperature (18 C base) and salt content.
At these conditions, a pH of 8.2 in seawater gives a pCO2 of about 230 ppmv (= uptake at 18 C), while at pH 8.0, pCO2 is already 430 ppmv (=release at 18 C) and at pH 7.8 the pCO2 is around 700 ppmv, thus a strong emitter of CO2 (at 18 C) compared to the current atmospheric pCO2 of around 390 ppmv…

July 2, 2010 4:21 pm

Re Ferdinand Engelbeen, 7/1/10 at 8:08 am & 7/2/10 at 4:53:
Your new explanation doesn’t help. I don’t understand what you mean equating sums to averages. What we’re discussing here are all time averages over the same interval of time, are they not? IPCC says the data are monthly averages (of what, mol/m/sec?), but the numbers are in mol/m^2/yr. AR4, Figure 7.8, p. 523. You seem to have a rationale that supports both the Takahashi diagram and the +90/-92 GtC/yr total ocean fluxes. This might be understandable on examination of the data. Can you provide a link to the data that supports the total ocean fluxes of 90 and 92?
The link to … fig03.jpg is no help. It’s a washed out version of one figure from the link I gave you on 6/28 at 8:56, … air_sea_flux_1995.html. If any reader is interested in more information, my link is included in a more comprehensive link at http://www.ldeo.columbia.edu/res/pi/CO2/carbondioxide/pages/air_sea_flux_2000.html
You provide a link to … ZeebeWolfEnclp07.pdf. You again repeat the identical link I provided to you on 6/27/10 at 7:56. I don’t think you are following this dialog at all, repeating references and putting words in my mouth.
You refer me to Figure 1 for the Bjerrum plot “that I don’t like”. You put words in my mouth (YPWIMM). Bjerrum has been discussed extensively here, but I said nothing to indicate that I don’t like it. IPCC relied on it but failed to cite it. When you and others who have been fooled by IPCC apply the Bjerrum plot, you fail to apply the applicable constraints.
Every scientific hypothesis, theory or law has a domain of validity. You misapply Bjerrum. It requires thermodynamic equilibrium. IPCC says only equilibrium. You incorrectly call for dynamic equilibrium, putting words in the mouths of IPCC and Zeebe & Wolf-Gladrow, too. You even use dynamic equilibrium incorrectly where it might have applied. You admitted this was true when on 6/26/10 you wrote,
>>Climate never is in dynamic equilibrium… .
The Bjerrum solution to the stoichiometric chemical equations for CO2 show that adding CO2 would increase acidity, and by derivation, that the ocean would create a bottleneck to absorbing CO2 (IPCC and Revelle limit this to ACO2, which is compounding the nonsense). Those effects are true only in transitions between thermodynamic equilibrium states, which exist nowhere on Earth.
Your references to the Wattenberg measurements and Meteor cruises were useless. Regrettably, I don’t read German. They appear to be irrelevant, if for no other reason than the fact that IPCC did not rely on them.
You wrote,
>>The definition pCO2 as described here has nothing to do with a novel model of Henry’s Law, it is based on Henry’s Law.
I didn’t say what you protest. YPWIMM. Further, you are wrong that the definition of pCO2 is based on Henry’s Law.
You asked,
>>Do you agree that if the pCO2 of seawater, as defined here, is higher than the real atmospheric pCO2 above it, that the seawater will release CO2 and reverse?

Set this up as an isothermal experiment, constant salinity, constant isotopic mix of CO2, and the answer is yes. That is Henry’s Law, extended beyond thermodynamic equilibrium by experimental Henry’s constants.
You say incorrectly,
>>The water in the cell absorbs instantaneously for the skin of the surface, not the whole 100-200 m depth of the cell.
If this were true, a can of pop would go stale instantaneously. If this were true, the wind speed would have no effect, contradicting the science behind Takahashi’s model. Dissolution is a mechanical and statistical process, so the process is not instantaneous. It continues until the probability of a molecule entering the water is the same as the probability of one leaving it.
A superior model to yours is that the surface layer entrains air in bubbles, in an amount and to a depth that increases with increasing wind. The effective skin depth for dissolution is the total area of the ragged interface plus the surface of all the bubbles.
The skin model works as the water approaches a stagnant pond.
You say,
>>That is where dpCO2, wind speed and diffusion speed (which is very low) are involved. The uptake/release rate for a given wind speed and dpCO2 needs about one year to get in full equilibrium for the full depth, thus in the mid-latitudes is never reached as the seasons change the same cells from emitters to absorbers and reverse within a year.
Diffusion speed has nothing to do with dissolution, assuming you mean vertical diffusion within the ocean. It would, of course, if we were considering transitions between states of thermodynamic equilibrium. But the surface layer is churning, and is in disequilibrium. It never achieves “full equilibrium”, especially in recognition of the fact that partial equilibrium is not defined. The surface layer absorbs as much CO2 as necessary to satisfy Henry’s Law for the appropriate Henry’s constant at the local temperature.
The conditions in the layer do not satisfy the Bjerrum plot. The surface layer supports dissociation of CO2(aq) into its various ions, and those support the biological pumps, fed by contact and diffusion, at their respective speeds and independent of the air-surface flux.
The surface layer takes about a year to get fully charged with CO2, because that is approximately the amount of time it takes to cool to ice water. It is the time it takes to travel from the tropics to the poles.
You say,
>>- Henry’s Law works bidirectional.
It’s bidirectional in the sense that one may either load or unload a 10-ton truck. Henry’s Law provides the carrying capacity. It’s in gms CO2/100 gms H2O, not grams per unit time.
>> – … which by Henry’s Law is directly proportional to [CO2] in the seawater of interest.
Henry’s Law does not depend on the concentration of CO2 in the water. It depends on the temperature and salinity of the seawater, and no other characteristic of any known significance of the solvent.
>>- besides temperature and salinity, pH and DIC play a huge role in the changes of free CO2 (denoted as [CO2]) in seawater.

(a) you’ve got it quite backwards, and (b) the reverse relationship only applies under thermodynamic equilibrium, which doesn’t exist.
>>- the consequence is that an increase of 100% in atmospheric pCO2 results in a 100% increase of oceanic pCO2, when in dynamic equilibrium, but only 10% increase of total CO2 in seawater, due to the change in pH as result of increased CO2.

Dynamic equilibrium is your personal invention, not physics. The second part would be true passing between states of thermodynamic equilibrium, where the stoichiometric equilibrium constants apply.
>>- the 90/92 GtC as described in the literature (and adopted by the IPCC) is the sum of all monthly CO2 fluxes out of all cells separately the sum of all monthly CO2 fluxes into all cells, not the sum of yearly averages of the +/- cells, with or without a factor.

Incomprehensible. Please demonstrate with data or even algebra.
>>- the deep ocean – atmospheric CO2 exchanges are far less than the 90/92 GtC, as most exchanges are within a year in cells of the mid-latitudes which are emitters in summer and absorbers in winter.

(a) Incomprehensible and (b) the deep ocean does not react with the atmosphere, if that is what you mean. Similarly and equally wrong, IPCC shows the intermediate and deep ocean layers exchanging CO2(g) with the atmosphere (AR4, Figure 7.10, p. 530), and support it with the CO2 Response Function, an equation, (AR4, Table 2.14, p. 213, fn. a).
>>The residuals can be seen in Fig. 2 … .
No, they can’t. Residuals are the instantaneous differences between two functions of time, data points and smoothed, fitted curves, each in ppm. The residuals then are also functions of time and in units of ppm. Figure 2 is in dimensionless slopes of residuals in ppm/ppm for CO2, and in meg/meg for O2/N2.
>>If you call a variability of less than 2% of the absolute value “not well-mixed”, including seasonal variability and NH-SH gradients, what in heaven (or earth) is then well-mixed?
I said no such thing. YPWIMM. IPCC used the term well-mixed, and that is for their purpose of claiming MLO data is global, when it is local. It does that to match the rise in CO2 to the rise in temperature, to frighten the gullible, to dislodge public funds, and to receive academic fame and recognition.
IPCC uses the assumption of well-mixed to calibrate all CO2 stations so that the various CO2 concentration record overlap. That is what you see as less than 2% variability. This is done in part by its application of a “linear gain factor” to CO2 readings at stations other than MLO. When you use the calibrated result to prove the assumption, you have lifted yourself by your own bootstraps.
IPCC’s CO2 records are well-calibrated. Its linear gain factors used for the calibration are secret. The burden is on IPCC and you to define well-mixed, and then to demonstrate its existence. Otherwise, you may not rely on the assumption and call the result science.
Homogenized milk is probably well-mixed under the most exacting definition.

Ferdinand Engelbeen
July 3, 2010 1:05 pm

Jeff Glassman says:
July 2, 2010 at 4:21 pm
About the in/out fluxes and the averages:
I tried to make the calculations from monthly data, but I seem to miss some information. The +/- flux data come out way too high or way too low and don’t reflect the 90/92 GtC/year total in/out fluxes.
What I tried to show is that if a cell acts both as a CO2 emitter and a CO2 absorber within a year (as most mid-latitude cells do), then the yearly averages don’t show the real + and – fluxes, which each contribute to the 90/92 GtC total fluxes, only the average result. In graph form, this can be seen in the monthly fluxes of the Southern Ocean: http://www.atmos.colostate.edu/~nikki/Metzl-Lenton-SOLAS_China07.pdf
While there are huge monthly fluxes involved, both in and out, depending on the seawater temperature, the net average result over a year is much smaller.
Simple algebra for what I mean:
Two cells have a summer release and a winter uptake of CO2. One releases 5 Mt/month during 5 months and takes in 8 Mt/month during 7 months. The other releases 6 Mt/month during 6 months and takes in 5 Mt/month during the other half year.
The yearly average of cell 1 is -31 MtC
The yearly average of cell 2 is + 6 MtC
The yearly average of all cells is -25 MtC
The sum of all monthly outflows is +61 MtC
The sum of all monthly inflows is -86 MtC
The difference between these two equals the yearly average of -25 MtC, but the individual or total averages don’t reflect the real individual or total inflows and outflows within a year, if certain cells have distinct periods of in and outflow.
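The two-cell arithmetic above checks out; in code:

```python
# The two-cell bookkeeping above, checked numerically. Convention:
# releases to the atmosphere are positive, uptake is negative.
cell1 = [+5] * 5 + [-8] * 7    # Mt/month: 5 months out, then 7 months in
cell2 = [+6] * 6 + [-5] * 6    # Mt/month: 6 months out, then 6 months in

yearly1, yearly2 = sum(cell1), sum(cell2)
outflow = sum(m for m in cell1 + cell2 if m > 0)
inflow = sum(m for m in cell1 + cell2 if m < 0)

print(yearly1, yearly2, outflow, inflow)   # prints: -31 6 61 -86
# The net of the gross flows equals the sum of the yearly averages,
# but the yearly averages alone hide the much larger gross flows.
assert outflow + inflow == yearly1 + yearly2 == -25
```

This is the whole disagreement in miniature: the ±61/−86 gross flows are invisible if one only ever looks at the −31 and +6 yearly cell averages.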
The pH and the concentration of ions in the surface layer do not regulate Henry’s Law, the dissolution of CO2. They do not create a bottleneck.
Nor is the reverse true. The dissolution of CO2 does not shift the Bjerrum plot one way or the other, that is, until equilibrium is reached. And that never happens.

and
The Bjerrum solution to the stoichiometric chemical equations for CO2 show that adding CO2 would increase acidity, and by derivation, that the ocean would create a bottleneck to absorbing CO2 (IPCC and Revelle limit this to ACO2, which is compounding the nonsense). Those effects are true only in transitions between thermodynamic equilibrium states, which exist nowhere on Earth.
This is what you said before. The main problem in all this is that you use a different definition for “equilibrium” than most other people (especially chemists) do. The Bjerrum plot has nothing to do with stoichiometry, nor is it a fixed equilibrium. It is the result of reactions which go both ways, depending on the concentrations of each of the constituents and with a minor influence of temperature (thus little to do with any thermodynamic “equilibrium”). It is applicable at every moment in every layer of the ocean, from top to bottom, applicable for any mix of ingredients, and shows the relative amounts of each reactant, both ways, when the concentration of one of them changes. Thus if the CO2 concentration in the atmosphere increases, and hence the CO2 concentration in the water (no matter what depth), the chemical reactions following the increase of [CO2] increase the [H+] (decrease the pH), whose feedback pushes bicarbonate back to increase the concentration of [CO2], which prevents further uptake. This all is undergrad chemistry, where all relevant details can be found at:
http://www.eng.warwick.ac.uk/staff/gpk/Teaching-undergrad/es427/Exam%200405%20Revision/Ocean-chemistry.pdf
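The speciation argument above can be sketched numerically. This is a minimal Bjerrum-style calculation; the pK1/pK2 values are assumed round numbers of roughly the right magnitude for seawater, not a calibrated parameterisation.

```python
# Bjerrum-style speciation: fractions of CO2(aq), HCO3- and CO3--
# in total DIC as a function of pH, from the two dissociation constants.
# pK1 ~ 5.9 and pK2 ~ 8.9 are assumed illustrative seawater values.
pK1, pK2 = 5.9, 8.9
K1, K2 = 10.0**-pK1, 10.0**-pK2

def bjerrum_fractions(pH):
    """Return (CO2 fraction, HCO3- fraction, CO3-- fraction) of DIC."""
    h = 10.0**-pH
    denom = h*h + K1*h + K1*K2
    return (h*h/denom, K1*h/denom, K1*K2/denom)

# At a surface-ocean pH of ~8.1, bicarbonate dominates the mix:
a0, a1, a2 = bjerrum_fractions(8.1)
print(f"CO2 {a0:.3f}  HCO3- {a1:.3f}  CO3-- {a2:.3f}")
```

Lowering the pH input shifts the mix back toward free CO2, which is the feedback being described above.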
Earlier you wrote:
Ocean acidity (pH) and relative DIC content are effects of dissolved CO2, and not contributors to dissolution. Henry’s Law is not known to depend on pH or DIC.
To the extent that your physics is different, I challenge you to provide unbiased evidence as I have done for my position. If you rely on IPCC, including any of its authors, you will fail my challenge.

Now you write:
Your references to the Wattenberg measurements and Meteor cruises were useless. Regrettably, I don’t read German. They appear to be irrelevant, if for no other reason than the fact that IPCC did not rely on them.
The Wattenberg graph simply shows, as translated in plain English in my second addition, the dependency of [CO2] or pCO2 or fCO2, however you call it, of seawater on the pH of the same water, for equal temperature and salt content. It is from the 1920s and shows where, at a temperature of 18C, seawater is absorbing, emitting or strongly emitting CO2. If the pCO2 of the seawater is higher than the pCO2 of the atmosphere, we will see outgassing, and the reverse if the pCO2 of seawater is lower. That is because pH has a strong influence on the concentration of free [CO2] in seawater, and thus, combined with Henry’s Law, on its tendency to escape from the solution. pH doesn’t change Henry’s Law, but it definitely changes the dissolution of total CO2 in seawater. It only temporarily changes the dissolution of free [CO2], until that is back in dynamic equilibrium with the atmosphere, according to Henry’s Law.
Wattenberg used both calculated and in situ measurements of pCO2; the same calculations (with some refinement) are still in use and underlie the Bjerrum plot on which the IPCC’s remarks are based. Thus this is one of the oldest proofs that the IPCC is right – in this case. Thus not only is Henry’s Law important in solubility calculations; one also needs to take into account the changes in pH and DIC.
———–
A lot of the remarks about the skin “model” and so on can be answered with the same reaction: that is not what I said. The skin of the oceans is the upper fraction of a mm and is in direct contact with the atmosphere. It will be in very fast, “almost” instantaneous equilibrium with the atmosphere. Everything deeper needs more time. That is where we may agree.
The conditions in the layer do not satisfy the Bjerrum plot.
Of course they do, as the Bjerrum plot shows the result for all kinds of conditions. For every initial condition or change in conditions, the Bjerrum plot shows the result or resulting changes. With the underlying reactions one can directly calculate the result for any change in conditions.
The surface layer takes about a year to get fully charged with CO2, because that is approximately the amount of time it takes to cool to ice water. It is the time it takes to travel from the tropics to the poles.
Nice try! In many cases, the ocean flows simply circulate from warm to cold and back to warm, see e.g. the North Atlantic gyre. The THC is the main driver for the warm-cold-deep ocean transfer and back. Even that one goes at the surface from warm (Pacific) to cold (Southern) to warm (Atlantic) to cold (Arctic)… In the Atlantic warm/cold part the surface speed is about 1 m/s, that needs about 3 months to reach the Arctic…
It’s bidirectional in the sense that one may either load or unload a 10-ton truck. Henry’s Law provides the carrying capacity. It’s in gms CO2/100 gms H2O, not grams per unit time.
Again you use Henry’s Law as a static item. Bidirectional means that if you load the water with extra CO2 above what Henry’s Law dictates as carrying capacity, based on the pCO2 of the atmosphere above it, that extra CO2 will transfer to the atmosphere. Thus the difference between the pCO2 of the water and of the atmosphere dictates whether there will be any net transfer and in what direction. Henry’s Law is dynamic for CO2 in the atmosphere as well as for CO2 in the water phase, not static.
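The point about net transfer following the pCO2 difference can be put in a toy formula. The transfer coefficient k below is a made-up placeholder, since the real coefficient depends on wind speed and sea state.

```python
# Net air-sea CO2 transfer as a function of the pCO2 difference.
# k is an arbitrary illustrative gas-transfer coefficient, not a real value.
def net_flux(pco2_water_uatm, pco2_air_uatm, k=0.05):
    """Net flux in arbitrary units; positive means outgassing to the air."""
    return k * (pco2_water_uatm - pco2_air_uatm)

print(net_flux(450, 390))  # warm upwelling water: positive, outgassing
print(net_flux(300, 390))  # cold high-latitude water: negative, uptake
print(net_flux(390, 390))  # equal pCO2: zero net flux (dynamic equilibrium)
```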
Dynamic equilibrium is your personal invention, not physics. The second part would be true when passing between states of thermodynamic equilibrium, where the stoichiometric equilibrium constants apply.
Well thanks, I’ll wait for my Nobel Prize. Unfortunately the concept has been used by so many others before me. The chemical reactions/equilibria involved here are by nature all dynamic, have very little to do with thermodynamics (except for strongly exothermic or endothermic reactions, which is not the case here) and absolutely nothing to do with stoichiometry in this case.
Please ask anyone with some knowledge of chemical reactions to explain what happens if you change e.g. the pH of the “equilibria”…
the deep ocean does not react with the atmosphere, if that is what you mean.
At the sink place of the THC, the atmosphere interacts directly with the deep oceans, as the THC sinks to the bottom. At the upwelling places, the THC has such a high pCO2, that it sets CO2 free at a high pace, effectively bypassing the mixed layer.
But it doesn’t matter much if the interaction is directly or indirectly via the mixed layer.
Figure 2 is in dimensionless slopes of residuals in ppm/ppm for CO2, and in meg/meg for O2/N2.
The individual points are from CO2-O2/N2 pairs, but the scales show the height +/- of the CO2 points, which is of interest. That the time scale is missing is not important in their reasoning that the pairs indicate a reasonable correlation between the two variables.
IPCC’s CO2 records are well-calibrated. Its linear gain factors used for the calibration are secret. The burden is on IPCC and you to define well-mixed, and then to demonstrate its existence.
Again and again the same false allegation, based on a wrong interpretation of one sentence. Nowhere is a linear gain factor used for calibration. The linear gain factor is used to produce a smooth curve through the data, that is all.
The raw (hourly averaged), unaltered in any way data from four stations with continuous recording are available. The data from different flask samplings by different laboratories are available for a lot of places, including MLO (two different lines, sampled by CDIAC and NOAA). The data from airplane measurements are available like these from Colorado, within a few tenths of MLO at 6,000 km distance, if compared for the same day and above 500-1000 m height over mid-land:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/inversion_co2.jpg
And last but not least, we have satellite measurements, measuring CO2 everywhere above some height, where the scientists show that CO2 is not “well mixed”, as they see a variability of some 4% of the absolute value over a month. But averaged over a year, that is less than 2%:
http://svs.gsfc.nasa.gov/vis/a000000/a003400/a003440/index.html
If all the hundreds of people working in different organisations in different countries involved in CO2 measurements find similar values for the same places at the same time of the year, how can there be a deliberate (variable?) “correction factor” to please the IPCC, without anyone playing the whistleblower, not even after retirement?
Thus please, retract your false accusation; you only undermine your own credibility.

Phil.
July 3, 2010 2:41 pm

Jeff Glassman says:
June 30, 2010 at 7:42 am
Re Phil, 6/29/10 at 9:18 am:
Phil’s like the kid who just woke up in the middle of the lecture and shouted out. He appears to have studied chemistry, at least through the lectures on equilibria, but never grasping either logic, e.g., if … then construction, argumentation, e.g., point – counterpoint, laboratory work, e.g., “the experiment was conducted” [period], or analysis, e.g., “Le Chatelier’s principle and reaction kinetics do just fine” [period].

Scientific discussion doesn’t include behaving like an obnoxious jerk and throwing out ad hominem.
Phil’s first two paragraphs are lifted verbatim out of Wikipedia, but without attribution or quotations. He strangely offsets his second paragraph (“>>”) as if it were a quote from somewhere on high. (A tip for the Phils of the world: Wikipedia is a great source for linking to research, but a risky one for quoting.)
Those paragraphs came from your post on June 29, 2010 at 7:34 pm, I believe you used the “>>” to indicate they were quotes by Ferdinand, where he got them from I have no idea. I italicize any material from an earlier post that I’m responding to (as does Ferdinand I believe, you use “>>” which is puzzling considering your reaction to it above).
Phil restates the stoichiometric equations as if that added to the dialog. The discussion has gone way beyond that stage. Those equations are explicit in IPCC’s Fourth Assessment Report, attributed to Zeebe & Wolf-Gladrow. That authority makes explicit that the equations apply to thermodynamic equilibrium. Even though climate is a fine example of a thermodynamic problem, IPCC does not once mention the term “thermodynamic equilibrium” (it does manage “equilibrium thermodynamics” once) in its last two Assessment Reports. IPCC uses “equilibrium” repeatedly, even implying the surface layer is in equilibrium by applying the equations Phil just repeated. But IPCC and some of the contributors to this thread, including Phil, do not grasp the implication.
Those aren’t stoichiometric equations; they are chemical equilibria (which are also thermodynamic equilibria).
Phil inserts Le Chatelier’s principle as if it applied. It is an axiom about systems saying they will transition between equilibrium states, given the chance. It says nothing about the trajectory between equilibrium states. I made the point that nothing is known about any trajectory between equilibrium states, to which Phil said “Rubbish” and for proof dropped the names “Le Chatelier’s principle and reaction kinetics” as if that meant something.
Which of course they do, they allow us to describe the transition to the equilibrium state following a perturbation, which you appear to be unaware of. Le Chatelier’s principle describes the way the equilibrium will shift given a change to a system in equilibrium. For example changing the pressure of a system of reacting gases, e.g. N2,H2 & NH3. The rate of that change is described by chemical kinetics.
Phil interrupts the dialog to say,
>> Strictly Henry’s Law doesn’t apply to CO2 in water because CO2 reacts with water but it’s a good approximation.
To the contrary, Henry’s Law does indeed apply to CO2 in water. The application is analogous to applying blackbody radiation to Earth or the Moon. Of course it applies – just don’t leave out the empirical factor called emissivity. That adjusts the à priori model to fit the real world. In construction, it’s what is called a butch plate. A solubility curve exists for CO2. It is empirical in origin, and has undergone refinements for climate. So even though CO2 in solution undergoes a chemical reaction, forbidden in the reasoning behind Henry’s Law, the CO2 coefficient is a nice butch plate that brings the à priori into accord with experiment.

Which is what I said, without the irrelevant crap and bad analogy.
How well does solubility work? The test is in the application, and as I reported here recently, it works quite well for closure between THC flow rate and air-sea flux estimates. It also provides a best estimate for the relationship between CO2 and temperature in the Vostok record.
How well, then, do the stoichiometric equilibrium equations work for the surface layer? Answer: not very well at all. They lead to the Revelle buffer factor nonsense, and the build up of atmospheric ACO2 but not nCO2, and they make AGW work. But in the process, they violate Henry’s Law, which actually does work.

They don’t violate Henry’s Law; without taking the chemical equilibria into account, Henry’s Law is useless for seawater. The ‘Revelle buffer nonsense’, as you call it, is necessary to account for the chemical composition of seawater, because the simple application of Henry’s Law is not appropriate when there is a chemical reaction between the gas and the solvent.
More insulting crap deleted
The issue here is not the equilibrium relationships repeated by a startled Phil. It is the application of those relationships where neither thermodynamic equilibrium nor even chemical equilibrium exists.
Cut the editorial crap! This is where chemical kinetics comes in (my PhD topic by the way).

July 3, 2010 10:26 pm

Re Ferdinand Engelbeen, 7/3/10 at 1:05 pm:
Your frank discussion of your problem in reconciling the Takahashi model and the 90/92 flux model is appreciated.
The +90/-92 reference applies to the total annual uptake and outgas of the surface ocean to the atmosphere of AR4, Figure 7.3, p. 515. The actual numbers shown there are +90.6/-92.2, with a disclaimer in the text, for a net of 1.6 GtC/yr total uptake. That figure also includes values for ACO2, which are +20/-22.2. The net is the number necessary for the increase in CO2 at MLO to be ACO2, which, added to the net terrestrial uptake of 1.0, accounts for about half of man’s emissions. This is supposed to be a refinement over the TAR, where the flux was +90/-90. TAR, Fig. 3.1, p. 188; ¶3.2.3.1, p. 197. The numbers published by UColo are +103/-107 and by Texas A&M are +90/-92. The difference per AR4 is -1.6 GtC/yr.
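The bookkeeping in that paragraph can be checked in a few lines. The gross ocean fluxes and the 1.0 GtC/yr terrestrial sink are the numbers quoted above; the ~6.4 GtC/yr fossil-fuel emission figure for the 1990s is an assumed round value for illustration.

```python
# Checking the AR4-style carbon bookkeeping quoted above (all in GtC/yr).
ocean_uptake, ocean_outgas = 92.2, 90.6   # gross downward / gross upward flux
net_ocean_sink = ocean_uptake - ocean_outgas   # 1.6 GtC/yr net ocean uptake
terrestrial_sink = 1.0
emissions = 6.4                                # assumed 1990s fossil emissions

absorbed = net_ocean_sink + terrestrial_sink   # 2.6 GtC/yr into sinks
airborne_fraction = (emissions - absorbed) / emissions
print(f"net ocean sink: {net_ocean_sink:.1f} GtC/yr")
print(f"airborne fraction: {airborne_fraction:.2f}")
```

With these round numbers the sinks take up roughly 40% of emissions, consistent with the "about half" language in the comment.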
IPCC says with respect to the Takahashi diagram,
>>This estimated global flux consists of an uptake of anthropogenic CO2 of –2.2 GtC yr^–1 … . AR4, Figure 7.8, p. 523.
>>With these corrections, estimates from all methods are consistent, resulting in a well-constrained global oceanic sink for anthropogenic CO2 (see Table 7.1). The uncertainty around the different estimates is more difficult to judge and varies considerably with the method. Four estimates appear better constrained than the others. The estimate for the ocean uptake of atmospheric CO2 of –2.2 ± 0.5 GtC yr^–1 centred around 1998 based on the atmospheric O2/N2 ratio needs to be corrected for the oceanic O2 changes (Manning and Keeling, 2006). The estimate of –2.0 ± 0.4 GtC yr^–1 centred around 1995 based on CFC observations provides a constraint from observed physical transport in the ocean. These estimates of the ocean sink are shown in Figure 7.6. The mean estimates of –2.2 ± 0.25 and –2.2 ± 0.2 GtC yr^–1 centred around 1995 and 1994 provide constraints based on a large number of ocean carbon observations. These well-constrained estimates all point to a decadal mean ocean CO2 sink of –2.2 ± 0.4 GtC yr^–1 centred around 1996, where the uncertainty is the root mean square of all errors. AR4, ¶7.3.2.2.1 Ocean-atmosphere flux, p. 519.
So pick your favorite number. Figure 7.3 is “for the 1990s”. IPCC also says,
So a first puzzler is how did Takahashi, et al. manage to measure ACO2 flux, and reject nCO2 flux?
Next, if one adds all the positive and negative fluxes separately in the Takahashi diagram, suitably converted from mol m^-2 yr^-1 to PgC/yr, and supplying a reasonable model for the individual cell area, one gets +1.01/-2.41. That’s a net of -1.4, and rather in the ballpark of IPCC’s -2.2 net.
So, how did it happen that the difference between two large numbers turned out to be about the same as the sum of some 1,750 small numbers? Or, why didn’t the sum of the uptake and outgas cells turn out to be near +90/-92?
One explanation is that the calibration of the Takahashi cells was arbitrary in the first place, forced by assumption to look like a small, incomplete ACO2 uptake, while nCO2 is in balance. So Takahashi might be recalibrated to produce the +90/-92 result.
Perhaps a better view is to recognize that the Takahashi diagram represents the flux, a rate, across the interface of the cell, and not the total accumulated in the cell. Meanwhile the +90/-92 model is a bulk calculation.
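For anyone repeating the cell-summing exercise described above, the unit conversion from mol CO2 m^-2 yr^-1 per cell to PgC/yr is the only subtle step. The cell fluxes and areas below are invented placeholders, not Takahashi's actual grid.

```python
# Convert per-cell air-sea fluxes (mol CO2 m^-2 yr^-1) to PgC/yr and
# sum uptake and outgassing cells separately, as described in the comment.
MOLAR_MASS_C = 12.011   # grams of carbon per mole of CO2
G_PER_PG = 1e15

def cell_flux_pgc(flux_mol_m2_yr, area_m2):
    """One cell's flux in PgC/yr (positive = outgassing to the air)."""
    return flux_mol_m2_yr * area_m2 * MOLAR_MASS_C / G_PER_PG

# (flux, cell area in m^2) pairs -- placeholder values; a 4x5 degree
# cell near the equator is roughly 2.5e11 m^2 (assumed).
cells = [(+2.0, 2.5e11), (-3.5, 2.5e11), (+1.0, 2.0e11)]
uptake = sum(cell_flux_pgc(f, a) for f, a in cells if f < 0)
outgas = sum(cell_flux_pgc(f, a) for f, a in cells if f > 0)
print(f"outgas {outgas:.4f}  uptake {uptake:.4f}  net {outgas + uptake:.4f} PgC/yr")
```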
You said,
>> The main problem in this all is that you use a different definition for “equilibrium” than most other people (especially chemists) do. The Bjerrum plot has nothing to do with stoichiometry, neither is a fixed equilibrium.
Wrong. IPCC’s equations 7.1 and 7.2, conveniently abbreviated by Phil, above, on 6/29/10, are stoichiometric equations. They are found online in Zeebe & Wolf Gladrow, “CO2 in Seawater: Equilibrium, Kinetics, Isotopes”, 24/06/2006, Chart 3, but including four stoichiometric equilibrium constants for the reactions. The solution to the stoichiometric equations is the Bjerrum plot, id., Chart 5.
I don’t know what you mean by a “fixed equilibrium”. Those are not my words. The equations and their solution apply, as I have cited above, only to thermodynamic equilibrium, which I have also labored to define for you. I have insisted on using nothing but thermodynamic equilibrium, denying you and others reliance on the stoichiometric equations in disequilibrium, including your favorite state of dynamic equilibrium. I rely on the same authority as IPCC does. Accusing me of changing the condition in any way is quite incredible.
You say,
>>The chemical reactions/equilibria involved here are by nature all dynamic, have very little to do with thermodynamics (except if for strong exothermic or endothermic reactions, which is not the case here) and absolutely nothing to do with stoichiometry in this case. Please ask anyone with some knowledge of chemical reactions to explain what happens if you change e.g. the pH of the “equilibria”…
Don’t you wish! These assertions contradict Zeebe & Wolf-Gladrow, and IPCC since Z&W-G are IPCC’s authority.
With regard to asking an authority, you have failed to grasp the problem. I have no argument with you about what happens in equilibria. So, I concede the challenge. Now I pose to you the complementary challenge: what do experts say happens with any of these equations and reactions when not in thermodynamic equilibrium.
You say,
>>Nowhere is a linear gain factor used for calibration.
As I have demonstrated here, researchers applied a linear gain factor to the monthly records at SPO and Baring Head, but not at MLO. To say these are not calibration is a stretch when IPCC talks about inter- and intra-network calibrations. You are again using a different word than IPCC and other writers use.
Also, the linear gain factor was in addition to the smoothing via harmonics and splines. A linear gain factor should not have any smoothing property.
You say,
>>Again and again the same false allegation, based on a wrong interpretation of one sentence.
Wrong. CO2 is known not to be well-mixed for a variety of reasons. One of them is the falsification of the records by IPCC to make MLO overlay both SPO and Baring Head. Another is the heavily smoothed nature of these curves; such results do not occur in nature. Another is the fact that SPO sits in a CO2 sink, while MLO sits in the plume of massive oceanic outgassing, so the records should not look alike.
And you say,
>>And last but not least, we have satellite measurements, measuring CO2 everywhere above some height, where the scientists show that CO2 is not “well mixed”, as they see a variability of some 4% of the absolute value over a month.
The satellite is AIRS (Atmospheric Infrared Sounder) and the altitude at which CO2 is imaged is above 8 km. I can’t confirm your number of 4%, but dense, rolling clouds of CO2 are seen billowing up from 8 km. One part of this phenomenon is likely that the uplifting to altitude is irregular, making CO2 at altitude not well-mixed even if it were well-mixed below. A better conjecture is that the atmospheric processes and the imaging tend to blur the imaged CO2 concentration, and that the surface patterns are more intense and focused than those seen above 8 km.
I cannot testify what “hundreds of people working” on the problem do or say. I can testify to what IPCC has done with their work, and it is unscientific and, in my view, criminal.
I stand by my accusations, and my analysis of solubility and thermodynamics discussed here.

Ferdinand Engelbeen
July 4, 2010 7:12 am

Jeff Glassman says:
July 3, 2010 at 10:26 pm
So a first puzzler is how did Takahashi, et al. manage to measure ACO2 flux, and reject nCO2 flux?
Takahashi measured tCO2 fluxes; the 90/92 GtC from the IPCC are total fluxes, and the difference is the net sink rate in the oceans. The 90/92 GtC is responsible for the residence time and is only of interest for total exchanges between the atmosphere and oceans (including the fate of 14C from the atomic bomb testing). The net sink rate of around 2 GtC is of more interest, as that is what governs the decay rate of any extra added CO2 (whatever the origin).
The emissions are known with reasonable accuracy, the increase in the atmosphere with quite good accuracy, the difference is what is absorbed by other reservoirs. The partitioning between oceans and vegetation as sinks can be calculated from d13C changes and oxygen use. See:
http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf
That gives an alternative calculation of the ocean’s sink rate, still with wide margins of error.
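The residence-time versus decay-rate distinction Ferdinand draws here amounts to two divisions. The ~800 GtC atmospheric carbon content and the 100 GtC excess below are assumed round values for illustration; the 90 and 2 GtC/yr figures come from the thread.

```python
# Residence time (set by gross exchange) vs decay time of an excess
# (set by the net sink), using round numbers from this discussion.
atmosphere_gtc = 800.0   # assumed total carbon in the atmosphere
gross_exchange = 90.0    # GtC/yr cycled through the oceans (the "90/92")
net_sink = 2.0           # GtC/yr net ocean uptake
excess_gtc = 100.0       # assumed excess CO2 above the pre-1850 level

residence_time = atmosphere_gtc / gross_exchange   # how long a molecule stays
decay_time = excess_gtc / net_sink                 # how long the excess lasts
print(f"residence ~{residence_time:.0f} yr, excess decay ~{decay_time:.0f} yr")
```

The two timescales differ by nearly an order of magnitude, which is why conflating them leads to confusion about the bomb-14C evidence.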
———-
Wrong. IPCC’s equations 7.1 and 7.2, conveniently abbreviated by Phil, above, on 6/29/10, are stoichiometric equations. They are found online in Zeebe & Wolf Gladrow, “CO2 in Seawater: Equilibrium, Kinetics, Isotopes”, 24/06/2006, Chart 3, but including four stoichiometric equilibrium constants for the reactions. The solution to the stoichiometric equations is the Bjerrum plot, id., Chart 5.
Wrong again. As clearly indicated in Zeebe & Wolf-Gladrow, the equilibrium reaction is simply mass equilibrium for any amount of mass of the items in the equations. The reactions go both ways, whatever the concentrations or change in concentrations involved, and the results are dictated by the equilibrium constants K1 and K2. It is the equilibrium constants which are deduced from stoichiometric conditions, not the basic equilibrium reaction.
The equilibrium constants change with temperature, pressure and salt content (more with temperature than I expected), thus one needs to take the influence of these parameters into account when calculating the results. Wattenberg, in the 1920s cruises, did correct his findings for temperature, salt content and pressure…
As most of the reactions are very fast (less than 1 second), the response to any change in mass of one of the reactants (including pH) or circumstances (temperature, pressure) is almost instantaneous. Thus, thermal equilibrium or not, the results can be calculated for any instance or change of temperature.
———–
Wrong. CO2 is known not to be well-mixed for a variety of reasons. One of them is the falsification of the records by IPCC to make MLO overlay both SPO and Baring Head. Another is the heavily smoothed nature of these curves; such results do not occur in nature. Another is the fact that SPO sits in a CO2 sink, while MLO sits in the plume of massive oceanic outgassing, so the records should not look alike.
I have sent the graphs of the raw, unaltered (hourly averaged) data plus the smoothed curve of MLO together with those from the South Pole. These show clear differences in seasonal trend and a few ppmv difference over a year. The yearly average and trend of the smoothed data, the raw data and the independent flask data for each station are almost identical. MLO is in a “huge” plume out of the oceans, but shows less seasonal variation and similar values to other NH stations. The South Pole is in an ice desert, far away from the southern-ocean sink, and at 3,000 meters receives mainly air (and precipitation) from the whole SH oceans, up to the equator.
And have a look at the color scale of the AIRS satellite: 365-380 ppmv, or +/- 2% of the full scale for one month, where a lot of the differences can be seen. Over a year it is even less. Above Mauna Loa, the “plume” is even lower in CO2 content than at the NH mid-latitudes (flask measurements at sea level in Hawaii and at Mauna Loa show near-identical values, while MLO air is already mixed in the trade winds), and the South Pole is higher in CO2 content than most of the Southern Ocean.
Your allegations are simply without any ground.

tonyb
Editor
July 4, 2010 7:40 am

Ferdinand, Phil and Jeff.
I don’t want to interrupt the interesting exchanges between the three of you, but feel I need some clarification.
I can understand Phil’s position clearly. Over the last two years I have come to know him as a thoughtful and well informed contributor who believes in the conventional radiative physics theory. In this respect he believes that increasing CO2 concentration will inevitably lead to a temperature rise and positive feedbacks that will significantly increase global temperatures. This is a perfectly respectable position which, although I may disagree with his end results, is one he defends well. As an aside, and this is not aimed at Phil, I do find it curious that, if the case for radiative physics is so strong, so many top scientists spend so much time, effort and money trying to make historic temperatures (LIA, MWP etc.) conform to what they believe they should have been, not what they actually were.
Jeff appears to fundamentally disagree with the figures produced by the IPCC, and also with the physics which supposedly shows CO2 and its side effects to be contributing to an overall warming of the planet. In other words, he does not accept the theory of radiative physics as understood by Phil.
Ferdinand has the most nuanced view, it seems to me. He makes an eloquent case that historic 19th-century CO2 readings are often highly inaccurate (something I still disagree with him on) but, more importantly, defends the modern IPCC figures as being completely accurate. However, as far as I can tell, Ferdinand does not believe that CO2 (at 380 ppm) causes much, if any, warming, so in that respect he appears to be much closer to Jeff’s position on radiative physics than he is to Phil’s.
I can understand where Jeff and Phil are coming from, therefore, but would much appreciate it if Ferdinand could link to one of his excellent papers showing why he believes that, whilst the CO2 concentrations are correct, they do not have the effect on temperatures that radiative physics (as understood by Phil) suggests they should.
Tonyb

July 4, 2010 8:10 am

Re Phil, 7/3/10 at 2:41 pm:
First, I owe you an apology. I copy the posts into a word processor for highlighting, color coding, and footnoting to build a response. When I copied your post of 6/29/10, I lost the first two offsetting quotes and the italics. That’s why I found your marks on just the second paragraph to be strange. Anyway, you were correctly quoting me, and I apologize for criticizing you on that point.
Now it’s your turn back in the barrel, and for some wire brushing, to mix the metaphors.
You said of that same quote,
>>… I believe you used the “>>” to indicate they were quotes by Ferdinand, where he got them from I have no idea.
Believe as you wish and conjecture as you might, but I was quoting Wikipedia as I said immediately above the quotation.
You don’t know what ad hominem means. When I criticize your writing as being illogical, disconnected, and outright false, those are not ad hominems. An ad hominem is an attack on the person, not his argument. When you call me an obnoxious jerk without facts or definitions, that is an ad hominem.
When I wrote that “No theory exists to guide us in disequilibrium”, you responded
>>Rubbish, Le Chatelier’s principle and reaction kinetics do just fine.”
That is false. Le Chatelier’s principle is only a guide that an isolated system disturbed from an equilibrium state will move toward an equilibrium state. It tells us nothing about the path that might be taken, or what happens to a system that is not isolated. You provide evidence that you don’t understand what you think you know, that you are willing to misrepresent what you know to score a point in argument, or that you are a careless writer. That is an offensive insertion into the dialog.
Then you add in your latest post (op. cit.),
>>Which of course they do, they allow us to describe the transition to the equilibrium state following a perturbation, which you appear to be unaware of. Le Chatelier’s principle describes the way the equilibrium will shift given a change to a system in equilibrium. For example changing the pressure of a system of reacting gases, e.g. N2,H2 & NH3. The rate of that change is described by chemical kinetics.
Le Chatelier’s principle describes no “way”, that is, no path, no direction, no trajectory. You are correct now that the LCP applies to systems in equilibrium, before and after the disturbance or change. Your previous “rubbish” crack you supported by urging that the LCP was a model for the state of systems while in disequilibrium.
Your reference to chemical kinetics might be true, but you provide no evidence of that fact. You just name drop “chemical kinetics” as if that was enough. The reader is supposed to take what you say as evidence because Phil has a PhD topic, another naked claim gratuitously inserted.
You say,
>>Those aren’t stoichiometric equations they are chemical equilibria (which are also a thermodynamic equilibria).
Your sentence is wrong on one point and ambiguous on another. Stoichiometric equations apply in thermodynamic equilibrium, which implies chemical equilibrium. Your sentence can be read to say because the equations require chemical equilibrium, that implies thermodynamic equilibrium, which is false.
But to say they are not stoichiometric equations in the first place is wrong, which is more evidence that you don’t know what you think you know. As I had to explain to Engelbeen, they certainly are stoichiometric equations. Here’s a compact quote from a professor’s online notes that might help you:
>>Stoichiometry describes the proportions in which chemical species combine.
>>CH4 + 2O2 –> CO2 + 2H2O
>>is a stoichiometric equation for the combustion of methane … . All stoichiometric equations can be represented generically as:
>>aA+bB –> cC+dD
>>The numbers which quantify the amounts (a,b,c,d, nu) are called stoichiometric coefficients.
http://www.cbu.edu/~rprice/lectures/reactive.html
Does that help? Do you see now that the equations you wrote on 6/29/10 at 9:18 pm are by definition stoichiometric equations? If you need help applying this lesson to the carbonate system, you might want to read the Zeebe and Wolf-Gladrow paper I cited above. They refer to the coefficients as “stoichiometric equilibrium constant[s]”. In their entry in the Encyclopedia of Paleoclimatology, etc., above, they also specify that thermodynamic equilibrium is required.
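The combustion example in the quoted notes can be verified mechanically by counting atoms on each side, which is all a stoichiometric balance asserts:

```python
# Verify the stoichiometric balance of CH4 + 2 O2 -> CO2 + 2 H2O
# by counting atoms on each side of the equation.
from collections import Counter

def atoms(terms):
    """Sum atom counts over (coefficient, {element: n}) terms."""
    total = Counter()
    for coeff, composition in terms:
        for elem, n in composition.items():
            total[elem] += coeff * n
    return total

lhs = [(1, {"C": 1, "H": 4}), (2, {"O": 2})]          # CH4 + 2 O2
rhs = [(1, {"C": 1, "O": 2}), (2, {"H": 2, "O": 1})]  # CO2 + 2 H2O
print(atoms(lhs) == atoms(rhs))  # True: the equation is balanced
```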
You criticize my explanation to you about solubility as “irrelevant crap and bad analogy”. Here’s what you had to say:
>>Strictly Henry’s Law doesn’t apply to CO2 in water because CO2 reacts with water but it’s a good approximation.
while you say
>> … without taking the chemical equilibria into account Henry’s Law is useless for seawater.
Phil concludes,
>>The ‘Revelle buffer nonsense’ as you call [it] is necessary to account for the chemical composition of seawater, because the simple application of Henry’s Law is not appropriate when there is a chemical reaction between the gas and the solvent.
Therefore, Phil’s conclusion is that Henry’s Law, though a “good approximation”, is never appropriate because CO2(aq) always undergoes chemical reactions in the solvent. Utterly flummoxed and outwitted, he dismisses all explanations of his errors as “crap”.
For the sake of other readers, Phil has unwittingly contradicted himself. We know that CO2 is highly soluble in water, including seawater. The water need not be in equilibrium, and it is still highly soluble. However, Henry’s constants are only known with any precision for equilibrium. As Phil first implied, those constants are a good approximation at one atmosphere and for sea surface temperatures. We know they are good approximations, not because Phil is a PhD candidate, but because when we calculate the CO2 uptake and outgassing for the conveyor belt (aka THC, MOC), using those constants, reasonable temperatures, and reasonable estimates for the flow rate in the THC, the whole system hangs together.
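A sketch of the kind of Henry's Law calculation being described: dissolved free CO2 at a given atmospheric pCO2, at several temperatures. The Henry constant (~0.034 mol/(L atm) at 298 K) and its van 't Hoff temperature coefficient (~2400 K) are illustrative round values from common compilations, not a calibrated seawater model.

```python
# Henry's Law solubility of free CO2 vs temperature, van 't Hoff form.
# kh_298 and c are assumed illustrative constants, not seawater-specific.
import math

def kh_co2(t_kelvin, kh_298=0.034, c=2400.0):
    """Henry constant in mol/(L atm) at temperature t_kelvin."""
    return kh_298 * math.exp(c * (1.0 / t_kelvin - 1.0 / 298.15))

pco2_atm = 390e-6  # ~390 ppmv CO2 partial pressure
for t_c in (0, 15, 25):
    kh = kh_co2(273.15 + t_c)
    print(f"{t_c:2d} C: [CO2(aq)] ~ {kh * pco2_atm * 1e6:.1f} umol/L")
```

Cold water holds roughly twice the free CO2 of warm water at the same pCO2, which is the solubility behavior behind the uptake/outgassing pattern discussed in this paragraph.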
At the same time, the stoichiometric equations are valid in disequilibrium, too, but the relative ratios of reactants (the stoichiometric constants) are, like Henry’s constants, only defined and known under thermodynamic equilibrium. So to say the stoichiometric equations are valid is a tautology. For the carbonate system, they express chemical reactions involving all the possible components of CO2 in molecular or other forms. However, in disequilibrium, the chemical equations are not in balance, and we have nothing to suggest what the appropriate ratios are.
One thing that is known, however, is that Henry’s Law is a “good approximation”, and that it does not depend on the pH of the water, or on its stoichiometric state of imbalance, at least down to the fourth order of significance. The science is at the limits of the state of the art for measuring the flux of CO2 across the air-sea interface using the known parameters of pCO2 in the air and the temperature of the water, and we might throw in salinity.
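To put numbers on the “good approximation” above, here is a minimal sketch of Henry’s-law solubility using the Weiss (1974) fit for CO2 in seawater. The coefficients are that published parameterization; the pCO2 of 390 µatm is merely illustrative:

```python
import math

def weiss_k0(temp_c, salinity):
    """Henry's-law solubility K0 of CO2 in seawater, mol/(L*atm),
    from the Weiss (1974) volumetric fit."""
    t = temp_c + 273.15
    ln_k0 = (-58.0931
             + 90.5069 * (100.0 / t)
             + 22.2940 * math.log(t / 100.0)
             + salinity * (0.027766
                           - 0.025888 * (t / 100.0)
                           + 0.0050578 * (t / 100.0) ** 2))
    return math.exp(ln_k0)

def co2_aq(pco2_atm, temp_c, salinity=35.0):
    """Equilibrium free CO2 in solution, mol/L: [CO2(aq)] = K0 * pCO2."""
    return weiss_k0(temp_c, salinity) * pco2_atm

# At the same partial pressure, colder water holds more CO2.
print(co2_aq(390e-6, 0.0))   # illustrative polar surface water
print(co2_aq(390e-6, 25.0))  # illustrative tropical surface water
```

Note that the calculation involves only temperature, salinity, and pCO2(g), with no reference at all to pH or the state of the carbonate system, which is the elementary point at issue.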
IPCC, however, has a model for the air-sea flux that makes it a bottleneck for ACO2 at about 6 GtC/yr, but not at all for nCO2 at 90 to 110 GtC/yr. The difference is political: IPCC reads its charter from WMO and the UN as charging it with finding data to demonstrate its assumption that ACO2 causes global warming – overtly selecting data to fit the theory!
But the estimated emissions from fossil fuel are not large enough. So IPCC adopted the Revelle & Suess’s failed 1957 conjecture of a buffer, one that would cause only ACO2 to accumulate in the atmosphere. Within a year or so after Revelle published his grant request, now elevated in the community to a paper, Bert Bolin took up the conjecture and elaborated on it. Bolin carried that conjecture with him when he became the first Chairman of IPCC upon its founding in 1988. Since that time, IPCC has elaborated on the Revelle buffer but never validated it.
The Revelle buffer is invalid. It violates Henry’s law, and not just under the idealized state of thermodynamic equilibrium, but in the empirical world of it being a “good approximation”. Because the only known difference between ACO2 and nCO2 is the isotopic mix ratio, the Revelle buffer requires the ocean to fractionate, to discriminate between different molecular weights of the components of 12CO2, 13CO2, and 14CO2. That effect may well exist, but it is far below the state of the art in measuring. The conjecture requires different Henry’s constants for the different isotopes. Even at that, the ocean would fractionate to change the mix of CO2 in the atmosphere, an example of the Suess effect, and not just absorb the two species, ACO2 and nCO2, intact.
The Revelle buffer is a relationship between measurable components in the ocean. As a result, it always has a numerical value. But in no way does a numerical value demonstrate the conjecture of a buffer effect. IPCC reports on the successful part of evaluating the Revelle buffer over the open ocean, but intentionally deleted the part that shows the measurements to be strongly temperature dependent. During the drafting of its Fourth Assessment Report, one of its most respected reviewers, Nicolas Gruber, said, “it is wrong to suggest that the spatial distribution of the buffer factor shown in Figure 7.3.10c is driven by temperature.” That relationship between the Revelle buffer factor and temperature was a simple mapping of Henry’s Law as it depends on temperature at a constant partial pressure in the atmosphere. What IPCC’s scientists had managed to do was measure solubility (they never mention Henry’s Law), and instead of finding that the Revelle buffer was a simple matter of solubility, they suppressed the measurements, removing Figure 7.3.10c from the final report.
A word for the readers on thermodynamic equilibrium. It is the toughest of standards, good only for the field of thermodynamics, a field that deals with unmeasurable, unobservable macroparameters. Examples are a global average surface temperature, and a global average Bond albedo. In thermodynamic equilibrium, all motion has ceased, as has all heat, the flow of thermal energy. The surface ocean is stirred vigorously by various forces, including thermal energy that is in perpetual exchange with the atmosphere and deep space.
So the surface ocean is in perpetual disequilibrium. Science has no model by which to establish the mix ratio of CO2(aq) and its reactants. IPCC claims to have demonstrated the validity of the buffering by evidence that the build-up in atmospheric CO2 bears the fingerprint of human activity. Those demonstrations involve a misunderstanding of ice core data and its low frequency filtering effect, and two outright fraudulent graphical demonstrations that the burning of fossil fuels produces a predicted reduction in the atmospheric mix of CO2 isotopes, and that the build-up of CO2 corresponds to the depletion of atmospheric O2 according to a stoichiometric relation for burning fossil fuels.
CO2 is a greenhouse gas, meaning that it does contribute to the blanket effect of the atmosphere, but the incremental amount is infinitesimal and saturating. So all this straightening out of IPCC is fighting a five alarm fire, and for the likes of Phil, it is blowing out matches.

Jeff Glassman
July 4, 2010 9:06 am

Re tonyb, 7/4/10 at 7:40 am:
By “radiative physics” I assume you mean IPCC’s radiative forcing paradigm. I can have no objection to that. Science does not dictate valid forms for models.
IPCC’s paradigm never rose above the conjecture level before its efforts to validate it had the opposite effect of invalidating it. I rank scientific models nested in increasing quality as conjectures, hypotheses, theories, and laws. The rules are simple and experiential. A conjecture can be most anything not contradicted by evidence. A hypothesis adds that the model is complete, covering its entire domain with no contradictions, and making a prediction, its range, that is beyond chance. A theory adds that at least one non-trivial prediction has been validated with data. A law adds that all predictions and all consequences of the model have been validated with data.
A model may violate any axiom, maxim, or principle of science. The acid test is advancement to a theory – validation of a non-trivial prediction. Ethics demand that a scientist not try to influence public opinion based on less than a theory.
IPCC claims predictive power for its model, but it has failed to produce it. Meanwhile, it has attempted to justify its modeling with overwhelming reports that violate known principles and laws, that violate ethical practices, and that distort data. See rocketscientistsjournal.com, “IPCC’s Fatal Flaws” for a discussion of eight such errors. That list has grown. We can add to the violation of Henry’s Law, violation of the Beer-Lambert Law. See also id., “SGW”, which provides an alternative model for global warming, and in the process examines IPCC’s false and fraudulent claims of fingerprints of human activity in its data.
The rest is in the details.

Ferdinand Engelbeen
July 5, 2010 12:07 pm

Jeff Glassman says:
July 4, 2010 at 8:10 am
Stoichiometry describes the proportions in which chemical species combine.
CH4 + 2O2 –> CO2 + 2H2O

Indeed that is a stoichiometric reaction, going completely to the right side; it is practically irreversible. But:
CO2 + H2O ⇌ CO3(–) + 2 H(+)
is not a stoichiometric reaction; it is a dynamic equilibrium reaction leading to a “steady state composition” (whose reaction constants are in many cases determined from stoichiometric start conditions). That may be read from the same source:
http://www.cbu.edu/~rprice/lectures/multrxn.html
Thus the rest of your explanation is based on -again- a wrong interpretation of a chemical equilibrium reaction, where there isn’t any need for stoichiometric (start) conditions.
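In the narrow sense used above, a stoichiometric equation is just an element balance, and that can be checked mechanically. A minimal sketch for the combustion equation in this exchange (the data structures are mine, for illustration only):

```python
from collections import Counter

def atoms(formula_counts, coeff):
    """Scale a molecule's atom counts by its stoichiometric coefficient."""
    return Counter({el: n * coeff for el, n in formula_counts.items()})

# CH4 + 2 O2 -> CO2 + 2 H2O, written as (coefficient, atom counts) pairs.
reactants = [(1, {"C": 1, "H": 4}), (2, {"O": 2})]
products  = [(1, {"C": 1, "O": 2}), (2, {"H": 2, "O": 1})]

def side_total(side):
    """Total atoms of each element on one side of the equation."""
    total = Counter()
    for coeff, mol in side:
        total += atoms(mol, coeff)
    return total

# A stoichiometric equation must balance every element.
balanced = side_total(reactants) == side_total(products)
print(balanced)
```

The check says nothing about reversibility or kinetics; it expresses only the mole ratios in which the species combine, which is all the coefficients 1, 2, 1, 2 assert.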

Ferdinand Engelbeen
July 5, 2010 1:39 pm

With the use of the “smaller than” and “larger than” signs, half of the reactions disappeared, as that probably was interpreted as HTML code… Anyway, an equilibrium reaction is quite different from a stoichiometric reaction…

Jeff Glassman
July 5, 2010 2:25 pm

Re Ferdinand Engelbeen, 7/5/10 at 12:07 pm:
I checked your authority, Randel M. Price, Assoc. Prof., Chemical Engineering, at the URL you gave, which is to his class notes for students.
I did not use the term stoichiometric reaction. I used the term stoichiometric equation, citing authority. Price does not define what is or what is not a stoichiometric equation.
You mention that the reaction you cite is “practically irreversible”. Price does not give names to reactions based on whether they are reversible or irreversible.
Price says,
>>If the reaction mixture is held under controlled conditions, eventually it will balance out to a fixed composition. This “long time” condition is called equilibrium, and the “equilibrium composition” (“steady state composition”) is of great importance.
but Price does not use either of the terms “equilibrium reaction” and “stoichiometric reaction” which you argue are “quite different”. What Price seems to be saying is that any reaction mixture will reach an equilibrium state, which looks like Le Chatelier’s principle, does it not? Nothing too profound here, and nothing that makes a point in this dialog.
Price does not mention the phrase you promote and again cite, “dynamic equilibrium”.
You rely on “stoichiometric (start) conditions”. Price doesn’t even use the word “start”, much less your term.
To the contrary, the authorities I cited require thermodynamic equilibrium, and they are the same people cited by IPCC. Price does not require equilibrium.
The authorities I cite provide a stoichiometric equilibrium constant, K_2 and K_2*, for the reaction you claim is not stoichiometric. Z&W-G, op. cit., chart 3. (Note: the asterisk might be to distinguish between a constant for chemical equilibrium rather than thermodynamic equilibrium.) That is strongly suggestive that the reaction is indeed stoichiometric, though I have no authority so elementary that it would say that the “stoichiometric equilibrium constant” applies to stoichiometric equations.
You accuse me of making “a wrong interpretation”. On that subject, Price carries this important passage:
>>WARNING!: Different authorities use different definitions of some of these terms. If you bring information in from an outside source, be sure you know how it defines selectivity, etc.
You have thrown that caution to the wind, providing no authority * for substituting dynamic equilibrium for thermodynamic equilibrium, * for declaring equations not to be stoichiometric when they fit the stoichiometric definition provided, * for declaring Henry’s Law to involve the partial pressure difference when the elementary authority provided said the Law’s dependence on pressure only involves pCO2(g), * for restricting the meaning of calibration to the laboratory while IPCC uses calibration to indicate post-laboratory adjustments among and between stations in a network, * for endorsing the use of Henry’s Law to determine the rate of CO2 flux instead of what the Law provides: the total CO2 in solution.
P.S.
With regard to your amplifying post at 1:39 pm, posting some of the work needed here is a challenge. It seems impossible under the site’s imposed HTML limitations, especially showing a stoichiometric equation with its constant placed above the reaction symbol. I did not rely on the typography of your post.

Ferdinand Engelbeen
July 5, 2010 4:37 pm

Jeff Glassman says:
July 5, 2010 at 2:25 pm
I did not use the term stoichiometric reaction. I used the term stoichiometric equation, citing authority. Price does not define what is or what is not a stoichiometric equation.
OK, I see my confusion now.
The stoichiometric equation by Price is simply the ratios (“stoichiometric coefficients”) of each product and each reactant of a chemical reaction. That applies to both reversible and irreversible reactions.
But the stoichiometric equilibrium constants are not the same items as the stoichiometric coefficients, that is where you are confused:
Stoichiometric coefficients for
CH4 + 2O2 –> CO2 + 2H2O
are the 1 for CH4, the 2 for O2, the 1 for CO2 and the 2 for H2O, or the number of moles for each of them.
While the stoichiometric equilibrium constants are ratios at chemical equilibrium (at a certain thermodynamic equilibrium) between resulting products and the reactants of the equation, to the power of the stoichiometric coefficients.
There is no need at all for the equilibrium reaction itself to start at, or imply in any way, stoichiometric conditions. The equilibrium constants are known for different thermodynamic conditions, so the ratio of all reactants and products can be calculated for any level or change in concentrations or conditions.
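The last point above, that the same equilibrium composition is reached from any starting mixture, can be illustrated with a toy kinetic sketch. The rate constants kf and kr below are arbitrary illustrative values; only their ratio K = kf/kr matters at steady state:

```python
def relax(a0, b0, kf=4.0, kr=1.0, dt=1e-3, steps=20000):
    """Integrate simple A <-> B kinetics: dA/dt = -kf*A + kr*B.
    The steady state satisfies B/A = kf/kr = K, whatever the start."""
    a, b = a0, b0
    for _ in range(steps):
        rate = kf * a - kr * b   # net forward rate
        a -= rate * dt
        b += rate * dt
    return a, b

# Two very different starting mixtures...
a1, b1 = relax(1.0, 0.0)   # all A, no B
a2, b2 = relax(0.2, 0.8)   # mostly B
# ...relax to the same ratio B/A = K = 4, with total A+B conserved.
print(b1 / a1, b2 / a2)
```

No “stoichiometric start condition” appears anywhere: the constant K fixes the end state, and the kinetics fix only how fast it is approached.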

Ferdinand Engelbeen
July 6, 2010 5:41 am

You have thrown that caution to the wind, providing no authority * for substituting dynamic equilibrium for thermodynamic equilibrium, * for declaring equations not to be stoichiometric when they fit the stoichiometric definition provided, * for declaring Henry’s Law to involve the partial pressure difference when the elementary authority provided said the Law’s dependence on pressure only involves pCO2(g), * for restricting the meaning of calibration to the laboratory while IPCC uses calibration to indicate post-laboratory adjustments among and between stations in a network, * for endorsing the use of Henry’s Law to determine the rate of CO2 flux instead of what the Law provides: the total CO2 in solution.
I can return the compliment by saying that you base your definitions on a misunderstood half-sentence and after that ignore what the authors say, what other sources by different authors say, and especially what the data show. No matter how much proof of the contrary is given, you stick to your opinion, which is mainly yours only:
for substituting dynamic equilibrium for thermodynamic equilibrium
thermodynamic equilibrium is one form of dynamic equilibrium, chemical equilibrium like the reaction of CO2 with water to form bicarbonate and back is another form.
for declaring equations not to be stoichiometric when they fit the stoichiometric definition provided
The reaction “constant” is defined stoichiometrically and can be used whatever the initial mixture was, as the reaction constant remains… constant (at constant thermodynamic conditions). These were already used in the 1920s, including compensation for temperature, salt content and pressure.
for declaring Henry’s Law to involve the partial pressure difference when the elementary authority provided said the Law’s dependence on pressure only involves pCO2(g)
I never said that Henry’s Law involves partial pressure differences. Henry’s Law only shows the ratios of CO2 in the atmosphere and free CO2 in solution when a thermodynamic equilibrium is reached. If there is no partial pressure difference between what is in solution and in the gas phase above it, then both are in (thermodynamic) equilibrium. If the partial pressure of the ocean surface is higher or lower (as defined), then there is no equilibrium and the speed of transfer is proportional to the partial pressure difference. The latter is used by Takahashi and many others.
for restricting the meaning of calibration to the laboratory while IPCC uses calibration to indicate post-laboratory adjustments among and between stations in a network
Totally nonsense, as can be seen in the raw measurements, both from flask data and continuous measurements, airplane and satellite data compared to the “adjusted” data. All CO2 data from 1,000 m high over land and from sealevel over the oceans to the stratosphere all over the world show variations within +/- 2% over a year. That is within the definition of “well mixed”.
for endorsing the use of Henry’s Law to determine the rate of CO2 flux instead of what the Law provides: the total CO2 in solution
Every textbook of chemistry (including Zeebe & Co) shows that Henry’s Law only determines free CO2 in solution (and back!) at equilibrium. If there is disequilibrium, either by changes in atmospheric CO2 or by changes in free CO2 in solution, a flux will occur against the direction of the disturbance. That is e.g. the case for a change of ocean pH or total CO2 in solution, as the Wattenberg measurements showed. The change of the chemical equilibrium by pH or DIC is not part of Henry’s Law, but influences the concentration of free CO2, thus Henry must work harder to restore the thermodynamic equilibrium…
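The transfer scheme described above, a flux proportional to the air-sea pCO2 difference (as used by Takahashi and others), can be sketched in a few lines. The transfer velocity and solubility below are placeholder values, not measured ones:

```python
def air_sea_flux(pco2_water_uatm, pco2_air_uatm, k_transfer=0.24, k0=0.030):
    """Bulk flux parameterization F = k * K0 * (pCO2_water - pCO2_air).
    Positive = outgassing (sea to air), negative = uptake.
    k_transfer: gas transfer velocity, m/day (illustrative value);
    k0: Henry's-law solubility, mol/(L*atm) (illustrative value).
    Returns mol CO2 per m^2 per day."""
    dp_atm = (pco2_water_uatm - pco2_air_uatm) * 1e-6  # uatm -> atm
    return k_transfer * k0 * dp_atm * 1000.0           # mol/L -> mol/m^3

print(air_sea_flux(450, 390))  # supersaturated water: outgassing (F > 0)
print(air_sea_flux(330, 390))  # undersaturated water: uptake (F < 0)
```

Only the difference in pCO2 sets the direction and speed of transfer here; Henry’s constant itself sets the equilibrium end point, which is the distinction being argued over in this exchange.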

Phil.
July 6, 2010 8:34 am

Jeff Glassman says:
July 4, 2010 at 8:10 am
Re Phil, 7/3/10 at 2:41 pm:
You don’t know what ad hominem means. When I criticize your writing as being illogical, disconnected, and outright false, those are not ad hominems. An ad hominem is an attack on the person, not his argument. When you call me an obnoxious jerk without facts or definitions, that is an ad hominem.

I am well aware of what ad hominem is and the following is a good example of it!
Jeff Glassman says:
June 30, 2010 at 7:42 am
Re Phil, 6/29/10 at 9:18 am:
Phil’s like the kid who just woke up in the middle of the lecture and shouted out. He appears to have studied chemistry, at least through the lectures on equilibria,

The phrase I used to describe you is an accurate description of your behavior on this thread.
Apparently you did not study chemistry, for your edification the symbol ‘⇋’ used below is conventionally used by chemists to indicate chemical equilibrium between the species.
Do you see now that the equations you wrote on 6/29/10 at 9:18 pm are by definition stoichiometric equations?
So by definition they are not! (By the way I don’t need to read an elementary textbook to know what a stoichiometric equation is, and don’t take lectures from someone who does.)
They should not be taken as a stoichiometric reaction since the reactions often differ (e.g. multiple steps, different order etc), the reaction kinetics describes the actual steps involved to reach the ultimate equilibrium state and the rate by which the equilibrium is reached. In this regard the kinetics is vital, a system can be in disequilibrium but if the reactions leading to equilibrium are slow then the equilibrium state may never be reached. (A good example would be the proteins in your body, the equilibrium state is a mess of amino acids due to hydrolysis by water, fortunately the rate of hydrolysis is extremely slow.)
“In the case of CO2/water you have the more complicated system of
CO2(g)⇋CO2(l)+H2O⇋H2CO3⇋HCO3− + H+⇋CO32− + H+”
You wrote that “No theory exists to guide us in disequilibrium”, and I responded
“Rubbish, Le Chatelier’s principle and reaction kinetics do just fine.”
You ignored the ‘and’ in your critique!
I further said that: “Le Chatelier’s principle describes the way the equilibrium will shift given a change to a system in equilibrium. For example changing the pressure of a system of reacting gases, e.g. N2,H2 & NH3. The rate of that change is described by chemical kinetics.”
Something you apparently have not grasped and frankly disqualifies you from a learned discussion on this topic. You appear to not even possess a high school knowledge of chemical kinetics and equilibrium which are a basic requirement for the understanding of the interaction of CO2 and seawater.
Your attempts to demean as a bolster to your argument is offensive, for your information it has been several decades since I was a graduate student.

Jeff Glassman
July 6, 2010 9:22 am

Re Ferdinand Engelbeen, 7/6/10 at 4:37 & 5:41 am:
On 7/4/10 at 8:10, I provided you a sorely needed definition of stoichiometric equations. It happened to be from Randel Price, an authority you have also used, although the article is merely notes for his students (that is, I note, not subject to review). Regardless, his definition included defining the stoichiometric coefficients. Except for that definition, I have not used the term stoichiometric coefficients anywhere.
In my next paragraph to the Price citation, I referred to Z&W-G “stoichiometric equilibrium constant[s]” as “coefficients”. This misled you. I did not intend a reference to stoichiometric coefficients, but instead to the reaction coefficients that appear above the reaction arrows. My post on 7/5/10 at 2:25 by its reference to the K_2 coefficients should have clarified the matter. Sorry about the confusion.
You claim that I, too, have thrown caution to the wind by not providing references for my terms and models. To the contrary, I claimed to have provided those references in the same “You have thrown caution to the wind” paragraph. If you believe that I have taken any kind of controversial position without authority, I would be happy to rectify the omission. I have laboriously provided citations during the dialog.
For example, on 6/26/10 at 2:03 pm, I provided you a textbook definition of thermodynamic equilibrium, using Zemansky. It excludes a state in which matter or heat is being exchanged internally or with the external world. That pretty much excludes whatever you mean by “dynamic equilibrium”. You continue to defend that phrase, going so far as to say,
>> thermodynamic equilibrium is one form of dynamic equilibrium, chemical equilibrium like the reaction of CO2 with water to form bicarbonate and back is another form.
Surely whatever definition you might be using, dynamic equilibrium would hold all the way down to no dynamics at all, so I would expect thermodynamic equilibrium, a defined term, to be a subset of dynamic equilibrium, should that term ever be defined and brought into this discussion. However, thermodynamic equilibrium even viewed as a subset of dynamic equilibrium is a highly restrictive, theoretical state. Therefore, it must not be ignored where a model specifies that state as a condition.
You claim,
>>I never said that Henry’s Law involves partial pressure differences.
when you had asserted
>>Henry’s Law still is working, but the amount of free CO2 at the surface is not only influenced by temperature, but by a host of other factors. At last it is the real partial pressure of free CO2 in the last few cm of water which decides which way CO2 will go: in or out of the waters, if the difference with pCO2 of the atmosphere is higher or lower. Engelbeen, 6/22/10 at 5:03 am
That sounds like a contradiction to me.
You wrote,
>>>> for restricting the meaning of calibration to the laboratory while IPCC uses calibration to indicate post-laboratory adjustments among and between stations in a network

>>Totally nonsense, as can be seen in the raw measurements, … .
Your response is irrelevant. The statement you call nonsense is about IPCC’s use of calibration AFTER the lab work, meaning AFTER the raw measurements. You return to endorsing the lab work. Besides, IPCC doesn’t even publish raw data, at least as far as CO2 concentrations are concerned.
You claim,
>>All CO2 data from 1,000 m high over land and from sea level over the oceans to the stratosphere all over the world show variations within +/- 2% over a year.
That is one brave claim! If it is true, why didn’t IPCC use that information, instead of inserting a “linear gain factor” to adjust non-MLO sites to agree with MLO? If ± 2% is significant, why didn’t IPCC use a number like that instead of its qualitative, undefined “well-mixed” characteristic? I don’t have the time to check a pair of raw CO2 data records, much less all such pairs, to cure my skepticism about your claim. And I have no idea what kind of smoothing you used to calculate your annual variation. Is this comparison published, perhaps on your website?
You wrote,
>>>>for endorsing the use of Henry’s Law to determine the rate of CO2 flux instead of what the Law provides: the total CO2 in solution

>>Every textbook of chemistry (including Zeebe & Co) shows that Henry’s Law only determines free CO2 in solution (and back!) at equilibrium. If there is disequilibrium, either by changes in atmospheric CO2 or by changes in free CO2 in solution, a flux will occur against the direction of the disturbance.
First, “total CO2 in solution” means total CO2(aq). Your criticism is to deny what I did not say: that Henry’s Law somehow applies to DIC or DIC+DOC+POC, where DIC is CO2(aq)+HCO3-+CO3--. (If you need an authority, let me know.) Here is one of the places where you and other IPCC believers go far off track. You fix the ratio of DIC:DOC:POC (like 2000:38:1; AR4, ¶7.3.4.1) and the concentrations of CO2(aq):HCO3-:CO3-- (about 1%:91%:8%, TAR, Box 3.3, p. 1079; <1%:90%:9% at pH = 8.2, Zeebe & Wolf-Gladrow, slide 8). You use the fixed ratios determined for thermodynamic equilibrium for an ocean never in thermodynamic equilibrium. You apply the equations for thermodynamic equilibrium to a state of dynamic equilibrium, which you seem unable to define, so those equations are inapplicable.
As a result you convert Henry's Law from determining the concentration of CO2(aq) into one locked to the whole ocean system of CO2(aq):HCO3-:CO3--:DOC:POC. You create an equilibrium bottleneck to defeat Henry's Law. Because the ratios are fixed in your model, you could equally well express Henry's Law for any one of the forms of CO2, the total CO2 in all forms, or any subtotal.
Your theory is essential to AGW. Not enough ACO2 is emitted in any year to be the alleged cause of global warming, so it must be accumulating in the atmosphere (while preposterously natural CO2 is not!). This fixing of the ratios creates that bottleneck. Thus the atmosphere is an accumulator of a certain species of CO2, while the ocean adjusts all its ratios, including its pH (a fear bonus), all based on games played with equilibrium. In this theory, the atmosphere is a buffer in the sense of being an accumulator of ACO2, and the ocean is a buffer in the sense of a resistance against dissolution of CO2(g). This buffer, viewed either equivalent way, is the Revelle buffer.
Of course, your model that you share with IPCC, including the Revelle buffer, is balderdash. In ordinary physics, that is, excluding climatology, Henry's Law runs according to established theory to load the surface layer of the ocean with an unrestricted amount of CO2(aq), up to the limits for its temperature. This dissolution is just one more source of disequilibrium for the surface layer (along with solar radiation, long wave radiation to space, wind, ocean currents, ocean biology, etc.). In short, CO2(aq):HCO3-:CO3-- is not in any predetermined ratio. The Bjerrum plot will have no meaning in disequilibrium, the real world.
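For reference, the equilibrium speciation that the Bjerrum plot mentioned above encodes is easy to compute from the stoichiometric constants K1* and K2*. The pK values below are illustrative surface-seawater figures near 25ºC, not authoritative ones, and the whole calculation assumes exactly the equilibrium state being disputed:

```python
def carbonate_fractions(ph, pk1=5.86, pk2=8.92):
    """Equilibrium fractions of CO2(aq), HCO3-, and CO3-- in DIC at a
    given pH, from stoichiometric constants K1* and K2* (illustrative
    values; real constants vary with temperature, salinity, pressure)."""
    h = 10.0 ** (-ph)
    k1 = 10.0 ** (-pk1)
    k2 = 10.0 ** (-pk2)
    denom = 1.0 + k1 / h + (k1 * k2) / h ** 2
    f_co2 = 1.0 / denom                   # free CO2(aq)
    f_hco3 = (k1 / h) / denom             # bicarbonate
    f_co3 = ((k1 * k2) / h ** 2) / denom  # carbonate
    return f_co2, f_hco3, f_co3

# Near pH 8.2, bicarbonate dominates and free CO2 is under 1% of DIC.
print(carbonate_fractions(8.2))
```

With these illustrative constants, the fractions come out broadly in line with the TAR and Zeebe & Wolf-Gladrow figures quoted above; the fractions depend only on pH and the constants, which is precisely why the result is an equilibrium statement and nothing more.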
The buffer holding excess CO2 is not the atmosphere. Under Henry's Law, the CO2 buffer in the system is the surface layer of the ocean. Henry does not have to work at all. He just watches.
Now this is all a tempest in a teapot because radiative forcing (RF) is not proportional to the logarithm of the concentration of CO2(g). The theory for radiation absorption in a gas is the Beer-Lambert Law, the next major law contradicted only within the climatology consensus. The radiative forcing follows an S-curve, meaning that the effects of CO2 are (a) bounded and (b) saturating. Under the logarithm model, CO2 can absorb more than 100% of its absorption bands, and the RF can grow to infinity.
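The contrast drawn above, a logarithm that grows without bound versus Beer-Lambert absorption that saturates, can be seen in a few lines. The 5.35 W/m² coefficient is the Myhre et al. (1998) fit used by IPCC; the absorption coefficient k is purely illustrative:

```python
import math

def forcing_log(c_ppm, c0_ppm=280.0, alpha=5.35):
    """IPCC-style logarithmic forcing in W/m^2
    (alpha = 5.35 from Myhre et al. 1998). Unbounded as c grows."""
    return alpha * math.log(c_ppm / c0_ppm)

def absorbed_fraction(c_ppm, k=0.004):
    """Beer-Lambert absorption 1 - exp(-k*c): bounded by 1, i.e. it
    saturates. k is an illustrative effective coefficient, not measured."""
    return 1.0 - math.exp(-k * c_ppm)

# The logarithm keeps growing; the absorbed fraction never exceeds 1.
for c in (280.0, 560.0, 5600.0, 56000.0):
    print(c, forcing_log(c), absorbed_fraction(c))
```

Whether k here bears any relation to the real CO2 bands is beside the point; the sketch only shows the qualitative difference between an unbounded and a saturating functional form.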
Even with these corrections to the IPCC model, we still have a tempest in a teapot. That is because climate is regulated by the slow negative feedback of cloud albedo when Earth is in a warm state ("interglacials", AR4, Glossary, p. 948). IPCC's climate sensitivity could be a tenth of its "very likely value of about 3ºC" with an immeasurably small change in cloud albedo, or it might lie in the range of 0.4ºC to 0.5ºC, according to Lindzen. See RSJ, “The Acquittal of CO2”, response to John, Channel Isles, 11/4/09. Citations for the fact that Earth’s climate is regulated by cloud albedo on request.

Jeff Glassman
July 6, 2010 3:03 pm

Re Phil, 7/6/10 at 8:34 am:
Here comes Phil again, rubbing his eyes. Tsk!
He takes the opening 30 words out of context, even to the extent of ending with a comma, to assert that he found an ad hominem. If those 30 words had ended where his citation ended, they would indeed have been an ad hominem. But the text he dropped goes on to support that conclusion as reasonable. In context, it is not an ad hominem at all.
Phil attempts to make a clarification by claiming that the equations he pointlessly inserted — just to show off and to interrupt the dialog — were conventional expressions of chemical equilibrium. The criticism I leveled against Phil, which he attempts to address, is that his following observation was silly:
>>Those aren’t stoichiometric equations they are chemical equilibria (which are also a thermodynamic equilibria).
Even the elementary text provided for his education didn’t use the equilibrium symbol he now explains. He concludes from this slightly interesting explanation of the obvious,
>> Apparently you did not study chemistry, for your edification the symbol …
What possibly could be the connection between my education and a chemistry symbol that I neither used nor discussed?
He adds,
>>They should not be taken as a stoichiometric reaction since the reactions often differ (e.g. multiple steps, different order etc), the reaction kinetics describes the actual steps involved to reach the ultimate equilibrium state and the rate by which the equilibrium is reached.
Phil doesn’t argue that Price’s definition laid out for him is wrong. Nor does he provide an alternative definition. He just plows ahead, adding requirements into Price’s definition to prove he was right in the first place. This is butchery of logic. It is irrational and offensive inserted into an otherwise somewhat respectful and possibly constructive dialog.
Phil considers the following to be an ad hominem:
>>>>He appears to have studied chemistry, at least through the lectures on equilibria,
Ad hominem? It’s a compliment! Readers reading Phil’s crackly “Those aren’t stoichiometric equations” might think he had no chemistry background at all. I discerned in his clumsy writing that he did.
Phil writes,
>>In this regard the kinetics is vital, a system can be in disequilibrium but if the reactions leading to equilibrium are slow then the equilibrium state may never be reached.
Who cares? Who needs the system to be in any kind of equilibrium? The ocean is not in any kind of useful equilibrium as far as the carbonate system is concerned, so why search for an equilibrium model? Moreover, the reactions span the extremely fast to the extremely slow, relative to climate scales, so what are we to make of his observation about slow reactions?
Phil brought up “Le Chatelier’s principle AND reaction kinetics do just fine” in guiding us in disequilibrium, now emphasizing for an excuse that he used the conjunction. Of course, he’s right. He could have written, “The Big Bang Theory and reaction kinetics do just fine”, or “Evolution and reaction kinetics do just fine”. He could have written “x and reaction kinetics do just fine”, where x is a dummy variable. (See Variables for Dummies, eh?) His insertion of “Le Chatelier’s principle” is for no purpose whatsoever. He never relies on it. This is name dropping to make himself look smart.
Phil shoots back,
>>You appear to not even possess a high school knowledge of chemical kinetics and equilibrium which are a basic requirement for the understanding of the interaction of CO2 and seawater.
OK, fine. Split your infinitives! Though I caution him, appearances can be deceiving. So he grades my writing, in part calling it “rubbish”, but where’s his answer sheet?
Even after introducing Le Chatelier’s principle and (emphasis, and) reaction kinetics, he in no way demonstrates how these or any other bits of science might allow anyone to determine the concentration of CO2(aq) in the surface ocean, or how it changes on the insertion of ACO2 into the atmosphere. The system changes from a highly agitated, disequilibrium state to another highly agitated, disequilibrium state, never bumping into Le Chatelier’s principle, or being influenced by it. Phil needs to come forward with his solution to the air-sea carbonate state using reaction kinetics, or any other tool at his disposal.
What really chafes about Phil’s writings is not his insults and arrogance, but his complete failure, in the bargain, to contribute anything redeeming.
I stand by my conclusion that “No theory guides us in disequilibrium”.
Phil says,
>>Your attempts to demean as a bolster to your argument is offensive, …
Phil spikes his writing with vulgarity and invective, jarring illogic, disrespect, and incomplete and irrelevant material. When called to task, with specifics, he takes offense.
Back in the days when the military taught Morse code to its radio operators, trainees practiced under a sign that contained a philosophy for life: Don’t Send Faster Than You Can Receive.
The lesson for Phil is don’t dish it out if he can’t take it.

Phil.
July 6, 2010 4:19 pm

Jeff Glassman says:
July 6, 2010 at 3:03 pm
Re Phil, 7/6/10 at 8:34 am:
Here comes Phil again, rubbing his eyes. Tsk!

Continuing the ad hominem in the same vein as you started!
……..
Phil spikes his writing with vulgarity and invective, jarring illogic, disrespect,
What’s sauce for the goose is sauce for the gander, respect has to be earned and you failed to earn any in your first response to me. A dialog with you is clearly a waste of time since you don’t understand the material but have such arrogance that you feel you know better than experts in the field. I’m afraid I don’t have Ferdinand’s patience so you can continue rambling away and demonstrating your ignorance.

Ferdinand Engelbeen
July 6, 2010 4:49 pm

Jeff Glassman says:
July 6, 2010 at 9:22 am
I provided you a textbook definition of thermodynamic equilibrium, using Zemansky. It excludes a state in which matter or heat is being exchanged internally or with the external world. That pretty much excludes whatever you mean by “dynamic equilibrium”.
First, it seems that the current definition of thermodynamic equilibrium is that it occurs only when everything is in mechanical, chemical, and thermal equilibrium.
See Wiki:
http://en.wikipedia.org/wiki/Thermodynamic_equilibrium
and many other sources, including:
http://www.wisegeek.com/what-is-thermodynamic-equilibrium.htm
For me, thermodynamic equilibrium was the same as the thermal equilibrium in the above definition. I am not the only one to confuse the two; even NASA (!) does:
http://www.grc.nasa.gov/WWW/K-12/airplane/thermo0.html
But nevertheless, for thermal equilibrium the equilibrium is always dynamic, except at absolute zero in the Wiki definition. Thus the definition by Zemansky only holds for a non-existent condition, as heat is always exchanged between objects, even if only by radiation. It seems to me that Zemansky is somewhat behind reality.
For chemical equilibria, some reactions are reversible and thus go into dynamic equilibrium; others are irreversible and end only in (stoichiometric) end products. Only in that case is there no further exchange of matter. Again, Zemansky’s definition only holds for a subset.
Diffusion is seen as a specific part of chemical equilibria, and gas dissolution is a chemical equilibrium too, in almost all cases a dynamic one, which is reached when as many molecules enter the liquid as leave it. The only exception, again, is when an irreversible reaction with the liquid occurs.
Thus a very large group of all equilibria is dynamic in nature: all thermal equilibria and many chemical equilibria, including nearly all dissolution equilibria (the latter according to Henry’s Law).

In the end it is the real partial pressure of free CO2 in the last few cm of water that decides which way CO2 will go, into or out of the water, depending on whether it is higher or lower than the pCO2 of the atmosphere.

That sounds like a contradiction to me.
Henry’s Law dictates where the equilibrium is between CO2(g) and CO2(aq).
At equilibrium, pCO2(g) and pCO2(aq) are equal.
I suppose you agree that if CO2(g) increases, there would be a difference between CO2(g_new) and CO2(aq_old), that can be expressed as dpCO2. That will push more CO2 into the water until both are again in equilibrium.
Where you seem to have difficulty is in accepting that the same can occur on the water side: if for any reason (besides temperature and salinity) CO2(aq) changes, that influences pCO2(aq), and a net flux will occur until CO2(g) and CO2(aq) are in equilibrium again (thus pCO2(aq) = pCO2(g)), according to Henry’s Law.
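The two-sided Henry’s Law argument above can be sketched in a few lines of Python. This is a toy illustration only: the pCO2 values and the relaxation constant k are made-up placeholders, not measured quantities or any published flux model.

```python
# Toy sketch of Henry's Law relaxation toward equilibrium: the sign of
# (pCO2_air - pCO2_water) sets the net flux direction, regardless of
# which side of the interface was perturbed.
# k is an arbitrary illustrative rate constant, not a measured value.

def net_flux_direction(pco2_air, pco2_water):
    """Return 'into water', 'out of water', or 'equilibrium'."""
    d = pco2_air - pco2_water
    if d > 0:
        return "into water"
    if d < 0:
        return "out of water"
    return "equilibrium"

def relax(pco2_air, pco2_water, k=0.2, steps=20):
    """Crude iteration: each step moves the water-side pCO2 a
    fraction k of the remaining difference toward the air side."""
    for _ in range(steps):
        pco2_water += k * (pco2_air - pco2_water)
    return pco2_water

# Perturb the air side: the net flux goes into the water ...
print(net_flux_direction(390.0, 380.0))   # into water
# ... perturb the water side instead: the flux reverses.
print(net_flux_direction(380.0, 390.0))   # out of water
# Either way the water side relaxes toward the air-side value.
print(round(relax(390.0, 380.0), 2))
```

The point of the sketch is symmetry: nothing in the relation cares which side changed first, only the sign of the pCO2 difference.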
Your response is irrelevant. The statement you call nonsense is about IPCC’s use of calibration AFTER the lab work, meaning AFTER the raw measurements. You return to endorsing the lab work. Besides, IPCC doesn’t even publish raw data, at least as far as CO2 concentrations are concerned.
The IPCC doesn’t publish its own work on CO2; it doesn’t measure, calibrate, or intercalibrate a single CO2 measurement. The figures published by the IPCC are from NOAA, which is the leading organisation for the measurement, calibration, and intercalibration of the CO2 figures used by the IPCC. Besides NOAA, Scripps still uses its own calibrations of the calibration gases (which it receives from NOAA) and takes its own CO2 flask samples at different places (including MLO), as do several other organisations all over the world. All these data (unfiltered and filtered, raw and averaged, smoothed and unsmoothed) are available on line, or if the load is too heavy, available on simple request. There is a hell of a difference in openness between the CO2 people and e.g. the temperature or paleo people. No Jones and Mann here.
What you referred to in your accusation is the “correction procedure” used when too many data are missing. Current datasheets e.g. contain two columns for yearly averages: the arithmetic mean (“annual”) and the adjacent column (“annual-fit”), which contains values derived from a curve. Flask data don’t contain the second column (too few data). Here is the note for Mauna Loa from Keeling Sr. from an older file (2004):
The “annual” average is the arithmetic mean of the twelve monthly values. In years with one or two missing monthly values, annual values were calculated by substituting a fit value (4-harmonics with gain factor and spline) for that month and then averaging the twelve monthly values.
The two columns differ from each other in the second decimal only…
Thus the “linear gain factor” has absolutely nothing to do with calibration, the IPCC or after-the-fact manipulation.
Further, the available raw hourly averages need a few seconds of download time, fit easily in Excel, and need some weeding-out of unavailable data and a translation of month-day-hour into day-of-year to make a curve. Add to that the adjusted, manipulated, horribly distorted monthly averages from another downloaded file, compare the averages and trends, and there it is: no difference beyond a few tenths of a ppmv. Ready within half an hour, far less time than it takes to react to my writings…
First, “total CO2 in solution” means total CO2(aq).
Sorry, but that is quite confusing, as most of us use “total” CO2 as DIC.
You fix the ratio of DIC:DOC:POC,
You use the fixed ratios determined for thermodynamic equilibrium for an ocean never in thermodynamic equilibrium. You use the equations for thermodynamic equilibrium for the state of dynamic equilibrium, which you seem unable to define, for equations that are inapplicable
Nobody “fixes” these ratios. To the contrary: it is the change in the ratios which is of interest and which causes the Revelle factor, the real uptake or release of total carbon (to distinguish it from your total CO2) by ocean water. What you are saying is that we can’t use the Bjerrum plot (or the calculations behind it) because the ocean is never in equilibrium. That is equivalent to saying that one can’t take the sea surface temperature because the ocean temperature is never in equilibrium. Of course we can measure and/or calculate any or all of these, including the local Revelle factor, pCO2 (~CO2(aq)), bicarbonate and carbonate, DIC, and pH, for every point of the oceans. Deriving some overall parameters (averages, fluxes) from all these individual data is of a different order and of different certainty.
You create an equilibrium bottleneck to defeat Henry’s Law. Because the ratios are fixed in your model, you could equally well express Henry’s Law for any one of the forms of CO2, the total CO2 in all forms, or any subtotal.
What made you think that these are fixed ratios? Nobody fixes the ratios; to the contrary. What the examples show is the ratios for one specific temperature, (pressure,) pH, DIC, and salt content. Change one of these, like pH, and all ratios change, including the amount of free (total) CO2(aq). That can be read in all the textbooks, including what the IPCC says. That is what the Bjerrum plot shows: the result of any change (or non-change). It is not only applicable at equilibrium or for fixed ratios.
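The speciation behind the Bjerrum plot both sides keep invoking can be computed directly from the two apparent dissociation constants; nothing in it is “fixed” except the constants themselves, and every fraction moves when the pH moves. A minimal sketch, assuming round seawater-like values pK1 ≈ 5.9 and pK2 ≈ 9.0 (illustrative only, not from any specific dataset in this thread):

```python
# Bjerrum speciation: fractions of CO2(aq), HCO3-, and CO3 2- in DIC
# as a function of pH, from the two apparent dissociation constants.
# pK1 ~ 5.9 and pK2 ~ 9.0 are rough seawater values for illustration.
K1, K2 = 10.0 ** -5.9, 10.0 ** -9.0

def speciation(ph):
    """Return (CO2(aq), HCO3-, CO3 2-) fractions of DIC at a given pH."""
    h = 10.0 ** -ph
    denom = h * h + K1 * h + K1 * K2
    return (h * h / denom, K1 * h / denom, K1 * K2 / denom)

a0, a1, a2 = speciation(8.1)          # a typical surface-ocean pH
print(f"CO2(aq): {a0:.1%}, HCO3-: {a1:.1%}, CO3 2-: {a2:.1%}")
# Lower the pH (i.e. add CO2) and every ratio shifts, as argued above:
b0, b1, b2 = speciation(7.9)
print(f"CO2(aq) fraction rises from {a0:.2%} to {b0:.2%}")
```

With these round constants the pH 8.1 split comes out near the familiar ~0.5% CO2(aq), ~88% bicarbonate, ~11% carbonate; the exact numbers depend on the constants chosen, which vary with temperature and salinity.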
Henry’s Law runs according to established theory to load the surface layer of the ocean with an unrestricted amount of CO2(aq), up to the limits for its temperature.
Did we say something else? But what you forget is that a 100% increase of CO2(g) indeed increases CO2(aq) by 100%, but that increases total carbon, DIC, by no more than 10%. You see, the ratios change if you change one of the ingredients…
Thus while the atmospheric CO2 content increased by 30% to 800 GtC over the past 150 years or so, the ocean surface layer total carbon increased by only 3%, from ~1000 to ~1030 GtC, in the same period. Such is the buffering action of the ocean surface…
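The buffer arithmetic above condenses to one line: the Revelle factor relates relative changes, dDIC/DIC = (dpCO2/pCO2)/RF. A sketch, taking RF = 10 as the round number implied by the 100%-to-10% figures in the comment, not a measured local value:

```python
# Revelle-factor arithmetic from the comment above:
# relative DIC change = relative pCO2 change / RF.
# RF = 10 is the round illustrative value implied by the text.

def relative_dic_change(relative_pco2_change, revelle_factor=10.0):
    """Fractional DIC change for a given fractional pCO2 change."""
    return relative_pco2_change / revelle_factor

print(relative_dic_change(1.00))  # 100% more CO2(g) -> ~10% more DIC
print(relative_dic_change(0.30))  # ~30% rise -> ~3%, i.e. ~1000 -> ~1030 GtC
```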

July 8, 2010 8:57 am

Re Ferdinand Engelbeen, 7/6/10 at 9:22 am
Thank you for the encouraging response.
Your interpretation of the “current definition of thermodynamic equilibrium” is correct. However, I would be most reluctant to admit an alternative definition, especially just to accommodate the climatology consensus, meaning the AGW dogma.
Sometimes science or math makes interesting strides in a new direction when an axiom is revised. A new geometry arose from letting parallel lines meet, and relativity arose by abandoning the Newtonian axiom that a universal, unidirectional clock exists. This is true for axioms, but is it ever true for laws? Perhaps the laws of thermodynamics are ripe for a revision on a cosmic scale. Who knows where the crazy cosmologists will go next as their standard theory becomes an ever more bizarre crazy-quilt of patches. The Laws of Thermodynamics are not suitable for revision on a microscopic scale because thermodynamics by definition is a macro or bulk science.
I note that your Wikipedia reference says,
>>Classical thermodynamics deals with dynamic equilibrium states.
I eagerly clicked on “Dynamic equilibrium” hoping to find the missing definition. Instead, I found this:
>>A dynamic equilibrium exists when a reversible reaction ceases to change its ratio of reactants/products, but substances move between the chemicals at an equal rate, meaning there is no net change. It is a particular example of a system in a steady state.
That is not a definition. It is a particular example stating where and when dynamic equilibrium occurs in chemical processes. Is dynamic equilibrium applicable only to chemical processes?
I clicked on the link to steady state, hoping to find clarification. Wikipedia provides that steady state occurs when the partial derivative of any property of the system with respect to time is zero. That doesn’t help. It’s a poor definition of steady state because of a problem that arises in the very example above: dynamic equilibrium only requires that the flux in equal the flux out, not that they both be zero.
The problem is that one needs to define steady state with respect to a system, defined by a fixed set of parameters. The partial derivative of those parameters must be zero. So, the system needs to be defined at the outset in terms of net flux.
Zemansky solves this problem. The definition I supplied earlier defines thermodynamic equilibrium in terms of changes to a system, in particular of (unbalanced) forces, structure, or coordinates. This leads to another definition from Zemansky, p. 4:
>>Macroscopic quantities having a bearing on the internal state of a system are called thermodynamic coordinates.
So if we were to define our air-sea system with care, we’d be sure to use the net flux across the boundary. Wikipedia’s example would come closer to being a definition cast in terms of a system and its coordinates. Then the dynamic equilibrium would be equivalent to thermodynamic equilibrium except for one additional problem: Wikipedia requires the process by which dynamic equilibrium is achieved to be reversible. As we have discussed above, reversible processes never occur in nature. That’s a violation of the Second Law. Furthermore, reversibility is a restriction not required for thermodynamic equilibrium.
To the extent that the Wikipedia entry reflects the definition of dynamic equilibrium, it is more restrictive than thermodynamic equilibrium. When Zeebe & Wolf-Gladrow required thermodynamic equilibrium for the solution to the carbonate system, that was sufficient.
Re calibration:
You say,
>>Here the note for Mauna Loa from Keeling Sr. from an older file (2004):
>>>>The “annual” average is the arithmetic mean of the twelve monthly values. In years with one or two missing monthly values, annual values were calculated by substituting a fit value (4-harmonics with gain factor and spline) for that month and then averaging the twelve monthly values.
>>The two columns differ from each other in the second decimal only…
>>Thus the “linear gain factor” has absolutely nothing to do with calibration, the IPCC or after-the-fact manipulation.
We already discussed this. I gave you citations for the fact that IPCC authors did not use the linear gain factor at MLO, and your citation only confirms that point. On the other hand, they used a linear gain factor for Barrow, Alaska (BRW) and one (which we must assume is different) for the South Pole (SPO). Then IPCC plotted the MLO and the SPO data on top of one another, and they matched. Surprise! Then they did the same thing for MLO and BRW, and they again matched. Wonder of wonders! Thus CO2 appeared to be well-mixed, and hence MLO concentrations were global. Do this on a government contract in the US and you might go to jail. IPCC calls it calibration.
Re fixed ratios:
The fixed ratios I gave were merely numerical examples used by advocates from IPCC Reports. For DIC, those ratios correspond to the Bjerrum plot at some pH, the abscissa. The Bjerrum plot defines fixed ratios by the relationships between simultaneous concentration curves dependent on pH. The plot by implication fixes those ratios. The fixed ratios are functions of pH.
You may not rely on the Bjerrum plot, the solution to the stoichiometric equations for the carbonate system in the surface layer, until you establish that thermodynamic equilibrium exists in the surface layer. Of course, it does not. For the same reason, you may not rely on dynamic equilibrium, because, as inferred from your source, it is even more restrictive than thermodynamic equilibrium.
The idea that you may not rely on a law, theory, or model in general without satisfying its boundary conditions is so fundamental that it is not even mentioned as a scientific postulate. Also, the fact that a modeled relation can be measured in no way validates the model. For example, the Revelle factor is defined as a ratio. Call it r. So RF = r, and r is assumed measurable all over the ocean. But so is RF' = r + r^2, or RF'' = r^(1/2).
The Bjerrum plot is used by IPCC believers as a state diagram. As you have urged and implied, added CO2 obeys the equations, shifting the plot left to a more acidic state. We could plot the system state on the Bjerrum plot before and after the added CO2.
Unfortunately, the CO2(aq) graph on the Bjerrum plot of the three components of DIC has no meaning in disequilibrium. The CO2(aq) will be different. It might be shifted up by a constant. In disequilibrium, the intersections with the ion graphs become meaningless. This shifting is necessary to accommodate Henry’s Law for added atmospheric CO2, meaning added pCO2.
You say,
>>Thus while the atmospheric CO2 content increased with 30% to 800 GtC over the past 150 years or so, the ocean surface layer total carbon increased only with 3% from ~1000 to ~1030 GtC in the same period. So far the buffer working of the ocean surfaces.
During that 150-year period, SST increased, causing the ocean to release CO2(g) according to the solubility curve. Added pCO2 causes uptake to increase proportionally, but outgassing to decrease in inverse proportion. This requires a mass balance analysis, which is missing from IPCC reports. With reasonable estimates for temperature increase over the past 150 years, and using IPCC’s outgassing of 90 GtC/yr and its uptake of 92 GtC/yr, your 30% CO2 increase may overspecify the problem and leave it with no solution. IPCC assumes this problem away by (a) not revealing its mass balance analysis, (b) assuming nCO2 to be in balance, and then (c) treating ACO2 on the margin. The problem is nonlinear, invalidating IPCC’s radiative forcing paradigm; i.e., the response to ACO2 is not additive with the response to nCO2.
I suspect your 3% number is an estimate, being merely one tenth of 30%, where the number of one tenth comes from thermodynamic equilibrium equations, a state that never exists in the surface ocean. This is my guess because the buffer, at least as defined by IPCC, depends on thermodynamic equilibrium for its existence when thermodynamic equilibrium is not present.
If you could provide a reference for the calculation I might confirm whether my guess is right.

Ferdinand Engelbeen
July 8, 2010 5:01 pm

Jeff Glassman says:
July 8, 2010 at 8:57 am
Wikipedia requires the process by which dynamic equilibrium is achieved to be reversible. As we have discussed above, reversible processes never occur in nature. That’s a violation of the Second Law.
According to Wiki ( http://en.wikipedia.org/wiki/Second_law_of_thermodynamics ), the second law of thermodynamics only shows the degradation of (heat) processes to minimum values (maximum entropy). In that sense, no natural processes are reversible without heat added from external sources. But as we have an external source of heat on earth, many natural processes are in dynamic equilibrium. Including temperature, where incoming and outgoing energy are more or less equal. Or CO2 in the atmosphere vs. the ocean surface. Or CO2/bi/carbonate levels in the oceans. The second law applies in the first instance to isolated systems which are not in equilibrium… Here on earth no system is isolated from the rest of the earth, or the universe.
When Zeebe & Wolf-Gladrow required thermodynamic equilibrium for the solution to the carbonate system, that was sufficient.
Thermodynamic equilibrium doesn’t necessarily mean a dynamic equilibrium, but doesn’t exclude it either.
The carbonate system is a typical example of a system in dynamic equilibrium. As the reaction in any direction is very fast (within one second), any change in concentration or conditions (like temperature) leads near momentary to a new dynamic equilibrium at another level.
Thus thermodynamic equilibrium is near continuously fulfilled for this system.
——–
We already discussed this. I gave you citations for the fact that IPCC authors did not use the linear gain factor at MLO, and your citation only confirms that point. On the other hand, they used a linear gain factor for Barrow, Alaska (BRW) and one (which we must assume is different) for the South Pole (SPO).
You are so determined to show that CO2 is not well mixed that you accuse everybody on this earth who has anything to do with CO2 measurements of manipulating the data. Sorry, but that is two bridges too far. As said in my previous message, the raw (hourly averaged) data are available for MLO, BRW, and SPO, thus from (near) the North Pole to the South Pole. Do the calculations yourself and compare them to the “manipulated” monthly averages. It’s only half an hour’s work.
Further, the note on the most recent file of monthly averages from the South Pole is exactly the same as for Mauna Loa:
The monthly values have been adjusted to the 15th of each month. Missing values are denoted by -99.99. The “annual” average is the arithmetic mean of the twelve monthly values.
In years with one or two missing monthly values, annual values were calculated by substituting a fit value (4-harmonics with gain factor and spline) for that month and then averaging the twelve monthly values.

See: ftp://cdiac.ornl.gov/pub/trends/co2/sposio.20jan2009.co2
On the same website ( ftp://cdiac.ornl.gov/pub/trends/co2/ ) one can find the origin of your confusion:
ftp://cdiac.ornl.gov/pub/trends/co2/sposio.co2
These are flask data, which are taken in triplicate every two weeks. This makes for a far more irregular plot, which is why they use a fitting curve to present the data. For continuous measurements, this procedure is not used, except when there is a lack of data in at most two months (otherwise the yearly average is marked as “missing”).
Thus there is not the slightest shred of evidence left that the IPCC or any of its authors has biased the data from any station.
————
You may not rely on the Bjerrum plot, the solution to the stoichiometric equations for the carbonate system in the surface layer, until you establish that thermodynamic equilibrium exists in the surface layer.
It is established that a thermodynamic equilibrium exists for any point of the oceans. But as no point in the oceans is isolated from all the others, the equilibrium may change over time, depending on heat, gas, and liquid flows, evaporation,… Even Wattenberg knew that 80 years ago and used the calculations now behind the Bjerrum plot.
For example, the Revelle factor is defined as a ratio. Call it r. So RF = r, and r is assumed measurable all over the ocean. But so is RF’ = r +r^2, or = r^(1/2).
Not the best example: the Revelle factor is a ratio of measurable concentrations. Not measurable itself.
Unfortunately, the CO2(aq) graph on the Bjerrum plot of the three components of DIC has no meaning in disequilibrium. The CO2(aq) will be different. It might be shifted up by a constant. In disequilibrium, the intersections with the ion graphs become meaningless. This shifting is necessary to accommodate Henry’s Law for added atmospheric CO2, meaning added pCO2.
Because the three components are always near-instantly in (dynamic and thermodynamic) equilibrium, CO2(aq) may indeed shift up and down compared to the atmospheric CO2(g), as that is a (dynamic) equilibrium which is not instantaneous and needs quite a lot of time. That can be measured as pCO2(aq) (as defined… in equilibrium) and compared to pCO2(g) above the same surface… If pCO2(aq) is higher than pCO2(g), then the (net) flux will go out of the water, to obey Henry’s Law.
——–
During that 150 year period, SST increased, causing the ocean to release CO2(g) according to the solubility curve.
Which shows an effect of about 8 ppmv per 1 °C increase in temperature over the past 420,000 years, including the previous interglacial (the Eemian), which was about 2 °C warmer than today. If we may assume an increase of at most 1 °C since the LIA, the maximum increase of CO2 due to the solubility curve is 8 ppmv in the atmosphere. That’s all. The rest of the 100+ ppmv increase is quite certainly caused by human emissions.
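The back-of-envelope arithmetic here is simple enough to write down. A sketch using only the figures stated in the comment (the ~8 ppmv/°C slope, the ≤1 °C warming bound, and the 100+ ppmv observed rise; nothing else is assumed):

```python
# Back-of-envelope from the comment above: how much of the observed
# CO2 rise can a solubility response to warming explain?
SOLUBILITY_SLOPE = 8.0    # ppmv per deg C, the comment's figure
warming_since_lia = 1.0   # deg C, the stated upper bound
observed_rise = 100.0     # ppmv, lower bound of the "100+ ppmv" rise

temperature_part = SOLUBILITY_SLOPE * warming_since_lia
residual = observed_rise - temperature_part
print(f"warming explains at most ~{temperature_part:.0f} ppmv, "
      f"leaving ~{residual:.0f}+ ppmv to other causes")
```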
IPCC assumes this problem away by (a) not revealing its mass balance analysis
What kind of mass balance do you need? The simplest one is this:
Humans add 8 GtC/year of CO2 from fossil fuel burning to the atmosphere. The measured increase of CO2 in the atmosphere over recent years is about 4 GtC/year. With unknown total inflows and outflows per year by nature, the final mass balance after a year is:
x + 8 GtC = y + 4 GtC
where x is the total amount of CO2 (as GtC) released by nature (oceans, vegetation decay, rock weathering,…) and y the total amount of CO2 taken away by nature.
Thus
x – y = -4 GtC
Whatever the real value of x exchanged over a year (15 GtC, 150 GtC or 1500 GtC), y (the total natural carbon sink) is always 4 GtC larger than x (the total natural carbon release). Thus nature is a net sink for CO2 and adds nothing, zero, nada net CO2 as mass to the atmosphere. It is as simple as that.
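The mass balance above can be put in code to make the independence from the gross flows explicit. A minimal sketch using only the comment’s own figures (8 GtC/yr emitted, 4 GtC/yr observed rise):

```python
# Mass balance from the comment above: whatever the size of the gross
# natural release x, the observed numbers force the gross natural sink
# y to satisfy x + emissions = y + rise, so x - y is fixed.
human_emissions = 8.0   # GtC/yr from fossil fuels (comment's figure)
observed_rise = 4.0     # GtC/yr measured atmospheric increase

def natural_net_flux(x):
    """Net natural contribution x - y implied by the balance
    x + human_emissions = y + observed_rise."""
    y = x + human_emissions - observed_rise
    return x - y

# Independent of the gross exchange, the net is always -4 GtC/yr:
for x in (15.0, 150.0, 1500.0):
    print(natural_net_flux(x))   # -4.0 each time
```

The design point is that x cancels: the net natural flux depends only on the two measured numbers, which is exactly the argument being made.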
the response to ACO2 is not additive with the response to nCO2.
The response to aCO2 is completely in line with the addition. nCO2 plays no important role in the increase; it is only significant in the variability around the increase, as a fast but limited (about 4 ppmv/°C) response to temperature. See:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_co2_1900_2004.jpg
and
http://www.ferdinand-engelbeen.be/klimaat/klim_img/dco2_em.jpg
I suspect your 3% number is an estimate, being merely one tenth of 30%, where the number of one tenth comes from thermodynamic equilibrium equations, a state that never exists in the surface ocean.
Compare the increase of CO2 in the atmosphere with the increase of DIC in the ocean mixed layer at Bermuda:
http://www.bios.edu/Labs/co2lab/research/IntDecVar_OCC.html
pCO2(g) increased by about 30 ppmv in the period 1984–2004, or about 10%.
pCO2(aq) in the same period increased by about 25 ppmv, or about 7.5%.
nDIC in the same period increased by 0.8%.
Remarkable that they could manage to obtain so many observed data for something that is not measurable.
pCO2(aq) in average is lower than pCO2(g), with exception of mid-summer. Thus the Bermuda ocean water is a net sink for CO2.

July 10, 2010 4:37 pm

Re Ferdinand Engelbeen, 7/8/10 at 5:01 Pm
1. Re Species of Equilibrium
You discovered in Wikipedia, a notoriously risky source, what I already gave you from Zemansky, a supreme source. Then you say,
>>But as on earth we have an external source of heat, many natural processes are in dynamic equilibrium.
I can’t let you get away with a wishy-washy definition of dynamic equilibrium. Because it’s a term you, and not I, insist on using, you have the burden of providing an authoritative definition. The definition I inferred from your Wikipedia citation is the best that has been supplied on this thread. A reasonable inference from that article is that dynamic equilibrium implies thermodynamic equilibrium, but not the reverse.
>>Here on earth no system is isolated from the rest of the earth, or the universe.
We model earthly systems all the time without concern for such academic, cosmic worries. What you have stated is true, but so extreme as to be irrelevant.
That argument is not the reason for saying that the surface layer of the ocean is not in thermodynamic equilibrium. It is not in mechanical equilibrium because it is stirred by winds, currents, biology and geology. It is not in thermal equilibrium because it is heated by the Sun and cooled by radiation to space, and heat is exchanged by conduction and convection to the deeper ocean. It is not in chemical equilibrium because of biology and the carbonate process, as a minimum. All three equilibria, however, must be satisfied to be in thermodynamic equilibrium.
The reason the surface layer is not in equilibrium is not a cosmic argument, but a local, observable argument, enmeshed in the more complex climate models. The notion of surface layer equilibrium is, to be kind, unreasonable.
You wrote:
>>Thermodynamic equilibrium doesn’t necessarily mean a dynamic equilibrium, but doesn’t exclude it either.

Thermodynamic equilibrium is a state, independent of the processes by which it was attained. According to the definition on the table, dynamic equilibrium is a state with not just active processes, but with idealized, reversible processes.
Thermodynamic equilibrium is an idealized condition, too, involving rarely or never observable macroparameters, like entropy and enthalpy, and much more. Arguably, thermodynamic equilibrium exists nowhere on Earth. Dynamic equilibrium, though, is a more severe constraint, involving reversible processes, which, if they existed, would violate the Second Law. Thermodynamic equilibrium is hypothetical, but not a violation of the laws of the discipline. In summary, dynamic equilibrium, as interpreted from the Wikipedia article, is a thermodynamic equilibrium in which the Second Law is violated.
Later you say,
>>It is established that a thermodynamic equilibrium exists for any point of the oceans.
This is preposterous, as shown by the discussion above. For the AGW story to have any validity, ACO2 must accumulate in the atmosphere. The model that makes that happen is the Revelle buffer factor. That factor is derived from the stoichiometric equilibrium equations, for which the solution is the Bjerrum plot. That solution applies only in thermodynamic equilibrium.
The whole model hinges on the existence of thermodynamic equilibrium. You may not simply proclaim that thermodynamic equilibrium exists.
Perhaps you are assuming, maybe without realizing it, that the Bjerrum plot is valid and applicable, and therefore that the surface ocean must be in equilibrium. You must start from the other end, and show the impossible: that the surface layer is in thermodynamic equilibrium.
You claim, unnecessarily as it turns out, that the surface ocean is in dynamic equilibrium. For this to be true, the CO2 flux from air to sea would have to balance the flux from sea to air. However, you also rely on the fact that two fluxes are not equal in your argument justifying the Takahashi analysis. That, you claim, is what causes CO2 to be absorbed. You can’t have it both ways. You may not rely on a net flux of zero for the Bjerrum solution to hold, and assume a positive flux for the ocean in absorbing CO2, or vice versa for outgassing.
2. Re the Well Mixed assumption
You say,
>>You are so determined to show that CO2 is not well mixed, that you accuse everybody on this earth who has anything to do with CO2 measurements of manipulating the data.
Your accusation is unwarranted. IPCC manipulated the data, and I provided you the evidence. You refuse to respond to that evidence, except to revert to the claim that the laboratory work was all honest and competent. You ask me to study the laboratory work, saying it will only take a half hour. But time is not the issue. The problem is that all that could be demonstrated from such a study is whether IPCC’s fraud penetrated to the laboratory level. AGW exists only because of IPCC, and that makes demonstrating the crime involved dependent on what IPCC provided as the basis for its claim. The fraud is contained in the IPCC Assessment Reports, which do not include either laboratory data, or, most pointedly, calibration data.
You quote the May 2005 version of the monthly data explanation for SPO, and claim it is “exactly the same as for Mauna Loa”. The URL in your reference inexplicably carries a 2009 date. It includes a “gain factor” for SPO, called a “linear gain factor” in the later 2008 report, and which you strangely cite as the “origin of my confusion”. I didn’t find an old MLO report to go with your 2005 data, but the latest MLO report does not include a gain factor anywhere, not even at the point where it is included for other stations.
You need to show, and have failed to do so, that the MLO data not only includes a “gain factor”, but that it is identical to the SPO gain factor. You also need to show that your data were the latest in effect at the time of the Fourth Assessment Report.
Further, I am not “determined to show that CO2 is not well mixed”. I am convinced from good evidence, now including satellite imagery, from superior modeling, from the lack of a criterion for well-mixed, and from IPCC’s fraudulent treatment of data that the claim of CO2 being well-mixed in the atmosphere is without merit. At the same time, I am aware of the great importance of that well-mixed assumption to the AGW model, and hence the motivation for the abuses of science practiced by the Panel and its authors.
You say,
>>Thus there is not the slightest shred of evidence left that the IPCC or any of its authors has biased the data from any station.
First, real world data records are neither as well-behaved internally as are the MLO, Baring Head, BRW, and SPO records in IPCC reports, nor do they overlay one another with such precision as IPCC shows. IPCC graphs of these records are naïve, simplistic, unrealistic, and highly suspicious. I would remind you also that uncorrelated data subjected to mathematical smoothing can become correlated.
Second, IPCC admits making inter-network and intra-network calibrations. This is tantamount to a confession.
Third, IPCC does not report its calibration values, suggesting an intent to deceive.
3. Re Immediate reaction assumption
IPCC says,
>>Carbon dioxide entering the surface ocean IMMEDIATELY reacts with water to form bicarbonate (HCO3–) and carbonate (CO3^2–) ions. CAPS added, AR4, ¶7.3.1.1, p. 514.
Accordingly, you say,
>>Because the three components are always NEAR INSTANTLY in (dynamic and thermodynamic) equilibrium… . CAPS added.
No evidence exists to support this conclusion. It is a presumably necessary assumption to make the Bjerrum solution valid so that the ocean will create a buffer against solubility, and so that added CO2 will cause another crisis by acidifying the ocean. The motivation behind this naked assumption is clear and objectionable.
Furthermore, even if the reactions were nearly instantaneous, we might assume them to be equally fast in reverse on the same authority. When the ocean outgasses to the atmosphere, we might justifiably assume equilibrium remains in the surface layer so that CO2(aq) is created instantaneously from the ions in the reverse reactions as fast as it is outgassed as CO2(g).
In other words, I could accept your instantaneous conversion of CO2(aq) to HCO3^- and CO3^– if you would grant instantaneous conversion from HCO3^- and CO3^– to CO2(aq), and to boot, instantaneous CO2(aq) ⇌ CO2(g) conversion. In this way, the surface ocean can be considered to be in equilibrium at a single point, i.e., a single pH on the Bjerrum plot, and Henry’s Law can proceed apace – no buffer (no bottleneck) and no acidification. I don’t care whether the surface ocean is modeled as being in disequilibrium or in a hypothetical state of equilibrium, so long as the physics of solubility of CO2 in water is respected, and the notions of the Revelle factor and acidification are left as failed conjectures.
4. Calculated bootstrap evidence for thermodynamic equilibrium
At your invitation, I read “The Interannual to Decadal Variability of the Ocean Carbon Cycle” from the website of the Marine Biogeochemistry Lab. First I would observe the realistic APPEARANCE of the data graphed in Figure 1: normalized DIC, normalized TA [Total Alkalinity], pCO2, and pH. Data points are connected in follow-the-dots fashion, which is OK here, and the trend lines are graphed separately. This is distinctly different from the various smoothing techniques IPCC applied to shift curves, after which the points represent plot points instead of data points. Also the data appear to have not unreasonable noise in amplitude and phase.
The legend for Figure 1 says, “Figure prepared for the IPCC 4th Assessment, in preparation 2004”. It says,
>>pCO2 data was [sic] calculated from DIC and alkalinity data using dissociation constants and theoretical considerations outlined in Bates et al., 1996. … pH data was [sic] calculated from DIC and alkalinity data using dissociation constants and theoretical considerations outlined in Bates et al., 1996.
That IPCC Report would issue three years later, with just the pCO2 and pH data merged into Figure 5.9, p. 404. This legend confirms that “Values of pCO2 and pH were CALCULATED from DIC and alkalinity at … BATS [Bermuda Atlantic Time-series Study] … .” CAPS added.
I found this likely, though not unique, reference on other pages from the same Lab: Bates, N.R., Michaels, A.F., and Knap, A.H., 1996, “Seasonal and interannual variability of the oceanic carbon dioxide system at the U.S. JGOFS Bermuda Atlantic Time-series Site. Deep-Sea Research II, 43(2-3), 347-383”. I retrieved a pdf image copy, and found the following:
>> The oceanic CO2 system can be characterized by measuring two of the parameters (i.e. pH, pCO2 or fCO2, TCO2, and TA), with the other two parameters CALCULATED USING THERMODYNAMIC RELATIONSHIPS (Stoll et al., 1993; Millero et al., 1993b; DOE, 1994). …
>>TCO2, TA, temperature, discrete salinity and nutrient data were used to CALCULATE pCO2 AND pH VALUES USING THE ALGEBRAIC RELATIONSHIPS given in Peng et al. (1987) AND DISSOCIATION CONSTANTS for carbonic acid (Goyet and Poisson, 1989), borate (Dickson, 1990), and phosphate (DOE, 1994). CAPS added.
The pH and pCO2 records look like data because they were calculated from noisy data.
Your observations relative to these articles are:
>>pCO2(g) increased about 30 ppmv in the period 1984-2004 or about 10%.
>>pCO2(aq) in the same period increased with about 25 ppmv or about 7.5%.
>>nDIC in the same period increased with 0.8%.
>>Remarkable that they could manage to obtain so many observed data for something that is not measurable.
You’ve been hoodwinked. What is remarkable is that the only records from Bermuda (BATS) reported by IPCC were calculated, not measured. Specifically, the pCO2 on which you rely to demonstrate the 10:1 relationship given by the Bjerrum plot, the solution to the thermodynamic equilibrium chemical equations, were computed using those relationships.
5. Re Mass Balance
You asked,
>>What kind of mass balance do you need?
Answer: the same as suggested by IPCC when it said,
>>[T]wo methods can be used to quantify the net global land-atmosphere flux: … (2) inferring the land-atmosphere flux simultaneously with the ocean sink by inverse analysis or MASS BALANCE COMPUTATIONS using atmospheric CO2 data, … . CAPS added, AR4, ¶7.3.2.2.2, p 519.
The boundary conditions for the necessary mass balance include the following. IPCC showed the air-ocean “sink” for the 1990s in Figure 7.3, p. 515. It absorbed 92.2 GtC/yr from the atmosphere and outgassed 90.6 GtC/yr into an atmosphere of 762 GtC. Meanwhile the ACO2 emissions were 6.4 GtC/yr from fossil fuels, plus other terrestrial fluxes. IPCC showed the MLO and Baring Head CO2 concentrations increased from about 320 ppm in 1970 to about 380 in 2005. IPCC also showed an isotopic lightening of the atmosphere from about -7.6 per mil in 1981 to roughly -8.1 per mil in 2003. (I believe I confused the names Baring Head and Point Barrow (BRW) previously.)
The mass balance needs to cover the period from 1750 to about a century into the future. For a first cut, the terrestrial fluxes may be assumed to have a net zero effect, and the process may be assumed to be isothermal. Then, the absorption of CO2 into the ocean must be set proportional to the atmospheric concentration of CO2, equivalently the pCO2. The outgassing must be inversely proportional to pCO2. The change in total CO2 in the atmosphere must be equal to the sum of the outgas plus the ACO2, less the amount absorbed. Certain reasonable assumptions are required, including the shape of the rise in ACO2 emissions, and the isotopic mixes of the atmosphere (and hence of the absorbed CO2) and of the outgassing CO2, presumed dominantly to be from ancient waters. These assumptions may be tested by the mass balance model.
Later, temperature effects may be added to make the outgassing proportional to solubility curve for CO2 in water as a function of a reasonable model for SST.
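The first-cut prescription above (uptake proportional to atmospheric CO2, outgassing inversely proportional, ACO2 added each year) can be sketched numerically. This is a toy integration with my own placeholder rate constants, chosen only to reproduce the 1990s fluxes from AR4 Figure 7.3; it is not anyone's published model:

```python
# Toy first-cut atmospheric carbon mass balance. Initial fluxes are
# the AR4 Figure 7.3 values quoted above; the rate constants k_up and
# k_out are hypothetical, fixed only by those initial conditions.

C0 = 762.0          # atmospheric carbon pool, GtC (AR4)
uptake0 = 92.2      # air -> ocean flux, GtC/yr
outgas0 = 90.6      # ocean -> air flux, GtC/yr
ACO2 = 6.4          # anthropogenic fossil-fuel emissions, GtC/yr

k_up = uptake0 / C0     # uptake proportional to atmospheric CO2
k_out = outgas0 * C0    # outgassing inversely proportional to it

def step(C, years):
    """Advance the atmospheric pool one year at a time."""
    for _ in range(years):
        C = C + k_out / C + ACO2 - k_up * C
    return C

C_after_10 = step(C0, 10)   # pool after a decade of these fluxes
```

With these placeholder constants the pool grows by roughly 19 GtC in ten years and relaxes toward a new equilibrium near 782 GtC, illustrating how the assumed feedbacks damp the initial 4.8 GtC/yr imbalance.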
As you say,
>>It’s as simple as that.
6. Re Henry’s Law, temperature, and atmospheric CO2 increases
You previously hypothesized a 30% rise in atmospheric CO2 to 800 GtC over the past 150 years. Here’s a simplified model for the partial effect of temperature, using your data and a physical model instead of an arbitrary comparison of the growth of CO2 concentration and temperature.
Your increase in atmospheric CO2 concentration, C, is from 615.4 to 800, or 184.6 GtC. Assume that to be linear at about 1.23 GtC/yr. IPCC’s estimate for the ocean’s CO2 outgas, O0, was 90 GtC for the 1990s. In this partial model, the uptake (I) is always at ice water temperature, so does not participate. The outgas is inversely proportional to X, the solubility factor (Henry’s coefficient), at an effective temperature roughly corresponding to tropical waters. The solubility is approximately linear at these temperatures, X(T) = X0 – mT. At 30ºC, X0 is 0.1257 and m is –0.0035/ºC. At 20ºC, X0 = 0.1688 and m = –0.0049/ºC. Handbook of Chemistry & Physics, 34th ed., 1953, Solubility of Gases in Water, p. 1532. Slopes are calculated from the table entries at the specified temperature and the next lower temperature.
With these assumptions, delta outgas, DelO = O – O0 ~ O0*m*T/X0. We use delta outgas because the question is about the marginal effects of a change from the initial conditions of the 1990s. Let T = i*DelT, a linear temperature rise, so O – O0 = O0*m*DelT*i/X0. The total added ocean flux for the 150 years is O0*m*DelT/X0 times the sum of i from 0 to 150. That sum is N*(N+1)/2 = 11325. Set the total added flux for 150 years equal to your atmospheric increase of 184.6 GtC, and solve for DelT. For T0 = 30ºC, DelT = -0.006505ºC and the total temperature increase over the past 150 years is 0.976ºC. For T0 = 20ºC, DelT = -0.00624ºC and the total temperature rise was 0.936ºC.
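The arithmetic of this step can be checked directly. The sketch below simply reproduces the stated numbers, taking magnitudes so that the total temperature rise comes out positive as quoted:

```python
# Numerical check of the marginal-outgassing arithmetic above.
# All values are copied from the text; abs(m) handles the sign
# convention, since the totals are quoted as positive rises.

O0 = 90.0        # 1990s ocean outgassing, GtC/yr
rise = 184.6     # assumed 150-year atmospheric increase, GtC
N = 150
tri = N * (N + 1) // 2    # sum of i from 1 to N = 11325

def total_rise(X0, m):
    """Per-year temperature step, and the 150-year total, implied by
    setting the accumulated extra outgassing equal to `rise`."""
    dT = rise * X0 / (O0 * abs(m) * tri)
    return dT, N * dT

dT30, tot30 = total_rise(0.1257, -0.0035)   # 30 C case
dT20, tot20 = total_rise(0.1688, -0.0049)   # 20 C case
```

Running this recovers the quoted figures: about 0.0065 ºC/yr and 0.976 ºC total for the 30 ºC case, and 0.936 ºC total for the 20 ºC case.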
Thus the temperature effects alone COULD account for the increase in atmospheric CO2 based on Henry’s Law, a reasonable natural temperature rise, and the natural CO2 flux, which IPCC negligently rendered constant and benign within its model of the carbon cycle. This analysis is by the partial derivative with respect to temperature, keeping the responses to pressure constant. It shows why a more complete mass balance treatment is necessary. Ocean uptake is proportional to C, and its outgas is inversely proportional to C.
A mass balance analysis is a scenario obeying principles of physics and IPCC’s boundary conditions. It might show that solubility accounts for the estimated rise in atmospheric CO2 as a feedback from atmospheric pressure, with SST having a negligible effect. The analysis can account for the proportion of that rise due to ACO2 (it is not 50% of the added ACO2!), and it can account for the observed change in the isotopic ratio. It should also show why IPCC’s model in which it adds the natural CO2 response to the ACO2 response is a modeling error. The carbon cycle is nonlinear so the response of the sum of forcings, nCO2 and ACO2, is not equal to the sum of the individual responses. Symbolically, R(nCO2 + ACO2) ≠ R(nCO2) + R(ACO2). The radiative forcing paradigm is invalid.
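The symbolic claim R(nCO2 + ACO2) ≠ R(nCO2) + R(ACO2) is easy to illustrate with any nonlinear response; the square-root response below is an arbitrary stand-in for a nonlinear carbon-cycle response, not a climate model:

```python
import math

# Toy illustration: for a nonlinear response R, the response to the
# sum of forcings is not the sum of the individual responses.

def R(x):
    """Arbitrary nonlinear response, standing in for the real one."""
    return math.sqrt(x)

nCO2, ACO2 = 90.0, 6.4        # GtC/yr, round figures from the thread
joint = R(nCO2 + ACO2)        # response to the combined forcing
summed = R(nCO2) + R(ACO2)    # sum of individual responses
```

Here joint is about 9.82 while summed is about 12.02, so superposing the two responses overstates the combined one; any strictly nonlinear R gives some such discrepancy.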
7. Re Regional data in global models
A final observation is that climate is a thermodynamic problem, meaning that it is a problem in macroparameters. A model that mixes macroparameters, mesoparameters, and microparameters is in grave danger of not converging to a useful model, that is, one that makes non-trivial predictions.
The following are examples of relevant macroparameters: global average surface temperature, SST, Bond albedo, and the key parameter, climate sensitivity. For these a global model is required for the ocean. Phenomena like the Atlantic gyre (Engelbeen, 7/2/10), the Bermuda ocean water, above, and salinity are regional phenomena, confounding considerations already taken into account in the assumed global ocean model. So, too, are the Takahashi analysis and considerations found here and there, in the literature and in the blogosphere, for the atmospheric temperature lapse rate and longwave radiation at various altitudes. The thermodynamic solution to climate is not likely sensitive to these mesoparameters.
Another way of looking at the problem is that regional effects, whether horizontal or vertical, are not uniquely determined by a state specified by thermodynamic coordinates for climate. Attempting to include irrelevant regional phenomena will most likely have the effect of preventing the global model from converging. This is Occam’s Razor raised from the subjective realm of elegance and simplicity to the objective realm of a modeling imperative.

Ferdinand Engelbeen
July 11, 2010 7:43 am

Jeff Glassman says:
July 10, 2010 at 4:37 pm
1. Re Species of Equilibrium
>>Then the dynamic equilibrium would be equivalent to thermodynamic equilibrium except for one additional problem: Wikipedia requires the process by which dynamic equilibrium is achieved to be reversible. As we have discussed above, reversible processes never occur in nature. That’s a violation of the Second Law.
Come on Jeff, Wiki shows that the Second Law also is applicable for reversible processes. And almost all processes in nature are reversible. Only for isolated systems in disequilibrium, the process is irreversible. Which is not the case for many processes on this earth…
>>The reason the surface layer is not in equilibrium is not a cosmic argument, but a local, observable argument, enmeshed in the more complex climate models. The notion of surface layer equilibrium is, to be kind, unreasonable.
The total ocean surface is never in equilibrium, there we agree, but at any particular point of the oceans, surface or deep part, the ocean is in thermodynamic (physical, chemical and thermal) equilibrium. One can measure temperature, concentrations, pH, pCO2(aq)… of that point. Any two of these parameters are sufficient to calculate the rest of the parameters of that point, based on the dynamic equilibrium equations. That that point (slightly) differs in equilibrium from the next point is true, but doesn’t change the fact that the equilibrium equations are applicable.
>>You claim, unnecessarily as it turns out, that the surface ocean is in dynamic equilibrium. For this to be true, the CO2 flux from air to sea would have to balance the flux from sea to air. However, you also rely on the fact that two fluxes are not equal in your argument justifying the Takahashi analysis.
Please read more carefully what I wrote: All in-water chemical processes are very fast in dynamic equilibrium (including thermodynamic equilibrium), within one second (correction: CO2 to bicarbonate and reverse needs 30 seconds half life, which still is more than fast enough). That includes pCO2(aq), but that excludes the pCO2(aq)-pCO2(g) equilibrium which is slow and depends on mechanical factors like stirring (by wind speed). Thus the Bjerrum plot and the underlying equations and the Revelle factor are applicable for any point in the oceans, but the CO2 flux between air and water depends on additional mechanical factors which are time dependent.
See: http://www.geo.uu.nl/Research/Geochemistry/kb/Knowledgebook/CO2_transfer.pdf
Thus only at the ultimate ocean surface skin is there an equilibrium between pCO2(g) and pCO2(aq); the rest of the ocean surface mixed layer needs more time (about one year, according to Takahashi…).
2. Re the Well Mixed assumption
>>You need to show, and have failed to do so, that the MLO data not only includes a “gain factor”, but that it is identical to the SPO gain factor. You also need to show that your data were the latest in effect at the time of the Fourth Assessment Report.
As shown from the different files, MLO doesn’t use a gain factor and SPO doesn’t use a gain factor, except if there are too many missing data from maximum two months. Then a curve derived from previous years + a (linear) gain factor, which represents the increase since last year, is used to represent one or two missing monthly values. If more months are missing, the whole year is indicated as missing. This is the current procedure used for all stations with continuous measurements. Those are the data used by the IPCC. The older procedure was simply plotting a curve through the remaining data. It doesn’t make any appreciable difference.
The procedure for flask measurements is different, as these are far more irregular and are always represented by curve fitting, which includes a linear gain factor over last year values. These values are not used by the IPCC. But they show a good agreement (the raw data) with the continuous measurements taken at the same place within a few tenths of a ppmv.
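If I understand the gap-filling rule correctly, it can be sketched as follows; the function, the series, and the gain value are illustrative inventions of mine, not NOAA's actual code:

```python
# Sketch of the described rule: a missing monthly value is estimated
# from the same month one year earlier plus a linear gain representing
# the year-on-year increase. Numbers below are purely illustrative.

def fill_missing(monthly, gain_per_year):
    """monthly: list of values with None for missing months.
    Fills a gap from the value 12 months earlier, plus the gain."""
    filled = list(monthly)
    for i, v in enumerate(filled):
        if v is None and i >= 12 and filled[i - 12] is not None:
            filled[i] = filled[i - 12] + gain_per_year
    return filled

# Twelve months of made-up data, then a partial year with one gap.
series = [380.0 + 0.1 * i for i in range(12)] + [381.2, None, 381.4]
result = fill_missing(series, gain_per_year=1.2)
```

The single missing month is replaced by its value from the prior year (380.1) plus the 1.2 ppmv annual gain, i.e. 381.3, while all measured months pass through untouched.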
Different filtering techniques were tested to represent MLO data as smoothed values:
http://www.catskill.net/denisenorris/ThoningK_JGR89.pdf
That procedure caused some problems, when all remaining data are at the beginning or end of a month:
http://wattsupwiththat.com/2008/08/06/post-mortem-on-the-mauna-loa-co2-data-eruption/
Based on that experience, a change log was started:
http://www.esrl.noaa.gov/gmd/ccgg/trends/trends_log.html
For the South Pole, similar procedures are used, but need far less filtering (except for mechanical problems), as the data are far less influenced by local disturbances. That was clear for the past (up to 1996): http://tenaya.ucsd.edu/~dettinge/co2.pdf (chapter 2).
Thus any change of procedures for the South Pole only after 1996 would have an impact on the difference with MLO.
>>You ask me to study the laboratory work, saying it will only take a half hour. But time is not the issue. The problem is that all that could be demonstrated from such a study is whether IPCC’s fraud penetrated to the laboratory level.
If you weren’t so stubborn, you could see how you undermine the credibility of yourself and all other skeptics. The raw data are the raw data. These show normal, local variability caused by local volcanic out gassing, upwind conditions, land side wind (all or any of these if applicable) and mechanical problems for all stations. The raw data and the monthly, “cleaned” data show exactly the same curve, trend and average, within tenths of a ppmv. They can be compared to flask data taken at the same place, by different labs, using different (calibration) methods. These differ not more than a few tenths of a ppmv.
You assume, based on the misinterpretation of one sentence in a report, that all these people are involved in the “manipulation” to show that CO2 is well-mixed. For such a grave accusation, some better proof should be given.
>>Second, IPCC admits making inter-network and intra-network calibrations. This is tantamount to a confession.
>>Third, IPCC does not report its calibration values, suggesting an intent to deceive.

As explained several times, the calibration gases are calibrated centrally by NOAA (and checked by different labs and with different methods) and the equipment of different measuring points is intercalibrated with the same calibration gases. That is necessary to maintain the integrity of the data. This is so basic for any laboratory that I suppose that you haven’t the slightest knowledge of the real world in these matters. Or should you like to have a blood test done by a laboratory that wasn’t calibrated/intercalibrated with other labs? See the calibration procedures at MLO (which applies to all other stations too):
http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html#instrument
If anybody wants to manipulate the raw data from any station, the only way is by manipulating the calibration gases, making them different for each station. But that would be seen in the independent flask sampling and calibration…
>>I am aware of the great importance of that well-mixed assumption to the AGW model, and hence the motivation for the abuses of science practiced by the Panel and its authors.
As the differences in CO2 levels in 95% of the atmosphere are less than 2% in absolute level, that has not the slightest influence on the AGW model, as that is based on a doubling and more of CO2 in absolute levels…
3. Re Immediate reaction assumption
>>No evidence exists to support this conclusion. It is a presumably necessary assumption to make the Bjerrum solution valid so that the ocean will create a buffer against solubility, and so that added CO2 will cause another crisis by acidifying the ocean. The motivation behind this naked assumption is clear and objectionable.
The reaction constants and speeds were established in the early 1900’s, long before there was any fear of acidifying the oceans or CAGW.
I had some link to the reaction speed, but can’t find it back. The slowest seems the two-way CO2 + H2O = HCO3(-) + H(+) conversion with 30 seconds half life at 37 C (in blood…), that is about 2 minutes at zero C. Not really slow for the three forms of carbon equilibrium at 1 or 100 m depth in the oceans, but for enzymatic reactions in blood, that is a lot of time:
http://www.acidbase.org/index.php?show=sb&action=explode&id=63&sid=66
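Assuming simple first-order kinetics (which is itself an assumption), the quoted half-lives imply relaxation times like these; the elapsed time chosen is illustrative:

```python
# How fast a CO2 perturbation relaxes, given the half-lives quoted
# above: ~30 s at 37 C (blood) and ~2 minutes near 0 C. First-order
# decay is assumed for this sketch.

def remaining(t_seconds, half_life_seconds):
    """Fraction of an initial CO2 excess still unreacted after t."""
    return 0.5 ** (t_seconds / half_life_seconds)

warm = remaining(300, 30)    # 5 minutes at the 30 s half-life
cold = remaining(300, 120)   # 5 minutes at the 2-minute half-life
```

After five minutes the warm-case excess is down to about 0.1%, and even the cold case has removed more than 80%, which is the sense in which the in-water conversion is "fast enough" on oceanic timescales.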
4. Calculated bootstrap evidence for thermodynamic equilibrium
>>You’ve been hoodwinked. What is remarkable is that the only records from Bermuda (BATS) reported by IPCC were calculated, not measured. Specifically, the pCO2 on which you rely to demonstrate the 10:1 relationship given by the Bjerrum plot, the solution to the thermodynamic equilibrium chemical equations, were computed using those relationships.
As was demonstrated over 80 years ago, the calculated and measured pCO2(aq) values and pH values for any point in the oceans are equal. No need to emphasize the point that these are calculated; it doesn’t demonstrate anything other than that the researchers assume the relationship still holds.
That pCO2 calculated and directly measured are interchangeable, can be seen by the use of volunteer ships with automated pCO2(aq) (spray) measurements:
http://www.bios.edu/Labs/co2lab/research/PCO2VOS.html
Even if pCO2(aq) and pH were calculated, pCO2(g) and DIC were measured, and the increase in DIC is less than 10% of the increase of pCO2(g), which is what the discussion was about. Thus while the oceans are a good buffer for pH changes, they can’t cope with the CO2 mass changes in the atmosphere.
5. Re Mass Balance
>>The change in total CO2 in the atmosphere must be equal to the sum of the outgas plus the ACO2, less the amount absorbed.
Which is currently about +4 GtC/year in the atmosphere. There is not the slightest need to know the real amount of outgassing or the amount absorbed, because we know the difference between the two: -4 GtC. It is as simple as that. To know the real amounts of outgassing and absorption is of interest for the fine details of the carbon cycle, but not necessary for an overall mass balance, which shows that humans are to blame for the increase, not nature, as long as the increase in the atmosphere is less than the addition by humans.
We know that with good accuracy for the past 50+ years. With less accuracy (both for emissions and CO2 changes) for pre-1960 values. And as we can’t look into the future, I don’t see any need to make any prediction.
For any company, few shareholders are interested in the turnover of a factory; they are interested only in the gain (or loss)… If they invested each year more than the gain the company shows, I don’t think the shareholders would be very happy.
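In round numbers from this thread (about 6.4 GtC/yr emitted, about 4 GtC/yr observed atmospheric increase), the bookkeeping behind this argument is one line:

```python
# The shareholder analogy in numbers: only the net matters.
# Round AR4-era figures quoted earlier in the thread.

emissions = 6.4        # GtC/yr, human
atm_increase = 4.0     # GtC/yr, observed rise in the atmosphere

# Net natural flux = observed increase minus human addition.
# A negative value means nature is a net sink, no matter how
# large the gross outgassing and uptake fluxes are.
net_natural = atm_increase - emissions
nature_is_net_sink = net_natural < 0
```

With these figures nature removes a net 2.4 GtC/yr, which is the sense in which the increase is attributed to the human addition without knowing either gross flux.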
6. Re Henry’s Law, temperature, and atmospheric CO2 increases
Nice theoretical calculation, but…
– There is little evidence that the temperature at the oceanic (CO2) hot spots increased much. As with the minimum at the poles, the maximum temperature of the ocean surface is limited, and wind speed and/or the frequency of hurricanes increases to maintain that maximum temperature.
– A large part of the 90/92 GtC as assumed by the IPCC is bidirectional, seasonal exchange from the mid-latitude oceans. According to my d13C calculations, that means that only about 40 GtC is permanently exchanged between the warm and cold ocean parts.
– Didn’t you forget something? If there is initially more outgassing and equal sink capacity, pCO2(g) will increase, thus reducing the speed of outgassing at the equator and increasing the speed of uptake near the poles (you know, the exchange needs time…). Thus at a certain moment, a new (dynamic…) equilibrium is reached and outgassing and uptake are equal again, at a higher level of pCO2(g) (and slightly higher levels of outflows and inflows).
– From the previous interglacial, we know that a higher SH ocean temperature and much higher land temperatures (+5 C in Alaska, forests in Alaska and Siberia until the Arctic Ocean) existed for several thousands of years. Whatever the resolution of the Vostok record, the long-range average CO2 level in that period was 290 ppmv, with a sensitivity of CO2 to temperature changes of 8 ppmv/C.
7. Re Regional data in global models
I agree that models of temperature, ocean flows, CO2 exchanges between the different compartments, etc., are not easy to obtain, and the calculations still have large margins of error. There are some reasonable alternatives for knowing the partitioning of net CO2 sinks between the oceans and vegetation, based on d13C and O2 balances, but the detailed carbon cycle will still need a lot of work.
But we don’t need these details for an overall CO2 mass balance of the atmosphere, neither for the origin of the increase over the past 150+ years…

July 12, 2010 8:55 am

Re Ferdinand Engelbeen, 7/11/10 at 7:43 am
After I pointed out we have already discussed this reversibility matter, you insisted,
>>Come on Jeff, Wiki shows that the Second Law also is applicable for reversible processes. And almost all processes in nature are reversible. Only for isolated systems in disequilibrium, the process is irreversible. Which is not the case for many processes on this earth.
Just to sharpen the focus and to help others who don’t want to search through this long, long thread, here’s what I posted previously:
>>>>The question immediately arises as to whether natural processes, i.e., the familiar processes of nature, are reversible or not. The purpose of this chapter is to show that it is a consequence of the second law of thermodynamics that all natural processes are irreversible. Zemansky, M. W., “Heat and Thermodynamics”, Ch. 8, Reversibility and Irreversibility, McGraw-Hill, Fourth Ed., 1957, p. 151-2.


We are at an impasse on dynamic equilibrium, and I can contribute no further. I must skip every paragraph you write in which you rely on this ill-defined concept, one that contradicts the conditions actually imposed for the chemical equations, the Bjerrum plot solution, and the Revelle factor conjecture to be valid.
You wrote,
>>SPO doesn’t use a gain factor, except if there are too many missing data from maximum two months.
The CDIAC material doesn’t say that. Can you provide an authority for this claim?
For support, you referenced a paper, “How we measure background CO2 levels on Mauna Loa”:
http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html#instrument
A book could be written about this procedure. A few critical items are the subjective nature of what it calls “outlier rejection”, a most problematic procedure, and the rejection of data for various conditions, mostly associated with the wind vector, which seems not to have been recorded. The lab rejects data, but says reassuringly, “No data are thrown away.” The lab computes minute and hourly averages, plus monthly and yearly means. The latter are not shown, but they are for comparison with the Scripps reductions. The graphs in this paper look like real data, unlike the records published by IPCC.
Scripps and MLO use different data selection methods, the paper says without explanation. It mentions no smoothing performed on the monthly and yearly records prior to comparing them, yet smoothing is apparent in the IPCC reports. Smoothing alone could account for the apparent agreement reported in the paper. The monkey business with the CO2 data seems to occur with the smoothing and the linear gain factors. Your observations and conclusions about the handling of data prior to those adjustments are to no avail.
The procedure says nothing about a linear gain factor, which is not surprising because CDIAC omits that factor for MLO data. Still, MLO seems to have a procedure in place for computing its averages in spite of not having a gain factor. Therefore, I am skeptical of your claim that the gain factor used at the other stations is for adjusting for missing data. Linear extrapolation might compensate for missing data, but that should not be called a gain factor.
The procedure notes that the MLO data might “be representative of … hopefully, the globe”. It spends some ink on discussing local phenomena and global phenomena, mentioning even human activity. It never mentions the massive natural outgassing from the ocean and the convection and wind currents that carry that CO2 rich air across Hawaii, where it is modulated by the local and seasonal winds. Keeling warned about relying on CO2 measurements taken near sources and sinks, but these investigators have yet to discover that MLO sits in the plume of ocean outgassing that is nominally 15 times as great as man’s puny emissions.
You say,
>>As the differences in CO2 levels in 95% of the atmosphere are less than 2% in absolute level, that has not the slightest influence on the AGW model, as that is based on a doubling and more of CO2 in absolute levels.
First, the doubling model is based on the presumption that radiative forcing is dependent on the logarithm of the GHG concentration. This is fallacious, violating the Beer-Lambert Law. It quickly leads to impossible results, and obliterates the natural saturation effects revealed from application of Beer-Lambert.
Second, IPCC’s model for the doubling is open-loop with respect to the dominant feedback in all of climate, the cloud albedo effect. As I have reported previously, cloud albedo could reduce the climate sensitivity by a factor of 10 without even being measurable in the state-of-the-art of albedo estimation. Recent satellite measurements as reported by Lindzen show the climate sensitivity to be a factor of 4 less than estimated by IPCC.
Apply the Beer-Lambert Law and close the cloud albedo feedback loop, and you will see that while CO2 has a positive effect on global temperatures, it is too small to be measured – it is lost in the noise. Perfecting CO2 measurement techniques has great academic interest, but no effect on a practical climate model.
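For illustration only, here are the two functional forms being contrasted: an unbounded logarithmic rule versus a saturating Beer-Lambert absorption term. The constants are arbitrary choices of mine and carry no claim about actual forcing values:

```python
import math

# The two functional forms under dispute. The logarithmic rule grows
# without bound in concentration C, while a Beer-Lambert absorption
# term saturates. k and C0 are illustrative, not fitted values.

C0 = 280.0     # reference concentration, ppmv
k = 0.005      # hypothetical absorption coefficient, 1/ppmv

def forcing_log(C):
    """Logarithmic rule: unbounded as C grows."""
    return math.log(C / C0)

def absorbed_fraction(C):
    """Beer-Lambert absorbed fraction: saturates toward 1."""
    return 1.0 - math.exp(-k * C)
```

At very high concentrations the logarithmic form keeps climbing, while the absorbed fraction is already within a few parts in 10^5 of its ceiling, which is the saturation behavior the Beer-Lambert argument turns on.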
You wrote,
>>The reaction constants and speeds were established in the early 1900′s, long before there was any fear of acidifying the oceans or CAGW.
>>I had some link to the reaction speed, but can’t find it back. The slowest seems the two-way CO2 + H2O = HCO3(-) + H(+) conversion with 30 seconds half life at 37 C (in blood…), that is about 2 minutes at zero C. Not really slow for the three forms of carbon equilibrium at 1 or 100 m depth in the oceans, … .
So may I presume a slug of CO2 was inserted in blood at t=0 and that it exhibited a half life of 30 seconds? Was the half life shown to be constant, or at least linear with the size of the slug? When that slug was first inserted, what were the ratios of CO2(aq):HCO3-:CO3–? Now suppose a stream of CO2 is inserted into a solvent. What are those ratios?
You wrote,
>>As was demonstrated over 80 years ago, the calculated and measured pCO2(aq) values and pH values for any point in the oceans are equal. … That pCO2 calculated and directly measured are interchangeable, can be seen by the use of volunteer ships with automated pCO2(aq) (spray) measurements:
http://www.bios.edu/Labs/co2lab/research/PCO2VOS.html
If what you claim were true, science might be well along the way of validating the thermodynamic equilibrium model applied to the real ocean. I could find that claim nowhere.
Your link was to the Marine Biogeochemistry Lab blog, again. This time to a page called, “Measurements of Partial Pressure of CO2 on Volunteering Observing Ships (VOS)”. It had nothing to support your claim of an equivalence established between calculations and measurements. It did have three other links, with many links to other links, and so on. I searched through several levels and could find nothing to support your claim.
Nowhere in this search could I find an instance of the same parameter both calculated and measured. In every case that I searched, I came to the point where the data were interpreted according to assumption of thermodynamic equilibrium, or to a vague reference to one or more publications by Weiss (e.g., 1974, 1982), papers which are not freely available to the public.
Weiss (1974) is cited in Zeebe & Wolf-Gladrow’s encyclopedia article previously referenced on this thread on 6/27 at 7:56 am. That source suggests that the results from Weiss (1974) apply in thermodynamic equilibrium. The CDIAC “Guide to Best Practices for Ocean CO2 Measurements”, 10/12/07, and the previous DOE “Handbook of Methods for the Analysis of the Various Parameters of Carbon Dioxide System in Sea Water”, 9/29/97, credit Weiss (1974) for the virial coefficients used to expand the ideal gas law from pV/RT to an expression for a real gas. Weiss (1974) may also be the source for reformulating Henry’s Law from pCO2 to fCO2, that is, from dependence on partial pressure to fugacity, a calculated parameter for a real gas equivalent to the partial pressure calculated for an ideal gas in a mixture of ideal gases.
Both handbooks cited provide this relevant passage:
>>Equations that describe the CO2 system in sea water
>>It is possible, in theory, to obtain a COMPLETE description of the carbon dioxide system in a sample of sea water at a particular temperature and pressure provided that the following information is known:
>>• the solubility constant for CO2 in sea water, K0,
>>• the equilibrium constants for each of the acid–base pairs that are assumed to exist in the solution,
>>• the total concentrations of all the non-CO2 acid–base pairs,
>>• the values of at least two of the CO2 related parameters: C_T , A_T , f(CO2), [H+].
>>The optimal choice of experimental variables is dictated by the nature of the problem being studied and remains at the discretion of the investigator. Although each of the CO2 related parameters is linearly independent, they are not orthogonal. For certain combinations there are limits to the accuracy with which the other parameters can be predicted from the measured data. These errors end up being propagated through the equations presented here. Such errors result from all the experimentally derived information, including the various equilibrium constants. As a consequence it is usually better to MEASURE a particular parameter directly using one of the methods detailed in Chapter 4 than to calculate it from other measurements. Italics converted to CAPS.
Here C_T is DIC, and A_T is the total alkalinity. Note, too, that it refers to a mere possibility in theory, meaning with certain hypothetical assumptions.
As qualitative as this handbook warning is, it contradicts your claim.
You claim,
>>The raw data and the monthly, “cleaned” data show exactly the same curve, trend and average, within tenths of a ppmv. They can be compared to flask data taken at the same place, by different labs, using different (calibration) methods. These differ not more than a few tenths of a ppmv.
Where are the data? You say “can be compared”, and then provide an accuracy. Don’t you mean, “were compared”, and don’t you need a citation?
If what you claim was true, why didn’t IPCC exploit that fact? Why do they display only the doctored monthly data? I would agree that THOSE data, the “‘cleaned'” records, differ by less than a few tenths of a ppmv.
You say,
>>>> The change in total CO2 in the atmosphere must be equal to the sum of the outgas plus the ACO2, less the amount absorbed.
>>Which is currently about +4 GtC/year in the atmosphere. There is not the slightest need to know the real amount of outgas or amount absorbed, because we know the difference between the two: -4 GtC. It is as simple as that.
Nonsense. IPCC claims the ocean outgassing is about 90 GtC per year and its uptake is about 92.2 GtC per year. You cannot work on the margin. The small difference between two large numbers is a classic error, and here it ignores the physics. IPCC keeps those numbers constant and in balance, when they must vary according to the law of solubility.
You continue,
>>To know the real amount of out gassing and absorption is of interest for the fine details of the carbon cycle, but not necessary for an overall mass balance, which shows that humans are to blame for the increase, not nature, as long as the increase in the atmosphere is less than the addition by humans.
What you are being shown is not the fine details, but the coarse details, the first order effects canceled by IPCC. Those first order effects are developed in the mass balance, an analysis IPCC implies that it made, but did not publish. You are quite right that the results from the IPCC show (according to IPCC, or better, “tend to show”) that humans are to blame. Of course, that was IPCC’s preconceived notion in its interpretation of its charter, and the rest of IPCC’s work is to select and to distort data to support its assumption. At the highest level, this is anti-science.
And lest we forget the rest of the story, the Revelle buffer, which IPCC elevated from a failed conjecture to a viable theory, is based on the same thermodynamic equilibrium assumption. And so, too, is the beautiful Takahashi diagram. These two pieces of the AGW conjecture are cast by IPCC as applying only to ACO2 or perhaps just fossil fuel emissions. Even if one were to accept that the surface layer of the ocean is close enough to thermodynamic equilibrium (close being a meaningless concept) for government work, the application of the results to ACO2 but not natural CO2 is ludicrous.
The mass balance analysis is crucial to a high-fidelity modeling of the carbon cycle. It is not crucial to a climate model because the carbon cycle is immaterial as a cause of global warming.
You wrote about
>>any point in the ocean
and
>>oceanic (CO2) hot spots
These may be examples of the ultimate in local phenomena. These reflect an inappropriate scale for the thermodynamic problem of the climate, and constitute a distraction from the ultimate question.
You asked,
>>Didn’t you forget something? If there is initially more out gassing and equal sink capacity, pCO2(g) will increase, thus reducing the speed of out gassing at the equator and increasing the speed of uptake near the poles (you know, the exchange needs time…).
No, and exactly! YOU postulated that CO2 increased by 30% over the past 150 years. Partial pressure is defined as the mole fraction of CO2 times the total pressure, where the total pressure here is about one atmosphere. Therefore pCO2(g) increases by the same 30% in your example. I was simply demonstrating how temperature alone could account for that increase, according to the law of solubility.
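The definition is compact enough to state in a few lines. A trivial sketch (290 ppmv is just a convenient round pre-industrial figure for illustration):

```python
def pCO2_atm(mole_fraction_ppmv, total_pressure_atm=1.0):
    """Partial pressure of CO2: mole fraction times total pressure."""
    return mole_fraction_ppmv * 1e-6 * total_pressure_atm

before = pCO2_atm(290.0)         # illustrative pre-industrial mole fraction, ppmv
after = pCO2_atm(290.0 * 1.30)   # the 30% increase under discussion
print(f"pCO2: {before * 1e6:.0f} -> {after * 1e6:.0f} microatm")  # 290 -> 377
```

At a fixed total pressure of one atmosphere, a 30% rise in the mole fraction is by definition a 30% rise in pCO2(g).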
You claimed,
>>Thus at a certain moment, a new (dynamic…) equilibrium is reached and out gassing and uptake are equal again, at a higher level of pCO2(g) (and a slightly higher level of outflows and inflows).
Why would that happen? Are you claiming Le Chatelier’s principle applies to your dynamic equilibrium? Isn’t that rather like saying a cone balanced on its apex will tend to resume that balance after being disturbed?
The Vostok record shows, within its granularity, that for more than a half million years of natural climate, the CO2 never stabilized. The CO2 is coming and going from someplace, and as my paper “The Acquittal of Carbon Dioxide” shows, the best answer is the surface water and is a consequence of Henry’s Law.
You say,
>>Whatever the resolution of the Vostok record, the long-range average CO2 level in that period was 290 ppmv, with a sensitivity of CO2 for temperature changes of 8 ppmv/C.
Just bear in mind that the Vostok record is heavily smoothed. The closure time of the firn, ranging from several decades to as much as a millennium and a half, acts like a low-pass filter. This sharply reduces the variability of the measurements compared to modern methods by a factor called the variance reduction ratio for the filter. An event like the one currently being observed at Mauna Loa, deemed to have existed for only 60 years or so, would be lost in the noise in the Vostok record. Also remember that Henry’s Law says that the CO2 fluxes are dependent on the relative concentration of CO2 in the atmosphere and not just on temperature. Once again, what is needed is a mass balance analysis.
You close with,
>>But we don’t need these details for an overall CO2 mass balance of the atmosphere, neither for the origin of the increase over the past 150+ years.
I agree. Those details, e.g., “exchanges between the different compartments” don’t participate in the mass balance analysis at all. And according to my unpublished mass balance model, the human caused part of atmospheric CO2 is approximately proportional to the ratio of its emissions to oceanic outgassing – about 6% to 10% all ’round.

Ferdinand Engelbeen
July 12, 2010 5:18 pm

Jeff Glassman says:
July 12, 2010 at 8:55 am
We are at an impasse on dynamic equilibrium, and I can contribute no further. I must skip every paragraph you write in which you rely on this ill-defined concept, one that contradicts the conditions actually imposed for the chemical equations, the Bjerrum plot solution, and the Revelle factor conjecture to be valid.
According to Wiki:
In simple terms, the second law is an expression of the fact that over time, differences in temperature, pressure, and chemical potential tend to even out in a physical system that is isolated from the outside world.
That is what the rest of the world simply uses for chemical (dynamic) equilibria and is happy with the results, as long as the equilibria are given sufficient time to equilibrate. No problem thus for CO2 equilibria in water…
The CDIAC material doesn’t say that. Can you provide an authority for this claim?
From CDIAC file
ftp://cdiac.ornl.gov/pub/trends/co2/sposio.20jan2009.co2
The “annual” average is the arithmetic mean of the twelve monthly values.
In years with one or two missing monthly values, annual values were calculated by substituting a fit value (4-harmonics with gain factor and spline) for that month and then averaging the twelve monthly values.

That simply shows that CDIAC doesn’t use a gain factor for continuous data from the South Pole in years without missing months (= months with less than 10 valid daily averages) and only uses the gain factor for the missing month(s). And it shows that the gain factor at SPO is not used to match the MLO data, but is used to match the remaining months of SPO for that year.
The lab rejects data, but says reassuringly, “No data are thrown away.” The lab computes minute and hourly averages, plus monthly and yearly means. The latter are not shown, but they are for comparison with the Scripps reductions. The graphs in this paper look like real data, unlike the records published by IPCC.
Indeed, no data are thrown away. These are still available in the hourly averages. The rejected data are not used for further averaging, and are indicated in the hourly averages file with different flags, showing the reason for rejection. Thus one can plot both all hourly average data with and without flagged outliers.
All data including outliers for 2004:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/mlo2004_hr_raw.jpg
Without outliers:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/mlo2004_hr_selected.gif
The average and trend with and without outliers don’t differ by more than 0.1 ppmv.
The daily, monthly and yearly averages are based only on the selected data, without outliers; these are available in separate files. The NOAA monthly averages are what the IPCC used; no additional smoothing is performed, as the monthly averages (without any additional gain, except for missing months) are smooth enough. Thus indeed, the monthly averages are smoothed by selection, but that doesn’t change the average or trend by more than a few tenths of a ppmv.
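A toy example of why discarding a handful of flagged outliers barely moves a monthly mean (entirely synthetic numbers, not real MLO data):

```python
# A month of synthetic "hourly" CO2 values with a small daily cycle.
raw = [385.0 + 0.1 * (i % 24) for i in range(720)]
outlier_hours = {5, 100, 300}          # pretend local venting events
for i in outlier_hours:
    raw[i] += 15.0                     # upward spikes, as flagged outliers

selected = [v for i, v in enumerate(raw) if i not in outlier_hours]

raw_mean = sum(raw) / len(raw)
sel_mean = sum(selected) / len(selected)
print(f"raw {raw_mean:.2f} ppmv, selected {sel_mean:.2f} ppmv, "
      f"difference {abs(raw_mean - sel_mean):.3f} ppmv")
```

Three 15 ppmv spikes in 720 hours shift the mean by only a few hundredths of a ppmv, consistent with the tenths-of-a-ppmv figure above.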
It never mentions the massive natural outgassing from the ocean and the convection and wind currents that carry that CO2 rich air across Hawaii, where it is modulated by the local and seasonal winds. Keeling warned about relying on CO2 measurements taken near sources and sinks, but these investigators have yet to discover that MLO sits in the plume of ocean outgassing that is nominally 15 times as great as man’s puny emissions.
The “massive” plume around MLO is not even visible in the satellite data, as the higher levels are more to the north, due to the massive decay of vegetation in winter + human emissions and the Ferrel cells dispersing that from mid-latitudes to the north:
http://airs.jpl.nasa.gov/story_archive/AIRS-CO2-Movie-2002-2009/
Look at the position of MLO and the scale of the color changes.
AIRS data are from the mid-troposphere, peaking around 6,000 m, but with flanks extending from ground level into the stratosphere, thus including a lot of the air column around MLO.
In fact, if you take MLO’s data for convenience (the longest continuous record), or South Pole data, or more or less “global” data (the average of several ocean-level stations), it hardly matters, as the difference in trend over the past 50+ years is less than 5 ppmv, while the level increased some 70 ppmv over the same period.
First, the doubling model is based on the presumption that radiative forcing is dependent on the logarithm of the GHG concentration.
I thought that that was simply measured in laboratory conditions, but I am not going to discuss that here. The point was that the current small deviations of CO2 levels around the world have no importance for the IPCC at all, as their models are based on a doubling and more in the future.
Now suppose a stream of CO2 is inserted into a solvent. What are those ratios?
The point is that, unlike in bodily fluids (where distances and enzymatic reactions are of very different orders), the speed of transfer of CO2(g) into CO2(aq) and reverse is many orders of magnitude slower than the speed of dissociation from CO2(aq) to bicarbonate and carbonate and reverse. Thus for any practical purpose, one may assume that the latter equilibria are achieved near-instantaneously, even if (relatively) huge fluxes are present.
Feely shows the fluxes involved:
http://www.pmel.noaa.gov/pubs/outstand/feel2331/images/fig05.gif
The peak ocean outgassing near the equator is about 1 mol/m2/year (or 44 grams of CO2 per year per square meter). 1 m3 of air contains about 44 moles of air molecules. That can be used to calculate the difference in ppmv for this flux:
At the air side, that gives an increase of 2.6 ppmv for a transport of one m3/hr of air from other latitudes at sea level (Hadley cells), not even measurable by satellite at such an extremely slow circulation. With more normal wind speeds, even Mauna Loa wouldn’t notice the difference.
At the water side, a loss of 2.6 ppmv in one m3/hr of water is peanuts, as the change in equilibrium is halved in 40 seconds or so at 30 C (2 minutes at 0 C), reaching a new equilibrium within a few minutes, below the detection limit if the change were pulse-wise. But as the change is continuous, there is no detectable difference between the measured level and the equilibrium level. For the 100-200 m mixed layer, there is no measurable effect on the equilibrium calculations from even the largest CO2 fluxes, for any momentary sample taken at any point in the layer.
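The 2.6 ppmv figure can be reconstructed as follows. This is my reading of the arithmetic, with the air assumed to cross each m2 of surface at 1 m3 per hour:

```python
flux_mol_m2_yr = 1.0        # peak equatorial outgassing, mol CO2 per m2 per year
air_mol_per_m3 = 44.0       # moles of air per m3 near sea level
hours_per_year = 8760.0

# Air passing over 1 m2 of surface in a year, at an assumed 1 m3/hr:
air_mol_per_yr = air_mol_per_m3 * hours_per_year
delta_ppmv = flux_mol_m2_yr / air_mol_per_yr * 1e6
print(f"enrichment of the passing air: {delta_ppmv:.1f} ppmv")  # ~2.6 ppmv
```

At realistic wind speeds the air volume is far larger, and the enrichment correspondingly smaller.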
Your link was to the Marine Biogeochemistry Lab blog, again. This time to a page called, “Measurements of Partial Pressure of CO2 on Volunteering Observing Ships (VOS)”. It had nothing to support your claim of an equivalence established between calculations and measurements.
The VOS have nobody on board who takes samples and measures them. The equipment takes samples fully automatically: pCO2(aq) via spraying and measuring the air above in equilibrium, pCO2(g) by direct intake, and pH with cells in the water, plus the water intake temperature. These values are used as is, corrected for temperature where necessary. This was not described in detail in my reference, but was in another article for different (ferry) VOS ships in The Netherlands. Thus the calculated pCO2(aq) and pH at Bermuda and the VOS directly measured pCO2(aq) and pH are used interchangeably.
But here is a more detailed description by Takahashi, the same principles are used by the VOS ships today:
http://cdiac.ornl.gov/ftp/oceans/takasouth/Takahashi-pco2.html
As a consequence it is usually better to MEASURE a particular parameter directly using one of the methods detailed in Chapter 4 than to calculate it from other measurements.
Agreed, but the question is whether the differences are important. Both DIC and CO2(g) were measured at Bermuda and show that DIC increases with less than 10% of the increase of CO2(g), which shows that the ocean’s mixed layer doesn’t absorb extra CO2 in the atmosphere that well. That was what the discussion was about. The differences between calculated and measured pCO2(aq) are not that important, as these are used interchangeably. The pH is a different problem: the slight changes in pH are nearly unmeasurable; that is probably why they present calculated values.
Further, here is a comparison between calculated and measured pCO2(aq):
https://bluemoon.ucsd.edu/publications/tim/marchem_2000.pdf
(some problems with the certificate of the website, but seems to be OK)
In the range 200-500 microatm, the difference is between +/- 3% of the range. For higher levels that increases to -4 to +8% of the range.
Where are the data? You say “can be compared”, and then provide an accuracy. Don’t you mean, “were compared”, and don’t you need a citation?
If what you claim was true, why didn’t IPCC exploit that fact? Why do they display only the doctored monthly data? I would agree that THOSE data, the “‘cleaned’” records, differ by less than a few tenths of a ppmv.

Please make some effort yourself; the data can easily be found online:
Hourly, uncorrected, averages including flagged outliers for Barrow, Mauna Loa, Samoa and South Pole:
ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/
daily, monthly and yearly averages, only based on selected data, including flask data:
ftp://cdiac.ornl.gov/pub/trends/co2/
and so on…
From Scripps (partly the same data, until NOAA started to manage the stations):
http://scrippsco2.ucsd.edu/data/atmospheric_co2.html
You can make it easy on yourself and let others plot the data:
http://cdiac.ornl.gov/trends/co2/
Or have a look at a comparison of Mauna Loa in situ and flask samples:
http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html#replication
And I have made a few comparisons between the hourly averaged, raw data including all outliers and the daily and monthly averages, based on discarding all outliers a.o. for Mauna Loa and South Pole in the same plot:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_mlo_spo_raw_select_2008.jpg
As already said several times, and as can be seen in the graphs and can be calculated, there is very little difference between the raw data and the “cleaned” monthly averages for average and trend. And South Pole and Mauna Loa show only a small difference, both in the raw data and in the cleaned monthly averages; the latter are what the IPCC used. There is no need for the IPCC to show the raw data; these are of no interest for the increase and only show more noise.
Nonsense. IPCC claims the ocean outgassing is about 90 GtC per year and its uptake is about 92.2 GtC per year. You cannot work on the margin. The small difference between two large numbers is a classic error, and here it ignores the physics. IPCC keeps those numbers constant and in balance, when they must vary according to the law of solubility.
The total mass balance of CO2 in the atmosphere is not based on the in/outflows of the atmosphere with the oceans and vegetation; it is based on the inventories of fossil fuel use and the measured increase in the atmosphere. The difference is what is absorbed by nature as a whole, thus land vegetation + oceans. The balance indeed shows a year-by-year variability of +/- 2 GtC around the trend, mostly caused by temperature changes. But based on the fact that the net increase over the past 50+ years was always smaller than the emissions, the natural variability was always caused by changes in net sink capacity. Whether that came from increased outgassing or reduced uptake or both is not relevant: in not one year of the past 50+ years was nature as a whole a net source of CO2.
What you are being shown is not the fine details, but the coarse details, the first order effects canceled by IPCC. Those first order effects are developed in the mass balance, an analysis IPCC implies that it made, but did not publish.
The emissions inventories are made by the finance departments of the individual states, the increase in the atmosphere is measured at Mauna Loa (but any other station would show near the same increase per year), the difference is the natural sink. That is all one needs to know that humans are the cause of the increase. What the IPCC did or didn’t do has no effect on that fact.
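The argument reduces to one line of arithmetic. A sketch with illustrative round numbers (GtC per year, roughly those used in this thread):

```python
emissions_gtc = 8.0   # fossil fuel emissions, from national inventories
increase_gtc = 4.0    # measured rise of atmospheric CO2 in the same year

# Net flux from nature as a whole (oceans + vegetation); negative = net sink.
net_natural_gtc = increase_gtc - emissions_gtc
print(f"net natural flux: {net_natural_gtc:+.1f} GtC/yr")
# Negative: nature absorbed more than it released that year, regardless of how
# the gross outgassing (~90 GtC/yr) and gross uptake (~92 GtC/yr) individually vary.
```

As long as the measured increase stays below the emissions, this difference stays negative, which is the whole of the mass-balance claim.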
These two pieces of the AGW conjecture are cast by IPCC as applying only to ACO2 or perhaps just fossil fuel emissions.
You are again mistaken: all mass transfer calculations and equilibria are for total CO2, with no differentiation between aCO2 and nCO2, except for the isotope ratios, which differ slightly between aCO2 and nCO2. That doesn’t influence the mass balances, nor the fluxes, but it influences the isotope balances. The latter are used to estimate where the aCO2 ultimately flows and resides.
No, and exactly! YOU postulated that CO2 increased by 30% over the past 150 years. Partial pressure is defined as the mole fraction of CO2 times the total pressure, where the total pressure here is about one atmosphere. Therefore pCO2(g) increases by the same 30% in your example. I was simply demonstrating how temperature alone could account for that increase, according to the law of solubility.
You started with an increasing temperature at the equator, leading to a constantly increasing outflow into the atmosphere, while at the cold side the temperature remained constant, thus the inflow didn’t increase, leading to a constant increase in the atmosphere.
But that doesn’t hold at all. Even if at the source the outflow remained constant, the increase of CO2 in the atmosphere would increase the uptake at the poles at constant temperature (as long as Henry’s Law holds), until outflows and uptake are again in equilibrium. But it is even more restricted, because your first sentence is wrong:
The outgas is inversely proportional to X, the solubility factor (Henry’s coefficient) at an effective temperature roughly corresponding to tropical waters.
That is only one part of the equation (and not inversely; it is a ratio coefficient); the other part is the effective concentrations on both sides. If these are equal (including Henry’s Law) at the temperature of the tropical waters, the outgassing will be zero. Thus the delta between pCO2(aq) and pCO2(g) is what drives the flux.
See for a good explanation:
http://www.apolloscitech.com/background.pdf
From that source:
The CO2 flux across the air-sea interface is calculated by the following widely used one-dimensional stagnant thin-film model [7]:
CO2 Flux = kβ(pCO2w – pCO2a),
where k is the gas transfer velocity; β (Bunsen coefficient) is the solubility of CO2 at given temperature and salinity [8]. pCO2w and pCO2a represent the partial pressure of CO2 in surface water and overlying air, respectively. Most of the uncertainty in this calculation results from the estimation of the gas transfer velocity (k), which is empirically derived from sea surface wind speed.
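A sketch of how that formula behaves. The k and β values below are hypothetical, chosen only to illustrate the sign convention (units assumed: k in m/day, β in mol/(m3·µatm), flux in mol/(m2·day)):

```python
def co2_flux(k, beta, pco2_water, pco2_air):
    """Stagnant thin-film model quoted above: flux = k * beta * (pCO2w - pCO2a).
    Positive flux means outgassing (sea -> air), negative means uptake."""
    return k * beta * (pco2_water - pco2_air)

# Hypothetical values: supersaturated tropical water vs. undersaturated polar water.
print(co2_flux(3.0, 3e-5, 420.0, 385.0))  # > 0: equatorial outgassing
print(co2_flux(3.0, 3e-5, 350.0, 385.0))  # < 0: polar uptake
```

The sign of the delta between pCO2(aq) and pCO2(g), not either term alone, sets the direction of the flux.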

Why would that happen? Are you claiming Le Chatelier’s principle applies to your dynamic equilibrium?
Of course that applies, as that is the case for every dynamic equilibrium. And the whole carbon cycle behaves like a simple process in dynamic equilibrium…
Just bear in mind that the Vostok record is heavily smoothed. The closure time of the firn, ranging from several decades to as much as a millennium and a half, acts like a low-pass filter. This sharply reduces the variability of the measurements compared to modern methods by a factor called the variance reduction ratio for the filter.
Even with the smoothing of the Vostok record (+/- 600 years for the Eemian), a peak of 100+ ppmv would be seen as an anomaly in the record. But that is not the point: it is about the sensitivity of the carbon system as a whole for changes in temperature. And that shows a remarkably stable ratio of 8 ppmv/C over the whole 420,000 years (Dome C is going to extend that over 800,000 years). That includes changes in ocean temperature, ocean flows, ice sheet formation, vegetation area changes,… From slightly warmer to much lower temperatures than today.
The same 8 ppmv/C response is visible for the MWP-LIA cooling (resolution of 40 years in the Law Dome ice core) and the current very short term response is 4 ppmv/C around the trend. Thus I don’t see any reason (besides that the mass balance and the isotope balance also prohibits that) that the oceans are the cause of the CO2 increase, based on the sea surface temperature increase.
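A back-of-envelope version of that argument; the 0.8 C warming and ~100 ppmv rise are illustrative round numbers, not precise observations:

```python
sensitivity_ppmv_per_c = 8.0   # ice-core CO2/temperature ratio cited above
warming_c = 0.8                # assumed sea-surface warming over the period
observed_rise_ppmv = 100.0     # roughly 290 -> 390 ppmv

temperature_driven_ppmv = sensitivity_ppmv_per_c * warming_c
print(f"temperature alone explains ~{temperature_driven_ppmv:.0f} ppmv "
      f"of the ~{observed_rise_ppmv:.0f} ppmv observed")  # ~6 of ~100 ppmv
```

On these numbers, warming of the sea surface accounts for only a small fraction of the observed increase, which is the point being made.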
I agree. Those details, e.g., “exchanges between the different compartments” don’t participate in the mass balance analysis at all. And according to my unpublished mass balance model, the human caused part of atmospheric CO2 is approximately proportional to the ratio of its emissions to oceanic outgassing – about 6% to 10% all ’round.
Indeed the current atmospheric part of aCO2 is about that percentage, according to my calculations too, but despite that low percentage, aCO2 is fully responsible for the 30+% increase in total CO2 in the atmosphere. That is the difference between a percentage of a turnover and the cause of a gain or loss in mass.

tonyb
Editor
July 13, 2010 3:14 am

Hi Ferdinand.
I asked you this question way back in the thread but cannot trace an answer.
You make an eloquent case that man is responsible for the increase in CO2 over the last 150 years. However, I believe that you do not think that increased CO2 has any particular impact on the slight temperature rise we can observe over the last 350 years (CET).
Can you point me to a paper whereby you set out your case that whilst we are responsible for increased CO2 concentration this has limited impact on temperatures?
thanks
Tonyb

Ferdinand Engelbeen
July 13, 2010 5:52 am

Hi Tony,
As an aside, the films of Iain Stewart were broadcast here on Flemish television last month… I had some comments on their website, like the “hide the decline” as discussed in Portsmouth. Now that Climategate has broken out, the truth is emerging, and I wonder if Iain will remember our discussion…
About the effect of a CO2 doubling:
– The physics of increased CO2 and IR absorption is quite solidly established. The effect is logarithmic, which means that each doubling has about the same effect. The effect of increases in CO2, water vapor and other gases was measured under lab conditions, leading to the HITRAN calculations of the transmission of IR through the atmosphere for different wavelengths. That shows a reasonable agreement with real-life measurements done by satellites. The effect of increasing CO2 until now is marginal and at the edge of the satellites’ detection limits.
Nevertheless, the effect of only a CO2 doubling, without feedbacks, is limited to about 0.9 C. Including water vapor feedback, which is reasonably anticipated for higher seawater temperature, that gets to 1.3 C. Still a benign warming.
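The logarithmic rule of thumb can be sketched as follows. The 5.35 W/m2 coefficient is the commonly used Myhre et al. (1998) approximation, and the no-feedback sensitivity below is simply chosen to reproduce the ~0.9 C per doubling quoted above, not derived independently:

```python
import math

def co2_forcing_wm2(c_ppmv, c0_ppmv=280.0):
    """Common logarithmic approximation: dF = 5.35 * ln(C/C0) W/m2."""
    return 5.35 * math.log(c_ppmv / c0_ppmv)

LAMBDA_K_PER_WM2 = 0.24  # no-feedback sensitivity, tuned to the ~0.9 C figure

dF = co2_forcing_wm2(560.0)  # one doubling from 280 ppmv
print(f"forcing {dF:.2f} W/m2 -> ~{LAMBDA_K_PER_WM2 * dF:.1f} C without feedbacks")
```

Because the forcing is logarithmic, a further doubling (560 to 1120 ppmv) would add the same increment again.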
The other feedbacks, which bring the IPCC estimate to 1.5-4.5 C, or an average of 3 C for 2xCO2, are where the discussion starts. That is based on two items: the negative forcing caused by human-induced aerosols and the influence of clouds.
The first item is quite interesting: the cooler 1945-1975 period couldn’t be reproduced by the earlier models, as CO2 was steadily increasing. Therefore they needed a cooling contribution, which was found in human-made sulfate aerosols, which were increasing in that period. Now if one calculates the effect of human sulfate aerosols (average lifetime in the troposphere of about 4 days) back from the natural sulfate aerosols of the Pinatubo eruption (average lifetime in the stratosphere about 3 years), then the net effect of human aerosols is less than 0.1 C cooling and since 1990 about steady (less in the Western world, more in SE Asia).
That was reflected in one of the shortest discussions at RealClimate:
http://www.realclimate.org/?comments_popup=245
Where a lot of links can be found.
In summary: the effect of cooling aerosols is largely overestimated (and of warming aerosols like soot underestimated), which has as consequence that the effect of GHGs is largely overestimated too, as both are in counterbalance for the 1945-1975 cooler period.
The second – even worse – problem in the climate models is the effect of clouds. These represent the largest factor in the width of the range of estimates. But the strange point is that all models include clouds as a positive feedback, while all specialists in cloud cover see clouds as a negative feedback. Clouds may even be part of the earth’s internal cyclic behaviour (PDO, NAO,…) and may be a forcing for temperature, causing temperature changes rather than the reverse.
All together in summary:
I do think that increasing CO2 will have some effect, but at the low side of, or lower than, the range the IPCC/models use. Thus in general a benign effect…

tonyb
Editor
July 13, 2010 7:46 am

Hi Ferdinand
I rather liked Iain Stewart although he was not able to answer our questions very well. He had a big following of groupies though didn’t he? 🙂
Driving that huge billboard around the city showing the Hockey stick made for great TV but poor science.
Ok, I understand your position now on warming. I don’t really disagree with you – I think the Miskolczi theory appears to work through the physics quite well, and he reckons a theoretical 0.7 C rise on doubling, although a real-world situation will almost certainly mitigate the actual rise. In this connection the positive feedbacks are rather fanciful, I always feel. They have never happened in the past with this sort of gentle temperature increase, so why should they happen now?
Why do you think that temperatures started rising hundreds of years before (possible) man made increases in CO2.
It is shown in numerous instrumental datasets such as CET
http://c3headlines.typepad.com/.a/6a010536b58035970c0120a7c87805970b-pi
By the way this has been a very interesting discussion with Jeff Glassman. I would score you 3 goals each and you are now in the last phase of extra time before the penalty shoot out. 🙂
I was rooting for Holland in the final but didn’t feel they played as well as they could and resorted to a lot of brutal football and on balance deserved to lose. A lesson for you and Jeff here, keep it clean!
best personal regards
Tonyb.

July 14, 2010 8:47 am

Re Ferdinand Engelbeen, 7/12/10 at 5:18 pm
1. Re Second Law & Dynamic Equilibrium
The Second Law eats Dynamic Equilibrium for breakfast. The Second Law tells us that systems spontaneously maximize their entropy, and that reversible processes do not exist in the real world. Without a definition for Dynamic Equilibrium, the notion remains too vague even to rank as a model. You use it as a universal substitute for thermodynamic equilibrium: when the latter is required for an a priori model but does not exist in the real-world application, you insert dynamic equilibrium. As used by Wikipedia, Dynamic Equilibrium applies only to reversible processes, which violate the Second Law.
2. Re gain factor
Cited in full, you relied on this description of SPO CO2 data:
>>Monthly values are expressed in parts per million (ppm) and reported in the 2003A SIO manometric mole fraction scale. The monthly values have been adjusted to the 15th of each month. Missing values are denoted by -99.99. The “annual” average is the arithmetic mean of the twelve monthly values. In years with one or two missing monthly values, annual values were calculated by substituting a fit value (4-harmonics with gain factor and spline) for that month and then averaging the twelve monthly values. CDIAC, Keeling, C.D., et al., May, 2005.
As I previously reported to you, that citation is out of date. The new version is materially different, and constitutes a correction:
>>Values above are taken from a curve consisting of 4 harmonics plus a stiff spline and a linear gain factor, fit to monthly concentration values adjusted to represent 2400 hours on the 15th day of each month. Data used to derive this curve are shown in the accompanying graph. Units are parts per million by volume (ppmv) expressed in the 2003A SIO manometric mole fraction scale. The “annual average” is the arithmetic mean of the twelve monthly values. CDIAC, Keeling, R. F., et al., May 2008.
http://cdiac.ornl.gov/trends/co2/sio-spl.html
The ambiguous phrase “4-harmonics WITH gain factor and spline” (CAPS added) no longer can be read to connect just to missing data. It now clearly applies to the entire curve, as it probably did in the first instance. As I explained previously, a gain factor is not an appropriate name for extrapolation or interpolation.
The CO2 data are generally worthless when published only with secret gain factors or smoothing filters with secret forms or secret parameter values.
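For concreteness, a curve of the "4 harmonics plus spline" type described in those CDIAC notes can be sketched as an ordinary least-squares fit. This is a sketch only: the plain linear trend column stands in for the "stiff spline", and nothing here reproduces SIO's actual procedure or its disputed gain factor.

```python
import numpy as np

# Design matrix for a fit of the general "harmonics plus trend" kind
# described in the CDIAC notes: a constant, a linear trend (standing in
# for the stiff spline), and n annual harmonics. An illustration of the
# technique, not SIO's code.
def design_matrix(t_years, n_harmonics=4):
    cols = [np.ones_like(t_years), t_years]
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t_years))
        cols.append(np.cos(2 * np.pi * k * t_years))
    return np.column_stack(cols)

# Fit to monthly CO2 values y at times t (decimal years):
#   coef, *_ = np.linalg.lstsq(design_matrix(t), y, rcond=None)
#   smooth = design_matrix(t) @ coef
```

A fit of this form necessarily smooths: whatever month-to-month variability is not captured by the trend and the four annual harmonics ends up in the residuals and disappears from the published curve.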
3. IPCC data smoothing
You provided links to two curves on your website. Neither even slightly resembles the records provided by IPCC in time span, variability, or data presentation. Compare with MLO v. SPO, TAR, Figure 3.2, p. 201, and MLO v. Baring Head, AR4 Figure 2.3, p. 138. The problem lies in those IPCC figures, and not some vague laboratory data. The fact that your curves might agree within 0.1 ppmv is irrelevant to the problem IPCC creates. IPCC’s smoothing appears in two ways. (1) The variabilities of both the trend and the seasonal variance of each record, considered separately, are too small compared with real data. And (2), the records considered pairwise agree too closely. You say that IPCC applied “no additional smoothing”. However, the smoothing was applied SOMEWHERE prior to publication in IPCC reports. Looking back just one level and according to IPCC authors, the smoothing included “4 harmonics plus a stiff spline and a linear gain factor” for SPO and Baring Head, at least, but not for MLO.
You insist,
>>In fact, if you take MLO’s data for convenience (the longest continuous record), or South Pole data or more or less “global” data (the average of several ocean level stations), it hardly matters as the difference in trend over the past 50+ years is less than 5 ppmv, while the level increased some 70 ppmv over the same period.
This is after filtering, smoothing, and a gain factor adjustment, all intended to make divergent data agree. If you find that the records agree at some level lower than the final, publication level, then you should work to discover how the data were manipulated to make that happen.
4. IPCC’s doubling error.
You say,
>>The point was that the current small deviations of CO2 levels around the world have no importance for the IPCC at all, as their models are based on a doubling and more in the future.
You left out important words. IPCC models climate sensitivity as a constant increase for each doubling of CO2 concentration. That is a logarithmic relationship. As a result, the effects of CO2 never saturate. The total RF for CO2 can quickly exceed the limits of its absorption bands. This model violates the Beer-Lambert Law. IPCC is enamored with the logarithmic model because (1) it exaggerates the effect of CO2 and (2) it doesn’t require the determination of the present day operating point for the climate with respect to CO2 concentration. The data to make that determination do not exist.
Since composing this response, you provided additional beliefs with respect to the logarithmic relationship, saying on 7/13/10 at 5:52 am
>>The physics of increased CO2 on IR absorption is quite solidly established. The effect is logarithmic, which means that each doubling has about the same effect.
This might be a belief in some circles, but it remains a violation of physics and even reason. For an à priori model, the logarithm function is easily fit to any concave (i.e., "concave down") segment of a specified curve in a region. That limited application of curve fitting is not sufficient to proclaim the specified curve to be logarithmic. As far as an à posteriori model is concerned, the experiment by which IPCC claims the sensitivity to the logarithm of CO2 concentration is linear has not been conducted. The Beer-Lambert Law provides instead that the response to the logarithm of increasing CO2 is an S-curve, concave up at first, then concave down.
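For reference, the logarithmic model under dispute can be written out in a few lines, using the simplified expression ΔF = 5.35 ln(C/C0) W/m² (the common Myhre et al. 1998 approximation, an assumption here, not a formula either party cites). It shows only why, under that model, every doubling contributes the same increment; it does not settle the Beer-Lambert objection above.

```python
import math

# Simplified CO2 forcing expression dF = 5.35 * ln(C/C0) W/m^2
# (Myhre et al. 1998 approximation; an illustration of the logarithmic
# model under debate, not an endorsement of it).
def delta_forcing(c_ppmv, c0_ppmv):
    return 5.35 * math.log(c_ppmv / c0_ppmv)

first = delta_forcing(560, 280)     # 280 -> 560 ppmv
second = delta_forcing(1120, 560)   # 560 -> 1120 ppmv
# Under the logarithmic model both equal 5.35 * ln(2), roughly 3.7 W/m^2.
```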
Also you use expressions like "solidly established" and other references to widespread practices. This may be the reason you subscribed to the belief system, but it is no part of science. A belief or a conjecture held by a petagaggle of climatologists is, alas, just a belief or a conjecture.
5. DIC measurements
You wrote,
>>Both DIC and CO2(g) were measured at Bermuda and show that DIC increases with less than 10% of the increase of CO2(g), which shows that the ocean’s mixed layer doesn’t absorb extra CO2 in the atmosphere that well.
I don’t understand what you have written. If I could see the data, I would have a chance. I exhausted myself searching through your citation plus its embedded citations through three or four levels, which came to nothing but calculations based on the hypothetical equilibrium model. Where exactly does the 10% number appear? With that information, I might understand the implied experiment and how you came to your conclusion that the ocean doesn’t absorb CO2 “that well” when CO2 is highly soluble in water.
6. Non-responsive answers.
I asked you where are the data supporting “a few tenths of a ppmv”, and you answer that I can make the calculations.
I asked why IPCC didn’t exploit what you assert: that the raw data match. You answer IPCC didn’t need to.
7. Basis for mass balance analysis
You claim,
>>The total mass balance of CO2 in the atmosphere is NOT based on the in/outflows of the atmosphere with the oceans and vegetation, it is based on the inventories of fossil fuel use and the measured increase in the atmosphere. CAPS replacing bold in original.
Previously, you wrote,
>>There are several mass balances in use, including by the IPCC. The IPCC doesn’t assume a separate aCO2 cycle (as MASS, but they do for isotope changes), as all emissions simply are mixed into the natural CO2 cycle. CAPS added.
And I responded,
>>Contrary to your assertion, IPCC does have separate ACO2 and nCO2 cycles. These it details in its carbon cycle figure. The ACO2 values are in red, the nCO2 values are in black. AR4, Figure 7.3, p. 515.

and you answered,
>>OK, this is the first time that I have looked at that graph. It looks identical to the NASA graph (which I thought it was), except that these make no differentiation between nCO2 and aCO2. It looks like a best guess of the partitioning between aCO2 and nCO2 in the total flows not as really separate cycles (except for the emissions of course). I suppose that the IPCC tried to show how much CO2 has increased (as mass) in different compartments as result of the emissions, but there is certainly not that much aCO2 in the atmosphere (at maximum some 8%) and I don’t think that the flows between oceans and atmosphere increased with 20 GtC/yr due to the emissions…
This is simply bad work.
 Engelbeen, 6/25/10, 3:10 pm.
So you previously recognized and wrote about IPCC reporting on the mass of CO2 and “the flows between oceans and atmospheres”. Just to be perfectly clear, Figure 7.3 which we discussed included the mass of CO2 in the reservoirs of the Atmosphere, Fossil Fuels, and Vegetation, Soil & Detritus. It included four exchange fluxes, in and out, between the atmosphere and the Vegetation parts, and it included four exchange fluxes, in and out, between the ocean and the atmosphere. The caption of Figure 7.3 includes,
>>Gross fluxes generally have uncertainties of more than ±20% but fractional amounts have been retained to achieve overall BALANCE when including estimates in fractions of GtC yr–1 for riverine transport, weathering, deep ocean burial, etc. CAPS added.
Now you say, with emphasis,
>>The total mass BALANCE of CO2 in the atmosphere is NOT based on the in/outflows of the atmosphere with the oceans and vegetation, it is based on the inventories of fossil fuel use and the measured increase in the atmosphere. CAPS added.
Ignoring the logical disconnect between the total mass balance compared with the fossil fuel use in your claim, it is nonetheless false. You appear not to be following the dialog on this thread. You are hip shooting – going for volume over substance.
You say,
>>As repeatedly said and shown with references to the (inter)calibration procedures, which you obviously choose to ignore, intercalibration is common practice in any type of laboratory to assure the correct operation of equipment and the correct value of calibration gases. This has nothing to do with bringing the observations into agreement. The observations are what they are, if you like them or not. And the IPCC has no business with the calibration and techniques used.

I have ignored none of it, and your accusation is baseless. You are wasting space by endlessly repeating the same irrelevant information about the quality of laboratory data. The fraud is manifest in the IPCC reports. The clues are the various inter and intra network calibrations admitted by IPCC, and the fact of the secret linear gain factors. IPCC applied those gain factors to the monthly data from stations other than MLO, and then provided graphs to show monthly MLO data tightly overlapping SPO and Baring Head data. You seem unable to discuss this matter and to stay on subject.
8. Analysis by partial derivatives
In case you weren’t conversant with partial derivatives, I laboriously laid out my analysis for you, saying,
>>This analysis is by the partial derivative with respect to temperature, keeping the responses to pressure constant. It shows why a more complete mass balance treatment is necessary.
You responded,
>>That is only one part of the equation (and not inversely, it is a ratio coefficient), the other part is the effective concentrations on both sides.
Of course! That is what I was trying to show you. That is what a partial derivative analysis means. Only one variable is to change at a time. The others are temporarily kept constant. I also told you what the next step is, and you repeat what the next step is.
9. pCO2(aq), the imaginary number.
I’ve lost count of the number of times you’ve tried to make this point:
>>If these are equal (including Henry’s Law) at the temperature of the tropical waters, the outgas will be zero. Thus the delta between pCO2(aq) and pCO2(g) is what drives the flux.

Henry’s Law does not involve the pCO2(aq). Henry was not so foolish. If pCO2(aq) existed, we might be able to measure it. If such things existed, we might be able to build a theory for the rate of flux between a gas and its solvent. Henry’s Law might be a lot easier to express, even limited to ideal gases. We might not need the condition of thermodynamic equilibrium.
On the other hand, the following equation is relevant:
F = ks(pCO2(aq) – pCO2(g))
It is discussed in the Rocket Scientist's Journal, "On Why CO2 Is Known Not To Have Accumulated in the Atmosphere, etc.", Eq. (1). However, pCO2(aq) is actually the pCO2(g) in the air above a sample of the water after the system has reached some approximation of equilibrium. It does not exist in the water to resist the flux, F, though the equation treats it that way. It is a scientific conjecture, set forth in a number of papers as if it were a validated model, i.e., a theory. The parameter k is the gas exchange relationship, usually attributed to Wanninkhof, an IPCC contributing author, for his extensive writings trying to establish a meaningful value based on wind speed and a viscosity factor for the water known as the Schmidt number. Wanninkhof, 1992, 1999, 2003. Wanninkhof says,
>>While it is doubtful that a single, simple parameterization with wind speed can cover all spatial scales and environmental conditions, wind is currently the most robust parameter available to estimate global exchange. Wanninkhof, R., et al., “A Cubic Relationship between Air-Sea CO2 Exchange and Wind Speed”, Geo.Phys.Res.Ltr., vol. 26, No. 13, 1889-1892, 7/1/99.
In other words, he’s looking for his keys where the light is good.
The gas exchange parameter has grave problems. Wanninkhof provides estimates for it from various investigators. See RSJ, "On Why CO2, etc.", id., Figure 2. In this figure, most of the estimators go to zero at zero wind speed, meaning Henry's Law ceases in zero wind. The disparity at high wind velocity on that chart is 8:1 between investigators. In Wanninkhof's presentation to the Joint Global Ocean Flux Study Committee on 5/5/03 on his work, he provides a chart showing an estimate attributed to "Weseley et al, 1982" that is 16 times as great as an estimator fit to bomb-14C. Id., Slide 13 of 41. On Slide 15 he says, "Cross (1-way) fluxes are about 50 times greater than net fluxes". The meaning of net in this statement is unclear, due in part to the PowerPoint presentation having no accompanying text. Also, the meaning of net remains hidden within the several papers available on this subject, including Takahashi, "Global sea-air CO2 flux based on climatological surface ocean pCO2, etc.", Deep-Sea Research II 49 (2002) 1601-1622. Is the k parameter calibrated to produce a net flux, or do the investigators do the subtraction after calculating the flux? Any measurements used to extract the k parameter would necessarily be a net amount, but what is the duration of the individual flux measurements?
Wanninkhof early worried:
>>It is not clear whether wind speed can be used by itself to estimate gas transfer velocities. Wanninkhof, R., “Relationship Between Wind Speed and Gas Exchange Over the Ocean”, J. Geophs.Res., vol. 97, No. C5, 7373-7382, 5/15/1992.
In the abstract, Wanninkhof says,
>>Some of the variability between different data sets can be accounted for by the suggested mechanisms, but MUCH OF THE VARIATION APPEARS DUE TO OTHER CAUSES. CAPS added, id.
The situation seems to have improved little over the ensuing two decades, although the writings and reliance on a spooky k parameter have proceeded as if the model had been validated. Seeing the actual flux data to which the coefficient was fit could have provided insight into the problem. And indeed, the problem should have been posed as a scientific model to predict air-sea CO2 flux, with the prediction tested against field data. That is not evident. Without such validation, the model could be no more than a hypothesis. Moreover, the fact that the crucial parameter pCO2(aq) does not exist, ever (and certainly not simultaneously with pCO2(g)), reduces the model to a mere conjecture.
The fact that the Takahashi diagram, which is calculated from the Wanninkhof k parameter, results in a coherent picture lends no weight to the conjecture. The same observation applies to the Revelle factor. See RSJ, “On Why CO2, etc.”, Figures 3 and 4. The fact that something can be measured validates no model. Without the model and an explicit prediction, we can’t say what was actually measured. The Takahashi diagram and the Revelle chart appear to be SST plots and consequences of ordinary solubility, not flux.
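For reference, the disputed relationship, Eq. (1) above, together with a Wanninkhof (1992)-style short-term wind parameterization of k, can be written out as a sketch. The 0.31 coefficient and the Schmidt-number scaling follow Wanninkhof 1992; the rest is illustrative, and nothing here resolves whether pCO2(aq) is physically meaningful.

```python
import math

# Eq. (1) from the text: F = k * s * (pCO2(aq) - pCO2(g)).
# k is taken from the Wanninkhof (1992) short-term relation
# k = 0.31 * u^2 * (Sc/660)^(-1/2) [cm/hr], u = wind speed in m/s,
# Sc = Schmidt number. Positive F means outgassing (ocean to air).
def gas_transfer_velocity(u_ms, schmidt=660.0):
    return 0.31 * u_ms**2 * math.sqrt(660.0 / schmidt)

def co2_flux(u_ms, solubility, pco2_aq, pco2_g, schmidt=660.0):
    return gas_transfer_velocity(u_ms, schmidt) * solubility * (pco2_aq - pco2_g)
```

Note that k(0) = 0: the parameterized exchange vanishes entirely in still air, which is exactly the zero-wind behaviour criticized above for most of the estimators in Figure 2.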
10. The resurrection of dynamic equilibrium
You say,
>>>>Why would that happen? Are you claiming Le Chatelier’s principle applies to your dynamic equilibrium?
>>Of course that applies, as that is the case for every dynamic equilibrium. And the whole carbon cycle behaves like a simple process in dynamic equilibrium.
Where is your support for these claims? You weaken all your arguments by laying down naked claims for some.
11. AIRS data
The AIRS data confound my model and IPCC's. The Equatorial outgassing recognized by IPCC and relied upon in my ocean macromodel, while visible in the Takahashi diagram, is not visible in the AIRS data. Of course, the AIRS observations are well up into the troposphere, and my model in which the outgassing affects the MLO record need not rise to the AIRS altitudes. The AIRS data are sure to improve modeling. However, the new data tend strongly to invalidate IPCC's critical well-mixed assumption.
For Tonyb, 7/13/10 at 7:46 am:
3 – 3? I demand a recount. Let’s invoke instant replay, goal by goal.

Ferdinand Engelbeen
July 14, 2010 5:32 pm

Jeff Glassman says:
July 14, 2010 at 8:47 am
1. Re Second Law & Dynamic Equilibrium
The Second Law eats Dynamic Equilibrium for breakfast. The Second Law tells us that systems spontaneously maximize their entropy, and that reversible processes do not exist in the real world. Without a definition for Dynamic Equilibrium, the notion remains too vague to even rank as a model. You use it as a universal substitute for thermodynamic equilibrium. When the latter is required for an à priori model but does not exist in the real-world application, you insert dynamic equilibrium. As used by Wikipedia, Dynamic Equilibrium applies only to reversible processes, which violate the Second Law.

I don't think that we will ever agree on this, but the natural world is full of processes which are in dynamic equilibrium, which obey the second law of thermodynamics by going into chemical, mass and thermal dynamic equilibrium with maximum entropy when in isolation. The difference is that you make an absolute condition of non-reversibility (which exists for the entropy), while the maximum entropy may include any dynamic chemical equilibrium where the net (mass, energy) transfer is zero.
See http://en.wikipedia.org/wiki/Chemical_equilibrium
Some nice discussion about the many interpretations of the second law:
http://www.phys.uu.nl/igg/jos/publications/dresden.pdf
2. Re gain factor
If Keeling Sr. only used the "gain factor" when 1-2 months were missing, and IF Keeling Jr. used a "gain factor" to fit the South Pole data to the Mauna Loa data, that would give a hell of a jump between 2005 and 2006, as the first gain factor didn't change the curve at all (other than for at most 2 months), while the second gain factor must have a profound effect, according to you.
Further, as already said but again ignored, the second procedure applies not to the continuous measurements at the South Pole or any other station, but to biweekly flask samples, as is clearly indicated in your reference:
Precise measurements of atmospheric CO2 at the South Pole have been obtained by Scripps Institution of Oceanography (SIO) researchers since 1957. This record is based primarily on biweekly flask sampling.
The whole curve is then fitted to make a nice plot from the noisy data.
The IPCC in its graph only used the “cleaned” monthly averaged continuous measurements of the different stations, without any “gain factor” (except for missing months), and compared the curve of Mauna Loa as delivered by NOAA, with that of Baring head, delivered by the National Institute of Water and Atmospheric Research of New Zealand. The “cleaned” monthly averages from Baring Head can be found here:
http://cdiac.ornl.gov/trends/co2/baring.html or directly from:
http://cdiac.ornl.gov/ftp/trends/co2/baring.177
This includes the notes (besides instrument malfunction to reject data):
Baseline data are selected from the remaining data based on steadiness of the CO2 concentration and on wind direction. At Baring Head maritime well mixed air masses come from the Southerly direction, and a baseline event is normally defined as one in which the local wind direction is from the South and the standard deviation of minute-by-minute CO2 concentrations is much smaller than 0.1 ppmv for 6 or more hours.
and
Annual means are simply the arithmetic mean of the monthly values calculated by CDIAC. Annual means are provided only for years with complete monthly records.
No curve fitting, no “linear gain” included at all…
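The Baring Head selection rule quoted above can be sketched as a simple filter. The 0.1 ppmv steadiness threshold and the 6-hour window come from the CDIAC notes; the numeric wind band taken here for "from the South" is my own illustrative assumption.

```python
import statistics

# Baring Head-style baseline test on one 6-hour window of minute-by-minute
# CO2 readings. The steadiness threshold (std dev well below 0.1 ppmv) is
# from the CDIAC notes; the 135-225 degree band for "southerly" wind is an
# assumed illustration, not the station's documented criterion.
def is_baseline(window_ppmv, wind_dir_deg):
    southerly = 135.0 <= wind_dir_deg <= 225.0
    steady = statistics.stdev(window_ppmv) < 0.1
    return southerly and steady
```

The point of the sketch is that this is data selection, not curve fitting: non-baseline windows are excluded, but the values that survive are untouched.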
But the raw flask data from different stations are available too: Keeling Sr. has plotted a few of them here:
http://cdiac.ornl.gov/ftp/ndp001a/ndp001a.pdf
The raw flask data for the different stations are here:
http://cdiac.ornl.gov/ftp/trends/co2/sio-keel-flask/
3. IPCC data smoothing
Was explained in item 2.
The IPCC doesn't smooth; it only uses the "cleaned" monthly averages delivered by others. The rules for excluding outliers are pre-defined, and the results with or without outliers are equal for any practical purpose.
You provided links to two curves on your website. Neither even slightly resembles the records provided by IPCC in time span, variability, or data presentation.
I have done my homework: if the raw and smoothed data for a few years resemble each other within the measurement error, I don’t see any reason to assume that the data for the rest of the years is manipulated to prove any preconceived theory. I have more interesting work to do than recalculate every single bit of information, but I like to check some random samples to see if what they say is true.
IPCC’s smoothing appears in two ways. (1) The variabilities of both the trend and the seasonal variance of each record, considered separately, are too small compared with real data. And (2), the records considered pairwise agree too closely.
Sorry, but that is your problem, not the IPCC's, nor mine. Both the raw data (continuous and flasks) and the "cleaned" data are very close to each other for all "base level" stations over the world. That is probably because CO2 in general is well mixed. Which is what you don't like to accept. Alternatively, you can measure the CO2 levels yourself. Hawaii is a nice spot to start with. The South Pole is a little harsh, but Antarctica still is on my wish list of things to do before I die…
Final note on this: If you accuse someone or some organisation of huge manipulation of data, as you do, please check the facts thoroughly before you put that accusation online…
4. IPCC’s doubling error.
Sorry, no discussion on that point. I have given my opinion for Tony, and that is it.
5. DIC measurements
You wrote,
>>Both DIC and CO2(g) were measured at Bermuda and show that DIC increases with less than 10% of the increase of CO2(g), which shows that the ocean’s mixed layer doesn’t absorb extra CO2 in the atmosphere that well.
I don’t understand what you have written. If I could see the data, I would have a chance. I exhausted myself searching through your citation plus its embedded citations through three or four levels, which came to nothing but calculations based on the hypothetical equilibrium model.

You are looking too far: the Bermuda plot shows the nDIC, which is measured (but normalized to standard conditions of salinity) and the (not shown, but measured) CO2 levels in the atmosphere. I have taken the CO2 levels from Mauna Loa as base, but as the trend in the NH is the same for all stations, that makes no difference. While pCO2(g) increased 10%, nDIC (in micromoles/kg) increased 0.8% in the same period. See:
http://www.bios.edu/Labs/co2lab/research/IntDecVar_OCC.html
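The two percentages quoted above imply a ratio that can be checked on the back of an envelope. Calling it a Revelle-factor-like buffer factor is my gloss, not Engelbeen's, and the two figures are the approximate values he states, not measured data.

```python
# Relative changes quoted above for the Bermuda record (approximate):
dpco2_rel = 0.10    # ~10% rise in atmospheric pCO2(g) over the period
ddic_rel = 0.008    # ~0.8% rise in normalized DIC over the same period

# Ratio of relative changes, i.e. a Revelle-factor-like buffer factor:
buffer_factor = dpco2_rel / ddic_rel   # ~12.5
```

A value near 12.5 sits inside the commonly cited Revelle-factor range of roughly 9-15, which is at least consistent with the claim that the mixed layer takes up only a small fraction of an atmospheric CO2 increase.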
6. Non-responsive answers.
I asked you where are the data supporting “a few tenths of a ppmv”, and you answer that I can make the calculations.

As can be seen in the different plots I already sent in, where raw and "cleaned" data were compared in the same plot, there is no visible difference in seasonal amplitude, average or trend. Based on the scale of the plots, those are differences of less than 0.2 ppmv. But I have calculated the averages too, which was added in the same message:
The 2008 average for the raw hourly data of Samoa is 384.00 ppmv, for the selected daily data it is 383.91. For Mauna Loa: raw 385.34, selected 385.49.
I suppose that I may say that at least for Mauna Loa and Samoa (but also for all other baseline stations) the atmosphere is very well mixed…

Thus I supplied the differences in average for including and excluding the outliers. The non-excluded data are used for daily, monthly and yearly averages. The monthly averages are what the IPCC plotted. I have done that for a few years. If you don't trust my calculations, then please help yourself…
I asked why IPCC didn’t exploit what you assert: that the raw data match. You answer IPCC didn’t need to.
Why should the IPCC look at the raw data at all? They are interested in the background CO2 increase, not the marginal noise around the trend. And they trust NOAA to deliver the best data available. Even valued skeptics like Dr. Spencer, Lindzen and many others accept the (cleaned) Mauna Loa data as presented by NOAA.
7. Basis for mass balance analysis
So you previously recognized and wrote about IPCC reporting on the mass of CO2 and “the flows between oceans and atmospheres”. Just to be perfectly clear, Figure 7.3 which we discussed included the mass of CO2 in the reservoirs of the Atmosphere, Fossil Fuels, and Vegetation, Soil & Detritus. It included four exchange fluxes, in and out, between the atmosphere and the Vegetation parts, and it included four exchange fluxes, in and out, between the ocean and the atmosphere.

OK, as a starter, let's forget the flux estimates for aCO2/nCO2 as plotted by the IPCC. There are too many errors in them. For the total mass of CO2 in the atmosphere, the total CO2 flows are of interest, not the partitioning between aCO2 and nCO2. NASA made the original plot:
http://earthobservatory.nasa.gov/Features/CarbonCycle/Images/carbon_cycle_diagram.jpg
Thus what is the change in mass of CO2 in the atmosphere after a year? That is the difference of all ins and outs (seen from the atmosphere):
dCO2(atm) = CO2(in1 + in2 + in3 + …) – CO2(out1 + out2 + out3) + aCO2
4 GtC = CO2(90 + 121 + …) – CO2(92 + 122 + …) + 8 GtC
or
CO2(in) – CO2(out) = – 4 GtC
In other words, without knowing any individual flux or change in fluxes or direction of fluxes or total influxes or total outfluxes, the net result of one year of carbon exchanges in all directions is that the sum of all outflows during one year is 4 GtC larger than the sum of all inflows. For another year that may be 3 GtC or 5 GtC, but in the past 50+ years there has always been more outflow than inflow.
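The arithmetic above can be condensed into a few lines (the 8 and 4 GtC figures are the round illustrative numbers used in this comment, not a dataset):

```python
# Yearly CO2 budget of the atmosphere, in GtC (round illustrative numbers
# from the comment above).
emissions = 8.0       # fossil fuel emissions (aCO2 added to the atmosphere)
atm_increase = 4.0    # measured increase of CO2 mass in the atmosphere

# Net of ALL natural exchanges (oceans + vegetation), seen from the
# atmosphere, without knowing any individual flux:
natural_net = atm_increase - emissions   # -4.0 GtC: nature is a net sink
```

The sign is the whole argument: as long as the measured increase is smaller than the emissions, the sum of all natural flows must be negative, whatever the individual fluxes are.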
You seem somewhat disconnected from the principles of a bank account: while a huge amount of money flows back and forth between many contributors and many loaners, what counts is the net result of all those transactions at the end of the year. If your contribution is larger than the gain, better look for another bank…
If you have a business you don't need a detailed record of all transactions during a day (you'd better keep one, but that is a different story) to count what is in your cash register and know what your gain or loss was for that day.
Thus while the detailed fluxes and inventories of the different compartments are of interest for the details of the carbon cycle, they are not needed to know the net result. And as the increase in the atmosphere is less than the emissions, and the law of conservation of mass still holds, the cause of the increase is perfectly known.
The clues are the various inter and intra network calibrations admitted by IPCC, and the fact of the secret linear gain factors. IPCC applied those gain factors to the monthly data from stations other than MLO, and then provided graphs to show monthly MLO data tightly overlapping SPO and Baring Head data. You seem unable to discuss this matter and to stay on subject.
You can endlessly repeat your accusations, but you have not a shred of evidence except your own misinterpretations of normal calibration procedures (for calibration gases and equipment, not of the data) and the fact that the "gain" is the difference from last year, needed for curve fitting of the noisy flask data, not the continuous measurements.
8. Analysis by partial derivatives
Maybe my misinterpretation then, but I need to look into the details. My interpretation was that in your calculation the outflow was constant over time, which can't be the case if CO2(g) increases and the temperature at the equator remains constant.
9. pCO2(aq), the imaginary number.
I’ve lost count of the number of times you’ve tried to make this point:
>>If these are equal (including Henry’s Law) at the temperature of the tropical waters, the outgas will be zero. Thus the delta between pCO2(aq) and pCO2(g) is what drives the flux.

Henry’s Law does not involve the pCO2(aq). Henry was not so foolish. If pCO2(aq) existed, we might be able to measure it.

Well, they are going to measure it (some do already in real life tests), liquid-liquid without any air involved:
http://bibapp.mbl.edu/works/18724
In any case, pCO2(aq) is the tendency of CO2 in solution to escape, and is proportional to the concentration of CO2 in the water phase, independent of the gas phase above it.
Moreover, the fact that the crucial parameter pCO2(aq) does not exist, ever (and certainly not simultaneously with pCO2(g)), reduces the model to a mere conjecture.
The fact that pCO2(aq) has been measured by ships and other means for over 80 years, and that the database now contains many millions of such data, taken simultaneously with pCO2(g) at the same places and showing huge differences from the equator (positive) to the poles (negative), is of course some trivial aside…
While I agree that the atmosphere-ocean flux is far from settled, if there is little to no wind, the flux of CO2 into and out of the oceans depends largely on diffusion of CO2 through the water phase plus the water-gas transfer, which are extremely slow. Whatever the pCO2(g)-pCO2(aq) difference. Or whatever Henry's Law predicts. What you still do not accept is that reaching the equilibrium that Henry's Law dictates in the water phase from the gas-phase concentration is not instantaneous and needs a lot of time, much more time than the changes in chemical equilibria which result from increasing or decreasing [CO2] in the water mass.
The Takahashi diagram and the Revelle chart appear to be SST plots and consequences of ordinary solubility, not flux.
Of course they appear like SST plots, as temperature is one of the main factors, but it is not the only factor.
10. The resurrection of dynamic equilibrium
You say,
>>>>Why would that happen? Are you claiming Le Chatelier’s principle applies to your dynamic equilibrium?
>>Of course that applies, as that is the case for every dynamic equilibrium. And the whole carbon cycle behaves like a simple process in dynamic equilibrium.
Where is your support for these claims? You weaken all your arguments by laying down naked claims for some.

According to Wiki, Le Chatelier’s principle:
Any change in status quo prompts an opposing reaction in the responding system.
If the temperature of seawater at the equator increases, more CO2 will build up in the atmosphere and that counteracts the release of CO2, at the same time increasing the uptake at the poles, even at constant temperature (you know Henry’s Law)…
If humans release CO2 into the atmosphere, that will increase the CO2 level of the atmosphere, which will reduce the release of CO2 at the equator and increase the uptake near the poles…
The CO2 response to temperature changes is remarkably linear over the past 420,000 years.
The CO2 response to human emissions is remarkably linear over the past 100+ years:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_co2_1900_2004.jpg
Thus the whole CO2 cycle behaves as a simple linear, first-order process where the dynamic equilibrium is disturbed by temperature and the emissions.
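That "simple linear, first-order process" claim can be sketched as a one-line relaxation model. The 50-year time constant and the constant 2 ppmv/yr emission term are purely illustrative assumptions; neither number appears in this thread.

```python
# dC/dt = E(t) - (C - C_eq) / tau : a linear first-order carbon cycle,
# where the dynamic equilibrium level C_eq is disturbed by the emission
# term E. tau = 50 yr and E = 2 ppmv/yr are illustrative assumptions only.
def step(c, emission, c_eq=280.0, tau=50.0, dt=1.0):
    return c + dt * (emission - (c - c_eq) / tau)

c = 280.0
for _ in range(100):
    c = step(c, emission=2.0)
# c relaxes toward the new quasi-equilibrium c_eq + tau * E = 380 ppmv
```

Under constant emissions such a process approaches an elevated quasi-equilibrium rather than growing without bound, which is the qualitative behaviour being claimed for the disturbed dynamic equilibrium.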
11. AIRS data
Of course, the AIRS observations are well up into the troposphere, and my model in which the outgassing affects the MLO record need not rise to the AIRS altitudes. The AIRS data are sure to improve modeling. However, the new data tend strongly to invalidate IPCC’s critical well-mixed assumption.

There are no practical differences between measurements taken at sea level in Hawaii and at Mauna Loa at 3,400 m. There are no practical differences between the AIRS data at the latitude of Mauna Loa (with peak at 6,000 m) and the data of Mauna Loa. The AIRS data show maximum CO2 levels in the mid-latitudes; ground and altitude stations show maxima in the mid and more northerly latitudes…
And again, if a monthly variation of only 2% of the range (of which most is due to the seasonal changes and the SH lag) is not well-mixed, what on earth is then well-mixed?

Anderlan
August 2, 2010 11:17 pm

This is very dangerous, this is the first step down the road of believing the socialist hoax. Next is the idea that CO2, *OUR* CO2, could contribute heat to the atmosphere that wouldn’t otherwise be there. Now, we have to follow your Gaia equilibrium hypothesis that the earth will naturally (and quickly) bounce the heat credit back out into space. It is easier if we can just believe that CO2 isn’t increasing because of man, that CO2 doesn’t heat the atmosphere anyway, and that higher global average temps are good. (Those things are contradictory, but you only have to believe them one at a time, as the argument calls for.)

George E. Smith
August 5, 2010 6:24 pm

Thanks for the CO2 data; especially for the ice cores, Willis.
Now let me see how many of your roolz I can break.
First question would be: for those of us who are largely onlookers, can you tell us approximately where these various ice cores hail from (pun intended)? For example, is Fuji actually an ice core off Mt Fuji, or am I jumping to an erroneous conclusion? Not that it’s a big deal; just an approximate “West Antarctica”, “Central Greenland”, whatever will do.
First observation would be; that 280 ppm over all that pre-modern period is remarkably flat data. How could that possibly jibe with global temperature fluctuations over the same time frame given that Schneider’s law says:-
T2 − T1 = cs × log2(CO2_2 / CO2_1), thereby defining “Climate Sensitivity” (cs) as the mean global surface Temperature rise for any CO2 doubling; which I presume is the “Climate Science” equivalent to the velocity of light (c) in “Optical Science”.
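As a quick worked example of that logarithmic relation (the sensitivity value of 3 °C per doubling is purely illustrative, not a claim from the post):

```python
import math

# The relation quoted above: T2 - T1 = cs * log2(CO2_2 / CO2_1).
# cs = 3.0 degC per doubling is an illustrative assumption only.
def delta_t(c1_ppm, c2_ppm, cs=3.0):
    return cs * math.log2(c2_ppm / c1_ppm)

print(round(delta_t(280.0, 560.0), 2))  # a full doubling gives cs: 3.0
print(round(delta_t(280.0, 390.0), 2))  # pre-industrial to modern: 1.43
```

The logarithm is why a flat 280 ppm record is hard to square with sizeable temperature swings: under this relation, constant CO2 implies zero CO2-driven temperature change.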
Second observation:- I don’t see any relevance or meaning to the term “Residence Time” when it comes to CO2 in the atmosphere. As far as I can tell, CO2, like H2O, is and always has been (since humans) a permanent component of the earth’s atmosphere; so it is NOT decaying.
I’m also puzzled by the term “e-folding time”, since I cannot distinguish it (as described) from the “Time Constant” (tau) of ordinary exponential decay. Do Climatologists have some special drive to make up their own terms, and ignore the vast body of previous Physics research and Literature?
I’m happy with: CO2_2 = CO2_1 × e^(−t/tau). Which is still silly, since CO2 is not decaying away. Now the advantage of the standard time-constant form is that the initial rate of decay is such that, continued at that constant rate, it would decay to zero in one time constant.
So for example, let us imagine that 280 ppm is the baseline to which CO2 wants to “decay” and we have a transient excess of 110 ppm giving us 390 ppm.
The annual peak to peak cycling of CO2 at the north Pole, and in fact of the whole >80 deg Arctic is 18 ppm versus 6 ppm for Mauna Loa.
And that 18 ppm drop occurs every year in just five months; and I believe is largely due to the arctic ice melt which then takes up CO2 from the atmosphere into all that melt water from which CO2 was excluded (segregated out) during the previous freeze.
So that 18 ppm drop in five months appears almost a straight line; and if we assume that, left running, the mechanism would run all the way down from 390 to 280 ppm, then the Time constant must be 110/18 × 5 months, or 30.56 months.
That means a decay to a 1% residual (excess over 280) would take about five time constants, roughly 153 months or 12.7 years.
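The back-of-the-envelope arithmetic above can be checked with a few lines of Python. All the inputs come from the comment itself; the exponential-decay framing is the commenter's, and the "five time constants" figure is the usual rule of thumb (the exact value is ln(100) ≈ 4.6 time constants).

```python
import math

# Checking the back-of-the-envelope numbers above.
excess = 110.0   # ppm excess over the 280 ppm baseline
drop = 18.0      # ppm taken up in one Arctic melt season
months = 5.0     # length of that melt season

tau = excess / drop * months          # e-folding time, in months
print(round(tau, 2))                  # -> 30.56

# Decay to a 1% residual: the "five time constants" rule of thumb
# versus the exact figure, ln(100) = 4.6 time constants.
print(round(5 * tau / 12, 1))              # -> 12.7 years
print(round(tau * math.log(100) / 12, 1))  # -> 11.7 years
```

Either way, the answer is on the order of a decade, not the 53 months a slip of the pen might suggest.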
But as I said; it isn’t really decaying anyway, since the amount is maintained by continuous emission. It is quite irrelevant that one CO2 molecule is being replaced by another; they do the same thing.
Willis you say the isotope argument is a weak one; and I agree; though I do not dismiss it.
Assuming it is true that C12 abundance has been increasing; or rather C13 isotope is decreasing; that is a plausible argument that a source of C13 depleted carbon is being released to the atmosphere. It is not such a good argument to say that the INCREASE in atmospheric CO2 is ALL from that source.
If somebody pumped oil from an oil field that for some reason contained a lot of Argon which we just vented to the atmosphere; then of course we would expect atmospheric Argon to increase. So I agree with you that the isotope argument is not definitive. I am also given to understand from some of the Botanical visitors to WUWT, that there are several kinds of botanical metabolisms depending on plant species, and that they behave quite differently as regards C12/C13.
But bottom line is; I’m not quite daft enough to claim that man is not the source of the CO2 increase; we certainly are emitting plenty; but I am curious that it is just 800 years or so since the missing or suppressed MWP so per Al Gore’s graphs we should be due for a CO2 goosing round about now; due to that warming period.
But I really like your ice cores; although I wonder how they relate to global CO2, since the antarctic at least is quite atypical when it comes to CO2; which, despite glib claims by Climatologists, is anything but well mixed in the atmosphere. I didn’t know there could be anything on earth that was as hemispherically asymmetrical as the global distribution of CO2.
We have an 18 ppm p-p cycle north of 80 deg, falling to 6 ppm p-p at Mauna Loa, and then reversing to -1 ppm at the south pole.
So much for “well mixed”.
“Well mixed” to me, means that I can take a sample of atmosphere from anywhere on earth, say near the surface and not immediately adjacent to some hot CO2 source, and upon assay obtain the same species relative abundance everywhere.
And why is it that people use ppmv; when the material is not all at some standard condition; when it is so much easier to talk about mole abundance since we can actually count molecules.
But thanks again Willis for an enlightening post. I’m totally dependent on folks like you and Steve and others who have real data available and know how to get it.
George

George E. Smith
August 5, 2010 6:56 pm

As to “Reversible Processes” and the Second Law which is mashing around in the Engelbeen et al discussion.
I’m a fan of Rudolph Clausius’ statement of the second law; if only for its arcane, definitely non-Churchillian English exposition; to wit:-
“No CYCLIC machine can have as its SOLE effect the transport of ‘heat’ from a source at one Temperature to a sink at a higher Temperature.”
And note that the process is to be cyclic but that does not imply reversibility.
We teach in Optics that Light is reversible. That is not even approximately true except in the most degenerate of cases involving no change in medium.
We could say that the rays of geometric optics are reversible.
But Physically, if you have a boundary between two media; there is always a ray split, and you get a reflected beam and a refracted beam typically both partially plane polarised.
If you reverse the reflected and refracted output beams they do not reform the original input beam and you still end up with multiple beams.
And true reversibility of a heat engine; would certainly seem to be prohibited.
As for clouds being a positive feedback in models; that is completely laughable. It always gets cooler in the shadow zone when a cloud passes in front of the sun. The surface is typically illuminated by a nearly collimated beam from the sun (1/2 degree divergence), whereas the surface-emitted LWIR from the shadow zone is at least Lambertian if the surface is optical, and more likely isotropic for a rough surface. So the cloud forming the shadow can intercept only a small fraction of the LWIR from the shadow zone; and upon re-radiation from the cloud as an LWIR spectrum, even less of it arrives back at the shadow zone. So there is no way the LWIR emissions make up for the loss of direct solar insolation.
And please remember weather is NOT climate; so don’t tell me about last night’s weather with a high wispy cloud. When we talk about cloud effects we mean the effect of a change in global cloud cover that persists over some climatically significant time interval, like say 30 years. It is ALWAYS negative feedback; NEVER positive.
And see Wentz et al., SCIENCE for July 7, 2007, “How Much More Rain Will Global Warming Bring?” The only thing Wentz does NOT say is that it is fashionable in most science circles to have more clouds of precipitating density when you have more rain.
The resulting negative feedback is huge compared to any possible CO2 effect.