A look at human CO2 emissions -vs- ocean absorption

Steve Fitzpatrick writes in with a short essay:

Graphic by NASA
Ocean CO2 absorption

On May 11 you reposted a blog from Dr. Roy Spencer, where he suggests that much of the increase in atmospheric CO2 could be due to warming of the oceans, and where he presents a few graphs that he claims are consistent with ocean surface temperature change contributing more than 80% of the measured increase in CO2 since 1958.  Dr. Spencer’s suggestion is contradicted by many published studies of absorption of CO2 by the ocean, some dating from the early 1960s, long before “global warming” was a political issue.  In this post I offer a simple model that shows why net absorption of CO2 by the ocean is most likely the main ocean effect.

If the rise in CO2 is being driven by human emissions, then the year-on-year increase in atmospheric CO2 ought to be a function of the rate of release of CO2, less any increase in the rate of removal of CO2 by increased plant growth and by absorption and chemical neutralization of CO2 by the ocean.  Both ocean absorption and plant growth rates should increase with increased CO2 concentration in the atmosphere.  To simplify things, I focus here only on ocean absorption.

On the other hand, surface temperature changes ought to have a relatively rapid effect, because the surface of the ocean is in contact with the atmosphere and so can quickly absorb or desorb CO2 as the water temperature changes.  In fact, the ocean surface continuously absorbs CO2 where the temperature is falling, mostly at high latitudes, and emits CO2 where the water is warming, mostly at lower latitudes.  Cold upwelling water from the deep ocean warms at the surface and desorbs CO2, while very cold water at high latitudes absorbs CO2 before it falls to the deep ocean.  An increase in average ocean surface temperature will cause more CO2 to be emitted from surface water, but this effect is limited to a very small volume fraction of the ocean.  Effects due to rapid temperature changes (annual time scale and less) are limited to a relatively thin layer, while the gradual absorption/neutralization process takes place at a rate controlled by ocean circulation and replacement of the surface water with upwelling (and “very old”) deep ocean water.

Any change in sea surface temperature should add to or subtract from the atmosphere’s CO2.

Annual change = (Annual emissions) – K1 * (CO2 – 285) + K2 * (delta SST)

Where “CO2” is the atmospheric concentration, K1 is an “ocean uptake constant” (per year), and K2 is a sea surface absorption/temperature constant, with units of PPM per degree C.  Delta SST is the year-on-year change in average sea surface temperature.  K1 is related to how quickly surface water is replaced by deeper water, and it should be a relatively small number, since ocean circulation and mixing are slow.  K2 should be a relatively large number, since surface water temperature changes are relatively fast and we know that there is a strong short-term correlation between the rate of change of CO2 concentration and SST changes.

The model performs an iterative calculation (a step-wise approximation of integration) of the evolution of CO2 in the atmosphere.  Each year a change in CO2 is calculated using the above equation, that change is added to the atmospheric CO2 concentration from the previous year, and the process is then repeated.  The calculation starts with 1959, using a starting CO2 concentration of 315 (the value from Mauna Loa in 1958).

Measured CO2 values and measured year-on-year changes are from Mauna Loa.  Average SSTs are from GISS.  CO2 emissions, expressed as PPM of potential increase in atmospheric CO2, are based on worldwide carbon emissions (according to CDIAC at Oak Ridge) converted to an equivalent weight of CO2, divided by an assumed atmosphere weight of 5.3 X 10^9 million tons.  This result was scaled by a constant factor of 0.7232, which is 28.96/44 = 0.6582 (to convert weight fraction of CO2 to volume fraction) multiplied by 1.099 (to match the range of CO2 emissions that Dr. Spencer used in his May 11 blog post).  Note that nobody really knows the total carbon emissions, so different sources offer different estimates of total emissions.  I had to estimate the final two years of CO2 emissions because the CDIAC data ended in 2006.  I assumed an equilibrium ocean CO2 level of 285 PPM.  I optimized K1 and K2 by hand so that the model had a reasonable fit with the data; the values were 0.0215 for K1 and 5.0 for K2.  So the model equation is:
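As a check on the arithmetic, the conversion just described can be sketched in a few lines (a hedged illustration: the atmosphere mass and the 1.099 scaling factor are the values quoted above, and the function name is mine):

```python
# Sketch of the emissions-to-PPM conversion described above.  Assumed values
# come straight from the text: atmosphere mass of 5.3e9 million tons
# (= 5.3e15 tonnes) and the author's 1.099 matching factor.
M_ATM_TONNES = 5.3e15          # assumed total mass of the atmosphere, tonnes
MW_AIR, MW_CO2, MW_C = 28.96, 44.0, 12.0

def carbon_gt_to_ppm(carbon_gt, scale=1.099):
    """Annual carbon emissions (Gt C) -> potential CO2 increase (PPM by volume)."""
    co2_tonnes = carbon_gt * 1.0e9 * (MW_CO2 / MW_C)    # Gt C -> tonnes of CO2
    ppm_by_weight = co2_tonnes / M_ATM_TONNES * 1.0e6   # weight fraction -> PPM
    return ppm_by_weight * (MW_AIR / MW_CO2) * scale    # weight -> volume fraction, scaled

# Roughly 8 Gt C/yr (a late-2000s order of magnitude) works out to about
# 4 PPM/yr of potential increase.
```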

Annual change = (Annual emissions) – 0.0215 * (CO2 – 285) + 5.0 * (delta SST)
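The iterative calculation itself is short enough to sketch directly. This is an illustration only: the constants are the fitted values above, but the emissions and delta-SST series are placeholders, not the CDIAC and GISS data actually used.

```python
# Step-wise sketch of the model equation above; illustrative inputs only.
K1, K2, CO2_EQ = 0.0215, 5.0, 285.0   # fitted constants and assumed equilibrium level

def run_model(co2_start, emissions_ppm, delta_sst):
    """Iterate: annual change = emissions - K1*(CO2 - 285) + K2*(delta SST)."""
    co2 = co2_start
    trajectory = [co2]
    for e, dsst in zip(emissions_ppm, delta_sst):
        co2 += e - K1 * (co2 - CO2_EQ) + K2 * dsst
        trajectory.append(co2)
    return trajectory

# Placeholder example: 50 years of constant 2 PPM/yr potential emissions and
# no SST change, starting from the 1958 Mauna Loa value of 315 PPM.
traj = run_model(315.0, [2.0] * 50, [0.0] * 50)
```

With constant emissions and no temperature change, the trajectory relaxes toward an equilibrium of 285 + 2/0.0215, about 378 PPM, which is why the K1 term acts as a brake on accumulation.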

The graph titled “Annual Increase in CO2” compares the measured and calculated year-on-year changes along with the potential increase from fossil fuels.

FitzpatrickGraph1

The graph titled “Correlation: Model Increase vs. Mauna Loa Increase” shows that the model does a decent job of capturing the year-on-year temperature driven change in atmospheric CO2.

FitzpatrickGraph2

I suspect that if the model used monthly data and the 6-month lag between SST changes and CO2 changes that Dr. Spencer used, then the model fit would be better.

The graph titled “Measured CO2 versus Ocean Uptake Model” shows the final result of the calculation.

FitzpatrickGraph3

The evolution of CO2 in the atmosphere calculated by the model between 1958 and 2008 is reasonably close to the Mauna Loa record.  The model suggests that about 2.15 PPM equivalent of emitted CO2 is currently being absorbed, or about half the total emissions.
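The quoted absorption figure is just the fitted uptake term evaluated at a recent concentration; the ~385 PPM used below is my assumption for a late-2000s value:

```python
# Check of the quoted ~2.15 PPM/yr uptake at an assumed ~385 PPM concentration.
K1, CO2_EQ = 0.0215, 285.0
uptake_ppm_per_yr = K1 * (385.0 - CO2_EQ)
# 0.0215 * 100 = 2.15 PPM/yr, about half of the ~4 PPM/yr potential emissions.
```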

My only objective is to show that the CO2 released by human activities, combined with slow ocean absorption/neutralization and sea surface temperature variation, is broadly consistent with the measured historical trend in atmospheric CO2, including the effect of changing average SST on short term variation in the rate of CO2 increase.  Temperature changes in ocean surface waters cause shifts of a few PPM up and down in the rate of increase, but surface temperature changes do not explain 80% to 90% of the increase in atmospheric CO2 since 1958, as suggested in Dr. Spencer’s May 11 post.  Because of its relatively high pH, high buffering capacity, enormous mass, and slow circulation, the ocean is, and will be for a very long time, a significant net sink for atmospheric CO2.

With a bit of luck, continuing flat-to-falling average surface temperatures and ocean heat content will discredit the model predictions before too much economic damage is done.


188 thoughts on “A look at human CO2 emissions -vs- ocean absorption”

  1. When you say “The model suggests that about 2.15 PPM equivalent of emitted CO2 is currently being absorbed, or about half the total emissions.” do you mean half of all anthropogenic co2 emission or all co2 emission? Please clarify “total emission”, I assume you mean “total anthropogenic emission”.

    Interesting post, thanks

  2. A few weeks ago I posted these plots showing that temperature could be related to CO2. I was hauled over the coals for plotting two rising variables against each other, just as Steve Fitzpatrick does here. My objective was to show that temperature and CO2 could be linked.

    I also derived an expression for temperature rise vs CO2 and plotted hadcrut3-(curve fit temp) vs time, showing little error other than noise:

    On a more recent thread there has been discussion about the annual variation in CO2

    Firstly, the NH dominates the world CO2 change (Antarctica has a CO2 level 6 months out of phase with the NH). Barrow and Resolute have the least filtered changes (sharpest dips), so the July/August dip must be caused near these latitudes.
    Plot of 7 locations from CDIAC data:

    Note the sharpness of this summer dip.

    My speculation has been to question what can change the CO2 level by this amplitude, about 17 ppm in 360 ppm.
    I have likened it to a NH vacuum cleaner sucking CO2 and then suddenly being switched to blow. Vegetation would be a slow progressive change as spring moves northward. CO2 in the ocean would again be a slow progression. Is the sudden change in ocean ice perhaps the cause of the increase? This could cause the sucking from April to August, but it should not get switched off until the ice minimum in September. So what starts suddenly blowing CO2 in August?

    The slopes into and out of this dip are very similar: into the dip is -3.6ppm/month and out of the dip is 3.1ppm/month.

    Another interesting point is that it is generally agreed that SSTs increased from 1975 to 2008, but looking at “hourly” plots of CO2 for 3 years (randomly chosen to be near the start, in the middle, and at the end of the records, but I have not checked the rest!), this turnover from -ve slope to +ve slope occurs within about +-1 week of the same day (assuming 365.25 days per year). Wouldn’t one expect a progressive +ve/-ve change?
    The hourly data plot:

  3. Do you have an opinion about the cause of the fine-structure within-year wriggles that are on graphs commonly seen from Barrow, Mauna Loa and the South Pole? They are often attributed to vegetation growth in the Northern Hemisphere, but I have trouble seeing this effect translated to the South Pole.

  4. PS.
    The point of my post is:
    – what causes this dip?
    – The dip exceeds the annual change in CO2 by almost an order of magnitude, so a small change in the suck/blow machine could have a significant effect on the averaged CO2 level
    – Can oceans really cause the dip?
    – the propagation of the blip is fast: it appears in Christmas Island and Mauna Loa (SH) data with a delay of 1 month from Barrow (NH).

  5. So much for tipping points and runaway warming.
    The Earth is just as capable of sequestering CO2 as it has been over billions of years.
    From the looks of the model, it’s more than up to the task.
    Hopefully, the flat-to-lowering surface temps will act like ice water in the faces of a prominent few…before they commit us to economic hara-kiri.
    I can hear the snickers and guffaws as the rest of the world watches the West knife itself over Polly-want-a-CO2-cracker models.

  6. The Earth has evolved over geologic times to become a carbon-eating/energy-storing mechanism, having stored far more than is expelled/released in a cyclic manner. Life has something to do with it.
    Some lifeforms get their thrills yelling “Fire” on a crowded planet.

  7. “With a bit of luck, continuing flat-to-falling average surface temperatures and ocean heat content will discredit the model predictions before too much economic damage is done.”

    Nice to see a bit of optimism here.

    And that’s surely the nub of the whole situation, isn’t it? Tackle what Robert D. Brinsmead calls “carbophobia” (http://www.bobbrinsmead.com/E_Vindication_of_Carbon.html) and the rest must follow i.e. the general public starting coming to its senses.

    I have long been convinced that carbophobia will bring down Obama. I know many people on this blog will be delighted with such an eventuality – I won’t.

  8. A potentially important difference between terrestrial and aquatic (marine-dominated) systems should be noted. If my memory serves me, photosynthesis on land (at least at the large scale with implications for the carbon cycle) is strongly attenuated when day temperature is below 6 deg C. Marine photosynthesis is much less temperature dependent.

    Cassanders
    In Cod we trust

  9. John Wright (02:56:40) :

    “US Energy Secretary Steven Chu says the US will not be able to cut greenhouse emissions as much as it should due to domestic political opposition.”

    “Environmentalists said Prof Chu, a Nobel physicist, should be guided by science not politics.”

    And yet “Prof Chu told BBC News he feared the world might be heading towards a tipping point on climate change.”

    http://news.bbc.co.uk/1/hi/sci/tech/8061929.stm

    I regret that the BBC report is written by that biased twit Roger Harrabin, described as Environment analyst, BBC News. Read more about the man at http://bishophill.squarespace.com/display/Search?searchQuery=roger+harrabin&moduleId=1282578&moduleFilter=&categoryFilter=&startAt=0

    The BBC report is only worth reading for confirmation that “one compromise would be approving new coal-fired power plants without obliging them to capture and store their carbon. The UK government has made this a stipulation for new coal plants but Prof Chu declined to explain why the US government would not follow suit.”

    It’s a start.

  10. bill (02:18:26) :

    I would have thought the dip is well aligned with phytoplankton blooms in the higher latitudes in cold, nutrient rich, water during the NH summer.

    Perhaps the real question is why there isn’t an even bigger dip during the southern summer, given the size of the Pacific.

  11. OT:
    When can we expect the release of David Archibald’s “Solar Cycle 24”?
    And are we officially into sunspot cycle 24 as of now?

  12. Perry Debell (03:50:14) :
    John Wright (02:56:40) :

    “US Energy Secretary Steven Chu says the US will not be able to cut greenhouse emissions as much as it should due to domestic political opposition.”

    “Environmentalists said Prof Chu, a Nobel physicist, should be guided by science not politics.”

    I never wrote that,

    John Wright

  13. HappyDayz (04:07:10) :
    I would have thought the dip is well aligned with phytoplankton blooms in the higher latitudes in cold, nutrient rich, water during the NH summer.

    Perhaps the real question is why there isn’t an even bigger dip during the southern summer, given the size of the Pacific.

    Peak melt occurs in September. Does the phytoplankton bloom occur before this?

    “Arrigo found that the Cape Bathurst polynya contained considerable variability, in terms of initial polynya formation and in the extent and persistence of open water, over a five year period (1998-2002). Phytoplankton blooms also varied considerably in intensity and timing. Phytoplankton are plantlike organisms that contain green chlorophyll and are a primary food source for many marine mammals and birds, are tiny organisms that are responsible for most of the photosynthetic activity in the oceans.”
    http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2008/11/20/MNV1147VJE.DTL
    http://www.nasa.gov/centers/goddard/earthandsun/arctic_changes.html
    This graphic (1998 phytoplankton bloom in the Cape Bathurst polynya region) seems to show that the bloom is centred between 21st May and 23rd June, which is not when the dip occurs.

    These refs say that the bloom is affected by temperatures. The timing of the minima is not.

  14. No me parece correcto considerar solamente el fenómeno físico-químico de solubilidad. La absorción de CO2 en realidad la realiza el fitoplancton. Si disminuye la temperatura el CO2 disuelto (alimento) es mayor, la actividad de fotosíntesis es mayor y la absorción de CO2 aumenta. A pesar de que al aumentar la temperatura hay un menor consumo del fitoplancton (absorción) y que los océanos siempre desprenden CO2 por la actividad bacteriana, nunca hay una emisión neta a la atmósfera como saldo de estos procesos.

  15. I think it is not correct to consider only the physico-chemical phenomenon of solubility. The absorption of CO2 is actually carried out by phytoplankton. If the temperature decreases, dissolved CO2 (food) is higher, photosynthetic activity is higher, and the absorption of CO2 increases. Even though with increasing temperature there is lower consumption by phytoplankton (absorption), and the oceans always give off CO2 from bacterial activity, there is never a net emission to the atmosphere as the balance of these processes.

  16. This post typifies much of what passes for scientific analysis. First make unjustified assumptions, next ignore the massive errors in your estimates , ignore that the amount of human emission is a mere 3% of the total flux exchange which pretty much invalidates the whole numerical exercise, then produce a few graphs which surround your guesswork with a pseudo-scientific veneer.

    The initial assumptions that stand out for me are:
    1. Assume a net ocean sink without giving any mechanism for it whatsoever. As temperatures have been rising since the Little Ice Age, and it’s accepted even here that rising temps should cause the waters to disgorge CO2 rather than take it up, this starting assumption is just unphysical. To get around this problem we always get some handwaving with references to deep water, circulations, or in this case “old water”, and this post is no exception. Never mind that what oceanographers actually know about ocean movements is continually confounded and wrong. Witness the number of oceanographers who still say that the Gulf Stream warms Europe and could be interrupted by the warming, while Wunsch and Seager show that this is a ridiculous myth among oceanographers which would require the planet to stop spinning to become true. There may actually be a mechanism for ocean uptake, but I’ve yet to see anyone properly describe it.

    2. Land uptake is so small it can be dispensed with altogether. This oddity seems to arrive from these other IPCC assumptions:
    a) assumption A: manmade deforestation gives a net increase of 20% of emissions. However, the planet is greening (link given in Spencer post) and deserts are shrinking, so this assumption is nonsensical.
    b) assumption B, which actually contradicts assumption A: Counting the green bits on the planet shows a 6% increase in NPP in 30 years (and let’s just ignore that it is a 25% increase in vegetation), but in fact the CO2 storage is not in the leaves. When you measure the increase in the brownery, it turns out that you find an extra 5 gigatonnes with little effort (link given in the Spencer post). How many extra carbon sinks could we find just by assuming the most logical explanation and going out to look for them?
    c) assumption C (not important for the numbers but worthy of mention as the most ridiculously speculative): While it’s certainly greening now it surely won’t be if the temperature gets much higher.
    d) Lastly, that the CO2 is actually increasing unnaturally in the first place. This of course seems utterly logical given that the planet is warming, we are emitting more, and we have several locations apparently showing this monotonic increase. But I linked already (on the Spencer thread) to a monitored desert location that should have shown an increase in line with Mauna Loa. It however showed no trend at all. Honestly, the fact that the land flux is 450 Gt, the ocean flux is 250 Gt, and our contribution is between 7 and 27 Gt (depending on whose official guesswork is used) should really give a few more people pause for reflection. We should also be wary of any data that isn’t available in raw, unadjusted form.

    Here’s an alternative hypothesis. All that we emit (between 7 and 27 Gt), being emitted pretty close to ground level is mainly soaked up quickly by plant life. Sea emissions are increasing naturally in response to temperature. Now for this hypothesis to be correct then desert locations should not see any increase in CO2 but all seaboard locations would. Easy to test: I see your Mauna Loa and I raise you one Nevada FACE experiment.

  17. I do not think anyone knows much about the physics and chemistry of gas emission and absorption in the open ocean.

  18. On point is the following:

    The Acquittal of Carbon Dioxide, by Jeffrey A. Glassman, PhD.

    http://www.rocketscientistsjournal.com/2006/10/co2_acquittal.html#more

    ABSTRACT

    Carbon dioxide in the atmosphere is the product of oceanic respiration due to the well‑known but under‑appreciated solubility pump. Carbon dioxide rises out of warm ocean waters where it is added to the atmosphere. There it is mixed with residual and accidental CO2, and circulated, to be absorbed into the sink of the cold ocean waters. Next the thermohaline circulation carries the CO2‑rich sea water deep into the ocean. A millennium later it appears at the surface in warm waters, saturated by lower pressure and higher temperature, to be exhausted back into the atmosphere.

    Throughout the past 420 millennia, comprising four interglacial periods, the Vostok record of atmospheric carbon dioxide concentration is imprinted with, and fully characterized by, the physics of the solubility of CO2 in water, along with the lag in the deep ocean circulation. Notwithstanding that carbon dioxide is a greenhouse gas, atmospheric carbon dioxide has neither caused nor amplified global temperature increases. Increased carbon dioxide has been an effect of global warming, not a cause. Technically, carbon dioxide is a lagging proxy for ocean temperatures. When global temperature, and along with it, ocean temperature rises, the physics of solubility causes atmospheric CO2 to increase. If increases in carbon dioxide, or any other greenhouse gas, could have in turn raised global temperatures, the positive feedback would have been catastrophic. While the conditions for such a catastrophe were present in the Vostok record from natural causes, the runaway event did not occur. Carbon dioxide does not accumulate in the atmosphere.

  19. It’s probably staring me in the face but where is the link to the Dr Roy paper from May 11?

  20. Steve,

    Some things about this model seem counter intuitive to me. If I understand you correctly, (emissions) are anthropogenic emissions only? If yes,

    1. Why would annual emissions be limited to anthropogenic emissions?

    Then,

    2. How could K1 not vary by temperature?
    3. What data supports the assumption of 285 ppm as a point of equilibrium in this absorption?
    4. If no one really knows the actual total emissions (and here I mean anthropogenic as well as natural), how can we determine anything?

  21. OT, but worth noting given the green hype/carbon tax bunk as of late….

    Sanyo hits world record for solar cell efficiency
    http://www.tgdaily.com/content/view/42559/135/

    Unfortunately, a whopping 23% does very little to replace current electricity generation, especially given battery limitations. It would be nice if the efficiency was greater than 50%….one can always hope, I guess!

  22. Bob Tisdale (01:30:26) :
    This link: http://data.giss.nasa.gov/gistemp/graphs/ allows you to download text data as well.

    stumpy (01:48:42) :
    ‘do you mean half of all anthropogenic co2 emission or all co2 emission? Please clarify “total emission”, I assume you mean “total anthropogenic emission”’

    Yes, that is what I mean. The overall annual exchange rate between the atmosphere and carbon sinks (mainly land plants, topsoil carbon, and ocean absorption) is much greater, on the order of 20% of the total atmospheric CO2 (~75 PPM equivalent), and presumably this process would be close to “in balance” without any increase in atmospheric CO2. Adding CO2 to the atmosphere increases plant uptake and ocean absorption and leads to a net flux from the atmosphere to these carbon sinks.

    bill (02:05:30) :

    I do not know why you would be dragged over the coals for plotting CO2 versus temperature; it seems a reasonable thing to do. However, perhaps a better plot would be Ln(CO2) versus temperature, since the radiative effect of CO2 should be a log function of concentration, not linear. That said, CO2 represents only ~55-60% of the total increase in infrared-absorbing trace gases (methane, fluorocarbons, ozone, etc. cause the rest). Methane concentration is currently falling slightly, after increasing from the 1800s through the 1990s, while fluorocarbons have been falling since the early 1990s. Perhaps a more robust plot would be increase in temperature versus the log trends in all these trace gases. From the 1950s to the mid-1990s, CO2 concentration may have been a fair stand-in for all the trace gases, but since then CO2 and the others have been going their separate ways.
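    The log dependence mentioned here can be sketched as below; note that the 5.35 W/m^2 coefficient is the commonly quoted simplified forcing expression, an assumption of mine rather than anything from this discussion:

```python
import math

# Simplified CO2 radiative forcing, dF = 5.35 * ln(C/C0) W/m^2 (a common
# approximation; the coefficient is my assumption, not from this thread).
def co2_forcing(c_ppm, c0_ppm=285.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

# A doubling of CO2 gives about 3.7 W/m^2 with this expression, which is why
# Ln(CO2), not CO2 itself, is the natural variable to plot against temperature.
```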

  23. Don’t forget Dr. Spencer’s purpose was to show that a simpler model fit the data as well as or better than the popular one. He did just that; I didn’t see him claiming that this was all there was to it. His model fits the paleo record and fits the current data as well as or better than any of the popular AGW models.

  24. JamesG –

    The author has assumed that rising ocean temperatures have the effect of releasing CO2 – no one disagrees with this and he in fact does make that assumption (it’s his K2 parameter).

    The author has further assumed that if there is a break in equilibrium (ocean and atmospheric concentrations are not matched correcting for the rest of the physics) then there will be a net transfer one way or the other. This is his K1 parameter.

    These seem like infinitely reasonable assumptions to me. If there is another effect he should have included, please mention it. This is only a model, after all. The difference between what this author has published and the vast climate community is that they take their models MUCH more seriously than the author at least seems to.

    Assuming the model has SOME validity, the fit itself would have resulted in the correct signs for his constants and rough order-of-magnitude correctness. If the model has no validity, neither the signs nor the magnitude have any meaning.

    I agree with you and others that I’d like to see more than a little bit more transparency in the calculation, however. What are the assumptions that go into “(Annual Emissions)” for instance. If THAT were broken up between “natural” and human emissions (which would then also require uptake mechanisms on both land and sea to be specifically modeled) then I think this would be something more than just an interesting exercise. But, to be completely fair, he has placed essentially the same level of sophistication in his model as the Spencer model he’s responding to.

  25. Frank (04:47:36) and Alberto (05:13:48)

    Of course plankton consume CO2, just like land plants. The increase in CO2 uptake by land plants is documented; I am not familiar with the effect of increased dissolved CO2 on the rate of plankton uptake, but it could be that the rate also increases. However, independent of plant effects, there is a large (and well documented) physico-chemical absorption of CO2 by the ocean that is driven by the large temperature differences between the tropics and high latitudes. I purposely ignored the plant effects to simplify the analysis, but a more complete analysis would have to take these into account.

    The point of the post was not to show a perfect model, but to note that the measured change in atmospheric concentration of CO2 is consistent with addition of CO2 from human activities, combined with partial removal of this extra CO2 at a rate proportional to the increase in atmospheric concentration, plus a sea-surface-temperature-driven variation. It does not matter if the slow removal is by physico-chemical absorption by the ocean, or by plant absorption, or both. So long as the total increase in removal rate is approximately proportional to the increase in CO2, the concentration of CO2 in the atmosphere ought to evolve roughly as the model shows, which is also how the measured concentration has evolved.

  26. JamesG has summed up a lot of my ideas on the flaws in this model, but I would just like to point out that the graphs presented here illustrate how well you can ‘tune’ a model. The correlation between observed and model-predicted CO2 levels is excellent – because that is how the model was built.

    What this illustrates is – in my opinion – not a scientific argument but a legal one. In a scientific argument a theory (or model) is developed in such a way that it can be falsified – that is tested to see if it holds up. In a legal approach, evidence is selected/presented in such a way as to support a theory (or model).

    This is a fundamental issue with today’s society that goes beyond AGW, but is most tellingly illustrated by it.

    An economist who uses data to build their model is accused of ‘data-mining’ and will have trouble getting papers published, on the basis that the assumptions which have to be built in to make the model fit the data are made with existing knowledge of the desired outcome. This may sound counter-intuitive (and sounded plain silly to me when I first came across it), but given that there are no experimental data (only observations), it makes sense, since there is no other way of falsifying your model.

    The CO2/climate change debate is the perfect example of why this matters – you start by saying that CO2 drives global temperature and then collect supporting evidence and build models with this in-built assumption. No falsification is possible in this process.

    I’m not the first to point this out – heck, people all over the place have been pointing out that this isn’t science for years now, but models such as the one presented here still seem to fall into this trap. Developing a model which shows great correlation to poorly understood data just shows your skill in model tuning and reveals no underlying scientific causation.

    This does not mean it has no value, but its value lies in whether it can be used to predict future CO2 levels – in the same way that weather forecasting uses models of past meteorological data to forecast weather patterns. However, weather forecasting has a long history of success (or failure) to point to in validation of its models. Come back in, say, 10 years and see if your predicted values still match observations. Then it may be accepted as a useful model for predicting future CO2 levels, but even then it may not get us closer to knowing how CO2 and temperature are linked.

  27. Wondering Aloud (06:17:37) :

    ‘Don’t forget Dr. Spencers purpose was to show that a simpler model fit the data as well or better than the popular one. He did just that, I didn’t see him claiming that this was all there was to it. His model does fit the paleo record and fits the current data as well or better than any of the popular AGW models.’

    If you take a look at Dr. Spencer’s curves, you will see that there are large differences between the hind-cast and measured CO2 concentration for 1958 to the present. The simple model I presented hind-casts the measured concentration reasonably well (no big differences in the shape of the curves), and also does a fair job of hind-casting the annual (temperature-driven) variation in the year-on-year increases. In other words, the model I presented is a much better fit to the Mauna Loa data.

  28. I have also been following the jet stream. It would not be a reason for these odd slices in the nsidc picture. Neither would the Arctic temps. Arctic current temps also have not changed much. There is no reason I can see for the increased melt rate. This could be satellite issues again.

  29. Why is it that people rarely discuss CO2 uptake by whatever biochemical pathway it is that lays down limestone? It seems that there has been much more CaCO3 laid down through geologic time than coal and petroleum.

  30. John Wright (02:56:40) :

    I have long been convinced that carbophobia will bring down Obama. I know many people on this blog will be delighted with such an eventuality – I won’t.

    I wish what you say were true, however, I believe that we are headed down a bad road and there is no way this bus is going to turn around. At the moment, the Dem’s are hell bent on passing “Climate Legislation” at all costs. They are well on their way to fast tracking this suicidal bill, and I see nothing stopping it. “We The People” can scream all we want, but our politicians don’t give a rats ass what we say or think. I am sorry to sound so pessimistic, but I am becoming very scared at this point.

  31. There is much more to this than simple CO2 pumping and vegetation use to match up the curves. The quadrillions of tons of limestone represent the fixing of a not-insignificant amount of the CO2 that dissolves in the ocean and settles to the bottom, and the points made by

    JamesG (05:18:37) :

    on the greater importance of vegetation CO2 demand (I like his doable experiment with deserts) than is accorded by the IPCC make the curve fit of the model highly artifactual. Isn’t there a bit of tautology in the derivation of the model – using results to generate the equation?

  32. Hank (06:57:10) :

    “Why is it that people rarely discuss CO2 uptake by whatever biochemical pathway it is that lays down limestone? It seems that there has been much more CaCO3 laid down through geologic time than coal and petroleum.”

    Egad Hank! You slipped your post in while I was awaiting moderation!!

  33. I am not a fluids chemist, but I read some time ago that decreased ocean CO2 uptake due to rising SSTs since 1800 would only account for 7 ppm of the increase in atmospheric concentration. In considering ocean uptake, and the rate of ocean uptake, we also have to account for partial pressures and, as noted by some above, ocean biota. I have recently read of another biota phenomenon discovered in the mid-Atlantic that takes up CO2 and produces carbonates at a rate that surprised the researchers, but I didn’t keep the URL. Because the IPCC had to assume very slow mixing between surface and deep ocean, in order to get their long CO2 half life in the atmosphere, I did some digging about 3 years ago, with what I think are very interesting results. Anthony, how would I send you this info so you can decide whether or not to post it? It’s 3 pages or so long, so I don’t want to just add it here. Murray

  34. Rainwater has the ability to wash CO2 out of the atmosphere, and may be an important mechanism in the ocean – atmosphere exchange.

    From a global water balance, I found an estimate of total global rainfall that came to about 100,000 Gt/yr (as H2O) over land, and 400,000 Gt/yr over the oceans. CO2 is fairly soluble in water, and the colder the water, the more CO2 it can hold. The CO2 in ‘natural’ rainwater lowers the pH from a neutral 7 to around 5.7. Since the observed pH of rainwater is similar to the calculated pH at CO2 saturation, that suggests that rainwater, if not saturated with CO2, is fairly close to holding as much as it can. At saturation, the dissolved CO2 in water would be 0.23 g CO2/100 g water at 10 deg C; at 15 deg C, dissolved CO2 is 0.20 g/100 g; and at 20 deg C, the dissolved CO2 would be 0.18 g/100 g.

    Global average air temperature is around 15 deg C, but that varies widely over the planet, and of course the temperature of rainwater in the top of a cloud may not be the same as an average ground temperature. Just to get a rough idea of the magnitude of CO2 in rainwater, I did a couple of calculations using the CO2 solubility at 20 deg C (warmer, holds less CO2) and at 10 deg C (colder, holds more CO2). Rainfall over land works out to 49 to 68 Gt/yr (as carbon, so we can compare to the atmospheric CO2 estimates). And for the ocean rainfall, it comes to 183 Gt/yr to 252 Gt/yr (as carbon). The land rainfall could end up ‘stored’ in a river or lake, go into the soil or plants, or could splat on a parking lot and re-release the CO2 to the air when the water evaporates. My guess is the ocean rainfall would most likely be incorporated into the ocean, and the CO2 with it (there is far more CO2 dissolved in the oceans than ‘free’ in the atmosphere).

    So how much is that compared to CO2 estimates in the atmosphere? For CO2 in the atmosphere (around 380 ppm at the time I did the calculation), it was estimated that the atmosphere contained about 750 Gt (Gt = gigatons, CO2 expressed as the equivalent amount of carbon). The amount of CO2 that is cycled into and back out of the atmosphere is estimated to be on the order of 150 to 220 Gt per year, due to a variety of natural (volcanism, forest fires, vegetation decay, ocean offgassing, etc.) and man-made (burning organic fuels, etc.) sources. The man-made CO2 totals come to about 6-8 Gt (as carbon) each year, which is only about 3 to 5% of the total emitted CO2. Atmospheric CO2 is also removed via plant growth, absorption into the oceans, etc. The net increase in atmospheric CO2 appears to be around 1.5 ppm per year, which is about 3 Gt/yr (as carbon).

    Note that these estimated rainfall CO2 values are about the same size as the total carbon cycle estimates of 150 to 220 Gt per year. This does not necessarily mean that the estimates (theirs or mine) are incorrect. The rainfall CO2 may show up in other parts of the global estimates, such as an overlap of the land rainfall CO2 ending up in the plant growth CO2 estimates. Similarly, we know that as ocean water warms it releases CO2, and since we don’t have very good measurements of that released CO2, it could be that rainfall is just returning some of that unmeasured CO2 to the ocean, for a ‘net’ value much lower than my calculation. And of course, my estimates include assumptions about CO2 saturation in rainwater, and about the temperature of rainwater. Snow or other frozen forms of precipitation may not hold much, if any, CO2. Still, even if my estimates are 10 times too high, there is potentially still a lot of CO2 in rain water.

    The trick may not be ‘removal of CO2 from the atmosphere’, but whether it is captured in some way vs recycled back into the atmosphere.
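    The back-of-envelope arithmetic above is easy to reproduce. A minimal sketch, using the rainfall totals and solubilities quoted in this comment (which are themselves rough estimates, not measurements):

```python
# Rough check of the rainfall-CO2 estimate above.
# The rainfall totals and solubilities are the figures quoted in the
# comment, not measured values.
RAIN_LAND_GT = 100_000    # Gt H2O/yr over land (quoted estimate)
RAIN_OCEAN_GT = 400_000   # Gt H2O/yr over the oceans (quoted estimate)
SOL_10C = 0.23 / 100      # g CO2 per g H2O at 10 deg C (saturation)
SOL_20C = 0.18 / 100      # g CO2 per g H2O at 20 deg C (saturation)
C_PER_CO2 = 12.0 / 44.0   # convert mass of CO2 to mass of carbon

def rain_carbon(rain_gt, solubility):
    """Gt of carbon per year carried by rain saturated with CO2."""
    return rain_gt * solubility * C_PER_CO2

land_lo = rain_carbon(RAIN_LAND_GT, SOL_20C)    # warm rain, less CO2
land_hi = rain_carbon(RAIN_LAND_GT, SOL_10C)    # cold rain, more CO2
ocean_lo = rain_carbon(RAIN_OCEAN_GT, SOL_20C)
ocean_hi = rain_carbon(RAIN_OCEAN_GT, SOL_10C)
print(f"Land rain:  {land_lo:.0f} to {land_hi:.0f} Gt C/yr")
print(f"Ocean rain: {ocean_lo:.0f} to {ocean_hi:.0f} Gt C/yr")
```

    This gives roughly 49 to 63 Gt C/yr over land and 196 to 251 Gt C/yr over the oceans, close to the ranges quoted above; the small differences presumably come from the solubility values used at intermediate temperatures.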

  35. We do know that only about half of our emissions are ending up semi-permanently in the atmosphere.

    Actually, it’s roughly between 30% (when it is colder) and 70% (when it is warmer).

    Either the oceans, vegetation, or soils are absorbing the difference, or some combination of all three is.

    Dr. Roy Spencer presents the ocean evidence and …

    bill presents some nice charts showing high latitude NH vegetation/phytoplankton/soils are contributing in some manner.

    The satellite data doesn’t really help since its evidence doesn’t have much of a logical pattern.

    So let’s nail this down since the pro-AGW scientists seem to want to ignore this issue.

  36. “Annual change = (Annual emissions) – K1 * (CO2 – 285) + K2 * (delta SST)”

    Where does the 285 come from? Ice cores?

    The lowest value in the Mauna Loa data set is ~315. Plant Stomatal Index (SI) studies suggest that the ice core CO2 data are wrong. The SI data show that Holocene warm periods routinely experienced 330ppm to more than 360ppm CO2…

    Science 18 June 1999:
    Vol. 284. no. 5422, pp. 1971 – 1973
    DOI: 10.1126/science.284.5422.1971

    Century-Scale Shifts in Early Holocene Atmospheric CO2 Concentration

    Friederike Wagner, 1 Sjoerd J. P. Bohncke, 2 David L. Dilcher, 3 Wolfram M. Kürschner, 1 Bas van Geel, 4 Henk Visscher 1

    1 Laboratory of Palaeobotany and Palynology, Utrecht University, Budapestlaan 4, 3584 CD Utrecht, Netherlands.
    2 Netherlands Centre for Geo-ecological Research, Faculty of Earth Sciences, Free University, De Boelelaan 1085, 1081 HV Amsterdam, Netherlands.
    3 Paleobotany Laboratory, Florida Museum of Natural History, University of Florida, Gainesville, FL 32611, USA.
    4 Netherlands Centre for Geo-ecological Research, Department of Palynology and Paleo/Actuo-ecology, University of Amsterdam, Kruislaan 318, 1098 SM Amsterdam, Netherlands.

    The inverse relation between atmospheric carbon dioxide concentration and stomatal frequency in tree leaves provides an accurate method for detecting and quantifying century-scale carbon dioxide fluctuations. Stomatal frequency signatures of fossil birch leaves reflect an abrupt carbon dioxide increase at the beginning of the Holocene. A succeeding carbon dioxide decline matches the Preboreal Oscillation, a 150-year cooling pulse that occurred about 300 years after the onset of the Holocene. In contrast to conventional ice core estimates of 270 to 280 parts per million by volume (ppmv), the stomatal frequency signal suggests that early Holocene carbon dioxide concentrations were well above 300 ppmv.

    […]

    About three centuries after the initiation of Holocene warming, a δ18O minimum in Greenland ice reflects a short cooling event (Fig. 1B). A 150-year climate deterioration has also been deduced from numerous terrestrial and marine biorecords (21). Although exact dating of the non-ice core records is hampered by the occurrence of 14C-age plateaus during the early Holocene, multiproxy analysis suggests that all reported events collectively reflect the Preboreal Oscillation (3). In the Borchert section, the reconstructed CO2 values drop from ~340 to ~300 ppmv at this time (Fig. 1A). A relation between CO2 dynamics and the Preboreal Oscillation had been suspected on the basis of an abrupt rise in the early Holocene 14C curve inferred from German pine dendrochronology (3, 22), but this could not be confirmed by ice core data.

    Our results falsify the concept of relatively stabilized Holocene CO2 concentrations of 270 to 280 ppmv until the industrial revolution. SI-based CO2 reconstructions may even suggest that, during the early Holocene, atmospheric CO2 concentrations that were >300 ppmv could have been the rule rather than the exception (23).

    Wagner et al. present an atmospheric CO2 reconstruction from SI data showing a rapid increase of atmospheric CO2 from ~260 ppm to >340 ppm from about 9,930 BP to about 9,685 BP, and then a rapid drop in CO2 from ~340 ppm to ~300 ppm three centuries later.

    All of these sharp and fast variations in CO2 concentration occurred without an ounce of fossil fuel being burned… apart from a few campfires. There aren’t a whole lot of reasonable explanations other than changes in oceanic uptake caused by temperature changes.

    Plant SI data can be empirically demonstrated to be quantitatively accurate… The ice core data cannot be empirically tested for quantitative accuracy.

  37. JamesG (05:18:37)

    Wow, hard to know where to start.

    It is clear that changes in ocean temperature cause net changes in atmospheric CO2, and I do not dispute that. It is also clear that the ocean surface has warmed (slightly) since 1958, but the vast majority (the deep ocean) has not changed much in temperature. The sinking of cold (and CO2-rich) water at high latitudes requires that the surface temperature fall enough that the water is slightly more dense than the water that lies below, generating deep convection. So the deep ocean only receives an influx of very cold water, and does not respond much to surface warming. The circulation is very slow, and water that up-wells at low latitudes sank at high latitudes centuries ago. This water warms and releases CO2, of course, but the quantity released for any specific increase in temperature depends on the CO2 concentration in the atmosphere when that water was last in contact with the air… which was centuries ago. The relatively small increase in ocean surface temperatures (less than a degree over the last century) is not sufficient to cause that much extra CO2 to be released (I mean, you can get some deep ocean water, warm it up, and see how much CO2 comes out as a function of temperature!). Finally, were the rise in atmospheric CO2 caused mainly by an increase in ocean temperature, then the rising ocean temperatures between 1900 and 1944 (about 0.29C) should have caused a larger increase in atmospheric CO2 than was observed, comparable to the increase in CO2 measured between 1958 and the present, which was accompanied by an ocean temperature increase of about 0.36C.

    With regard to assumptions, please see my comment to Frank and Alberto (I only sent the English reply, perhaps one of them would translate my reply to Spanish).

    With regard to local variation in CO2 concentration, there can be some local variation for sure (the air in a growing corn field is lower in CO2 than at Mauna Loa, and air down-wind of metropolitan areas will be higher in CO2 than at Mauna Loa). The increases in CO2 at Mauna Loa are almost perfectly tracked by measured increases at the south pole, although with a slight offset (about 5 ppm, as I remember). The annual northern hemisphere oscillation (mostly from summer plant growth in the northern hemisphere, but probably with some other seasonal effects) is almost completely absent in CO2 measurements taken at high latitudes in the southern hemisphere.

    Just for the record, I think that global warming predictions are grossly exaggerated by climate models, and I am quite certain they will ultimately be refuted by data, since they grow ever further from the data. But I also think that the data linking atmospheric increases in CO2 (and fluorocarbons, and methane, and ozone) to human activities is very clear.

  38. @Steve Fitzpatrick,

    I just realized my post sounded like criticism of your work. I didn’t mean it that way. My point was that maybe the “285” in the equation should be a bit higher.

  39. JamesG (05:18:37) :

    ‘on the greater importance of vegetation CO2 demand (I like his doable experiment with deserts) than is accorded by IPCC makes the curve fit of the model highly artifactual. Isn’t there a bit of tautology in the derivation of the model – using results to generate the equation?’

    With regard to ‘a bit of tautology’: Perhaps, but certainly no more than in Dr. Spencer’s original post. The question is, are Dr. Spencer’s hindcast curves close to the measured CO2 record? I think a fair answer is no. I developed the model only to show that a very simple absorption plus temperature model would track the measured CO2 pretty well, certainly much better than Dr. Spencer’s temperature-only model, and account for the short term variation just as well.

    The curve fit may be less ‘artifactual’ than it might at first seem. Any increase in CO2 uptake that is proportional to the increase in CO2 above the starting point would generate pretty much the same curve (and match the historical record reasonably well). The increase in uptake could be dominated by by faster growth of plants, by increased dissolution/neutralization by the ocean, or a combination; it won’t change the final curve much. Both increased absorption/neutralization and increased plant growth rate ought to be almost linear with respect to increased CO2, at least over a modest range of CO2 increase.

  40. JamesG (05:18:37) :

    Thanks for the link on the Nevada facility
    http://www.unlv.edu/Climate_Change_Research/NDFF/co2_treatment.htm

    It is remarkable
    1) that they do not give the data in the standard Keeling curve.
    2) that the meaning is hidden between fumigations; what are fumigations?

    And no data after 2007.

    here is the south pole CO2 http://cdiac.esd.ornl.gov/trends/co2/graphics/South_Pole_CO2.jpg

    Have a look at Beck’s page :
    http://www.biokurs.de/treibhaus/180CO2_supp.htm

    in contrast

  41. Steve wrote:
    [An increase in average ocean surface temperature will cause more CO2 to be emitted from surface water…]

    Steve, this basis for your argument is flawed because the relationship describing the solubility of CO2 in water versus temperature is exponential, not linear.

    For this reason, a 2 degree change of SST in low-latitude warm waters, say, does not represent the same change in solubility as a 2 degree change in SST in high-latitude cold waters. It is quite possible for the average global SST to remain constant over time, while the absorption/desorption of CO2 from the ocean changes dramatically due to temperature-driven changes in solubility. The correlations you have described are merely coincidence, because the mechanism you have described is incorrect, or incomplete.

    A better way to approach this would be to use gridded SST data to calculate the solubility change for those specific temperatures in each grid cell over time, and then averaged globally, and compare that to atmospheric CO2. That would be interesting.

    You also wrote:
    [With a bit of luck, continuing flat-to-falling average surface temperatures and ocean heat content will discredit the model predictions before too much economic damage is done.]
    I assume here you are referring to the economic damage Obama and the Democrat-controlled congress intend to inflict upon America with cap-and-trade, and I whole-heartedly agree. A cooling climate will be the best way of eliminating this foolish AGW nonsense.

  42. RobP (06:48:14) :
    ‘Come back in, say, 10 years and see if your predicted values still match observations. Then it may be accepted as a useful model for predicting future CO2 levels, but even then it may not get us closer to knowing how CO2 and temperature are linked.’

    If I am lucky enough to be around in 10 years, I will do just that!

    But please note that I was not trying to make a perfect prediction. I was only trying to point out that a temperature-only driven increase in atmospheric CO2 (as proposed by Dr. Spencer on May 11) is not at all consistent with the 1958 to 2008 record, while a rather simple model (where CO2 emissions and concentration-dependent uptake are primarily responsible) is more consistent with the record, and also explains just as well the temperature-driven variation in CO2 increase that Dr. Spencer noted. The test of any model (my very simple model or complex climate models) is how well it predicts the future, not how well it predicts the past, since hind-casts can always be optimized by ‘curve-fitting’. My great frustration with climate modelers is how they ALWAYS argue that we can never have a legitimate test of the model, saying something like: “It would take 100 years, and by then it will be too late, since the ocean will have flooded New York and Washington!”. This kind of nonsense from the modelers ought to bring loud laughter all around, not new laws limiting carbon releases.

  43. Dave Middleton (07:55:17) :
    @Steve Fitzpatrick,

    I just realized my post sounded like criticism of your work. I didn’t mean it that way. My point was that maybe the “285” in the equation should be a bit higher.

    What are your thoughts on Spencer’s assumption of equilibrium between ocean and atmosphere for CO2 at an SST anomaly of -0.2ºC?

  44. Pamela Gray (06:41:30) & Pamela Gray (06:57:00)

    Yes, looks like NSIDC is having issues with the satellite data feed again. Or rather, more severe issues of late than normal. At least in the imagery they’re carrying for the Arctic (so far); the Antarctic seems to be bearing up so far. Contrast their image with the latest from IARC-JAXA:

    http://www.ijis.iarc.uaf.edu/cgi-bin/seaice-monitor.cgi?lang=e

  45. A bit OT, but related to CO2 policy and perceived changes in atmospheric CO2: If CO2 sequestration underground for fossil fuel electrical plants becomes a worldwide solution, I think it reasonable to predict that, given the enormous volumes involved, there would be a growing potential for horrendous accidents from a breach, or even a more modest leak that could fill up a valley and suffocate all non-plant residents of the valley. It could also push out formation brines, natural gas, hydrogen sulphide and petroleum.

  46. As I suggested after the Dr. Spencer post, I think there is real potential to understand the solubility pump a lot better if we disaggregate the net absorption term into a term for absorption in the cold oceans and outgassing in the tropical oceans. Averaging global SST adds a lot of noise, because the solubility of gases in water is exponential, following the vapor pressure. Warming the surface of the tropics from 28 to 29C will release only a third as much CO2 as raising the northern ocean temps from 0 to 1C.

    How would I determine the SST anomalies for specific latitude bands? I would start with a roughly equal area split, neglecting the polar caps, so about 35S to 35N.

    JamesG (05:18:37) :

    You are just sniping about all we don’t know about ocean circulation. There is nothing unphysical about the tropics outgassing more CO2 due to temperature (but slightly less due to higher atmospheric CO2) while the cold ocean waters absorb about the same as before, due to the offsetting effect of slightly higher temps with slightly higher CO2. In fact, simple vapor pressure effects from the recent 0.5C anomaly would increase outgassing from 90 GT to 94 GT and depress absorption from 92 GT to 88 GT. In the short term, the 40% increase in atmospheric CO2 would increase absorption by 40% and decrease outgassing by 30%. Clearly, the scale of the ocean “breathing” overwhelms the anthropogenic additions, and it is only because the upper ocean is saturated that half the CO2 is still waiting to be pumped down the sink.

    The sharp changes in CO2 at northern latitudes looks like a change in the wind pattern, which would change the source and “history” of that air cell. Can anyone comment on the typical seasonal wind patterns of those locations?

    The simplification that the ocean circulation is a conveyor and water going down the conveyor all over the high latitudes moves in lockstep and pops up exactly 800 years later is very misleading. The flow is wildly turbulent and there are certainly flows that take longer and shorter. But I think this is a useful number to keep in mind as an average residence time. It would indicate that we have about 800 years before we would see a significant step up in the outgassing from our current anthropogenic emissions.
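    The ‘a third as much’ figure above can be sanity-checked with a van ’t Hoff temperature dependence for Henry’s-law solubility; the 2400 K constant below is a commonly quoted approximation for CO2 in water, so treat this as a sketch rather than a calibrated result:

```python
import math

VANT_HOFF_K = 2400.0  # approximate van 't Hoff constant for CO2 in water, kelvin
T_REF = 298.15        # 25 deg C reference temperature, kelvin

def rel_solubility(t_celsius):
    """Henry's-law solubility of CO2, relative to the 25 deg C reference."""
    t_k = t_celsius + 273.15
    return math.exp(VANT_HOFF_K * (1.0 / t_k - 1.0 / T_REF))

# CO2 released per degree of warming = the drop in solubility over that degree.
tropics_release = rel_solubility(28) - rel_solubility(29)
polar_release = rel_solubility(0) - rel_solubility(1)
ratio = tropics_release / polar_release
print(f"Warming 28->29 C releases {ratio:.2f} times as much CO2 as warming 0->1 C")
```

    With this approximation the ratio comes out near 0.37, consistent with the ‘about a third’ estimate: cold water holds more than twice as much CO2 to begin with, so the same fractional sensitivity yields a much larger absolute release per degree at high latitudes.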

  47. If I may interject a completely unscientific observation:

    The CO2 graphs produced at Mauna Loa et al. have always looked a little too “sweet” to me. I work with data every day, and you just don’t see anything that smooth and regular unless something has been done to the data to average or smooth it and remove outliers.

    I’m not claiming malfeasance. I just don’t see any indication of volcanic eruptions, Mid 70’s US gas crisis, introduction of catalytic converters, $4 gas, etc. Nothing.

    That says to me that all of the things listed above are completely swamped out by whatever is really causing the increase.

    Someone please enlighten this little pea sized brain.

  48. Joseph (08:08:07) :

    Steve wrote:
    [An increase in average ocean surface temperature will cause more CO2 to be emitted from surface water…]

    ‘Steve, this basis for your argument is flawed because the relationship describing the solubility of CO2 in water versus temperature is exponential, not linear.’

    There’s a bit more going on than just dissolution of CO2 in water. The ocean is heavily buffered by dissolved carbonate and bicarbonate ions, which is evident in the typical pH of ~8.1 for the ocean. A solution of CO2 in water, at the concentration which is in equilibrium with the atmosphere (like rain water), has a pH of ~5.6. When more CO2 dissolves in ocean water, some of it combines with water to form hydrogen ions and bicarbonate ions. The hydrogen ions react with carbonate ions to form more bicarbonate. By this ‘neutralization’ reaction, CO2 gas is chemically absorbed, and is no longer just ‘in solution’. The overall concentration of inorganic carbon in seawater (some dissolved CO2, but mostly bicarbonate and carbonate ions) is many times higher than the equilibrium concentration of dissolved CO2 that would be present in un-buffered water. Of course, the equilibrium between dissolved CO2, bicarbonate ion, and carbonate ion is in fact temperature sensitive (which is why the ocean carbon cycle includes temperature-driven absorption and desorption), but not nearly as temperature sensitive as the straight dissolution of CO2 in water.
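    The ~5.6 pH quoted for un-buffered, CO2-equilibrated water can be reproduced from Henry’s law and the first dissociation of carbonic acid; the constants below are approximate textbook values at 25 deg C:

```python
import math

KH = 3.3e-2    # Henry's law constant for CO2, mol/(L*atm), roughly 25 deg C
KA1 = 4.45e-7  # first dissociation constant of carbonic acid, roughly 25 deg C
PCO2 = 380e-6  # partial pressure of CO2 in the atmosphere, atm (380 ppm)

co2_aq = KH * PCO2                # dissolved CO2 in equilibrium with the air
h_plus = math.sqrt(KA1 * co2_aq)  # [H+] from CO2(aq) + H2O <-> H+ + HCO3-
ph = -math.log10(h_plus)
print(f"pH of un-buffered water in equilibrium with {PCO2 * 1e6:.0f} ppm CO2: {ph:.2f}")
```

    This lands near the ~5.6 quoted above. The buffered ocean, by contrast, sits near pH 8.1 because the carbonate/bicarbonate system consumes most of the added hydrogen ions.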

  49. Joseph (08:08:07) :
    ‘A better way to approach this would be to use gridded SST data to calculate the solubility change for those specific temperatures in each grid cell over time, and then averaged globally, and compare that to atmospheric CO2. That would be interesting.’

    A lot of this sort of thing has already been done, starting with Keeling in the late 1950’s (yes, the Keeling who started collecting Mauna Loa CO2 data, and discovered the annual oscillation in atmospheric CO2). Keeling (and others) cruised around the world collecting and measuring the equilibrium CO2 level in ocean water. They measured the concentration of CO2 in air that would be in equilibrium with the water the ship was passing through, as well as the concentration of CO2 in the air surrounding the ship at the same time; in other words, they measured whether the ocean was absorbing or desorbing CO2. They found that the air and the ocean are often far from equilibrium: at low latitudes the ocean is more likely to be adding CO2 to the atmosphere, while at high latitudes it is mostly absorbing CO2.

  50. Dave Middleton (07:34:58) :

    The pre-industrial CO2 concentration could certainly have been a bit higher than 285 ppm; I used 285 because it seems to be the most commonly accepted value.

    I am no expert in the response of plant stomata to changing CO2, but my guess is that there may be some uncertainty in stomata-generated CO2 concentrations. However, since the ‘age’ of the deep ocean water that is currently upwelling is likely in the range of ~1000 years, I do not see that inferred CO2 concentrations from >2000 years ago are important. Do the stomata data suggest CO2 concentrations much above 285 ppm during the last 1000 or 2000 years?

  51. superDBA (08:49:21) :
    ‘Someone please enlighten this little pea sized brain.’

    Unlikely that people who read this blog have a pea-sized brain…

    The overall flux of CO2 between ocean and atmosphere is very large compared to human emissions, so it is not so easy to see the (rather smaller) changes from human emissions. You can see variations in the Mauna Loa trend more clearly by looking at the trend with the seasonal signal removed (as best they can). There you can see some obvious effects, like the early-1990s volcanic eruption and the strong 1998 El Nino. I have never heard anyone suggest that the Mauna Loa record has been fudged; after all, Keeling started in 1958, long before “global warming”, and during a period of slowly falling temperatures.
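    The ‘trend with the seasonal signal removed’ is essentially a 12-month running mean; a minimal sketch on synthetic data (the numbers below are illustrative, not real Mauna Loa values):

```python
# Minimal sketch of removing the seasonal cycle with a 12-month running mean.
# The series is synthetic (linear trend plus a sinusoidal seasonal cycle),
# not real Mauna Loa data.
import math

months = range(120)  # ten years of monthly values
co2 = [315.0 + 0.125 * m + 3.0 * math.sin(2 * math.pi * m / 12) for m in months]

def running_mean_12(series):
    """12-month mean; averaging over a full cycle cancels the seasonal term."""
    return [sum(series[i:i + 12]) / 12 for i in range(len(series) - 11)]

trend = running_mean_12(co2)
# The +/-3 ppm seasonal swing averages out, leaving the ~1.5 ppm/yr trend.
print(f"first 12-month mean {trend[0]:.2f} ppm, last {trend[-1]:.2f} ppm")
```

    The same idea, applied to the real record, is what makes the Pinatubo-era and El Nino wiggles visible against the smooth seasonal cycle.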

  52. Steve Fitzpatrick (09:21:18) :

    Dave Middleton (07:34:58) :

    The pre-industrial CO2 concentration could certainly have been a bit higher than 285 ppm; I used 285 because it seems to be the most commonly accepted value.

    I am no expert in the response of plant stomata to changing CO2, but my guess is that there may be some uncertainty in stomata-generated CO2 concentrations. However, since the ‘age’ of the deep ocean water that is currently upwelling is likely in the range of ~1000 years, I do not see that inferred CO2 concentrations from >2000 years ago are important. Do the stomata data suggest CO2 concentrations much above 285 ppm during the last 1000 or 2000 years?

    I’m fairly certain that there is a paper that shows 300+ CO2 from plant SI within the last few hundred years. I’ll see if I can dig it up…I think it was by Kouwenberg.

  53. Considering an equal temperature change in the three systems (air, oceans, and clay soil), the heat capacity of air is insignificant compared with the heat capacities of the oceans and dry clay ground:

    Air ρC = 1,200 J/m^3 K
    Oceans ρC = 4,190,000 J/m^3 K
    Dry Clay Ground ρC = 1,420,000 J/m^3 K

    Specific Heat Capacity is not the same as Heat Capacity.

  54. layne Blanchard (06:02:17) :

    ‘Steve,

    Some things about this model seem counter intuitive to me. If I understand you correctly, (emissions) are anthropogenic emissions only? If yes,

    1. Why would annual emissions be limited to anthropogenic emissions?
    Then,
    2. How could K1 not vary by temperature?
    3. What data supports the assumption of 285 ppm as a point of equilibrium in this absorption?
    4. If no one really knows the actual total emissions (and here I mean anthropogenic as well as natural), how can we determine anything?’

    1. I assumed that at equilibrium (in the absence of human emissions) there would be equal (and very large) emissions and absorption from natural sources. The much smaller human emissions are assumed to be adding to the concentration of CO2 in excess of the natural background.

    2. By separating the temperature effect (K2) from the concentration driven effect (K1) I am implicitly saying that the changes in absorption/desorption due to temperature changes can be treated as a linear temperature effect that is superimposed on the background of “normal” ocean absorption and desorption. This would be far from correct if there were big differences in average SST, but we are only talking about a small (0.36C) change over 50 years in average SST, so the approximation should not be too bad.

    3. The 285 number is a widely accepted value for pre-industrial CO2, and is supported by ice core data from Greenland and Antarctica. Could it have been a little higher or lower in 1850? Sure, but I don’t know of any other good values.

    4. I should have said that nobody knows the *exact* emissions. People make reasonable estimates based on coal production records, oil production records, natural gas production records, forest clearing, and production estimates for Portland cement. Different estimates vary a bit, but all agree approximately.
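    For readers who want to play with the bookkeeping, here is an illustrative integration of the simple model discussed above; K1, K2, the emissions ramp, and the SST series are placeholder values chosen for the sketch, not the fitted ones:

```python
# Illustrative integration of the simple model:
#   annual change = emissions - K1*(CO2 - 285) + K2*(delta SST)
# K1, K2, the emissions ramp, and the SST change are placeholder values
# chosen for illustration, not the fitted coefficients.
K1 = 0.02        # uptake proportional to excess over pre-industrial, 1/yr (assumed)
K2 = 4.0         # ppm of CO2 per deg C of SST change (assumed)
BASELINE = 285.0 # assumed pre-industrial concentration, ppm

def step(co2, emissions_ppm, delta_sst):
    """Advance atmospheric CO2 (ppm) by one model year."""
    return co2 + emissions_ppm - K1 * (co2 - BASELINE) + K2 * delta_sst

co2 = 315.0  # start near the 1958 Mauna Loa value
for year in range(50):
    emissions = 1.0 + 0.03 * year  # slowly growing emissions, ppm/yr (assumed)
    delta_sst = 0.36 / 50          # the 0.36 C rise spread evenly over 50 years
    co2 = step(co2, emissions, delta_sst)
print(f"CO2 after 50 model years: {co2:.0f} ppm")
```

    With these placeholder values the model lands in the mid-350s ppm after 50 years; the real exercise is to fit K1 and K2 so the hind-cast tracks the Mauna Loa record.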

  55. Steve Fitzpatrick (08:20:21) :

    “The test of any model (my very simple model or complex climate models) is how well it predicts the future, not how well it predicts the past, since hind-casts can always be optimized by ‘curve-fitting’. My great frustration with climate modelers is how they ALWAYS argue that we can never have a legitimate test of the model, saying something like: “It would take 100 years, and by then it will be too late, since the ocean will have flooded New York and Washington!”. This kind of nonsense from the modelers ought to bring loud laughter all around, not new laws limiting carbon releases.”

    That is sort of the point that I am trying to make – since the models can’t be falsified (in any meaningful way, short of wait and see), they should be developed independently of the data, or they are simply a case of curve-fitting.

    As I mentioned, you simply wouldn’t get this published in an economics journal. My wife tried by randomly selecting half of her data to build the model and using the other half to test it (quite clever I thought, but I am biased) and it was still turned down.

    Not sure quite what the answer is, but what depresses me the most is the way that I feel we are losing the whole scientific process of theory and refutation. I’m not too worried about AGW alarmism anymore (although that is thanks to the recession, not enlightenment), but the use of “science” for political ends and the concomitant use of legal approaches (selection of evidence and manipulation of significance) is a trend I can’t see being reversed.

    Wow, that sounds really depressing – time for a beer!

  56. For example, if ∆T = 0.8°C in each system, the heat stored (Qsto) is:

    Air = 956.16 J

    Oceans = 3.352 x 10^6 J

    Dry Clay Ground = 1.136 x 10^6 J

    I don’t know how it is that some people think that air transfers energy to the surface, when the surface is always in a higher energy-density state than the air.
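    The Qsto figures are just Q = ρC × ΔT per cubic metre; a quick check with the heat capacities quoted earlier (note that 1,200 × 0.8 gives 960 J, slightly above the 956.16 J quoted, which suggests a marginally lower air ρC was used):

```python
# Heat stored per cubic metre for a 0.8 C temperature change, Q = rho*C*dT,
# using the volumetric heat capacities quoted in the earlier comment.
RHO_C = {               # volumetric heat capacity, J/(m^3 K)
    "air": 1_200,
    "ocean": 4_190_000,
    "dry clay ground": 1_420_000,
}
DT = 0.8  # temperature change, K

for name, rho_c in RHO_C.items():
    q = rho_c * DT  # joules stored per cubic metre
    print(f"{name}: {q:,.0f} J/m^3")
```

    The ocean and clay figures match the comment (3.352 × 10^6 J and 1.136 × 10^6 J); the air figure differs only at the fourth significant digit.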

  57. I find this stuff fascinating. I have often pondered that the oceans’ absorption rate, coupled with the potential absorption in the huge expanses of Canadian and Soviet forests, would completely dominate any output by man. Has anyone ever calculated the enormous capability of a large forest for CO2 uptake? Not just current intake, but potential intake. A large thank you to Steve Fitzpatrick. It is obvious that he enjoys his work. What could be better than that?

  58. Steve Fitzpatrick (08:20:21) :
    But please note that I was not trying to make a perfect prediction, I was only trying to point out that a temperature-only driven increase in atmospheric CO2 (as proposed by Dr. Spencer on May 11) is not at all consistent with the 1958 to 2008 record, while a rather simple model (where CO2 emissions and concentration dependent uptake are primarily responsible) is more consistent with the record, and also explains just as well the temperature driven variation in CO2 increase that Dr. Spencer noted.

    Spencer’s model is hampered by his initial assumption that the ocean was outgassing for the duration of the record, which necessarily meant a low coefficient for the retention of anthropogenic CO2. The more reasonable model that Steve used, which allows variable absorption to modulate the retention of anthropogenic CO2, gives a better fit.

  59. On this annual-average CO2-temp graph, it looks like there is a close relationship between temperature and, after a delay, CO2.
    http://www.woodfortrees.org/plot/esrl-co2/isolate:60/mean:12/scale:0.2/plot/hadcrut3vgl/isolate:60/mean:12/from:1958
    The 12-month mean hides the seasonal variation in the CO2; if you remove the mean the pattern is harder to see, although it is still visible in the peaks.
    http://www.woodfortrees.org/plot/esrl-co2/isolate:60/scale:0.2/plot/hadcrut3vgl/isolate:60/mean:12/from:1958
    Not surprisingly, using the prime number 11 for the mean pulls the calculation out of sync with the annual cycle and the variations become more apparent, although the two curves still resemble each other.
    http://www.woodfortrees.org/plot/esrl-co2/isolate:55/scale:0.2/mean:11/plot/hadcrut3vgl/isolate:55/from:1958/mean:11
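The mean:12 versus mean:11 behaviour in those woodfortrees plots is a general property of moving averages, easy to check with a toy series (a synthetic sine wave, not the actual CO2 data):

```python
import math

def moving_mean(xs, w):
    """Simple trailing moving average with window w."""
    return [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]

# A pure annual cycle sampled monthly (period = 12 samples):
annual = [math.sin(2 * math.pi * m / 12) for m in range(120)]

r12 = moving_mean(annual, 12)  # window spans one full cycle: it cancels exactly
r11 = moving_mean(annual, 11)  # window out of sync: residual wiggle leaks through

print(max(abs(v) for v in r12), max(abs(v) for v in r11))
```

A 12-month window always averages exactly one complete cycle, so the seasonal signal cancels; an 11-month window never does, which is why the mean:11 plots keep a visible annual ripple.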

  61. Well the last graph of measured CO2 versus the here constructed model, is a whole lot better fit than the curves that Dr Spencer had in his recent essay of another simple model.

    But then Steve forced that fit, with the choice of his K1 and K2 parameters; so is it really a model or is it just a curve fitting exercise ?

    If it is a real model, the values of K1 and K2 would need to be justified by some physical or chemical rationale based on the processes going on.

    Scientists have sadly discovered, to their great embarrassment, that you can fit a model to actual physical measurement data to any level of accuracy you want, simply by ****ing around with numbers.

    Most physicists (maybe I should say many) are familiar with the great “Fine Structure Constant debacle” involving Sir Arthur Eddington, who “proved” in a paper that the value of alpha, the fine structure constant, was indeed exactly 1/136; the best experimental measurements were close to that number. Sadly, the experimentalists slowly moved alpha closer to 1/137, whereupon Eddington wrote another paper in which he “proved” that indeed alpha was exactly 1/137 (I am sure Anna V is familiar with this story).
    Well, that got the community riled up at the good professor, who was then dubbed Professor “Adding One”. And that was just the first fine structure constant caper.
    In the 1960s somebody wrote a paper in which he proved that the inverse of the fine structure constant was in fact the fourth root of a product of pi to some small integer power times several other small integers to small integer powers; something like:
    1/alpha = (pi^a * b^c * d^e * f^g * h^i)^0.25

    Now alpha^-1 currently has the value 137.0359895 with an error of 0.045 ppm.

    The expression above, where (a) through (i) are all small integers (<20), agreed with this value to better than 2/3 of the standard deviation of the best then-available experimental result.

    So clearly the model had to be correct, because you couldn’t get that close by just ****ing around with numbers; so everybody embraced this wonderful paper, even though nowhere in the paper was there any informational input from the physical universe. No measured or observed quantity went into the derivation of this formula; it was purely mathematical, yet it purported to compute one of the most fundamental constants of atomic physics. Man, what a pickle to be in.
    About a month later a computer geek had programmed his computer to seek all such values of (a) through (i), and find any that gave an answer within the standard deviation of the experimental value of alpha^-1. He came up with a list of about eight such combinations, one of which was within about 20-25% of the standard deviation; proving that indeed you can hit a number to better than one part in 10^8 by simply ****ing around with numbers.
    Subsequently a geometer offered a theory of a multidimensional spherical shell in a lattice of points defined by the above expression; grid points in the lattice that fell within the shell were solutions to the fitting problem, the radii of the multidimensional shell being alpha +/- one standard deviation. Thus he located all of the possible values of the function that fit the criterion.

    Well, if you bought into this farce, even though no one could find a link to the universe, you ended up with an egg-on-the-face problem. This whole mess took place in Applied Optics or some similar publication, as I recall.

    So fitting data to a "model" is a snap. Fitting the model to the reality of the universe is another matter.

    So be careful what you buy into.

    George

  62. RobP (06:48:14) :
    “Developing a model which shows great correlation to poorly understood data just shows your skill in model tuning and reveals no underlying scientific causation.” and later
    “Come back in, say, 10 years and see if your predicted values still match observations.”

    We don’t need to wait 10 years, because we can use different time periods for calibrating the model and for testing its predictions. For example, take the years 1959-1979 to calibrate K1 and K2 and then observe the model’s behavior over 1980-2009. Using several different time periods, to avoid cherry-picking the data, gives more confidence in the findings.
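That holdout test could be sketched as follows. I am paraphrasing the two-parameter model as dC = emissions − K1·(C − 285) + K2·ΔSST, and the data below are synthetic placeholders, not the real Mauna Loa, emissions, or SST records:

```python
import random

def simulate(k1, k2, emissions, dsst, c0=315.0, c_eq=285.0):
    """Step the two-parameter model forward: dC = E - K1*(C - Ceq) + K2*dSST."""
    c, out = c0, []
    for e, t in zip(emissions, dsst):
        c += e - k1 * (c - c_eq) + k2 * t
        out.append(c)
    return out

random.seed(0)
years = 51  # stand-in for 1959-2009
emissions = [1.0 + 0.05 * i for i in range(years)]        # made-up rising emissions (ppmv/yr)
dsst = [random.uniform(-0.1, 0.1) for _ in range(years)]  # made-up SST anomalies (degC)
observed = simulate(0.02, 3.0, emissions, dsst)           # pretend observations

def sse(k1, k2, lo, hi):
    """Sum of squared errors against the 'observed' series over [lo, hi)."""
    model = simulate(k1, k2, emissions, dsst)
    return sum((m - o) ** 2 for m, o in zip(model[lo:hi], observed[lo:hi]))

# Calibrate K1, K2 on the first 21 "years" only, then score the holdout period.
best = min(((a / 1000, b / 10) for a in range(51) for b in range(51)),
           key=lambda p: sse(p[0], p[1], 0, 21))
print("fitted:", best, "holdout SSE:", sse(best[0], best[1], 21, years))
```

Because the "observations" here were generated by the same model, the grid search recovers the true constants and the holdout error is essentially zero; with real data, a small holdout error is the evidence the calibration generalizes.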

  63. This controversial issue was never a scientific issue but a political one, so it does not matter whether CO2 increases or decreases: THEY will tell the mobs that IT IS THE CAUSE OF CLIMATE CHANGE (no longer of “global warming”, for evident reasons). So it does not matter how profound the arguments against it are; they have the money, they have the power, they are the ones in control… and, for sure, one day they will knock at our doors…

  64. “Do the stomata data suggest CO2 concentration much above 285 PPM during the last 1000 or 2000 years?”

    Yes they do…

    Atmospheric CO2 fluctuations during the last millennium reconstructed by stomatal frequency analysis of Tsuga heterophylla needles

    Lenny Kouwenberg, Rike Wagner, Wolfram Kürschner and Henk Visscher
    Palaeoecology, Laboratory of Palaeobotany and Palynology, Utrecht University, Budapestlaan 4, 3584 CD Utrecht, Netherlands

    A stomatal frequency record based on buried Tsuga heterophylla needles reveals significant centennial-scale atmospheric CO2 fluctuations during the last millennium. The record includes four CO2 minima of 260–275 ppmv (ca. A.D. 860 and A.D. 1150, and less prominently, ca. A.D. 1600 and 1800). Alternating CO2 maxima of 300–320 ppmv are present at A.D. 1000, A.D. 1300, and ca. A.D. 1700. These CO2 fluctuations parallel global terrestrial air temperature changes, as well as oceanic surface temperature fluctuations in the North Atlantic. The results obtained in this study corroborate the notion of a continuous coupling of the preindustrial atmospheric CO2 regime and climate.
    http://geology.geoscienceworld.org/cgi/content/abstract/33/1/33

    Warm periods in A.D. 1000, 1300 and 1700 had CO2 levels of 300–320 ppm. So the “equilibrium” value for the post-Little Ice Age period should be 300–320 ppm… if the SI data are more accurate than the ice core data.

  65. David Ball

    Don’t tell anyone else or my sceptic credentials will be ruined, but some years ago I ‘bought’ some Amazon rain forest in order to prevent it being logged, with all that entails (there are arguments both ways).

    A little later I was able to confound someone with impeccable green credentials (i.e. highly sanctimonious) who was briefly highly impressed with me, because he thought my reason for this action was to do my bit for reducing CO2.

    Positively aglow with new-found fervour, I checked out my contribution to this great AGW cause and found that, as with so many things in this settled science, matters weren’t as clear-cut as they first seemed.

    See this article;

    http://planetearth.nerc.ac.uk/news/story.aspx?id=351

    If you can, try to get hold of the original paper in the US journal Science. It seems that the Amazon (and presumably other great forests) may be a source after all. Or perhaps it is a sink. Or… Well, you decide.

    Tonyb

  66. Ok so it looks like we’ve all made up our minds about climate change.

    I can’t seem to find any mention of ocean acidification though; surely that’s what we should also be concerned with if we keep pumping CO2 out at the current rate?

  67. Steve Fitzpatrick (10:41:45):

    layne Blanchard (06:02:17) :

    ‘Steve,

    Some things about this model seem counter intuitive to me. If I understand you correctly, (emissions) are anthropogenic emissions only? If yes,

    1. Why would annual emissions be limited to anthropogenic emissions?
    Then,
    2. How could K1 not vary by temperature?
    3. What data supports the assumption of 285 ppm as a point of equilibrium in this absorption?

    No data support 285 ppmV as the point of equilibrium between emission and absorption of CO2. Biochemists put the point of equilibrium as high as 600 ppmV, where photosynthesis increases linearly with temperature independently of O2 concentration. On the other hand, sand (whether saturated or not) actually absorbs six times more CO2 than the biosphere.

  68. Dave Middleton (12:02:58) :
    ‘ The record includes four CO2 minima of 260–275 ppmv (ca. A.D. 860 and A.D. 1150, and less prominently, ca. A.D. 1600 and 1800). Alternating CO2 maxima of 300–320 ppmv are present at A.D. 1000, A.D. 1300, and ca. A.D. 1700.’

    I do not know how to average the several minima and maxima noted, nor do I know how to compare the relative accuracy/consistency of the stomata-derived CO2 concentration with the trapped air in ice cores. One thing is certain: the ice core record shows no similar variation in CO2 on the time scales above, but it does show substantial temperature variations during that same period. What to make of this? I just do not know.

  69. George E. Smith (11:19:48) :

    ‘If it is a real model, the values of K1 and K2 would need to be justified by some physical or chemical rationale based on the processes going on.’

    Yes, I agree that a truly robust model would do exactly that, but I was only trying to make a model similar in form to what Dr. Spencer offered. In the case of my model, I did not even try to include the increase in rate of plant growth, because I expected that both ocean absorption and increased plant growth rate would be roughly proportional to the rise in CO2, so that with optimized constants the “ocean only” model ought to fit the historical data pretty well.

    That being said, I do want to point out a couple of things:

    1. Nothing I used as the basis of the model is inconsistent with well established physical behaviors. It is not like I was pulling new physical principles out of the air. There was no combination of roots of prime numbers, no 10 adjustable parameters for a 10th order polynomial curve, or anything similar, and I was not arm-waving about unmeasured or difficult to measure parameters like changing aerosol concentrations.

    2. Even a “curve-fitted model” that doesn’t accurately fit the historical data is certainly incorrect (e.g. Dr. Spencer’s model), while a curve-fitted model that does hind-cast well at least has the possibility of being a fair representation of reality… if it can also make accurate predictions, not just accurate hind-casts.

    3. A reasonable expectation is that when you add something to an existing quantity of that same something (in this case CO2 added to CO2 already in the atmosphere) the total quantity will increase. What motivated me to create my simple model was that Dr. Spencer’s model and comments seemed to me quite disconnected from this very reasonable expectation.

    4. Climate models all have the same “curve fit” problem unless they are carefully tested against future data. The climate modelers will say that their adjustable parameters are, as you suggest, all “justified by some physical or chemical rationale based on the processes going on”. But if you search for such justifications hard enough you will probably find them. This does not eliminate the need to make accurate predictions.

  70. John Doe (11:47:58) :

    ‘We don’t need to wait for 10 years, because we can use different time periods for calibrating the model and using it for predictions. Let’s take for example years 1959-1979 to calibrate K1 and K2 and then observe its behavior in 1980-2009. Using several time periods to avoid cherry picking the data, gives more confidence on the findings.’

    Go right ahead, I gave you all the data sources. But remember that the model leaves out plant influences, which are likely to be similar to ocean absorption, but not identical. The influence of increases in plant growth should start at zero, and gradually increase, while the influence of ocean absorption would start at a value above zero and increase from there. In other words, the absorption of the ocean should be linearly proportional to the increase above an assumed “equilibrium” ocean CO2 content (I used 285 PPM, but some folks in this thread claim this value may not be right), while plant growth increases will likely be proportional to the increase from the start of the model (that is, increases above 315 PPM CO2 in 1958), so you would have three parameters to work with, not two. Would this make the model hind-cast more accurately? Probably. Would it make the model forecast more accurately? Maybe. Would I take the time to do it? Nope.

    A very accurate model was not the point. I wanted only to show that it is altogether reasonable for CO2 emissions to increase CO2 concentrations in the atmosphere, and that the measured increases over the past 50 years are not terribly out of line with what you would expect based on the emissions and the capacity of the oceans to absorb CO2.
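The three-parameter variant Steve describes might be stepped like this. The K values below are illustrative placeholders, not fitted constants, and the functional form is my reading of his description:

```python
def step(c, emissions, dsst, k1, k2, k3):
    """One year of the three-parameter model: ocean, SST, and plant terms."""
    ocean_uptake = k1 * (c - 285.0)          # proportional to excess over the 285 ppm equilibrium
    plant_uptake = k3 * max(c - 315.0, 0.0)  # starts at zero at the 1958 level of 315 ppm
    return c + emissions - ocean_uptake - plant_uptake + k2 * dsst

c = 315.0  # ppm in 1958
for _ in range(10):
    c = step(c, emissions=1.0, dsst=0.0, k1=0.02, k2=3.0, k3=0.01)
print(round(c, 1))
```

The ocean term is active from the very first step (since 315 > 285) while the plant term only grows as CO2 rises above 315 ppm, which captures the asymmetry between the two sinks that Steve points out.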

  71. Steve Fitzpatrick (10:08:12) :

    Hummm,
    1. Given: We can see fluctuations due to natural causes (volcanoes) and CO2 has not “reached a dangerous tipping point” and temperatures have not run away
    2. And: We cannot see major fluctuations in man’s emissions in the graphs.

    Doesn’t that mean that man’s contributions to CO2 must be much, much greater to seriously affect the earth’s temperature? I’m talking observations here, not anyone’s crystal ball. /end rhetorical comment/

    Hey! Let’s pump the CO2 output as high as we can! Then we can find this mythical “tipping point”, then turn it back down.

  72. “Daily” (probably unmodified) data plot for Barrow:

    Here’s one to ponder. Using the Barrow daily data, estimating the day of the annual minimum for each year, and then plotting the number of days from 1st January to that minimum against date (together with SST and land temperature) gives this plot:

    There seems little correlation between yearly temperature patterns and days until minimum.
    But as temperature increases, the number of days decreases (approximately 0.2 days per year; the x-axis is in Microsoft date format).

    So if temp peaks are not driving the time for minimum – what is?

  73. “My only objective is to show that the CO2 released by human activities, combined with slow ocean absorption/neutralization and sea surface temperature variation, is broadly consistent with the measured historical trend in atmospheric CO2, including the effect of changing average SST on short term variation in the rate of CO2 increase.”

    Is that “slow absorption/neutralization” the famous residence time of “50-200 years” hypothesized by Houghton and the IPCC, contrary to more than 40 studies performed by 6 different methods during the 30 years prior to the “global warming era”, almost all of which found a residence time of 5-8 years, and almost none a residence time longer than 10-12 years?

  74. Ivan,

    The IPCC’s TAR said:

    CO2 naturally cycles rapidly among the atmosphere, oceans and land. However, the removal of the CO2 perturbation added by human activities from the atmosphere takes far longer.

    Savor that statement for a moment. The IPCC is saying that a molecule of anthropogenic CO2 acts differently than a molecule of “natural” CO2.

    More evidence that the IPCC’s assessment reports are edited by political appointees, not by unbiased scientists.

  75. James G (5:18:37) wrote:

    “…what oceanographers actually know about ocean movements is continually confounded and wrong. Witness the number of oceanographers who still say that the gulf stream warms Europe and it could be interrupted by the warming…”

    The Gulf Stream, which is a wind-driven current, does indeed warm northern Europe. Physical oceanographers, including Prof. Wunsch, never claimed it to be a feature of the thermohaline (density-driven) circulation that “climate scientists” mistakenly presumed. The latter should not be confused with the former, no matter where they may be employed.

  76. Ivan:

    The residence time you are talking about (5-8 years) is correct for a specified group of CO2 molecules, and many of the best studies were based on the rate of removal of radioactive C14-CO2 from the atmosphere that was formed by atomic bomb tests in the 1950s and 1960s.

    There is a very large exchange of CO2 each year between the biosphere, the ocean and the atmosphere; about ~20% of the total mass of CO2 in the atmosphere gets swapped each year. Since the pool of CO2 in the biosphere and ocean is vastly larger than in the atmosphere (hundreds of times), a “tagged” CO2 molecule (e.g. radioactive C14-tagged CO2) is lost from the atmosphere quickly, and becomes very dilute… essentially lost… in the very large stores of CO2 in the biosphere and ocean, but almost all of it is replaced with CO2 that was earlier in the ocean or the biosphere. This is why you can’t use radiocarbon dating for biological samples that formed after the start of the atomic age; the C14 from bomb tests (in the form of CO2) got rapidly picked up by the biosphere and incorporated into plants and animals… messing up the expected C14 concentrations that had historically come only from cosmic ray flux.

    This 5 to 8 year “lifetime” of a CO2 molecule does not mean that ~20% of the total CO2 is removed each year (or even that 20% of an excess of CO2 will be removed). It just means that ~20% of the CO2 in the atmosphere is swapped each year. This exchange of CO2 molecules takes place at the same high rate even when the concentration of CO2 in the atmosphere is found by measurement to be at a constant level.
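Steve’s swap-versus-removal distinction can be illustrated with a one-box toy model. The numbers are round figures taken from this thread, not measurements:

```python
atm_total = 800.0  # GtC in the atmosphere (round figure)
swap = 0.20        # fraction exchanged with the ocean/biosphere each year
tagged = 10.0      # GtC of "tagged" CO2 (e.g. bomb C14) initially airborne

history = [tagged]
for year in range(30):
    # Outgoing air carries the current tagged fraction; the replacement CO2
    # comes from pools hundreds of times larger, so it is effectively untagged.
    tagged *= (1.0 - swap)
    history.append(tagged)

print("tagged CO2 after 5 years:", round(history[5], 2), "GtC")
print("atmospheric total throughout:", atm_total, "GtC (unchanged)")
```

Half the tagged slug is gone within about 3-4 years even though total atmospheric CO2 never moves, which is exactly the distinction between the residence time of individual molecules and the removal of an excess.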

  77. “”” Steve Fitzpatrick (13:52:21) :

    George E. Smith (11:19:48) :

    ‘If it is a real model, the values of K1 and K2 would need to be justified by some physical or chemical rationale based on the processes going on.’ “””

    Steve, don’t get me wrong; you explained quite well what you were trying to do and I understand that. I should say I was not overjoyed at Roy’s model either; but he also pointed out it was somewhat of a talking point model.

    The point I would make is simply that all of these statistical processes that seem to be a core part of climatology can serve very well to demonstrate “correlation”; but what is being sought is not correlation but causation; and correlation does not prove causation.

    George

  78. “”” Joanna Lumley (12:34:51) :

    Ok so it looks like we’ve all made up our minds about climate change.

    I can’t seem to find any mention of ocean acidification though, surely thats what we should also be concerned with if we keep pumping co2 out at the current rate? “””

    Well Joanna, I’ll let you worry about ocean acidification; so far nobody has shown that a small change in pH will lead to any catastrophe. There are so many buffer processes going on in the ocean relating to CO2, HCO3- and CO3-- (not to mention calcium carbonate in shells) that I think the world has far bigger problems to deal with than a small change in ocean pH.

  79. Joanna Lumley (12:34:51) :

    ‘I can’t seem to find any mention of ocean acidification though, surely thats what we should also be concerned with if we keep pumping co2 out at the current rate?’

    When CO2 is absorbed/neutralized by the ocean, the pH does go down a bit. Measurements of ocean water indicate a very small drop over the last 50-80 years; maybe 0.03 or 0.05 pH units, which is not a lot. The global warming terrorists (I think that is the best name) always point to studies about how corals will all die and clam shells will dissolve. Well, like most AGW stories, the reality is a bit different. Corals actually do very well until the ocean pH corresponds to ~500 PPM in the atmosphere; the growth rate for some types actually increases a bit (their symbiotic algae love higher CO2). Above 500 PPM coral growth rates do appear to slow somewhat, depending on type, but the corals do not die. Much bigger dangers to corals are that people like to dive on them too much and cause damage, and that some are threatened by pollution from land-based sources.

    Some types of free-swimming microorganisms that fix CaCO3 in the form of aragonite shells (rather than calcite) would do badly in very cold water (near the poles) if the CO2 level reached >900 PPM. The clams are safe, even at 1300 PPM.

  80. Joanna Lumley (12:34:51) :

    Ok so it looks like we’ve all made up our minds about climate change.

    I can’t seem to find any mention of ocean acidification though, surely thats what we should also be concerned with if we keep pumping co2 out at the current rate?

    The general consensus is that oceanic pH has declined by 0.1 since about 1750. A pH decline of 0.1 is well within the natural variability of oceanic acidity… Oceanic pH varies from 7.8 to 8.3 on a roughly 50-year cycle (Pelejero et al., “Preindustrial to Modern Interdecadal Variability in Coral Reef pH”, Science 309(5744): 2204-2207, 30 September 2005). It has varied within that range over almost the entire Phanerozoic Eon (~600 million years), irrespective of atmospheric CO2 concentrations.

    During the Permian period much of present-day west Texas was covered by the Capitan Reef. Atmospheric CO2 was 2,000 to 4,000 ppm and oceanic pH varied between 7.8 and 8.3… just like it does today.

  81. Here is a good example of how the green agenda contradicts and sabotages itself:

    For many years, greenies have been pushing industry to make packaging and products from plastics that degrade and rot in landfills (for instance, McDonald’s styrofoam packaging).

    The problem with this is that plastics generally are manufactured from petroleum, so making them degrade and rot means that the carbon in them would over time be released to the atmosphere as CO2, increasing CO2 emissions. If the plastics remained inert, they would thus sequester the carbon long term and reduce CO2 emissions.

    Meanwhile, the alternative movement for plastic recycling has become a victim of its own success, with plastics turned in at recycling centers stacking up and overflowing capacity due to oversupply and a lack of demand for recycled plastic stock. Some of this is because too much plastic-product manufacturing capacity has moved offshore to Mexico and China, too far away for cost-effective use of recycled materials. (If Obama does anything with stimulus money, it should be to bring that industry back to the US.)

  82. Steve Fitzpatrick (13:16:54) :

    Dave Middleton (12:02:58) :
    ‘ The record includes four CO2 minima of 260–275 ppmv (ca. A.D. 860 and A.D. 1150, and less prominently, ca. A.D. 1600 and 1800). Alternating CO2 maxima of 300–320 ppmv are present at A.D. 1000, A.D. 1300, and ca. A.D. 1700.’

    I do not know how to average the several minima and maxima noted, nor do I know how to compare the relative accuracy/consistency of the stomata-derived CO2 concentration with the trapped air in ice cores. One thing is certain: the ice core record shows no similar variation in CO2 on the time scales above, but it does show substantial temperature variations during that same period. What to make of this? I just do not know.

    There are two common ways to estimate CO2 concentrations in past atmospheres (before instrumental records began in 1959): 1) Measuring CO2 content in air bubbles trapped in ice cores and 2) measuring the density of stoma in plants. The advantage to the ice core data is that it provides a continuous record of relative CO2 changes going back 100’s of thousands of years…With a resolution ranging from annual in the shallow section to multi-decadal in the deeper section. The advantage to the stomatal data is that the relationship of the Stomatal Index and atmospheric CO2 can be empirically demonstrated.

    The problems with the ice core data are 1) the air-age vs. ice-age delta and 2) the effects of burial depth on gas concentrations.

    The age of the layers of ice can be fairly easily and accurately determined. The age of the air trapped in the ice is not so easily or accurately determined. Currently the most common method for aging the air is through the use of “firn densification models” (FDM). Firn is denser than snow but less dense than ice. As the layers of snow and ice are buried, they are compressed into firn and then ice. The depth at which the pore space in the firn closes off and traps gas can vary greatly… So the delta between the age of the ice and the age of the air can vary from as little as 30 years to more than 2,000 years.

    The EPICA C core has a delta of over 2,000 years. The pores don’t close off until a depth of 99 m, where the ice is 2,424 years old. According to the firn densification model, last year’s air is trapped at that depth in ice that was deposited over 2,000 years ago (an oversimplification).

    There are a lot of doubts about the accuracy of the FDM method. I somehow doubt that the air at a depth of 99 meters is last year’s air. Gas doesn’t tend to migrate downward through sediment…Being less dense than rock and water, it migrates upward. That’s why oil and gas are almost always a lot older than the rock formations in which they are trapped. I do realize that the contemporaneous atmosphere will permeate down into the ice…But it seems to me that at depth, there would be a mixture of air permeating downward, in situ air, and older air that had migrated upward before the ice fully “lithified”.

    The DE08 ice core has the lowest air-ice age delta of any core I have looked at: 30 years. If I plot the DE08 core using the ice age, attach it to the Mauna Loa data, and overlay that on a temperature curve… I actually get a better CO2-to-temperature correlation than with the FDM-aged CO2.

    The most recent level of ice suitable for analysis is 1939. The FDM method says that the air in the 1939 ice is from 1969. If the air in the 1939 level is actually a blend of the air from 1909-1969…it might actually be more representative of 1939 than it is of 1969. If that’s the case, we really don’t have any CO2 data from 1939 to 1959…including the sharpest cooling period of the 20th century (1945-1950).

    The ice core data might be more representative of a long-wavelength moving average of CO2 values and be a good indicator of the low-frequency component of the CO2 cycle, whereas the plant SI data capture the high-frequency component. It’s also likely that the pressure effects of burial are affecting the gas partial pressures in the ice core air bubbles.

  83. Steve,
    “The residence time you are talking about (5-8 years) is correct for a specified group of CO2 molecules, and many of the best studies were based on the rate of removal of radioactive C14-CO2 from the atmosphere that was formed by atomic bomb tests in the 1950s and 1960s.”

    Is there any experimental or observational evidence that fossil fuel CO2 has a longer residence time than 5 or 10 years, or is it only a model “fudge factor”?

    “This 5 to 8 year “lifetime” of a CO2 molecule does not mean that ~20% of the total CO2 is removed each year (or even that 20% of an excess of CO2 will be removed). It just means that ~20% of the CO2 in the atmosphere is swapped each year.”

    Do we have any proof, or plausible reason to believe, that the natural flux of about 135 gigatonnes per year is not a much more important factor in the overall CO2 increase than the 6 gigatonnes of human emissions? That is, that the tiny human contribution is not lost in the large natural noise, just as it seems plausible that the CO2 greenhouse warming signal is lost in natural climate variability?

  84. As I was travelling around, I was quite late in reply to the note of Dr. Spencer on the correlation (ocean) temperature and CO2 levels. Here Steve Fitzpatrick gives a good reaction…

    There is little doubt that temperature has an influence on CO2 levels: about 3-5 ppmv/°C for short-term variability and not more than 8-10 ppmv/°C for (very) long time variability like the MWP-LIA cooling and the glacial-interglacial-glacial transitions. See for the latter (based on all Vostok data):

    That simply means that temperature is not the driving force for the recent increase in CO2 levels: the increase in temperature since the LIA of maximum 1°C would give an increase in CO2 levels of not more than 8 ppmv. The rest of the 100+ ppmv rise is quite certainly of human origin. See:
    http://www.ferdinand-engelbeen.be/klimaat/co2_measurements.html

    If you look beyond the Mauna Loa period, the discrepancy between temperature and CO2 levels becomes even more clear:

    In the 1945-1975 period, SST cooled slightly, but CO2 levels (from high-resolution ice cores: Law Dome, 8-year resolution) increased in ratio with the emissions…

    Ice cores are reliable storages for ancient atmospheres, but their resolution fades with layer thickness, inversely with maximum time span. The most recent period of about a century was covered by three ice cores with very high resolution at Law Dome. The most recent completely closed ice there was from 1980, giving an overlap of about 20 years with the atmospheric data at the South Pole (the ice age/gas age difference is of no interest here; the gas age was measured in firn in this case). The South Pole data are within the borders of the ice core accuracy during the overlap:

    There is no sign that CO2 is lost from the ice during burial: CO2 levels in firn and in already-closed bubbles in ice at closing depth are equal, and the glacial/interglacial ratio between temperature (proxy) and CO2 levels doesn’t diminish over 800,000 years of ice layers in the deepest cores, which would happen if there were some (vertical) migration of CO2.

    At last: I had a firm discussion with an author of a similar study of stomata index data. There is a bias in the stomata data, as stomata are formed at spring CO2 levels, which are higher than average, including locally enhanced levels (from rotting vegetation of the previous year). This is more or less compensated by calibrating the SI data (resolution +/- 10 ppmv) with… ice core data of the past century. The main problem is that it is very difficult to know what happened with the local CO2 levels over the centuries (in contrast to CO2 levels at the South Pole) when local/regional CO2 sources were added/removed.

  85. Ivan (16:12:25) :

    Ivan, the year by year variability of the natural CO2 cycle is +/- 1 ppmv
    The emissions are nowadays about 8 GtC/yr or about 4 ppmv/yr
    The increase nowadays is about 2 ppmv/yr
    Thus nature is a net absorber of CO2 at a rate of about 2 ppmv/yr

    See: http://www.ferdinand-engelbeen.be/klimaat/klim_img/dco2_em.jpg

    Thus the natural variability of the CO2 cycle is a sink capacity of -2 +/- 1 ppmv/yr. Averaged over each year, nature has made no net addition of CO2 over the past 50 years.

    It doesn’t matter if 10, 100 or 1,000 GtC is circulating through the atmosphere, absorbed and released by the oceans and vegetation. All that counts is the size of the difference at the end of the cycle: negative in all cases…
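    This budget arithmetic can be sketched in a few lines; the ppmv figures are the round numbers quoted above, not measured data:

```python
# Round numbers from the comment above, in ppmv of CO2 per year.
emissions = 4.0          # human emissions: ~8 GtC/yr, roughly 4 ppmv/yr
observed_increase = 2.0  # measured rise in atmospheric CO2 per year

# Whatever the gross natural fluxes are, the *net* natural flux must
# close the budget: observed increase = emissions + natural net flux.
natural_net = observed_increase - emissions

print(natural_net)  # -2.0 ppmv/yr: nature as a whole is a net sink
```

    The sign of `natural_net` is the whole argument: as long as the observed increase is smaller than the emissions, nature cannot be the source of the rise.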

    About residence time: the residence time is governed by the exchange rate of about 150 GtC against the 800 GtC residing in the atmosphere, or about 20% per year, hence the roughly 5-year half life.

    If we were to stop emitting any fossil CO2 today, then next year we would see a drop of about 2 ppmv (4 GtC: as the pressure difference between atmospheric and oceanic CO2 hasn’t changed, the removal rate is the same as today). The year after that, the drop would be only about 1.6 ppmv (as the pressure difference is now lower), and so on. Thus the excess CO2 decrease is governed not by the 150/800 exchange ratio but by the 4/800 removal ratio, which is related to the average partial pressure difference between atmospheric and oceanic CO2. See Feely et al. for a very good explanation of the atmosphere/ocean pressure differences:
    http://www.pmel.noaa.gov/pubs/outstand/feel2331/exchange.shtml
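    A toy first-order model illustrates the decay described here. The 100 ppmv excess and the 50-year e-folding time are assumed round numbers, chosen so the first-year removal lands near the ~2 ppmv quoted above; this is a sketch of the argument, not Feely et al.’s calculation:

```python
import math

excess = 100.0  # assumed excess above oceanic equilibrium, ppmv
tau = 50.0      # assumed e-folding time, years (excess / removal rate)

# If removal is proportional to the pressure difference (i.e. to the
# excess), the excess decays exponentially once emissions stop, so each
# year's drop is slightly smaller than the last.
drops = []
for year in range(3):
    new_excess = excess * math.exp(-1.0 / tau)
    drops.append(excess - new_excess)
    excess = new_excess

print([round(d, 2) for d in drops])  # [1.98, 1.94, 1.9]
```

    The decreasing drops are the signature of a removal rate tied to the excess itself, not to the gross exchange flux.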

  86. Ivan (16:12:25) :

    ‘Is there any experimental or observational evidence that fossil fuel CO2 have longer time residence than 5 or 10 years, or it is only model “fudge” factor?’

    The CO2 from fossil fuels has the same residence time as the radioactive C14 generated by atomic bomb tests: as you said, between 5 and 10 years, no longer. What builds up is the overall level of CO2. The C12/C13 isotope ratio changes over time are completely consistent with a slowly increasing fraction of fossil-fuel-based CO2 in the atmosphere as the quantity of fossil fuels consumed each year has increased. I know that Dr. Spencer has asked why we do not see shifts in this ratio when there are shifts in the rate of CO2 increase driven by short-term ocean temperature variation (like El Nino). I have not researched this, so I can’t respond directly to Dr. Spencer’s question. However, I want to point out that if Dr. Spencer were correct (i.e., that ocean temperature increase is the main cause of increases in atmospheric CO2), then his temperature-driven model would have given a much better fit to the historical data; the fit was really quite poor. Occam’s razor says the simplest explanation is usually the right one. The fact that the simplest explanation also generates a much better fit to the data only reinforces Occam’s observation.

    ‘Do we have any proof or plausible reason to believe that natural flux of about 135 giga tonnes per year is not much more important factor in overall CO2 increase than 6 giga tonnes of human emissions, i.e. that tinny human contribution is not lost in the large natural noise, just like it seems plausible that CO2 greenhouse warming signal is lost in natural climate variability?’

    The very big natural fluxes should be close to “in balance” over any period of more than a few years. There is of course natural variation, which is directly linked to temperature changes in the surface of the ocean, driven by a number of different causes, including El Nino and major volcanic eruptions (which dim the sun and cool the ocean for a couple of years). The effect of a continuous, relatively small addition of CO2 to the large natural pool will never be evident over the noise of natural variation in the very short term, but it should cause a gradual increase over time. The entire system can only return to a more-or-less equilibrium state when the processes that remove CO2 from the atmosphere increase their rates in response to the slowly increasing concentration of CO2, until the increases in removal balance the rate of addition of CO2. Ocean absorption/neutralization is one of the natural responses to addition of CO2 to the atmosphere, and increased plant growth at higher CO2 levels is another. No doubt there are a number of other processes that remove CO2 from the atmosphere, such as increased weathering of rock due to more CO2 dissolved in rain, which also accelerate when CO2 concentration in the atmosphere rises.

  87. The “Correlation: Model Increase vs. Mauna Loa Increase” graph is shown with a linear fit, but to me it looks as though a quadratic fit would be more appropriate. This would indicate an asymptotic effect. Perhaps that means the biosphere can handle more than we give it credit for. But don’t misunderstand me, I am all for reducing human contributions.

    I think it would be interesting to see a correlation of the average weight of Americans vs. CO2 emissions (breathing harder due to poor physical condition).

  88. Steve Fitzpatrick (18:08:26)

    “The very big natural fluxes should be close to “in balance” over any period of more than a few years. ”

    Who is to say what is natural and what is not? Is not man a part of nature? We emit CO2 to live well. Animal and plant respiration releases CO2 by consuming oxygen to produce the energy they need to live.

    If we did not release CO2 from those so-called “fossil fuels,” the world would eventually run out of atmospheric CO2, as all of earth’s CO2 will end up as limestone. There is much less CO2 in the atmosphere today than hundreds of millions of years ago as a result of this process.

    Since man has spent most of the last 600,000 years (90%) living in an ice age, with interglacials warmer than today, the alarm over today’s CO2 levels, given that man’s contribution is only 4% of the total being emitted, seems baseless, especially at the low level of understanding of most of the processes that drive climate (sun, cloud formation, precipitation, etc.).

    Not saying CO2 cannot cause warming, just saying cold kills more swiftly than warmth does. Less CO2 means lower crop yields, less food for humans, cooler temperatures. An ice age would kill 80% of the human population. If man’s CO2 helps delay the next one, I am all for it.

    If it means that some coastal cities will slowly see property eroded, well, building newer cities inland is the manner in which man would adapt to a warmer climate. During the last ice age, the land where most coastal cities stand today was well inland. The mythical Atlantis may very well have been consumed by rising sea levels as the last ice age ended. Man survived. Climate changes, always has and always will, and man and other species adapt to it. Those that don’t become extinct. That’s evolution.

  89. Thank you for responding TonyB. You are one of the few who do. An interesting article, in that the forests (a huge portion of the South American continent) can sometimes radiate CO2, rather than just be sinks for CO2. We have a lot more to learn about how these things collectively work together. I believe that nature is far more resilient than anyone gives her credit for.

    We have the pine beetle currently decimating the forests of southeastern British Columbia. The beetle has been stymied by the colder temperatures this winter (die-back occurs when it is colder than about −35°C for a couple of days). There are a lot of people who claim that the beetle has been successful due to global warming. My question to them is: how many times do you think these types of infestation have occurred over the millennia? Countless times. Is the forest still there? Yes it is. Are ecosystems as fragile as fellows like David Suzuki would have you believe? Forest fires have always cleansed the forest and prepared it for rejuvenation. Certain pine trees will only release their seed from the pinecone at temperatures only achievable in a fire. The forest always grows back. Nature is in a constant state of flux. This is why the term “nature preserve” makes me laugh. Mankind feels that nature should maintain the status quo. Yet her adaptive capabilities are what is so amazing. As Neil Peart said: “Changes aren’t permanent, but change is”.

  90. @Steve Fitzpatrick
    It is rather unfortunate that your model does not attempt to encompass the earlier 20th-century warming period between 1910 and 1945, when human CO2 emissions were roughly only 25% of those in the later warming period (1975 – 1998).
    Surely any decent model must reflect this effect.
    How does your model cope with the warming between 1910 and 1945? Can anyone else shed any light on this strange effect?

    “How does your model cope with the warming between 1910 and 1945? Can anyone else shed any light on this strange effect?”

    The only thing “strange” about this well- within-normal warming phase is that anyone should think it “strange”.

  92. Air ρC = 1,200 J/m^3 K
    Oceans ρC = 4,190,000 J/m^3 K
    Dry Clay Ground ρC = 1,420,000 J/m^3 K

    Specific Heat Capacity is not the same as Heat Capacity.
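    Taking the volumetric heat capacities listed above at face value, a two-line calculation shows the disparity the commenter is pointing at:

```python
air_rhoC = 1.2e3     # J/(m^3 K), volumetric heat capacity of air (from above)
ocean_rhoC = 4.19e6  # J/(m^3 K), volumetric heat capacity of seawater

# The heat that warms 1 m^3 of seawater by 1 K would warm ~3,500 m^3 of air.
ratio = ocean_rhoC / air_rhoC
print(round(ratio))  # 3492
```

    This is why specific heat capacity (per kg) and volumetric heat capacity (per m^3) must not be confused: the density difference between water and air is most of the story.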

    very cool post

  93. Dave Middleton 15:38:18

    “There are two common ways to estimate CO2 concentrations in past atmospheres (before instrumental records began in 1959): 1) Measuring CO2 content in air bubbles trapped in ice cores and 2) measuring the density of stoma in plants.”

    As a physicist I may be missing something, but the work done in Germany over the past 180 years seems to give a reasonable assessment of atmospheric CO2 in the northern hemisphere.
    This web site is a good place to look
    http://www.biokurs.de/treibhaus/180CO2_supp.htm

  94. Ronaldo (02:42:02) :

    The same problem arises with the work of Ernst Beck as with the stomata data: high variations within a day and over the months due to huge local/regional sources and sinks. The data are biased to too-high values, as extremely high levels at night (early morning / late evening) are not fully compensated by more photosynthesis during the day. Several measurements which are instrumental to the 1942 “peak” were taken within rice fields, near agriculture/towns/factories…

    The values measured over/near the oceans are in general far lower, and nearly all measurement ranges encompass the ice core values. Neither ice core data (8-year resolution) nor stomata data (5-year resolution) show the 1935-1950 peak value found by Beck. For a comprehensive comment on Beck’s interpretation of the historical measurements, see:
    http://www.ferdinand-engelbeen.be/klimaat/beck_data.html

  95. The Engineer (00:29:20) :

    I have compared the pre-Mauna Loa CO2 data (from different ice cores) with the emission values in the period 1900-1959. The ratio between increase in the atmosphere and emissions is about 58%:

    Compare that to the influence of temperature on CO2 increase in the same period:

    The conclusion can only be that temperature is not the cause of the CO2 increase: even with a cooling over about half the temperature scale (due to the included part of the 1945-1975 period), CO2 levels still go up, while the emissions and increase are nicely coupled.

    For the full 1900-2004 period it is even more clear:

    and

    Temperature modulates the speed of the increase (hence the nice correlation between the derivative of the increase and temperature variations), thus the “noise” around the trend, but the bulk of the trend is caused by the emissions…
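    The roughly constant ratio described here (the “airborne fraction”) is just cumulative increase divided by cumulative emissions. A minimal sketch with illustrative yearly numbers (hypothetical, not the real CDIAC/Mauna Loa series):

```python
# Hypothetical yearly emissions and observed atmospheric increases (ppmv),
# chosen only to illustrate the ratio, not real measurements.
emissions = [3.6, 3.8, 4.0, 4.1]
increase = [2.0, 2.2, 2.3, 2.4]

airborne_fraction = sum(increase) / sum(emissions)
print(f"{airborne_fraction:.0%}")  # close to the ~55-58% quoted above
```

    The point of the argument is that this ratio stays roughly constant year after year, which is hard to explain if temperature, rather than emissions, were driving the rise.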

  96. Steve Fitzpatrick (08:50:48) : on 22 05

    Can you point me to a reference or two where people have actually titrated natural seawater with air of a known CO2 and SO2 concentration and measured the pH change? Granted, it might take some time to equilibrate and biota might die in the flask, but I simply cannot see that the minute amount of CO2 in the air has the capability of lowering pH by 0.1, as is often claimed in the literature.

    These claims are often based on phase diagrams using bicarbonate, calcium etc. Having used similar phase diagrams in other fields, I know that small errors in the determination of coefficients can accumulate to major errors. So I regard them as indicative rather than quantitative, especially with natural systems as opposed to synthetic look-alikes.

  97. Ronaldo (02:42:02) :

    Dave Middleton 15:38:18

    “There are two common ways to estimate CO2 concentrations in past atmospheres (before instrumental records began in 1959): 1) Measuring CO2 content in air bubbles trapped in ice cores and 2) measuring the density of stoma in plants.”

    As a physicist I may be missing something, but the work done in Germany over the past 180 years seems to give a reasonable assessment of atmospheric CO2 in the northern hemisphere.
    This web site is a good place to look
    http://www.biokurs.de/treibhaus/180CO2_supp.htm

    Yes, Beck’s compilation.

    During these threads I always ask the question and no satisfactory answer is given:

    Ice cores are taken in regions where there is snow and ice, i.e. very few CO2 sources. AIRS animations show that, contrary to the “well mixed” theory, CO2 follows wind patterns. Beck’s compilation shows that over land, where we have many accurate measurements, even away from cities and populations, CO2 has large natural variations, as large as we see now in the homogenized Mauna Loa data.
    Why do we believe the 280 ppm given in the ice cores is anything other than the measurement of CO2 in ice regions, where there are no or few CO2 sources?

    Also I have never seen a preprint or publication open to the public with the raw data from Mauna Loa and all the corrections introduced by hand. What are the raw monthly values before sanitization?

  98. @ Ronaldo (02:42:02)

    Thanks for the link to Beck’s paper. I knew there were other chemical studies of pre-Keeling CO2; but I was unaware of Beck’s work.

    So it looks like there are three ways to estimate a pre-Keeling CO2 background…Plant SI and chemical methods show that the modern CO2 level is not anomalous and only the ice core data suggest a background below 320-340ppm.

  99. Ferdinand Engelbeen (17:18:53) :

    […]

    At last: I had a firm discussion with an author of a similar study of stomata index data: there is a bias in the stomata data, as stomata are formed at the CO2 levels of spring, which are higher than average and include locally enhanced levels (from rotting vegetation of the previous year). This is more or less compensated by calibrating the SI data (resolution +/- 10 ppmv) with… ice core data of the past century. The main problem is that it is very difficult to know what happened with local CO2 levels over the centuries (in contrast to CO2 levels at the south pole) as local/regional CO2 sources were added or removed.

    So the 270-280ppm from the ice cores is the annual average and the 320-340ppm from the plant SI data represent the springtime bloom? So the annual CO2 seasonal range was +/- 100ppm?

    What is the seasonal amplitude variation of the Keeling Curve? It’s less than 10ppm.

    SI studies have been performed on samples within the time span of the Keeling data and they match the Keeling data.

    See: “Stomatal frequency responses in hardwood-swamp vegetation from Florida during a 60-year continuous CO2 increase1” by Wagner (American Journal of Botany, March 2004)…

    In a stomatal frequency analysis of leaf remains of Quercus nigra, Acer rubrum, Myrica cerifera, Ilex cassine, and Osmunda regalis that were preserved in precisely dated peat deposits of north-central Florida, the stomatal index decreased as a response to an atmospheric CO2 increase from 310 ppmv to 370 ppmv over the past 60 years.

    The SI data match the current observations…The ice core data do not significantly overlap the Keeling data. So they can’t really be directly compared. The SI data and the chemical data both show past atmospheric CO2 levels to be far closer to the modern observations than the ice core data do. And both can be directly compared with and calibrate well with the modern observations.

  100. Added La Jolla to the timing-of-CO2-minima plot, expecting to see a delay from Barrow to La Jolla. But from the data I would suggest that the two values are not connected.
    The La Jolla frequency of measurement improves in 1985, so before then the accuracy of the minima is suspect.

    Until 1999 it may be possible to say the delay is about 8 days. After 1999 the delay is possibly zero!

    No-one has yet suggested a reasonable cause for this sharp dip. And the dip is greater than the yearly increment.

    It occurs mid summer in NH too late for spring too early for autumn.
    It does not correspond to plankton blooms.
    It occurs (currently) 218 days after jan 1st in both locations 4850km apart (north/south)

  101. anna v (05:11:26) :

    Ice cores are taken in regions where there is snow and ice, i.e. very few CO2 sources. AIRS animations show that, contrary to the “well mixed” theory, CO2 follows wind patterns. Beck’s compilation shows that over land, where we have many accurate measurements, even away from cities and populations, CO2 has large natural variations, as large as we see now in the homogenized Mauna Loa data.
    Why do we believe the 280 ppm given in the ice cores is anything other than the measurement of CO2 in ice regions, where there are no or few CO2 sources?

    Also I have never seen a preprint or publication open to the public with the raw data from Mauna Loa and all the corrections introduced by hand. What are the raw monthly values before sanitization?

    Anna, the yearly average CO2 data in 95% of the atmosphere (over the oceans and above about 1,000 m over land) don’t differ by more than 5 ppmv from each other. Within one hemisphere the difference is less than 2 ppmv. Between Barrow (71N) and the south pole (90S) it is less than 5 ppmv:

    The delay between NH and SH points to a source in the NH, while the ITCZ delays the mixing of CO2 into the SH. The increase in the atmosphere is tightly coupled (at about 55%) with the cumulative emissions:

    This looks like causation, as I don’t know of any natural process which follows the emissions at such an incredible constant ratio…

    Many of the historical data were taken near (even in) agricultural projects (rice and soya fields), near and in towns, etc. Callendar rejected all these data for just that reason, as they show the local variability within about 5% of the atmosphere, not the more or less global values. Seasonal variations at that time were nearly undetectable (most methods were accurate to +/- 10 ppmv). His selection (pre-defined, not post-defined!) resulted in mainly sea-level and coastal data which match the ice core data, measured 40-60 years later than Callendar’s work.

    The problem with Beck’s interpretation is the same as with many temperature measurements: wrong place, heavily contaminated by local sources/sinks. Ice cores simply reflect the (smoothed!) variability of global CO2 levels over time, while most of the historical data only reflect local variability.

    Uncorrected hourly averages of 40-minute, 10-second voltage measurements are available online for four baseline stations at:
    ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/ from N to S including Barrow, Mauna Loa, Samoa and the south pole.

    For daily, monthly and yearly averages only selected data are used, as one is interested in background data, not contaminated by local vegetation or volcanic degassing (Mauna Loa) or mechanical problems (south pole). All deselected data still are available, but “flagged” for different reasons. Again the rules for deselection (and calibration) are strict and predefined. The only case where post-data correction is applied is when problems are encountered with the calibration gases.

    For one year (2004), I have used all available data and only selected data: there is no difference in average and slope…
    A comprehensive explanation of the calibration and selection procedures at Mauna Loa (and other baseline stations) is available here:
    http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html

  102. Dave Middleton (05:49:16) :

    So the 270-280ppm from the ice cores is the annual average and the 320-340ppm from the plant SI data represent the springtime bloom? So the annual CO2 seasonal range was +/- 100ppm?

    That is possible at any location on land, but doesn’t represent the global CO2 levels… The very first measurements that Keeling made in his long CO2-measuring career were at Big Sur State Park (California). The diurnal variation was about 60 ppmv. He measured d13C levels of the same (flask) samples and could conclude that the change was caused by respiration/photosynthesis of vegetation. That was the main reason for him to look at less contaminated places like the south pole and Mauna Loa (the latter with some caution…).

    Stomata data are by definition from places where a lot of vegetation is present. While the resolution is moderate (+/- 10 ppmv), the spring bias (according to one of the authors, the SI index is predefined by the CO2 levels of the previous growing season, in which case we have a summer/autumn bias…) can be compensated for by calibrating the SI data against the ice cores/atmospheric data over the recent period. The main problem is that one needs a good indication of what happened with local/regional CO2 levels if the local/regional landscape changed from swamp to forest, or the forest area increased or shrank over longer periods, related to temperature. And that information is lacking.

    See the SI calibration of Quercus (oak) in the Netherlands:

    Still some problems to solve before that method is robust enough to give reliable figures of the past…

  103. I have many comments today.

    First, Steve: nice post, a very useful model for thinking about the role of the oceans. The model may be right or wrong, but I think it helps move our thinking forward on the matter, and I think that is a material service. My gut says this model will actually hold up pretty well over time.

    Second: On NevadaFACE. Yes, they track ambient CO2, but for purposes of fertilization, not climate tracking, and their recent measurements are close to Mauna Loa. I see no reason to demote Mauna Loa based on FACE data, useful as it may be as an additional data point.

    Third: Change in atmospheric CO2 cannot be explained by SST alone. If we allow that the ocean temps drive CO2, then atmospheric CO2 should have been stable over the last decade as temperatures moderated. Instead, CO2 continues to increase, suggesting that human activity is most probably a material driver of atmospheric CO2.

    Fourth: CO2 emissions have soared in the last ten years. Oil consumption is up about 10% decade over decade, and Chinese coal consumption has increased by more than total US consumption in just the last four years. So CO2 is entering the atmosphere at a record pace. However, rather than seeing exponential growth in temperature, which an AGW argument might posit, we see flat to declining temperatures, which suggests that CO2 is either not a dominant force or immaterial.

    Fifth: I personally am not too concerned about the need for peer-reviewed journals. Over time, blogs like this one–properly managed–will carry more weight than academic journals (odd, you may think), because they have certain distinct advantages: i) they are timely; ii) they are relevant (because they are timely); iii) they are open and therefore amenable to google searches (which means they are available for incorporation into public presentations at conferences and the like); and iv) they are visible and read, which makes their contributors and editors public figures and authorities.

    Finally: This was an excellent, excellent thread today. Great contributions showing real thought. No name calling, all quality.

  104. Is there any way to use actual AIRS CO2 data instead of model-derived CO2 data? The noisy data in both sets (CO2 vs temp) would lend themselves much better to analysis and would remove the assumption that global atmospheric CO2 is increasing each and every day based on modeled output of Mauna Loa data input.

  105. Steven, why do you believe CO2 is increasing? If you look at the AIRS data, there are seasonal variations that do not appear to correlate well at all with the step function we see from Mauna Loa (i.e. the noise differs between the two sets of data even where the starting and end points are similar). If the noise is different at multiple points, the fact that the starting and end points are similar (4 data points) loses its strength as evidence that CO2 is increasing because of human influences. Most people don’t realize that the typical number quoted as the amount of CO2 in our atmosphere is a derived statistic, partially based on data input and then modeled heavily to get output.

  106. Pamela Gray (10:03:14) :

    Why do you think that “global” CO2 data are based on a model? The global data used are the simple averages of a few baseline stations at sea level, but every station situated in surroundings without huge local/regional sources/sinks is suitable. All baseline CO2 levels over the globe, representing 95% of the atmosphere, are within 5 ppmv for yearly averages, and all show nearly the same slope over the past 50+ years, with only a slight delay for the NH-SH lag.

    The AIRS data at specific points like Mauna Loa or Barrow show the same values in the same period. That means lower values at Barrow in summer and higher in winter. The local yearly averages are within less than 2 ppmv of each other. The variation you see in AIRS simply is the seasonal variation, far larger in the NH than in the SH, but the trend in both hemispheres is nearly the same.

  107. Steve; “The CO2 from fossil fuels has the same residence time as the radioactive C14 generated by atomic bomb tests, as you said, between 5 and 10 years, no longer.”

    Why then does the IPCC define it to be “50-200 years”? What you basically are saying is that although natural fluxes are so large, we should accept that fossil fuels drive little increments in overall CO2 because the C12/C13 ratio is changing, which can theoretically be caused by fossil fuel burning; but we don’t know, because many other things can cause that (as Spencer and many others point out).

    “The effect of a continuous relatively small addition of CO2 to the large natural pool will never be evident over the noise of natural variation in the very short term, but it should cause a gradual increase over time.”

    In the light of previous discussion that sounds more like a statement of faith, not science, doesn’t it?

  108. Ferdinand Engelbeen (08:30:34) :

    Thank you for the links, except they are not really in a digestible form, are they?
    I am too old a dog to learn the new tricks of making plots with Excel or whatnot.

    I think that the argument that one should measure CO2 where there are no CO2 sources is specious, and defeated by the Mauna Loa location itself, which is right on top of a volcano which emits CO2. I cannot see any volcano measurements, for example, in the link you gave for supposedly raw data. Vegetation and plankton are contamination, but volcanoes are not?

    I am patiently waiting for the near surface measurements from the Japanese satellite.
    As you know, I think that the whole CO2 measurement effort is dominated by the Keeling mentality, and they will probably keep correcting away until they get the same Keeling curve. We are in great need of independent measurements all over the globe and analysis that does not presuppose “well mixed”. The AIRS animation does not show “well mixed”.

  109. continuing.

    I think the way they measure in Mauna Loa and consequently all other mimicking spots, is crazy:

    http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html

    At Mauna Loa we use the following data selection criteria:

    1. The standard deviation of minute averages should be less than 0.30 ppm within a given hour. A standard deviation larger than 0.30 ppm is indicated by a “V” flag in the hourly data file, and by the red color in Figure 2.

    OK, translate this to a temperature measurement and see what it means: throw out measurements that have a 1C difference from day to day (a cloudy day).

    2. The hourly average should differ from the preceding hour by less than 0.25 ppm. A larger hour-to-hour change is indicated by a “D” flag in the hourly data file, and by the green color in Figure 2.

    a cloudy day

    3. There is often a diurnal wind flow pattern on Mauna Loa driven by warming of the surface during the day and cooling during the night. During the day warm air flows up the slope, typically reaching the observatory at 9 am local time (19 UTC) or later. The upslope air may have CO2 that has been lowered by plants removing CO2 through photosynthesis at lower elevations on the island, although the CO2 decrease arrives later than the change in wind direction, because the observatory is surrounded by miles of bare lava. In Figure 2 the downslope wind changed to upslope during hour 18. Upslope winds can persist through ~7 pm local time (5 UTC, next day, or hour 29 in Figure 2). Hours that are likely affected by local photosynthesis are indicated by a “U” flag in the hourly data file, and by the blue color in Figure 2. The selection to minimize this potential non-background bias takes place as part of step 4. At night the flow is often downslope, bringing background air. However, that air is sometimes contaminated by CO2 emissions from the crater of Mauna Loa. As the air meanders down the slope that situation is characterized by high variability of the CO2 mole fraction. In Figure 2, downslope winds resumed in hour 28. Hour 33 in Figure 2 is the first of an episode of high variability lasting 7 hours.

    The wind is from the Sahara, throw out the temperature measurement

    4. In keeping with the requirement that CO2 in background air should be steady, we apply a general “outlier rejection” step, in which we fit a curve to the preliminary daily means for each day calculated from the hours surviving step 1 and 2, and not including times with upslope winds. All hourly averages that are further than two standard deviations, calculated for every day, away from the fitted curve (“outliers”) are rejected. This step is iterated until no more rejections occur. These hours are indicated by an “A” flag in the hourly data file, and by the purple color in Figure 2, also indicated as “spline” in the legend. Spline is a curve fitting technique. Rejected hours occurring during times with upslope winds are given a “U” character in the data file.

    Any large temperature fluctuations should be thrown out.

    Don’t I wish they had done that in the surface temperature measurements !! We would have no AGW !!

  110. anna v (10:53:51) :

    The hourly average data are more a question of volume (8,600 lines per year) than difficult to see. Mauna Loa only has serious trouble if there is a real eruption; in general, if the wind is from the venting places, that translates into a huge variability of measurements within an hour (average +4 ppmv), which is one of the several reasons to reject the data. Real background data show no detectable change over a day…

    The same for the frequent upslope winds in the afternoon: air depleted of CO2 (at -4 ppmv) by vegetation lower down. In both cases the average difference from real “background” CO2 levels is less than 4 ppmv, and the yearly average differs by no more than 0.1 ppmv whether you include or exclude the outliers. Thus there is no correction applied at all; only the good data we are interested in are retained. With a few exceptions: if too many days are lost (for any reason) within a month, the remaining days are used with a correction based on the seasonal slope of the previous years at the same days to calculate the monthly average (see the discussion on this blog some time ago).

Further, the continuous Mauna Loa data are confirmed by three independent flask measurement series at the same spot and a fourth series at the base of one of the Hawaiian islands, measured by different persons in different labs from different organisations by different methods. The series are on average within +/- 0.12 ppmv of each other. Meanwhile, the Keeling curve is nearly the same when measured from near the North Pole to the South Pole: 10 baseline stations, 70+ similar stations at the least contaminated places, flight measurements, buoys and ships at sea. See:
    http://www.esrl.noaa.gov/gmd/ccgg/iadv/

Besides that, there are some 400+ stations which monitor over land for local/regional CO2 fluxes, a near-impossible task which may be better done by the Japanese satellite…

  111. The global number for CO2 takes into account modeled sinks. It is a modeled number, not measured data, although measured data at sources is put into the calculation.

  112. anna v, I too am waiting for the Japanese data. Should be interesting. And since it is Japanese data, many will try to find something wrong with it. Which is a good thing. Wish people would try to find something wrong with their own data as well.

  113. Ivan (10:45:12) :

Why then does the IPCC define it to be “50-200 years”? What you are basically saying is that although natural fluxes are so large, we should accept that fossil fuels drive the small increments in overall CO2 because the C12/C13 ratio is changing, which can theoretically be caused by fossil fuel burning, but we don’t know, because many other things can cause that (as Spencer and many others point out).

That is because the roughly 5 years is the CO2 residence half-life, while the 50 years is the excess-CO2 half-life: two lifetimes which are completely independent of each other and hardly influence each other. The first is governed by how much of the total CO2 in the atmosphere, whatever the source, is exchanged with oceans and vegetation each year (about 150/800); the second depends on the yearly difference between natural releases and natural sinks of CO2 (currently -4/800, quite a difference…).

Thus if you add 8 GtC/year to the atmosphere, you will find hardly a few % of that “human” CO2 back as molecules in the atmosphere after a few years; but as there is no net addition by nature to the total atmosphere, only a net loss of 4 GtC/year, the human addition is responsible for nearly 100% of the 30% increase over the past 1.5 centuries.

    In the light of previous discussion that sounds more like a statement of faith, not science, doesn’t it?

If you add a small stream of water to a fountain which circulates 1,000 times more water from the reservoir to the fountain and back, the small addition will eventually cause the reservoir to overflow, not the circulation itself…

  114. Forgot to add:

There are only two big sources of low-13C carbon on earth: fossilised vegetation and recent vegetation. All other sources are higher in 13C (-deep- oceans, carbonate rock, volcanic vents,…). This excludes the oceans as the main source, as CO2 from the oceans should INcrease the 13C level in the atmosphere, while we see a DEcrease (as well as in the upper oceans themselves). Recent vegetation is not the cause either: from the oxygen use we can see how much CO2 is released or absorbed by vegetation decay/growth. Less oxygen is being used than fossil fuel burning alone would consume; thus vegetation is a net absorber of CO2: it releases O2 and preferentially takes up 12C, thereby increasing the 13C level of the atmosphere. But we see a DEcrease…
    Thus neither oceans, nor vegetation are CO2 sources, both are CO2 sinks. See further:
    http://www.sciencemag.org/cgi/content/abstract/287/5462/2467
    and
    http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf

  115. Pamela Gray (11:43:00) :

    The global number for CO2 takes into account modeled sinks. It is a modeled number, not measured data, although measured data at sources is put into the calculation.

No, it is a simple calculation:

    Natural sources + emissions – natural sinks = increase in the atmosphere

The emissions are calculated from the inventories of fossil fuel production/sales; the increase in the atmosphere is measured. That gives:

    Natural sources – natural sinks = increase – emissions = 4 – 8 = -4 GtC/yr

Without any knowledge of any natural source or sink, without any model of the absolute flows, we know from the measured increase of CO2 and the emissions inventory that the natural sinks are 4 GtC/year larger than the natural sources. Thus nature added nothing to the atmosphere in total mass (though plenty of individual molecules were exchanged).

That was the case, at least for the past 50+ years, with a year-by-year variation of about +/- 2 GtC (mainly from the temperature influence on the sink rate), the increase in the atmosphere running at about 55% of the emissions…
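The bookkeeping in this comment fits in a few lines of arithmetic. A sketch only; the round figures (8 GtC/yr emitted, 4 GtC/yr measured increase) are the ones quoted above, and the variable names are mine:

```python
emissions = 8.0        # GtC/yr, from fossil fuel production/sales inventories
increase = 4.0         # GtC/yr, measured increase in the atmosphere

# natural sources - natural sinks = increase - emissions
net_natural = increase - emissions
print(net_natural)     # -4.0: nature removes 4 GtC/yr more than it releases

# fraction of the emissions that remains in the atmosphere
airborne_fraction = increase / emissions
print(airborne_fraction)   # 0.5 for these round figures (~55% on average)
```

No model of the individual natural flows enters anywhere: only the inventory and the measured increase.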

  116. Pamela and Anna V

    I am convinced Ferdinand has an inbuilt ‘Beckometer’ as he pops up as soon as the name is mentioned :)

We have expressed our interest in the ‘real’ levels of CO2 before.

Having done a thorough study of the methodology behind the tens of thousands of historic studies, investigated the social context in which they were taken, and corresponded numerous times with Ernst, I would say I am inclined to agree with Beck’s work at a level of certainty of around 60%.

If it were anyone other than Ferdinand disagreeing with his work I would place my certainty at 75%.

The first of the three bits of evidence used to disprove higher historic levels is the ice core samples at ~280 ppm. Ferdinand has supplied me with highly credible material confirming their accuracy. However, I remain unconvinced that a few dozen ice cores, which need a highly complex process before they can be ‘interpreted’, are more accurate than tens of thousands of CO2 studies made at the time by highly credible people.

The second element is the work by Callendar to determine historic CO2 levels, which was thoroughly disputed at the time as being too low but has subsequently acquired a factual status it does not deserve.

Lastly, of course, there are the ML figures, which may or may not be measuring the ‘same’ thing as Beck’s people were.

Is it possible for CO2 to fluctuate from, say, 380 ppm around 1940 to only 315 ppm in 1957? If it is, the Beck figures are feasible; if it’s not, the Beck figures are less credible, although the issue of whether everyone is measuring the ‘same’ thing still remains.

    Are you aware this was discussed last year?
https://wattsupwiththat.com/2008/07/25/beck-on-co2-oceans-are-the-dominant-co2-store/#more-1843

Beck has written another paper on the subject which I am sure he would be willing to post here. Anyone interested in suggesting a rerun with Beck’s latest material? It can be judged against what we have learnt over the last year from articles such as this one, which I thought very interesting.

    Tonyb

  117. Anna v,

    Good that CO2 data are not similar to temperature data. Take the Mauna Loa station or the south pole station or any other baseline CO2 station.

The current rate of change of CO2 over a year is about 2 ppmv, or 0.005 ppmv/day. Not detectable with the current equipment.

    The largest seasonal changes are in Barrow and Alert in May/June: 6 ppmv/month or about 0.2 ppmv/day, borderline measurable over a day for these two stations, for all other stations not detectable.
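The per-day figures above follow directly from the quoted annual and monthly rates; a quick check (the 365- and 30-day divisors are my rough year/month lengths):

```python
annual_rise = 2.0 / 365     # ~2 ppmv/yr trend, expressed in ppmv/day
seasonal = 6.0 / 30         # 6 ppmv/month seasonal swing, in ppmv/day

print(round(annual_rise, 3))   # 0.005 ppmv/day: below instrument resolution
print(seasonal)                # 0.2 ppmv/day: borderline measurable
```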

Thus we have CO2 levels which should be near stable over 24 hours, if they were real background data. In general it is not even necessary to have a value each hour; once every week would be sufficient for monthly and yearly averages… Can you imagine a temperature measurement that is that stable?

    Thus anything that deviates from the straight line over a day of CO2 measurements is an outlier and suspected to be contaminated by local sources/sinks. If someone is interested in volcanic vents, please measure in the middle of the vent. If someone is interested in sugar cane uptake of CO2, please measure in the midst of such a field. But as we are interested in background data over the seasons and over a year, please throw out everything that deviates from the expected straight line over a day…

    The other way out is more interesting: The data of Giessen (Germany) play an important role in Beck’s historical “peak” around 1942. Giessen now has a modern station, providing 1/2 hour average CO2 measurements. Compare a few typical days at Giessen with a few typical days at Barrow, Mauna Loa and the south pole:

    Where the BRW, MLO and SPO data are unfiltered and contain all local outliers (!) and the Giessen data show the normal diurnal variation in summer, largely depending on wind speed. What do you think will happen if one uses the Giessen data as “global” (which is what Beck did)? Seems the equivalent of measuring temperature on an asphalted parking lot in Arizona in summer…

  118. “Thus if you add 8 GtC/year to the atmosphere, you will find hardly a few % of that “human” CO2 as molecules back in the atmosphere after a few years, but as there is no net addition of nature to the total atmosphere, only a net loss of 4 GtC/year, the human addition is for near 100% responsible for the 30% increase of the past 1.5 century”.

But Revelle and Suess calculated in 1957 that, with a lifetime of 5 years, CO2 would be contaminated with little more than 1% of fossil fuel CO2. Today, that would be maybe 3 or 4%, not 30% as the IPCC asserts. And how do we know that “there was no net addition by nature”?

    “That is because the about 5 years is CO2 residence half life time, the 50 years is excess CO2 half life time, two life times which are completely independent of each other and hardly influence each other.”

What is “excess CO2”? The atmospheric lifetime of CO2, both natural and fossil-fuel, is, as Steve agreed, 5-10 years. There is no “ordinary” and no “excess” CO2. There is only CO2. I think the “excess CO2” concept was introduced by Houghton in 1990. Using an “evasion buffer factor” (never proved, only hypothesized) he was able to assert that human CO2 has a much longer period of decay than CO2’s ordinary lifetime. All that theory is based on rejection of Henry’s law and its replacement with very dubious concepts such as the “evasion buffer”. In order to ascribe the CO2 rise to fossil fuels, the ocean would have to have a dissolving capacity 10 times smaller than Henry’s law says.

  119. Re: Steve Fitzpatrick

    Thank you for your post – much appreciated.

    I must, however, point out the following misrepresentation (as I see it) of Dr. Spencer:

    1) Steve Fitzpatrick: “Dr. Roy Spencer […] suggests that much of the increase in atmospheric CO2 could be due to warming of the oceans […] he presents a few graphs that he claims are consistent with ocean surface temperature change contributing more than 80% of the measure increase in CO2 since 1958.”

    2) Steve Fitzpatrick: “[…] surface temperature changes do not explain 80% to 90% of the increase in atmospheric CO2 since 1958, as suggested in Dr. Spencer’s May 11 post.”


    Also, it should be pointed out that:
    a) Your model really combines ocean & land dynamics.
    b) Dr. Spencer’s ~6 month lag was based on SST, whereas your calculations are based on dSST/dt. (Keep in mind the phase relationship between dSST/dt & SST.)


    Thank you for these clarifications – related to (a) – shared later:

    Steve Fitzpatrick (06:45:36) “The point of the post was not to show a perfect model, but to note that the measured change in atmospheric concentration of CO2 is consistent with addition of CO2 from human activities combined with a partial removal of this extra CO2 at a rate that is proportional to the increase in atmospheric concentration, combined with a sea surface temperature driven variation. It does not matter if the slow removal is by physio-chemical absorption by the ocean or by plant absorption or both. So long as the total increase in removal rate is approximately proportional to the increase in CO2, then the concentration of CO2 in the atmosphere ought to evolve roughly as the model shows; which is also how the measured concentration has evolved.”

    Steve Fitzpatrick (13:52:21) “I did not even try to include the increase in rate of plant growth, because I expected that both ocean absorption and increased plant growth rate would be roughly proportional to the rise in CO2, so that with optimized constants the “ocean only” model ought to fit the historical data pretty well.”

    – –
    Comment regarding RobP (06:48:14) & JamesG (05:18:37)

    Steve’s model is actually very similar to Dr. Spencer’s.
    (Time-integrate Dr. Spencer’s model to see this.)

    A main difference is the one suggested by Ferdinand Engelbeen in the earlier thread – i.e. using dT/dt instead of T (where T=temperature).

    Both models stimulate learning & discussion.
    (The purpose of this blog is not identical to that of a science journal.)

    George E. Smith (15:02:13) “[…] all of these statistical processes that seem to be a core part of climatology can serve very well to demonstrate “correlation”; but what is being saught is not correlation; but causation; and correlation does not prove causation.”

    Causation is not the only target of relationship investigations. People are interested in knowing what variables are related for a lot of legitimate reasons.

    A few weeks ago I was vaguely aware of disjointed claims of relationships supported by mysterious filtering techniques. Fortunately, Dr. Spencer’s recent post triggered a series of deep insights.

    cohenite (05:52:11) “[…] where is the link to the Dr Roy paper from May 11?”

    https://wattsupwiththat.com/2009/05/12/spencer-on-an-alternate-view-of-co2-increases/

    – – –
    Fuelmaker (08:47:09) “The sharp changes in CO2 at northern latitudes looks like a change in the wind pattern, which would change the source and “history” of that air cell. Can anyone comment on the typical seasonal wind patterns of those locations?”

    I hope someone will address your question. The (remarkable) relationships I’ve studied so far involve interannual wind variations.

    – – –
    The Engineer (00:29:20) “[…] the warming between 1910 and 1945. Can anyone else shed any light on this strange effect.”

    Related:
    “Dynamics Of Climatic And Geophysical Indices”
    http://www.fao.org/docrep/005/Y2787E/Y2787E03.HTM

  120. Ivan,

    If you add 8 GtC per year to one reservoir of a dynamic system of carbon flows and you measure only 4 GtC increase in that reservoir, then you can be sure that the sum of all other flows in/out that reservoir (regardless how huge these are) is negative: 4 GtC/year is removed to other reservoirs in the system. That means that nature as a whole doesn’t add anything to that reservoir, even if there is a lot of exchange between these reservoirs.

If you have a lake where there is a (more or less) constant flow in and flow out, there will be some variability of the lake’s level with precipitation. Now add a small flow of red colored water to the lake. The red color will fade in proportion to its volume vs. the total volume flowing in and out, while the level will rise independent of the size of the in/outflows, until the outflow has increased enough relative to the inflow to compensate for the extra addition. In that case, the extra increase in lake level is 100% caused by the small inflow, even if the color is thinned to less than 1% by the much larger in/outflows.

    That is the difference between the lifetime of an individual molecule and the lifetime of an increase in total mass. The first is 5 years, the second is about 40 years (yes, far less than what the IPCC says, see Peter Dietze at:
    http://www.john-daly.com/carbon.htm )

The first lifetime is about exchanges between molecules. After a year of exchanges the amount of “human” CO2 is reduced by a fraction of about 150/800. But that doesn’t change the total amount of CO2 (whatever the source) in the atmosphere. The total amount in reality increased by 4 GtC, but the increase is for (near) 100% caused by the addition of 8 GtC from human emissions. Stop all emissions at once and next year you will see a drop of about 4 GtC of the 800 GtC present out of the atmosphere. That is the basis of the “excess” lifetime of about 40 years (half-life) or 55 years (e-folding time).

What happens if humans add 100 GtC to the pre-industrial atmosphere at once and let nature do its work thereafter? You will see that the human fraction diminishes very rapidly, while the total amount of CO2 returns to the old level at a much slower rate.
Even if after some time the human fraction drops to near zero, the total mass is still above the “pre-industrial” level, still 100% caused by the initial addition. All based on realistic exchange rates:

    Where FA is the fraction of “human” CO2 in the atmosphere, FL is the same for the upper ocean layer, tCA total carbon in the atmosphere and nCA the natural part of it.

This has nothing to do with buffer factors but with the transfer speed between the atmosphere and the oceans, which is a matter not only of temperature (and algae growth) and of CO2 levels in the atmosphere/oceans; wind speed is the most important factor in the transfer rate. See Feely e.a. at:
    http://www.pmel.noaa.gov/pubs/outstand/feel2331/maps.shtml
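The thought experiment above (a one-off 100 GtC pulse, fast molecule exchange, slow decay of the excess mass) can be sketched as a one-box model. This is a sketch only, using the round numbers from the comment (800 GtC atmosphere, ~150 GtC/yr gross exchange, ~55 yr e-folding time); the update rules are my simplification, not Ferdinand's actual calculation:

```python
pre_industrial = 800.0   # GtC in the pre-industrial atmosphere (round figure)
exchange = 150.0         # GtC/yr gross exchange with oceans and vegetation
e_fold = 55.0            # yr, e-folding time of the *excess* mass

total = pre_industrial + 100.0   # one-off human pulse of 100 GtC
human = 100.0                    # mass still carrying the "human" label

for year in range(50):
    # fast turnover: outgoing CO2 carries the current human fraction,
    # while the returning CO2 from the reservoirs is (nearly) all natural
    human *= 1.0 - exchange / total
    # slow removal: the excess mass decays toward the old equilibrium
    excess = total - pre_industrial
    total = pre_industrial + excess * (1.0 - 1.0 / e_fold)

print(round(human, 1))                   # ~0: the labelled molecules are gone
print(round(total - pre_industrial, 1))  # yet ~40 GtC of excess mass remains
```

After 50 years the "human" label has all but vanished from the atmosphere while most of the excess mass is still there, which is exactly the distinction between the two lifetimes being argued above.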

  121. Re: superDBA (08:49:21)

    Data rejection protocols are used (as discussed above by others) — Worthy of note:
    ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/README_insitu_co2.html

    Excerpt:
    ” … – No code applied. Data are considered ‘background’. […]
    .V. – Large variability of CO2 mixing ratio within one hour
    .D. – Hour-to-hour difference in mixing ratio [greater than] 0.25 ppm
    .A. – Automatic selection based on residuals from a spline curve
    .U. – Rejected, diurnal variation (upslope) in CO2 (Mauna Loa only)”

    The original sites were deliberately chosen with an aim of getting info on “background” CO2 – so they certainly were not intended to collectively constitute a representative sample of Earth — On the contrary, they constitute a completely biased sample.

    I have found Ferdinand Engelbeen’s emphasis on measurement-context enlightening — i.e. it is important to bear in mind the historical development & purpose of different monitoring networks.

    – – –
    JamesG (05:18:37) “We should also be wary of any data that isn’t available in raw, unadjusted form.”

    Agreed.

    The obsession some agencies seem to have with posting only undefined (or vaguely-defined) ‘anomalies’ is a substantial impediment to thorough analysis & sensible conclusion. To be clear: It is unacceptable.

    To avoid being misunderstood:
    It is ok to post adjusted data as long as:
    1) Raw data are also posted (along with important notes).
    2) Adjustment procedures are fully & unambiguously explained with careful attention to detail, such that adjusted-data can be readily reproduced by interested parties using the raw data.

    – – –
    Pamela Gray (09:27:30) “The noisy data […] would lend itself much better to analysis”

    Interesting point Pamela (regarding which I see you have elaborated at Pamela Gray (10:03:14)). Any bright Stat 101 student should be able to see that variance estimates could be off by an order of magnitude or more.

    However, we should keep in mind Ferdinand’s comments about historical monitoring-network context.

    The upshot:
    Analyses that fail to assess the variability of parameter estimates with varying spatiotemporal scale & localization are at-best incomplete and at-worst severely misleading.

    – – –
    Steve Fitzpatrick (18:08:26) “The effect of a continuous relatively small addition of CO2 to the large natural pool will never be evident over the noise of natural variation in the very short term, but it should cause a gradual increase over time.”

    Ivan (10:45:12) “In the light of previous discussion that sounds more like a statement of faith, not science, doesn’t it?”

    It’s necessary to take a multi-scale view to see that common sense (not a leap-of-faith) has driven Steve’s statement: The first part of the statement applies to seasonal timescales; the latter applies to annual & annual-subharmonic timescales.

    – – –
    anna v (10:53:51) “The Airs animation does not show “well mixed””

On seasonal timescales – you are absolutely right. But as Ferdinand is pointing out, the story is quite different at longer timescales. Imagine the effect of time-integration – at annual & annual-subharmonic bandwidth – on the AIRS movies.

    – – –
    anna v (11:18:47) “Any large temperature fluctuations should be thrown out.”

    Worse actually – since “large” is iteratively redefined.
    [“This step is iterated until no more rejections occur.”]

    Thank you for sharing your analysis – I will not put off reading that page any longer.
    http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html

    Measuring “background CO2” clearly involves some messy technical details.

    Fortunately, the annotated raw hourly data is publicly-available (online) for cross-reference, so the drawing of sensible conclusions is not precluded.

122. Geoff Sherrington (02:09:57) “[…] cause of the fine structure within-year wriggles that are on graphs commonly seen from Barrow, Mauna Loa and the South Pole? They are often attributed to vegetation growth in the Northern Hemisphere, but I have trouble seeing this effect translated to the South Pole.”

    South Pole is in antiphase with NH since SH photosynthesis is most active in SH summer = NH winter.


    HappyDayz (04:07:10) “Perhaps the real question is why there isn’t an even bigger dip during the southern summer, given the size of the Pacific.”

    Land vegetation — NH has more of it than SH.


    bill (02:05:30) “I have likened it to a NH vacuum cleaner sucking CO2 and then suddenly being switched to blow. […] So what starts suddenly blowing CO2 in august?”

    Keep in mind that photosynthesis & respiration are occurring simultaneously at the annual height of biological activity – and also that background CO2 is rising. You don’t need a sudden reverse in your analogy. Consider the possibility that the timing of the striking net switch is due to the annual sharp-decline in photosynthesis – which is not necessarily exactly phase-synchronized with respiration due to intra-seasonal differences in limiting factors affecting the 2 different processes. The respiration & photosynthesis curves are not necessarily clean, smooth sine waves with seasonally-unvarying phase-difference.


    bill (02:05:30) “this turnover from -ve slope to +ve slope occurs within about +-1 week of the same day (assuming 365.25 days per year). Wouldn’t one expect a progressive +ve / -ve change?”

    Keep in mind that temperature does not control the Earth’s axial tilt; photosynthesis depends on light.


    bill (02:18:26) “[…] the propagation of the blip is fast […]”

    Keep in mind that plants do not photosynthesize before they leaf out. As soon as the threat of a cold-snap has passed, vegetation starts capitalizing.


    bill (05:56:43) “No-one has yet suggested a reasonable cause for this sharp dip. And the dip is greater than the yearly increment. It occurs mid summer in NH too late for spring too early for autumn. It does not correspond to plankton blooms. It occurs (currently) 218 days after jan 1st in both locations 4850km apart (north/south)”

    Sounds like photosynthesis – by vegetation – on land. Are you really surprised to see that peak in the NH in July?


    bill (14:15:55) “So if temp peaks are not driving the time for minimum – what is?”

    Factors limiting photosynthesis include daylight, the hydrologic cycle, cold snaps, & nutrient availability (which vary temporally & spatially). Respiration is affected by a similar set of factors, but the role of daylight would be in its effect on temperature — and keep in mind that microbes are not as constrained as plants on very short timescales: Annual plants die and perennials go dormant when the most seriously-limiting factor reaches critical (no turning back) – whereas microbes can burst into activity on a warm late-summer or fall day (or series of days). I don’t know net specifics for Barrow, but your analyses provide clues.

    Question: Are the temperature anomalies in your plots local or global?


    bill (14:15:55) “But as temp increases days decrease (approx .2 days per year […]).”

    Did you do an F-test? And also check that the model assumptions are met? And test the effect of adding a low outlier at the beginning of the series & a high one at the end? i.e. Should we really buy into this ‘trend’?

  123. Thanks for these notes:

    Joseph (08:08:07) “It is quite possible for the average global SST to remain constant over time, while the absorption/desorption of CO2 from the ocean changes dramatically due to temperature-driven changes in solubility.”

    Fuelmaker (08:47:09) “Averaging global SST adds a lot of noise, because solubility of gases in water are exponential, following the vapor pressure.”

    Dave Middleton (15:38:18) “The ice core data might be more representative of a long wavelength moving average of CO2 values and be a good indicator of the low frequency component of the CO2 cycle; where as the plant TSI data are capturing the high frequency component. It’s also likely that the pressure effects of burial are affecting the gas partial pressures in the ice core air bubbles.”

    Dave in Delaware (07:27:20) “Rainwater has the ability to wash CO2 out of the atmosphere, and may be an important mechanism in the ocean – atmosphere exchange. […] even if my estimates are 10 times too high, there is potentially still a lot of CO2 in rain water.”

    Dave in Delaware, can you share any gem-links &/or gem-literature references?

    – – –
    RobP (10:43:37) “[…] the use of “science” for political ends and the concommittent use of legal approaches (selection of evidence and manipulation of significance) is a trend I can’t see being reversed.”

    David Ball (20:54:26) “[…] nature is far more resilient than anyone gives her credit.”

    John Wright (02:56:40) “I have long been convinced that carbophobia will bring down Obama.”

    Interesting. For clarity: I’ll not object if toxic pollution is eliminated and pavement is replaced with forests, but I’m certainly not in favor of what seem like extremely misguided initiatives [such as those mentioned by Gary Pearse (08:46:02)].

    – – –
    Re: Steven Kopits (09:23:26)
    Well-said.


    Re: Ferdinand Engelbeen
    Thank you for everything you have posted. Your contribution is instrumental.

  124. Ivan (14:41:07) :

    All that theory is based on rejection of Henry’s law and its replacement with very dubious concepts such as “evasion buffer” .

Quite correctly, since Henry’s law does not apply when the solute gas reacts with the solvent, which is the case with CO2 and sea water.

  125. Thanks to Anthony Watts for putting my post on WUWT.

    Thanks to all for your comments on my post, especially Ferdinand Engelbeen, who offered a lot of very clear explanations.

    I hope that my post helped stimulate some serious thought about addition of CO2 to the atmosphere. But most of all, I hope that my post helped to show that not everything in the “science mainstream” about CO2 is crazy, wrong, or politically driven. Much very good science has been done and continues to be done on CO2 addition to the climate system.

Those of us who honestly doubt the catastrophic climate forecasts made by climate models/modelers must be willing to acknowledge that some of the data and assumptions that go into the climate models are reasonable (CO2 will continue rising due to burning fossil fuels), and some are rubbish (half of future warming is already “in the pipeline”). Some of the predictions are reasonable (CO2 and other infrared absorbing trace gases should increase the average temperature of the earth by some amount due to their radiative effects), and some are rubbish (the average temperature will increase 4C in 90 years, and sea level will rise by several meters).

    I think it more important to discredit the rubbish, not the reasonable.

  126. Phil. (08:28:51) :

    Dave Middleton (07:55:17) :
    @Steve Fitzpatrick,
I just realized my post sounded like criticism of your work. I didn’t mean it that way. My point was that maybe the “285” in the equation should be a bit higher.

    What are your thoughts on Spencer’s assumption of equilibrium between ocean and atmosphere for CO2 at an SST anomaly of -0.2ºC?

    I don’t know. I think there are too many assumptions built into any equation that tries to derive the anthropogenic vs. natural contributions of CO2 or toward temperature changes.

    I do know that there is a lot of evidence that modern CO2 levels are wholly unremarkable compared to the Upper Pleistocene and there is absolutely no evidence that the warming from 1978-2005 was anomalous in any way, shape, fashion or form.

  127. Ferdinand Engelbeen (13:23:52) :

    A circular argument if there ever was one:

    The largest seasonal changes are in Barrow and Alert in May/June: 6 ppmv/month or about 0.2 ppmv/day, borderline measurable over a day for these two stations, for all other stations not detectable.

Thus we have CO2 levels which should be near stable over 24 hours, if they were real background data. In general it is not even necessary to have a value each hour; once every week would be sufficient for monthly and yearly averages… Can you imagine a temperature measurement that is that stable?

    Thus anything that deviates from the straight line over a day of CO2 measurements is an outlier and suspected to be contaminated by local sources/sinks.

    You can say the same about temperatures between two days, if you make the C scale large enough. You can eliminate any variations in trends with this sort of argument and throw away outliers.

    The other way out is more interesting: The data of Giessen (Germany) play an important role in Beck’s historical “peak” around 1942. Giessen now has a modern station, providing 1/2 hour average CO2 measurements. Compare a few typical days at Giessen with a few typical days at Barrow, Mauna Loa and the south pole:

    Where the BRW, MLO and SPO data are unfiltered and contain all local outliers (!) and the Giessen data show the normal diurnal variation in summer, largely depending on wind speed. What do you think will happen if one uses the Giessen data as “global” (which is what Beck did)? Seems the equivalent of measuring temperature on an asphalted parking lot in Arizona in summer…

But that is exactly my argument: CO2 is not uniform over the world. You should not use Giessen, and you should not use Mauna Loa, as representative of global CO2. Neither is. Each is just a local measurement, even presuming it is correctly corrected.

Beck’s compilation highlights that CO2 is not uniform over the world, by showing us more data than the sanitized ones of Keeling (I think he is in all publications of global data).

BTW, you should measure the Arizona parking lot temperature, but not treat it as global. It should be properly integrated into a global temperature, if it is possible to do this. Certainly that is what satellites are doing.

  128. Ferdinand’s plot http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_background.jpg is very interesting.

1) it highlights the need for checking like that done for the surface stations by Anthony: whether the instruments are situated away from machine outlets, for example, etc.

2) it tells us that CO2 where we live can be twice the PC level, so why is there no runaway warming in Giessen during the day? So much CO2 should have turned it tropical. After all, the greenhouse gases work on the column according to density; they do not make a blanket at Mauna Loa height.

    3) The basic premise that CO2 should be constant is refuted, because measurements show it is not. “Should” has no place in research. It is a wrong premise on which to hang global CO2 measurements.

    I wait for the Japanese satellite measurements, since the US satellite was lost.

  129. I am on a roll this Sunday, to avoid housecleaning :).

    Let’s think about the Arizona parking lot. The Sahara is a huge parking lot for temperatures. Should we exclude it from global averages? We get 45 °C winds from the Sahara some years in Greece, and generally if the wind blows from there, we are hot. Should we stop measuring temperatures in Greece too? Correct for the winds from the Sahara?

    The whole global measurement of quantities needs a clear rethink from scratch, and the leaders in the field are doing a disservice to their science, and to science in general, by not organizing a workshop with their best brains to work on these inconsistencies.

  130. bill (02:05:30) :

    You say:
    Barrow and Resolute have the least filtered changes (sharpest dips) and so the July / August dip must be caused near these latitudes.

    In your analysis of the Northern Hemisphere high latitude CO2 monitoring stations, I suggest that you also look at the data from Zeppelinfjellet in Svalbard and the Summit Station in Greenland. Notice that these three high latitude stations all show the same signal, and that the signal is independent of elevation. Next obtain a time record of Arctic Ocean sea ice areal extent for the same period as your CO2 data. Plot the sea ice data for the Arctic Ocean area only (the ocean has an area of 13.23 M sq km, so assume that the ocean is totally frozen when the NH winter sea ice extent exceeds the area of the ocean). Scale the plot of your capped sea ice areal extent (I suggest you try 13.23 M sq km equates to 383 ppm for spring 2004, for example) and plot this curve with the CO2 time data for Barrow, Zeppelinfjellet & Summit.

    There appears to be a relationship between Arctic Ocean sea ice areal extent and the strength of the NH summer CO2 drawdown signal.
    Interesting No?
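
    The capping-and-scaling recipe above can be sketched in a few lines of Python. This is a hedged illustration of the proposed overlay, not the actual analysis: the 5 ppm per million sq km scale factor is an arbitrary starting value to be tuned by eye, and no real sea-ice or CO2 data are included.

```python
# Sketch of the proposed overlay: cap NH sea-ice extent at the Arctic Ocean
# area, then map it onto a CO2 axis anchored at 383 ppm for spring 2004.
ARCTIC_OCEAN_AREA = 13.23  # million sq km; ocean treated as fully frozen above this
ANCHOR_EXTENT = 13.23      # million sq km, the suggested anchor point
ANCHOR_CO2 = 383.0         # ppm, the suggested spring-2004 anchor

def cap_extent(extent_mkm2):
    """Cap NH winter sea-ice extent at the Arctic Ocean's own area."""
    return min(extent_mkm2, ARCTIC_OCEAN_AREA)

def extent_on_co2_axis(extent_mkm2, ppm_per_mkm2=5.0):
    """Map capped extent onto the CO2 axis; ppm_per_mkm2 is a free scale
    factor to be adjusted when overlaying the curves on one chart."""
    return ANCHOR_CO2 + (cap_extent(extent_mkm2) - ANCHOR_EXTENT) * ppm_per_mkm2
```

    Plotting the mapped extent series alongside the Barrow, Zeppelinfjellet and Summit CO2 series then gives the suggested visual comparison of drawdown strength against open-water area.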

  131. Ron de Haan (07:23:25) :

    Without any comment:
    1. http://www.rightsidenews.com/200905224868/energy-and-environment/northern-ireland-climate-change-committee-hears-evidence-no-man-made-global-warming.html

    Ron, I was very surprised (and pleased) by this. The information seems to come from Dr Shreuder himself
    http://www.tech-know.eu/NISubmission/

    I would very much doubt that we’d hear anything about it through official or semi-official channels here (although our much criticised skeptic Environment Minister Sammy Wilson will feel vindicated). NI has a lot to gain from its high research intensity on renewables as 95% of our energy is imported, and climate change and all that this brings with it is a driver for this.

  132. anna v, 23-05-2009 (21:23:00) :

    You can say the same about temperatures between two days, if you make the C scale large enough. You can eliminate any variations in trends with this sort of argument and throw away outliers.

    The difference between CO2 and temperature behaviour at any point on earth is enormous: there is very little variation, even over a short time span, in CO2 levels at any point on earth, as long as you are sufficiently far away from huge sources and sinks. In general that is everywhere above the inversion layer and above sea level.

    While the oceans release and absorb a lot of CO2, continuously at some places (equator and poles) and seasonally at the mid-latitudes, the speed of exchange is slower than the mixing with the overlying air. This gives a nice, even, flat CO2 level everywhere measured over the oceans, for the current levels as well as for the historical ones.

    That doesn’t mean that there are no variations at all. But there are no diurnal variations, only a (relatively) slow change over the seasons, more in the NH than in the SH, and a slight increase over a year. The differences in yearly averages over latitudes and altitudes are small: less than 2 ppmv (filtered or not) within a hemisphere and less than 5 ppmv between the hemispheres, with a similar trend of 60+ ppmv since the south pole measurements started.

    In contrast, temperature shows huge diurnal variations, day by day variations and huge differences with altitude and latitude.

    Thus it is possible to use weekly samples of CO2 at any place, away from huge local sources/sinks: that will give the same CO2 trend as measurements done at any shorter time frame… But the shorter time frame gives us the luxury to select these data which are certainly not contaminated by local sources/sinks…

    No place on earth (except, in general, the south pole) is 100% contamination free (and even the south pole encounters severe mechanical problems at -80°C in an ice storm). Thus, with a priori selection criteria, it is possible to select only those data which are of interest. At Schauinsland (Germany), for example, not a baseline station, only 10% of the data were selected (above the inversion layer, sufficient wind speed). For Mauna Loa about 60% of the data are used for averaging (that is still some 5,000 datapoints per year, more than sufficient to see the real trend).

    But that is exactly my argument: CO2 is not uniform over the world. You should not use Giessen, and you should not use Mauna Loa, as representative of global CO2. Neither is. Each is just a local measurement, even presuming it is correctly corrected.

    Beck’s compilation highlights this (CO2 is not uniform over the world) by showing us more data than the sanitized series of Keeling (I think his data appear in all publications of global data).

    BTW, you should measure the Arizona parking-lot temperature, but not treat it as global. It should be properly integrated into a global temperature, if that is possible. Certainly that is what satellites are doing.

    Once again, CO2 levels are quite uniform over the whole world in over 95% of the atmosphere if averaged over a year. They are not uniform in the 5% of the atmosphere over land below about 500 m. Why should you make it difficult for yourself, with the same problems as for temperature measurements, by averaging CO2 levels from problematic places, if measurements at a few places far away from local sources/sinks give you the values and trends in 95% of the atmosphere?

    See what happens with CO2 levels over land with the Cabauw (The Netherlands) tall tower intake:

    At 200 m height, the variability is already much lower and above 1,000 m mostly gone:

    So, don’t use the Giessen data at all for “global” inventories, nor the Arizona temperature data for global temperatures. The historical Giessen data, as well as other land based CO2 data, are simply worthless for an interpretation of global CO2 levels: the diurnal and day by day measurements are far too irregular (one sigma = 66 ppmv!), being influenced by local/regional forests, towns and industry. Only by looking at the lowest values at high wind speed can one get an impression of the real background levels of that time, which fall within the range of the ice core CO2 measurements:

    The most important problem with Beck’s data is that not one series is longer than a few years; many are single-spot measurements or a few weeks of data, at places of widely differing suitability for a global survey. Beck’s combination of observations without any assessment of suitability is the equivalent of using a series of temperature measurements taken in Oslo (with stringent quality control), followed by a series from Rome taken on a hot asphalt roof, followed by mid-winter measurements from Helsinki. His interpretation then is that there was a peak in global temperature in the middle of the trend…

  133. Ferdinand Engelbeen (04:36:17) :

    We will not agree on this. I do not defend Beck’s time series; just that he has illuminated the differences in CO2 values from the politically correct ones.

    Humans live below 500 meters and below the inversion layer. Either we are interested in temperature and CO2 where we live, or we go to 5000 meters with satellites and measure temperature and CO2 there.

    To take temperatures below 500 meters and CO2 at 5000 meters is completely mixing apples and oranges for global averages.

    The satellite temperatures as seen in http://discover.itsc.uah.edu/amsutemps/ have little to do with the temperatures on the land, the ones we live through. The same is true for CO2 up so high.

    As I said, global quantities have to be rethought from the beginning.

  134. p.s.
    I also think that talking of avoiding sources and sinks is bizarre. It is like saying we should avoid the sun and the oceans for temperature measurements.

  135. anna v (07:01:54) :

    p.s.
    I also think that talking of avoiding sources and sinks is bizarre. It is like saying we should avoid the sun and the oceans for temperature measurements.

    Anna, nobody says that we shouldn’t measure CO2 at all places. In fact it is done, there are over 400 stations on land where CO2 levels and fluxes are measured, that latter at different heights, in some cases including airplanes. But these have a specific task: trying to measure how much CO2 is absorbed or released at certain places and if possible over larger regions. See e.g.:
    http://www.chiotto.org/workplan.html

    I have used David Archer’s Modtran program ( http://geosci.uchicago.edu/~archer/cgimodels/radiation.html ) to see how much absorbance a doubling of CO2 has in the first 1,000 m, compared to the full 70 km air column. It was on the order of 10% (0.3 W/m2). As the difference in CO2 level exists only in the first few hundred meters over land, and averages some 50 ppmv near the ground, the influence on temperature (if any…) is negligible.

    The current and historical levels measured over land on average give an increase of less than 1 ppmv if taken together with the rest of the atmosphere for global averages, assuming 95% of the atmospheric CO2 is reflected in ice cores for the historical values. Why should one bother about the global impact of local variability (except to know more about the fluxes of uptake and release)?

    And there was no political interest at all when Callendar put forward his criteria for the selection of historical CO2 measurements; most people then saw more CO2, and thus higher temperatures, as beneficial… He deselected CO2 data like those of Giessen and Poona (rice and soy fields) because of their purpose for agricultural tests. Well, he was right, as the ice core CO2 measurements revealed 50 years later.

  136. anna v (07:00:00) :
    Ferdinand Engelbeen (04:36:17) :

    We will not agree on this. I do not defend Beck’s time series; just that he has illuminated the differences in CO2 values from the politically correct ones.

    Humans live below 500 meters and below the inversion layer. Either we are interested in temperature and CO2 where we live, or we go to 5000 meters with satellites and measure temperature and CO2 there.

    To take temperatures below 500 meters and CO2 at 5000 meters is completely mixing apples and oranges for global averages.

    The satellite temperatures as seen in http://discover.itsc.uah.edu/amsutemps/ have little to do with the temperatures on the land, the ones we live through. The same is true for CO2 up so high.

    That is your fundamental error: the temperature where we live is governed by what happens at the top of the atmosphere, which in turn does depend on the CO2 concentration there.

    As I said, global quantities have to be rethought from the beginning.

    It is you who needs to do some rethinking since your understanding of the physics of the atmosphere seems to be lacking.

  137. Phil. (09:49:41) :

    That is your fundamental error: the temperature where we live is governed by what happens at the top of the atmosphere, which in turn does depend on the CO2 concentration there.

    Can you estimate the change in the total CO2 optical thickness due to the changes at the top of the atmosphere?

  138. I do happen to be a physicist, [sarcasm on] and I am sure that the one molecule of anthropogenic CO2 among the rest of the million, at the density of air at the top of the atmosphere, will surely heat up and boil the earth. [sarcasm off] It is the CO2 in the column of air that has any real greenhouse effect, and it is denser at the lower parts, not up high in the sky.

    Now of course, if you mean clouds and the albedo or heat-trapping they generate, that is another story, but it has nothing to do with CO2 at the top of the atmosphere. Temperatures at 5 km are correlated with surface temperatures, but are not their cause, either.

  139. anna v (10:42:08) :

    I do happen to be a physicist, […] It is the CO2 in the column of air that has any real greenhouse effect, and it is denser at the lower parts, not up high in the sky.

    Anna, as said (and calculated) before, the variability of CO2 in the first few hundred meters above land has negligible influence on the IR absorbance of CO2 in the total air column at the same place. And it has negligible influence on the yearly average CO2 levels in 95% of the atmosphere.

    I don’t understand why you insist that it is necessary to measure and average all the garbage levels of CO2 near the ground over land, if the influence is negligible on near-global levels, where the yearly average mixing ratio is nearly the same from the sea surface to 12,000 m (and higher), and above 1,000 m over land, from the north pole to the south pole.

  140. Steve Fitzpatrick (20:02:54) :

    Those of us who honestly doubt the catastrophic climate forecasts made by climate models/modelers must be willing to acknowledge that some of the data and assumptions that go into the climate models are reasonable (CO2 will continue rising due to burning fossil fuels), and some are rubbish (half of future warming is already “in the pipeline”). Some of the predictions are reasonable (CO2 and other infrared absorbing trace gases should increase the average temperature of the earth by some amount due to their radiative effects), and some are rubbish (the average temperature will increase 4C in 90 years, and sea level will rise by several meters).

    I think it more important to discredit the rubbish, not the reasonable.

    I fully agree! Too many discussions in sceptic circles in recent times are focused on the reasonable, like whether humans are responsible for the increase of CO2 in the atmosphere, while there is a lot of evidence that humans are responsible (and no evidence at all to the contrary). That undermines the credibility of all sceptics on items where the whole AGW theory is on shaky ground, like the (weak) influence of aerosols, and thus the (weak) influence of GHGs, and the negative feedback caused by clouds, while all models see clouds as a positive feedback…

  141. Re: anna v (10:53:51)

    Anna, the truth is you can copy, paste, & plot data in Excel in mere seconds. I’ve guided hundreds of online students through the basics. It’s a breeze (the students never believe it at first), but they get all the basics (for all the basic types of graphs) in a single sitting. I don’t mind coaching in small increments…

    Scatterplot example:
    1) Find the data-webpage you want. For example:
    ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_gl.txt
    2) Scroll down to the data.
    3) Select & copy the data (just like you select & copy text).
    4) In Excel – using menus: Edit > Paste Special > Text > OK.
    5) Data > Text-to-Columns > Next > Finish.
    6) Highlight columns C & D (i.e. date & CO2, in this example).
    7) Insert > Chart > XY (Scatter) > Finish.

    This will produce a (rather unaesthetic) graph in less than one minute.

    By double-clicking and right-clicking all over the graph (in every conceivable place) you will discover that cosmetic adjustments are an absolute breeze.
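
    For readers who prefer scripting, the same steps can be done in Python. The column layout assumed below (year, month, decimal date, monthly mean, trend) is my reading of the NOAA file; check it against the '#' comment lines at the top of co2_mm_gl.txt before relying on it, and the sample rows are placeholders in that format, not real values.

```python
# Parse whitespace-delimited NOAA monthly-mean CO2 text into two parallel
# lists ready for a scatter plot, skipping blank lines and '#' comments.
def parse_noaa_co2(text):
    dates, co2 = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        dates.append(float(fields[2]))  # decimal date column
        co2.append(float(fields[3]))    # monthly mean CO2, ppm
    return dates, co2

# Two illustrative rows in the file's format (placeholder values):
sample = """\
# year month decimal_date average trend
1980   1  1980.042  338.45  338.21
1980   2  1980.125  339.06  338.31
"""
dates, co2 = parse_noaa_co2(sample)
# matplotlib.pyplot.scatter(dates, co2) then reproduces the Excel chart.
```

    Downloading the file and passing its contents to parse_noaa_co2 replaces steps 1-6 above; a single scatter call replaces step 7.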

  142. Re: Philip Mulholland (01:48:09)

    Very stimulating post – thank you.

    Cautionary Notes:
    1) The arctic is ringed by a massive boreal forest.
    2) Since a number of relevant variables follow seasonal cycles, it will be tricky ascribing proportions of causation (due to all of the confounding).

    Certainly this is an interesting puzzle to work on …

    – – –
    Re: Steve Fitzpatrick (20:02:54) & Ferdinand Engelbeen (14:13:44)

    I share your concerns about attacking the real flaws rather than unreal ones. With power comes responsibility.

    – – –
    Ferdinand Engelbeen (04:36:17) “[…] there is very little variation even over a short time span for CO2 levels for any point on earth, as long as you are sufficiently far away from huge sources and sinks. In general that is everywhere above the inversion layer and above sea level.”

    “above sea level” includes “above the inversion layer”
    This is the whole atmosphere.

    Did you intend something more like, “In general that is everywhere not at the inversion layer and not at sea level.”?

    Clarification (via web-link(s) perhaps) of the vertical structure of mixing dynamics will be appreciated.

  143. Raising temperatures means retaining more heat. To do that one has to raise the heat capacity. Since the atmosphere is a macroscopic phenomenon, I refuse to believe that in energy terms it is not following thermodynamics. Total heat retained depends on density and volume. That is the only way, in thermodynamic terms, that CO2/H2O can increase temperatures in the lower atmosphere. Hand-waving about photons and practically one anthropogenic CO2 molecule in a million up high in the rarefied atmosphere is just that, hand waving, and the reason that, as expounded in another thread, the GCM models’ predictions for the behavior of the troposphere and stratosphere do not work. Thermodynamics trumps in volumes.

  144. Paul Vaughan (14:34:32) :

    Thanks for the tutorial.

    In my youth I was known as the fastest histogram producer in the lab, and that was when we carried cards to the computer room.

    My reluctance to get hands on the data probably stems from laziness in getting into the real nitty gritty of errors and such.

  145. Paul Vaughan (14:34:32) :

    Re: anna v (10:53:51)

    Anna, the truth is you can copy, paste, & plot data in Excel in mere seconds. I’ve guided hundreds of online students through the basics. It’s a breeze (the students never believe it at first), but they get all the basics (for all the basic types of graphs) in a single sitting. I don’t mind coaching in small increments…

    Scatterplot example:
    1) Find the data-webpage you want. For example:
    ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_gl.txt
    2) Scroll down to the data.
    3) Select & copy the data (just like you select & copy text).
    4) In Excel – using menus: Edit > Paste Special > Text > OK.
    5) Data > Text-to-Columns > Next > Finish.
    6) Highlight columns C & D (i.e. date & CO2, in this example).
    7) Insert > Chart > XY (Scatter) > Finish.

    Finally I was tempted. Step 7 did not work: it just gives two points, with irrelevant values.

  146. Paul, it worked in my Office 2000 Excel on my laptop. On the more recent Office at home, my daughter is still trying to see what is wrong.

  147. Anna, the steps are all correct. However, Excel has a nasty habit of choosing the type of plot itself if it finds data it doesn’t like.

    Step 7 would better be
    Select the data you want to plot (NOT the whole columns). Ensure that there are no text entries apart from the column titles in your selection. Also there should be no blank entries immediately under the titles.

    Sometimes Excel will swap what is plotted against what. Right-click the graph and select [Select Data], then click swap row/col. This may fix it!

  148. anna v (22:05:11) :
    Raising temperatures means retaining more heat. To do that one has to raise the heat capacity. Since the atmosphere is a macroscopic phenomenon, I refuse to believe that in energy terms it is not following thermodynamics. Total heat retained depends on density and volume. That is the only way, in thermodynamic terms, that CO2/H2O can increase temperatures in the lower atmosphere. Hand-waving about photons and practically one anthropogenic CO2 molecule in a million up high in the rarefied atmosphere is just that, hand waving, and the reason that, as expounded in another thread, the GCM models’ predictions for the behavior of the troposphere and stratosphere do not work. Thermodynamics trumps in volumes.

    As I stated above, you appear to have no understanding of the physics of the atmosphere, your being a physicist notwithstanding. The influence of CO2 has nothing to do with its contribution to the atmosphere’s heat capacity; rather, it is due to the enhanced rate of radiative heat transfer to/from the atmosphere.

  149. Paul Vaughan, 24-05-2009 (16:58:48) :

    “above sea level” includes “above the inversion layer”
    This is the whole atmosphere.

    Did you intend something more like, “In general that is everywhere not at the inversion layer and not at sea level.”?

    Clarification (via web-link(s) perhaps) of the vertical structure of mixing dynamics will be appreciated.

    Sorry, with my school English I sometimes mix things up… “Sea level” in this case means from the surface up, everywhere over the oceans, except if there is an inversion, which is quite rare over the oceans. But even then, the exchange rate between the oceans and atmosphere (positive in the tropics, negative near the poles) is slow enough to show little variation in the atmosphere from the poles to the equator (about 10 ppmv, measured by ships). With the slightest wind, there is fast mixing with the air layers above.

    That is different over land: at night there is often an inversion at a few tens to a few hundreds meters, especially in shielded valleys and at low wind speed. Then we see a buildup of CO2 from plant respiration and human sources (heating, traffic, industry), while during the day more sunshine/wind increases turbulence and we see that the CO2 levels drop to or slightly below “background”. Averaging continuous measurements for daily values thus gives a positive bias, as the buildup at night is fully seen, but the uptake by plants during the day is dispersed over more air layers…
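
    The averaging bias described here can be made concrete with a toy calculation. All numbers below are my own schematic placeholders (a 40 ppmv nocturnal buildup, a 4 ppmv daytime drawdown), chosen only to show the sign of the effect, not to match any station:

```python
# Toy illustration of the positive bias: a ground-level sensor sees the full
# nocturnal buildup under an inversion, but daytime plant uptake is diluted
# through a deep mixed layer, so the 24-hour average sits above background.
BACKGROUND = 380.0  # ppmv, assumed regional background

def hourly_ground_co2(hour):
    """Schematic diurnal cycle at a ground station under a nocturnal inversion."""
    if 20 <= hour or hour < 6:      # night: respiration trapped below the inversion
        return BACKGROUND + 40.0
    return BACKGROUND - 4.0         # day: uptake spread over a deep mixed layer

samples = [hourly_ground_co2(h) for h in range(24)]
daily_mean = sum(samples) / len(samples)
bias = daily_mean - BACKGROUND      # positive: the daily average overstates background
```

    Even though the daytime dip is real, the trapped nighttime buildup dominates the 24-hour mean, which therefore sits well above the regional background.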

    In Diekirch, Luxemburg, the local weather station made CO2 measurements averaged over half hours during a few years. The results are here:
    http://meteo.lcd.lu/papers/co2_patterns/co2_patterns.html with a lot of interesting findings.

    Vertical profiles were done in Scandinavia and other places (even pre-Mauna Loa, but with physical impossibilities: higher levels at higher altitude than near ground) by e.g. Bert Bolin over the oceans (including regular commercial flights), see:
    http://www.icsu-scope.org/downloadpubs/scope13/chapter03.html Fig. 3.2
    This Scope paper includes more interesting items.

    Modern flights for vertical profiling (mostly over land) were done at several locations, co-located with tall towers (200 m); a good start is at:
    http://www.esrl.noaa.gov/gmd/ccgg/iadv/ where you can compare ground level and tall tower/airplane data
    Over the ocean, there is one in the above at American Samoa at ground level and nearby Cook Islands for regular flight measurements.
    Individual projects are in The Netherlands and other places:
    http://www.chiotto.org/cabauw.html

    And of course, we have the satellite data, averaging over mid-troposphere:
    http://airs.jpl.nasa.gov/story_archive/Pre-Release_CO2_Data_Available/
    It would be interesting to compare the calculations of Bolin and Keeling of 1963 (Fig. 3.3 in the Scope paper) with the AIRS satellite data…

  150. The increase in CO2 prior to AIRS (and even now) is the result of model output, with some of the data literally coming from receipts at the gas pump. The sinks are nearly ALL modeled inputs. Therefore the outcome of this notion of increased CO2 is an assumption, not data. The AIRS data do indeed show that CO2 has increased overall since measurements first started. However, it is a HUGE jump to say that CO2 is increasing long term; it could just be in an oscillation phase. It is another HUGE jump to say that the increase is human-caused. The validity is weak. The current model of increasing CO2 driven by humans has yet to be proven valid (does it measure what it claims to measure?). The AIRS satellite doesn’t give a rat’s ass about the source of CO2 increases (or decreases). The reliability (can it be replicated?) is also weak, in that sinks are not totally understood and different models of sources and sinks can produce different CO2 numbers. The number often quoted as showing that human-driven CO2 is increasing is a hypothesis, not a theory. It may become one, but I don’t see proof of it yet.

    Let us HOPE beyond hope that the CO2 numbers being spit out by AIRS are not “massaged” data that include modeled sinks prior to publication. We kinda don’t like that here at WUWT.

  151. Pamela Gray (15:25:59) :

    Yes.

    Let us HOPE beyond hope that the CO2 numbers being spit out by AIRS are not “massaged” data that include modeled sinks prior to publication. We kinda don’t like that here at WUWT.

    In this day and age of computers, and of grants flowing for climate change models, one becomes suspicious, particularly as it took so long for the data to come out.

    It reminds me of a children’s shadow play, Karagiozis killing the dragon:
    “Come forth curse’d snake
    If you don’t come forth
    I’ll come forth and come forth you”

  152. Phil. (07:27:26) :

    anna v (22:05:11) :
    Raising temperatures means retaining more heat. To do that one has to raise the heat capacity. Since the atmosphere is a macroscopic phenomenon, I refuse to believe that in energy terms it is not following thermodynamics. Total heat retained depends on density and volume. That is the only way, in thermodynamic terms, that CO2/H2O can increase temperatures in the lower atmosphere. Hand-waving about photons and practically one anthropogenic CO2 molecule in a million up high in the rarefied atmosphere is just that, hand waving, and the reason that, as expounded in another thread, the GCM models’ predictions for the behavior of the troposphere and stratosphere do not work. Thermodynamics trumps in volumes.

    As I stated above, you appear to have no understanding of the physics of the atmosphere, your being a physicist notwithstanding. The influence of CO2 has nothing to do with its contribution to the atmosphere’s heat capacity; rather, it is due to the enhanced rate of radiative heat transfer to/from the atmosphere.

    I have to reply to this, because it is climate modelers who do not understand physics.

    Physics theory has many axiomatic systems based on solid data. They partially overlap.

    Classical mechanics and quantum mechanics.
    Newtonian mechanics and general relativity.
    Thermodynamics, statistical mechanics, and quantum statistical mechanics.

    Thermodynamics knows nothing of statistical mechanics or quantum statistical mechanics. It works very well macroscopically (which is what weather and climate are: macroscopic observations) and includes all macroscopic radiative effects, viz. black-body radiation etc. Thus, within thermodynamics, the atmosphere can be fully described, because there is no need to involve quantum mechanics (no coherence, and it is macroscopic). Heat capacity is the way thermodynamics describes the ability of matter in bulk to retain energy in kinetic and radiative form.

    Theoretically one could describe a bulk system by quantum statistical dynamics, but this means that the whole caboodle would have to be consistently described with statistical ensembles, probability functions and minimizations, an impossible task.

    Climate theorists have taken the knowledge gained from quantum studies of how molecules interact with energy (useful to know, useless in bulk) and made a mixture of a pure thermodynamic background with quantum statistical joints. This cannot be done consistently, and it is one reason, among others, why the models fail. CO2 is distributed in the bulk, and is not sitting up in the troposphere or wherever playing ball with photons, nor is H2O. The photons it plays ball with come from the bottom up, and there is much less matter the higher one goes.

    Would you make a hot water bottle filled with CO2 at atmospheric pressure?

    at troposphere pressure?

  153. Pamela Gray (15:25:59) :

    Pamela and Anna V,

    Sorry, but what you are telling now is pure fiction. The CO2 data at Barrow, Mauna Loa, the south pole and some 70+ other places on earth, plus flight measurements and buoys and ships over the oceans, all far away from huge sources and sinks, are measurements, not the outcome of any model. The average measurement error is better than 0.1 ppmv for one series, and parallel series from flask measurements at the same place are within 0.12 ppmv. The increase at the south pole (the oldest series) since 1959 is 60+ ppmv. All other stations and flights, representing over 95% of the atmosphere, show the same increase over time. Why do you think that the increase is the result of a model?

    The only place where a simple “model” is used is when at any station there is a lack of data, due to mechanical problems or volcanic eruptions, etc. Then one uses a curve fitting algorithm that uses the seasonal curve of the previous three years + the remaining good data of a month to estimate the monthly average. In that case the hourly data are flagged with an *A* flag. In all cases both the real measurements (if available) and the calculated trend are presented in the tables, but when possible only real data are used for daily, monthly and yearly averages. Even if at one station there is a problem with the data, the other stations simply go on with monitoring and show the usual, emissions related increase, modulated by temperature variations.
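
    A rough reconstruction of that kind of gap-filling in Python. This is my own sketch of the idea (seasonal anomaly averaged over the previous three years, added to the current year’s remaining good data), not NOAA’s actual curve-fitting algorithm, and the helper name is invented:

```python
# Estimate a missing monthly mean from the prior three years' seasonal pattern
# plus the current year's good data. Assumes the three prior years are complete.
def fill_missing_month(monthly, year, month):
    """monthly: dict mapping (year, month) -> CO2 ppm; missing months absent.
    Returns an estimate for the given (year, month)."""
    # Seasonal anomaly of the target month relative to each prior year's mean
    anomalies = []
    for y in (year - 1, year - 2, year - 3):
        vals = [monthly.get((y, m)) for m in range(1, 13)]
        if None in vals:
            continue  # skip incomplete prior years
        anomalies.append(monthly[(y, month)] - sum(vals) / 12.0)
    # Mean of this year's remaining good data as the base level
    good = [v for (yy, m), v in monthly.items() if yy == year]
    return sum(good) / len(good) + sum(anomalies) / len(anomalies)
```

    On a synthetic series with a steady trend and a fixed seasonal bump, the estimate lands within about 1 ppmv of the true value; the small residual comes from the missing month biasing the current-year mean.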

    The emissions all are based on fossil fuel production/sales inventories, which are kept under supervision of the tax income departments. I don’t think that these are interested in underestimating the sales, but some under the counter sales may give a slight underestimate… Every type of fuel has its own efficiency when burned. That is used to estimate the CO2 emissions + cement manufacturing + forest clearing. All together, the human emissions are about twice the measured increase of CO2 in the atmosphere over the past 150 years.

    This all has nothing to do with any model or any detailed knowledge of the carbon cycle: if you add twice the amount that shows up as the increase in the atmosphere, one can be sure that it is the addition which is responsible, and that nature as a whole is a net sink for human CO2, no matter where it is absorbed.
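
    The arithmetic of that argument fits in three lines. The GtC figures below are round illustrative numbers of roughly the magnitudes discussed, not official inventory values:

```python
# Simple one-box mass balance: whatever part of the emissions does not show
# up as an atmospheric increase must have been absorbed somewhere in nature.
emissions = 8.0          # GtC/yr, fossil fuel + cement (illustrative round number)
atmospheric_rise = 4.0   # GtC/yr observed increase (~2 ppmv/yr; 1 ppmv is about 2.1 GtC)
natural_net_flux = atmospheric_rise - emissions  # GtC/yr; negative = net natural uptake
```

    natural_net_flux comes out negative, so oceans and biosphere together are a net sink, regardless of how the uptake is distributed among them.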

  154. In addition,

    I have plotted both the raw hourly data of Mauna Loa without any selection criteria, thus including all outliers for 2004:

    and with the usual selection criteria:

    For 2004, 8784 hourly average data should have been sampled, but:
    1102 have no data, due to instrumental errors (including several weeks in June).
    1085 were flagged, due to upslope diurnal winds (which have lower values), not used in daily, monthly and yearly averages.
    655 had large variability within one hour, were flagged, but still are used in the official averages.
    866 had large hour-by-hour variability > 0.25 ppmv, were flagged and not used.

    As one can see in the trends, despite the exclusion (in the second graph above) of all outliers, the difference in trend with or without flagged data is minimal (less than 0.1 ppmv); only the number of outliers around the seasonal trend is reduced, and the overall increase in 2004 is about 1.5 ppmv in both cases.
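
    The selection rules listed above can be sketched as a filter. The record layout and field names below are my own invention; the 0.25 ppmv hour-by-hour threshold is the one quoted above, and (as noted) hours with large within-hour variability are flagged but still averaged, so they are not dropped here:

```python
# Hedged sketch of the baseline selection: drop hours flagged for upslope
# diurnal winds and hours that jump more than 0.25 ppmv from the previous
# hour; everything else goes into the averages.
def select_baseline(hours):
    """hours: list of dicts with keys 'co2' (ppmv) and 'upslope' (bool).
    Returns the CO2 values passing the baseline criteria, in order."""
    kept, prev = [], None
    for h in hours:
        ok = not h["upslope"]  # upslope diurnal wind flag
        if ok and prev is not None and abs(h["co2"] - prev) > 0.25:
            ok = False         # hour-by-hour variability flag
        if ok:
            kept.append(h["co2"])
        prev = h["co2"]        # compare against the actual previous hour
    return kept
```

    Applied to a toy day of hourly values, the filter drops the upslope hour and the large jump but keeps the quiet hours, which is why the selected and unselected trends differ so little.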

    Further, there is no need to hope that the AIRS (or any new satellite) data will give different results: the satellite data are calibrated with… station data like Mauna Loa, expanded with in-flight and balloon measurements. Just as the satellite temperature data were calibrated with surface and balloon data… Once the calculation method is established, the trends can differ, as satellites span (nearly) the whole world. But in the case of CO2, that will show much better regional resolution of sources and sinks (good for modelling the carbon cycle); it will not change the overall trends, as those are, averaged over a year, the same everywhere in 95% of the atmosphere…

  155. Paul Vaughan (17:09:00) :

    This is a rather trivial answer to a serious question.

    If you read the early Keeling reports you will find that, even on Mauna Loa, there are huge differences in CO2 concentration as winds change direction, etc. These are thrown out to give a “pure” background reading that might be specific to Mauna Loa. Yet the lower-altitude CO2 concentrations, to my knowledge, have no fine structure on an annual basis. So how can a diverse mixture of miscellaneously sourced CO2 travel globally to the South Pole (where there is no land vegetation within a radius of 3,500 km) without the annual bumps getting mixed out of existence?

    Have you ever seen movies of the winds that can blow in the Antarctic and wondered how those delicate little annual wriggles can be preserved?

    I’d call them an artistic licence, to make them more credible by resembling the (filtered) Mauna Loa set.

    Taking it further, to say that the CO2 level at the South Pole, at Barrow and at Mauna Loa is essentially the same concentration (after purification) might simply mean that places of high variability have been excluded from the data. Because CO2 is a rather dense gas, and because a lot is cycled near the ground surface, this is where I’d expect the highest CO2 concentrations in the atmosphere to be found, volcanoes excluded. This is where there will be the greatest “greenhouse effect”; this is where CO2 levels should be measured for correlation with temperature, if you have idle time to do that. Next thing, someone will be measuring CO2 at commercial jet cruise levels and correlating that with global temperatures.

    Unreal, man.

  156. Geoff Sherrington (04:28:34) :

    Geoff,

    The variability of CO2 is mainly over land in the first tens of meters, up to a few hundred meters. Not above the sea surface and not over land above 500-1000 m. See e.g. the measurements of the tall tower of Cabauw (The Netherlands) at different heights:
    http://www.chiotto.org/cabauw.html

    Even if you double the current CO2 levels in the first 1,000 m, the influence of the doubling is only 10% of the IR absorption (0.4 W/m2) over the full air column at the same place. Thus the typical bias of average 30-60 ppmv over land in the first tens of meters has little to no influence on local, regional or total land warming, and none over the oceans.

    Again, throwing out the outliers of Mauna Loa (to both sides) doesn’t influence the average increase, it only smooths the variability around the seasonal trend (see the graphs in the previous message). You can do the same work for Barrow, Samoa and the south pole data, as all the unfiltered hourly averages (calculated from 40 minutes of 10-second raw voltage sampling) are available at: ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/

    And there is a good correlation between in-flight (and station) CO2 data and SST: SST governs the variability around the increase rate of CO2. The opposite is more difficult to estimate/prove… In-flight data from commercial flights (Scandinavia-Boston) in the early 1960s showed the same values as Mauna Loa, the South Pole and other places, albeit more smoothed for seasonal variation than Mauna Loa.

  157. Because CO2 is a rather dense gas and because a lot is cycled near the ground surface, this is where I’d expect the highest CO2 concentrations to be found, ……

    Unreal, man.

    Certainly, anyone referring to the density of CO2 gas and implying that means that it should therefore be more concentrated near the ground should automatically be disqualified from a serious scientific discussion.

  158. Ferdinand Engelbeen (08:00:46) “[…] except if there is an inversion”

    That’s what I figured you meant — I wanted to be sure. Thanks not only for the clarification, but also for the great links. I am very much interested in ALL of the spatiotemporal variability at all scales …and busy digging into covariates …

    – – –
    Re: anna v (23:37:49) & (05:45:25)

    A nice thing about Excel is that it provides a number of ways to do the same thing …

    Another way to access the “Chart Wizard” (graphing) dialog-box is via a menu-button with an icon that looks like a mini-barchart.

    [If you hover your mouse over it – & pause movement – Excel will display “Chart Wizard”.]

    If you hover your mouse over Excel graphs – & pause movement – Excel will display the name of the graph feature over which you are hovering.

    If you right-click while hovering over your graph’s “Chart Area”, you can access a variety of dialog-boxes.

    To check what type of graph Excel has made:
    1) Right-click while hovering over your graph’s “Chart Area”.
    2) From the menu that appears, choose “Chart Type”.

    [You want “XY (Scatter)” for the CO2 vs. Date scatterplot.]

    A few side-notes:

    Note that there are tabs across the top of Excel dialog-boxes —- Suggested: Spend a few minutes checking out the various adjustables on each tab when you have a minute.

    And note that Excel uses letters to represent columns and numbers to represent rows.

    Next:

    To check what data Excel has (actually) graphed:
    1) Right-click while hovering over your graph’s “Chart Area”.
    2) From the menu that appears, choose “Source Data”.
    3) In the dialog-box that appears, select the “Series” tab.

    If you (accidentally) made the mistake of (for example) only highlighting cells C1 & D1 (instead of cells C1 through D351, which is what you want), you will see a blank “X Values” box and something like “=Sheet1!$C$1:$D$1” in the “Y Values” box… when what you want to see are:

    …in the X Values box:
    =Sheet1!$C$1:$C$351

    …in the Y Values box:
    =Sheet1!$D$1:$D$351

    Suggested:
    Check to see if you have a “XY (Scatter)” plot with the preceding X & Y Values.

    Just small steps… I can share some more tips later. With patience, it all works out – it always does.
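    For anyone who prefers a script to the Chart Wizard, the “check what was actually graphed” step has a scripted analogue: load the X and Y columns yourself and confirm they line up before plotting. A minimal stdlib-only Python sketch; the inline sample rows and column names are made up for illustration, not the real file layout:

```python
# Mirror of the "Source Data" sanity check above: load the X (date) and
# Y (CO2 ppmv) columns and confirm they line up before plotting them.
# The inline sample rows are made-up illustrative values, not real data.
import csv
import io

SAMPLE = """decimal_year,co2_ppmv
1958.21,315.71
1958.29,317.45
1958.38,317.50
1958.46,317.10
"""

xs, ys = [], []
for row in csv.DictReader(io.StringIO(SAMPLE)):
    xs.append(float(row["decimal_year"]))
    ys.append(float(row["co2_ppmv"]))

# The equivalent of checking =Sheet1!$C$1:$C$351 vs =Sheet1!$D$1:$D$351:
# both series must be the same length, or the scatterplot is wrong.
assert len(xs) == len(ys), "X and Y ranges differ - re-select the source data"
print(len(xs), min(ys), max(ys))
```

    Feed `xs` and `ys` to whichever plotting tool you like for the XY scatter; the point is that the length-and-range check happens before anything is drawn.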

  159. Re: Geoff Sherrington (04:28:34)

    We need to keep in mind that “‘background’ CO2 concentration at Mauna Loa” is just that.

    Questions:
    a) Have you watched the AIRS movies?
    b) Are you suggesting the South Pole CO2 record is pure fabrication?

  160. Paul Vaughan (16:58:48)

    “Certainly this is an interesting puzzle to work on …”

    Indeed so, care to join me?

    Your caveats are well founded, and I know that the standard explanation is that the “massive circumpolar boreal forest” forms the biological sink for carbon dioxide. However, when Ferdinand Engelbeen posted graphs last year showing this northern-hemisphere CO2 summer draw-down signal, I was intrigued. There are two hats I can wear, geoscience (professional) and bioscience (amateur), and I viewed this atmospheric response from my geoscience experience rather than the standard bioscience perspective. After all, thinking outside the box is what scientists are supposed to do :-)

    I decided to build on the analysis that Ferdinand presented, so I looked at CO2 data for high and low latitudes in both hemispheres, continental and oceanic locations, and high and low elevations. From inspection it is clear that there is a global pattern to these data. Consider the CO2 variation as a signal: its greatest amplitude and sharpest onset are at the highest Arctic latitudes (e.g. Barrow, Alaska), and the signal propagates south through the atmosphere; Mauna Loa, Hawaii is later in time and smaller in amplitude than Barrow, and the signal then reverses phase into the Southern Hemisphere, down to the South Pole, which has the smallest amplitude excursion of all.

    What follows is a description of my simple scoping study to establish if there is a correlatable relationship between the atmospheric CO2 summer minima at Barrow, Alaska and the extent of the Arctic Ocean open water (ice free) area.

    Establishing the open-water area of the Arctic Ocean is relatively straightforward using published sea-ice data, if we accept the idea of a progressive northward zonal melt of sea ice as the boreal summer advances. My assumption that the southerly frozen waters of the Sea of Okhotsk, Bering Sea, Hudson Bay and Baltic Sea all melt before the Arctic Ocean ice does is of course a simplification prone to error, but one that could be corrected by using a detailed latitudinal melt-history database.

    In order to determine the strength of the CO2 summer draw-down signal for Barrow, we must establish the notional carbon dioxide concentration that would exist if the summer sink were inactive. The CO2 database for Barrow extends from 1974 to 2006; using an appropriately designed filter it is possible to preserve only the winter data and discard all the summer values. The preserved winter data can then be curve-fitted, and the equation of this curve used to establish (despite the rising annual trend) the hypothetical inactive-sink summer CO2 concentration at Barrow for each of the 33 years in the record. A cross-correlation of the strength of the CO2 draw-down signal at Barrow against the Arctic Ocean open-water extent for July through November, fitted with a simple linear trend curve, has an R-squared value of 0.664.
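    The procedure just described (winter-only filter, trend fit, draw-down versus open-water correlation) can be sketched in a few lines of stdlib Python. Everything below is synthetic illustration, not the actual Barrow or sea-ice records, and a simple linear trend stands in for whatever curve fit is actually used:

```python
# Sketch of the scoping study described above: fit a linear trend to
# winter-only CO2 values, extrapolate it across the summer as a hypothetical
# "inactive sink" baseline, measure the draw-down against that baseline,
# then compute r^2 against open-water extent. All numbers are synthetic.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def r_squared(xs, ys):
    """Coefficient of determination of a linear fit of ys on xs."""
    a, b = linear_fit(xs, ys)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    my = sum(ys) / len(ys)
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Synthetic winter CO2 values (year, ppmv), rising ~1.5 ppmv/yr:
winter = [(1990, 355.0), (1991, 356.5), (1992, 358.0), (1993, 359.5)]
slope, intercept = linear_fit(*zip(*winter))

# Inactive-sink baseline at each summer, minus the observed summer minimum,
# gives the draw-down strength:
summer_min = {1990: 340.0, 1991: 339.0, 1992: 341.0, 1993: 338.0}
drawdown = {yr: (slope * yr + intercept) - summer_min[yr] for yr in summer_min}

# Correlate draw-down with (synthetic) open-water extent, in million km^2:
open_water = {1990: 7.0, 1991: 7.8, 1992: 6.5, 1993: 8.2}
years = sorted(drawdown)
r2 = r_squared([open_water[y] for y in years], [drawdown[y] for y in years])
print(f"r^2 = {r2:.3f}")
```

    The real analysis would of course use the full 33-year record and a more careful baseline curve, but the structure of the calculation is the same.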

    The amplitude of the annual Antarctic CO2 signal is small compared to the Barrow data. The South Pole CO2 record is from a high-elevation continental location; compared with the time series for Syowa, a low-elevation coastal Antarctic station, the two CO2 data sets are coincident in amplitude and phase. The simple sinusoidal form of these data matches the phase of the annual variation in areal extent of the Southern Ocean sea ice. This phase-locked match with the surrounding Southern Ocean sea-ice area suggests a local geochemical cause for the austral summer CO2 draw-down, rather than a distant land-based biochemical cause for this signal.

    Questions & Comments
    1. Why, if the Arctic Ocean is the predominant cause of the Barrow CO2 signal, is the Southern Ocean signal so weak in Antarctica?
    The Arctic Ocean is a geographical feature with a defined southern coastline, whereas the Southern Ocean is unbounded in its northern latitude. The Arctic is a “ponded ocean” and occurs at higher latitude and has lower surface water salinity than the open Southern Ocean does. The key question therefore is “Does surface water salinity affect the rate of CO2 uptake by cold ocean waters?”

    2. Is it possible to determine the total mass of CO2 removed from the Arctic atmosphere by the boreal summer draw-down?
    A comparison of the Barrow sea level data with the high altitude Greenland Summit data suggests that the summer CO2 draw-down affects the full vertical extent of the Arctic atmosphere.

    The R-squared value of 0.664, noted above, supports the idea that ice-free Arctic Ocean sea water acts as a CO2 sink during the northern-hemisphere summer, in addition to the currently recognised biological sinks.

  161. Philip Mulholland (15:16:17) :
    We seem to have been thinking along similar lines:
    A CO2 plot from many locations (with CH4 at Barrow thrown in)

    A plot of days from 1st Jan for Barrow CO2 to reach its minimum

    Note that La Jolla Pier, California has an almost identical date.
    Note also that the Barrow minima have not greatly changed over the record, despite temperature and sea-ice changes. The La Jolla data are daily; the Barrow data are hourly. Plot of the last 3 Barrow minima:

    The minima at Barrow occur not at minimum ice, not at autumn onset, and not at algal bloom time.

    If it were sea ice, then why is the signal so strong over two-thirds of the globe? Christmas Island shows a significant dip.

    South Pole station has a ripple 6 months out of phase with the NH.

    The signal is as strong in central Kazakhstan (land-locked) as at Barrow (coastal).

    I cannot explain it!

  162. Philip Mulholland (15:16:17) “Does surface water salinity affect the rate of CO2 uptake by cold ocean waters?”

    Sure – since it has a very serious effect on freezing temperature.

    Otherwise, the references the chemically-minded WUWT participants have posted haven’t (so far at least) emphasized a (strong) role for salinity in the CO2 equilibrium chains. (I’m eager to learn if anyone has any details &/or links to share to shed deeper illumination on the role of salinity.)


    Philip Mulholland (15:16:17) “Is it possible to determine the total mass of CO2 removed from the Arctic atmosphere by the boreal summer draw-down?”

    I know people who work on this, but I haven’t been following their work; however, recent WUWT threads have made me curious enough to make some enquiries next opportunity I have.

    One thing I can share is that ~15 years ago when I was involved with a research group that was modeling biogeochemical (earth-water-atmosphere) cycles, the modelers were happy if they got fluxes to within a factor of 2.

    There’s a lot of spatiotemporal variability in the field (& sampling resources are not infinite), so it is unrealistic to expect precise estimates – in many biogeophysical modeling contexts.


    I am also curious about the roles of fresh water (on land) and rocks …and wind …i.e. beyond vegetation, oceans, & temperature …but for the near term (i.e. until I have time to consult a *lot* of literature and run a lot of analyses) I’ll be content to lump everything together, with the possible exception of wind.

  163. HEAD’S UP:
    anna v’s instincts about cleansed data are justified – unbelievably so.

    Anyone curious to know more?

  164. Paul Vaughan (00:52:35) :
    Anyone curious to know more?

    Yes

    But are you saying that the raw hourly data are fake?

    I assume these are made by gas analysers.

    There are various other methods, which I have posted somewhere (Climate Audit?): 2-flask/single-flask etc.

    It is inconceivable that someone would go through 10 MBytes of hourly data to falsify the output.
    If you look at the output here:

    I have not removed their flagged errors: the vertical lines are all total failures (-99 or -999); there are other flags used to remove spurious “errors”, but these have not been acted on.

    Are you suggesting someone sat down and did this?

    On this plot I have added filtering to remove the randomness so that the plot is usable. But there is still large variability.

    Again, I would be surprised if this error-ridden data has been falsified.

    So please tell!

  165. Here is a suggested exercise – using Alert, Nunavut (formerly part of NWT = Northwest Territories – but now separate), Canada:

    Compare the following 2 (monthly-resolution) time series:
    1) http://cdiac.ornl.gov/ftp/trends/co2/altsio.co2
    2) ftp://ftp.cmdl.noaa.gov/ccg/co2/flask/month/alt_01D0_mm.co2

    Break the analysis down as follows:
    a) annual timescale – i.e. applying 12 month bandwidth moving-average.
    b) seasonal timescale – i.e. using differencing to isolate seasonal variations from the (secular) trend.

    Cautionary Notes:
    i) Be (very) careful with missing values (which are summarized differently in the 2 series).
    ii) Be sure to note any systematic patterns in the errors.
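    The two suggested breakdowns can be sketched as follows, on a synthetic monthly series (a linear trend plus a seasonal sine wave) standing in for the real Alert records:

```python
# Stdlib-only sketch of the two breakdowns suggested above, applied to a
# made-up monthly CO2 series (trend + seasonal cycle), not the Alert records.
import math

# Synthetic monthly series: ~1.5 ppmv/yr trend plus a seasonal sine wave.
series = [350.0 + 1.5 * (i / 12) + 3.0 * math.sin(2 * math.pi * i / 12)
          for i in range(48)]

# (a) Annual timescale: a 12-month moving average sums the seasonal cycle
# over a full period, cancelling it and leaving the secular trend.
annual = [sum(series[i:i + 12]) / 12 for i in range(len(series) - 11)]

# (b) Seasonal timescale: 12-month differencing compares each month with the
# same month a year earlier, so the seasonal cycle drops out and the
# year-on-year change remains; the seasonal component itself is what is left
# after subtracting the trend from the raw series.
yoy = [series[i + 12] - series[i] for i in range(len(series) - 12)]

print(f"mean year-on-year change: {sum(yoy) / len(yoy):.2f} ppmv")
```

    On this idealized series the year-on-year differences are exactly the 1.5 ppmv/yr trend; on real data, the scatter of those differences is where the interesting structure (and any systematic adjustment bias) shows up.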

    My reaction to what I found went something like this:
    “This is outrageous – unfathomably & unbelievably so!”

    The adjustments seriously & systematically distort seasonal variations.

    …So why are the files listed as “Data”?
    http://cdiac.ornl.gov/trends/co2/sio-keel.html

    The ‘adjustment’ procedures are mentioned in the files (at the bottom), but this does not change the fact that:

    Labeling model-output as “Data” is grossly misleading.

    For those who skipped the analysis, r^2 (seasonal) is less than 0.33 and model assumptions are severely violated.

    Why replace real data with loosely-related severely-seasonally-biased modeled “data”, particularly if defending the model assumptions is like asserting that a colorful polka-dot pattern is indistinguishable from solid-grey?

    If one breaks the analysis down by month, 8 of the 12 r^2s go below 0.1 and 4 even go below 0.01.

    …and earlier in the discussion attention was already drawn to “daily” data which are not actually daily, so …

    Some websites begin appearing ‘fishy’, ‘sketchy’, & untrustworthy. “Data” labeled as data should not be assumed to be unbiased representatives of data. Rigorous interrogation is clearly warranted.

  166. Paul Vaughan (20:48:22) :

    My reaction to what I found went something like this:
    “This is outrageous – unfathomably & unbelievably so!”

    The adjustments seriously & systematically distort seasonal variations.
    ……
    Why replace real data with loosely-related severely-seasonally-biased modeled “data”, particularly if defending the model assumptions is like asserting that a colorful polka-dot pattern is indistinguishable from solid-grey?

    If one breaks the analysis down by month, 8 of the 12 r^2s go below 0.1 and 4 even go below 0.01.

    Thanks for this info. So Pamela Gray (09:27:30) was right about modelling being imputed into the data.

    My suspicion flags were raised when, on a search of the bibliography, I found that all the publications were Keeling plus somebody else, most probably a graduate student. One could call it the Keeling effect. Let’s hope the Japanese do not catch it.

  167. Paul Vaughan (20:48:22) :

    Here is a suggested exercise – using Alert, Nunavut (formerly part of NWT = Northwest Territories – but now separate), Canada:

    Break the analysis down as follows:
    a) annual timescale – i.e. applying 12 month bandwidth moving-average.
    b) seasonal timescale – i.e. using differencing to isolate seasonal variations from the (secular) trend.

    —-

    OK, so try this third breakdown of the data:

    Instead of using a running 12-month “average” to smooth the data, use a “5-month seasonal” average: this month is averaged against last year’s months (month-2, month-1, month, month+1, month+2)

    Too much impact of just last year’s influence?

    Alternative 4: Add one more year to the check:

    Smooth this month with (year-2: month-1, month, month+1) and (year-1: month-1, month+1)
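    One possible reading of the “5-month seasonal” smoother proposed above, sketched on a synthetic series; both the exact window and the sample data are my assumptions, since the comment leaves the recipe open:

```python
# One reading of the "5-month seasonal" smoother proposed above: average the
# current month with the five months centred on the same month one year
# earlier. The interpretation of the window and the sample series are both
# assumptions, not a definitive implementation of the suggestion.
import math

# Synthetic monthly series: trend plus seasonal cycle (not real station data).
series = [350.0 + 1.5 * (i / 12) + 3.0 * math.sin(2 * math.pi * i / 12)
          for i in range(48)]

def seasonal_smooth(values, t):
    """Average month t with months t-14 .. t-10 (last year, month +/- 2)."""
    window = [values[t]] + [values[t - 12 + k] for k in (-2, -1, 0, 1, 2)]
    return sum(window) / len(window)

# The smoother needs 14 months of history, so start at t = 14:
smoothed = [seasonal_smooth(series, t) for t in range(14, len(series))]
print(f"first smoothed value: {smoothed[0]:.3f}")
```

    Because the window leans on the previous year, this smoother keeps the seasonal phase aligned rather than flattening it the way a running 12-month mean does, which seems to be the point of the suggestion.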

  168. Paul Vaughan (01:42:05) :

    Philip Mulholland (15:16:17) “Does surface water salinity affect the rate of CO2 uptake by cold ocean waters?”

    OK, so look at it this way: we know that real-world (air!!!!) temperatures have “only” gone up by 1/4 of one degree, and 1/2 of one degree at maximum in 1998.

    So, given the area of the ocean, assume the ocean went up an equal amount. (Or get actual ocean near-surface (top 10 meters) temperature data for every ten years since 1945 from anti-submarine records of the Atlantic and Arctic.)

    Can the real-world changes in real-world ocean surface temperatures explain the measured change in CO2 levels? If not, what is the difference? Is the measured difference in CO2 concentration GREATER or LESS than what is predicted from the change in ocean temperatures, and, if it is, can that difference be CALCULATED from what is known about man’s ACTUAL carbon mining and drilling?

    (No “guesses” or “assumptions” about jungle-clearing effects – cutting trees to clear land only exposes new land for growing NEW trees, vegetation and grass on the newly cleared land.)

  169. Re: RACookPE1978 (10:10:33)
    It depends on what one is investigating. For example, to look at annual timescale, use 12mo time-integration. (In detailed analyses, all timescales are investigated.)

    Re: RACookPE1978 (10:19:02)
    Clarification: Philip was asking about the effect of salinity.

    Re: anna v (09:31:57)
    The real data is available online too …but one has to be careful because some websites have posted modeled “data”. Watch out for seasonal structure that looks “too perfect”. In the example I gave, the NOAA data ‘seems’ real, but the CDIAC “data” is a ridiculous representation of seasonal variation.

  170. correction: data ‘are’ & ‘seem’ (not ‘is’ & ‘seems’) [Data – plural – (vs. 1 datum)]
