Loehle on Hofmann et al and CO2 trajectories

Craig Loehle sends word of a new publication that looks at CO2 trajectories in the context of Hofmann et al. An excerpt is posted below; a link to the full paper follows.

THE ESTIMATION OF HISTORICAL CO2 TRAJECTORIES IS INDETERMINATE: COMMENT ON “A NEW LOOK AT ATMOSPHERIC CARBON DIOXIDE”

Craig Loehle, PhD, National Council for Air and Stream Improvement, Inc., Naperville, Illinois

Atmospheric Environment doi:10.1016/j.atmosenv.2010.02.029

Figure 2: Projected exponential, quadratic, and saturating models compared to IPCC scenario values. Over the calibration period 1958-2009 the 3 models and data are indistinguishable from each other, but then diverge.

Abstract

A paper by Hofmann et al. (2009, this journal) is critiqued.  It is shown that their exponential model for characterizing CO2 trajectories for historical data is not estimated properly. An exponential model is properly estimated and is shown to fit over the entire 51 year period of available data. Further, the entire problem of estimating models for the CO2 historical data is shown to be ill-posed because alternate model forms fit the data equally well.  To illustrate this point the past 51 years of CO2 data were analyzed using three different time-dependent models that capture the historical pattern of CO2 increase.  All three fit with R2 > 0.98, are visually indistinguishable when overlaid, and match each other during the calibration period with R2 > 0.999.  Projecting the models forward to 2100, the exponential model comes quite close to the Intergovernmental Panel on Climate Change (IPCC) best estimate of 836 ppmv.  The other two models project values far below the IPCC low estimates. The problem of characterizing historical CO2 levels is thus indeterminate, because multiple models fit the data equally well but forecast very different future trajectories.

Discussion

Three equally plausible models give very different expectations for future CO2 trajectories under business-as-usual assumptions. No inference is possible at this time as to which model is "right" because the three models are virtually identical over the CO2 data period (Fig. 2) and the understanding of the carbon cycle in this context is not precise enough. The factors governing CO2 in the atmosphere may or may not lend themselves to long-term predictability even if they were understood better. It is clear, however, that simply using an exponential model because it fits the data represents an incomplete analysis, as other models fit equally well. The IPCC "best estimate" of 836 ppmv in 2100, which is equivalent to extrapolation of the exponential model, is indeterminate and could just as easily be 569.8 or 672.5 ppmv (or even 747.7 ppmv by Hofmann et al., 2009), as found using equally likely models that fit the same data. These much lower "best estimate" values affect the IPCC "high" estimate, which is derived from the base-estimate exponential model by adding a growth term (based on higher economic growth rates and other factors). Because projections of future climate depend on future CO2 (and other greenhouse gas) levels, a future value below the IPCC low estimate would preclude the more extreme climate change forecasts made by the IPCC.

PDF of the entire paper is available at: http://www.ncasi.org/publications/Detail.aspx?id=3282
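To make the indeterminacy argument concrete, here is a minimal curve-fitting sketch (not Loehle's code; the functional forms, starting values, and the synthetic stand-in data are assumptions for illustration) showing how an exponential, a quadratic, and a saturating model can all fit a 1958-2009 span closely yet diverge when extrapolated to 2100:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for the 1958-2009 annual-mean CO2 record (illustration only),
# generated from the quadratic constants quoted in the paper so the shape is realistic.
years = np.arange(1958, 2010, dtype=float)
co2 = 45318.54 - 46.7684 * years + 0.01214685 * years**2
co2 += np.random.default_rng(0).normal(0.0, 0.3, years.size)  # small measurement-like noise

def exponential(t, a, b, c):
    return a + b * np.exp(c * (t - 1958.0))

def quadratic(t, a, b, c):
    return a + b * t + c * t**2

def saturating(t, a, b, c):
    return a - b * np.exp(-c * (t - 1958.0))  # approaches the ceiling a as t grows

for name, f, p0 in [("exponential", exponential, (280.0, 35.0, 0.02)),
                    ("quadratic",   quadratic,   (4.5e4, -47.0, 0.012)),
                    ("saturating",  saturating,  (600.0, 290.0, 0.005))]:
    popt, _ = curve_fit(f, years, co2, p0=p0, maxfev=20000)
    r2 = 1.0 - np.var(co2 - f(years, *popt)) / np.var(co2)
    print(f"{name:11s}  R^2 = {r2:.4f}   projection for 2100 = {f(2100.0, *popt):7.1f} ppmv")
```

With real Mauna Loa annual means substituted for the synthetic series, the in-sample fits are all very close while the 2100 projections spread widely, which is the behaviour the abstract describes.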


141 thoughts on "Loehle on Hofmann et al and CO2 trajectories"

  1. I think he’s saying that the earth is still an imperfect sphere and won’t turn into square as predicted by the IPCC based on the current methods of guesstimation available to modern $cience and Fat Albert & Co., Inc.

  2. What Loehle demonstrates is just that trying to model anything without knowing all the variables is pointless.
    Isn’t this just a re-iteration of what we’ve known all along?
    A welcome re-iteration but re-iteration nonetheless.
    DaveE.

  3. The paper doesn’t explain the “saturating” model beyond some equations. Anybody know what is meant by “saturating?”

  4. Is the exponential a constant proportional rate (like a compound interest curve) or does the proportional rate itself increase over time? My understanding has long been that the IPCC gives an exponential rate of ~one percent per year, but recently it's more like half that, although it might be increasing.

  5. Which only goes to show the utter absurdity of such so-called climate science.
    In this case, curve fitting to extrapolate without considering the physical and chemical basis. Of course the curves are similar towards the origin, as anyone with a basic mathematical education would know.
    Ignoring the fact that, due to Callendar and Keeling, the stated base curve starts out far too low, it is unlikely that CO2 levels have dropped below the low 300s ppm during the Holocene. Since accurate chemical measurements were first taken from the 1830s onward, most scientists thought that the normal level was about 400 ppm, until Keeling, using Callendar's work, devised his meretricious curve.
    Try here http://noconsensus.wordpress.com/2010/03/06/historic-variations-in-co2-measurements/
    Nor, given the short residence time of CO2 (about 5 years for the bulk), could levels rise much above 400 ppm or so.
    Try here http://www.co2science.org/issues/v12/v12n31_co2science.php
    So vast amounts of money are wasted playing silly games twiddling curves which have no foundation in the real world.
    Kindest Regards

  6. From
    http://solarcycle24com.proboards.com/index.cgi?action=display&board=globalwarming&thread=1142&page=1#43863
    The possibility that
    more CO2 COOLS the stratosphere and by that the earth?!
    ” . . .a book written by sir John Houghton
    http://tinyurl.com/yl8oohc
    I found it very interesting and want to share some of the text in it.
    At page 14, in section 2.3: "The tropopause is a surface situated at a height of ~10 km in mid latitudes which divides the region below (the troposphere), in which convection is the dominant mechanism of vertical heat transfer, from the stratosphere, which is much more stably stratified and where radiative transfer is dominant."
    Convection. NOT radiation.
    4.8 “…. in the 20-60 km region. In this part of the atmosphere the temperature
    distribution is mainly determined by a balance between radiative cooling in the
    infrared from carbon dioxide band and heating by absorption of solar radiation by ozone.”
    more CO2 COOLS the stratosphere and by that the earth.”

    Either the warmists or
    the coolists must have things
    bass ackwards–
    although the ultimate analysis
    does appear extremely complex to me.

  7. Any model developed without an underlying theory to back it is equivalent to technical analysis of the stock markets. I believe bullpucky is the scientific term for this? i.e. I am in total agreement with your assertions.
    I did a similar technical forecasting exercise when I was at the University of Arizona (doing a Masters). I took alumin(i)um consumption in the US and fit 8-9 models to the past 50 years or so of data. All fit with a very high R^2 (0.95 or better). The “predictions” (projections?) varied so widely as to be farcical.
    At the end of the day the engineers’ approach is as good as any… grab a wide crayon and draw a line through the data, overshooting somewhat into the future. It’s as good a forecast as any. Dang, I am too young to be so cynical…

  8. I think the curve matching should also fit CO2 records for other sampling locations if the exercise is to model global CO2. Too much effort it seems is applied to only one of several high quality CO2 records.

  9. DR
    Henry's Law is only valid at equilibrium concentrations between ocean and air. If the concentration is artificially increased on the air side by injection of fossil CO2, then the relaxation to equilibrium, according to Fick's diffusion law, will take place over about 55 years.
    Not sure what an exponential fit is doing here, it’s all about future emission trajectories (eg coal policy) of India and China.
    http://members.casema.nl/errenwijlens/co2/lutzco2model.htm

  10. A polynomial projection of order 5 or less, and without extrapolation, would be a better choice, as it would not permit the interference of the "scientist's" intent.

  11. The paper is on the proper track: determine what levels of CO2 might be attained in the atmosphere, and whether and when CO2 will ever approach or exceed the upper limit at which animals (Man) can properly and healthily respire on a long-term basis.
    That is a sound question to ask and investigate.

  12. Neither Henry's law nor Fick's laws strictly apply to the uptake of atmospheric CO2 by the oceans, partly because CO2 reacts with water to produce carbonic acid and partly because 'dissolved' CO2 is constantly sequestered by biological and chemical action. Ocean surface and deep water mixing also play their part, so it is not possible to provide any accurate calculation from first principles.
    Kindest Regards

  13. Mike Odin (17:25:46) :
    does appear extremely complex to me.
    You have no idea of how complex this planet actually is.
    If you want to truly know what an achievement our planet has created, you have to incorporate all the things that achieved the balance we are currently enjoying for life. Leave out one small possibility and the model will fail.
    Just this amount of certain gases, this amount of pressure, this amount of salt, this amount of water, this amount of rotation, this amount of sun, etc.

  14. DCC (17:19:29) :
    The paper doesn’t explain the “saturating” model beyond some equations. Anybody know what is meant by “saturating?”

    A saturating curve is one which approaches some constant ultimate limit.
    For example, a capacitor (condenser) being charged in series with a resistor develops a charge (and thus a voltage) that increases linearly at first and then "slows down" in a "diminishing returns" fashion, eventually reaching the voltage of the battery. In other configurations, the voltage might ramp up in a curve at first, straighten out for a bit, and then "turn over", giving an S-shaped approach to the final value. This has the general appearance of a "logistic curve".
    There are an endless number of functional forms that describe such effects, depending on what is being examined. The “bottom line” is that the increase eventually stops.
    /dr.bill
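For concreteness, the two shapes described above can be written (standard textbook forms, not taken from Loehle's paper) as

$$V(t) = V_{\max}\left(1 - e^{-t/RC}\right), \qquad y(t) = \frac{K}{1 + e^{-r(t - t_0)}},$$

the first being the RC-charging type of approach to a limit and the second the S-shaped logistic; both tend to a finite ceiling ($V_{\max}$ or $K$) as $t \to \infty$, which is what "saturating" means here.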

  15. What this paper shows is that it is at this moment not possible to conclude which model is right. The models used by the IPCC are based on the Bern carbon cycle and give a quite simple exponential build-up. However, many things from nature are missing from this simplified box model, e.g. the use of biofuels (wood, agricultural remains, manure burning, massive forest fires). The exponential can be replaced by other functions, which also fit as closely or even more closely to the observed CO2 levels. The IPCC scare is based partially on the extrapolation of these fitted curves, and Loehle is showing that the IPCC model seems to be at the upper end. The other possible models give much lower predictions for 2100 CO2 levels and therefore less cause for alarm.
    But above all, this paper shows that the science is not settled, far from it.

  16. The underlying message to this is…
    & I apologise for shouting
    MAYBE WE JUST DON’T KNOW
    DaveE.

  17. Here is the 2009 paper:
    http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VH3-4V7628D-3&_user=1412102&_coverDate=04%2F30%2F2009&_rdoc=1&_fmt=high&_orig=search&_sort=d&_docanchor=&view=c&_searchStrId=1264463148&_rerunOrigin=google&_acct=C000052645&_version=1&_urlVersion=0&_userid=1412102&md5=0fa12ddb8ba7570ca5475d63edb5a6f0
    The problem with the current paper is that it gives no plausible reasons for the quadratic or saturating models. The 2009 paper explains why we should expect exponential growth. Read it. The current author is just doing curve fitting. If he did a best fit for the early data, would he predict the recent data? There are not many processes that would give a quadratic relation. Economic growth is exponential, not quadratic, although you could do a best fit for a parabola over any period you pick. In fact, think about what would happen if we ran the quadratic model backwards! It predicts CO2 going up as we go back into the pre-industrial age! (Although I can't make out his a value; there seems to be a typo, or maybe my browser is missing a font.) And why would atmospheric CO2 saturate? It is a mix, not a solution. What would buffer it? Ocean CO2 might saturate, which is a concern. Or does he think economic output is going to level off soon?

  18. DCC (17:19:29) :
    The paper doesn’t explain the “saturating” model beyond some equations. Anybody know what is meant by “saturating?”

    Maybe something similar to Michaelis–Menten kinetics.
    http://en.wikipedia.org/wiki/Michaelis%E2%80%93Menten_kinetics
    “The Michaelis–Menten equation relates the initial reaction rate v0 to the substrate concentration [S]. The corresponding graph is a rectangular hyperbolic function; the maximum rate is described as vmax (asymptote).”
    So it would be a rectangular hyperbolic function that approaches an asymptote for large values of x (= time in this case).
    But that is just my interpretation.
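In standard notation (not taken from Loehle's paper), the Michaelis–Menten form referred to above is

$$v_0 = \frac{v_{\max}\,[S]}{K_m + [S]},$$

which is nearly linear for small $[S]$ and flattens toward the asymptote $v_{\max}$ for large $[S]$, i.e. the rectangular hyperbola described in the quoted passage.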

  19. Greenhouses typically run at 1,000 to 2,000 ppmv CO2 without harm to workers. CO2 levels approaching 1% become noticeable to humans, which is about 25 times current atmospheric abundance (~390 ppmv).

  20. A section of a sine wave with a wavelength of 322 years will give you a much better (and more realistic) fit to the Scripps background data than any of the above models.

  21. Fossil fuel resources are finite. The ocean’s capacity for CO2 is huge, kinetics limit its uptake but throw in variables like economics and plant and animal life, and saturation is clearly not out of the realm of possibility.

  22. One tenth of one percent atmospheric CO2 would wreak havoc on the minds of CO2 worriers. It could cause a huge increase in CO2 derangement syndrome and lead to massive new taxes.

  23. This paper is most pertinent to debunking some of the claims made by James Hansen in his popular book "Storms of My Grandchildren" (most definitely NOT a peer-reviewed scientific tome). Here is a quote from his book that is absolute rubbish.
    “How much climate responds to a specified forcing-specifically how much global temperature will change-is called “climate sensitivity”…climate sensitivity is reasonably well understood, on the basis of Earth’s history. Paleoclimate(ancient climate) records show accurately how Earth responded to climate forcings over the past several hundred thousand years…What is clear is that human-made climate forcings added in just the past several decades already dwarf the natural forcings…Carbon dioxide increased from 280ppm(parts per million) in 1750 to 370ppm in 2000, and to 387ppm in 2009…The impact of this CO2 change on Earth’s radiation balance can be calculated accurately, with an uncertainty of less than 15 percent. The climate forcing due to the 1750-2000 CO2 increase is about 1.5 watts.”
    First, the claim that climate sensitivity is “reasonably well understood” has been addressed by Dr. Roy Spencer on several occasions. I would say that alone is enough to call Hansen’s claim into question. But now, if you take Hansen’s words from his book above in the context of this paper that Anthony has just posted, it clearly shows that Hansen’s assumptions cannot be valid.

  24. Yeah! It goes up to cool down and then comes back down. Is it so, or am I going crazy from too much post-normal science?

  25. In the movie Avatar they had 1,000 foot tall trees and continuously pleasant warm weather. The CO2 levels would theoretically be very high on Pandora.
    Here on planet earth in Colorado, we are getting buried in our second global warming blizzard in less than a week. Another 20 inches of snow forecast tonight. If I could get the RV out of the driveway, I would do my share to pump out more CO2.

  26. this is where the fight will be won….
    23 March: Business Week: Carbon Market Rift Over Hungary May Shrink Trading (Update2)
    The United Nations carbon market, the world’s second largest, is at risk of shrinking until regulators close a loophole that allowed Hungary to sell credits that are invalid in Europe…
    “What we’ve got here is faulty drafting by the commission,” Henry Derwent, head of the group whose members include Morgan Stanley and Barclays Plc, said in an interview…
    Still, Hungary doesn’t rule out more recycling. Nor does Lithuania, said environmental adviser Laura Dzelzyte…
    Investors are now “questioning the authenticity” of what they are buying, said Paul Kelly, chief executive officer of JPMorgan’s EcoSecurities unit. Secondary trading of CERs may come to a “grinding halt” as traders question their validity, said Abyd Karmali, managing director and head of carbon emissions at BofA Merrill Lynch.
    The emergence of recycled CERs is the third incident in less than a year to raise red flags in carbon markets…
    http://www.businessweek.com/news/2010-03-23/carbon-market-rift-over-hungary-may-shrink-trading-update1-.html

  27. Anyway… who cares! Except for Al the Magnificent; he surely knows it, he invented the internet and he is a Nobel laureate!

  28. Mike (18:28:01) :
    Loehle is just pointing out that the existing data can be fitted very precisely by any number of simple functions, three of which he gave as examples. I'm sure that I could do equally well with a half-dozen other functions before breakfast. The conclusion is that no conclusion can be reached, and that the long-term extrapolations of Hofmann et al are "terminally iffy".
    /dr.bill

  29. Computer models will tell you anything you want them to tell you.
    One of their weaknesses is that they need precise input figures (which is a strength if you actually have precise figures). Atmospheric CO2 measurements are not precise to start with. Presenting them as a single figure in parts per million is scientifically incorrect. They should be presented as an average, with a range and a coefficient of variation.

  30. So, in short, extrapolating future levels of CO₂ out to 90 years in the future based on 50 years of measuring it in the past is nothing more than, “your guess is as good as mine.”

  31. The problem of observability, relating to the determination of a unique state representation of a system given a set of observables, has been researched exhaustively by control engineers, particularly in the last 50 years with the advent of “modern control theory”. Mathematically speaking, when the system is overall “unobservable”, there are infinitely many representations which can duplicate any partially observed behavior.
    I have tried to explain to people here and elsewhere why the CO2 accumulation paradigm offered by the AGW enthusiasts is non-physical, but I have despaired that I can explain the mathematical reasoning to those who do not know, and do not want to know, the theory. The climate scientists are busy reinventing the wheel, and have an awful lot of catching up to do.

  32. Okay, the trend follows for 50 years for three algorithmic models. This just indicates that each part of each model needs to be shown to correspond to how it operates in reality. The concerning thing is that an explanation of the way it occurs is what makes each model's validity make sense, not just the fact that an equation is thrown in and we see what comes out (as fun as that is). This is where the journal article gives no account of plausibility. It says, from a purely statistical viewpoint, that each model is as likely as the last, but in reality we know this is not true. There is a reason one model may have more merit than another, namely, as I already mentioned, each component of the equation being somehow describable in reality in terms of how it interacts.
    What about increasing the time span? More importantly, what is the described mechanism by which CO2 behaves within the time span being measured, and how well do we understand why the other variables are stable enough that CO2 can be viewed as the only signal in the algorithm and is in fact what is being measured?
    I’d say the article is spurious at best.

  33. “ill-posed” has a certain ring I like… much better than “robust” and “unprecedented” which have become so over-worn.

  34. Bart
    but I have despaired that I can explain the mathematical reasoning to those who do not know>>
    There's an old cartoon of two rats in a cage. One of them says sarcastically to the other, "Oh yeah, well if there's no God, who cleans the cage?"
    no math required.

  35. Marvin (21:54:40)
    "The concerning thing is that an explanation of the way it occurs is what makes each model's validity make sense, not just the fact that an equation is thrown in and we see what comes out (as fun as that is). This is where the journal article gives no account of plausibility."
    You cannot create a model, show that it fits past data, and call it "settled". The article shows that fitting past data is not proof. At best the IPCC has an unproven theory. This is the point of the article.

  36. Steve Goddard said:
    “Here on planet earth in Colorado, we are getting buried in our second global warming blizzard in less than a week. Another 20 inches of snow forecast tonight. If I could get
    the RV out of the driveway, I would do my share to pump out more CO2”
    I have some family in the NH that have said something similar.
    Unfortunately I have yet to see the evidence (from actual test results made during actual experiments) that CO2 is indeed a greenhouse gas. I think the cooling and warming are probably close to even, so adding CO2 would probably not work. I saw a trailer for a TV program where they wanted to make Mars warmer by pumping up CO2. I am afraid that won't work either.

  37. Joe (22:45:50)
    “The article shows that fitting past data is not proof. At best the IPCC has an unproven theory. This is the point of the article.”
    The article does not give credit to any of its algorithms because it treats all the statistics as being the same. To know better than this requires better experiments and also a knowledge of what you are studying. It is not to the article's benefit to intentionally misunderstand what statistics are being used and to imply that all the results are as plausible as each other, because they're not, as I already explained. If they know better and can correctly identify each component of what they're explaining, then which algorithm they choose will make more sense because of its actual physical relation to the climate. If this is not possible, then further experimentation is required. Statistics are only as useful as what you understand them to relate to, and if it's easy to move the trend and create a fiction, then you don't understand what it is measuring at all. A rather preposterous journal article, because it is prejudicial for the sake of it rather than contributing anything.
    I read it and noted it explains nothing and does so on purpose.

  38. Re: Marvin (Mar 23 21:54),
    In mathematical fitting there are the so-called "complete sets of functions", with which you will be able to fit any curve of past data you want. Any curve.
    An example is the complete set of Fourier functions.
    When you manage to fit your curve, there will be constants in front of sinusoidal terms with varying frequencies. This does not mean that these variations have a physical meaning (if you are fitting a physical function). It is just an artifact of the fit. Usually about five terms are enough to fit anything. ("Give me four constants and I can fit an elephant; give me five and I will make him flap his ears", attributed to von Neumann.)
    If one starts with physics-derived functions and makes a fit, it is the same as having a complete set of functions and using five of them, if you have five free parameters to fit.
    Thus at best, with a physics argument built up with four or more parameters, it is a hypothesis that has to be proven by the future data that have not been used in the fit.
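A toy illustration of the free-parameter point (not from anna v's comment): five free coefficients are enough to pass a curve exactly through any five data points, while saying nothing about points outside the fitted range.

```python
# Five free parameters exactly interpolate any five points, however they were generated.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, -0.4, 2.7, 5.0, 1.2])      # arbitrary "data"

coeffs = np.polyfit(x, y, deg=4)               # degree-4 polynomial: 5 coefficients
print(np.allclose(np.polyval(coeffs, x), y))   # True: a perfect in-sample fit
print(np.polyval(coeffs, 6.0))                 # ...but the extrapolation is unconstrained
```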

  39. This report gives us more evidence that the IPCC models cannot be relied upon to predict future climate metrics. Are the people who produce them fools, or just finding the evidence their paymasters want based on wrong assumptions?
    The IPCC report states that the time CO2 stays in the atmosphere before being removed is somewhere between 5 and 200 years. Their models are based on this, although they give NO scientific reference for that 200-year residence time (RT).
    In 2009, Dr. Robert H. Essenhigh, Professor of Energy Conversion at The Ohio State University, published a paper in the international peer-reviewed journal Energy & Fuels, about the residence time of anthropogenic CO2 in the air.
    He finds that the RT for bulk atmospheric CO2, the molecule CO2(12), is ~5 years, in good agreement with other cited sources (Segalstad, 1998), while the RT for the trace molecule CO2(14) is ~16 years. Both of these residence times are much shorter than the IPCC claim them to be.
    The other reason the IPCC model over-states the increase in CO2 concentration is that it assumes that temperatures will increase catastrophically in the future, so reducing the ability of the sea to act as a CO2 sink.
    The ice core data show that changes in temperature affect CO2 levels. However, the CAGW hypothesis, which is being foisted upon us by the IPCC cabal of climate statisticians, has cause and effect the wrong way round.

  40. Mike (18:28:01) :
    ” The 2009 paper explains why we should expect exponential growth. ”
    Where is the evidence that observed changes in atmospheric CO2 are caused by man or “economic growth” ?

  41. Everyone dreaming of CO2 concentrations above 500 ppm should wake up to reality. We are now in the 21st century; the Earth is round and finite.

  42. anna v (00:44:49) :
    What you say is quite funny I loved the quote from Von Neumann.
    I think in the sense that they are actually trying to measure physical attributes makes purely statistical theories without other sorts of evidence don’t give proof of anything. This is why understanding and measuring the effects and taking note of potential causality gives us a grounds to credit a certain statistical function as being correct over another clever but flawed method. This is not to presume that the IPCC given statements of ppmv are correct from their selected journal research. What I’m saying is that the given journal article does not presume to understand anything at all and hence forth drives a spurious understanding of what it is to make a model about the process as if it were confusing.
    It seems to me that from what you said about the 4 parameters give rise to a lack of any precision being available and that it is essentially useless. I have to understand this better before I can comment on it.
    I haven’t looked into the IPCC and how their evidence supports their equation. It might be just as bad as this article (none at all). In any event it doesn’t excuse the fact that this article deliberately obfuscates attempting to understand by employing statistical trickery.. unless of course the point is that the IPCC have no evidence for their claim other than the lonesome formula.

  43. And Malthus thought populations would continue to grow exponentially forever, too.
    As demand for fossil fuel grows, it will be met by a depleting supply. Prices will rise and demand will be attenuated. At worst, fossil fuel use (i.e. CO2 production) will approach some asymptotic maximum.
    As if CO2 makes any practical difference anyway.

  44. Friends:
    Loehle’s analysis is similar in its nature and its findings to ours:
    Ref. Rorsch A, Courtney RS & Thoenes D, ‘The Interaction of Climate Change and the Carbon Dioxide Cycle’ E&E v16no2 (2005)
    We considered mechanisms in the carbon cycle and used model studies to determine if natural (i.e. non-anthropogenic) factors may be significant contributors to the observed rise in the atmospheric CO2 concentration as recorded at Mauna Loa since 1958.
    We used three basic models to determine if each of the three mechanisms alone could be responsible for the observed rise (i.e. each model assumed its mechanism dominates the carbon cycle). We called the three basic models the A Model, the P Model and the M Model.
    The A Model is the much respected model of Ahlbeck that is based on a postulated linear relationship of the sink flow and the concentration of CO2 in the atmosphere.
    The P Model is a power equation of the type often used in process (chemical) engineering and its use assumes that several different processes determine the flow of CO2 into the sinks.
    The M Model is derived from biology, or rather biochemistry, because we were mindful that the absorption of CO2 takes place at least partly in the biosphere. It is a formulation of the Michaelis Menten (MM) description of enzyme action.
    We used each model to emulate all the rise in atmospheric CO2 concentration as being a result of the anthropogenic emission.
    And
    We used each model to emulate all the rise in atmospheric CO2 concentration as being a result of a variation to some natural emission.
    Hence, we obtained 6 models with 3 assuming an anthropogenic cause of the rise and 3 assuming no significant anthropogenic contribution to the rise.
    Each of these models matches the available empirical data without use of any ‘fiddle-factor’ such as the ‘5-year smoothing’ the UN Intergovernmental Panel on Climate Change (IPCC) uses to get its model to agree with the empirical data.
    The perfect fit obtained by each model results from the calculated equilibrium CO2 concentration values. Each model indicates that the calculated CO2 concentration for the equilibrium state in each year is considerably above the observed values. This demonstrates that each model indicates there is a considerable time lag required to reach the equilibrium state when there is no accumulation of CO2 in the atmosphere. Some processes of the system are very slow with rate constants of years and decades. Hence, the system takes decades to fully adjust to the new equilibrium. So, the models each indicate a smooth rise of atmospheric CO2 concentration as the system adjusts towards equilibrium.
    This slow rise in response to the changing equilibrium condition also provides an explanation of why the accumulation of CO2 in the atmosphere continued when in two subsequent years the flux into the atmosphere decreased (the years 1973-1974, 1987-1988, and 1998-1999).
    And it removes the quantitative discrepancy between 13C:12C isotope variation and that expected from a simple assumption of CO2 accumulation in the atmosphere (i.e. the IPCC assumption).
    However, each model indicates a different future atmospheric CO2 concentration because they each provide different equilibrium concentrations.
    So, each of the models in our paper matches the available empirical data without use of any ‘fiddle-factor’ such as the ‘5-year smoothing’ the UN Intergovernmental Panel on Climate Change (IPCC) uses to get its model to agree with the empirical data.
    But, if one of the six models of this paper is adopted then there is a 5:1 probability that the choice is wrong. And other models are probably also possible (as Loehle demonstrates). And the six models each give a different indication of future atmospheric CO2 concentration for the same future anthropogenic emission of carbon dioxide.
    Data that fits all the possible causes is not evidence for the true cause. Data that only fits the true cause would be evidence of the true cause. But the above findings demonstrate that there is no data that only fits either an anthropogenic or a natural cause of the recent rise in atmospheric CO2 concentration. Hence, the only factual statements that can be made on the true cause of the recent rise in atmospheric CO2 concentration are
    (a) the recent rise in atmospheric CO2 concentration may have an anthropogenic cause, or a natural cause, or some combination of anthropogenic and natural causes,
    but
    (b) there is no evidence that the recent rise in atmospheric CO2 concentration has a mostly anthropogenic cause or a mostly natural cause.
    Hence, using the available data it cannot be known what if any effect altering the anthropogenic emission of CO2 will have on the future atmospheric CO2 concentration. This finding agrees with the statement in Chapter 2 from Working Group 3 in the IPCC’s Third Assessment Report (2001) that says; “no systematic analysis has published on the relationship between mitigation and baseline scenarios”.
    And Loehle’s analysis finds the same.
    Richard

  45. Why is everyone so hung up on these models? The IPCC said themselves in the 2001 report that the models cannot predict anything, because it's a chaotic, non-linear system with multivariable coupled feedbacks.
    So the IPCC says they cannot predict. And since the IPCC says so, it is a consensus?
    Problem fixed.

  46. Tenuc, the residence time for a molecule of CO2 doesn’t matter much considering the entire carbon cycle in which manmade inputs are increasing the total in the atmosphere. The question of acceleration depends on natural factors like the sea surface temperatures which depend on clouds (i.e. weather) which are not modeled well at all.
    The other thing that looking at history tells us (besides the lag you mentioned) is that there is no infinite feedback from CO2 warming and ocean warming and there may not be any. I have yet to see a model which rises and then stops when temperature starts to get limited by weather. Without that type of realism, it doesn’t really matter if man doubles, triples or does anything else to CO2.

  47. @Marvin:
    What you seem to be having trouble understanding is that the article wasn’t trying to prove anything, other than nothing CAN be proven.
    The fact is that the IPCC made a claim that CO2 was rising exponentially. It then plotted an exponential on the known data and said "Oh, that's a good fit, we were right all along". It then became so confident of that data fit that it extrapolated it out for 100 years. Then, it made predictions of effects due to such an increase based on… a similar approach perhaps?
    Now, what this article points out is the basic fallacy of this approach in two ways:-
    1] It shows that the exponential that was fitted to the data was not necessarily the correct exponential – thus demonstrating that no faith can be put in the IPCCs predictions of CO2 levels in 100 years nor in the possible outcomes of such high levels.
    2] It goes on to show that non-exponential fits are also possible with equal reasonableness from a mathematical point of view. Thus the fact that an exponential CAN be fitted to the data is not proof in itself that the theory is correct.
    We can go on to infer certain other things about the quality of the IPCC process when such dubious methods are used to firm up theories as fact and make predictions based on these erroneous facts, since they have allowed such obviously flawed predictions to persist. We can also infer something about the belief systems of those that continue to believe in the veracity of the statements made by the IPCC when they have been persistently shown to be wrong in so many areas and yet people like yourself will continue to defend them.
    The IPCC is not about science, it is about belief. You, along with the rest of the IPCC's supporters, are entitled to believe that mankind is destroying the planet. What you are not entitled to do is use spurious science and lies to back up your fraudulent claims in the hope of enforcing your beliefs on those that do not share them.

  48. Obviously the solution is to see which of the three models applies to periods prior to the calibration period, and also to use the models to predict climate in the past…
    Beyond just having a model, one should have a causal theory to explain why a particular model behaves as it does. It doesn't matter how much pattern matching you do; if you don't have a physically plausible causal reason for your model, you are just practicing cargo cult science.

  49. RockyRoad (18:42:23) :
    Greenhouses typically run at 1,000 to 2,000 ppmv CO2 without harm to workers. CO2 levels approaching 1% become noticeable to humans, which is about 25 times current atmospheric abundance (~390 ppmv).
    Do you know how much of a temperature increase in the greenhouse the 1,000 to 2,000 ppmv CO2 level produces over the normal atmosphere?

  50. There is an error in the quadratic fit equation in the paper. One of the first things any referee should do is plot out the curves given by the author to see if they match the lines shown in the figures shown.
    I plotted out Loehle Eq. (2), his quadratic model, with his values for the constants a, b, and c. The resultant line did not match up at all with that shown in his Fig. 3; it's way too low. If I adjust his a=45318.5 to, say, a=45500, I get a closer match to the line in Fig. 3, but it's still not perfect.

  51. Re: Mike Lorry, you say a causal model is needed. The CO2 trend is not due to PHYSICS; it is mainly due to the rate at which humans consume fossil fuels (with some physics related to sinks). To pretend that the exponential model is physics is disingenuous. A saturating (logistic-type) model matches what is happening to the human population. Other functions might be derived based on increasing efficiency of energy use over time, perhaps giving a quadratic-type function as I used. I was not suggesting any model as an alternate truth (as several commenters noted above) but merely pointing out that the extrapolation done by the IPCC is under-determined by the data.

  52. I think the real battleground on this paper will be over the phrase “three equally plausible models”.
    Are they “equally plausible” in any sense other than their results over the 51 yr calibration period?

  53. R-squared of 0.999? Great within-sample fit? Wildly different out-of-sample forecasting properties? Ever worked with time series data before? I hope when I die I go to some place where I can know for certain all these "underlying population parameters" frequentists keep talking about. Then I can compare the actual damage statistics has caused science with the benefits it's created.

  54. Isn't the better (not great, but better) method to run the calibration period from, say, 1958–1990, and then see which model does a better job of "predicting" 1991–2009?
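A minimal sketch of that kind of holdout test (not from the thread; the synthetic stand-in data and model forms below are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for the annual CO2 record (illustration only).
years = np.arange(1958, 2010, dtype=float)
co2 = 45318.54 - 46.7684 * years + 0.01214685 * years**2

def exponential(t, a, b, c):
    return a + b * np.exp(c * (t - 1958.0))

def quadratic(t, a, b, c):
    return a + b * t + c * t**2

train = years <= 1990    # calibrate on 1958-1990 only
test = ~train            # score on the held-out 1991-2009 values

for name, f, p0 in [("exponential", exponential, (280.0, 35.0, 0.02)),
                    ("quadratic",   quadratic,   (4.5e4, -47.0, 0.012))]:
    popt, _ = curve_fit(f, years[train], co2[train], p0=p0, maxfev=20000)
    rmse = float(np.sqrt(np.mean((co2[test] - f(years[test], *popt)) ** 2)))
    print(f"{name}: 1991-2009 out-of-sample RMSE = {rmse:.2f} ppmv")
```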

  55. To get to the IPCC’s CO2 numbers one needs to burn at least 240 million barrels of oil per day and 24 billion tons of coal a year.

  56. We have lots of excellent discussions on WUWT on the problems of the historical (and contemporary) temperature record, with the blatant cherry picking, discontinued use of surface stations awkwardly failing to reveal the “true warming”, strange “adjustments” that always tend in one direction, “homogenisation”, “smearing”, blatantly unsuitable surface stations and all the rest.
    You only have to mention the cooks who stir this particularly obnoxious broth (Hansen, Steig, Mann, Jones, Briffa and the rest) to know that there is big-time fraud (or breathtaking incompetence, if you are extremely charitable) involved.
    But we have had comparatively little discussion on WUWT as to the accuracy of the CO2 record and predictions. This paper and the comments above make a useful contribution, I think.
    But there are major doubts about the accuracy of the “correct” level of (pre-industrial) CO2 that is commonly peddled by the warmists.
    See:-
    http://www.freerepublic.com/focus/f-bloggers/1806245/posts
    http://www.friendsofscience.org/assets/documents/FoS%20Pre-industrial%20CO2.pdf
    http://www.physicsforums.com/archive/index.php/t-163931.html
    And many others.
    Also, the recent “record” (which is effectively taken as a given in Loehle’s paper) is far from being unassailable in its accuracy. See:-
    http://www.americanthinker.com/2009/12/greenhouse_gas_observatories_d.html
    As the NOAA themselves put it:-
    “CO2 is derived from measurements but contains no actual data. To facilitate use with carbon cycle modeling studies, the measurements have been processed (smoothed, interpolated, and extrapolated) resulting in extended records that are evenly incremented in time.”
    Sounds familiar.
    So no doubt the IPCC's prediction will be spot on. Because a computer on top of Mauna Loa has most likely been programmed to give just the result that the "scientists" predicted all along.

  57. Fred (04:38:22) :
    “What you seem to be having trouble understanding “.
    “The IPCC is not about science it is about belief. You, along with the rest of the IPCC’s supporters are entitled to believe that mankind is destroying the planet. What you are not entitled to do is use spurious science and lies to back up your fraudulenbt claims in the hope of enforcing your beliefs on those that do not share your beliefs.”
    I'm not an AGW supporter, I'm a skeptic. I'm just not a stupid skeptic who borders on being a denier, and I don't pretend to understand what I don't yet. If this journal article makes more sense than what I think it offered, then I'll learn that, but certainly not from you, it seems. You elucidated nothing and are acting as badly as the CAGW cheerleaders, just on the opposite side. If you go back to what I said, re-read it, and can actually understand what I wrote, then have at it and explain under that premise. Otherwise, the rest of your rhetoric did nothing but make me angry that the moderator let your lies through… it must have been a close call with the ad hominem.
    You wrote back to me exactly what I already said, but I put it into a context and a framework which you did not address at all, leaving out all the detail I required. Anyone who correctly addresses my post I will reply to, and will also thank if they add factual details, but others I won't.

  58. Eric (skeptic) (03:38:17) :
    "Tenuc, the residence time for a molecule of CO2 doesn't matter much considering the entire carbon cycle in which manmade inputs are increasing the total in the atmosphere."
    Stated with absolute confidence and smug authority, a veritable continuum of logical contradiction and begging of the question.

    "Do you know how much of a temperature increase in the greenhouse the 1,000 to 2,000 ppmv CO2 level produces over the normal atmosphere?"
    Probably not very much because
    a) Greenhouses maintain a high temperature by preventing heat loss by convection and
    b) If CO2 had these properties we would use the gas as insulation in homes.

  60. anna v (00:44:49) :
    I first saw the Von Neumann quote about modeling elephants (a slightly different version) in this paper by Antonino Zichichi:
    “The mathematics involved is a system of strongly coupled, non-linear differential equations, where the solution can only be arrived at by a series of numerical approximations. In these approximations you need to introduce a number of free parameters. Von Neumann was always warning his young collaborators about the use of these free parameters by saying: If you allow me four free parameters I can build a mathematical model that describes exactly everything that an elephant can do. If you allow me a fifth free parameter, the model I build will forecast that the elephant will fly.
    The mathematical models for Meteorology and Climate change have a lot more than five free parameters.”
    METEOROLOGY AND CLIMATE: PROBLEMS AND EXPECTATIONS
    http://www.pcgp.it/pcgp/dati/2007-05/18-999999/ZICHICHI_METEOROLOGY%20AND%20CLIMATE.pdf

  61. P. Segre: the problem is the number of significant digits. In Mathematica many digits are carried, and perhaps I should have put them all in the paper. Try these:
    {a -> 45318.539465496644`, b -> -46.7684328499448`,
    c -> 0.0121468549016735`}
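As a quick independent check (assuming t in Eq. (2) is the calendar year and the model has the form CO2(t) = a + b*t + c*t^2), these full-precision constants reproduce the published numbers:

```python
# Evaluate the quadratic model with the full-precision constants quoted above.
# Assumption: t is the calendar year and Eq. (2) has the form a + b*t + c*t**2.
a, b, c = 45318.539465496644, -46.7684328499448, 0.0121468549016735

def quadratic(t):
    return a + b * t + c * t * t

for year in (1958, 2000, 2009, 2100):
    print(year, round(quadratic(year), 1), "ppmv")
# 1958 and 2009 come out near 314 and 386 ppmv, and 2100 comes out near 672 ppmv,
# matching the ~672.5 ppmv quoted in the Discussion excerpt.
```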

  62. Tenuc (00:46:05) :
    In 2009, Dr. Robert H. Essenhigh, Professor of Energy Conversion at The Ohio State University, published a paper in the international peer-reviewed journal Energy & Fuels, about the residence time of anthropogenic CO2 in the air.
    He finds that the RT for bulk atmospheric CO2, the molecule CO2(12), is ~5 years, in good agreement with other cited sources (Segalstad, 1998), while the RT for the trace molecule CO2(14) is ~16 years. Both of these residence times are much shorter than the IPCC claim them to be.

    Tenuc,
    Essenhigh (and Segalstad) talk about residence time, that is, the average period that any CO2 molecule from any source (humans, atomic bomb test 14C) resides in the atmosphere before being exchanged with the oceans or vegetation; that is about 5 years (based on ~150 GtC exchange per year).
    That has nothing to do with what happens if you add some extra amount of CO2 (whatever the source): that needs a half-life of about 55 years to go down towards equilibrium (based on the current net sink of only 4 GtC/year).

  63. Segalstad refuses to work for the IPCC unless the IPCC change their unscientific "Objective paragraph".
    This is something all Scientists should do, IMHO.
    As long as they work for the IPCC they contribute to this unscientific process, and will further decrease the respect for Science among the general public.
    Segalstad on CO2;
    http://folk.uio.no/tomvs/esef/esef0.htm

  64. So I presume that the full paper contains a rigorous proof that his three models are in fact "equally plausible."
    So what does that mean anyway ?
    The whole concept of "multiple choice" examination questions (the lazy professor's exam) is that each of the five (maybe) "answers" is equally plausible to someone who does not know the correct answer.
    If you’ve ever looked at a multiple choice exam paper; you quickly realize, that the offered answers are anything but equally plausible, and an astute examinee, can easily eliminate many of the answers.
    If they truly were equally plausible, then of course a raw answer score would have to have 20% removed from it (to make it more robust), since a chimpanzee can guess 20% (of five-answer questions).
    So if you start with 100 questions, do you simply subtract 20 from the raw score, or do you multiply the raw score by 0.8 ? to eliminate the dartboard answers.
    In all my 20+ years of formal institutional schooling; I never ever once sat a multiple choice examination paper. I was always expected to be able to produce the answers out of nothing; like you would have to do on a desert island without Google.
    Well I did actually take some multiple choice “tests”; at least two; maybe three. They were the standard so called IQ tests. Did those things originate at Stanford ?
    Anyway they were stupid tests, and many of the questions had multiple correct answers. For example, a square with a dot in the north west corner morphs into a square with a dot in the south east corner. Apply the same transformation to a square with a solid dot in the north east corner, plus an open dot (circle) in the south west corner. They offer you a square with a solid dot in the south west corner, and the circle now in the north east corner, along with some diversionary options. I can think of at least two transforms that are valid but yield different answers, and those IQ tests were full of that kind of nonsense.
    But MC tests are quick and easy for lazy teachers.

  65. So, Segalstad says:
    http://folk.uio.no/tomvs/esef/esef5.htm
    “The implication of the approximately 5 year lifetime is that about 135 GT C (18%) of the atmospheric CO2 pool is exchanged each year. This is far more than the about 6 GT C in fossil fuel CO2 now contributed annually to the atmosphere.
    The isotopic mass balance calculations show that at least 96% of the current atmospheric CO2 is isotopically indistinguishable from non-fossil-fuel sources, i.e. natural marine and juvenile sources from the Earth’s interior. Hence, for the atmospheric CO2 budget, marine equilibration and degassing, and juvenile degassing from e.g. volcanic sources, must be much more important, and burning of fossil-fuel and biogenic materials much less important, than assumed by the authors of the IPCC model (Houghton et al., 1990). “

  66. Martin Brumby (08:24:28) :
    But we have had comparatively little discussion on WUWT as to the accuracy of the CO2 record and predictions. This paper and the comments above make a useful contribution, I think.
    Martin,
    There is little discussion of current CO2 levels since measurements started at the end of the 1950s at the South Pole and Mauna Loa, simply because these measurements are the best one can have regarding accuracy and rigorous calibration checks. One can only wish that temperature measurements were subject to the same quality procedures…
    The measurements are up to date for a few stations for daily, monthly and yearly averages. These data are "cleaned"; that means that measurements which are clearly influenced by local contamination (in either direction) aren't used for averaging. Hourly averages, including all outliers, are available for a few stations some time later, as the calculated data may need recalculation based on recalibration of the standard gases used for calibration of the apparatus. Including or excluding the outliers doesn't change the trend; only the variability around the trend is changed.
    The full procedure is on line here:
    http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html
    Similar CO2 levels (within a few ppmv for each hemisphere) and near identical trends are found for near all (10 baseline and 70+ other) stations, as far as possible away from local sources/sinks. See more about that:
    http://www.ferdinand-engelbeen.be/klimaat/co2_measurements.html

  67. As to their curve fitting; you can just about always fit any finite set of data that represents some continuous function to a whole raft of mathematical equations. There’s no reason to believe that any point outside that range would fit the expression.
    A good example, would be the parametric set of curves given by:
    x = cos (theta); y = cos(n.theta) for integral values of n (+ve or zero).
    The whole set of data points fit inside a box bounded by:
    x = +/- 1, y = +/-1
    Those sets of data can be exactly represented by a set of polynomials, y = Tn(x), which are actually defined for x ranging over +/- infinity:
    T0 = 1
    T1 = x
    T2 = 2x^2 - 1
    And so on.
    The parametric form predicts no values outside the +/-1 box for the Tchebychev Polynomials.
    I actually have a patent on an optical low pass (spatial) filter derived from the Tchebychev Polynomials; we use it to make deliberately fuzzy lenses with a controllable amount of fuzziness. You might even own one of those lenses and not know it.
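A quick way to verify the relationship being described (an illustrative sketch, not part of the comment): build the Tchebychev polynomials of the first kind from the standard recurrence T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x) and confirm that T_n(cos(theta)) = cos(n*theta).

```python
import numpy as np

def chebyshev_T(n, x):
    # Standard recurrence: T0 = 1, T1 = x, T_{k+1} = 2*x*T_k - T_{k-1}
    t_prev, t = np.ones_like(x), x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

theta = np.linspace(0.0, np.pi, 200)
x = np.cos(theta)
for n in range(6):
    assert np.allclose(chebyshev_T(n, x), np.cos(n * theta))
print("T_n(cos(theta)) matches cos(n*theta) for n = 0..5")
```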

  68. Marvin:
    I understand the annoyance you state at (08:56:50) .
    Perhaps reading my post at Richard S Courtney (03:14:39) may answer your queries. It begins by saying:
    “Loehle’s analysis is similar in its nature and its findings to ours:”
    And ends saying:
    “And Loehle’s analysis finds the same.”
    Anyway, I hope a look at my post will help to clarify the issues for you (I provided it in the hope of clarifying the issues, because you were one of several posters who seemed to be having difficulty with them).
    Richard

  69. “”” anna v (00:44:49) :
    Re: Marvin (Mar 23 21:54),
    In mathematical fittings there are the so called ” complete set of functions” with which you will be able to fit any curve of past data you want. Any curve.
    Example is the complete set of Fourier functions.
    When you manage to fit your curve, there will be constants in front of sinusoidal terms with varying frequencies. This does not mean that these variations have a physical meaning, (if you are fitting a physical function). It is just an artifact of the fit. Usually about five terms are enough to fit anything . ( give me four constants and I can fit an elephant, give me five and I will make him flap his ears, attributed to Von Neumann).
    If one starts with physics derived functions and makes a fit, it is the same as having a complete set of functions and using five of them, if you have five free parameters to fit.
    Thus at best, with a physics argument build up with four or more parameters, it is a hypothesis that has to be proven by the future data that have not been used in the fit. “””
    So true Anna, and there’s a whole slew of such functions besides the Fourier functions. Bessel functions, Legendre Polynomials, etc etc. You only need a set of functions to be “orthogonal”.
    And roughly that means that the integral of [Fn(x).Fm(x)] over the relevant range (with the appropriate weight function) is zero when (m) is not equal to (n), while the integral of [Fn(x).Fn(x)] is finite and non-zero.
    I presume that the Tchebychev Polynomials are also orthogonal functions, at least for -1<x<+1, since in that range they have a cosine parametric form.
    The ability to fit data to some mathematical curve for some finite data set, does not prove anything outside that fitted range.
    A horizontal straight line can be a good fit for the elevation of a point on the ground; right up to the edge of the Grand Canyon. Beyond that range, the fit isn't nearly as good; and one should be cautious about presuming what lies beyond that fitted edge.
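Written out (standard form, not from the comment), the orthogonality condition is

$$\int_a^b w(x)\,F_n(x)\,F_m(x)\,dx = 0 \quad (m \neq n), \qquad \int_a^b w(x)\,F_n(x)^2\,dx \neq 0,$$

and for the Tchebychev polynomials of the first kind the interval is $(-1, 1)$ with weight $w(x) = 1/\sqrt{1 - x^2}$.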

  70. re: George Smith: The exponential model IPCC uses is NOT from physics. It is a fit to the historical rise in CO2 resulting from burning fossil fuels. It has no theoretical or causal basis. Thus any other empirical formula is equally well-based. You should think through why you are so annoyed before venting.

  71. Craig Loehle,
    Having looked at the process characteristics of the CO2 increase since 1900, my conclusion is that the natural CO2 system acts as a very simple first order dynamic equilibrium process. The equilibrium point is influenced by temperature (about 4 ppmv/K on short term, about 8 ppmv/K for very long term processes like glacial-interglacial changes and MWP-LIA change).
    Over the past near 110 years, the increase in the atmosphere follows the emissions at a near constant rate of about 55% over the full period:
    http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_co2_1900_2004.jpg
    That the increase in the atmosphere follows the emissions with such an incredible (non-natural!) accuracy is only possible if the process involved is quite linear and the increase of the emissions is quite exponential, without a sign of “fatigue” of the sink rates.
    How will that work out in the future? If the emissions keep increasing exponentially, the increase in the atmosphere will do the same. If we can fix the emissions at a certain level, the levels in the atmosphere will increase asymptotically to a new equilibrium where emissions and sinks are equal. And if we can curb the emissions to what the sinks are nowadays (4 GtC/yr), there wouldn't be any increase in the atmosphere anymore.
    Thus, anyway, the future CO2 levels in the atmosphere depend only on the future emission scenarios and all the assumptions made to arrive at those scenarios…
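A minimal sketch of the kind of first-order process being described (toy parameters chosen for illustration, not Ferdinand's actual calculation): the excess above an equilibrium level decays in proportion to the excess, while emissions keep adding to it.

```python
# Toy first-order box model: each year, C += E/k - (C - C_eq)/tau
# Assumed toy numbers: C_eq = 280 ppmv, tau = 50 yr, k = 2.13 GtC per ppmv.
def simulate(emissions_gtc_per_yr, n_years, c0=315.0, c_eq=280.0, tau=50.0, k=2.13):
    c = c0
    for _ in range(n_years):
        c += emissions_gtc_per_yr / k - (c - c_eq) / tau
    return c

print(simulate(8.0, 100))  # constant emissions: C heads toward C_eq + tau*E/k ~= 468 ppmv
print(simulate(0.0, 100))  # zero emissions: the excess decays back toward 280 ppmv
```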

  72. Another point is that a Fourier fit or Tchebychev polynomials use an excessive number of parameters. That is not at all what I was doing.

  73. kwik (11:33:53) :
    So. Segelstad says;
    http://folk.uio.no/tomvs/esef/esef5.htm
    “The implication of the approximately 5 year lifetime is that about 135 GT C (18%) of the atmospheric CO2 pool is exchanged each year. This is far more than the about 6 GT C in fossil fuel CO2 now contributed annually to the atmosphere.

    This seems to be one of the most difficult problems to explain: Segalstad is right and wrong. The number of molecules of human origin in the atmosphere indeed is only a few percent, and half of what was emitted is absorbed by oceans and vegetation within 5 years. But that is not important at all. What is important is that the 8 GtC we add as CO2 per year results in a 4 GtC increase in the atmosphere.
    No matter how much is exchanged per year, the 8 GtC is additional and isn’t fully removed (only about 4 GtC is removed each year). Thus nature as a whole is a net sink for CO2 and humans are fully responsible for the increase.
    To give a (bad) comparison: Segalstad talks about how much you invested in the turnover of your bank, while the interest is in how much gain the bank has at the end of the day. If that is only half of what you individually invested, better look for another bank…
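The bookkeeping behind that statement fits in one line (standard mass balance, using the round numbers quoted above):

$$\Delta C_{\text{atm}} = E_{\text{human}} + E_{\text{natural}} - U_{\text{natural}} \;\Rightarrow\; 4 \approx 8 + (E_{\text{natural}} - U_{\text{natural}}) \;\Rightarrow\; E_{\text{natural}} - U_{\text{natural}} \approx -4\ \text{GtC/yr},$$

i.e. nature removes about 4 GtC more per year than it emits, however large the gross exchange flux may be.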

  74. “You only need a set of functions to be “orthogonal”.
    You don’t even need that. You just need a full rank involutive distribution spanning the tangent space at all points.

  75. Tenuc (00:46:05):
    You mention Residence Time calculations in two published papers – at 5 and 16 years. This seems about right: I have (in my homespun way) done an estimate based on the annual “downticks” in the Mauna Loa CO2 record (there’s an annual decline every year, peaking Jul/Aug). I extrapolate these downward as an exponential decay curve.
    Based on 1959 I get a half-life of 125 months, and based on 2009 a 121-month half-life. Satisfyingly consistent. I don’t have the maths to convert these half-life figures into a residence time, but it’s pretty obviously between 10 and 20 years.
    Contrast this with the following abominable statement on the Royal Society’s website: “Once our actions have raised concentrations of CO2 in the atmosphere, levels will remain elevated for more than a thousand years.” (I can’t find it this month – maybe they’ve withdrawn it, fearing a backlash from the not-so-gullible public.)
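    For what it is worth, the conversion Brent says he lacks above is trivial if the decay really is a single exponential: the mean lifetime is just the half-life divided by ln(2). A quick check with his two figures (my sketch, not his calculation):

    import math

    for half_life_months in (125, 121):
        tau_years = half_life_months / math.log(2) / 12.0
        print(half_life_months, "month half-life ->", round(tau_years, 1), "year mean lifetime")
    # both come out near 15 years, consistent with the 10-20 year guess above

    Note Engelbeen's reply below, though: the seasonal downticks track the vegetation cycle, not the removal of an added excess, so the two time constants need not be the same thing.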

  76. Brent Hargreaves (12:27:15) :
    Tenuc (00:46:05):
    You mention Residence Time calculations in two published papers – at 5 and 16 years. This seems about right: I have (in my homespun way) done an estimate based on the annual “downticks” in the Mauna Loa CO2 record (there’s an annual decline every year, peaking Jul/Aug). I extrapolate these downward as an exponential decay curve.

    Brent,
    The downticks in the station records are the effect of (seasonal) temperature on CO2 levels, mainly via vegetation growth. That, together with the opposite effect of warming oceans, results in about 4 ppmv/K change globally.
    But that has nothing to do with the decay rate of an excess amount of CO2 added to the atmosphere above an equilibrium. At the end of the seasonal cycle, the amount of CO2 is – again – increased to a new level; the difference is about 55% of the emissions +/- 4 ppmv*dT, where dT is the difference in temperature from the previous year.

  77. Brent Hargreaves (12:27:15) :
    Forgot to add:
    The Bern model the IPCC uses indeed has one long-term component, but that applies only to about 10% of the initial amount and only matters if you burn all available oil and a lot of coal. At the current rate of emissions that is irrelevant, and the other terms of the model decay much faster. What the media (and alarmists) use is the longest term, but they “forget” to mention that it applies only to a small part of the additional CO2.
    The reality may even be better, as Peter Dietze calculated: a total half-life of about 55 years, without longer terms:
    http://www.john-daly.com/carbon.htm

  78. >> George E. Smith (11:28:20) :
    If you’ve ever looked at a multiple choice exam paper; you quickly realize, that the offered answers are anything but equally plausible, and an astute examinee, can easily eliminate many of the answers.
    If they truly were equally plausible, then of course a raw answer score would have to have 20% removed from it (to make it more robust), since a chimpanzee can guess 20% (of five-answer questions).
    So if you start with 100 questions, do you simply subtract 20 from the raw score, or do you multiply the raw score by 0.8, to eliminate the dartboard answers? <<
    You divide the percentage of WRONG answers by 0.8 and subtract that from 100% to get the 'chimp-adjusted' score. For example, if a person got 80 correct out of 100 5-answer questions, then the 20 wrong answers would correspond to guessing 25 questions and the chimp-adjusted score should be 75%.
    As a student I once took a chimp-adjusted 100 question true-false test with the final grade being 100 – (wrong answers) / 0.5. The test was graded on a curve, and the minimum passing grade ended up being minus sixteen.
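    The arithmetic in this comment can be checked directly; a small sketch for an exam with k equally plausible choices per question:

    def chimp_adjusted(correct, total, choices):
        """Rescale a raw score so that pure guessing maps to zero credit."""
        wrong = total - correct
        guessed = wrong / (1.0 - 1.0 / choices)   # questions effectively answered by guessing
        return 100.0 * (total - guessed) / total

    print(chimp_adjusted(80, 100, 5))   # 75.0, the five-answer example above
    print(chimp_adjusted(42, 100, 2))   # -16.0, the true/false 'minus sixteen' case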

  79. Ferdinand Engelbeen (12:41:44) :
    I have read through your web-site. It is very interesting and convincing. But I am still wondering about that war-peak.
    Ice core data looks to me to be the world's largest low-pass filter, you know….
    Thanks.

  80. @Ferdinand Engelbeen (11:42:12)
    So you are assuring me that, despite all the “smoothings, interpolations, extrapolations”, discarded data and so on, the modern CO2 record is genuine, reliable, reproducible, 100% scientific data?
    I only ask because, if that’s right, it must be about the only data set in this whole fraudulent AGW scam (including every one of the shroudwaving disaster predictions that are trotted out by the warmists and their politician and media chums, on a daily basis) that is worth a pinch of shit.

  81. Brent Hargreaves (12:27:15) :
    Contrast this with the following abominable statement on the Royal Society’s website: “Once our actions have raised concentrations of CO2 in the atmosphere, levels will remain elevated for more than a thousand years.” (I can’t find it this month – maybe they’ve withdrawn it, fearing a backlash from the not-so-gullible public.)
    Reminds me of the radioactive waste story by our ‘friends’ at Gangrenepiece. Their estimate is based on a decay back to background levels. If you use a figure for decay to radiation levels of unrefined ore you get ~200 years, not tens of thousands. Maybe the Royal Society is functionally identical to an alarmist NGO.
    —–
    Anyone remember what the mobile phone companies achieved? When sales were rising at 20% per year they assumed it would just carry on. Then, almost suddenly, almost everyone who wanted a mobile phone already had one. Result? Disaster. Aren’t trends wonderful?

  82. Ferdinand Engelbeen (12:41:44) :
    Beck replies:

    “Ferdinand Engelbeen's pages are outdated and refer to my 2007 paper, inspecting 3-4 stations out of 138 to justify rejecting my work. Is that science? Has he contradicted me in a peer reviewed paper?”

  83. Re: TonyB–I read your post at the AirVent. I am not sure what to make of the historic CO2 data, but note that this is why I did not use any data before 1958 in my analysis. If you assume low values for CO2 for say 1900 it is easy to make the exponential model have an even steeper slope, and thus a more alarming value in 2100.

  84. Hi Bart, yes, I didn’t explain anything, just asserted. The explanations came later, which I appreciate. But here’s yet another analogy (as if there aren’t enough already). You trade baseballs with your neighbor; both your yards have quite a few; you scoop up armloads and toss them over the fence and he does the same. You are the atmosphere, he is the top layer of the ocean. He trades with his neighbor, the deep ocean. The deep ocean has lots of balls to toss over to the ocean surface, but the rate is somewhat limited.
    A new neighbor comes along and puts more balls in your yard (manmade CO2). You can then grab armloads more efficiently, because more of them are being dropped in, and toss them into the ocean-surface neighbor’s yard. The extra balls are accumulating in your yard because you can’t keep up with the new source. At the same time the warming ocean surface may be tossing more balls in that direction. That can’t be easily measured. Nor can the amount of CO2 in the ocean (balls in the neighbor’s yard).
    But one thing can be easily measured, which is atmospheric CO2 (balls in your yard), and it is increasing. Another thing can be fairly easily estimated, which is the manmade input from fossil fuels, cement, etc. That input is about 8 GtC/year. The increase in your yard is about 4 GtC/year. If the 8 GtC stopped, your increase would stop, would it not?
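    The bookkeeping behind the analogy, using the round numbers quoted in this thread (a sketch; the ~2.13 GtC per ppmv conversion is an assumption added here):

    emissions = 8.0        # human additions, GtC/yr
    increase = 4.0         # observed rise in the atmosphere, GtC/yr
    print(emissions - increase)     # 4.0 GtC/yr net uptake by oceans and vegetation
    print(increase / emissions)     # 0.5 airborne fraction (other comments quote ~55%)
    print(increase / 2.13)          # roughly 1.9 ppmv/yr rise in concentration terms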

  85. I’m in a hurry, so didn’t read the actual paper, just this article – just a couple quick comments:
    First, basically what I get as a summary is that a variety of least-squares fits “model” the data well. This isn’t a surprise; any good engineer/statistician/real scientist can tell you that. Of course, don’t try arguing that with people that fit the original curves…if you do, then you’re “unscientific”. 😉
    Second, did you plot the residuals for the three models? I’ve seen similar R2 values between fits in the past, but when plotting residuals it became clear which model was superior.
    Third, from what I can tell, the “forecasting” of future “climate change” is full of all sorts of extrapolations where we can have no idea on the uncertainty levels, or even if certain things are possible at all! Certain fields of science suffer from this badly, including climate science (some areas of biology can be as bad or worse). Experienced scientists in other areas can usually come up with at least a handful of examples where small-scale models that work totally fail when extrapolated.
    Regards,
    -Scott
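    Scott's residual check is easy to sketch. The model forms below are my own guesses at the three families named in the paper (exponential, quadratic, saturating), not the paper's exact equations, and the stand-in data should be replaced by the real Mauna Loa annual means:

    import numpy as np
    from scipy.optimize import curve_fit
    import matplotlib.pyplot as plt

    year = np.arange(1958, 2010)
    co2 = 315 + 0.8 * (year - 1958) + 0.012 * (year - 1958) ** 2   # stand-in series, replace with real data

    def expo(t, a, b, c):  return a + b * np.exp(c * (t - 1958))
    def quad(t, a, b, c):  return a + b * (t - 1958) + c * (t - 1958) ** 2
    def satur(t, a, b, c): return a + b * (t - 1958) / (c + (t - 1958))

    for name, f, p0 in [("exponential", expo, (280, 35, 0.02)),
                        ("quadratic", quad, (315, 1.0, 0.01)),
                        ("saturating", satur, (315, 300, 200))]:
        p, _ = curve_fit(f, year, co2, p0=p0, maxfev=10000)
        plt.plot(year, co2 - f(year, *p), label=name)   # residuals reveal structure that R^2 hides
    plt.axhline(0, color="k"); plt.legend(); plt.ylabel("residual (ppmv)"); plt.show()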

  86. kwik (13:31:36) :
    Ferdinand Engelbeen (12:41:44) :
    Ice core data looks to me to be the world's largest low-pass filter, you know…

    Thanks for the appreciation of my web page… Indeed, that is the main problem with ice cores: smoothing out any fast changes. But that depends on the accumulation rate at the place where the ice core was drilled. For the fastest-accumulation ice core (Law Dome, 2 of 3 cores) the averaging of the gas bubbles is only 8 years. That means that any peak of +100 ppmv CO2 going up and down within a period of 15 years, as Beck's graph shows, should be visible in these cores.
    And no other way of measurement or proxy of CO2 levels (or d13C changes) show abnormal values in the period 1935-1950…

  87. Bart (14:14:07) :
    Beck replies:
    “Ferdinand Engelbeen's pages are outdated and refer to my 2007 paper, inspecting 3-4 stations out of 138 to justify rejecting my work. Is that science? Has he contradicted me in a peer reviewed paper?”

    Hey, I didn’t know that peer review was the gold standard for skeptics?
    I have looked at all measurements in the period 1935-1950, as given by Beck in 2007. Several of them were just wrong: the instrument used in Barrow had an accuracy of +/- 150 ppmv. I don’t think one can use that for accurate measurements around 310 ppmv… Measurements at Antarctica show low oxygen values at high CO2 values, which points to contamination. Many measurements were made in towns, fields,… Only a few over the oceans or at coasts with wind from the oceans, and these show low values, around the values found in ice cores…
    Later works show the same lack of quality control: his 2008 publication in E&E ( http://www.biokurs.de/treibhaus/180CO2/08_Beck-2.pdf ) shows that balloon/rocket measurements in the higher atmosphere give (much) higher values (up to 800 ppmv) than the average ground measurements (310-350 ppmv) in the period 1928-1935. That is physically impossible, as there are no sources of CO2 in the higher atmosphere. The opposite is found too, notably during the “peak” around 1940-1945.
    His 2009 paper, together with Francis Massen, is far more interesting: it proposes a method to derive the “background” level from measurements at a contaminated place, based on wind speed, by preference in stormy weather. Unfortunately the wind speed for some interesting historical series, like Giessen in Germany, doesn’t show good peaks, and the data quality seems rather questionable…
    Thus sorry, while I appreciate the tremendous amount of work he has done, I can’t agree with his interpretation of the historical data…

  88. Martin Brumby (13:39:34) :
    So you are assuring me that, despite all the “smoothings, interpolations, extrapolations”, discarded data and so on, the modern CO2 record is genuine, reliable, reproducible, 100% scientific data?
    Martin,
    The difference with climate models is that these are basic observations. Some “cleaning” is necessary, as every station has some minor (or worse) problems: Mauna Loa has a volcano nearby (+4 ppmv) and upslope wind in the afternoon (-4 ppmv). As the fastest change over a day in “background” CO2 is only a few tenths of a ppmv, the outliers are easily detected (also from the wind direction).
    Coastal stations have problems when the wind is from the land side, and the severe conditions at the South Pole cause a lot of mechanical problems. Despite all that, a lot of good data are collected and are at the disposal of anyone who wants them (even the raw hourly data of a few stations, if you want to recalculate the trends)…
    Despite the “cleaning” of contaminated data, all baseline stations are regularly checked with flask samples (in line and outside), measured by different labs, different equipment/methods and different people from different organisations (some with rival interests). That makes it very difficult to cheat, as hundreds of people are involved worldwide.
    Compare that to the quality control of temperature stations, where only few know how data are collected, quality controlled (if at all) and “adjusted”…

  89. Craig, the fashionable question these days is — did you perform an Augmented Dickey-Fuller test for the presence of a unit root in the Hoffmann data?

  90. RockyRoad (18:42:23) :
    “Greenhouses typically run at 1,000 to 2,000 ppmv CO2 without harm to workers. CO2 levels approaching 1% become noticeable to humans, which is about 25 times current atmospheric abundance (~390 ppmv).”
    Just a note on this,
    – Carbon Dioxide (CO2) is not toxic until 50,000 ppm (5%) concentration (Source)
    – Any detrimental effects of Carbon Dioxide (CO2) including chronic exposure to 30,000 ppm (3%) are reversible (Source)

  91. Ferdinand Engelbeen (16:37:41) :
    You say of Beck’s data:
    “His 2009 paper, together with Francis Massen, is far more interesting: it proposes a method to derive the “background” level from measurements at a contaminated place, based on wind speed, by preference in stormy weather. Unfortunately the wind speed for some interesting historical series, like Giessen in Germany, doesn’t show good peaks, and the data quality seems rather questionable…”
    Perhaps, but Giessen
    is not kms up in the air,
    is not on the side of an active volcano near an active volcanic vent, and
    is not only a few kms from Kilauea (i.e. the most active volcano on Earth).
    However, data from the Mauna Loa Station is so good that it cannot be disputed (sarc).
    Richard

  92. Richard S Courtney (11:53:38) :
    All is well that ends well. I understand better, after some further reading, what you are implementing. The only part of it I feel I should weigh in on (as a molecular biologist) is the Michaelis-Menten formula you used as part of your proof samples. This goes to the heart of my accusation: intentional misunderstanding.
    The Michaelis-Menten formula itself describes a physical property of enzymes in reality and the complexes they form with substrates, E + S ⇌ [ES]. Assumptions are required, and this is where experiment makes the knowledge of what the formula means useful.
    From wikipedia,
    “The first source of limitations for the Michaelis-Menten kinetics is that it is an approximation of the kinetics derived by the law of mass action. In particular, Michaelis-Menten kinetics is based on the quasi-steady state assumption that [ES] does not change”.
    Does the assumption hold true? Why are you using this formula when you can tell just from the assumptions that it will not work? CO2 does not hold steady in the atmosphere and is constantly being exchanged. If the principle of the mathematics relies on the algorithm rather than on the Michaelis-Menten assumptions, it still raises the question: what are the reasons for the use of the algorithm?
    I know now the answer is that it is an attempt to show that the IPCC comes to conclusions which are equally implausible. At first I did not realise this, because the way the journal article treats why we use formulas at all seemed to come down to “just look at the R^2 value!”. This would have been easier to understand in the first instance if I weren’t looking for the merit of what the article explains rather than what it doesn’t explain… sort of like the IPCC’s conclusion, I suppose. I am not aware of their theoretical basis and whether they have physical evidence, theories or other proof of how the environment acts within the same time period which gave them a reason to conclude the exponential trend is correct.
    I withdraw my criticism however, due to my lack of knowledge of what the IPCC has produced. Anyone who knows the answers to any of my conjecture is more than welcome to elaborate.
    Thanks Richard I’ll keep reading and learning.

  93. @jorgekafkazar (17:07:11) :
    Extrapolation is speculation.
    – – – – – – –
    Exactly. And interpolation is math masquerading as data.

  94. @TonyB (13:45:29) :
    Tony Brown! Thanks for this link! An absolutely superb piece from you (yet again)!
    I’ll give it again:-
    http://noconsensus.wordpress.com/2010/03/06/historic-variations-in-co2-measurements/#comment-23540
    And the links and comments are excellent as well.
    This (and the discussion here) has also led me to some other interesting web sites (not least those of Beck & Engelbeen), if they can put up with being mentioned inside the same parentheses!
    OK, it does seem likely that the modern CO2 records may not be as blatantly corrupt as the temperature products of Jones, Mann and Hansen. (That doesn’t say much, I know.)
    But it seems extremely questionable that the ‘pre-industrial’ CO2 level can be accepted as 280ppmv. Or indeed the idea that this is somehow the “correct” level.
    I’m not a boffin. I’m just an old and grumpy Chartered Civil Engineer who has spent his life trying to make things work cheaply & efficiently.
    But not the least infuriating aspect of the whole AGW scam is that, by prostituting their scientific talents to a socio-political agenda for the sake of a comfortable office, a fast computer, a generous salary and an index linked pension; thousands of scientists have gone along with a shroud waving fairy tale which they MUST realise will impoverish and endanger millions.
    And, increasingly, the public’s perception of the integrity and trustworthiness of “scientists” is understandably getting lower than the credit that they might give to a Medellin shithouse rat.
    And what have the major scientific institutions been doing? Why, diving into the trough. Just think Lord Oxburgh and all his chums.

  95. I find all of these discussions of where CO2 will go in the future fascinating, but I fail to see the relevance. We know that CO2 is an important chemical in God’s creation and in the creation of crops and forests (of which we want more). We also know it has been much higher in the past, during times when there was (logically!) much more vegetation and more animal life. We also know that the MAK value of CO2 is about 9000 mg/m3 (= 9/1200*100 = 0.75% by mass) – so why worry about CO2 when it is obvious that it will still take a very, very, very long time to come anywhere near the safe working level? (We are now at only 0.04%.) Unless, of course, there is anyone out there who can prove to me that CO2 causes warming rather than cooling? (If I look at the spectral data, I believe it is pretty much even between the warming and the cooling effects of CO2.) People complaining about air pollution must be taught that it is not the CO2 that makes you feel bad but rather the gases that come from incomplete combustion and impurities in the fuel (e.g. CO, SO2, etc.).

  96. Richard S Courtney (17:41:03) :
    You say of Beck’s data:
    Perhaps, but Giessen
    is not kms up in the air,
    is not on the side of an active volcano near an active volcanic vent, and
    is not only a few kms from Kilauea (i.e. the most active volcano on Earth).
    However, data from the Mauna Loa Station is so good that it cannot be disputed (sarc).
    Richard

    Richard,
    The problem of Giessen is exactly that it isn’t high in the air, but on land near huge sources and sinks (mainly vegetation). This gives a bias of tens of ppmv compared to over 95% of the atmosphere: everywhere over the oceans and above 1,000 meters over land. Even including the local disturbances, the difference between Mauna Loa, South Pole, Barrow and Giessen is enormous:
    http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_background.jpg
    The data from SPO, MLO, BRW in that graph are the raw hourly averages, including outliers caused by volcanic degassing or upslope winds (MLO), landside wind (BRW) or mechanical problems (SPO).
    Using the historical data from Giessen is as scientific as using trends from a thermometer placed on an asphalted parking lot without any correction (and even then…) to obtain the “global” temperature of that time period.
    Ferdinand

  97. @Ferdinand Engelbeen:
    I would point out as an engineer that if the CO2 pre-industrialisation concentration was static then we have two possible scenarios:-
    1] There is a God, and a very active one, that gets up every day to adjust the celestial CO2 thermostat to ensure that the climate is always set to “comfortable”. Not very scientific perhaps that one.
    2] The CO2 concentration is part of a negative feedback control loop. Additional CO2 added to the atmosphere by active volcanoes (as an example) will be removed by a second process (such as increased precipitation) at a rate that opposes the original change.
    From a science POV, point [2] is the one to go for, but it means that any addition of CO2 from human sources would then be removed by the very same process. Why do we not see such a thing happening right now? Probably quite simply because negative feedback control loops often don’t react instantaneously. Usually what happens is that the output of the control loop oscillates gently before reaching a resting equilibrium. There is no reason to believe that the resting equilibrium will actually be very different from the equilibrium before the sudden change was introduced. This depends on the gain available in the negative feedback – if the negative feedback reacts strongly to the step change then the output will be almost unchanged after the step change in input.
    So, my hypothesis would be this – that the earth’s climate system is a negative feedback loop with high gain in the negative feedback, such that any step change to any of its inputs is unlikely to make any significant long-term change to the behaviour of the climate system. I propose a mechanism for such negative feedback that includes scrubbing of CO2 from the atmosphere via increased precipitation caused by greater evaporation of the oceans due to slightly higher temperatures. I further note that this hypothesis is supported by the fact that the earth undergoes regular dramatic step changes to the climate’s input conditions (such as massive volcanic eruptions like Yellowstone or impacts from meteorites such as the one that killed the dinosaurs) and yet eventually the climate has stabilised to much the same conditions as before these step changes occurred.
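    Ryan's point about gain can be put in numbers with the textbook steady-state result for a proportional negative-feedback loop: a sustained disturbance d is attenuated to d / (1 + G) at the output, where G is the loop gain (a generic sketch, not a claim about the actual carbon-cycle gain):

    step = 100.0                      # arbitrary sustained disturbance, e.g. ppmv added
    for gain in (0.5, 1, 10, 100):
        print(gain, "->", round(step / (1 + gain), 1))   # residual long-term change
    # high loop gain leaves little lasting change; low gain lets most of the step persist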

  98. The following are some references and comments illustrating the guesswork character of estimates having to do with the carbon cycle and projected CO2 concentrations.
    Begin by taking a look at the following IPCC chart of the carbon cycle:
    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-7-3.html
    There is a sentence in the caption under the chart that sounds a bit like a joke to me:
    “Gross fluxes generally have uncertainties of more than ±20% but fractional amounts have been retained to achieve overall balance”
    NASA has a similar chart, but note that no fluxes are shown between the deep and the surface oceans, as if they barely talked to each other. Instead, they put some little wiggly lines that are meant to represent blank ignorance, I suppose. At some point in the text they understate softly that “new measurements are needed to reduce uncertainties in coastal carbon fluxes and to quantify carbon export to the deep ocean.”
    http://nasascience.nasa.gov/earth-science/oceanography/ocean-earth-system/ocean-carbon-cycle
    I suspect that the fluxes between deep and surface ocean layers must be comparatively large for the simple reason that the overwhelming majority of CO2 (over 90%) is in the oceans. If we know nothing about this flux, then we know nothing at all.
    At any rate, a >20% uncertainty in most gross fluxes means that the charts are very pretty, but they are also largely a work of fiction. Particularly, if you compare the amount of the uncertainty acknowledged for gross fluxes with the known amount of human emissions, the former is many times larger, so I wonder how we are supposed to have any idea where the small amount of human emissions ends up distributed.
    And I wonder where the notion of the necessity of a permanent steady state balance comes from. Since the distribution of CO2 among the different parts of the system seems to have been changing all the time in geological history, doesn’t this mean that the system is seldom, if ever, in strict balance? After all, imbalances in the cycle are the only way you can change the distribution. So there must be constant fluctuations in this imbalance, contained within certain boundaries. In this respect I see plants acting as a definite negative feedback. If CO2 increases, then biomass increases and therefore it begins to act as a net sink until it offsets and eventually reverses atmospheric CO2 increases. If CO2 gets too low, they stop growing and they stop acting as net sinks. Also, the net effect of more plants is probably cooling, thereby increasing the sink capacity of the oceans.
    There is a post by E.M. Smith in his blog where he does a rough calculation of the CO2 absorption capacity of plants:
    http://tinyurl.com/yh5r6ul
    Very interesting calculations and reflections. Especially the following:
    ***quote***
    So a “fast forest” species like Poplar or Eucalyptus can completely deplete [the CO2 from] about twice as much volume of air as sits above that forest (all the way to space) and a fertile pond growing pond scum could completely deplete about 20 times the volume of air as sits above it. In one year.
    Golly.
    So let me think about this for just a minute… If I grow a fast forest for 50 years, it will completely deplete 100 times the volume of air that sits above it. So 1% of the planet surface being these fast species would completely scrub all present CO2 from the air in one lifetime… 75 years in the PPM by volume case.
    And pond scum could do it in 5 years. 7 and a bit years if CO2 is ppm volume. (Which I think it is, per wiki).
    I think I know now why plants are CO2 limited in their growth. They have scrubbed the CO2 down to the point where they are seriously unable to grow well. Otherwise they would have removed it all not very long ago in geologic (or historical) time scales.
    I come to 4 conclusions from this:
    1) We desperately need more CO2 in the air for optimal plant growth. Plants must have depleted the air to the point of being seriously nutrient limited.
    2) Any time we want to scrub the air of CO2, we can do it in a very short period of time using nothing more exotic than trees and pond scum on a modest fraction of the earth surface.
    3) Biomass derived fuels will be CO2 from air limited in their production, especially if we start some kind of stupid CO2 “sequestration” projects. Siting biofuel growth facilities near CO2 sources (like coal plants) ought to be very valuable.
    4) Any CO2 sequestration project that does get started by The Ministry of Stupidity needs to allow for CO2 recovery in the future. Things like ocean iron enrichment that sink it to the “land of unobtainable ocean depths” are a very bad idea. We are one generation away from CO2 starvation for our crops at any given time.
    ***end of quote***
    And you may remember this post here at WUWT:
    Biosphere is booming – Satellite data suggests CO2 the cause
    http://tinyurl.com/5sf5us
    “They found that over a period of almost two decades, the Earth as a whole became more bountiful by a whopping 6.2%. About 25% of the Earth’s vegetated landmass — almost 110 million square kilometres — enjoyed significant increases and only 7% showed significant declines. When the satellite data zooms in, it finds that each square metre of land, on average, now produces almost 500 grams of greenery per year.”
    Well, if half of our emissions have been accumulating in the atmosphere, then one might be allowed to reason that without emissions there would have been CO2 depletion by the same amount, and one would also be allowed to reason that the effect of that on biomass would have been the opposite, perhaps a 6% reduction in plant mass over less than two decades. Another name for that would be desertification. Hasn’t the Sahara been greening, with people moving into areas uninhabited since living memory? Well I suppose some fanatics could argue that the shrinkage of deserts is yet another act of aggression by mankind, breaking the “natural balance” that nature considered wise before we started sinning with fossil fuels and so on. But seriously, between the alternatives of a 6% reduction and a 6% increase in vegetation, I would take the latter any time. And I would also take a degree or two of warming over the same amount of cooling any time. One of these days, they are going to start arguing that the optimal “natural balance” is the one found some 12,000 years ago, when most land in North America and Europe was frozen stiff.
    One thing I find particularly perplexing is that very cold countries, like Canada and Scandinavia, seem to be the most pious in their adherence to global warming anxiety. Why on earth would Canadians worry about a degree or two of warming? Do they think it’s too hot? They should chill out and listen to Minnesotans for Global Warming.
    To me, all current estimates of future CO2 concentrations are pure gross guesswork, whether done by models or by consultation with oracles. In view of our enormous ignorance, the most sensible thing would be to just see where the current trend is headed for, and it seems it is headed for around 500 ppm by 2100.
    This is without even considering the effect of peak fossil fuels on limiting our contribution. The notion that emissions will continue to increase throughout this century seems unlikely. See for example:
    http://www.theoildrum.com/pdf/theoildrum_5084.pdf
    There is also a Vincent Gray essay commenting on a new study on the Law Dome ice cores.
    His conclusion is:
    “It is back to the drawing board for carbon cycle models.”
    http://www.john-daly.com/bull120.htm
    And you may remember an essay by Roy Spencer on oceans as drivers of CO2 that generated a lot of comments at WUWT, especially the comments by Richard Courtney
    http://wattsupwiththat.com/2008/01/25/double-whammy-friday-roy-spencer-on-how-oceans-are-driving-co2/

  99. Ferdinand Engelbeen:
    My post at (17:41:03) queried your casting doubt on Beck’s studies while you accept Mauna Loa data. You have responded at (03:50:43) saying:
    “Using the historical data from Giessen is as scientific as using trends from a thermometer placed on an asphalted parking lot without any correction (and even then…) to obtain the “global” temperature of that time period.”
    And the Mauna Loa is supposed to be better!!??
    As I said, the Mauna Loa Station that monitors atmospheric CO2 concentration is
    1. kms up in the air,
    2. on the side of an active volcano (i.e. Mauna Loa) near an active volcanic vent,
    3. only a few kms from Kilauea (i.e. the most active volcano on Earth).
    Compared to that, monitoring temperatures near asphalt is good practice!
    PROVING ONE THING IS WRONG DOES NOT PROVE ANOTHER IS RIGHT.
    The point of my post at (17:41:03) was to assert that none of the data on atmospheric CO2 concentration deserves an unquestioning trust, and I doubt all of it.
    Richard

  100. Ryan (03:58:52) :
    I would point out as an engineer that if the CO2 pre-industrialisation concentration was static then we have two possible scenarios:-
    2] The CO2 concentration is part of a negative feedback control loop. Additional CO2 added to the atmosphere by active volcanoes (as an example) will be removed by a second process (such as increased precipitation) at a rate that opposes the original change.
    From a science POV, point [2] is the one to go for, but it means that any addition of CO2 from human sources would then be removed by the very same process. Why do we not see such a thing happening right now? Probably quite simply because negative feedback control loops often don’t react instantaneously.

    I largely agree with this (as a former process/automation control engineer myself). The whole CO2 system reacts as a simple first-order linear process to disturbances. The limiting factor is that the feedback needs time (precipitation is not a huge factor), both for more vegetation growth and for more ocean uptake. On average the current uptake rate is about 55% of the emissions, as long as the emissions keep growing (near) exponentially.
    If we should stop all emissions today, the half-life back to equilibrium (at around 300 ppmv) is about 55 years, while the influence of temperature on the equilibrium setpoint is about 8 ppmv/K for long-term changes (like the MWP-LIA-CWP temperature changes).
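    A quick illustration of the “stop all emissions” case with the figures Ferdinand quotes (his numbers, my sketch; a single 55-year half-life toward ~300 ppmv is assumed):

    import math

    c0, c_eq, half_life = 390.0, 300.0, 55.0
    k = math.log(2) / half_life
    for yrs in (0, 25, 55, 110, 200):
        print(yrs, "years:", round(c_eq + (c0 - c_eq) * math.exp(-k * yrs), 1), "ppmv")
    # roughly 345 ppmv after 55 years and ~322 after 110: on these assumptions most
    # of the excess is gone within a century or two, not a millennium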

  101. great, all that stuff from Francisco. God bless you for that. I really enjoyed that!!!
    Still chuckling. The folly of it all is really mind boggling. People wanting to bury CO2 when it is still coming out of the earth at numerous places. Why don’t they first “plug” those “holes” where the CO2 is coming out of earth naturally?

  102. Richard S Courtney (04:54:23) :
    My post at (17:41:03) queried your casting doubt on Beck’s studies while you accept Mauna Loa data. You have responded at (03:50:43) saying:
    “Using the historical data from Giessen is as scientific as using trends from a thermometer placed on an asphalted parking lot without any correction (and even then…) to obtain the “global” temperature of that time period.”
    And the Mauna Loa is supposed to be better!!??
    As I said, the Mauna Loa Station that monitors atmospheric CO2 concentration is
    1. kms up in the air,
    2. on the side of an active volcano (i.e. Mauna Loa) near an active volcanic vent,
    3, only a few kms from Kilauea (i.e. the most active volcano on Earth).
    Compared to that, monitoring temperatures near asphalt is good practice

    Richard, as I have responded, have a look at the data: the influence of the volcano at Mauna Loa is less than +4 ppmv, and when such values are discovered (as outliers) they are not used for averaging. Similarly for upslope wind conditions (-4 ppmv).
    More importantly, with or without these outliers, the average and trends of Mauna Loa are identical (less than 0.1 ppmv difference; see http://www.ferdinand-engelbeen.be/klimaat/klim_img/mlo2004_hr_raw.jpg for the raw data trend 2004 and http://www.ferdinand-engelbeen.be/klimaat/klim_img/mlo2004_hr_selected.gif for the “cleaned” data).
    Further, all 10 baseline stations (Barrow, sea level, no volcano; South Pole, 3000 m, no volcano;…) show similar values within a few ppmv and near-identical trends. Thus the volcano near the Mauna Loa station is not important at all (except if there is a real outburst…). And the altitude is not important either: that only dampens the seasonal variation, just as the ITCZ hinders the NH-SH exchange of air, so that the SH CO2 values lag the NH values by about 14 months.
    Compare that to the Giessen data in summer: +170 ppmv at night and -50 ppmv during the day. Which one do you prefer for “background” CO2 measurements?

  103. An exponential function best fit to global mean CO2 levels for the 28-year period 1982 – 2009 gives CO2 = 0.0374·exp(0.0046t), R^2 = 0.9955. This would suggest CO2 at year 2100 would be 586 ppmv.
    An exponential function best fit to Southern Ocean (below 40 deg. south) CO2 levels for the 27-year period 1982 – 2008 gives CO2 = 0.0442·exp(0.0045t), R^2 = 0.9956. This would suggest Southern Ocean CO2 at year 2100 would be 562 ppmv – some 4.2% lower than the global mean.
    A power function best fit to global mean CO2 levels for the 28-year period 1982 – 2009 gives CO2 = 2e-28·t^9.1775, R^2 = 0.9953. This would suggest CO2 at year 2100 would be 618 ppmv.
    A power function best fit to Southern Ocean (below 40 deg. south) CO2 levels for the 27-year period 1982 – 2009 gives CO2 = 7e-28·t^9.0036, R^2 = 0.9954. This would suggest Southern Ocean CO2 at year 2100 would be 572 ppmv – some 7.7% lower than the global mean.
    Ain’t math fun!
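    Plugging year 2100 into the four fitted formulas above (with t taken as the calendar year) reproduces the quoted projections:

    import math

    models = {
        "global exponential":    lambda t: 0.0374 * math.exp(0.0046 * t),
        "Southern Ocean exp.":   lambda t: 0.0442 * math.exp(0.0045 * t),
        "global power":          lambda t: 2e-28 * t ** 9.1775,
        "Southern Ocean power":  lambda t: 7e-28 * t ** 9.0036,
    }
    for name, f in models.items():
        print(name, round(f(2100)), "ppmv")
    # prints values within about a ppmv of the 586, 562, 618 and 572 quoted above
    # (the small differences come from rounding of the printed coefficients)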

  104. Richard S Courtney (04:54:23)
    This is very odd. The most active volcano on Earth is a few km away from the supposedly most accurate CO2 measuring station??
    Regarding the altitude, since CO2 is about 50% denser than air, I suppose they must use some kind of theoretical concentration gradient by altitude to adjust for the height at which it is measured when calculating a total average. I wonder how this is done and how reliable it is. If it is reliable, then it would seem to make sense to measure it as high up as possible in the atmosphere, where it would be least affected by earth sources. Instead, we choose to measure it next to volcanoes? That’s what the French call “se compliquer la vie” (making life complicated for oneself).
    I sometimes think that constructing averages of things as obviously “lumpy” as global surface temperature, atmospheric CO2, or ocean pH cannot possibly be done with very good accuracy, meaning that the expected errors may well be too large for the size of the signals being measured. I’ve read there are a lot of questions about how realistic the Vostok ice core reconstructions are, for a variety of reasons leading to CO2 depletion in the air bubbles after such long periods. But now it looks like even the post-1958 measurements could be the result of arcane smoothing. I fully agree with Richard Courtney. Doubt is very much in order.

  105. Henry Pool (06:11:23) :
    Thank you Henry. I’m glad you enjoyed that.
    And yes, CO2 sequestration is a lunatic idea, and I think it is widely regarded as such even by most warmists. But one should not underestimate the snowballing potential of crazy ideas. Growing and burning food to run cars is still going pretty strong. And you may still find people invoking a wonderful “hydrogen economy” that will solve all our energy problems. So I would not be surprised if CO2 sequestration projects started to materialize in order to save us from imaginary roasting.

  106. Francisco (04:21:53) :
    The following are some references and comments illustrating the guesswork character of estimates having to do with the carbon cycle and projected CO2 concentrations.
    Francisco,
    One needs to differentiate between what is known and what is unknown in these matters.
    We know with reasonable accuracy how much humans have emitted in the past 150 years or so, simply by counting how much fossil fuel was sold and how much CO2 that gives. Expressed in carbon equivalents, that is about 8 GtC/year and increasing. We know with good accuracy how much CO2 is in the atmosphere, and with far less accuracy how much is in the oceans, vegetation, soils and sediments. And we know something, with large margins of error, about the gross flows between the different compartments.
    The most important point is the mass balance: we add 8 GtC to the atmosphere and see an increase of 4 GtC, thus the rest of the system (oceans, vegetation) is absorbing about 4 GtC out of the atmosphere. Thus, anyway, nature as a whole is not a net source of CO2 (even if some 150 GtC is exchanged each year between the different compartments); it is a net sink.
    Then the individual flows: each year some 50 GtC is exchanged back and forth between vegetation and atmosphere, and 100 GtC between the upper-level oceans and the atmosphere over the seasons. The accuracy of these estimated flows is unknown, but anyway they don’t give a net addition of CO2 over a full seasonal cycle, only a 4 GtC sink. The partitioning of the sinks between oceans and land (vegetation) was estimated from d13C and oxygen measurements:
    http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf still with large margins of error.
    The exchange between the upper and deep oceans is limited to a few sink places (mainly the THC in the NE Atlantic) and source places (mainly the equatorial Pacific) and is estimated at some 100 GtC/yr exchanged with the upper ocean layer.
    Indeed, plants and oceans act as a negative feedback for disturbances of the equilibrium, but not at 100%. On average, plants use 80% more CO2 for a CO2 doubling in the atmosphere. Oceans are better, as long as the deep oceans are not too much affected (if you burn all available oil and coal…), as these simply react to pressure differences, which remain huge at the poles, but the uptake is limited by the diffusion speed at the surface.
    The opposite may happen if we stop all emissions today: no increase in vegetation uptake (even a decrease), but the oceans in the end are the regulating factor: ocean temperature and CO2 levels in the atmosphere were tightly coupled over the past million(s) of years, with about 8 ppmv/K change, as can be deduced from the Vostok (and Dome C) ice cores:
    http://www.ferdinand-engelbeen.be/klimaat/klim_img/Vostok_trends.gif
    That means that if we reach around 300 ppmv some day with current temperatures, there is no risk that vegetation will suck that further down.
    Further, the comment on the essay on the Law Dome ice cores is somewhat outdated: it is quite clear that the pre-industrial variations in CO2 were the result of temperature variations (MWP-LIA, about 0.8 K, 6 ppmv difference).
    And have you read my comment(s) on Dr. Spencer's essay? Dr. Spencer nowadays is convinced that the oceans are not the driver of the current higher levels of CO2…

  107. I may be a little late but I’ll ask anyway.
    Why does CO2 uptake remain a constant 55%? In raw terms 55% of 2GT yields an uptake of 1.1 GT. However, 55% of 8GT is 4.4GT.
    So, what is the physics behind nature sequestering 4x as much CO2 now than it did 50 years ago? I can understand some difference, but this seems extremely high.

  108. Eric (skeptic) (14:32:19) :
    Your analogy would be less flawed if you posited that, as more balls come in the yard, you sprout more arms with which to throw them back. The rate at which you throw them back is proportional to the number in the yard. Result: the new input drives the level in the yard up, but only by the fractional increase in rate at which they are coming in.

  109. Richard M (08:59:06) :
    I may be a little late but I’ll ask anyway.
    Why does CO2 uptake remain a constant 55%? In raw terms 55% of 2GT yields an uptake of 1.1 GT. However, 55% of 8GT is 4.4GT.
    So, what is the physics behind nature sequestering 4x as much CO2 now than it did 50 years ago? I can understand some difference, but this seems extremely high.

    Richard M., that is the result of simple physics: the processes that govern the CO2 levels in the atmosphere were in some state of dynamic equilibrium in the pre-industrial era, where the equilibrium point was (and is) mainly affected by temperature. While a lot of (probably non-linear) processes are at work which affect CO2 levels in the atmosphere, the net result of all these processes is surprisingly linear: from about 4 ppmv/K for short-term (seasons, year-by-year variations) to 8 ppmv/K for long-term (MWP-LIA, glacials-interglacials) temperature changes.
    If you add some CO2 (whatever the source) to such a system, the system will react to oppose the disturbance.
    In this case, as there is more input to the atmosphere, the increase of CO2 (partial pressure) in the atmosphere reduces the amount of CO2 released by the warm equatorial oceans and increases the amount of CO2 absorbed by the cold polar oceans. The drive for more uptake thus is the increase of CO2 in the atmosphere. As the difference between the CO2 level in the atmosphere and the old equilibrium increased over time, the uptake increased too, in proportion to the difference. In 1960, the difference was roughly 25 ppmv (315-290); nowadays it is 100 ppmv (390-290). Also a factor of 4…
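    The factor of four in that last sentence is just the ratio of the disequilibria, if the net sink is taken as proportional to the excess above a ~290 ppmv equilibrium (Ferdinand's assumption, sketched here):

    c_eq = 290.0
    excess_1960 = 315.0 - c_eq    # 25 ppmv
    excess_2009 = 390.0 - c_eq    # 100 ppmv
    print(excess_2009 / excess_1960)   # 4.0: the proportional uptake is four times larger today,
                                       # which is why a roughly constant share of the (also growing)
                                       # emissions is taken up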

  110. Francisco (07:02:43) :
    Regarding the altitude, since CO2 is about 50% denser than air, I suppose they must use some kind of theoretical concentration gradient by altitude to adjust for the height at which it is measured when calculating a total average.
    Francisco, CO2 is quite well mixed throughout the air layers, except near huge sources and sinks, and with a seasonal amplitude that is larger near the ground and larger in the NH than in the SH.
    Mixing of gases has little to do with gravity (except near sources), as once mixed by wind, it stays mixed until absorbed somewhere else.
    Yearly average CO2 levels and trends at Barrow (11 m altitude) and Mauna Loa (3400 m), and in the SH at low (Samoa) and high (South Pole) altitude, are very similar and are within a few ppmv of each other:
    http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends.jpg
    Even airplane measurements at different altitudes and new satellite measurements show similar levels of CO2 everywhere, within a few % of the values…
    Thus averaging for a “global” CO2 level is not that difficult in this case…

  111. Ferdinand Engelbeen (07:20:34) :
    Well, if as you say the margin of error for these flows is “very large” or if their “accuracy is unknown,” as you say later, one begins to wonder how useful they are, and whether the line that separates estimates from wild guesses might be too nebulous in these sciences.
    IPCC chart of carbon cycle
    http://tinyurl.com/y8s4m6m
    NASA chart
    http://tinyurl.com/624cbs
    The 8 ppmv/K coupling between CO2 levels and ocean temperatures seems reasonable. But it does not seem very significant at all. So you need a huge 10 deg swing in temperature to increase CO2 by a mere 80 ppm? Clearly something else was causing CO2 levels in the distant past to be much higher. I’ve never seen much of a correlation anyway when zooming out sufficiently:
    CO2 vs temperatures
    http://www.geocraft.com/WVFossils/PageMill_Images/image277.gif
    http://www.geocraft.com/WVFossils/Carboniferous_climate.html

  112. re Ferdinand Engelbeen (07:20:34) :
    Ferdinand,
    Like Francisco, I’ve a lot of issues with that IPCC supposed balance
    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-7-3.html
    Including that they in effect assumed steady state in the first place for the post 1000AD but preindustrial period.
    If there are oceanic processes operating on timescales of 100s or 1000s of years even out to 10000 years which is what Carl Wunsch mentioned in the original version of TGGWS, then an assumption of steady state even over the post 1000AD but preindustrial period is problematical.
    Something I was trying to find were Law Dome results going back 20000 years. I’ve only been able to find results post 1000AD.
    Does anyone know if results going back further exist?
    cheers
    brent

  113. Ferdinand Engelbeen (10:18:52) :
    “Mixing of gases has little to do with gravity (except near sources), as once mixed by wind, it stays mixed until absorbed somewhere else. ”
    I find it hard to believe that its significantly higher density with respect to air has no effect at all on its vertical distribution. Maybe you are right and it is overall well diffused. In any case, at least this report tells a different story:
    http://adsabs.harvard.edu/abs/2002AGUFM.A62B0151W
    Atmospheric CO2 concentration was observed using a research aircraft (Gulfstream II) over the western North Pacific during the Pacific Exploration of Asian Continental Emission (PEACE) phase A (6 – 22 January 2002) and phase B (20 April – 16 May 2002). 13 and 12 flights were conducted during phase A and B, respectively, with its latitude range of 22 – 42 north latitude and maximum altitude of about 13 km.
    […]
    The observed CO2 concentration is generally high in low altitude and low in high altitude.
    ========
    Also, in two Cameroon lakes there was a sudden cloud of CO2 emitted back in the 80s, due to some disturbance at the bottom, which then drifted and fell to the ground, asphyxiating many people and livestock.

  114. Francisco (11:45:52) :
    Further in the abstract, the “high” and “low” are defined: The difference is about 8 ppmv at most. Or some 2% of the measured values. Not really much difference for a momentary measurement in different air masses. These order of differences are measured between latitudes and altitudes over the seasons, but that all levels out if you compare yearly averages.
    See e.g. the inflight measurements over altitude in Colorado:
    http://www.ferdinand-engelbeen.be/klimaat/klim_img/inversion_co2.jpg
    Below the inversion layer (below 500-1000 m of air altitude over land), one can measure almost any elevated level of CO2 in the morning. In the afternoon the air layers are better mixed and levels fall to near background.
    And here the influence of the seasonal changes (mainly by vegetation) on CO2 levels at different altitudes in the NH:
    http://www.ferdinand-engelbeen.be/klimaat/klim_img/seasonal_height.jpg
    That is of course not valid near (huge) point sources like the Cameroon lakes or near volcanic vents (or in towns, near smoke stacks,…), where one needs more time, volume and wind to mix the extremely high quantities of CO2 through the rest of the atmosphere…

  115. Francisco (10:40:51) :
    Well, if as you say the margin of error for these flows is “very large” or if their “accuracy is unknown,” as you say later, one begins to wonder how useful they are, and whether the line that separates estimates from wild guesses might be too nebulous in these sciences.
    The 8 ppmv/K coupling between CO2 levels and ocean temperatures seems reasonable. But it does not seem very significant at all. So you need a huge 10 deg swing in temperature to increase CO2 by a mere 80 ppm? Clearly something else was causing CO2 levels in the distant past to be much higher. I’ve never seen much of a correlation anyway when zooming out sufficiently:

    Most of the estimates are based on some measured values. E.g. the deep ocean – ocean surface exchanges were deduced by following the fate of CFCs in the sinking waters of the NE Atlantic and the carbon that followed that stream. Deep-ocean and surface-ocean carbon content is the result of many ship cruises measuring CO2 and (bi)carbonate at different depths. Still, only a part of the oceans is covered, over a relatively short time frame, besides (only two) fixed stations (one at Bermuda and one at Hawaii) with longer series of many more different measurements, including measuring the sinking remains (organic matter and shells) from algae. Other inventories are more difficult to obtain (land carbon in vegetation…).
    The 8 ppmv/K is only valid for the past few million years, up to now. More distant times are not comparable, as these involve very slow processes like silicate rock weathering, but also the position of the continents: available land for vegetation, ice caps, sea surface, mountain ranges influencing air and sea flows…

  116. brent (11:01:53) :
    Including that they in effect assumed steady state in the first place for the post 1000AD but preindustrial period.
    If there are oceanic processes operating on timescales of 100s or 1000s of years even out to 10000 years which is what Carl Wunsch mentioned in the original version of TGGWS, then an assumption of steady state even over the post 1000AD but preindustrial period is problematical.
    Something I was trying to find were Law Dome results going back 20000 years. I’ve only been able to find results post 1000AD.
    Does anyone know if results going back further exist.

    The knowledge of the details of the carbon cycle still is in full development and should be seen as a rough indication of what really happens in nature. Lots of measurements are under way, e.g. some 400 stations are measuring CO2 fluxes over land and forests, to help to understand where and how much CO2 is absorbed by vegetation. Other “tall” towers (200 m high) measure CO2 fluxes over a large area, including towns, again to see where the CO2 is coming from and going to over large(r) areas. And newly developed satellites will help a lot.
    The Law Dome ice cores indeed are only going 150 and 1,000 years back in time, due to the high accumulation at the summit and downslope. But there are other ice cores, the longest going back some 800,000 years. The data can be found here:
    http://www.ncdc.noaa.gov/paleo/icecore/current.html
    But I have already made a few graphs for different time frames:
    http://www.ferdinand-engelbeen.be/klimaat/klim_img/antarctic_cores_001kyr_large.jpg
    http://www.ferdinand-engelbeen.be/klimaat/klim_img/antarctic_cores_010kyr.jpg
    http://www.ferdinand-engelbeen.be/klimaat/klim_img/antarctic_cores_150kyr.jpg
    http://www.ferdinand-engelbeen.be/klimaat/klim_img/antarctic_cores_800kyr.jpg

  117. There is a more obvious reason for disputing that Mauna Loa records increases in the level of CO2 due to industrialisation. The Mauna Loa record shows a smooth curve going back to the 1950s. However, human CO2 output due to industrialisation has not been a smooth curve – consumption of oil after the 1973 and 1979 oil crises actually fell by 50% – but this point of inflection in the consumption of fossil fuels is not shown at all in the Mauna Loa record.
    Whatever Mauna Loa is showing, it isn’t caused directly by human burning of fossil fuels. Correlation is not causation.

  118. I have discovered, thanks to a commenter here, that I need to add more significant digits, especially for the quadratic model, to the paper. This will be available in a few days.

  119. “”” Craig Loehle (12:05:58) :
    re: George Smith: The exponential model IPCC uses is NOT from physics. It is a fit to the historical rise in CO2 resulting from burning fossil fuels. It has no theoretical or causal basis. Thus any other empirical formula is equally well-based. You should think through why you are so annoyed before venting. “””
    Craig, I reread my post quite carefully (it was in response to Anna's observation). I didn't find any indication of “annoyance”, either in Anna's original post or my response to hers. We both were simply commenting that fitting data to some mathematical function is rather straightforward.
    I agree with your point that such close fits generally require a lot more parameters than the type of fits you were investigating.
    I see fitting a real set of data from some presumably continuous function as a useful exercise, if the intent is to infer the likely value of some intermediate point that was not included in the data set (interpolation). I have no problem with that.
    The problem arises when one uses such fits to extrapolate beyond the data set. My (somewhat flippant) example of the trend of the terrain around the Grand Canyon was simply to point out, mainly for the benefit of non-expert (lay) readers, why extrapolation can be hazardous.
    And it is particularly hazardous in the case of situations where the data set is “not from Physics” as you describe it.
    If in fact a set of data values is “not from physics”, i.e. not under the control of any real natural laws, then it cannot be fitted to any function with any expectation that the function should continue to yield plausible values beyond the range of values used to set the function.
    I've forgotten the exact wording you used in your original story, but I think you said something like: all of your fittings were satisfactory over something like 58 years or so of past history, and statistically indistinguishable from each other. In that case I agree that any of them would be fine for estimating interpolated intermediate values.
    But unless any of those functions CAN be tied to some physics of the processes, there's no basis for judging any one of them to be likely to give the best extrapolated values.
    The mathematical function that describes the spectral radiant emittance of a black body contains precisely zero arbitrary parameters; all the parameters are standard fundamental physical constants. Even (h), Planck's constant, which appears in this expression, is not arbitrary, nor in need of experimental determination from the Planck formula, since it is specifically incorporated in Einstein's formula E = h·(nu) relating photon energy to the frequency of the associated electromagnetic wave.
    Now the black body itself is a fictional, non-existent entity, so it's a MODEL; but practical approximations to that model can be made to quite high precision over certain temperature ranges, and predictions from that fictional model have been experimentally verified over pretty much the entire range of detectable EM radiation spectra.
    So that sort of curve fitting CAN be used for extrapolation; but lacking a causal relationship; which is what you seem to imply; no extrapolation can be valid.
    Some time back, Roy Spencer plotted some data and fitted some fourth order polynomial to it. There was no physical basis for such a formula; and after thinking about it, Dr spencer abandoned the notion; which I think resulted from his own self realization that such a fitting; really added nothing to the understanding of the real processes.
    I have no problem with people fitting simple math functions to experimental data; it does not annoy me.
    What might annoy me, is their (possible) insistence that the future will follow their formula.
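
The exchange above is essentially about why several curve fits can be statistically indistinguishable over a calibration window and yet useless for extrapolation. A minimal sketch, using synthetic CO2-like data and generic least-squares fits (not Loehle's actual data, models, or code), of how an exponential and a quadratic fit can match a 1958-2009 window almost perfectly and still diverge by 2100:

```python
# Sketch: two models fitted to the same synthetic record diverge under extrapolation.
# Synthetic data only; this is not the paper's data or code.
import numpy as np
from scipy.optimize import curve_fit

years = np.arange(1958, 2010)
t = years - 1958.0
rng = np.random.default_rng(0)
obs = 250.0 + 65.0 * np.exp(0.0145 * t) + rng.normal(0.0, 0.3, t.size)  # CO2-like, ppmv

def exp_model(t, a, b, c):
    return a + b * np.exp(c * t)

p_exp, _ = curve_fit(exp_model, t, obs, p0=(250.0, 65.0, 0.014))
p_quad = np.polyfit(t, obs, 2)  # quadratic via ordinary least squares

for name, fit in (("exponential", exp_model(t, *p_exp)),
                  ("quadratic  ", np.polyval(p_quad, t))):
    r2 = 1.0 - np.sum((obs - fit) ** 2) / np.sum((obs - obs.mean()) ** 2)
    print(name, "calibration R^2 =", round(r2, 4))

t2100 = 2100.0 - 1958.0
print("exponential model, 2100:", round(exp_model(t2100, *p_exp), 1), "ppmv")
print("quadratic model, 2100:  ", round(np.polyval(p_quad, t2100), 1), "ppmv")
```

Both fits report calibration R² indistinguishable from one another, yet the extrapolated 2100 values differ on the order of a hundred ppmv or more, which is the indeterminacy the paper and the comment are pointing at.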

  120. Ferdinand Engelbeen+++
    I fear you have not understood our Climate 2009 paper. It is possible to reconstruct background levels from near-ground measurements to within 1% by using vertical profiles and wind speed (or precipitation). So the Giessen data are valid.
    And please check the station file on my website http://www.realCO2.de. You will see vertical profile measurements from the 19th and 20th centuries that you have never heard about. I have not listed data from cities or other sources with unknown air masses, nor used them for calculations.
    Your postings cite old data that I presented but later excluded. Please check.
    By the way, another erroneous citation: CO2 levels rose from 1900 for 40 years to a maximum (as Callendar had suggested from 18 sources), very similar to the Mauna Loa rise since 1955. And the reconstructed rise (within 3%) is about 60 ppm, very similar to Mauna Loa in modern times. The Mauna Loa data had been selected to within 0.1 ppm and smoothed, hence the steady shape.
    regards
    Ernst Beck

  121. Any projections of CO2 emissions for the 21st century based on extrapolations from the second half of the 20th century are highly dubious. Anthropogenic CO2 emissions during the last half century originated primarily from the production and consumption of fossil fuels. Currently fossil fuel production is undergoing an historic transition from a demand-driven world to a supply-driven world. The ability to increase production is increasingly running up against the constraints of the resource base, particularly for petroleum liquids.
    As I demonstrated last year in “Traversing the Mountaintop: World Fossil Fuel Production to 2050” (Phil. Trans. R. Soc. B 2009, 364, 3067-3079), even with what many consider to be highly optimistic assumptions about ultimate world resources of coal, natural gas, and petroleum liquids, world fossil fuel production will peak between 2025 and 2040 at a rate only 18-54% above 2005 world fossil fuel production (compared to 521% growth between 1950 and 2005). After this peak, fossil fuel production and the accompanying CO2 emissions will decline steadily for the rest of the century. By the end of the century, CO2 emissions from fossil fuel production and consumption will be back to the levels of the 1950s and 1960s.
    The IPCC and other projections of runaway CO2 thus depend upon postulated levels of fossil fuel production which are impossible to achieve, given what we know about ultimate world fossil fuel resources.
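
For readers unfamiliar with peaking-production arguments, here is a minimal Hubbert-style logistic sketch of the shape being described above: production rises, peaks when roughly half of an assumed ultimately recoverable resource has been extracted, and then declines. All numbers are illustrative assumptions, not figures from the cited paper.

```python
# Hubbert-style logistic production curve: the derivative of a logistic
# cumulative-production curve.  All numbers are illustrative assumptions.
import math

URR = 1000.0       # assumed ultimately recoverable resource (arbitrary units)
PEAK_YEAR = 2030   # assumed year of peak production
WIDTH = 25.0       # years; controls how broad the peak is

def annual_production(year):
    x = math.exp(-(year - PEAK_YEAR) / WIDTH)
    return (URR / WIDTH) * x / (1.0 + x) ** 2

for year in (1950, 2005, 2030, 2060, 2100):
    print(year, round(annual_production(year), 2))
```

Whether world production actually follows such a curve is of course exactly what is in dispute; the sketch only shows the qualitative rise-peak-decline shape the comment describes.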

  122. Ryan (04:44:42) :
    There is a more obvious reason for disputing that Mauna Loa records increases in the level of CO2 due to industrialisation. The Mauna Loa record shows a smooth curve going back to the 1950s. However, human CO2 output due to industrialisation has not been a smooth curve – consumption of oil after the 1973 and 1979 oil crises actually fell by 50% – yet this point of inflection in the consumption of fossil fuels is not shown at all in the Mauna Loa record.
    Whatever Mauna Loa is showing, it isn’t caused directly by human burning of fossil fuels. Correlation is not causation.

    Ryan, there is little doubt that the increase in the atmosphere is caused by human emissions. See:
    http://www.ferdinand-engelbeen.be/klimaat/co2_measurements.html
    I have looked at the emissions figures for the 1970-1980 period (based on fossil fuel sales): 4.1 GtC in 1970, 4.6 GtC in 1975 and 5.3 GtC in 1980. At most a few years with some slowing of the increase in emissions, but no dip at all.
    While the emissions do not increase smoothly over time, and neither does the natural variability in sink capacity, the overall trend is near-exponential, both for accumulated emissions and for accumulation in the atmosphere:
    http://www.ferdinand-engelbeen.be/klimaat/klim_img/temp_emiss_increase.jpg
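
A quick check of the average growth rates implied by the emission figures quoted in the comment above (taking the 4.1, 4.6 and 5.3 GtC/yr values as given):

```python
# Average annual growth rates implied by the quoted emission figures (GtC/yr)
emissions = {1970: 4.1, 1975: 4.6, 1980: 5.3}
years = sorted(emissions)
for y0, y1 in zip(years, years[1:]):
    rate = (emissions[y1] / emissions[y0]) ** (1.0 / (y1 - y0)) - 1.0
    print(f"{y0}-{y1}: about {rate * 100:.1f}% per year")
```

Roughly 2-3% per year in both halves of the decade, i.e. slower growth after the oil shocks but no absolute decline, which is the point being made.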

  123. Ferdinand Engelbeen+++
    I fear you have not understood our Climate 2009 paper. It is possible to reconstruct background levels from near-ground measurements to within 1% by using vertical profiles and wind speed (or precipitation). So the Giessen data are valid.
    Dear Ernst, I did understand your/Massen 2009 paper all too well. It is possible to reconstruct accurate present-day background levels by using wind speed. I doubt that this is possible for all historical data; much depends on the amount and the quality of the data. The reconstruction of the Giessen data is not valid in my opinion, as the spread of the data at high wind speeds is still very high: 230-530 ppmv for only 20 data points above 4 m/s. Using a calculated asymptote to derive the possible background CO2 level at that time carries a very large margin of error, far beyond the 1% you mention (a sketch of this kind of asymptote fit follows after this comment). The 1% is based on measurements from modern stations like Diekirch, which shows a highly visible “finger”, where the bulk of hundreds of measurements over 4 m/s lies between 350-410 ppmv.
    By the way, another erroneous citation: CO2 levels rose from 1900 for 40 years to a maximum (as Callendar had suggested from 18 sources), very similar to the Mauna Loa rise since 1955. And the reconstructed rise (within 3%) is about 60 ppm, very similar to Mauna Loa in modern times. The Mauna Loa data had been selected to within 0.1 ppm and smoothed, hence the steady shape.
    Even without any selection or smoothing of the Mauna Loa data, the yearly averages and trend are identical to within 0.1 ppmv. Callendar shows a rise of a few tens of ppmv over 1850-1940, based on a priori criteria which exclude stations like Giessen and Poona. His estimated trend was confirmed 50 years later by the ice cores. Without the Giessen and Poona data, there is no CO2 peak around 1940-1945; neither is such a peak visible in any other type of proxy for CO2 levels.
    The vertical profiles where (much) more CO2 is found at altitude are simply physically impossible. There is no sink or source of CO2 in the atmosphere above canopy/stack levels. There may be some delay (of a few months) in vertical mixing, but horizontal mixing takes only a few hours to a few days. If they found much higher levels at altitude, then the measurements are in error. All the more so since the high profiles were found when ground-level CO2 was “normal” (around 310 ppmv), and were lower than ground level during the 1940-1945 “peak”.
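
As a rough illustration of the asymptote approach discussed above, here is a minimal sketch that fits local CO2 measurements against wind speed and reads off the high-wind asymptote as a background estimate. The functional form and the data points are assumptions made up for illustration; this is not the published Massen/Beck procedure, and with few, scattered points the fitted asymptote carries a wide uncertainty, which is the objection raised in the comment.

```python
# Sketch: estimate a "background" CO2 level as the high-wind asymptote of local data.
# The model form CO2(v) = background + a * exp(-b * v) and the data points are
# illustrative assumptions, not the published reconstruction procedure.
import numpy as np
from scipy.optimize import curve_fit

wind = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0])            # m/s (hypothetical)
co2 = np.array([470., 430., 410., 395., 375., 365., 360., 358., 356.])    # ppmv (hypothetical)

def model(v, background, a, b):
    return background + a * np.exp(-b * v)

params, cov = curve_fit(model, wind, co2, p0=(350.0, 150.0, 1.0))
background = params[0]
background_err = np.sqrt(np.diag(cov))[0]
print(f"estimated background: {background:.0f} +/- {background_err:.0f} ppmv (1 sigma)")
```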

  124. Ferdinand Engelbeen (04:03:24),
    Forget Giessen, then. There were also many thousands of CO2 measurements taken during Atlantic, Pacific, Beaufort, Arctic, South Pacific and other ocean crossings. Most showed significantly higher CO2 than the static, unchanging 285 ppmv that Michael Mann claims in his debunked Hokey Stick chart.
    How do you explain that? Did the ships’ smokestack emissions curl around and contaminate the test apparatus? And were the scientists [including several Nobel laureates, at a time when the Nobel prize actually meant something] who took the CO2 readings ignorant of that possibility, or did they take their measurements to windward?
    Also, your comment about using the increase in fossil fuel consumption as a proxy for atmospheric CO2 ignores the portion used for fertilizer, plastics, etc.
    I’m not necessarily arguing that the Mauna Loa readings are wrong; I think they’re at least in the ballpark. But there is a disconnect in the data somewhere. Until it is made clear exactly where, how and why, I’ll tend to question claims that any particular data set is right and all the others are wrong.

  125. Smokey (04:46:34) :
    Forget Giessen, then. There were also many thousands of CO2 measurements taken during Atlantic, Pacific, Beaufort, Arctic, South Pacific and other ocean crossings. Most showed significantly higher CO2 than the static, unchanging 285 ppmv that Michael Mann claims in his debunked Hokey Stick chart.
    All measurements taken over the oceans show minimum values below the ice core values, even though the seaborne series show large variability not seen in modern cruises in the same parts of the oceans. All ice core values (from ice cores with very different accumulation rates and temperatures) are within the range of accuracy of the average ocean cruise values (+/- 10 ppmv), at least in the period 1930-1950, but unfortunately there are no seaborne figures for the peak period 1940-1945. That is not Mann’s hockey stick, by the way (which is about temperature), but the CO2 hockey stick, which is quite real, as opposed to Mann’s.
    Beck made some severe errors by including shipboard values (Meteor cruises in the Atlantic 1925-1927) measured at 0 m depth in seawater and interpreting them as atmospheric values…
    Emissions are based on fuel sales, not oil sales, so plastics (4% of all oil) are not involved, but may end up in the atmosphere some day (thus emissions may be slightly underestimated). The emissions are not a proxy for the atmospheric increase, but the atmospheric increase follows the emissions quite exactly, at roughly 55% of cumulative emissions, over the past 110 years. I don’t know of any natural process which could be responsible for such an exact relationship…
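
A back-of-the-envelope version of the "roughly 55%" figure mentioned above; the cumulative-emission and concentration numbers below are round illustrative values, not figures taken from the comment:

```python
# Rough "airborne fraction" arithmetic: share of cumulative emissions still in the air.
# All inputs are round, illustrative values.
GTC_PER_PPMV = 2.13                # ~2.13 GtC of atmospheric carbon per ppmv of CO2

cumulative_emissions_gtc = 340.0   # illustrative cumulative fossil-fuel emissions, GtC
co2_rise_ppmv = 380.0 - 290.0      # illustrative rise from pre-industrial to ~2009

airborne_gtc = co2_rise_ppmv * GTC_PER_PPMV
print(f"airborne fraction: {airborne_gtc / cumulative_emissions_gtc:.0%}")
```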

  126. Re Ferdinand 13:39:42
    Thanks Ferdinand,
    I had been hoping that the most recent work on Law Dome went further back than 1000 AD, as I had gained the impression that this site had much better conditions, e.g. high snowfall rates and more rapid compaction, than some others.
    In general, I have significant misgivings about tacking on ice core derived CO2 estimates before the modern instrument readings such as from Mauna Loa, just as I would for the Hokey Team doing similar with their supposed proxy derived temperatures with instrumental readings tacked on at the end.
    I’ve a lot of difficulty believing that one isn’t getting effectively what I’ll call a smearing/averaging in the ice-core-derived estimates of CO2, which would then provide an inappropriate comparison with discrete instrumental readings.
    My misgivings are much in the same vein as per link below
    Thoughts of a geologist-geophysicist on climate change and energy forecast (1/3) – FIG Saint Dié – 6 October 2007 (2.5 MB)
    http://tinyurl.com/84x2f3
    With respect to The IPCC Diagram
    IPCC chart of carbon cycle
    http://tinyurl.com/y8s4m6m
    They are trying to differentiate between two periods, i.e. post-1000 AD but pre-industrial, which I’ll call 1000PreI for convenience, and a subsequent industrial period.
    For the 1000PreI period, they have in effect assumed this represented some idyllic period in world history, a utopia before the advent of industrialization, where all the carbon fluxes were in steady state. Of course there must be an overall carbon balance, as we know. However, each of the individual blocks, Terrestrial (Vegetation, Soil and Detritus), Surface Ocean (including Marine Biota), and Deep Ocean, is assumed to be individually in steady state, i.e. with no change in base inventories in the various reservoirs (we even see 0.2 of presumed weathering offsetting 0.2 of sediment deposition from the Deep Ocean to the ocean floor).
    The above is of course simply an assumption. Simply from internal variability, without even considering a human perturbation, there is no guarantee that we would have had a steady state for the period in question, e.g. as per Wunsch’s comments on the duration of long-term oceanic processes.
    Their choices/assumptions for the state of the base period (and the choice of what period they consider base in any event) have a material effect on subsequent calculations because they are initializing them from that point while adding a human industrialization perturbation.
    In short, it’s clear they are guesstimating some numbers and fudging others to achieve their presumed balances in the base case.
    The two numbers I would think have the best accuracy are the instrumental CO2 readings and our total usage of fossil fuels. Beyond that there’s a lot of very rough work.
    cheers
    brent

  127. brent (06:03:17) :
    I had been hoping that the most recent work on Law Dome went further back than 1000 AD, as I had gained the impression that this site had much better conditions, e.g. high snowfall rates and more rapid compaction, than some others.
    Indeed it is a pity that the downslope Law Dome ice core doesn’t go back more than 1,000 years. The high resolution is the result of the high accumulation rate: faster closing of the bubbles, but that also means that you have fewer year layers when you reach the rock bottom…
    I have the impression that Jean Laherrere of the first reference has trouble seeing the differences in time frames for the different items involved. The important point is that the smaller the precipitation at a given ice core, the more smoothing you have of the values over longer time frames, but the farther you can go back in time. The main problem is to know the average gas age, as that can only be measured for the fastest-accumulating ice cores and only modelled for those with minimal accumulation (which even changes over glacials – interglacials).
    The positive news is that CO2 levels of overlapping periods in (calculated) gas age of ice cores with very different accumulation and temperature regimes are all within 5 ppmv of each other. As we have a lot of ice cores with decreasing accumulation rate, where the first period in time is overlapping, that gives confidence that the oldest ice still shows real CO2 values of that period in time. All the more so since the ratio between the CO2 level and the temperature proxy in the ice (dD or 18O) doesn’t change over time; thus there is no (vertical) migration in deep ice, as that would smooth out any CO2 changes.
    For the past 1,000 years, the figures are quite robust, as we have ice cores with only 40 years smoothing and for the past 150 years only 8 years smoothing and an overlap of some 20 years with the South Pole data.
    That the increase is real and caused by humans can be seen in a completely different, independent proxy: coralline sponges grow their calcite layers with CO2 from (bi)carbonate in seawater. The isotopic composition can be measured with a 2-4 year resolution. The accuracy is fine enough to observe a change of 1 GtC from humans (or vegetation decay) or 4 GtC from the deep oceans. The decline in the 13C/12C ratio starts exactly around the same time as human emissions start to increase and ice cores show increasing CO2 levels (which excludes deep-ocean addition as the source of the CO2 increase: that would increase the 13C/12C ratio). That all accelerates over time:
    http://www.ferdinand-engelbeen.be/klimaat/klim_img/sponges.gif
    Before 1850 there are only small variations in 13C/12C ratio, which are also measured in ice cores over glacials-interglacials and during the Holocene.
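
The 13C/12C argument above is a mass-balance argument. A minimal two-component mixing sketch, with generic textbook-style numbers (not values from the comment), shows why adding strongly 13C-depleted fossil carbon pulls the atmospheric δ13C down, whereas a comparable addition from the deep ocean would push it up:

```python
# Two-component delta-13C mixing: the result is the mass-weighted mean of the deltas.
# All numbers are generic illustrative assumptions.
def mixed_delta(mass_a, delta_a, mass_b, delta_b):
    return (mass_a * delta_a + mass_b * delta_b) / (mass_a + mass_b)

air_gtc, air_d13c = 600.0, -6.4    # illustrative pre-industrial atmosphere
added_gtc = 150.0                  # illustrative net carbon addition retained in the air
fossil_d13c = -28.0                # fossil / plant carbon is strongly 13C-depleted
ocean_d13c = 0.0                   # deep-ocean inorganic carbon is near 0 per mil

print("fossil source:", round(mixed_delta(air_gtc, air_d13c, added_gtc, fossil_d13c), 2), "per mil")
print("ocean source: ", round(mixed_delta(air_gtc, air_d13c, added_gtc, ocean_d13c), 2), "per mil")
```

In this crude closed-system sketch the fossil case moves δ13C downward and the deep-ocean case upward, which is the direction of the argument; the real atmosphere also exchanges carbon with the ocean and biosphere, so the observed decline is smaller than such a calculation suggests.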

  128. Ferdinand Engelbeen+++
    You wrote: “All measurements taken over the oceans show minimum values below the ice core values, even though the seaborne series show large variability not seen in modern cruises in the same parts of the oceans.”
    I don’t know why you make simply wrong statements without checking the data. Here is a short list of selected measurements over the sea (method accuracy < 3%) (source:
    http://www.biomind.de/realCO2/stations.htm)
    1936 K. Buch: travel to Spitsbergen, 8 m over the sea surface, 13 samples: 329, 368, 365 ppm
    1935 Y. Kauko: 1, 6km altitude over the Baltic Sea: 361, 375 ppm
    1932 K. Buch: cruise to Iceland; 28 double determinations over the sea surface: e.g. 328, 330, 334 ppm
    1921 B. Schulz: cruise on the Baltic Sea, 8 m over the sea surface, 25 samples: 300, 310, 320 ppm
    1907 J. Lindhard: Greenland, Denmarkshavn, 11 m, 23 samples, normal average: 350 ppm because of upwelling
    1906 R. Legender: Concarneau (France), institute surrounded by the Atlantic Ocean, average July 1907: 303 ppm
    1902 A. Krogh: over Greenland, 59 samples, average 464 ppm because of upwelling
    1890 A. Palmqvist: Maseskaer lighthouse island in the Skagerrak, 38 samples, average 312 ppm
    1888 F. Nansen: crossing Greenland, 308 ppm
    1886 E. Selander: 263 samples in the Baltic Sea at Stockholm; average 300 ppm
    1865 T. Thorpe: 77 samples over the sea surface in the Irish Channel, average 304 ppm.
    Do you want more?
    You can measure the same over the sea surface today, upwelling included.
    You wrote: “Beck made some severe errors by including shipboard values (Meteor cruises in the Atlantic 1925-1927) measured at 0 m depth in seawater and interpreting them as atmospheric values…”
    I have made no errors: I presented the Wattenberg sea data but did not use them for calculations because they are seawater values.
    Why do you constantly spread propaganda about my research, Ferdinand? Please write a paper and contradict the historical data. Please do it.
    regards
    Ernst Beck

  129. Dear Ernst,
    What I have written is true: modern cruises measure CO2 levels in seawater and in the air above it. The measurements in air over the oceans don’t show much variation worldwide, a maximum of 10 ppmv from near the North Pole to Antarctica. Thus either there was much more (real) variability in the past, or the measurements were less accurate (less in the methods than in the sampling).
    But let us begin with the first graph at http://www.biomind.de/realCO2/stations.htm :
    Practically all red dots between Africa and South America are from the 1925-1927 Meteor cruises, which did not measure air samples, only a lot of ocean samples from the surface down to near the bottom (quite a feat for that time!). Thus even if you no longer use these for the air trends (they were used in 2007, as we discussed at the time), they shouldn’t be on the graph of stations without mentioning that these are (deep) ocean samplings only.
    Then the other data:
    I have looked especially at the 1930-1950 data, as these show the most recent peak in your assessment, around 1942. Seaborne atmospheric data before 1930 are also mostly around the ice core data (around 305 ppmv).
    1932 K. Buch: cruise to Iceland; 28 double determinations over the sea surface: e.g. 328, 330, 334 ppm
    http://www.biokurs.de/treibhaus/literatur/buch/buch1948.doc
    The full measurement range is 297-335 ppmv, average of all samples: 317 ppmv. Accuracy +/- 10 ppmv (1 sigma). Siple Dome ice core: about 306 ppmv. Accuracy +/- 3 ppmv (1 sigma).
    1935 Y. Kauko: 1, 6km altitude over the Baltic Sea: 361, 375 ppm
    From the above document:
    The average atmospheric CO2 at sea level over the North Atlantic Ocean in 1935 is 322 ppmv (table 2 is missing from the above document, so no range is known). It is physically impossible that the air at 6 km anywhere on earth has higher values, as in previous years no average higher values were measured. Siple core: about 307 ppmv.
    1936 K. Buch: travel to Spitsbergen, 8 m over the sea surface, 13 samples: 329, 368, 365 ppm
    From the above document:
    The full measurement range is 152-368 ppmv, average of all samples: 278 ppmv. Siple Dome ice core: about 307 ppmv.
    Thus, as far as the accuracy, the quality control of equipment, methods and reagents, and the skill of the people involved allow, the ice core values are within the range of the measurements taken over the oceans.
    The use of land-based data was already disputed by R. Meyer and R. Keeling in E&E, so there is no need for me to repeat them. That you don’t accept that land-based data are so contaminated that they can’t be used for estimates of historical background CO2 levels is up to you. But other people disagree with you on that point, including me. That is why I will react to your averaged trend everywhere I see it, as it gives – in my opinion – a false impression of the reality of CO2 levels in the not so distant past.

  130. wrt Ferdinand Engelbeen (09:46:28) :
    Thanks Ferdinand
    I still have a lot of trouble with this whole mess. However, I should spell out for you exactly what the basis is for my disagreement with the politically correct agenda. First, the incantation “climate change is real” is, to me, about equivalent to yelling “Allahu Akbar”. Anthropogenic climate change is real too. Even if we had one human on the planet, that human would affect the environment “in the limit”. So would a moose, or even a mosquito. This is trivial^10.
    What matters is whether there is a basis for alarm. Lindzen has it right in that regard. The climate change alarmism is propaganda.
    http://ceiondemand.org/2009/10/26/cooler-heads-event-with-dr-richard-lindzen-on-cap-and-trade/
    Climate change has been going on at all timescales ever since the earth had an atmosphere. And it’s very difficult to conceive of CO2 as a primary driver of change when one looks at the geological record.
    CO2 vs temperatures
    http://www.geocraft.com/WVFossils/PageMill_Images/image277.gif
    Over geological time, we’ve had volcanic outgassing adding CO2 to the atmosphere, with the oceans and the terrestrial biosphere net-sequestering it, so we are in about the most CO2-lean situation in geological time, even in spite of the modest increase lately. There have been a lot of ups and downs of course, but if CO2 were the major driver of climate that the alarmists tell us it is, that claim would have to fit the geologic record. Such a claim is most certainly unsubstantiated.
    One of the reasons I wanted to go back 20k years was that I live in southern Ontario (Canada). It was only about 12-14k years ago that the last of the ice sheets from the last glaciation receded where I live (obviously I know climate change is real, and I hope we don’t flip back into a glacial phase anytime soon : ) )
    At the last glacial maximum about 20k years ago, all of Canada, a bit of the northern US, most of the UK, Scandinavia, and a bit of northern Russia were under the ice sheets.
    Wisconsin Glaciation
    http://www.thecanadianencyclopedia.com/index.cfm?PgNm=TCE&Params=A1ARTA0003271
    Since that time, the area formerly under the ice sheets has been re-vegetated, plus there has been a buildup of organic material, e.g. peat, since that time. I don’t know what the net sequestration has been over that area, but it must be at least some appreciable number.
    I’d like to see a purported carbon balance with a base case of “20k BC to the industrial era”. Let the alarmists then try to con the rest of us that the base case was in some sort of steady state : )
    20k years of course is not even a speck in geologic time.
    Just one of the many nonsensical statements from the alarmists is that peat bogs may oxidize, releasing their stored carbon as CO2 to the atmosphere (to say nothing about the hysteria about methane), and of course we will all fry, they tell us : (
    I’ve certainly come to expect nonsense from the MSM, but we also get similar nonsense from the so-called “peer reviewed literature”. I’ve seen articles in the “peer reviewed literature” where they try to calculate short-term fluxes related to peat bogs, and they clearly cannot see the forest for the trees. In the very short term the bogs may show either net offgassing or sequestration, but the short-term variation is so high that there is negligible possibility of divining long-term trends from short-term ones. If one takes a step back to see the forest, it’s clear that there has been net sequestration.
    Cold climate peat formation in Canada, and its relevance to Lower Permian coal measures of Australia
    Canada is a vast, cold country. Large expanses of wetlands occur whose characters change predictably over continental distances along north-south temperature gradients and coastal-inland precipitation trends. The changing climatic conditions during the Holocene have influenced the initiation and rates of peat formation. For instance, most of the peats of south-central Canada are not older than 5000 years B.P. Those in the Arctic regions are fossil deposits of the hypsithermal times of 7–8000 years B.P., maintained at the surface because of the low rates of peat oxidation
    http://tinyurl.com/yelsecy
    Occasionally (and all too rarely) someone at least attempts to make sense of all the nonsense, as per the effort linked below:
    Feedback control of the rate of peat formation
    The role of peatlands in the global carbon cycle is confounded by two inconsistencies. First, peatlands have been a large reservoir for carbon sequestered in the past, but may be either net sources or net sinks at present. Second, long-term rates of peat accumulation (and hence carbon sequestration) are surprisingly steady, despite great variability in the short-term rates of peat formation. Here, we present a feedback mechanism that can explain how fine-scale and short-term variability in peat-forming processes is constrained to give steady rates of peat accumulation over longer time-scale
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1088743/
    Getting back to that IPCC carbon balance cartoon
    IPCC chart of carbon cycle
    http://tinyurl.com/y8s4m6m
    Notice their calculation steps. They quote the following from the caption to their cartoon:
    “The net terrestrial loss of –39 GtC is inferred from cumulative fossil fuel emissions minus atmospheric increase minus ocean storage. The loss of –140 GtC from the ‘vegetation, soil and detritus’ compartment represents the cumulative emissions from land use change (Houghton, 2003), and requires a terrestrial biosphere sink of 101 GtC”
    They show a net terrestrial LOSS of 39 GtC, the sum of +101 (the final number calculated/inferred to achieve overall carbon balance) and -140 (the figures in red for the terrestrial reservoir). However, their flux arrows show a flux down from the atmosphere into the terrestrial sink of 2.6 GtC/yr, and a flux upward (land use changes) of 1.6 GtC/yr. According to their flux arrows there should be a net GAIN in the terrestrial reservoir, not a net LOSS (the arithmetic is laid out after this comment).
    Considering the amount of absolute rubbish I’ve seen for purported calculations of upward fluxes from land use changes (including but “NOT” limited to peat bogs), although I’ve not reviewed Houghton (2003), I’ve no confidence whatsoever in whatever he would say, and little inclination to wade through more of the alarmists’ malarkey.
    Very Warm Regards
    brent
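
For reference, the two pieces of arithmetic quoted from the figure caption in the comment above, laid out explicitly (the numbers are simply those quoted; whether the cumulative stock change and the annual flux arrows can be reconciled is the point under dispute):

```python
# The two calculations quoted from the IPCC figure caption in the comment above.
# Cumulative terrestrial stock change (GtC):
land_use_change = -140.0   # cumulative loss from 'vegetation, soil and detritus'
residual_sink = 101.0      # inferred terrestrial biosphere sink
print("net cumulative terrestrial change:", round(land_use_change + residual_sink, 1), "GtC")

# Annual fluxes shown by the arrows (GtC/yr):
atmosphere_to_land = 2.6
land_use_flux = 1.6
print("net annual terrestrial flux:", round(atmosphere_to_land - land_use_flux, 1), "GtC/yr")
```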

  131. Ferdinand Engelbeen+++
    You wrote:
    “It is physically impossible that the air at 6 km anywhere on earth has higher values, as in previous years no average higher values were measured.”
    Ferdinand, you are the real follower of Callendar. In 1935 Kauko measured a vertical profile using the best gas analyser of that time, better than Keeling’s 20 years later. Several times > 350 ppm during the year at different altitudes.
    Now we know your message: measurements are physically impossible because of your predefined thesis. That is your problem: you don’t accept measurements and prefer erroneous ice core data, which cannot resolve the 20th century because of porous firn.
    Is that Engelbeen’s science?
    Look out for my next paper; that will be fodder for you.
    regards
    Ernst

  132. Ernst Beck (02:13:25) :
    Dear Ernst,
    According to your own near-ground values, for the period 1927-1935 the variability of averages in CO2 levels is 270-343 ppmv, with one exception, a measurement site on the Ayrshire coast in 1935 at 370 ppmv (other sites in 1935 show lower values within the above range).
    (see http://www.biokurs.de/treibhaus/180CO2/08_Beck-2.pdf )
    According to the same work, measurements in the higher atmosphere and stratosphere in the period 1928-1935 were between 360 and 800 (!) ppmv, while the previous highest ground values since 1850 were all below 400 ppmv and most were below 350 ppmv. This proves beyond doubt that the balloon measurements were unreliable, as there is no source of CO2 in the higher atmosphere, and atmospheric mixing nowadays leaves less than 5 ppmv difference across 95% of the atmosphere within little more than a year (and there is no reason to assume that this was different 50-150 years ago).
    The difference between you and me is that I try to understand what the value of the old measurements is in light of other findings, both historic and modern, while you accept any old measurement as equally valuable, without looking at what is physically possible and/or at the possible errors in the analysis and collection of the samples.
    Btw, two of the Law Dome ice cores have an accuracy of 1.3 ppmv (1 sigma) and an 8-year resolution of CO2 measurements over the past 150 years, including a 20-year overlap (1960-1980) with the atmospheric CO2 data at the South Pole. That is sharp enough to detect any peak of over 20 ppmv around 1943, but no such peak shows up in the data (nor in any other CO2-related data)…
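
As a crude check on the detectability claim above: a hypothetical ~20 ppmv excursion centred on 1943, passed through a simple 8-year moving average as a rough stand-in for firn smoothing (the real gas-age distribution is more complicated), is attenuated but far from erased:

```python
# Would 8-year smoothing hide a hypothetical ~20 ppmv excursion around 1943?
# A moving average is a crude stand-in for firn smoothing; numbers are illustrative.
import numpy as np

years = np.arange(1930, 1961)
baseline = 310.0 + 0.2 * (years - 1930)                        # slow illustrative rise
spike = np.clip(20.0 - 4.0 * np.abs(years - 1943), 0.0, None)  # hypothetical excursion
co2 = baseline + spike

window = np.ones(8) / 8.0
smoothed_co2 = np.convolve(co2, window, mode="valid")
smoothed_base = np.convolve(baseline, window, mode="valid")

print("excursion before smoothing:", round(spike.max(), 1), "ppmv")
print("excursion after smoothing: ", round((smoothed_co2 - smoothed_base).max(), 1), "ppmv")
```

In this sketch roughly half the excursion survives the smoothing, which illustrates why an 8-year-resolution record would be expected to register such a peak if it existed.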

  133. Could it be that life itself has one of the greatest influences on climate on planet earth, and is totally ignored in all the models?
    Maybe I’ve missed something here but it sure seems to me that the climate on planet earth would be a whole lot different if the life suddenly vanished.
    Could it be that life has co-evolved to regulate the climate on earth? Previous life forms that could not regulate the climate died out due to excessive climate variation. Life forms that were able to regulate the climate thrived and replaced them.
    This could have started out on a small scale: life that was able to influence the local climate sufficiently to increase its chances of survival. This would have given it a competitive advantage, allowing it to eventually spread across the planet.

Comments are closed.