What can we learn from the Mauna Loa CO2 curve?

Guest post by Lance Wallace

The carbon dioxide data from Mauna Loa is widely recognized to be extremely regular and possibly exponential in nature. If it is exponential, we can learn about when it may have started “taking off” from a constant pre-Industrial Revolution background, and can also predict its future behavior. There may also be information in the residuals—are there any cyclic or other variations that can be related to known climatic oscillations like El Niños?

I am sure others have fitted a model to it, but I thought I would do my own fit. Using the latest NOAA monthly seasonally adjusted CO2 dataset running from March 1958 to May 2012 (646 months) I tried fitting a quadratic and an exponential to the data. The quadratic fit gave a slightly better average error (0.46 ppm compared to 0.57 ppm). On the other hand, the exponential fit gave parameters that have more understandable interpretations. Figures 1 and 2 show the quadratic and exponential fits.


Figure 1. Quadratic fit to Mauna Loa monthly observations.


Figure 2. Exponential fit
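For readers who want to reproduce fits like those in Figures 1 and 2 outside Excel, here is a minimal sketch using scipy's `curve_fit`. The data below are synthetic (an exponential plus noise standing in for the NOAA file linked at the end of the post), and the model forms and parameter names are my own choices, not necessarily the exact parameterization behind the figures:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for the 646-month Mauna Loa series (the real data
# would come from the NOAA file cited at the end of the post).
t = np.arange(646) / 12.0 + 1958.2            # decimal years, Mar 1958 onward
rng = np.random.default_rng(0)
co2 = 260.0 + 55.0 * np.exp((t - 1958) / 59.0) + rng.normal(0, 0.5, t.size)

def quadratic(t, a, b, c):
    return a + b * (t - 1958) + c * (t - 1958) ** 2

def exponential(t, base, amp, tau):
    return base + amp * np.exp((t - 1958) / tau)

pq, _ = curve_fit(quadratic, t, co2)
pe, _ = curve_fit(exponential, t, co2, p0=(280, 40, 50))

# Mean absolute error of each fit, analogous to the average errors
# quoted in the post (0.46 ppm quadratic vs 0.57 ppm exponential).
mae_q = np.mean(np.abs(co2 - quadratic(t, *pq)))
mae_e = np.mean(np.abs(co2 - exponential(t, *pe)))
```

With three free parameters in each model, both fits track a gently curved series closely, which is why the two average errors come out so similar on the real data.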

 

From the exponential fit, we see that the “start year” for the exponential was 1958 − 235 = 1723, and that in and before that year the predicted CO2 level was 260 ppm. These values are not far off the estimated level of 280 ppm that held until the Industrial Revolution. It might be noted that Newcomen invented his steam engine in 1712, although the start of the Industrial Revolution is generally considered to be later in the century. The e-folding time (for the incremental CO2 levels > 260 ppm) is 59 years, corresponding to a doubling time of 59 × ln 2 ≈ 41 years.

The model predicts CO2 levels in future years as in Figure 3. The doubling from 260 to 520 ppm occurs in the year 2050.


Figure 3. Model predictions from 1722 to 2050.
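As a quick check on that doubling date, it can be worked out from the fitted parameters directly. The 315 ppm figure for 1958 is the approximate Mauna Loa annual mean, not a fitted quantity:

```python
import math

tau = 59.0        # e-folding time from the exponential fit (years)
base = 260.0      # fitted pre-industrial baseline (ppm)

# Doubling time of the increment above the baseline:
doubling_time = tau * math.log(2)            # ~41 years

# The 1958 annual mean was roughly 315 ppm, i.e. an increment of
# ~55 ppm above the baseline; the level reaches 520 ppm when the
# increment reaches 260 ppm:
inc_1958 = 315.0 - base
doubling_year = 1958 + tau * math.log(260.0 / inc_1958)
print(round(doubling_year))                  # 2050
```

The answer lands on 2050, agreeing with the model projection in Figure 3.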

The departures from the model are interesting in themselves. The residuals from both the quadratic and exponential fits are shown in Figure 4.


Figure 4. Residuals from the quadratic and exponential fits.

Both fits show similar cyclic behavior, with the CO2 levels higher than predicted from about 1958-62 and also 1978-92. More rapid oscillations with smaller amplitudes occur after 2002. There are sharp peaks in 1973 and 1998 (the latter coinciding with the super El Niño). Whether the oil crisis of 1973 has anything to do with this I can’t say. For persons who know more than I do about decadal oscillations, these results may be of interest.

The data were taken from the NOAA site at ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_mlo.txt

The nonlinear fits were done using Excel Solver and placing no restrictions on the 3 parameters in each model.

Lance Wallace
June 4, 2012 4:15 pm

I expect this post is well past its natural lifetime. One last comment. Integrating the exponential provides an estimate of the total amount of CO2 contributed to the globe since the rise began (around 1720, as predicted by the fit). By 1958, the total increase was 37,000 ppm-months. Conveniently enough, the super El Niño of 1998 was almost exactly one doubling time further on, and the total was 74,000. Now the total is 94,000. That is, since the temperature flattened out 14 years ago, the atmosphere has gained 21% of all the CO2 added to it in the last 200 years. Yet the temperature is not cooperating. Where is the temp-CO2 correlation now?
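The integration described above can be sketched as follows. The closed-form integral of the fitted increment A·e^((t−1958)/τ) from the distant past up to a given year is A·τ·e^((t−1958)/τ); here A ≈ 55 ppm is my assumed 1958 increment, so the absolute totals come out slightly above those in the comment, but the 21% ratio is independent of A:

```python
import math

tau = 59.0       # e-folding time (years)
inc_1958 = 55.0  # assumed increment above the 260 ppm baseline in 1958 (ppm)

def cumulative_ppm_months(year):
    """Integral of the exponential increment up to `year`, in ppm-months."""
    return inc_1958 * tau * math.exp((year - 1958) / tau) * 12.0

c1958 = cumulative_ppm_months(1958)   # ~39,000 (comment quotes ~37,000)
c1998 = cumulative_ppm_months(1998)   # roughly double c1958, one doubling time on
c2012 = cumulative_ppm_months(2012)

# Fraction of all CO2 added since ~1720 that arrived after 1998:
frac = (c2012 - c1998) / c2012        # ~0.21, matching the 21% in the comment
```

The 21% figure falls out as 1 − e^(−14/59), so it depends only on the 14 elapsed years and the 59-year e-folding time.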

Lance Wallace
June 4, 2012 4:19 pm

Well, actually the increase was 27% of the total amount added by 1998, or 21% of the total to date.

June 4, 2012 5:32 pm

Bart,
The ice core data indicates global temperatures fluctuate in several cycles of varying length. That 200 year cycle observed in the CO2 data could well be controlled by a 200 year temperature cycle with an unknown lag time. Some of the shorter less significant cycles are very likely temperature controlled. A cycle around 20 years is possibly related to ENSO.

Editor
June 4, 2012 6:05 pm

P. Solar says:
June 4, 2012 at 12:24 am (emphasis mine)

Willis :

“In reality, nothing in nature continues to grow exponentially.”

Your point about many curves matching such a short segment is fair, but the whole AGW debate is about a change that is NOT natural: economic growth (and hence fossil fuel usage) has been growing exponentially since the 60s (about 2% per year).
So I don’t see why you suggest a sigmoid, which would correspond to constant emissions. No sign of that happening in the near future. 🙁

Not even close, P. Solar, not even close.

Source
Note that per capita emissions have been dropping faster than total emissions, and that the population growth has been slowing down, and that population is due to stabilize by around 2050 …
Please, people, do your homework. It saves you from a host of miseries, not the least of which is people publicly correcting your errors.
w.

Reply to  Willis Eschenbach
June 5, 2012 6:34 am

Willis,
Your input/output model should produce the best fit. Basically, it is what Ferdinand uses. The problem is that it does not separate natural from anthropogenic emissions. Ferdinand assumes that natural net input/output is in “dynamic equilibrium”, thus the net accumulation is all anthropogenic. Bart says that the natural changes in the input/output are so much greater than anthropogenic emissions that they make little difference. My analysis indicates that, at present anthropogenic emission rates, they are statistically significant, but account for less than 10% of the accumulation. http://www.retiredresearcher.wordpress.com.

gnomish
June 4, 2012 6:57 pm

http://www.nist.gov/data/PDFfiles/jpcrd427.pdf
solubility of co2 in water.
something i find quite odd- they rely on models a whole lot.
why would they do that when actual experiment is so easy and gives actual data?

Editor
June 4, 2012 7:05 pm

Lance Wallace says:
June 4, 2012 at 3:20 am

Willis Eschenbach says:
June 3, 2012 at 11:41 pm

“I hate to say it, but this analysis is meaningless. You can’t just fit a curve to something and extend it, that’s the kind of thing that the AGW alarmists do.”

Ouch! You really know how to hurt a guy, Willis. I’m hardly defending what I did in a lighthearted way for an hour or two a couple of days ago; it was just that the fit resulted in a rather good estimate of both the rough time of the beginning of the rise (some 200 years ago) and the rough level of the background CO2 level (about 260 ppm).

My apologies, Lance, I was commenting on the analysis. The problem is that bad science done by skeptics diminishes the reputation of the blog as well as the reputation of all skeptics. In addition, I get busted all the time for not commenting on skeptical papers, since most of my analyses are of AGW supporting papers. So I try to come down equally hard on both sides.
The problem is that the rise in CO2 is just a bit off of linear, just slightly curved. As a result, you can fit a whole host of curves to it. If you do the statistics, you’ll see that the difference in how good the fit is tends to be very small. As a result, we have absolutely no information that would allow us to pick one curve over the other.
If you want to fit a curve, my suggestion is that you fit a curve with some physical meaning to it. The amount of CO2 remaining in the air can be very closely modeled by a sink which sequesters a few percent of the atmospheric excess amount each year.
For example you can use a time step equation
A(t) = E(t) + 0.968 * A(t-1)
where A(t) is the amount of the emissions remaining in the atmosphere in year t, E(t) is emissions in year t, and A(t-1) is the amount remaining in the atmosphere in year t-1.
To convert from ppmv to gigatonnes of carbon (GtC), multiply by 2.1838. This assumes a pre-industrial CO2 level in 1850. I’ve put an Excel spreadsheet up here to show how it can be done.
Using that, you can get a pretty good read on what will happen under various future scenarios, you put in the emissions, it will tell you the airborne CO2 concentration. I’ve included an example of freezing emissions at the 2005 level, you can put in what you want.
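The recursion described above is simple enough to sketch in a few lines of Python (the function name and the constant-emissions example are mine; the linked spreadsheet presumably does the same arithmetic):

```python
def airborne_series(emissions_ppmv, retention=0.968):
    """Step the sink model A(t) = E(t) + 0.968 * A(t-1).

    emissions_ppmv: annual emissions in ppmv (GtC divided by ~2.1838);
    returns the atmospheric excess above pre-industrial, year by year."""
    excess = 0.0
    out = []
    for e in emissions_ppmv:
        excess = e + retention * excess
        out.append(excess)
    return out

# With emissions frozen at a constant rate e, the excess converges to
# e / (1 - 0.968), i.e. about 31 times the annual emission:
steady = airborne_series([1.0] * 500)[-1]   # ~31.25 ppmv for e = 1 ppmv/yr
```

Note the design consequence: because the sink removes a fixed percentage of the excess, frozen emissions produce a plateau rather than endless growth, which is why this model behaves like a sigmoid under constant-emission scenarios.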
All the best,
w.

Allan MacRae
June 4, 2012 7:38 pm

Allan MacRae says: June 4, 2012 at 4:05 am
First, you totally miss the point of the urban CO2 readings – it’s about Ferdinand’s mass balance argument, which fails not only on a seasonal basis but even on a daily basis, imo.
FerdiEgb says: June 4, 2012 at 6:26 am
The mass balance must always be obeyed, no matter what happens where. But that is only calculable on a yearly basis, as we only have yearly inventories of the emissions. Urban readings are in any case irrelevant for the mass balance, as are all readings in the lowest few hundred meters above land. That represents only 5% of the air mass, where the CO2 is not well mixed due to a lot of local sources and sinks. In the rest of the global air mass, the yearly averaged measurements are all within 2 ppmv for the same hemisphere and 5 ppmv between the hemispheres, where the SH lags the NH but the trends are exactly the same:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends_1995_2004.jpg
_____________
You are missing the point, Ferdinand. The SLC urban CO2 readings show that even at the typical SOURCE of manmade CO2 emissions (the URBAN environment), the natural system of photosynthesis and respiration dominates and there is NO apparent evidence of a human signature. If your premise were correct, you would see CO2 peaks at breakfast and supper times and the proximate (in time) morning and evening rush hours, when power demand and urban driving are at their maxima. This human signature is absent in the SLC data, and yet the natural signature is clearly apparent and predominant.
Similarly, in the AIRS animation I posted earlier, there is NO human signature and the power of nature is clearly evident. Here it is again.
http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4
These are huge natural DYNAMIC systems that are apparently NOT impacted by the relatively small human contribution. Sadly, Nature apparently just ignores our humanmade CO2 emissions, as irritating as that must be for you.
I know you have made up your mind on this point Ferdinand, and nothing will shake your belief. Try watching the George Carlin video again – George gets it. 🙂

Allan MacRae
June 4, 2012 8:00 pm

Here are a few more references on C13/12:
There are many more such references our there, Ferdinand – but no doubt you think Murry Salby is totally out of his depth too, just like Roy Spencer.
http://wattsupwiththat.com/2012/04/19/what-you-mean-we-arent-controlling-the-climate/
http://wattsupwiththat.com/2011/08/05/the-emily-litella-moment-for-climate-science-and-co2/
Here is your dilemma Ferdinand:
CO2 lags temperature at all measured time scales, and yet you insist that CO2 drives temperature.
You may be right and I may be wrong, but please explain to me again how the future can cause the past.

Brian H
June 5, 2012 1:50 am

P. Solar;
Both those images work fine for me. But you misspelled “inferred” on the title line(s).
>:)

June 5, 2012 1:51 am

Bart says:
June 4, 2012 at 3:54 pm
Sure it does. You forgot to integrate, since it is the CO2 rate of change which is proportional to temperature.

And that is the problem:
The rate of change is indeed proportional to the temperature (change). But temperature nearly completely explains the variation in the rate of change, not the whole rate of change.
Where it goes wrong is that, by scaling and offsetting the temperature, you attribute the total rate of change to temperature, including the part that is introduced by the scaling and offset. But there is nothing that allows you to attribute the bulk of the rate of change to temperature: even if you detrend the whole bunch (and the integral is essentially zero), the correlation between temperature (change) and the variation in the rate of change remains the same.
See:
http://esrl.noaa.gov/gmd/co2conference/pdfs/tans.pdf
page 14 and following,
Thus in my informed opinion, the bulk of the rate of change is caused by human emissions (at about twice the rate of change) while the variability of the rate of change is caused by temperature changes, (near) completely independent of each other.
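The distinction drawn above — temperature explaining the wiggles but not the trend — can be illustrated numerically. In this synthetic sketch (all numbers invented for illustration), the rate of change is built from an emissions-like trend plus a temperature-driven wiggle; the correlation with temperature survives detrending even though the trend itself owes nothing to temperature:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600                                        # months
temp_anom = np.sin(np.linspace(0, 20 * np.pi, n)) + rng.normal(0, 0.2, n)
trend = np.linspace(0.05, 0.18, n)             # emissions-driven part (ppm/month)
dco2 = trend + 0.05 * temp_anom + rng.normal(0, 0.01, n)

def detrend(x):
    """Remove the best-fit straight line from a series."""
    t = np.arange(x.size)
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

r_raw = np.corrcoef(temp_anom, dco2)[0, 1]
r_detrended = np.corrcoef(detrend(temp_anom), detrend(dco2))[0, 1]
# r_detrended stays high: temperature explains the variation around the
# trend, while the trend itself was put in independently.
```

In this construction the detrended correlation is actually higher than the raw one, which shows why a good wiggle match by itself cannot settle who owns the trend.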

FerdiEgb
June 5, 2012 3:23 am

Leigh B. Kelley says:
June 4, 2012 at 3:30 pm
The 8 ppmv/°C is based on the Vostok ice core, recently confirmed by the 800 kyr Dome C ice core:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/Vostok_trends.gif
The bulk of the variability around the trend is caused by the variable lag of CO2 vs. temperature: about 800 years during a glacial-interglacial transition, but several thousands of years during the opposite transition (therefore more deviation at the upper side than at the lower side). I didn’t compensate for the lags; doing so should have given an even better fit.
CO2 is already well mixed within a few years, thus as Vostok is a mixture of (about 600) years, that is no problem. The temperature is a proxy: either hydrogen (dD) or oxygen (d18O) isotopes are used. The origin of the isotope changes is mainly in the sea water surface temperature of where the water vapour of the clouds/snow/ice of the core originated and partly the temperature at the condensation place and the freezing pace. For more coastal ice cores like Law Dome, the bulk of the vapour originates from the nearby Southern Ocean, while for the high altitude, inland ice cores like Vostok and Dome C, most originates from a wide area all over the SH. Thus in general, the temperature proxy of Vostok reflects the whole SH oceans…
There may be some problems if the NH showed a different behaviour, but besides some shifts in the start and other episodes of the glacial/interglacial events, the NH behaves quite similarly to the SH.
Another confirmation of the around 8 ppmv/°C is in Law Dome:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/law_dome_1000yr.jpg
There is an approximately 6 ppmv drop in CO2 around 1600, at the coldest part of the LIA. Law Dome cannot be used as a global temperature proxy (it is regional, but see the current discussion at Climate Audit), so we need to compare that to one of the “spagetthy” reconstructions of global temperature over the past 1,000 years. I compared it to those with the highest MWP-LIA difference (Mann’s HS has the lowest difference) like Esper, Moberg,… which show a change of ~8°C over the time span of interest, which brings us again to around 8 ppmv/°C.
There are several other proxy ranges for the CO2/temperature ratio, the full range is, if I remember well some 4-20 ppmv/°C. But it seems to me that the ice cores in this case gives the best, at least hemispheric answer.
A constraint on the upper bound is the change introduced by ocean warming: any warming of the ocean surface (including the upper 700 m, the “mixed layer”) gives according to Henry’s Law an increase of 16 microatm of the pCO2 at the surface. Thus an increase of ~16 ppmv in the atmosphere is sufficient to compensate for that. But as at the other side vegetation works harder at higher temperatures (and increased precipitation), the average increase would be lower when everything again is in dynamic equilibrium…

Allan MacRae
June 5, 2012 5:15 am

Ferdinand Engelbeen says: June 5, 2012 at 1:51 am
“Thus in my informed opinion, the bulk of the rate of change is caused by human emissions (at about twice the rate of change) while the variability of the rate of change is caused by temperature changes, (near) completely independent of each other.”
____________
Comment to Bart:
It is possible that Ferdinand is correct and I am wrong (but I really doubt that).
Ferdinand is correct in that the short-cycle derivative dCO2/dt apparently does not significantly impact the trend of the overall CO2 versus temperature relationship – it just explains the “wiggles” in that trend.
The question is what primarily causes what – does atmospheric CO2 drive temperature or does temperature drive CO2? Do current humanmade CO2 emissions significantly increase atmospheric CO2, or are they “lost in the noise” of the much larger dynamic natural system?
My contention is, adapting Ferdinand’s wording:
“ the bulk of the rate of change is NOT caused by human emissions, BUT IS A RESULT OF ONE OR MORE LONGER-TIME NATURAL TEMPERATURE CHANGE CYCLES, CONSISTENT WITH the SHORT-TIME-CYCLE variability of the rate of change THAT IS ALSO caused by temperature changes.”
I prefer my hypo because
1. My hypo is more consistent with Occam’ s Razor – whereas Ferdinand’s hypo requires opposing trend directions at different time scales in the system, mine does not, such that all trends are consistently in the same direction (temperature drives CO2) at all time scales.
2. My hypo is consistent with the fact that CO2 lags temperature at all measured time scales, from an ~800 year lag on the longer time cycle as evidenced in ice cores, to a ~9 month lag on the shorter time cycle as evidenced by satellite data.
3. I have yet to see evidence of a major human signature in actual CO2 measurements, from the aforementioned AIRS animations to urban CO2 readings ( although I expect there are local data that I have not seen that do show urban CO2 impacts, particularly in winter and locally in industrialized China.)
The impacts on humanity of these two opposing hypotheses are significant:
If Ferdinand’s hypo is correct, we will likely see a little more global warming – not the catastrophic warming of the IPCC scenarios (driven by ridiculously high “climate sensitivity” and positive feedback assumptions), but a modest warming that will actually be (net) beneficial to humanity and the environment, imo.
If I am correct in my overall assessment (and not just this hypo), we are likely to see some global cooling, which may be moderate or severe. Historically, humanity has done very poorly during periods of severe global cooling. For example, many millions starved in Northern countries circa 1700, during the depths of the Little Ice Age and the Maunder Minimum.
The implications of the current obsession with global warming mania is that, ironically, society will be unprepared should a period of global cooling occur.
During the last period of global cooling, tens of thousands of innocent people were burned as witches, in many cases because they were accused of causing the cold weather that devastated crops and resulted in widespread starvation.
If there is another period of severe global cooling, I would not like to be one of the many climate scientists who has profited from stoking the fires of global warming hysteria.

Joachim Seifert
Reply to  Allan MacRae
June 5, 2012 10:37 am

Allan, you are right, 100%….. My new paper on 4 long-term global
warming/cooling mechanisms will show it over a 10,000-year
time frame….It is all just a matter of a few more months…JS

richard verney
June 5, 2012 5:54 am

Shyguy says:
June 3, 2012 at 12:26 am
Looks to me like the co2 records got corrupted just like everything else the ipcc gets its hands on.
Dr. Tim Ball explaining:
http://drtimball.com/2012/pre-industrial-and-current-co2-levels-deliberately-corrupted/
///////////////////////////////////////////////////////////
I am sceptical of the reasons justifying the ignoring of this old experimental data and what it tells us of 19th and early 20th century CO2 levels.
It would be interesting to repeat those old experiments using the same location, same time of year, same equipment and same methodology etc and see what results are achieved today.

Brian H
June 5, 2012 6:02 am

FerdiEgb says:
June 5, 2012 at 3:23 am
“spagetthy”

Not even close. Spaghetti.

FerdiEgb
June 5, 2012 6:57 am

Leigh B. Kelley says:
June 4, 2012 at 3:30 pm
Sorry, mistake on the MWP-LIA difference which was 0.8°C in several reconstructions, lucky for our ancestors (and us), not 8°C…

FerdiEgb
June 5, 2012 7:53 am

Allan MacRae says:
June 4, 2012 at 7:38 pm
You are missing the point, Ferdinand. The SLC urban CO2 readings show that even at the typical SOURCE of manmade CO2 emissions (the URBAN environment), the natural system of photosynthesis and respiration dominates and there is NO apparent evidence of a human signature.
Depends where and when you measure… Mauna Loa had problems with local traffic CO2 when that increased over the years, until they banned all traffic there. And have a look at the data from Diekirch (Luxemburg), in a shielded valley with forests + urban + small factories:
http://meteo.lcd.lu/papers/co2_patterns/co2_patterns.html
Especially Fig. 12 for the differences between Sunday and weekday peaks during rush hour…
Of course the human signal is small (about 3%) compared to the diurnal and seasonal fluxes. But that is only important if the natural fluxes are in imbalance and add or subtract some net amount of CO2. Well, it is proven that nature as a whole subtracts CO2 from the atmosphere: each year about half, in quantity, of what humans emit. Thus momentary measurements near huge sources and sinks don’t tell you what happens in the total atmosphere, but a lot of stations, airplane and ship surveys, and nowadays AIRS, in the “well mixed” atmosphere do.
To make a comparison:
You have a fountain where the water is pumped out of a basin and recirculates back into the basin. The fountain has some computerised valve system which regulates the height of the fountain from 60% to 100% on a regular basis. The maximum flow over the fountain is 1000 liters per minute. Now someone opens a small supply into the main water flow at 10 liters per minute, goes away on another job, and forgets that he was adding water.
The extra supply is only 1% of the maximum flux. That is practically unmeasurable within the huge 40% swing of the main water flow. But will we have an overflow of the basin sooner or later, or not?
Further, I missed the end-of-April discussion, as I was travelling through Western Australia, but I discussed intensively at the Salby discussion last year; read my comments there again…
http://wattsupwiththat.com/2011/08/05/the-emily-litella-moment-for-climate-science-and-co2/
Here is your dilemma Ferdinand:
CO2 lags temperature at all measured time scales, and yet you insist that CO2 drives temperature.

Well, it was true until 1850 that temperature dictated the CO2 levels with different lag times, but since 1850 the CO2 levels have been increasing far beyond what the temperature shows. At the current temperature, CO2 levels should be 290-300 ppmv, but the counts are ticking up and we have reached nearly 400 ppmv already. Thus at this moment CO2 is leading by 100+ ppmv… Still, temperature swings cause (a few months) lagged swings in the CO2 rate of change, but those are swings around the trend, not the trend itself.
Thus at this moment the CO2 levels lead the temperature to a far extent. Whether that will have a huge impact on temperature is an entirely different question. My opinion is that the impact will be small (around 1°C), hardly a problem and mainly beneficial. But my fear is that opposing every single bit of what climate research shows, even when based on solid evidence, works counterproductively against the points where the skeptics are right.

Bart
June 5, 2012 9:07 am

Allan MacRae says:
June 4, 2012 at 7:38 pm
No use, Allan. I have demonstrated in excruciating mathematical detail that Ferdinand’s “mass balance” argument is completely bogus. It made no dent in his armor.
Ferdinand Engelbeen says:
June 5, 2012 at 1:51 am
“…the correlation between temperature (change) and rate of change variation remains the same.”
You cannot arbitrarily detrend the data. The slope of the temperature is what produces the curvature in the accumulated CO2, and it matches exactly. That leaves no room for a significant human influence.
Allan MacRae says:
June 5, 2012 at 5:15 am
“It is possible that Ferdinand is correct and I am wrong (but I really doubt that).”
I have addressed this issue at several points in this thread, e.g., here, and here, and here.
fhhaynie says:
June 5, 2012 at 6:34 am
“My analysis indicates that, at present anthropogenic emission rates, they are statistically significant, but account for less than 10% of the accumlation.”
My estimate is between 4% and 6%.

Bart
June 5, 2012 9:13 am

“I have addressed this issue at several points in this thread…”
First link should have been here.

Bart
June 5, 2012 9:15 am

Gah! Here.

June 5, 2012 9:17 am

Ferdinand 6/5 – 3:23 a.m.
Thank you for your reply. After rereading my post of yesterday, I was appalled. It seems that after years of relative isolation in rural Montana, my writing style has devolved into an amalgam of Kant (in the Critique of Pure Reason) and Henry James (in Wings of the Dove), with the piercing clarity of Hegel thrown into the mix!
Some of my puzzlement remains. Using the first graph you linked to, we have a range of ~187 ppmv CO2 to ~292 ppmv CO2 and ~ -9.5 deg. C to +3 deg. C for temp. These give total dCO2 of 105 ppmv and total dT of 12.5 deg. C. Using your hypothesized 8 ppmv CO2 per 1 deg. C, we have 105 ppmv / 8 ppmv = 13.125 deg. C dT, reasonably close to the ~12.5 deg. C I eyeballed from your graph. Here is my problem. Whether the temperature change is referred to the “nearby Southern Ocean” (presumably at very high latitude) or to the entire SH (mostly to the oceans); and whether it is referred to the change in SST or to the entire 0-700 m mixing layer, this sort of temp. change for/in the ocean seems confoundingly large (at least to me). I note that you qualify the processes contributing to the ice core(s) dT by adding where the condensation takes place and what the freezing pace is, but still… I got similarly high dT’s using the 8 ppmv per 1 deg. C formula for other G-IG transition dCO2 increases ranging from 180-300 ppmv to 200-260 ppmv for dCO2. It seems that there has to be a lot to this “place of condensation” and “freezing pace” business. Please throw me a line!

FerdiEgb
June 5, 2012 9:30 am

richard verney says:
June 5, 2012 at 5:54 am
I am sceptical of the reasons justifying the ignoring of this old experimental data and what it tells us of 19th and early 20th century CO2 levels.
There were two problems with the old data: first, the methods. Some were really bad (accuracy ±150 ppmv, though intended for measuring CO2 in exhaled air, still OK at such accuracy); the better ones were ±10 ppmv, good enough for measuring averages, but even the seasonal swings are hard to detect.
The main problem is where the measurements were taken: some series were from mid-town, some within forests, some at the coasts, some on ships at sea. It may be clear that measuring one or a few samples per day at a place where the diurnal variation may be a few hundred ppmv is not really representative of the CO2 levels of that time in the bulk of the atmosphere…
Those measurements that were made on ships and at the coast, with wind from the sea, all lie near the ice core values.
The modern method, introduced by C.D. Keeling and used since 1958 at the South Pole and Mauna Loa, has an accuracy of ±0.2 ppmv and is fully continuous. Calibration happens each hour to maintain the accuracy.
It happens that we have some interesting data from a modern station near Giessen, Germany, near where the longest historical series in the period 1939-1941 was taken. The old data show a variability of 68 ppmv (1 sigma). Have a look at the diurnal variation from the modern station near Giessen, compared to baseline stations (Mauna Loa, Barrow and the South Pole), all raw data:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_background.jpg
The historical sampling in Giessen was 3 times a day, of which the morning and evening samplings were on the flanks of the highest changes…
Thus, unfortunately, most of the historical measurements can’t be used to know the real background CO2 levels of that time.

Bart
June 5, 2012 10:43 am

This keyboard is cursed. The first link where I explained why the derivative information is dispositive was here.

June 5, 2012 10:45 am

Friends:
This thread has become an ‘angels on a pin’ discussion with various participants each asserting that their model of carbon cycle behaviour is right (so everybody else is wrong).
I remind that I began my post on this thread at June 3, 2012 at 2:31 pm by saying

The important point is that the dynamics of the seasonal variation in atmospheric CO2 concentration indicate that the natural sequestration processes can easily sequester ALL the CO2 emission (n.b. both natural and anthropogenic), but they don’t: about 3% of the emissions are not sequestered. Nobody knows why not all the emissions are sequestered. And at the existing state of knowledge of the carbon cycle, nobody can know why all the emissions are not sequestered. But that is the issue which needs to be resolved.

I repeat the important question is
Why don’t the natural sequestration processes sequester all the emissions (natural and anthropogenic) when it is clear that they can?
Nobody has addressed that question although Allan MacRae (at June 4, 2012 at 7:38 pm) touches on it when he writes

The SLC urban CO2 readings show that even a the typical SOURCE of manmade CO2 emissions (the URBAN environment), the natural system of photosynthesis and respiration dominates and there is NO apparent evidence of a human signature. …
Similarly, in the AIRS animation I posted earlier, there is NO human signature and the power of nature is clearly evident. …
These are huge natural DYNAMIC systems that are apparently NOT impacted by the relatively small human contribution. …

Exactly so.
But that takes us back to my question.
It rephrases my question as being,
Why does the system behave as it does when it could easily sequester all emissions and when it does sequester the local anthropogenic emissions?
At June 5, 2012 at 5:15 am, Allan MacRae asserts that the answer is that the observed increase in atmospheric CO2 is a delayed response to global temperature rise over the past century.
I admit that I think he is right, but I point out that the change could be due in part or in whole to the anthropogenic emission. I explain this as follows.
The carbon system may be adjusting to a new equilibrium in response to a change such as the temperature rise, the anthropogenic emission, a combination of those two effects, and/or something else.
The rate constants of some processes of the carbon system are very slow so they take years or decades to adjust. Hence, any change causes the system to adjust towards a new equilibrium which it never reaches because the system again changes before the new equilibrium is attained.

As I said in my post at June 3, 2012 at 2:31 pm and repeated in my post at June 4, 2012 at 5:05 am, using that assumption

there are several models of the carbon cycle which each assumes a different mechanism dominates the carbon cycle and they each fit the Mauna Loa data. We published 6 such models with 3 of them assuming an anthropogenic cause and the other 3 assuming a natural cause of the rise in CO2 indicated by the Mauna Loa data: they all fit the Mauna Loa data.

ref. Rorsch A, Courtney RS & Thoenes D, ‘The Interaction of Climate Change and the Carbon Dioxide Cycle’ E&E v16no2 (2005)
The real issue is that the Mauna Loa data is little different from a straight line relationship with time. Hence, almost any model with two or more variables can be tuned to match the Mauna Loa data to within the measurement error (n.b. to a perfect fit to each datum).
Hence, arguments which amount to “My model works so it must be right” get nowhere: a wide variety of models ‘work’.
I again repeat, the important question is
Why don’t the natural sequestration processes sequester all the emissions (natural and anthropogenic) when it is clear that they can?
Richard

June 5, 2012 10:56 am

Cause and effect suggests that CO2 is driving temperature change, and what we’re looking at here is predominantly the “fast response” of climate to changes in CO2.
I’m guessing that Bart never tested the lag when he made this claim:

I already pointed this out, in detail, on another thread where many things were discussed including the ability of his own model to fit variable contributions of CO_2 to the final growth in concentration without losing the correlation between temperature fluctuations and the derivative of CO_2 concentration (which I demonstrated numerically and posted the code for). Richard Courtney also commented that he has successfully fit multiple models to the CO_2 data within the error bars on the data, making it difficult to use pure “agreement with the data” to resolve differences between models.
However, Bart had an answer for the “lead/lag” problem in that discussion. I haven’t (I admit) gone all the way back to the raw data to investigate the claim, but he alleges that the order inconsistency is due to the fact that his curve uses 24-month running averages, so (one presumes) certain fluctuations can cause dCO_2/dt to run up before T. However, one would have to look at the raw data to see if in fact this is what is happening, and I have not done so. It could equally well be the case that the raw dCO_2/dt data is leading the T data — statistically, this is the most likely possibility simply because of the very sharpness of the correspondence, the very thing he argues for to make his case. The fluctuations match shape at a derivative granularity of the minimum time step size, or very nearly so, making it unlikely (in my own opinion) that this is an artifact of smoothing. But I am not sure, and unless/until you recompute the running averages on different timescales you will not be sure either. I’m too busy to do this, but Wood for Trees does make it pretty easy to do, so play through.
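[Editor's note: one cheap way to probe the smoothing claim above, on synthetic data only (this is not the actual CO2 or temperature record): build two series related by a pure delay, smooth both with the same 24-month centered running mean, and check whether the cross-correlation peak moves. For a symmetric filter applied identically to both series it does not, which suggests the running average by itself should not reverse an apparent lead/lag; trailing or asymmetric averaging, or resampling, could behave differently and is not tested here.]

```python
# Toy check (editor's, synthetic): does identical centered smoothing of two
# series shift the lag at which their cross-correlation peaks?
import numpy as np

rng = np.random.default_rng(1)
n, true_lag = 600, 6                      # months; "T" leads "dCO2/dt" by 6
raw = rng.normal(size=n + 200)
T = np.convolve(raw, np.ones(12) / 12, mode="same")  # reddened-noise proxy
dco2 = np.roll(T, true_lag)               # pure delayed copy of T
T, dco2 = T[100:-100], dco2[100:-100]     # trim edge and wrap-around effects

def peak_lag(a, b, max_lag=24):
    """Return k maximizing corr(a[t], b[t+k]); positive k => a leads b."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    best_k, best_c = 0, -np.inf
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            c = np.mean(a[: len(a) - k] * b[k:])
        else:
            c = np.mean(a[-k:] * b[: len(b) + k])
        if c > best_c:
            best_k, best_c = k, c
    return best_k

def smooth(x, w=24):                      # centered running mean
    return np.convolve(x, np.ones(w) / w, mode="valid")

print(peak_lag(T, dco2))                  # lag before smoothing
print(peak_lag(smooth(T), smooth(dco2)))  # lag after 24-month smoothing
```

Both calls recover the same 6-month lead, so in this toy setup the 24-month running mean broadens the correlation peak but does not relocate it.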
Second, because sometimes dCO_2/dt leads T, and sometimes T leads dCO_2/dt (or they are closely synchronized), it is also important to bear in mind that inferring causality from correlation is weak in both directions. A better explanation is that both of them are being driven by a third variable, with a differential and somewhat random lag. For example, global atmospheric temperatures and fluctuations in CO_2 concentration could both be driven by global SSTs, or even local SSTs in only e.g. the south central Pacific. Or by variations in cloud-based albedo. Or by seasonal business and heating energy consumption in East Asia. Or by space aliens, who are heating the Earth with an invisible heat ray that uses dark energy beamed from a secret base on the Moon to cause global warming with the intent of making us extinct in a couple of hundred years (no hurry, their colonization party is en route and the trip will take them a half century or more).
But exotic speculations aside, I think the fairest response so far is Richard’s. There are many models, with very different presumed underlying causal mechanisms, that CAN fit the CO_2 data. Some of those models make the bulk contribution come from anthropogenic sources — and work well enough to describe the data within error bars. Some models (like Bart’s) make the bulk contribution come from temperature-dependent shifts in e.g. chemical equilibrium in global sources and sinks that regulate the base atmospheric CO_2 concentration almost completely independent of what humans contribute.
In both cases there is some weak evidence confounding the simple explanations, but we simply lack objective, model-assumption-free measurements from real data of the presumed processes that would permit us to conclusively favor any model over all the others. In a complex system like the planet we may never obtain the needed information, because the true model may NOT be simple, or simplifiable: it may involve solving a doubly coupled nonlinear Navier-Stokes problem on a global scale, with intimate coupling of source and sink chemistry to local non-Markovian state, so that all of the fluctuation and variation we see is the “accidental” correspondence of a complex nonlinear system with one of the many variables that drive it, which a momentary shift to a different Poincare cycle will then confound. We might just lack the long-time-scale data (with a sampling density and precision sufficient to be useful) to resolve the question, and might not HAVE it for a century or more of “modern” precision measurements at a still finer granularity than the network of SST buoys or weather stations permits at present.
Given this, it is entirely plausible that Bart is correct. It is entirely (but somewhat less) plausible that he is incorrect. It is silly to argue that he must be right or must be wrong, because this is yet another variant of the eternal “correlation is causality” argument and is known to be scientifically and logically invalid, easily confounded both by accidental correspondence (which happens) and by additional variables with completely distinct causality that control both correlated variables. Smoking does not cause teen pregnancy, in spite of the fact that one can show a positive correlation between teens that smoke and teens that participate in activity that does cause teen pregnancy. We can never resolve the argument with better or worse matches in the correlation, though — we have to appeal to a considerable amount of completely independently derived science and measurement and raw sociology to understand what really (most probably) causes the observed correlation — teen hormones that simultaneously encourage risk taking, social rebellion, and uninformed participation in experimental sexual behavior.
rgb

June 5, 2012 11:10 am

The real issue is that the Mauna Loa data is little different from a straight line relationship with time. Hence, almost any model with two or more variables can be tuned to match the Mauna Loa data to within the measurement error (n.b. to a perfect fit to each datum).
Damn skippy, Richard. Although I would have just asserted a monotonic nonlinear relationship with time, so that a near-infinity of nonlinear models with 1-3 parameters can fit it to within annualized fluctuations, and it is then VERY trivial to superpose any of those primary models with a secondary model that explains only the fluctuations, including the correspondence between the derivative of CO_2 and temperature!
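[Editor's sketch of that superposition point, with made-up numbers (not from the thread): build a synthetic CO2 series as alpha times one smooth trend plus (1 - alpha) times a different smooth trend, plus a small temperature-integral fluctuation term. The correlation between dCO2/dt and T barely changes as alpha sweeps from 0 to 1, because the derivative correlation lives almost entirely in the fluctuation term, not in the trend attribution.]

```python
# Sketch (editor's, synthetic): the trend/fluctuation split means the
# dCO2/dt-vs-T correlation says nothing about which trend model is "right".
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(600.0)                         # months
T = rng.normal(0.0, 1.0, t.size)             # toy temperature anomaly
fluct = 0.5 * np.cumsum(T)                   # so d(fluct)/dt = 0.5 * T
anthro = 315.0 * np.exp(t / 2500.0)          # smooth "anthropogenic" trend
natural = 315.0 + 0.10 * t + 5e-5 * t ** 2   # smooth "natural" trend

corrs = {}
for alpha in (0.0, 0.5, 1.0):                # attribution of the trend
    co2 = alpha * anthro + (1 - alpha) * natural + fluct
    corrs[alpha] = np.corrcoef(np.diff(co2), T[1:])[0, 1]
    print(f"alpha={alpha}: corr(dCO2/dt, T) = {corrs[alpha]:.3f}")
```

All three attributions give essentially the same high derivative correlation, so matching the dCO_2/dt-vs-T fluctuations cannot, by itself, tell an anthropogenic trend from a natural one.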
Given such a short and monotonic baseline behavior, we understand almost nothing on the basis of mere numerical correspondence. Nor are any of the physical arguments or models particularly convincing — they depend way too much on the prior beliefs and biases of the arguer, with little to no way to falsify any of them using the data alone. What would help would be some sign of significant non-monotonic variation at the current monotonic scale (that is, not teensy annualized noisy fluctuations) that can only be predicted or hindcast with a subset of those models, but so far, that isn’t visible in the Mauna Loa data, especially not after they throw part of it away on the basis of (biased) arguments to produce a “cooked” product. The raw data, including the data that they throw away because of the direction the wind blows etc, might tell a different story because accepting or rejecting any part of the data according to external considerations forces the conclusion away from anything that omitted data might confound.
It isn’t worth going through the sins against the logic of statistical analysis routinely committed in climate science, but they are manifold and mortal.
rgb
