Guest post by Lance Wallace
The carbon dioxide data from Mauna Loa is widely recognized to be extremely regular and possibly exponential in nature. If it is exponential, we can learn about when it may have started “taking off” from a constant pre-Industrial Revolution background, and can also predict its future behavior. There may also be information in the residuals—are there any cyclic or other variations that can be related to known climatic oscillations like El Niños?
I am sure others have fitted a model to it, but I thought I would do my own fit. Using the latest NOAA monthly seasonally adjusted CO2 dataset, running from March 1958 to May 2012 (646 months), I tried fitting a quadratic and an exponential to the data. The quadratic fit gave a slightly better average error (0.46 ppm compared to 0.57 ppm); on the other hand, the exponential fit gave parameters with more understandable interpretations. Figures 1 and 2 show the quadratic and exponential fits.
Figure 1. Quadratic fit to Mauna Loa monthly observations.
Figure 2. Exponential fit
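For anyone who wants to reproduce the fits outside Excel, here is a minimal sketch in Python. It assumes the historical column layout of co2_mm_mlo.txt (decimal date in the third column, the seasonally corrected "trend" series in the sixth); that layout is an assumption and may have changed, so check the file header. The starting guesses simply seed the optimizer near plausible values.

import numpy as np
from scipy.optimize import curve_fit

# Assumed historical layout of co2_mm_mlo.txt:
# year, month, decimal date, average, interpolated, trend (season corr.), days
data = np.loadtxt("co2_mm_mlo.txt")          # '#' comment lines skipped by default
t, co2 = data[:, 2], data[:, 5]
mask = co2 > 0                               # drop -99.99 missing-value flags
t, co2 = t[mask], co2[mask]

def quadratic(t, a, b, c):
    return a * (t - 1958) ** 2 + b * (t - 1958) + c

def exponential(t, c0, t0, tau):
    # background level c0 plus a unit-coefficient exponential "starting" at t0
    return c0 + np.exp((t - t0) / tau)

pq, _ = curve_fit(quadratic, t, co2, p0=[0.01, 0.8, 315.0])
pe, _ = curve_fit(exponential, t, co2, p0=[280.0, 1750.0, 60.0])

for name, f, p in (("quadratic", quadratic, pq), ("exponential", exponential, pe)):
    resid = co2 - f(t, *p)
    print(name, np.round(p, 2),
          "mean |error| =", round(float(np.mean(np.abs(resid))), 2), "ppm")

Excel Solver and curve_fit should land on essentially the same parameters, since both minimize the same least-squares objective.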
From the exponential fit, we see that the "start year" for the exponential was 1958 - 235 = 1723, and that in and before that year the predicted CO2 level was 260 ppm. This is not far off the roughly 280 ppm generally estimated for the period up to the Industrial Revolution. It might be noted that Newcomen invented his steam engine in 1712, although the start of the Industrial Revolution is generally placed later in the century. The e-folding time (for the incremental CO2 above 260 ppm) is 59 years, corresponding to a doubling time of 59 ln 2 ≈ 41 years.
The model predicts CO2 levels in future years as in Figure 3. The doubling from 260 to 520 ppm occurs in the year 2050.
Figure 3. Model predictions from 1722 to 2050.
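The 2050 figure can be checked directly from the fitted parameters. Assuming the model form CO2(t) = 260 + exp((t - 1723)/59), which is how the quoted numbers read, doubling the background means exp((t - 1723)/59) = 260, so t = 1723 + 59 ln 260 ≈ 1723 + 328 ≈ 2051, matching Figure 3 to within rounding of the fitted parameters.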
The departures from the model are interesting in themselves. The residuals from both the quadratic and exponential fits are shown in Figure 4.
Figure 4. Residuals from the quadratic and exponential fits.
Both fits show similar cyclic behavior, with the CO2 levels higher than predicted from about 1958-62 and again from about 1978-92. More rapid oscillations with smaller amplitudes occur after 2002. There are sharp peaks in 1973 and 1998 (the latter coinciding with the super El Niño). Whether the oil crisis of 1973 has anything to do with this I can’t say. For those who know more than I do about decadal oscillations, these results may be of interest.
The data were taken from the NOAA site at ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_mlo.txt
The nonlinear fits were done using Excel Solver, with no restrictions placed on the three parameters in each model.
Dave Walker says:
June 3, 2012 at 1:46 am
Caleb- Downslope from Mauna Loa are miles of recent lava flows and then miles of rainforest. The Observatory is 11,000 feet high and enjoys pretty steady wind. They throw out readings tainted with vog or low altitude pollutants. Not a perfect observatory but pretty good.
==============
They decide what is or isn’t this claimed ‘well-mixed’ background, without ever proving there is such a thing, or showing how, even if there were, they could tell the difference. They merely decide what the figure will be; that isn’t science. They’re not measuring anything.
Keeling was anti-coal. He went to a laboratory on the world’s largest active volcano, surrounded by constant volcanic venting and thousands of earthquakes every year above and below sea level, over a huge hot spot creating volcanic islands in a warm sea, all of it constantly releasing carbon dioxide, and with less than two years’ data he claimed he had definitely established a trend, established that global levels from man-made carbon dioxide were rising. WUWT?? Doesn’t that ring any warning bells? The man had an agenda, all his decisions can be seen to be agenda-driven, and he and his son had control over the stations for years, now all within the coordinated ‘consensus’ to prove AGW. There is nothing to suggest the Keeling curve is anything but make-believe.
What happens here? The disjuncts are just ignored. Why hasn’t AIRS produced the top- and bottom-of-troposphere data? Too much proof, from their mid-troposphere conclusions, that carbon dioxide is lumpy and not well-mixed? And they’d have to go away and learn something about wind systems…
Here, a real picture of carbon dioxide levels worth a thousand words:
http://www.biomind.de/realCO2/literature/evidence-var-corrRSCb.pdf
Evidence of variability of atmospheric CO2 concentration during the 20th century
Dipl. Biol. Ernst-Georg Beck, Postfach 1409, D-79202 Breisach, Germany
Discussion paper May 2008
From page 9:
CO2 in Troposphere/Stratosphere 1894-1973
“Figure 4 Tropospheric and stratospheric measurements of CO2 from literature 1894-1973 (see Table 2) graphed from 66 samples, calculated as 18 yearly averages.
Despite the low data density, the CO2 contour in troposphere and stratosphere confirms the direct measurements near the ground that suggest a CO2 maximum between 1930 and 1940.
The CO2 peak around 1942 is also confirmed by several verified data series since 1920 sampled at ideal locations and analysed with calibrated high precision gas analysers used by Nobel awardists (A. Krogh since 1919) showing an accuracy down to ±0.33% in 1935. Figure 5 shows the 5 years average out of 41 datasets (see Table 1). For comparison the reconstructed CO2 from ice records according to Nefttel et al. is included.”
The Neftel et al. comparison is on the next page, page 10, but don’t let that distract you from the picture on page 9…
Then see picture on page 12:
“Considering Figure 8 we can see that Callendar selected only the lowest sample values and omitted several data sets. His averages are mostly lower than the correct values. His so-called “fuel line” is therefore about 10 ppm higher than he calculated. Furthermore he ignored thousands of correctly measured data on the sea, continent and in the troposphere for reasons we can only speculate.”
And if it is objective, that speculation leads to the inevitable conclusion that this was agenda-driven and that the Keeling curve should be flagged as such in science teaching.
Kevin Kilty says:
June 3, 2012 at 9:42 am
The exponential model presented here has no coefficient multiplying the exponential, and therefore implies that this coefficient is 1.0 ppm, a constant in the model not determined by the data at all. This seems extremely unlikely. Is this a typo?
The observed data are all contained within one e-folding time far along in the time series, so the coefficients are correlated with one another and not well determined.
Actually, I struggled with that for a time and don’t really know who won. There IS a coefficient of sorts: write the exponential as a product, and the coefficient is exp(-t0/tau). If we add another coefficient, leaving the rest of the equation alone, the new coefficient just interacts with this one and makes everything ambiguous. If on the other hand we replace the exp(-t0/tau) with just the new coefficient, we lose the knowledge of when the exponential rise began and also the time constant, although the fit would probably be just as good.
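To spell out the ambiguity: with an explicit coefficient the model is c0 + A exp((t - t0)/tau) = c0 + [A exp(-t0/tau)] exp(t/tau). Only the product A exp(-t0/tau) is constrained by the data, so A and t0 cannot both be estimated. Fixing A = 1 ppm is what allows t0 to be read off as a "start year"; any other choice of A shifts the apparent start year by tau ln A and fits exactly as well.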
Ken Gregory says:
June 3, 2012 at 11:45 am
Here is a graph of the CO2 extrapolation using the parameters given in the lead post, with some selected IPCC scenarios. Note the huge range of CO2 projections to the year 2100: 1248 ppm in the A1FI high case, 486 ppm in the B1 low case.
http://www.friendsofscience.org/assets/documents/CO2_IPCC_ScenVsActual.jpg
The A2 Reference case is very close to the CO2 model extrapolation.
Many thanks, Ken, for applying the model above and comparing it to the IPCC models. Quite delightful that a simple exponential model tracks so closely a highly sophisticated multiple-parameter model employing the full panoply of hard-won climate science findings.
Lance Wallace says:
June 3, 2012 at 4:48 pm
“Well, you have 5 adjustable parameters here and you know what von Neumann said about that.”
It isn’t a “fit”, so the criticism is inapposite. It is the simplest model possible to elucidate the behavior dictated by the data, and you know what Einstein said about that.
Wallace, I think the key finding here is in your residuals. They show that there is a temperature dependency, but it’s very small: +/- 1.5 ppm at most over the period of the data.
That in itself is very useful, because I have seen a number of people suggesting that most of the CO2 rise was due to temperature rise (as happens during and after deglaciation).
The other thing to notice is that the quadratic is a better fit. The exponential starts too big and ends too small. That is a bad sign in view of the extrapolations you do at both ends.
As your plot shows, the two can be broadly similar over so short a section, and many observers have loosely categorised the change as “exponential” because it is growing ever faster.
There was some paper (reported on WUWT) that had found “super-exponential” growth in CO2. This attracted much derision here from the uneducated howlers who thought it was alarmist propaganda. In fact, as I pointed out at the time, in mathematics super-exponential just means a function that grows faster than an exponential. The paper’s finding was that a quadratic was a better fit, and it noted that a quadratic is super-exponential. Your plot basically confirms this.
The other problem with extrapolating in either direction is that there is very little of the curved section to fit to, so very small errors could lead to large changes in the coefficients of the fit. It also implies the assumption that whatever caused / is causing the rise is unchanging on the century scale, e.g. that economic growth has been and always will be a fixed percentage per year (a fixed percentage growth gives an exponential).
I did the exponential plotting thing two or three years ago. I found it best to do three different exponential fits: pre-1900, 1900-1965, and 1965 onwards. I was using economic data for fossil fuel extraction and scaling the result to the Mauna Loa record.
I’ll try to dig out my results.
The curve-fitting exercise of the above article is pointless. A fitted equation describes the shape of the curve, but no information is gained by such an exercise, and it cannot assist in explaining why all the emissions are not sequestered.
I have to agree — indeed, the very impossibility of distinguishing a quadratic from an exponential (or, as pointed out, from a harmonic function or many other possibilities) indicates that the baseline is too short to prove much of anything at all.
I also agree with your argument on non-uniqueness — indeed, one thing that IS apparent in this curve is that the variation ABOUT exponential, or quadratic, or sinusoidal, is almost completely irrelevant compared to the dominant behavior. It can be modeled quite accurately by:
C(t) = C_0(t) + V(t)
where C_0(t) is either of the empirical fits above (or any other fit to the smooth curve) and V(t) describes the “anomaly” — the noise on the curve, and is visibly over two orders of magnitude smaller. LOTS of things could determine the shape of C_0(t). Lots of combinations of things, at that. The problem continues to be the absurdly short baseline. Monotone increasing is not only boring, it is impervious to analysis — any monotonic driver can be correlated with it and will “work”. A completely separate cause — which might or might not be a significant component of the cause of C_0(t) — could be responsible for the small secular variations in V(t).
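As a quick check of this decomposition (reusing the arrays and fitted parameters from the Python sketch near the top of the post), strip off two different choices of C_0(t) and compare what is left. If V(t) is a real signal rather than an artifact of the chosen trend, the two residual series should be nearly identical:

import numpy as np

v_quad = co2 - quadratic(t, *pq)     # anomaly after removing the quadratic trend
v_exp = co2 - exponential(t, *pe)    # anomaly after removing the exponential trend

print("correlation of the two anomaly series:",
      round(float(np.corrcoef(v_quad, v_exp)[0, 1]), 3))
print("anomaly scale ~", round(float(np.std(v_quad)), 2),
      "ppm against a", round(float(np.ptp(co2)), 1), "ppm rise in the trend")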
The same argument works both for and against the CAGW CO_2 is the devil argument. Over the last 110+ years, there is at least some correlation between the solar cycle and global temperatures. There is also (presumably) monotone/exponential or whatever increasing CO_2. It has been argued that CO_2 is responsible for the increasing temperature, with solar cycle at best a minor modulator around the overall average increase. However, one can also fit the data with the solar cycle being a primary driver and CO_2 an all but irrelevant modulator. This sort of thing is often possible when fitting nonlinear curves — there is an almost irresistible temptation to commit the sin of seeking confirmation from the success of a fit to some set of functions that tells a good story (that is, the story you want to believe) and blind yourself to the fact that there might be a dozen equally or even more successful fits (and a few less successful ones) that tell a very different story, and that (nature being nature) one of the less successful fit/stories could end up being the true one, given the (usually unstated or unknown) errors in the measurements and methodology that comprise the fit data.
There is a really, really lovely paper by Koutsoyiannis that illustrates the general problem quite beautifully. In fact, the figure on its first page alone says it all, and shows why Lance’s curve fitting is meaningless — it is halfway between window A and window B, where the data could be quadratic, exponential, sinusoidal, or just spectral noise on a really long-term deterministic behavior that hasn’t even begun to be resolved. You can grab a preprint of his paper here:
http://itia.ntua.gr/en/docinfo/673/
Koutsoyiannis is actually a Really Bright Guy, and the essence of his paper is that the entire statistical basis of climate science is tragically flawed. He goes on to show that the entire concept of causality in climate science is badly broken; that it should really be described not by ordinary classical statistics but rather by Hurst-Kolmogorov statistics, which is basically a peculiar structure of stochastic state transitions modulated by non-stochastic noise. I find his argument rather compelling. You can see one of the talks he has presented on HK statistics in climate science here:
http://www.cwi.colostate.edu/nonstationarityworkshop/SpeakerNote/Wednesday Morning/Koustayannis.pdf
Sadly, I think his math is way beyond the level of understanding of most climate scientists. I keep hoping Bob Tisdale sees one of my posts on this work, though, as Koutsoyiannis’ models precisely reproduce the stochastic jump followed by trendless noise that Bob observes in SST data. A Hurst-Kolmogorov process: stationary noise plus scaling behavior.
I also just finished Taleb’s The Black Swan and found it quite revelatory. It, too, warns against the abuse of statistical methods designed for Mediocristan — that part of the world where Gaussian statistics and concepts like standard deviation have meaning and life is rather predictable and boring (like mainline classical physics) — by applying them in Extremistan, places where nonlinearity and scaling behavior render any application of traditional statistical methods completely invalid and capable of making dangerous and expensive mistakes. He also warns — repeatedly — against the danger of confirmation bias, the ability of humans to go looking for data that will confirm their pet theory and find it, at the minor cost of ignoring all the other data that confounds it.
Places like climate science. The very first question that should have been asked in climate science, before starting a sky-is-falling-and-we-must-tell-the-king scandal, is: is there anything truly unusual about the climate today? The answer is very clearly no. A resounding no. A statistically certain, overwhelming no. Whether one looks at 25 million years of proxy-derived climate data, 5 million years of proxy-derived climate data, 1 million years of proxy-derived climate data, 100,000 years of proxy-derived climate data, or 10,000 years of proxy-derived climate data, there is absolutely nothing remarkable about the present. It isn’t the warmest, the most rapidly warming, the coolest, the rainiest, or the -est of anything. It is boring.
The entire CAGW fiasco is a clear example of how humans implicitly believe the largest thing they’ve ever seen to be the largest thing in existence.
rgb
OK, found my exp fits. The data I used has this header, which should be enough to find its source.
#*** Global CO2 Emissions from Fossil-Fuel Burning,
#*** Cement Manufacture, and Gas Flaring: 1751-2007
#***
#*** June 8, 2010
#***
#*** Source: Tom Boden
#*** Gregg Marland
#*** Tom Boden
#*** Carbon Dioxide Information Analysis Center
#*** Oak Ridge National Laboratory
#*** Oak Ridge, Tennessee 37831-6335
The extraction data was integrated by adding each year’s output to get total emissions to date, and then plotted on a log scale. This shows three quite clear stages in development and underlines the fact that fitting just one curve of any sort is too simplistic.
As is generally known, industrial output only really took off after about 1960. The log plot shows there were three stages of fairly constant annual percentage growth. The post-1960 period corresponds to the Mauna Loa record and allows scaling the industrial extraction data to the residual airborne CO2 concentration. This scaling produces a preindustrial CO2 level of around 295 ppm, suggesting the favorite figures of 260-270 are rather too low.
The later segment was fitted from 1965 onwards, and the actual data rises slightly quicker than the fitted line near the end, so the 2050 projection of 462 ppm may be somewhat low *if* the current rate of growth continues.
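A sketch of that scaling step, assuming the CDIAC global totals file is named global.1751_2007.ems with the year in the first column and total emissions in the second (file name, column layout, and units are assumptions; the units are absorbed into the fitted slope). The t and co2 arrays come from the Mauna Loa sketch near the top of the post.

import numpy as np

em = np.loadtxt("global.1751_2007.ems", comments="#")
yr, annual = em[:, 0], em[:, 1]
cumulative = np.cumsum(annual)                 # total carbon emitted to date

# Annual means of the Mauna Loa record over the overlap with the emissions data
ml_years = np.arange(1959, 2008)
ml_co2 = np.array([co2[(t >= y) & (t < y + 1)].mean() for y in ml_years])
cum = cumulative[np.isin(yr, ml_years)]

# co2 ~ baseline + k * cumulative; the intercept estimates the preindustrial level
A = np.vstack([np.ones_like(cum), cum]).T
(baseline, k), *_ = np.linalg.lstsq(A, ml_co2, rcond=None)
print("implied preindustrial level: %.0f ppm" % baseline)

Whether this reproduces the 295 ppm quoted above will depend on the vintage of the data files used.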
http://imagebin.org/index.php?mode=image&id=215058
http://imagebin.org/index.php?mode=image&id=215060
[mods, these images are only good for 15 days, please copy them if possible.]
I looked at the correlation between d(S_CO2)/dt and HADSST3(?), and the results were pretty much what I was expecting (ironically based in part on work that Bart had done previously).
The peak correlation corresponds to approximately a 2 month lag between change in CO2 and temperature, with temperature lagging CO2. It’s actually easy to see which lags which with a zoom in of Bart’s chart.
Here’s my figure.
Cause and effect suggests that CO2 is driving temperature change, and what we’re looking at here is predominantly the “fast response” of climate to changes in CO2.
I’m guessing that Bart never tested the lag when he made this claim:
(That’s par for the course from my experience.)
Allan:
Whether this is true or not depends on whether you are looking at surface measurements or the “full atmospheric column”.
The Phoenix dome regularly achieves surface CO2 concentration levels over 600 ppm, but that has a negligible effect on Phoenix temperatures because it’s all concentrated near the surface. The whole point of measuring on Mauna Loa is to look at the “well-mixed” portion of the atmosphere, well above regional-scale changes (and since the station sits above a tropical jungle, there are substantial diurnal variations in CO2 driven by natural variability, again well studied by Keeling).
People who are going to criticize the work of others should really take the time needed to understand the rationale of the original experimental setup. Otherwise you’re not being skeptical; you’re being overly credulous toward your own preconceived notions. Keeling spent roughly 10 years researching the optimal way of collecting these data, and before him, for the very reasons you’ve managed to elucidate, it was generally accepted that this was not achievable. Part of the credit he has received, and it was well deserved, was for working out how to sidestep many of the issues you and others have raised (and these are patty-cake-level issues you’re raising compared to the critics of the day he had to face).
This is just another one of those wave-offs you guys like to do for facts you can’t readily counter. “Demolished” just means somebody wrote some nice-sounding words that don’t stand up to the scrutiny of the skeptical mind.
Carrick says:
June 3, 2012 at 10:47 pm
Honestly, Carrick, I would have expected better than such a series of elementary errors from you. But, then again, that’s par for the course from my experience.
We’re talking about the relationship between the CO2 level and the temperature. You are looking at the derivative of the former versus the latter. With a 90 degree phase lead from the derivative, what the hell else would you expect?
As for zooming in on my chart, the CO2 data is processed with a 2 year non-causal filter, being the centered average. The WoodForTrees site automatically centers the average to make up for the phase delay. So, it’s hardly surprising that the data points anticipate the future when half of each one is made up of information from the future.
Massive fail, dude.
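To make the quarter-cycle point explicit: for any oscillatory component C(t) = A sin(wt), the derivative is dC/dt = A w cos(wt) = A w sin(wt + pi/2), i.e. the derivative leads the level by a quarter period regardless of the underlying causality. A small measured lag between temperature and dCO2/dt therefore says nothing direct about the lag between temperature and CO2 itself, because the differentiation has already shifted the phase by 90 degrees.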
OK, let’s try again for those plots. Imagebin seems to be appropriately named: if you don’t want anyone to see your image, image-bin it!
http://image.bayimg.com/iaongaadp.jpg
http://image.bayimg.com/jaonfaadp.jpg
I hate to say it, but this analysis is meaningless. You can’t just fit a curve to something and extend it, that’s the kind of thing that the AGW alarmists do.
Instead, you need to look at the actual rates at which CO2 is emitted, and the rate at which it is sequestered. This is an exponential rate of sequestration of some kind, in which the amount sequestered increases as the atmospheric concentration increases.
One of the characteristics of this type of exponential decay is that if the emissions are constant, you end up with a “sigmoid function”, where eventually the amount sequestered will increase to match the amount emitted and the rise in atmospheric levels will stop.
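Here is a minimal numerical sketch of that behavior, assuming first-order sequestration, dC/dt = E - k(C - C0). The numbers for E, k, and the equilibrium level C0 are purely illustrative, not fitted values:

# C0 = equilibrium level (ppm), E = constant emissions (ppm/yr),
# k = sequestration rate constant (1/yr) -- all hypothetical
C0, E, k = 280.0, 2.0, 0.02
C = [C0]
for _ in range(550):                           # simple annual Euler steps
    C.append(C[-1] + E - k * (C[-1] - C0))

# With constant emissions the level saturates at C0 + E/k instead of growing
# without bound; ramping E up and then holding it constant is what produces
# the S-shaped ("sigmoid") history described above.
print("asymptote:", C0 + E / k, "ppm; level after 550 years:", round(C[-1], 1))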
Now, the author shows us this image, claiming doubling by 2050 …
However, there is no reason to prefer his curve over this sigmoidal curve, which matches the data just as well as his does …
In reality, nothing in nature continues to grow exponentially.
w.
Carrick, could you explain why you expected the rate of change of CO2 to ‘cause’ a temperature rise?
It is well known that higher water temperature will produce outgassing; on the other hand, if you are suggesting that CO2 concentration is producing a “forcing” that is producing a temperature change, you should be looking at CO2 concentration vs dT/dt.
Perhaps you could explain your thinking. I seem to have missed the causal relationship you are suggesting.
Lance Wallace says:
June 3, 2012 at 5:01 pm
Thank you, Dr. Engelbeen, for the useful references. Your proposed formula seems to suggest that at times of decreasing or plateauing temperatures, the CO2 emissions would need to increase at just the right speed to offset the reduced effect of temperature and maintain the exponential increase.
The human emissions are increasing nearly linearly over time; that is the basis for the trend. The 0.55 is pure coincidence, as it depends on the reaction speed of the process, but it has been remarkably constant over the past 100 years or so (the “airborne” fraction of the emissions, in mass, remains constant). As the influence of temperature on the short term (seasonal: ~5 ppmv/°C; interannual: ~4-5 ppmv/°C around the trend) is rather small, its influence on the trend, if averaged over 2-3 years, is near negligible.
BTW, no need to plot the accumulated CO2 trend on log paper; even on a linear scale there is a remarkable correlation with the emissions:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_co2_1900_2004.jpg
BTW2: No Dr. here; I have a B.Sc. in chemistry, but changed to an M.Sc.-level job in process automation some 25 years ago, and have now been retired for 8 years…
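The airborne fraction is easy to check with round numbers. The 2.13 GtC-per-ppmv conversion is the standard figure for the mass of the atmosphere; the concentration and emission numbers below are rough illustrations, not the actual series:

increase_ppm = 79.0    # Mauna Loa rise, ~315 ppm (1958) to ~394 ppm (2012), approximate
emitted_gtc = 290.0    # rough cumulative fossil carbon over the same span, approximate
airborne_fraction = increase_ppm * 2.13 / emitted_gtc
print(round(airborne_fraction, 2))   # ~0.58, close to the 0.55 quoted above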
Willis : “In reality, nothing in nature continues to grow exponentially.”
Your point about many curves matching such a short segment is fair, but the whole AGW debate is about a change that is NOT natural: economic growth (and hence fossil fuel usage) has been growing exponentially since the 60s (about 2% per year).
So I don’t see why you suggest a sigmoid, which would correspond to constant emissions. No sign of that happening in the near future. 🙁
Bart says:
June 3, 2012 at 3:24 pm
I assert that these are the facts, folks:
1) CO2 is very nearly proportional to the integral of temperature anomaly from a particular baseline since 1958, when good measurements became available.
That is nice, but much too short to give a straight answer. If you extend that back to before the MLO time, the temperature was below your baseline for the full period 1900-1935, averaging -0.2°C. Integrated over that period, that is good for a CO2 drop of 84 ppmv. Whatever the quality of the historical measurements, other proxies (like stomata) or ice cores, none of them shows such a drop. To the contrary, all show some increase. Extending that further to the depth of the LIA (some 0.4°C below baseline during several centuries), about all the CO2 would be used up.
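The arithmetic behind the 84 ppmv: the model under discussion is dCO2/dt = k (T - T0), and whatever coupling constant k reproduces the MLO-era fit must, applied to 1900-1935, remove k × 35 × 0.2 ppmv. The 84 ppmv quoted corresponds to k = 12 ppmv per year per °C (a back-calculation from that figure, not a value stated in the thread): 12 × 35 × 0.2 = 84.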
That simply shows that the correlation between temperature and rate of change of CO2 holds mainly for the variability; on the trend itself it is a spurious correlation.
Second, there is no known physical mechanism which can deliver 70 ppmv into the atmosphere in 50 years for an average increase of only 0.2°C. The equilibrium CO2 shift between ocean surface and atmosphere for a 0.2°C increase is +3 ppmv. Vegetation acts in the opposite direction, and both are proven net sinks for CO2. All other sources are too slow or too small.
Further, there are lots of other observations which need to be fitted, whatever the source of the increase may be, temperature driven or not:
– the decline in d13C/12C ratio in the atmosphere and oceans
– the pre-bomb decline in d14C/12C ratio
– the increase of biomass (oxygen balance)
– the increase of DIC (total inorganic carbon) in the oceans
– the overall mass balance
2) Because of this proportionality, the CO2 level necessarily lags the temperature input, therefore in dominant terms, the latter is an input driving the former.
Agreed, except that this is true for the influence of temperature variations on the variations in increase rate, not on the increase rate itself.
3) The temperature relationship accounts for all the fine detail in the CO2 record, and it accounts for the curvature in the measured level.
The first is true; the second is 96% spurious (the remaining 4% is the effect of the increase in overall temperature over that period).
4) This leaves only the possibility of a linear contribution from anthropogenic inputs into the overall level, which can be traded with the only tunable parameter, the selected anomaly offset.
That is where we differ in opinion: You can have the same fit of both curves if the human emissions are responsible for most of the trend and temperature variability is responsible for most of the variability in the derivative of the trend.
5) Anthropogenic inputs are linear in rate. Therefore, to get a linear result in overall level from them, there has to be rapid sequestration. (Else, you would be doing a straight integration, and the curvature, which is already accounted for by the temperature relationship, would be too much.)
As the temperature-trend relationship is largely spurious, there is no need for rapid sequestration (as can be seen in the observed adjustment time).
6) With rapid sequestration, anthropogenic inputs cannot contribute a significant amount to the overall level.
Again, the observed sequestration is not rapid; it is on the order of 53 years.
Now, you may quibble about this or that, and assert some other relationship holds here or there, but your theories must conform with the reality expressed by these six points, because this is data, and data trumps theory.
As your theory already trumps the data in the first point, your theory is falsified…
Duh, looks like the links only work from the IP where I uploaded them. The day WP has a page preview we’ll all do a lot better. Let’s try again……
Bart says:
June 3, 2012 at 7:27 pm
Lance Wallace says:
June 3, 2012 at 4:48 pm
“Well, you have 5 adjustable parameters here and you know what von Neumann said about that.”
It isn’t a “fit”, so the criticism is inapposite. It is the simplest model possible to elucidate the behavior dictated by the data, and you know what Einstein said about that.
Excellent riposte, Sir, and I fully deserved it for being flippant. Models with some sort of underlying physics are much to be preferred over my simple curve-fitting exercise. I checked your link and could see that a small tau1 was better than a large one, but I didn’t see what values you obtain for tau2 or the other parameters, and I wondered whether you would care to present the values here and perhaps comment on their interpretation.
Willis Eschenbach says:
June 3, 2012 at 11:41 pm
“I hate to say it, but this analysis is meaningless. You can’t just fit a curve to something and extend it, that’s the kind of thing that the AGW alarmists do.”
Ouch! You really know how to hurt a guy, Willis. I’m hardly defending what I did in a lighthearted way for an hour or two a couple of days ago. It was just that the fit resulted in a rather good estimate of both the rough time of the beginning of the rise (some 200 years ago) and the rough level of the background CO2 (about 260 ppm). Then it turned out, as RGBrown noticed above, that the residuals were on the order of 1 ppm, two orders of magnitude below the CO2 levels. Finally, the residuals may be conveying some information to us, which several people above have tried their hand at interpreting. For example, the single almost pure spike occurred in 1998, contemporaneous with the super El Niño and the temperature spike. Without the “meaningless” model, one would not see the departures from the model, which could provide clues to the underlying physics. For example, there appear to be annual-to-decadal cycles visible in the residuals.
That said, I have no argument at all with those who have correctly noted that other approaches can provide equally good fits. Presumably Bart’s coupled differential equation model, Willis’s sigmoid, Brown’s model with a main monotonic curve plus a small noise term, Koutsoyiannis’ Hurst-Kolmogorov statistics arising from stochastic step functions followed by trendless noise, and P. Solar’s three-exponential model can all fit the data equally well, and I would expect them all to show the exact same behavior of the residuals that was shown by both the quadratic and exponential models I used. If so, then there is something that the common “noise” afflicting all these models is trying to tell us about the underlying physical reality.
Lance,
I have been statistically curve-fitting most of the available climate data for several years. Most recently I have worked with the Scripps column 10 CO2 and 13CO2 monthly averages from the South Pole to Alert, Canada. I included global anthropogenic emissions in these regressions. This analysis indicates less than a 10% anthropogenic contribution added onto a rising segment of a 200-year natural cycle. Take a look at http://www.retiredresearcher.wordpress.com. So far I have had no peer reviews of this work. Only one comment so far. With all the smart people that visit this site, I expected more.
Just learned about Julian Flood’s theory, the Kriegsmarine Effect, with a temperature spike at the same time as the spike in CO2 in the upper troposphere and stratosphere in the Beck paper I posted.
http://wattsupwiththat.com/2012/06/03/shocker-the-hansengiss-team-paper-that-says-we-argue-that-rapid-warming-in-recent-decades-has-been-driven-mainly-by-non-co2-greenhouse-gases/#comment-1000669
Carrick says: June 3, 2012 at 11:03 pm
Sorry Carrick,
First, you totally miss the point of the urban CO2 readings – it’s about Ferdinand’s mass balance argument, which fails not only on a seasonal basis but even on a daily basis, imo.
Your conclusions are technically wrong because you have not taken the time to understand the issues. You are missing one or more steps in the process of Read, Think, Write.
Your comment on the 2 month lag is just plain wrong on several counts. Look carefully at my original graphs in my 2008 paper. All the original data is in Excel sheets there.
The C14/13/12 issue has been done to death here at WUWT and elsewhere (by Roy Spencer and many others) – Google it.
Finally, your condescending tone demeans you. Forgive me for responding in kind.
Yet another attempt at getting a frigging image into this blog.
http://imagebin.org/index.php?mode=image&id=215058
http://imagebin.org/index.php?mode=image&id=215060
Lance Wallace:
At June 4, 2012 at 3:20 am you say:
I notice that you do not mention my post at June 3, 2012 at 2:31 pm which says
Each of our models matched each annual value of atmospheric CO2 concentration indicated by the Mauna Loa data to within the measurement accuracy of the Mauna Loa data. So, none of them have any “residuals” or “noise”. This was explained in the link.
But our three basic models each assumed a different mechanism dominates the behaviour of the carbon cycle. And they can each be used to emulate a natural or an anthropogenic cause of the rise in the Mauna Loa data.
Hence, our findings falsify your suggestion that “there is something that the common “noise” afflicting all these models is trying to tell us about the underlying physical reality”.
Richard
PS Our pertinent paper is
Rorsch A, Courtney RS & Thoenes D, ‘The Interaction of Climate Change and the Carbon Dioxide Cycle’, Energy & Environment, vol. 16, no. 2 (2005)
Carrick – Here are just a few C13/C12 articles I found in 2 minutes of searching – there are many more.
http://wattsupwiththat.com/2008/01/28/spencer-pt2-more-co2-peculiarities-the-c13c12-isotope-ratio/
http://wattsupwiththat.com/2008/01/25/double-whammy-friday-roy-spencer-on-how-oceans-are-driving-co2/
http://chiefio.wordpress.com/2009/02/25/the-trouble-with-c12-c13-ratios/
Myrrh says:
June 3, 2012 at 5:13 pm
Despite the low data density, the CO2 contour in troposphere and stratosphere confirms the direct measurements near the ground that suggest a CO2 maximum between 1930 and 1940.
Myrrh, I have had a lot of discussions with the late Ernst Beck about the validity of his data. The tropospheric data don’t confirm the direct measurements near the ground, simply because the tropospheric values were sometimes hundreds of ppmv higher than those near the ground. That shows the data are completely useless. Unfortunately so. The same is true for most of the data showing the 1942 “peak”, mostly taken at places with a huge diurnal variation and extreme variability. That alone already shows that the data are highly contaminated by local sources.
The data at Mauna Loa are sometimes contaminated by local sources too, but by no more than +/- 4 ppmv, compared to e.g. Giessen, where the longest 1939-1941 series was taken with a variability of 68 ppmv (1 sigma!). How can one deduce a “global” signal from such a series?