The Climate Swoosh

by J Storrs Hall

In my previous post, I argued that sea-surface temperatures hadn’t shown an inflection in the mid-twentieth century, and that the post-50’s rise was essentially a land-based phenomenon. To take the analysis further, I thought I would try to find just what the climate signal from CO2 was. The method is to find a fit to the temperature record that includes the CO2 forcing signature as a component, and see how big its contribution is compared to the other components of the fit.

First, the CO2. To get a curve since 1850, I got the estimated emissions from here, integrated for accumulation, scaled by matching to the Mauna Loa measured CO2 (red), and took the log for forcing. (No arguments, please; this is the bog-standard story. Let’s assume it’s true for the sake of argument.)
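For concreteness, here is a minimal sketch of that construction in Python. The file names, array alignment, and the least-squares matching step are illustrative assumptions, not the code actually used for the post:

```python
import numpy as np

# Illustrative inputs: annual emissions (GtC) for 1850-2010 and Mauna Loa
# annual-mean CO2 (ppm) for 1959-2010. File names are placeholders.
years = np.arange(1850, 2011)
emissions = np.loadtxt("emissions_gtc.txt")
ml_years = np.arange(1959, 2011)
ml_ppm = np.loadtxt("mauna_loa_ppm.txt")

# Integrate emissions to get cumulative accumulation.
cumulative = np.cumsum(emissions)

# Scale (and offset) the cumulative curve to match the measured Mauna Loa
# record over the overlap period, by simple least squares.
overlap = np.isin(years, ml_years)
A = np.column_stack([cumulative[overlap], np.ones(overlap.sum())])
scale, offset = np.linalg.lstsq(A, ml_ppm, rcond=None)[0]
co2_ppm = scale * cumulative + offset

# Take the log for forcing: radiative forcing is ~logarithmic in CO2.
forcing = np.log(co2_ppm / co2_ppm[0])
```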

There’s clearly a knee in the curve ca. 1960.  Also note that it’s been essentially straight since the 70’s — it’s the log of an exponential.
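In symbols, that remark is one line of algebra: if the concentration grows exponentially, the logarithmic forcing is linear in time,

```latex
C(t) = C_0 e^{kt} \quad\Longrightarrow\quad F(t) \propto \ln\frac{C(t)}{C_0} = kt .
```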

For components of the fit function, I used a cosine to capture the cyclicity we already know is in the record, a quadratic, and the forcing curve. I had used a second cosine before, and we know it produced two inflections in the result. The quadratic can only produce one, so the forcing curve has a better chance of matching the other one.
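A sketch of that fit, reusing the forcing curve from the sketch above. The 61-year period and the unit initial guess for the forcing coefficient come from the post; the scipy usage, parameterization, and other starting values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed available: t (years) and sst (HadSST anomalies) on the same
# time axis as the forcing curve built above.
def forcing_at(tt):
    return np.interp(tt, years, forcing)

def fit_func(tt, amp, phase, b0, b1, b2, c):
    """61-year cosine + quadratic + scaled CO2 forcing."""
    cosine = amp * np.cos(2 * np.pi * (tt - phase) / 61.0)
    quadratic = b0 + b1 * (tt - 1850) + b2 * (tt - 1850) ** 2
    return cosine + quadratic + c * forcing_at(tt)

p0 = [0.1, 1900.0, 0.0, 0.0, 0.0, 1.0]    # forcing coefficient starts at 1
params, _ = curve_fit(fit_func, t, sst, p0=p0)
print("forcing coefficient:", params[-1])  # the post reports -1.67
```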

The idea is to find the overall best match and then look at the components to see how big the signal from the forcing is in comparison with the other components, which we will assume represent natural variability.  We’ll plot each curve with the amplitude the optimizer gives it.  Here’s what we get:

The blue line is the overall fit. Cyan is the 61-year oscillation, as before.  No surprises here. Magenta is the quadratic, looking a lot like the sinusoid of the previous fit.  Red is the CO2 forcing.

The CO2 forcing is upside down.

I gave the optimizer an initial guess for the forcing coefficient of 1; it came back with -1.67.  This was, frankly, unexpected.  I had seriously thought I would find some warming contribution from the forcing component.

So what on earth is going on?  Here’s what we get if we add just the quadratic and the forcing curve:

For comparison, I’ve also plotted the second sinusoid from last time (green).  It seems that the secular trend that the optimizer really, really wants is the shape of a Nike swoosh.  If given only a quadratic to work with, it has to subtract the forcing curve to straighten out the twentieth-century rise.  And it really, really wants the knee of the curve to be in 1890.

Does this mean that CO2 is actually producing a cooling effect?  Absolutely not.  It simply means that the secular rise in the twentieth century was a straight line, and the fit would do whatever it took to produce that shape.  (This is why Pat Frank’s linear fit worked so well.  As he noted, the linearity of sea-level rise would tend to confirm this.)  What it does mean, though, is that there is no discernable CO2 warming signal in the HadSST temperature record.  The (very real) twentieth century warming trend appears to have started about the time Sherlock Holmes was investigating the Red-Headed League.

John Marshall
June 8, 2011 4:25 am

Why are CO2 outputs always given as the mass and not, more importantly, as the proportion of total CO2 output compared to the natural emitters? Victorian atmospheric CO2 levels are guessed at and given as 280ppmv or so. The problem is that measurements taken back then, by several scientists round Europe, give values of 400-500 ppmv, nearly twice the stated Victorian values.
There are also inbuilt errors because models use 100-200 years as the figure for CO2 residence time when in fact it is between 5 and 10 years. Thus the imagined IPCC ‘buildup’ of CO2 in the atmosphere does not happen.

RockyRoad
June 8, 2011 4:27 am

Oh, you’re going to be a very unpopular guy with this sort of analysis–warming starts w/ Sherlock Holmes; CO2 makes no contribution whatsoever to a warming Earth. I certainly hope you weren’t expecting some sort of government funding or any accolades from academia!
/sarc off.

Sam Hall
June 8, 2011 4:32 am

“You will not apply my precept,” he said, shaking his head. “How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?” – Sherlock Holmes

June 8, 2011 4:35 am

Oh, please do consider CO2 cooling. Enhanced radiation to the Outer Deeps is quite plausible, IMO.

Steve Keohane
June 8, 2011 4:52 am

“What it does mean, though, is that there is no discernable CO2 warming signal in the HadSST temperature record.” We are 13 days from the summer solstice. The native vegetation is three weeks later than usual. I’ve always been able to plant a garden in May, and I’m tired of shuffling 4 dozen plants inside every night, waiting until it seems safe to put them in the ground. It is 33°F right now, not a confidence builder. I want my global warming back!

Mike Davis
June 8, 2011 4:52 am

HadSST is a best guess estimate extrapolated from a sampling of less than 1% of the ocean surface and that was not even the same 1% over time. I do not doubt your tests and end results but you are basing the tests on the Garbage that is HadSST! Even basing it on the 200 plus years of surface temperatures available would give the same end results. GIGO!

June 8, 2011 4:52 am

While the fittings you did are well done, I am not convinced that they have much real-world meaning. CO2 is (used to be) sensitive to the SST, and the warming out of the LIA would result in increasing CO2 levels. If the SSTs are increasing, so are CO2 levels.
I used Moberg-2005 to build a rate of temperature change, and his reconstruction showed a mostly positive “rate of warming” for the 1800’s. That would indicate that most of the 1800’s had positive increases in CO2 levels.
http://theinconvenientskeptic.com/2011/04/2000-years-of-rate-of-temperature-change/

mathman
June 8, 2011 5:17 am

There you go again!
Relying on mathematics.
Can’t you just trust your feelings?
So what if the forcing is negative?
We all KNOW that we are at fault. Our hearts tell us so.
So there.

June 8, 2011 5:27 am

Very interesting, and Steve Keohane, I agree. Cold kills. Warmer is better.

June 8, 2011 5:28 am

Try substituting a 309 year cycle for your long term and forget about man-made “forcing”. This “natural” change projected into the future suggests a max of around 500 ppm about 2070.

Edim
June 8, 2011 5:35 am

I fully agree with John Marshall. Very important points.

J Storrs Hall
June 8, 2011 5:36 am

John: There’s at least a little evidence that the 19th century was flatter than the 18th or 20th here:
http://wattsupwiththat.com/2010/09/28/loehle-vindication/
Both reconstructions agree on that point, though of course they are NH and mostly land, whereas I was looking at SSTs.

Doug in Seattle
June 8, 2011 5:41 am

The CO2 levels before the 1960’s from the Keeling curve and reproduced for this analysis are nothing more than an artifact of mixing apples (Hawaiian direct measurements) and oranges (ice core measurements from trapped gas bubbles). They have no basis in reality, and any conclusions drawn upon them cannot be regarded as reliable.

Patrick Davis
June 8, 2011 5:47 am

“RockyRoad says:
June 8, 2011 at 4:27 am”
Just watched a program called “Catastrophe: Snowball Earth” narrated and presented by Tony Robinson (Baldrick in Blackadder). In this episode it is claimed that the removal of so much CO2 from the atmosphere by bacteria 650mya resulted in a tipping point leading to “runaway” cooling. Ice built up and spread from the poles to the equator, covering the whole planet in thousands of meters of ice. Then super volcanoes erupted, spilling CO2 into the atmosphere, causing the planet to warm and melting the ice sheets, leading to the warm planet we, and all other life as we know it, live on today. I had to laugh! If “Baldrick” says CO2 did it, it must be true.

OzJuggler
June 8, 2011 5:49 am

If we extrapolate from this formula then it would predict that the next 20 years will see definite mild cooling, as the downward part of the sinusoid briefly overpowers the upward quadratic, followed by a return to every warmist’s favourite trend – and warming stronger than ever.
This would seem to mean that the last decade of ocean cooling is not an indicator that the warming has gone and so is of little value when debating AGW theists.

Tim Folkerts
June 8, 2011 6:00 am

How did you account for changes in aerosols in the atmosphere, which could have a significant forcing as well?
How did you account for changes in CH4 in the atmosphere, which could have a significant forcing as well?
How did you account for changes in the sun, which could have a significant forcing as well?

Ric Locke
June 8, 2011 6:03 am

In the field that used to be my profession, data points (surveyed ground locations, including elevation) are difficult and expensive to collect. It is therefore useful, from the standpoint of cost-effectiveness, to collect a few points and interpolate the intermediate values.
The original state of the art for that interpolation relied on polynomial and/or cosine-curve fits. Workers quickly found that that system is sensitive to errors and the specific configuration of the input data. In one memorable case, the procedure reported to us that the swamps north and east of Mobile, AL. had an elevation of roughly 2,300 ft AMSL, with a calculated confidence >90%.
Polynomial fitting was abandoned with great glee when a newer procedure was developed, viz., linearization of the equations by partial differentiation, followed by solving the linear equations by iteration using the method of least squares. It isn’t perfect, but it does have the virtue of damping, rather than accentuating, the effects of data point errors and pathological configurations of the input data.
As a result, whenever I see “cosine fit” or references to polynomials (including “exponentials”) I stop reading except as a matter of amusing interest. I never had much to do with the mathematical formulations themselves, but I got intimately acquainted with the results. If garbage in gives garbage out, polynomial and cosine fits are the garbage truck.
Regards,
Ric

Bob Longworth
June 8, 2011 6:09 am

I am neither scientist nor mathematician but I have read a lot of the scientific papers on the whole climate change subject. Seems that CO2 concentrations follow temperature fluctuations by 800 years +/- according to the ice cores etc. What totally baffles me is that everyone seems to be trying to correlate today’s numbers when it strikes me that today’s increase in CO2 would more likely be caused by temperature increases during the mediaeval warm period, no? Seems to me that the 800 +/- years would line up fairly close.

NikFromNYC
June 8, 2011 6:23 am

“As he noted, the linearity of sea-level rise would tend to confirm this.”
Church and White, the classic purveyors of an exponentially shaped sea level curve, in their latest article update of 2011 (which eliminated the word “accelerating” from the title) plot, in hard-to-see yellow, a simple average of tide gauges which, once I clean all the dark plots behind it away, shows stark linearity.
Graph: http://oi51.tinypic.com/28tkoix.jpg
Reference: http://www.springerlink.com/content/h2575k28311g5146/fulltext.pdf

Duncan
June 8, 2011 6:50 am

What’s the justification for adding the quadratic?
I mean, what is it supposed to model? Seems like that’s the forcing AGW proponents ascribe to CO2.

David Ozenne
June 8, 2011 6:51 am

OK, the sinusoid I think I get. Some natural cycle or another. But what on earth is the quadratic supposed to represent? Surely there is no physical reality to a quadratic temperature signal centered in the 1880s. So then you need a negative coefficient on the CO2 signal to keep the quadratic from running away. Who cares what you get when you include the blatantly unphysical quadratic? Or are you suggesting that temperatures in 1750 were similar to today, and before that it was hotter than hell?
Take out the unphysical quadratic and what coefficient are you left with on the CO2 fit? (I don’t know what it is, but I’ll go ahead and assert that it will be positive.)

tom
June 8, 2011 7:06 am

J,
Are there studies that replace your proxy for natural variation with the actual natural variables and then regress delta temp = a + b1*CO2 + b2*var2 + …. + bN*varN + e, logged, or appropriately transformed as needed?
If natural variability, as captured by Variable2 thru VariableN, explains the change in temp, then the coefficient on CO2 would not differ statistically from zero.
Why don’t we see these kinds of studies?
thanks

June 8, 2011 7:19 am

Any reason I can’t paste in a quote?
(Suddenly a new WUWT version appeared with ‘W Log In, t Log In & f Log In’, with my name and details already there.)

Keitho
Editor
June 8, 2011 7:20 am

Some folk require no science at all to convince themselves we are all just wicked and guilty.
http://www.smh.com.au/business/denying-the-earths-growth-and-depleting-resources-is-just-eating-into-our-future-20110608-1ft6i.html

Dave Springer
June 8, 2011 7:25 am

@Hall
The result is unexpected because you have flaws in your assumptions.

There’s clearly a knee in the curve ca. 1960. Also note that it’s been essentially straight since the 70′s — it’s the log of an exponential.

You need to adjust your curve labeled “CO2 Forcing” to reflect the fact that the IR absorption characteristics of CO2 are not linear across the range of atmospheric concentrations in the graph. This is why the climate boffins talk about surface temperature rise being a constant amount per CO2 doubling. The constant amount per doubling is usually given as 1.1C (absent the mythical water vapor amplification). Thus going from 280ppm to 560ppm gives a rise of 1.1C, and then going from 560ppm to 1120ppm also results in a rise of 1.1C. If you take this into account there is no longer a “knee” and you’ll find that the forcing is a straight line, not an exponential.
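A one-liner makes the constant-per-doubling rule concrete (Python; the 280ppm baseline is an assumption, the 1.1C figure is from the comment):

```python
import math

def warming_per_doubling(c_ppm, c0_ppm=280.0, per_doubling=1.1):
    """Constant warming per CO2 doubling, as described above (no feedbacks)."""
    return per_doubling * math.log2(c_ppm / c0_ppm)

print(warming_per_doubling(560))   # 1.1 (one doubling)
print(warming_per_doubling(1120))  # 2.2 (two doublings)
```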

In my previous post, I argued that sea-surface temperatures hadn’t shown an inflection in the mid-twentieth century, and that the post-50′s rise was essentially a land-based phenomenon.

I agree it is largely a land based phenomenon. CO2 acts as an insulator by absorbing upwelling LWIR (long wave infrared) from the surface and re-emitting it equally in all directions. Land surfaces can absorb downwelling CO2 and the net effect is that the rate of emission from surface to space is decreased i.e. the ground doesn’t cool down as fast. If daytime heating via SWVR (short wave visible light) remains constant, the result is that the surface temperature will rise. This increases the rate of emission from surface to space, establishing a new, higher surface equilibrium temperature.
Water is very different from land. H2O has many properties that, taken together, make it a very unique substance. One of those properties is that it’s almost perfectly transparent to SWVR and perfectly opaque to LWIR. Downwelling LWIR from CO2 cannot be absorbed by a body of water. LWIR absorption and emission take place in a water surface layer of just a few micrometers. The result is no rise in water temperature but rather a rise in evaporation rate. This, in combination with another property of water called “latent heat of vaporization”, which is unique in how large it is in water, means that the lion’s share of the energy from downwelling IR results in water vapor of the same temperature as the water surface. Water vapor, being far lighter than oxygen and nitrogen, tends to rise faster and get mixed into the atmosphere above. Like CO2 it has no greenhouse effect over the ocean. The effect it does have is increasing cloud cover, which lowers daytime heating of the ocean by reflecting away more SWVR before it ever reaches the ocean surface. This then works as a negative feedback. The elevated evaporation rate caused by greenhouse gases is nullified by elevated albedo. This is the basis of my agreement with you on the CO2 greenhouse effect being a land (or ice!) based phenomenon.
Marshall

Why are CO2 outputs always given as the mass and not, more importantly, as the proportion of total CO2 output compared to the natural emitters? Victorian atmospheric CO2 levels are guessed at and given as 280ppmv or so. The problem is that measurements taken back then, by several scientists round Europe, give values of 400-500 ppmv, nearly twice the stated Victorian values.
There are also inbuilt errors because models use 100-200 years as the figure for CO2 residence time when in fact it is between 5 and 10 years. Thus the imagined IPCC ‘buildup’ of CO2 in the atmosphere does not happen.

You appear to be making the same mistake as Hall in that you are assuming a linear CO2 “build-up” when in fact the rate at which natural sinks take up anthropogenic CO2 is dependent on how far out of equilibrium it is. The farther out of equilibrium, the faster the sinks work to restore it. The current rate of CO2 emission is roughly 2ppm/year yet the rate of accumulation in the atmosphere is 1ppm/year. This ratio of only 1/2 the anthropogenic CO2 emitted taking up residence in the atmosphere has remained constant since the beginning of the industrial era.
For the sake of argument let’s presume that the CO2 equilibrium point is that found in ice cores through the last 3 million years of ice age conditions. That would be about 280ppm in interglacial periods and 200ppm during the glacial periods. If we presume 280ppm is an equilibrium point for natural CO2 sources and sinks in the current interglacial, and that the farther out of equilibrium it becomes the harder the natural sinks work to restore it, then we should expect exactly what we observe: a constant rate of accumulation in the atmosphere. In other words, as anthropogenic CO2 emissions have grown on an exponential curve, the ability of natural sinks to absorb them has also grown exponentially.
So when it comes to “residence” time, it depends on how far out of equilibrium it is. If the above description of a natural equilibrium is correct, then if we were to somehow cease anthropogenic CO2 production today it would be removed by natural sinks and thus take about 150 years to return to 280ppm. But it’s not a linear decrease. In the first 50 years it would have already decreased from 390ppm to 315ppm. So you get 2/3 of the way back to 280ppm in the first third of the 150 years. It takes the next century to remove the remaining 35ppm above equilibrium.
So “residence” time isn’t a fixed amount, and you really can’t say things like 15 years or 150 years, because the residence time depends on how far out of equilibrium it is in any given year.
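The description above is an exponential relaxation; here is a sketch with the time constant chosen to reproduce the 390-to-315ppm-in-50-years figure (the comment itself gives no time constant, so tau is fitted, not measured):

```python
import math

def co2_after(t_years, c_now=390.0, c_eq=280.0, tau=43.7):
    """Exponential relaxation toward equilibrium: the excess above 280ppm
    decays, so most of the drawdown happens early. tau is chosen to match
    the comment's numbers, not a measured quantity."""
    return c_eq + (c_now - c_eq) * math.exp(-t_years / tau)

for t in (0, 50, 100, 150):
    print(t, round(co2_after(t), 1))
# 0 390.0, 50 315.0, 100 291.2, 150 283.6: two thirds of the way back
# in the first third of the 150 years, as described.
```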
In regard to some anomalously high CO2 measurements in the past, no one argues that local CO2 levels can’t vary by quite a margin close to the surface in calm air. After all, CO2 is heavier than both O2 and N2 and the sources are all near the surface. This is why we use as our gold standard atmospheric CO2 data obtained from balloon soundings, high atop frozen mountains, and in the permanently frozen Antarctic. These different methods of obtaining “well mixed” air samples have all been in perfect agreement. Moreover, another method of measuring CO2, entrainment in glacial ice, is also in perfect agreement. Essentially all the evidence is that CO2 is well mixed in the turbulent troposphere, with the exception of local situations close to the surface in exceptionally calm air with active sources and sinks (i.e. not barren desert or ice).

Laura Gonzales
June 8, 2011 7:26 am

A must read
http://climateguy.blogspot.com/
just look at the exchanges with RC people. Opinions on this paper would be welcomed: is it valid?

June 8, 2011 7:34 am

I am willing to accept the CO2 values as correct.
I think Fred Haynie may have a point.
It appears to me that so far nobody has ever brought any convincing scientific evidence that the net effect of more CO2 is warming rather than cooling.
I mean: did you ever see a forest grow where it is very cold?
By taking part in the life cycle, CO2 must cause cooling because plants and trees need warmth (and CO2) to grow.
But nobody can tell me EXACTLY how much cooling and how much warming it causes.
I am guessing the net effect is zero or something close to that…..
http://www.letterdash.com/HenryP/more-carbon-dioxide-is-ok-ok

Latimer Alder
June 8, 2011 7:35 am

1. Any chance of a commentary on this that I could use to explain to my non-mathematical, but intelligent g/f? Or even a UK government minister?
What did you do, why did you do it, and what did it show…but leaving out (or explaining) the acronyms and the deeper maths? More about the general idea of the method, rather than just the details themselves?
2. Any thoughts on the obvious question ‘if it’s this easy, why haven’t zillions of others found these results also?’

Keitho
Editor
June 8, 2011 7:44 am

And the intolerance of the Australian MSM doesn’t help either.
http://www.smh.com.au/opinion/society-and-culture/you-are-just-plain-wrong-about-climate-change-mr-jones-20110601-1ffhd.html
The comments give real insight into the success of official left/liberal propaganda.

tallbloke
June 8, 2011 7:55 am

“there is no discernable CO2 warming signal in the HadSST ”
Yeah, well, that would be because back radiation doesn’t heat the bulk of the ocean, as Stephen Wilde and I have been saying for a long time. Much to Willis’ annoyance. 🙂
http://tallbloke.wordpress.com/2011/03/03/tallbloke-back-radiation-oceans-and-energy-exchange/

ZT
June 8, 2011 7:58 am

Please check the definition of inflection point.

Dave Springer
June 8, 2011 8:02 am

Speaking of unique properties of water, another of those is that the solid phase is less dense than the liquid phase. In pure water the density begins to rise at about 3C, IIRC. However seawater is a different animal. The salt content lowers the freezing point from 0C to -2C and, very unlike pure water, the density of seawater keeps rising all the way up to its freezing point.
This raises the question of why the average temperature of the global ocean is about 4C and why nearly everywhere below 400 meters in depth the temperature is a constant 3C. This constant temperature deep water comprises 90% of global ocean volume.
The reason the average temperature of the ocean is so far below the average temperature of the air at the surface right now is that 4C is the average ocean surface temperature taken over the entire range of a glacial/interglacial cycle. A glacial/interglacial cycle these days is approximately 100,000 years. Although the mix rate between the upper ocean and lower ocean is slow, it isn’t non-existent. Conduction and convection over the course of 100,000 years is sufficient time for the deep ocean to equalize with the average temperature of the surface layer.
What we should REALLY be worried about is that huge bucket of nearly freezing cold water called “the global ocean”. We live and breathe in a temporarily warmer interglacial surface environment. There’s a huge mass of frigid water with over 1000 times the heat capacity of the atmosphere above it and 10 times the mass of the warmer ocean surface layer just waiting for the right conditions to mix into the surface water and once the ocean surface layer cools it quickly takes the puny atmospheric mass above it down in temperature too.

Keitho
Editor
June 8, 2011 8:04 am

Dave Springer says:
June 8, 2011 at 7:25 am
… “Land surfaces can absorb downwelling CO2 and the net effect is that the rate of emission from surface to space is decreased i.e. the ground doesn’t cool down as fast.” …
Do you really mean this or is it shorthand?

Patrick Davis
June 8, 2011 8:06 am

“Ric Locke says:
June 8, 2011 at 6:03 am”
Made my jiggly bits jiggle. Yeah, the garbage truck IS in the room…yet, like the Elephant, no-one sees it!

Dave Springer
June 8, 2011 8:08 am

Correction to my last. Pure water begins to fall in density at 3C not rise – I was thinking volume and wrote density. Seawater keeps rising in density (or falling in volume) all the way to its freezing point where it then behaves like pure water and expands (lowered density) when it becomes a solid.

June 8, 2011 8:25 am

Dave Springer says:
You need to adjust your curve labeled “CO2 Forcing” to reflect the fact that the IR absorption characteristics of CO2 is not linear across the range of atmospheric concentrations in the graph
Dave, I am sure we can all agree that CO2 has absorption in the 14-15 um causing some warming. (24 hours per day, earthshine)
The question is: how much cooling does it cause because of quite a few absorptions in the 0-5 um range, including newly found UV absorptions (12 hours per day, sunshine)?
In addition we have the cooling caused by CO2 by taking part in the life cycle. How much is that?
http://wattsupwiththat.com/2011/06/08/the-climate-swoosh/#comment-676008

Dave Springer
June 8, 2011 8:28 am

Keith Battye says:
June 8, 2011 at 8:04 am
Dave Springer says:
June 8, 2011 at 7:25 am
… “Land surfaces can absorb downwelling CO2 and the net effect is that the rate of emission from surface to space is decreased i.e. the ground doesn’t cool down as fast.” …
Do you really mean this or is it shorthand?
—————————————————————————-
I really mean that. It’s been an experimentally demonstrated fact for over 150 years and the basis of how electronic CO2 sensors have worked since their invention. Basically the sensors work by shining 15um LWIR through two samples of atmosphere. One sample is calibrated to a known CO2 concentration and hermetically sealed, while the other sample is ambient air. A matched pair of phototransistors measures the LWIR energy emerging from each sample, and the difference in LWIR energy can then be converted by formula to a difference in CO2 concentration.
Basically a modern electronic CO2 sensor is John Tyndall’s 1850’s experimental setup reduced from a space the size of a gymnasium to that of a thimble, and far more sensitive than Tyndall’s setup too. But the principle behind Tyndall’s experimental work with LWIR-absorptive gases is exactly the principle behind a modern electronic CO2 sensor. If CO2 did not absorb LWIR coming in one direction and re-emit it equally in all directions then modern electronic CO2 sensors would not work correctly. But in fact there are millions of these instruments that work as perfectly in practice as theory predicts they should. Anyone who argues the point about how the mechanism of greenhouse gases works is simply and demonstrably wrong – proven wrong millions of times every moment of every day by millions of electronic CO2 sensors working according to the gas physics being argued against.
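A toy Beer-Lambert inversion along the lines of the two-cell sensor described above. The transmittance numbers and the equal-path assumption are illustrative; real sensors also correct for band saturation, temperature, and pressure:

```python
import math

def ndir_ppm(t_sample, t_reference, ppm_reference):
    """Two-cell NDIR: t_* are 15um-band transmittances of the sample and
    sealed reference cells. With equal path lengths, absorbance scales
    with concentration, so the absorbance ratio gives the ppm ratio."""
    return ppm_reference * math.log(t_sample) / math.log(t_reference)

# A sample transmitting slightly less IR than a 400ppm reference cell:
print(round(ndir_ppm(0.780, 0.785, 400.0)))  # ~411 ppm
```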

Keitho
Editor
Reply to Dave Springer
June 8, 2011 8:40 am

Sorry, I get the instrumentation issue and why it is accepted, but you talk of “downwelling CO2” and I have the sensation you wish to talk of downwelling CO2 LWIR.
I am probably being rather thick.
Warm Regards

June 8, 2011 8:30 am

I’m a tiny bit confused.
The rise in CO2 concentrations is almost linear, and slightly declining; shouldn’t the forcing be increasingly declining?

Dave Springer
June 8, 2011 9:00 am

tallbloke says:
June 8, 2011 at 7:55 am

“there is no discernable CO2 warming signal in the HadSST ”
Yeah, well, that would be because back radiation doesn’t heat the bulk of the ocean, as Stephen Wilde and I have been saying for a long time. Much to Willis’ annoyance. 🙂
http://tallbloke.wordpress.com/2011/03/03/tallbloke-back-radiation-oceans-and-energy-exchange/

Exactly. In several peer-reviewed publications on the ocean mixed layer energy budget it was found that less than 20% of heat loss in the lower latitudes takes place through radiative emission. Less than 10% leaves through conduction. The lion’s share, over 70%, leaves as latent heat of vaporization. In fact it was found that a large fraction of summer heating of the mixed layer by shortwave radiation, to a depth of 100+ meters, was stored by the mixed layer until winter, when the air is dryer and the evaporation rate increases.
The greenhouse effect is dominated by the liquid water in the global ocean. Sunlight warms it radiatively to a depth of about 100 meters, but for that heat to escape radiatively the warmed water must rise to within a few micrometers of the surface. Winds, waves, convection, and conduction are the only mechanisms which can bring the warmed water back to the surface. The mixed layer of the global ocean works like greenhouse gases on steroids. It has the same properties, transparency to sunlight and opacity to long wave infrared, as greenhouse gases. Both liquid water and water vapor are fluids. It would be far more apt to say the earth is warmed by a GHF (greenhouse fluid) effect than a GHG (greenhouse gas) effect because, so long as the surface remains largely covered by a liquid ocean, it is liquid water that does most of the greenhouse warming. Land surface warming by greenhouse gases is a significant factor but not a dominant one. As we both agree, there is no significant greenhouse gas effect over the ocean but, and I’m not sure we agree on this, there is a large greenhouse effect from liquid water.

John F. Hultquist
June 8, 2011 9:04 am

Dave Springer says:
June 8, 2011 at 7:25 am
“. . . it has no greenhouse effect over the ocean.”

I have always preferred the term “atmospheric effect” rather than GH, but the phrase above in your text destroys my interpretation. If instead of “over the ocean” you were to write ‘with the surface of the ocean’ then it makes sense to me. The shape of the gas molecules (linear for nitrogen and oxygen; non-linear for the absorbing ones: water vapor, carbon dioxide, etc.) controls the absorption and emitting of energy, and the atmosphere is thought to delay [trap; produce the blanket effect (?)] the return of this energy out of the earth-system – thereby elevating the temperature. (Commonly reported temperatures are air, not surface, measurements.) Land surfaces become involved in the process because of their varying absorption characteristics. This should not disqualify the non-linear gases of the atmosphere above the oceans from the physical (physics) processes. Or are you really saying that what is going on in the atmosphere is of no concern?

June 8, 2011 9:10 am

Just curious:
what does J stand for
in J Storrs Hall?

Dave Springer
June 8, 2011 9:14 am


Would you agree that if anthropogenic activity did something to change the average turbidity (or rather the lack of turbidity) of the mixed ocean layer, this would then alter the mixed layer energy budget? It seems like greater turbidity would cause sunlight to be absorbed in a shallower layer, and then if conduction, convection, wind and wave mixing remained equal, those mechanisms would be more effective in bringing the warmed water to the surface where it can cool.
By the way, google “continentality”, which is a phenomenon first noted and so named hundreds of years ago. It refers to the fact that the difference in ocean temperature between summer and winter is far less than the difference in interior continental regions. This is observational proof of what was more recently and more precisely measured in the mixed layer heat budget studies to which I referred. The ocean stores summer heat and releases it in the winter, greatly moderating the temperature differential. This is only made possible by the fact that water is a greenhouse fluid, able to be radiatively heated by sunlight to significant depth and impaired in its ability to release that energy radiatively due to its opacity to long wave infrared.

Lady Life Grows
June 8, 2011 9:23 am

Why sure, the CO2 forcing could be negative. One reason is that CO2 is taken up by plants, especially trees, and considerable data exist that vegetated areas are cooler, especially forests.
And warmer is better. We are more comfortable in tropical climes than arctic ones. Crops grow best in the summer; the death rate is highest in winter, not summer.

June 8, 2011 9:41 am

The atmosphere on Mars is 95% CO2 and the average temp is -100F

gopher
June 8, 2011 9:41 am

Really? No one is going to question the validity or explain why there is a quadratic term included in the fitting procedure?

J Storrs Hall
June 8, 2011 9:46 am

Dave Springer: Look again — I’m using the log of the concentration for the forcing, not the concentration itself. I did somewhat confusingly plot the log of the Mauna Loa measurements to show the match, and didn’t mention that was a log too. Sorry for any confusion.
Duncan & David Ozenne: A quadratic represents the response of a system under a constant acceleration — cf. the trajectory of a thrown ball. They are found fairly often in the traces of dynamical systems when one force dominates.
That said, it is rarely valid to extrapolate from one since forces change — as they clearly did in this case.
I actually tried the analysis with everything from a linear to a quartic. For everything quadratic and up — any form capable of modelling the 1890 knee — I got the same results. For linear you get a positive coefficient, simply to fit the fact that the curve overall is concave upwards — but it’s still matching to the 1890 knee. Fit a linear+forcing to the century centered on the knee in the forcing curve, 1960, and it goes negative again.
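A sketch of that sweep, reusing the names from the fit sketch in the post above (the parameter packing via np.polyval is illustrative, not the original code):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sweep the secular term from linear (degree 1) to quartic (degree 4).
for degree in range(1, 5):
    def f(tt, amp, phase, c, *poly):
        cosine = amp * np.cos(2 * np.pi * (tt - phase) / 61.0)
        return cosine + np.polyval(poly, tt - 1850) + c * forcing_at(tt)
    p0 = [0.1, 1900.0, 1.0] + [0.0] * (degree + 1)
    params, _ = curve_fit(f, t, sst, p0=p0)
    print(degree, "forcing coefficient:", params[2])
```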
The bottom line is that the SST curve doesn’t show any signal of the forcing, either way. Nothing else should be read into it.
Ric Locke: You’re absolutely right, especially if you’re thinking of extrapolation instead of factor analysis. If you’re extrapolating, go for the PDEs (actually, ODEs should work fine here) or a new technique, symbolic regression. Google Eureqa. Maybe when I have a free week to play around 🙂
ZT: Haha! You are right — well, almost right. I didn’t say inflection point. Colloquially, “inflection” is commonly used to mean simply a bend, even though we mathematical sticklers know that “inflection point” refers to the boundary between opposite concavities. Happy Heteroskedasticity to you!

gopher
June 8, 2011 9:56 am

@J Storrs Hall says:
June 8, 2011 at 9:46 am
What is the cause of the quadratic term?

OK S.
June 8, 2011 10:00 am

vukcevic says @ 7:19 am
Any reason I can’t paste in a quote?
(Suddenly a new WUWT version appeared with ‘W Log In, t Log In & f Log In’, with my name and details already there.)

I don’t know but I’m able to paste. Maybe it’s because I’m logged out of my WordPress account.
As a note, since WordPress updated their “Leave A Reply” input form, my 3rd party cookie alert is now showing that WordPress is tracking your usage.
OK S.

rbateman
June 8, 2011 10:31 am

There isn’t enough CO2 in the atmosphere to get beyond insignificant forcing.
There is 3 to 30 times the RH (H2O vapor) in the atmosphere, and that’s where the focus should be.
Why is so much time wasted with a puny 395 ppm trace gas?

John B
June 8, 2011 11:15 am

RBateman said: “There isn’t enough CO2 in the atmosphere to get beyond insignificant forcing.
There is 3 to 30 times the RH (H2O vapor) in the atmosphere, and that’s where the focus should be.
Why is so much time wasted with a puny 395 ppm trace gas?”

The answer is that while most of the GHE comes from water vapour, the extra bit from human-emitted CO2 can still be significant. The usual analogy is that the atmosphere is like a bathtub that is being filled and emptied at the same rate, so it remains at a given level (over the timescales we are talking about). The added effect of CO2 emissions is like adding a tiny but persistent extra inflow to the bathtub: it may be tiny, but it can still be significant; the bathtub will get fuller.
Yes, that is only an analogy. If you want more information, google “greenhouse effect”. There is a lot of science supporting CO2 as a significant greenhouse gas.
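A toy stock-and-flow version of the bathtub, with purely illustrative numbers:

```python
# Natural inflow and outflow balance; a tiny persistent extra inflow
# still raises the level. Units and rates are arbitrary.
level = 100.0
natural_in, natural_out = 10.0, 10.0   # balanced natural fluxes per step
extra_in = 0.2                         # tiny persistent extra inflow

for step in range(50):
    level += natural_in + extra_in - natural_out

print(level)  # 110.0: measurably fuller after 50 steps
```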
Hope that helps.
John

SteveSadlov
June 8, 2011 12:05 pm

Dave Springer says:
June 8, 2011 at 8:02 am
This has long been one of my concerns. We live in a world that wants to freeze.

June 8, 2011 12:26 pm

John B says
“Yes, that is only an analogy. if you want more information, google “greenhouse effect”. There is a lot of science supporting CO2 as a significant greenhouse gas”.
John, I am afraid that most people here, from experience, do not believe Google or wiki when it comes to classifying CO2 as a GHG. I suggest you start here:
http://www.letterdash.com/HenryP/more-carbon-dioxide-is-ok-ok
When you are finished studying all of that, come back here, and learn, together with all of us….

June 8, 2011 12:41 pm

J Storrs, I’ve put a comment in your previous thread showing why I think your original analysis is an irrelevant criticism of mine.
Your conclusion as stated here, that “the post-50′s rise was essentially a land-based phenomenon,” depends on your unfit residual and on the validity of your model. Your model, in turn, is strongly influenced by extremely poorly-bounded mid-19th century SSTs.

Ben of Houston
June 8, 2011 12:57 pm

Pardon me if I’m missing something, but what exactly did you show?
A best fit of three functions:
1 = A 61-year oscillation that was derived from sea surface temperatures
2 = A logarithmic function to model CO2 forcing
3 = A quadratic function to model everything else, chosen because it models the bend at 1890.
In short, two functions created from the surface temperature data model better with a small negative logarithmic addition than with a positive one. Short version: the quadratic fit, with two degrees of freedom, fits better than a line with only one degree of freedom.
Now, while you can argue that there isn’t a discernable signal from CO2, the fact that you can get a better fit with 2 degrees of freedom rather than 1 doesn’t really give you much of a proof. That’s before getting to the fact that it ignores damped signals and offsets. In fact, this is a blah result with extremely non-indicative results that I would expect from the alarmist camp more than WUWT.
Interesting idea. If this was an undergrad doing this, I would give it a good grade. However, I would expect more from a grad student, much less someone publishing on this site.

Hoser
June 8, 2011 1:22 pm

John F. Hultquist says:
June 8, 2011 at 9:04 am
Unlike water, CO2 is linear. Maybe you mean a triatomic molecule can bend, so energy can be taken up in bending as well as rotation and stretching. Vibrational modes exist in molecules containing 3 or more atoms.

June 8, 2011 2:03 pm

Why does everybody persist in using the hockeystick CO2 graph (the first one in this posting) which is cobbled together using two unrelated sets of data and promulgated from the false assumption that CO2 had been consistently low until recently? Comparing it with anything is a joke as it is false.
Why do sentient skeptics not realize that it would be totally unusual for anything like CO2 to be so constant and low over time? Why is the work of Ernst Beck and the many data sets produced by valid and reputable scientists over the last 200 years ignored and the spurious, cherry-picked low CO2 average for the 1800s by Guy Callendar given any credence at all?
Let’s try something more realistic and point out that CO2 has been much higher than now during three periods of the last 200 years, most recently in the 1940s and then temperatures crashed while CO2 was high. This most seriously repudiates the CO2-causing-warming link. High CO2 not only cannot cause warming, but it also cannot maintain it.

tallbloke
June 8, 2011 2:39 pm

Dave Springer says:
June 8, 2011 at 9:14 am

Would you agree that if anthropogenic activity did something to change the average turbidity (or rather the lack of turbidity) of the mixed ocean layer, this would then alter the mixed layer energy budget? It seems like greater turbidity would cause sunlight to be absorbed in a shallower layer, and then if conduction, convection, wind and wave mixing remained equal, those mechanisms would be more effective in bringing the warmed water to the surface where it can cool.

Hi Dave. I think the ocean has a lot of counterbalancing things going on in it. Outside the tropics (where the surface waters are very clear, and where most of the heat from the sun gets absorbed), the near-surface water can get naturally turbid through plankton growth. This is limited by iron availability, but when a volcano goes off, the sea downwind of the plume is soon teeming with phytoplankton. These little critters fix CO2 in their shells and take it to the bottom when they die, or it falls in whale poo etc. The CO2 cycle in the ocean can take a thousand years or so. I suspect part of the C20th rise in CO2 is down to the medieval warm period.
The ocean can only cool at the rate the atmospheric blanket lets it. Good job too, because there’s as much heat energy in the top two meters as there is in the whole atmosphere. If it could all escape quickly, we’d be prawned alive. So heat which can’t escape actually gets mixed downwards, which is why there is a linear dropoff in temp from the bottom of the mixed surface waters all the way down to the thermocline. This solar energy can stay in the ocean for millions of years, literally.
As Anthony put it a couple of years ago, the ocean is one big assed heat flux capacitor.
The interglacials are just like big scale el ninos, on a 100,000 year cycle rather than a decadal cycle. The ocean soaks up heat, and then when insolation falls, whammo, El nino, or in the case of the Milankovitch cycles, interglacial.
http://tallbloke.files.wordpress.com/2011/06/interglacial-elnino.jpg

June 8, 2011 2:55 pm

OK S. says:
June 8, 2011 at 10:00 am

Thanks. I am advised to use Ctrl V.
It works fine.

June 8, 2011 4:07 pm

Ben, presuming you’re referring to my post here, recall that my analysis there depended on a prior analysis showing that a new 60-year oscillation appeared in the global air temperature anomaly record when SSTs were added to the GISS land-only global air temperature anomaly data set. This observation sparked my subsequent analysis that in turn led to the sensitivity estimate.
The presence of this net new oscillation in the anomalies, following SST introduction, provides a physical basis for inferring that a ~60-year cyclic ocean thermal signal exists in the 130-year global average surface air temperature trend. That, in turn, makes the oscillations that I did find more than just a numerical convenience.
The oscillations that turned up in the two anomaly data sets had about the same period as the net difference oscillation in the GISS data, but the phases were different. However, there’s no reason to think that parent net global ocean thermal phases will coincide with the phase of a difference oscillation.
As I pointed out, J Storrs’ analysis was strongly determined by very poorly constrained mid-19th century SSTs. His 259-year oscillation has a period twice the length of the data set, which makes it also poorly constrained and hardly different from a numerical convenience. He also neglected to show us which part of the 259-year cosine phase actually played a part in the fit. Maybe it just mimics a linear rise during the 20th century.
In any case, when the (physically justifiable) oscillations were subtracted away from either of the global temperature anomalies, all that was left was a linear rise in anomaly temperature; virtually the same linear rise in both data sets, as it turned out. The rest of the analysis followed directly from that, and in a very straightforward manner.
So, with respect, you did miss something, and the rest of your post starting with this, “In fact, this is a blah result …” was intemperate.

June 8, 2011 4:41 pm

When volcanos go off, I figure the simple fact that 2/3 of the surface is water means less SW hitting the oceans.
While temperature increases the capacity of air for water vapor (and therefore increases the average), IIRC light incident on water is the primary driver of actual water vapor increase.
Cloud formation also dries the air (and releases latent heat above the surface to be radiated mostly away).
So, rather than decreased air temperature, I think the primary driver of water vapor decline during volcanic activity is less SW light in the lower troposphere.

June 8, 2011 4:47 pm

(I’m suggesting that studies of Pinatubo etc. overstate the relationship of temperature and humidity. Again, they likely get a large part of the correlation backward. The drop in humidity is probably more due to the decrease in the mechanism that heats the surface, rather than the temperature drop itself. The research papers’ adjustments are probably much smaller than the reality.)

June 8, 2011 4:56 pm

(adjustments for volcano, not temperature, drying)

June 8, 2011 5:08 pm

(Very important considering that normally most aerosols for cloud nucleation are over or near land. Big volcanoes scatter them over oceans where cloud nucleation particles are scarce and cloudless albedo is very, very low.)

dwb
June 8, 2011 5:09 pm

Maybe someone can ‘splain this to me; I am obviously an idiot, it must be completely obvious:
CO2, based on Mauna Loa (I have not checked other sites but I am assuming this is roughly representative), is increasing at about a .5%/yr rate, and appears to be highly variable on a year-over-year basis. Methane is not increasing, based on NOAA statistics.
However, global oil (and other fossil fuel) consumption is rising at a 1.5% rate. So all else equal I would expect either: man-made CO2 is a large fraction of global atmospheric CO2, and so it is going up at the same % rate as fossil fuel consumption; or it’s a small but growing fraction, in which case not only is CO2 going up, the rate at which it’s going up asymptotically converges to 1.5%, the rate of growth of fossil fuel consumption. So there should be an *increasing trend in CO2 growth rate* – which I fail to see.
So given the variability of CO2 and methane, and the fact that growth does not trend up or track population growth/energy consumption, how can I conclude it’s from anthropogenic activities?
This has me puzzled. Generally, I expect man-made CO2 growth to track the business cycle closely. The reason is that vehicle miles, manufacturing, etc., are all energy intensive and business cycle sensitive (for example, during a recession, steel and aluminum manufacturers shutter facilities, consuming less coal and oil for electricity and transport).
But, um, I fail to see that in the data, and the correlations look poor. Which makes me wonder how much of the atmospheric CO2 0.5%/yr growth is really man-made.
anyone?

gopher
June 8, 2011 5:55 pm

So let’s see if I understand….the quadratic term can magically distinguish between CO2 forcing and all other forcings?
That is what you have claimed. That the quadratic term has accounted for ONLY non-CO2 forcing?

John F. Hultquist
June 8, 2011 7:02 pm

Hoser says:
June 8, 2011 at 1:22 pm
John F. Hultquist says:
June 8, 2011 at 9:04 am
Right, and I know better. A favorite link:
http://www.wag.caltech.edu/home/jang/genchem/infrared.htm

Jim Reedy
June 8, 2011 7:55 pm

Of course CO2 causes global cooling.. only have to look at Mars.
95% CO2 and very cold.
But as we know it also causes Global warming…
Look at Venus.. 95% CO2 and very hot…
It’s the gas for all seasons.

June 8, 2011 8:11 pm

John F. Hultquist says:
June 8, 2011 at 7:02 pm
Hoser says:
June 8, 2011 at 1:22 pm
John F. Hultquist says:
June 8, 2011 at 9:04 am
Right, and I know better. A favorite link:
http://www.wag.caltech.edu/home/jang/genchem/infrared.htm

No, you just think you do.

Werner Brozek
June 8, 2011 9:31 pm

“Does this mean that CO2 is actually producing a cooling effect? Absolutely not.”
If the increase in CO2 were exponential and the effect on temperature logarithmic and of the right magnitude, then the resulting temperature increase would be expected to be linear. But if the increase in CO2 is linear and the effect on temperature logarithmic, then the resulting temperature increase should decelerate.

George E. Smith
June 8, 2011 10:11 pm

Well, your four different colored lines of the “components of the fit” graphs all look phony to me. That faint jagged light bluish stuff (or izzat green) actually looks more like real observed data to me. The rest is pure fiction and tells us nothing.
Now the jaggy blue/green graph looks like it has a whole lot of data points. It is a fairly elementary mathematical exercise to artificially construct a functional fit to all of those data points using a function that has fewer parameters than the number of data points to be fitted. And of course that function is not unique; many such functions can be created, from sinusoids (Fourier), but one could also use Bessel functions, or Legendre polynomials, or Tchebychev polynomials, or any other set of orthonormal functions.
I don’t see any value in that; it gives no insight into a causal structure. Ultimately the jaggy data is the best representation of what actually happened.

George E. Smith
June 8, 2011 10:30 pm

“”””” Ric Locke says:
June 8, 2011 at 6:03 am
In the field that used to be my profession, data points (surveyed ground locations, including elevation) are difficult and expensive to collect. It is therefore useful, from the standpoint of cost-effectiveness, to collect a few points and interpolate the intermediate values. “””””
Well, it is usually called sampled data theory, and there generally is no reason to believe that interpolation gives good values for data in between actual observed samples. That is especially true when the sampling regimen comes nowhere near complying with the Nyquist criterion for proper sampling of band-limited signals. On the other hand, correct sampling in accordance with the Nyquist criterion allows (in principle) exact reconstruction of the complete band-limited function, in which case any intermediate values can be calculated.
Why sampled data theory is not taught in freshman science courses, is completely beyond my comprehension.
So much garbage is generated under the name of science by “researchers” with no understanding of data sampling.
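A minimal illustration of the failure mode (frequencies chosen for the example): a 9 Hz sine sampled at 10 Hz, well below the 18 Hz Nyquist requirement, is indistinguishable at the sample points from a 1 Hz sine.

```python
import numpy as np

t = np.arange(0, 1, 0.1)  # 10 samples per second
# sin(2*pi*9*t) and sin(-2*pi*1*t) agree at every sample point: aliasing.
print(np.allclose(np.sin(2 * np.pi * 9 * t), np.sin(-2 * np.pi * 1 * t)))  # True
```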

J. Simpson
June 8, 2011 11:04 pm

“For components of the fit function, I used a cosine to capture the cyclicity we already know is in the record, a quadratic, and the forcing curve. ”
The reason for your surprising result is that you don’t understand your “forcing” basics.
A forcing is a power term and will affect the rate of change of temperature, not the temperature.
If you integrate your linear approximation to the CO2 forcing it will give you a quadratic. It is the quadratic that you need to be attributing to CO2, NOT your linear term.
Despite the enthusiastic applause from the audience, your article is fundamentally wrong. You should probably try to understand the basic physics and post a note to your article, which will obviously mislead a lot of less trained people reading it.

Edim
June 8, 2011 11:21 pm

Atmospheric CO2 concentration is much more temporally variable than we think. It was probably higher than today at the beginning of the Holocene, 10,000 years ago. Since the general temperature trend for the last ~10,000 years is down, the CO2 trend is also down. In the 1940s it was higher than ~280 ppm, probably closer to today’s values.
The ice core record is not reliable (accuracy). It cannot reproduce short peaks (it is a low-pass filter). That ~800-year lag is likely some kind of artifact and does not necessarily have any physical meaning.

June 9, 2011 12:53 am

What a joker.

Philip Shehan
June 9, 2011 3:16 am

Could J Storrs Hall or anyone else tell me where the data for the red line representing CO2 forcing came from? Or is it a fudge factor designed to make the composite curve fit?

J Storrs Hall
June 9, 2011 4:28 am

Pat Frank: I did show what part of the 259-year cosine took part in the fit — it’s the green curve in the third graph above: the swoosh shape. So yes, it did essentially model a straight line over the 20th century.
dwb: You can’t compare exponential growth rates for amounts with different bases. If my income is increasing at 5% per year, and I’m putting it all in the bank, that doesn’t mean that the bank’s holdings must also be increasing at 5%.
J. Simpson: It’s a question of timescale: short term, forcing induces a changing temperature; but in the long term, the system will equilibrate and take up a new stable temperature that depends on the forcing. It’s much like the accelerator in your car: press it, and you get a change in speed; but the faster you go, the air resistance increases until it matches the engine power and you level off at a new, constant, speed. It’s the standard finite impulse response curve, if you’ve ever studied control theory.
For radiative forcing and temperature, consider the difference between night and day. Temps can change tens of degrees in a few hours. If equilibration were instantaneous, the hottest part of the day would be exactly noon. Instead, it’s 2 or 3 hours later. So we’re looking at equilibration times of hours for temp changes bigger than the ones anyone is predicting from enhanced greenhouse forcing. Even if it were thousands of hours, it would still be fast enough to ignore on the scale of years.
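A toy first-order response makes the accelerator analogy concrete (the time constant and step size are arbitrary illustrative numbers, not estimates of the real system):

```python
import numpy as np

tau = 5.0                     # equilibration time constant, in hours
t = np.linspace(0, 30, 301)   # hours after a step change in forcing
temp = 1.0 * (1 - np.exp(-t / tau))   # relax toward the new equilibrium
# ~95% of the way to the new equilibrium by t = 3*tau: fast on the
# scale of years, which is the point made above.
print(temp[150])  # t = 15 hours -> ~0.95
```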

J Storrs Hall
June 9, 2011 6:18 am

Pat: One way to finesse the issue is simply to fit the short cosine and a quadratic to post-1910 data. Turns out that the baseline you get is very close to flat, but still slightly concave downward: I get a curve with a 0.63 degree/century slope in 1950, 0.60 in 2000. That’s SSTs with the notch taken out.
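The two quoted slopes pin down the baseline’s curvature, since a quadratic’s slope changes linearly in time; a quick check with the numbers given:

```python
slope_1950, slope_2000 = 0.63, 0.60         # degrees per century
curvature = (slope_2000 - slope_1950) / 50  # change in slope per year
print(round(curvature, 6))  # -0.0006: slightly concave downward, as stated
```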

John B
June 9, 2011 6:24 am


Your link (to your own site) says:
“A short summary of the basic results of my study:
CO2 is insignificant as a greenhouse gas.
CO2 is not a poison or a pollutant.
CO2 is one of the two main building blocks of ALL plant life on Earth.
CO2 concentration has been up to ten times higher in the past
CO2 is good.”
Well, the last four points are irrelevant to whether CO2 is a significant GHG. As to the first point, you are taking on all of mainstream science. Outside of a forum like this, good luck with that!
As regards trusting Google or wiki: Google is just a search engine; it’s the sources it leads you to that matter. Also with Wikipedia, don’t trust it; follow the links from it and see where they lead, then decide.

gopher
June 9, 2011 6:39 am

@J Storrs Hall says:
June 9, 2011 at 4:28 am
You still have no explanation for the “acceleration” that the quadratic term is removing! And you have no idea what it is removing!
What is the forcing that causes the “knee” or bend around 1900?

J. Simpson
June 9, 2011 6:53 am

“J. Simpson: It’s a question of timescale: short term, forcing induces a changing temperature; but in the long term, the system will equilibrate and take up a new stable temperature that depends on the forcing. It’s much like the accelerator in your car: press it, and you get a change in speed; but the faster you go, the air resistance increases until it matches the engine power and you level off at a new, constant, speed. It’s the standard finite impulse response curve, if you’ve ever studied control theory.”
You need to consider several things here. Yes, there will be a controlling negative feedback (sorry IPCC): increased Planck emissions (T^4, not linear) and some weather-based feedback. However, the initial response, like your car’s, is acceleration proportional to the forcing. You seem to dismiss, or rather ignore, the quadratic despite having fitted it. What do you think it represents?
The impulse response will start off as that quadratic and then fade over time; what do you consider the time constant of the global mass of water to be in this context? You are plotting yearly data and the hypothetical equilibrium could be centuries in coming.
You do have a point about feedback though, and this will not be taken into account by integrating the dT/dt caused by the forcing. In fact I suspect the negative coefficient that surprised you may well be that feedback. This answers a problem I was having with some similar work, where the quadratic was too strong.
If that does provide a means to get a handle on a value for the feedback it will be worth writing a paper about.
Thanks for your comments.

J. Simpson
June 9, 2011 6:55 am

BTW what’s this “optimiser” you refer to?

June 9, 2011 7:13 am

John B says:
“Outside of a forum like this, good luck with that!”
Truth is not important, but luck is?
John, why don’t you measure the global warming at the place where you live, so that you can see that it is not an increase in GHGs that has caused it? Then we can add the results to my pool table!
http://www.letterdash.com/HenryP/henrys-pool-table-on-global-warming
If the rate of increase in minimum temps is greater than the rise in maxima and mean temps, then the warming is, or might be, caused by an increase in GHGs. You can do a straight linear regression (time against temperature) on all the data you can get for minima, maxima and means from the station where you live.
Very simple really. Statistics 104. Just some work to copy and paste in Excel and do the trendlines, as sketched below.
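(If you prefer a script to Excel, a minimal equivalent follows; “mystation.csv” and its column layout are made up for the example.)

```python
# Straight linear regressions of annual minima, maxima and means vs. time.
# "mystation.csv" with header year,tmin,tmax,tmean is a hypothetical file.
import numpy as np

year, tmin, tmax, tmean = np.loadtxt("mystation.csv", delimiter=",",
                                     skiprows=1, unpack=True)

for name, series in (("minima", tmin), ("maxima", tmax), ("means", tmean)):
    slope, _ = np.polyfit(year, series, 1)   # slope in degrees per year
    print(f"{name}: {slope * 100:+.2f} degrees per century")
```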
We will talk again when you have some results.

Dave Springer
June 9, 2011 7:31 am

Edim says:
June 8, 2011 at 11:21 pm

Atmospheric CO2 concentration is much more temporally variable than we think. It was probably higher than today at the beginning of the Holocene, 10,000 years ago. Since the general temperature trend for the last ~10,000 years is down, the CO2 trend is also down. In the 1940s it was higher than ~280 ppm, probably closer to today’s values.

Yes, it was probably closer to 200 ppm, which is the average during glacial periods. It can certainly vary close to the surface. After all, the sources of CO2 are very near the surface, and CO2 is heavier than air; in calm air it can build up. But there’s no evidence at all that it varies much over short periods of time away from active sources and sinks. That’s why our gold standards are places like Mauna Loa, the Antarctic, balloon soundings, and samples taken by aircraft.

The ice core record is not reliable (accuracy). It cannot reproduce short peaks (it acts as a low-pass filter). That ~800-year lag is likely some kind of artifact and does not necessarily have any physical meaning.

Short peaks are recorded where there is sufficient snow accumulation. Entrainment happens in as little as 8 years (Law Dome). The entrained air samples match the CO2 concentration measured at above-ground sites. Places where the snowfall is heavy go back the least amount of time. Where there is little snow accumulation, such as the Antarctic interior, it takes hundreds of years for entrainment, and the entrained air samples then reflect an average over a longer period of time. These cores go back much farther in time. The point is that ice core data have certain constraints, but those constraints aren’t so onerous as to make the data “unreliable” or incapable of recording short-lived spikes.

Dave Springer
June 9, 2011 7:47 am

“CO2 has been much higher than now during three periods of the last 200 years, most recently in the 1940s”
Probably because the samples were taken downwind from a wartime industrial source that wasn’t there before WWII kicked into high gear. There is zero evidence that CO2 varies much or rapidly at any significant remove from active sources and sinks. It’s well mixed into the global troposphere by turbulence. That’s why Mauna Loa and the interior Antarctic closely agree, with a lag of a couple of years for the Antarctic, because it takes longer for CO2 to migrate there from the sources and the interior is somewhat shielded from mixing by the polar vortex. There is also some annual variation between the northern and southern hemispheres as the change of seasons changes the activity level of sources and sinks.

Edim
June 9, 2011 7:52 am

Dave,
I am very sceptical about that. This article is a good summary of my view:
http://hubpages.com/hub/ICE-Core-CO2-Records-Ancient-Atmospheres-Or-Geophysical-Artifacts
The good thing is, when the cooling gets going in the next few decades, we will have a TEST. I predict a CO2 decrease.

J Storrs Hall
June 9, 2011 7:58 am

Philip Shehan: The red line in figure 2 is the blue line in figure 1, with the new coefficient the optimizer gives it. Since it clearly cannot be representing a warming response in that configuration, you could very reasonably say that the optimizer did use it as a fudge factor.
J. Simpson:
http://wattsupwiththat.com/2011/06/06/earth-fire-air-and-water/#comment-676602

Smokey
June 9, 2011 8:02 am

Dave Springer,
I generally agree with your comments. But in this case, the fact is that Beck et al. recorded CO2 measurements taken from such disparate sources as the unpopulated Ayrshire coast of Scotland, from mountain peaks, and from mid-ocean crossings on the windward side of ships transiting the Pacific and Atlantic Oceans, the South Seas, the Sea of Okhotsk, the Arctic Ocean, etc. This was done to avoid false readings due to industrial activity.
Many Nobel Laureates participated in the CO2 data collection effort, with internationally esteemed scientists such as J.S. Haldane among them. They were meticulous and took copious notes, and made detailed drawings of their test apparatus. Probably most importantly, they were not government subsidized. They cared a great deal about their reputations, and they knew their work would be scrutinized by their fellow scientists.
Their CO2 readings were accurate to within ≈3%. Thus, there is little doubt that CO2 levels have varied much more than what is currently assumed.

Dave Springer
June 9, 2011 8:08 am

Bob Longworth says:
June 8, 2011 at 6:09 am

I am neither scientist nor mathematician but I have read a lot of the scientific papers on the whole climate change subject. Seems that CO2 concentrations follow temperature fluctuations by 800 years +/- according to the ice cores etc. What totally baffles me is that everyone seems to be trying to correlate today’s numbers when it strikes me that today’s increase in CO2 would more likely be caused by temperature increases during the mediaeval warm period, no? Seems to me that the 800 +/- years would line up fairly close.

The rate of increase in atmospheric CO2 closely matches the growth rate in anthropogenic emissions. Half of what we emit remains in the atmosphere, and this ratio has held true from the beginning of the industrial revolution through today. In fact it’s sometimes a point of concern for climate boffins, because they don’t understand why, as anthropogenic CO2 emissions grew exponentially during the industrial era, the ability of the natural CO2 sinks to take them up also grew exponentially. They don’t know if the sinks will continue to grow in capacity commensurate with growing emission rates, or whether the sinks may become saturated and start taking up less than half.
I hypothesize that there is a natural equilibrium point of 280 ppm, and that the farther anthropogenic emissions push the atmosphere out of equilibrium, the faster the natural sinks will take the excess up. I further hypothesize that if anthropogenic emissions ceased, the natural sinks would remove the excess at the same pace it was added. The removal would be fastest at the outset and diminish in magnitude as the atmosphere became less and less out of equilibrium.
This is how simple equilibrium situations work. Take a hot piece of steel and put it in contact with a cold piece of steel of the same mass. The temperature difference will fall rapidly at first and more and more slowly as the equilibrium temperature is approached. It appears CO2 concentration in the atmosphere works the same way: the further out of equilibrium it is driven by anthropogenic sources, the faster the sinks take it up.
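(A toy version of that relaxation picture, with the e-folding time an assumed number chosen purely for illustration:)

```python
# Relaxation toward an assumed 280 ppm equilibrium after emissions stop:
# the excess decays exponentially, fastest at the outset. TAU is made up.
import math

C0 = 390.0    # ppm, assumed starting concentration
CEQ = 280.0   # ppm, hypothesized natural equilibrium
TAU = 50.0    # years, assumed e-folding time of the sinks

def concentration(t_years):
    """Excess over equilibrium decays exponentially once emissions cease."""
    return CEQ + (C0 - CEQ) * math.exp(-t_years / TAU)

for t in (0, 10, 50, 100, 200):
    print(f"t = {t:3d} yr: {concentration(t):6.1f} ppm")
```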

Dave Springer
June 9, 2011 8:22 am

Smokey says:
June 9, 2011 at 8:02 am
“But in this case, the fact is that Beck et al. recorded CO2 measurements taken from such disparate sources as the unpopulated Ayrshire coast of Scotland, from mountain peaks, and from mid-ocean crossings on the windward side of ships transiting the Pacific and Atlantic Oceans, the South Seas, the Sea of Okhotsk, the Arctic Ocean, etc. This was done to avoid false readings due to industrial activity.”
The funny thing is that Beck’s historical survey of CO2 measurements shows wild fluctuations from 1810 to 1950, then mysteriously becomes a perfectly smooth curve from 1950 to the present. One might reasonably ask what happened in 1950 that changed everything. The answer is that 1950 was when electronic CO2 sensors replaced chemical methods of determining CO2 concentration. You can extol the credentials of those performing the chemical measurements all day long, but at the end of the day you are still faced with the fact that when the chemical measurement era ended and the electronic measurement era began, the wild fluctuations of the past disappeared. Explain that to me without resorting to extolling the virtues of 19th-century scientists.

Edim
June 9, 2011 8:44 am

“You can extol the credentials of those performing the chemical measurements all day long but at the end of the day you are still faced with the fact that when the chemical measurement era ended and the electronic measurement era began the wild fluctuations of the past disappeared. Explain that to me without resorting to extolling the virtues of 18th century scientists.”
Those wild fluctuations disappeared because they were diluting the AGW message.

Smokey
June 9, 2011 8:48 am

Dave Springer,
Good points. However, others have replicated the CO2 titration methods based on drawings taken directly from the notebooks of the scientists who took the original measurements. The results agree with modern CO2 measurements within ≈3%. For the 1940s measurements, the fact that industry was ramping up worldwide, along with the fact that numerous cities were fire-bombed into oblivion, could explain the higher CO2 readings. Since then the biosphere has reacted, and absorbs about half of the emissions. I’m not sure about the early 1800s spike, but unless major errors can be identified in the test methods used, I accept their results. To do otherwise is non-science.

Dave Springer
June 9, 2011 8:53 am

Edim says:
June 9, 2011 at 7:52 am
“I am very sceptical about that.”
So was I. Skepticism is part and parcel of science. I spent a long time trying to indict the ice core data but couldn’t do it to my satisfaction. The agreement between recently entrained bubbles and the Mauna Loa record can’t be ignored. While one might entertain the notion that the record might not be accurate going back thousands of years, it appears very clear that it is quite reliable going back 200 years, given that it has been confirmed going back 60 years. Then, going back hundreds of thousands of years, we see repeatability in the record: during glacial epochs the entrained air was roughly 200 ppm, and during interglacials roughly 280 ppm. If there were progressive deterioration of entrained samples, one would expect this to show up as increasingly different readings during glacial/interglacial periods as one measures older and older ice age cycles. The hypothetical complaints of Jaworowski et al. do not appear to be backed up by any empirical evidence, while the accepted interpretations appear to be in accord with disparate geological evidence at every encounter. The acid test of any system of measurement is whether or not it is in satisfactory agreement with other methods.
It is for this very reason that I indict the common assumption by climate boffins that the earth’s albedo is more or less constant over time. Different methods of measuring albedo are not in satisfactory agreement, falling in the range of 30% to 40%. This is a huge uncertainty, as it equates to a difference in surface forcing of some 25 watts per square meter, where the forcing by anthropogenic GHGs is an order of magnitude lower. The one thing that the few attempts to measure earth’s albedo over a period of years DO agree on is that the albedo is not constant and fluctuates by as much as 1% from one year to the next, with perhaps the best study IMO (Earthshine, which measures the intensity of earthlight falling on the dark portion of the moon) showing a 5-year trend of constantly increasing albedo that happened to coincide with a flat-line global average temperature as measured by satellites over the same period.
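(As a rough cross-check on those magnitudes, assuming a solar constant near 1361 W/m^2, a quick calculation gives the same order as the 25 W/m^2 figure above:)

```python
# Absorbed shortwave at the top of the atmosphere: (S/4) * (1 - albedo).
S = 1361.0   # W/m^2, solar constant (approximate)

def absorbed(albedo):
    return (S / 4.0) * (1.0 - albedo)

print(absorbed(0.30) - absorbed(0.40))   # ~34 W/m^2 across the 30-40% range
print(absorbed(0.30) - absorbed(0.31))   # ~3.4 W/m^2 for a 1% fluctuation
```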

June 9, 2011 9:46 am

J Storrs, thanks, and thanks for letting me know the long-period phase approximated a line during the 20th century; but in any case I was referring to your previous analysis, which didn’t show the 259-year cosine.

Ged
June 9, 2011 11:34 am

Springer
Thanks for your posts. I’ve learned a great deal from them, more than from most of the stuff I’ve read over the years.

Fred H. Haynie
June 9, 2011 11:45 am

To Dave Springer,
You have a lot of wild spikes in the raw flask data and continuous measurement data. These could be anthropogenic. Those spikes, however, are flagged and not included in the reported averages. As a result, the averages being reported from stations around the globe are likely natural background and reflect changes in the major natural sources (equatorial Pacific) and sinks (Arctic Ocean).

Dave Springer
June 10, 2011 6:32 am

Smokey says:
June 9, 2011 at 8:48 am
I asked you to explain to me why there are no wild fluctuations in CO2 measurements since the invention of the electronic IR CO2 sensors circa 1950. All you did was (as I asked you not to do) once again rise in defense of the chemical measurements performed by scientists of the more distant past. I don’t care about the theoretical precision of the chemical methods. I want to know why there is a great disparity between the chemical analysis record and the electronic record. It appears that, regardless of theoretical accuracy, either one method or the other was not accurate in practice, or atmospheric CO2 stopped making large annual divergences from the norm in 1950.
Try again and this time explain what I asked you to explain.

John B
June 10, 2011 6:41 am

HenryP said “John, why don’t you measure the global warming at the place where you live so that you can see that it is not an increase in GHG’s that has caused it? Then we can add the results to my pool table!”
And I say, if your pool table means anything, publish! Seriously, a Nobel prize awaits.

Dave Springer
June 10, 2011 6:50 am

Fred H. Haynie says:
June 9, 2011 at 11:45 am
“To Dave Springer,
You have a lot of wild spikes in the raw flask data and continuous measurement data. These could be anthropogenic. Those spikes, however, are flagged and not included in the reported averages. As a result, the averages being reported from stations around the globe are likely natural background and reflect changes in the major natural sources (equatorial Pacific) and sinks (Arctic Ocean).”
http://cdiac.ornl.gov/ftp/trends/co2/Jubany_2009_Daily.txt
Above is daily raw CO2 concentration data from 1994 to the present, measured at Jubany station in Antarctica. There are a few gaps in the data, presumably due to sensor or recorder outages, and a couple of wild spikes lasting a few days at most, again presumably due to sensor malfunction. Otherwise the record is one of gradually increasing CO2, with an annual upward trend that matches Mauna Loa.
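(For anyone wanting to eyeball a daily series like that one, here is a sketch; check the real file’s header and column layout first, since the parsing below simply assumes the CO2 value is the last whitespace-separated column.)

```python
# Plot a daily CO2 series, assuming the concentration is the last column.
import numpy as np
import matplotlib.pyplot as plt

values = []
with open("Jubany_2009_Daily.txt") as f:
    for line in f:
        parts = line.split()
        try:
            values.append(float(parts[-1]))   # assumed: CO2 (ppm) is last
        except (ValueError, IndexError):
            continue                          # skip headers / blank lines

co2 = np.array(values)
co2 = co2[co2 > 0]   # drop negative missing-value flags, if any
plt.plot(co2)
plt.ylabel("CO2 (ppm)")
plt.title("Jubany daily CO2 (column layout assumed)")
plt.show()
```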

HenryP
June 10, 2011 7:20 am

John B,
there are none so blind as those who do not want to see (for themselves).
But I can guess that you are also one of those whose livelihood depends on this whole scam, of more GHGs causing warming, being true. It must be horrible to discover one day that your whole life (and livelihood) is based on a lie.
I was just as surprised by my own findings. I was initially convinced Al Gore was not a liar…
PS
My pool table
http://www.letterdash.com/HenryP/henrys-pool-table-on-global-warming
is a work in progress
there is more coming, even if you do not want to put some work into it.

Smokey
June 10, 2011 7:59 am

Dave Springer,
You ask why there are no wild fluctuations in the recent CO2 record. I don’t know; I’m not too good at proving negatives. But I accept the CO2 measurements recorded by Beck et al., which were made by dozens of reputable scientists and which are in agreement. Those 90,000 readings are the only record we have of past CO2 levels in the century preceding instrument recordings.
The test apparatus used has been replicated and validated. I agree that the current readings show no unusual fluctuations. But that does not falsify the 90,000 readings taken. At this point the differences are unexplained. But unless you can falsify the methodology recorded by Beck, it makes no sense to assume that the results were not broadly accurate.

John B
June 10, 2011 8:09 am


I have no financial interest in AGW in any way, other than in the ways that we all do. My interest in the subject is as a rational, science-literate individual, whose last foray into online forums was arguing with creationists.
I wish you well with your study. If it amounts to anything, publish it. I’m serious, that is how real science is done.
John

HenryP
June 10, 2011 8:38 am

Thanks John.
I am not interested in publishing, as this is just a hobby of mine. One must have hobbies to fill in some of the “dead” times in one’s life…
If someone wants to pay me for it or if it becomes “work” again, I will re-consider this.
Maybe you should read my book, “Jesus is God”.
You will find a link somewhere to this book in this letter:
http://www.letterdash.com/HenryP/open-letter-to-radio-702
God bless you.
Henry

HenryP
June 10, 2011 10:11 am

BTW, John, if you are interested to know why you won’t see me anymore on any of the other “forums”:
it is because they kept erasing my responses if they did not like the truth as I saw it,
or even if one went slightly off topic, like here,
just because the conversation led us that way.
I don’t want to spend my time typing an answer only to find it censored or removed the next day or week.
God bless Anthony Watts & Co for free speech and all of that!!!

Fred H. Haynie
June 10, 2011 11:04 am

Dave Springer,
Antarctica is about as far from major anthropogenic sources as you can get and would be expected to have fewer spikes. There are more spikes in the early flask data for the South Pole (a C-130 taking off?). Look at Grifton, NC, which is 7 km northeast of a coal-fired power plant, and observe the spikes when the wind is blowing from the southwest. There are a lot of coal-fired power plants in NC, but the monthly averages there are not significantly different from any other stations at the same latitude.

Fred H. Haynie
June 10, 2011 11:09 am

Dave,
The URL is http://gaw.kishou.go.jp/cgi-bin/wdcgg/catalogue.cgi?category=Stationary&map=world_map&mposi=?511,92. The MET data can be obtained from the Kinston, NC airport.

Dave Springer
June 12, 2011 7:47 am

Smokey says:
June 10, 2011 at 7:59 am
“The test apparatus used has been replicated and validated.”
Yup, they have. And the exercise revealed that the chemical CO2 test results prior to 1950 cannot be repeated.
Experimental repeatability is a cornerstone of science. If the experimental apparatus and methods used prior to 1950 give different results from what is obtained today, then the previous experiments are impeached by lack of repeatability. The results today are definitely different: neither chemical nor electronic CO2 measurements in well-mixed atmosphere today can replicate the fluctuations in the pre-1950 experimental results. If you’re adhering to the tenets of the scientific method, then you have no choice but to throw out the data obtained by unrepeatable experiments. Either that, or you can impeach the post-1950 measurements (not much possibility of that), or you can explain why the nature of CO2 in the atmosphere radically changed circa 1950 such that both the pre- and post-1950 experimental data are accurate.
So far you have done nothing in the way of impeaching the post-1950 measurements, nor offered any explanation of why the nature of CO2 in the atmosphere changed. Given that I can’t do either of those things, I’m left with no choice but to follow the scientific method and throw out results obtained from unrepeatable experiments. Given how easy it is to spoil a flask experiment with contaminated samples, non-representative samples, and/or contaminated reagents, it shouldn’t be so difficult to accept the conclusion that CO2 is well-mixed in the atmosphere and doesn’t vary radically or quickly at any reasonable distance from active sources and sinks.

Dave Springer
June 12, 2011 8:04 am

Fred H. Haynie says:
June 10, 2011 at 11:04 am
“Dave Springer,
Antarctica is about as far from major anthropogenic sources as you can get and would be expected to have fewer spikes. There are more spikes in the early flask data for the South Pole (a C-130 taking off?). Look at Grifton, NC, which is 7 km northeast of a coal-fired power plant, and observe the spikes when the wind is blowing from the southwest. There are a lot of coal-fired power plants in NC, but the monthly averages there are not significantly different from any other stations at the same latitude.”
This just seems to confirm what I already knew. Once you get away from active CO2 sources and sinks near the surface, the spikes disappear and you get consistent readings from samples obtained many thousands of miles apart in different hemispheres. There is not a shadow of doubt in my mind that Beck’s survey of past flask data includes local fluctuations not representative of well-mixed atmosphere, samples contaminated by a stray breath of exhaled air, and/or contaminated reagents. In carefully performed flask experiments today in well-mixed atmosphere, the fluctuations observed in the past cannot be replicated. The lack of replicability cannot be ignored without abandoning adherence to the scientific method.

Smokey
June 12, 2011 8:17 am

Dave Springer,
I’m not sure I follow your post. Are you saying that the same measurement apparatus that has been shown to be accurate to within ±3% when compared with current instrumental CO2 readings did not hold in the 1800s?
There were over 90,000 CO2 readings taken, and as Beck et al. show, the vast majority were in very close agreement. Now, I understand that there is a question of why current readings do not show spikes like the ones in the early 1800s and in the 1940s. But I reject the likelihood that almost every measurement showed a spurious reading, all in the same direction and with the same amplitude.
Those readings were taken during mid-ocean crossings from the Arctic and Antarctic to the tropics; on mountaintops, on isolated shorelines, etc. They were taken by numerous internationally esteemed scientists and Nobel laureates who cared greatly about their reputations, and who knew that their work would be scrutinized by their peers. And their independent results are in close agreement with each other.
There is a dilemma here, but that certainly does not mean the measurements taken by those scientists were wrong. The question is, why were there spikes from a 280 ppmv baseline, and why are there not similar spikes from a 390 ppmv baseline?