CO2, Soot, Modeling and Climate Sensitivity

Warming Caused by Soot, Not CO2

From the Resilient Earth

Submitted by Doug L. Hoffman on Wed, 07/15/2009 – 13:19

A new paper in Science reports that a careful study of satellite data shows the assumed cooling effect of aerosols in the atmosphere to be significantly less than previously estimated. Unfortunately, the larger assumed cooling has been used in climate models for years. In such models, the global-mean warming is determined by the balance of the radiative forcings—warming by greenhouse gases balanced against cooling by aerosols. Since a greater cooling effect has been used in climate models, the result has been to credit CO2 with a larger warming effect than it really has.

This question is of great importance to climate modelers because they have to be able to simulate the effect of GHG warming in order to accurately predict future climate change. The amount of temperature increase set into a climate model for a doubling of atmospheric CO2 is called the model’s sensitivity. As Dr. David Evans explained in a recent paper: “Yes, every emitted molecule of carbon dioxide (CO2) causes some warming—but the crucial question is how much warming do the CO2 emissions cause? If atmospheric CO2 levels doubled, would the temperature rise by 0.1°, 1.0°, or by 10.0° C?”

Temperature sensitivity scenarios from IPCC AR4.

The absorption frequencies of CO2 are already saturated, meaning that the atmosphere already captures close to 100% of the radiation at those frequencies. Consequently, as the level of CO2 in the atmosphere increases, the rise in temperature for a given increase in CO2 becomes smaller. This sorely limits the amount of warming further increases in CO2 can engender. Because CO2 on its own cannot account for the observed temperature rise in the past century, climate modelers assume that linkages exist between CO2 and other climate influences, mainly water vapor (for a more detailed explanation of what determines the Global Warming Potential of a gas see my comment “It’s not that simple”).
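The diminishing-returns behavior described above is commonly approximated by the simplified forcing expression ΔF = 5.35 ln(C/C0), due, as it happens, to Myhre and coauthors (1998). A minimal sketch; the 280 ppm preindustrial baseline and the concentrations chosen are standard illustrative assumptions, not figures from this article:

```python
import math

def co2_forcing(c, c0=280.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al., 1998)."""
    return 5.35 * math.log(c / c0)

# Equal ppm additions of CO2 yield shrinking forcing increments:
first_100ppm = co2_forcing(380) - co2_forcing(280)   # ~1.63 W/m^2
next_100ppm = co2_forcing(480) - co2_forcing(380)    # ~1.25 W/m^2
doubling = co2_forcing(560) - co2_forcing(280)       # ~3.7 W/m^2 per doubling
```

Each successive doubling adds the same increment, so each additional ppm buys less forcing than the one before it.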

To compensate for the missing “forcing,” models are tuned to include a certain amount of extra warming linked to carbon dioxide levels—extra warming that comes from unestablished feedback mechanisms whose existence is simply assumed. Aerosol cooling and climate sensitivity in the models must balance each other in order to match historical conditions. Since the climate warmed slightly last century the amount of warming must have exceeded the amount of cooling. As Dr. Roy Spencer, meteorologist and former NASA scientist, puts it: “They program climate models so that they are sensitive enough to produce the warming in the last 50 years with increasing carbon dioxide concentrations. They then point to this as ‘proof’ that the CO2 caused the warming, but this is simply reasoning in a circle.”

A large aerosol cooling, therefore, implies a correspondingly large climate sensitivity. Conversely, reduced aerosol cooling implies lower GHG warming, which in turn implies lower model sensitivity. The upshot is that the sensitivity values used in models for the past quarter of a century have been set too high. Using elevated sensitivity settings has significant implications for model predictions of future global temperature increases. The low-end value of model sensitivity used by the IPCC is 2°C; using this value results, naturally, in the lowest predictions for future temperature increases. In the paper “Consistency Between Satellite-Derived and Modeled Estimates of the Direct Aerosol Effect,” published in Science on July 10, 2009, Gunnar Myhre states that previous values for aerosol cooling are too high—by as much as 40 percent—implying that the IPCC’s model sensitivity settings are too high as well. Here is the abstract of the paper:

In the Intergovernmental Panel on Climate Change Fourth Assessment Report, the direct aerosol effect is reported to have a radiative forcing estimate of –0.5 Watt per square meter (W m–2), offsetting the warming from CO2 by almost one-third. The uncertainty, however, ranges from –0.9 to –0.1 W m–2, which is largely due to differences between estimates from global aerosol models and observation-based estimates, with the latter tending to have stronger (more negative) radiative forcing. This study demonstrates consistency between a global aerosol model and adjustment to an observation-based method, producing a global and annual mean radiative forcing that is weaker than –0.5 W m–2, with a best estimate of –0.3 W m–2. The physical explanation for the earlier discrepancy is that the relative increase in anthropogenic black carbon (absorbing aerosols) is much larger than the overall increase in the anthropogenic abundance of aerosols.
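The abstract’s “offsetting the warming from CO2 by almost one-third” is simple arithmetic to check. A sketch assuming AR4’s CO2 forcing of roughly 1.66 W/m^2 (my assumption; the abstract quotes only the aerosol numbers):

```python
F_CO2 = 1.66                      # AR4 CO2 radiative forcing, W/m^2 (assumed here)

for f_aerosol in (-0.5, -0.3):    # IPCC estimate vs. Myhre's revision
    offset = -f_aerosol / F_CO2   # fraction of CO2 forcing cancelled by aerosols
    print(f"aerosol forcing {f_aerosol} W/m^2 offsets {offset:.0%} of CO2 forcing")
```

With these inputs the offset falls from about 30% ("almost one-third") to about 18%.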

The complex influence of atmospheric aerosols on the climate system, and the influence humans have on aerosols, are among the key uncertainties in the understanding of recent climate change. Rated by the IPCC as one of the most significant yet poorly understood forcings, aerosols have been the subject of much recent research (see Airborne Bacteria Discredit Climate Modeling Dogma and African Dust Heats Up Atlantic Tropics). Some particles absorb sunlight, contributing to climate warming, while others reflect sunlight, leading to cooling. The main anthropogenic aerosols that cause cooling are sulfate, nitrate, and organic carbon, whereas black carbon absorbs solar radiation. The global mean effect of human-caused aerosols (in other words, pollution) is a cooling, but the relative contributions of the different types of aerosols determine the magnitude of this cooling. Readjusting that balance is what Myhre’s paper is all about.

Smoke from a forest fire.

Photo EUMETSAT.

Discrepancies between recent satellite observations and the values needed to make climate models work right have vexed modelers. “A reliable quantification of the aerosol radiative forcing is essential to understand climate change,” states Johannes Quaas of the Max Planck Institute for Meteorology in Hamburg, Germany. Writing in the same issue of Science Dr. Quaas continued, “however, a large part of the discrepancy has remained unexplained.” With a systematic set of sensitivity studies, Myhre explains most of the remainder of the discrepancy. His paper shows that with a consistent data set of anthropogenic aerosol distributions and properties, the data-based and model-based approaches converge.

Myhre argues that since preindustrial times, soot particle concentrations have increased much more than other aerosols. Unlike many other aerosols, which scatter sunlight, soot strongly absorbs solar radiation. At the top of the atmosphere, where the Earth’s energy balance is determined, scattering has a cooling effect, whereas absorption has a warming effect. If soot increases more than scattering aerosols, the overall aerosol cooling effect is smaller than it would be otherwise. According to Dr. Myhre’s work, the correct cooling value is some 40% less than that previously accepted by the IPCC.
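Myhre’s argument can be caricatured as bookkeeping between a scattering (cooling) term and a black-carbon (warming) term. The split below is purely illustrative, chosen so the totals land on the −0.5 and −0.3 W/m^2 figures; the paper’s actual decomposition is more involved:

```python
def net_direct_effect(f_scattering, f_black_carbon):
    """Net direct aerosol forcing in W/m^2: scattering cools (negative),
    black-carbon absorption warms (positive)."""
    return f_scattering + f_black_carbon

old_estimate = net_direct_effect(-0.8, 0.3)   # smaller assumed soot share -> -0.5
new_estimate = net_direct_effect(-0.8, 0.5)   # larger soot share -> -0.3
```

Holding the scattering term fixed, a bigger black-carbon share alone is enough to weaken the net cooling by the 40% the paper reports.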

Not that climate modelers are unaware of the problems with their creations. Numerous papers have been published that detail problems predicting ice cover, precipitation and temperature correctly. This is due to inadequate modeling of the ENSO, aerosols and the bane of climate modelers, cloud cover. Apologists for climate modeling will claim that the models are still correct, just not as accurate or as detailed as they might be. Can a model that is only partially correct be trusted? Quoting again from Roy Spencer’s recent blog post:

It is also important to understand that even if a climate model handled 95% of the processes in the climate system perfectly, this does not mean the model will be 95% accurate in its predictions. All it takes is one important process to be wrong for the models to be seriously in error.

Can such a seemingly simple mistake in a single model parameter really lead to invalid results? Consider the graph below, a representation of the predictions made by James Hansen to the US Congress in 1988, plotted against how the climate actually behaved. Pretty much what one would expect if the sensitivity of the model was set too high, yet we are still supposed to believe in the model’s results. No wonder even the IPCC doesn’t call their model results predictions, preferring the more nebulous term “scenarios.”

Now that we know the models used by climate scientists were all tuned incorrectly what does this imply for the warnings of impending ecological disaster? What impact does this discovery have on the predictions of melting icecaps, rising ocean levels, increased storm activity and soaring global temperatures? Quite simply they got it wrong, at least in as much as those predictions were based on model results. To again quote from David Evans’ paper:

None of the climate models in 2001 predicted that temperatures would not rise from 2001 to 2009—they were all wrong. All of the models wrongly predict a huge dominating tropical hotspot in the atmospheric warming pattern—no such hotspot has been observed, and if it was there we would have easily detected it.

Once again we see the shaky ground that climate models are built on. Once again a new paper in a peer reviewed journal has brought to light significant flaws in the ways models are configured—forced to match known historical results even when erroneous values are used for fundamental parameters. I have said many times that, with enough tweaking, a model can be made to fit any set of reference data—but such bogus validation does not mean the model will accurately predict the future. When will climate science realize that its reputation has been left in tatters by these false prophets made of computer code?

Be safe, enjoy the interglacial and stay skeptical.

==================================

ADDENDUM BY ANTHONY

I’d like to add this graph showing CO2’s temperature response to supplement the one Doug Hoffman cites from IPCC AR4. Here we see that we are indeed pretty close to saturation of the response.

CO2 temperature response curve, showing saturation.

The “blue fuzz” represents measured global CO2 increases in our modern times.



209 Comments
Vincent
July 16, 2009 11:28 am

This is, on the face of it, a very significant finding. However, the question on my mind is – what’s the bottom line?
I mean, the assumed negative forcing of aerosols has been reduced from -0.5 W/m2 to -0.3, which has been billed as a 40% reduction. And so it is. But the difference is -0.2. In other words, if the negative forcing is less by 0.2, then the CO2 forcing must also be less by 0.2 to maintain the same model output. But if the CO2 forcing was believed to be 1.7 before, it would now be revised to 1.5.
In my simplistic opinion, this does not seem like a large revision. How much would that affect the temperature sensitivity of CO2?
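Vincent’s closing question can be roughed out with his own numbers. The observed-warming figure and the 3.7 W/m^2 per doubling are my illustrative assumptions, and this crude ratio ignores ocean lag entirely (so it is a transient, not an equilibrium, number):

```python
F_2X = 3.7       # radiative forcing per CO2 doubling, W/m^2 (standard value)
DT_OBS = 0.7     # observed 20th-century warming, deg C (assumed for illustration)
F_GHG = 1.7      # Vincent's assumed GHG forcing, W/m^2

def implied_sensitivity(f_aerosol):
    """Warming per CO2 doubling implied by attributing DT_OBS to the net forcing."""
    lam = DT_OBS / (F_GHG + f_aerosol)   # deg C per W/m^2
    return F_2X * lam

before = implied_sensitivity(-0.5)   # ~2.2 C per doubling
after = implied_sensitivity(-0.3)    # ~1.9 C per doubling
```

With these inputs the implied sensitivity drops by roughly 0.3°C per doubling, about a 14% reduction rather than 40%, which supports Vincent’s point that the revision is real but modest.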

John F. Hultquist
July 16, 2009 11:32 am

Nick Stokes (00:44:05) : wrote:
“Mr. Hoffman clearly doesn’t understand the usage of scenario. It is not another word for prediction. It’s an input that is determined by human decisions, and can’t be predicted scientifically.” … and so on ….
Translation: In 1988 Hansen did not know what was happening, hasn’t learned anything since, and still doesn’t have a clue.
However, I agree with the point that all should attempt not to be misleading, such as by selecting a chart to push an issue when an equally available chart is not used. As none of the suggested charts are based on good science I’d prefer just saying so and not keep posting them for their visual effect. Translation: A scenario based on poor logic and bad science is useless, unless you are an activist/politician.

July 16, 2009 11:33 am

Bobn (10:59:42) :
How come the graph contains only scenario A and not scenario B, which was the one the greenhouse forcing followed more closely? Why omit the correct line? Doesn’t this throw the rest of the article into doubt? I can’t trust it now on matters I don’t know much about when a matter I do know about isn’t presented correctly.

Let’s ask Hansen…Jimbo, why should we use Scenario A to evaluate your model?
Hansen said…“Scenario A assumes continued exponential trace gas growth, scenario B assumes a reduced linear growth of trace gases, and scenario C assumes a rapid curtailment of trace gas emissions such that the net climate forcing ceases to increase after the year 2000.”
Based on my rough visual estimates…it looks to me like Scenario A grossly exaggerated the increase in atmospheric CO2…But Hansen himself described Scenario A as representing the “continued” trend. Well…the trend of CO2 emissions has actually accelerated since 1988.

Vincent
July 16, 2009 11:38 am

Bobn: “How come the graph contains only scenario A and not scenario B, which was the one the greenhouse forcing followed more closely? Why omit the correct line? Doesn’t this throw the rest of the article into doubt? I can’t trust it now on matters I don’t know much about when a matter I do know about isn’t presented correctly.”
The article is about the effect of soot in aerosols being higher than previously thought, which means the negative (cooling) effect of aerosols is also less than previously thought. This leads to the conclusion that the climate models have overstated the temperature sensitivity of CO2 in order to hindcast the 20th Century climate. The research was carried out by Myhre. Myhre did not write this article and has nothing to do with the graph that has so unsettled you.
So no, that little graph does not throw the rest of the article in doubt.

George E. Smith
July 16, 2009 11:44 am

“”” tallbloke (23:36:52) :
Nice graph Anthony. Please can we have the equations for each of the three curves. “””
Y = e^(-1/x^2)
That pretty much fits all three; and all three are phony in any case.
In normal optical absorption in a uniform medium, the transmission follows an e^-az formula, z being the distance covered and a (alpha) the absorption coefficient. This result assumes the energy is captured permanently and not re-emitted due to some luminescence effect. Of course there will still be thermal radiation due to the temperature.
In the case of very low concentrations the absorption is linear with the number of absorbing molecules or the propagation distance.
Remember e^-x = 1 - x/1! + x^2/2! - x^3/3! etc.
In the case of CO2 or water GHG absorption, the spectrum region being captured by the active molecules is pretty much in the same wavelength range as the thermal emission spectrum of the atmosphere itself, simply due to its temperature; so this thermal IR is constantly being captured, transmitted to the atmosphere in collisions, and then re-emitted as new thermal radiation (so it is not a photon capture to an excited state, followed by a re-emission and return to some ground state, as in atomic spectra).
And since the atmosphere is of variable density and temperature, the biggest effect is near the ground.
But the notion that atmospheric temperature rise is proportional to the log of CO2 concentration is totally phony; there is simply no physics that causes that. It certainly isn’t linear, and it does follow a curve whose equation is: “It’s gonna take a lot more CO2 to cause the next temperature increment than the last one; and then wait till you see how much it takes for the following one!” But that equation does not prescribe a logarithmic function.
Every mathematician knows that if you plot anything on a log/log graph paper, you get a nice straight line, which hides what really happens.
And the whole “climate sensitivity” notion is pretty silly anyway, because the “radiative forcing” that allegedly causes the warming due to thermal IR capture by CO2 depends completely on the radiant emittance of the earth surface in the location where the “climate sensitivity” is being measured; and that emittance is known to vary by more than an order of magnitude depending on the local surface temperature, from place to place on the planet, and there is no global sampling network for collecting climate sensitivity values to enable computation of a global average. So “climate sensitivity” is just another part of the ancient astrology that is climate science.
The IPCC view of climate is a house of cards built out of layer upon layer of unsubstantiated mathematical assumptions; and as they say, “assume makes an ass out of U and ME.”
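George’s Beer-Lambert description (linear absorption at low optical depth, saturation at high) is easy to check numerically; a minimal sketch:

```python
import math

def absorbed_fraction(tau):
    """Fraction of radiation absorbed after traversing optical depth tau
    (Beer-Lambert law: transmission = e^-tau)."""
    return 1.0 - math.exp(-tau)

# Low optical depth: absorption is nearly linear in tau (1 - e^-x ~ x).
weak = absorbed_fraction(0.01)        # close to tau itself
# High optical depth: absorption saturates; doubling tau adds almost nothing.
strong = absorbed_fraction(10.0)
stronger = absorbed_fraction(20.0)
```

The jump from tau = 10 to tau = 20 changes the absorbed fraction by less than one part in ten thousand, which is the saturation the thread is arguing about.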

Nogw
July 16, 2009 11:46 am

Jim (11:15:14) :
You are right as far as “settled” science goes, but insofar as water droplets are electrically charged they could be considered ionized. Water is, as you know, hydrogen hydroxide (as “clouds” in a solution of zinc when neutralized as white fluffy zinc hydroxide). It rains after, not before, lightning discharges clouds.

AnonyMoose
July 16, 2009 11:48 am

Using a cloud-clearing algorithm is an educated guess, not an experiment. To experiment, use a hydrogen bomb to remove clouds. Timing and placement of your instruments can be important when using this experimental procedure.

a jones
July 16, 2009 12:10 pm

Yes
Water waves in the oceans are produced by the action of the wind on the water in a way that is only imperfectly understood. Because these waves can travel huge distances, hundreds of miles, no part of the open ocean is ever completely free of them.
We can model the amplitude produced by a given wind force acting over a long distance, the fetch, by means of the Darbyshire one-dimensional wave equation. We can also observe the difference in the train velocity of a group of waves, and how larger waves work their way forward through the train, which is no doubt the basis of the myths about every seventh breaker being higher, etc.
In general, open ocean wave heights do not exceed 10 to 15 metres because at these heights the force of the wind becomes so great that it topples much of the crest over, flattening the sea. This effect on the peak of the crests is seen when wind speeds rise above force 4 to 5, producing the famous white horses, but at force 10 or above the sea is described as becoming heavy and shocking as the whole wave crest tumbles.
Also, when two wave trains cross each other at an angle, sometimes two large wave crests are superimposed on each other for a brief period, producing a freak wave which can have enormous amplitudes; but these are seldom encountered because they are so very short-lived.
Sea wave action is circular in nature, and the surface shape of a wave in deep water is described as trochoidal. This circularity is seen in breakers such as those surfing enthusiasts love to ride under. It is also seen in test tanks used for demonstration, where at some depth the circularity of the motion can be shown using a neutral-density float.
It is the difference between the trochoidal form at the sea/air boundary and the circular motion below it that causes the wave train to impel some of the water along its line of travel. This is seen as the rise in mean sea level when the wave train reaches land when, if the waves are large enough, they are said to cause the sea to pile up. To what extent it does so depends on the contours of the land.
It is this selfsame action that drives the great surface currents, such as the Gulf Stream, where the direction of the winds is fairly constant, as is their velocity: hence the effects, and importance, of the trade winds.
It is supposed that deep return currents are driven by convection and changes in density of the water with temperature and salinity but recent research suggests that unlike surface currents they are very diffuse, so much so that the description current, in the usually accepted sense of the word for water currents, hardly seems a proper description.
Kindest Regards

Alexej Buergin
July 16, 2009 12:21 pm

The important question about the Hansen scenarios concerns not the rise in the CO2-content of the atmosphere; that in itself is not a problem.
The real question is whether the corresponding RISE IN TEMPERATURE took place, or not.
That is why they used to call it AGW.

Nogw
July 16, 2009 12:35 pm

There is a huge lack of common sense in all these concocted GW fallacies…and there are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy.
Only NOAA’s colorful imagination sets seas on red fire. Only intended IPCC demagogy imagines a blazing atmosphere.
Back to earth: our friendly CO2, the gas we all exhale and plants enjoy breathing, is as harmless as a blossoming flower. When it goes up in the sky, if warmed, it rapidly comes down again after giving off its heat to limitless space, as it is heavier than air; and if it reacts with water on its downward journey, it falls as refreshing carbonic acid diluted in mother nature’s water.

Philip_B
July 16, 2009 12:36 pm

There is a crucial error being made here.
The article focuses on the effect of aerosols on the climate. Whereas the real problem is the effect aerosols have on surface temperatures as they are measured, specifically on Tmin.
The surface record (on land) is comprised of Tmin and Tmax measurements, with the assumption that these two measurements, when averaged, give an accurate measurement of the average temperature.
Since the late 1800s the average Tmin has been rising and accounts for most of the claimed warming. This is assumed to be the ‘signature’ of GHG warming. It isn’t. It is the signature of reduced early morning aerosol pollution (smoke) in the developed world.
Some of you, like me, will be old enough to remember the morning ritual of lighting a fire in winter and how much smoke was produced in the process. This produced a surge in low-level aerosol pollution at the time Tmin usually occurred (shortly after dawn), reducing early morning sunlight and allowing a few more minutes of radiative cooling to bring Tmin down.
When this early morning smoke was removed by various clean air measures and the introduction of gas and electrical heating, Tmin was no longer delayed and consequently rose.
Not only does this explain most of the rise in Tmin. It explains why it has stopped in the last decade or so – early morning smoke has been pretty much eliminated in the developed world, so the effect on average Tmin stopped.
Note that while smoky fires are still widespread in the developing world, the time profile is different because in places like India and Africa they are used for cooking and not heating. Hence the smoke isn’t concentrated in the early morning.

July 16, 2009 12:40 pm

Robert van der Veeke (09:24:45) :
geo (08:24:11) :
There are three scenarios: A, B, and C. A had an increasing rate of CO2 emissions, B had a constant rate of CO2 emissions, whereas scenario C had a reduced CO2 emissions rate from 1988 levels into the future.
And guess what, the global temperatures (even GISS) are below the prediction made for scenario C, which is pretty amazing since we are still following scenario A when it comes to CO2 emissions.

But we followed scenario C as far as methane, CFCs and other trace gases were concerned, which were expected by Hansen to have a larger contribution than CO2 over this timeframe; also, the CO2 trajectory hasn’t been as high as A. Hansen quite clearly indicated B as the most likely scenario. The graph shown above looks very much like the one Michaels used when he erased lines B & C to make Hansen’s calculations look bad.
Nogw (11:46:20) :
Jim (11:15:14) :
You are right as far as “settled” science, but as far as water droplets are electrically charged they could be considered as ionized.

Very weakly: in pure water, 2×10^-7 moles/l of ions vs. 55 moles/l of H2O!
Nogw (10:22:42) :
Jim (06:19:16) : In clouds water is ionized, as OH- and H+. When there is CO2 it reacts with water, in an endothermic (sucking heat in) reaction forming H2CO3 (carbonic acid). SO2 is also present as SO2-, and when reacting with ionized water in clouds forms H2SO3, sulphurous acid. Both after reacting are already droplets which, not having electric charge, cannot “float” any longer in clouds and fall down.

The H2SO4 will be completely in the form 2H^+ and SO4^2-
H2CO3 is slightly more complicated because it is a weak acid but the following will be present:
H^+, HCO3^-, CO3^2-, and H2CO3
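The speciation Phil lists can be put in numbers with the standard distribution-fraction (alpha) formulas for a diprotic acid. The pKa values below are textbook figures for the composite CO2(aq)/H2CO3 system and the rainwater pH is my assumption, not a figure from the thread:

```python
def carbonate_fractions(pH, pKa1=6.35, pKa2=10.33):
    """Equilibrium fractions of (H2CO3*, HCO3-, CO3^2-) at a given pH,
    using the standard alpha formulas for a diprotic acid."""
    h = 10.0 ** -pH
    ka1, ka2 = 10.0 ** -pKa1, 10.0 ** -pKa2
    denom = h * h + h * ka1 + ka1 * ka2
    return (h * h / denom, h * ka1 / denom, ka1 * ka2 / denom)

# Natural rainwater (pH ~5.6): dissolved CO2 remains mostly un-ionized.
a_h2co3, a_hco3, a_co3 = carbonate_fractions(5.6)
```

At that pH roughly 85% of the dissolved carbon stays as un-ionized H2CO3*, essentially all the rest as bicarbonate, and the carbonate ion is negligible, consistent with Phil's description of a weak acid.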

MattN
July 16, 2009 12:43 pm

George: “Besides, anything that absorbs some incoming solar energy results in cooling, because that energy doesn’t reach the ground; and the higher up in the atmosphere it gets absorbed, the quicker it can be radiatied back into space.”
But that’s not the impression I’m under of how CO2 works. I thought CO2 absorbed radiation coming back from the earth’s surface…

MikeE
July 16, 2009 12:54 pm

My brother has done some work on climate models… and this is how my skepticism was born… to quote his own words “i have enough variables to get any climate scenario i want, water world, can do. Snow ball Earth, no problem.”
He’s a bit of a purist. He also told me that for the models to accurately predict future climate would require them to accurately predict the weather at every location around the globe for the duration of the time series. That’s because it’s a chaotic system, so errors will compound, and each event affects the next. Butterfly effect and all that, eh.
Obviously he wasn’t talking about a basic black body model, but predictive models.

Jim
July 16, 2009 12:57 pm

anna v (11:19:42) : Thank you Anna. I was just doing a sort of mental comparison of the momentum carried by ocean currents vs. momentum carried by wind. I’m guessing (again) that the wind’s momentum is negligible compared to that of the ocean. I can see how wind and waves could impart some energy to the ocean, but how much compared to the total energy carried by the ocean?

Ryan C
July 16, 2009 12:57 pm

Can someone please help me out. I found an article that is REALLY REALLY making me angry and I want to write a huge reply to it, but my account won’t log in and it won’t let me create a new account at the National Post.
http://network.nationalpost.com/np/blogs/fullcomment/archive/2009/07/16/john-moore-one-world-government-and-global-warming-climate-change-whatever.aspx
This is the most insane pile of religious attacks I have ever heard and it angers me that no one has told this guy where to go. Thank you so much, Ryan!

Nogw
July 16, 2009 12:58 pm

AnonyMoose (11:48:52) :
Using a cloud-clearing algorithm is an educated guess, not an experiment. To experiment, use a hydrogen bomb to remove clouds
It seems a bit of a noisy and disgusting procedure. Why not try discharging them? Take a kite and a conductive string and try Franklin’s simple experiment, and, please, forget Al..goreetms.

July 16, 2009 12:58 pm

George E. Smith (11:44:49) :
In the case of very low concentrations the absorption is linear with the number of absorbing molecules or the propagation distance.
Remember e^-x = 1 - x/1! + x^2/2! - x^3/3! etc.

Correct
But the notion that atmospheric temperature rise is proportional to the log of CO2 concentration is totally phony; there is simply no physics that causes that.
Not true, George: it’s due to line broadening, for which a physical description exists. After starting out with a linear response for weak absorption you pass through a ~log response at medium absorption to a square-root response at high absorption.
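Phil’s progression can be probed numerically with a “curve of growth” calculation. This sketch integrates the equivalent width of a Lorentzian line, which exhibits the linear and square-root limits he mentions (the intermediate logarithmic regime belongs to Doppler-broadened line cores, which this simple profile does not capture):

```python
import math

def equivalent_width(tau0, xmax=2000.0, n=100000):
    """Total absorbed bandwidth of a Lorentzian line with peak optical
    depth tau0: numerically integrate 1 - exp(-tau0/(1 + x^2))."""
    dx = xmax / n
    return 2.0 * sum(
        (1.0 - math.exp(-tau0 / (1.0 + ((i + 0.5) * dx) ** 2))) * dx
        for i in range(n)
    )

# Weak line: quadrupling tau0 roughly quadruples the absorption (linear regime).
weak_ratio = equivalent_width(0.04) / equivalent_width(0.01)
# Strong line: quadrupling tau0 only doubles it (square-root regime).
strong_ratio = equivalent_width(400.0) / equivalent_width(100.0)
```

The same fourfold increase in line strength buys a factor of about 4 in absorbed bandwidth when the line is weak but only about 2 when it is saturated, which is exactly the regime change at issue.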

stumpy
July 16, 2009 1:08 pm

There have of course been other issues highlighted: the observed sensitivity is far lower than the IPCC’s. Evaporation is grossly underestimated in the models. The manner in which clouds cool the earth and help moderate temperature is not replicated in the models, resulting in an incorrect positive feedback, and none of the natural cycles are predicted correctly. Oh, and the models only hindcast back to 1850; go back further and they fall over.
So:
a) if the natural is not explained, there is no way of knowing if there is any anthropogenic effect in the temperature record (if that’s reliable).
b) the models are pointless unless used instead for sensitivity analysis (which they aren’t); they are only used to predict dire warming with stupidly high confidence values that are political spin.
c) the IPCC need to be clearer about the uncertainty and try to save some face before the public and policy makers lose trust in them and science is tarnished.
As an engineering modeller I know too well that you can fudge a model to hindcast well, but it doesn’t mean it’s right or has any predictive ability. It is something I have picked up other engineers on before, and it does often happen when modellers are not experienced or just lack common sense.
Having a model that represents the correct operation of a system is more useful, in my opinion, than one that does not but looks nice when hindcast.

Nogw
July 16, 2009 1:29 pm

Phil. (12:58:45) :
Niels Bohr said about the so-called “greenhouse effect”:
the absorption of specific wavelengths of light didn’t cause gas atoms/molecules to become hotter. Instead, the absorption of specific wavelengths of light caused the electrons in an atom/molecule to move to a higher energy state. After absorption of light of a specific wavelength an atom couldn’t absorb additional radiation of that wavelength without first emitting light of that wavelength.
And modelling it is just gaming/gambling, not serious experimental science.

July 16, 2009 1:54 pm

Phil. (12:40:13) :
[…]
But we followed scenario C as far as methane, CFCs and other trace gases were concerned which were expected by Hansen to have a larger contribution than CO2 over this timeframe, also the CO2 trajectory hasn’t been as high as A. Hansen quite clearly indicated B as the most likely scenario. The graph shown above looks very like the one Michaels used when he erased lines B & C to make Hansen’s calculations look bad.

From Appendix B, pg. 9361 of Hansen’s 1988 paper…
“Specifically, in scenario A CO2 increases as observed by Keeling for the interval 1958-1981 [Keeling et al., 1982] and subsequently with a 1.5%/yr growth of the annual increment.”
“In scenario B the growth of the annual increment of CO2 is reduced from 1.5%/yr today to 1%/yr in 1990, 0.5%/yr in 2000 and 0 in 2010; thus after 2010 the increment is constant, 1.9 ppmv/yr.”
“In scenario C the CO2 growth is the same as scenarios A and B through 1985; between 1985 and 2000 the annual increment is fixed at 1.5 ppmv/yr; after 2000, CO2 ceases to increase, its abundance remaining fixed at 368 ppmv.”
If I take the average annual increment from 1958-1981 and increase it by 1.5% per year until 2008, I get 385.35 ppmv. Keeling’s value for 2008 is 385.57 ppmv.
If my math is right (and that’s often a big if)…Hansen’s scenario A matches the CO2 trend pretty well.
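Dave Middleton’s compounding rule takes only a few lines to reproduce. The starting values here are my rough Mauna Loa-based assumptions (about 340 ppmv at the end of 1981 and an early-1980s annual increment of ~1.4 ppmv/yr), not figures from the post, so the exact result differs slightly from his:

```python
def scenario_a_co2(start_ppmv=340.0, increment=1.4, growth=0.015, years=27):
    """Compound Hansen's Scenario A rule from 1981 through 2008:
    the annual CO2 increment grows by `growth` (1.5%) each year."""
    ppmv = start_ppmv
    for _ in range(years):
        increment *= 1.0 + growth
        ppmv += increment
    return ppmv

projected_2008 = scenario_a_co2()   # lands in the mid-380s ppmv
```

The answer is sensitive to the assumed base increment: seeding it with the 1958-1981 mean (~1.1 ppmv/yr) instead lands near 377 ppmv, so small input choices move the result by several ppmv, which is worth keeping in mind when comparing against Keeling's 385.57.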

George E. Smith
July 16, 2009 2:01 pm

“”” Phil. (12:58:45) :
George E. Smith (11:44:49) :
In the case of very low concentrations the absorption is linear with the number of absorbing molecules or the propagation distance.
Remember e^-x = 1 - x/1! + x^2/2! - x^3/3! etc.
Correct
But the notion that atmospheric temperature rise is proportional to the log of CO2 concentration is totally phony; there is simply no physics that causes that.
Not true George it’s due to line broadening for which a physical description exists. After starting out with a linear response for weak absorption you pass through ~log response at medium absorption to square root response at high absorption. “””
Well Phil, I understand what a log function is; and every log function I’ve ever met was valid for real arguments from -infinity to + infinity, not counting the two end points.
So either the curve is a log function or it isn’t. And I understand the usual explanation that when you get the center of the transmission down to zero, the band starts to get a little wider.
I’m perfectly willing to accept that a logarithmic curve can be fitted to the real curve within certain error bands, over some finite argument range; that does not make the real function logarithmic.
Remember that the log function can also be represented by an infinite power series, and I am sure some finite power series can be curve fitted to some finite range of CO2 absorption. That does not make the real function logarithmic.
I’m sure the very same curve can be expressed as a series of Legendre polynomials, or Tchebychev polynomials, or any other orthogonal set of functions; but that does not make those functions a true physical description of the processes behind the measured values.
Linear-log-sqrt-??
I’m sorry, but to me the integral of (1/x) is a log function, but the relationship between mean global temperature rise, and atmospheric CO2 abundance isn’t, although I am sure the two curves can run alongside each other for a distance.
But that is just my opinion you understand.
The Planck radiation law is a mathematical expression that describes the real-world measured values of black-body radiation, and does so over the entire spectrum of values for which it has been tested; that is my idea of a true representation. Polynomial or Fourier fits over some limited argument range are just that: fits. Almost any continuous function for which a set of sampled values exists can be represented by a mathematical expression that contains fewer parameters than the number of samples, to within some specified error bands. That doesn’t make those forced fits into explanations for the phenomenon represented.
So what functional regime was the “climate sensitivity” in back in the Precambrian, when CO2 was up in the 8000 ppm range?
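George’s fitting argument is easy to demonstrate: a least-squares logarithmic curve tracks a square-root function over a finite range well enough to pass for it on a noisy plot, even though sqrt is not a log. A minimal sketch (the interval [1, 10] is an arbitrary choice of mine):

```python
import math

# Sample y = sqrt(x) on [1, 10] and fit y ~ a*ln(x) + b by least squares.
xs = [1.0 + 0.09 * i for i in range(101)]
ys = [math.sqrt(x) for x in xs]
ls = [math.log(x) for x in xs]

n = len(xs)
sl, sy = sum(ls), sum(ys)
sll = sum(l * l for l in ls)
sly = sum(l * y for l, y in zip(ls, ys))
a = (n * sly - sl * sy) / (n * sll - sl * sl)  # slope, normal equations
b = (sy - a * sl) / n                          # intercept

# Worst-case mismatch over the interval: a few tenths on values up to ~3.2.
max_err = max(abs(a * l + b - y) for l, y in zip(ls, ys))
```

The best-fit log curve stays within roughly a tenth of the function’s range everywhere on the interval, close enough to “look logarithmic” within typical error bands, which is precisely George’s point about fits not proving functional form.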

KW
July 16, 2009 2:03 pm

So greatest X factor, or unknown…is?
H20 vapor?
Clouds?
CO2 forcing?
If indeed the temperature vs. CO2 graph is close on…then at worst…is our greatest increase of temperature ~2.5°C?
That’s assuming it is more than 95% confidence levels? Hah.
It would be so nice if a climate modeler would consider some of these viewpoints, and lend some credibility to our point of view, rather than feeling as though we are trying to attack them.

Frank Lansner
July 16, 2009 2:04 pm

Figures illustrating CO2 absorbing effect:
http://www.klimadebat.dk/forum/vedhaeftninger/radi4.jpg
Fig 1: CO2 absorption spectrum
Fig 2: water absorption spectrum
Fig 3: combined; Earth emits radiation under the black line (Planck)
– the white area under the black line is the “open window” where radiation goes out like a hole in a bucket.
Fig 4: higher up in the atmosphere. Simplified: CO2 absorption lines are slightly changed. This might increase CO2’s role when combined with water.
But, high in the atmosphere, the water content is low, so this effect is questionable. The hole in the bucket, “the open window,” is HUGE and radiation should be able to escape quite easily.
Bottom line: What Nature tells us very clearly is – as reported by Lubos Motl recently – that the higher-atmosphere temperature has stagnated for 15 years. If CO2 could do anything, the primary place we should see warming is the higher atmosphere.
But it ain’t happening. CO2 is dead.
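Frank’s “open window” can be put in numbers by integrating the Planck function. A sketch assuming a 288 K surface and the conventional 8-13 micron window (both assumptions of mine, not figures from his figures):

```python
import math

H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
KB = 1.380649e-23       # Boltzmann constant, J/K
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def planck(lam, t):
    """Blackbody spectral radiance at wavelength lam (m), temperature t (K)."""
    return (2.0 * H * C * C / lam ** 5) / math.expm1(H * C / (lam * KB * t))

def window_fraction(lam1, lam2, t, n=20000):
    """Fraction of total blackbody emission between lam1 and lam2
    (midpoint-rule integral over the band, divided by sigma*T^4/pi)."""
    dlam = (lam2 - lam1) / n
    band = sum(planck(lam1 + (i + 0.5) * dlam, t) * dlam for i in range(n))
    return band / (SIGMA * t ** 4 / math.pi)

frac = window_fraction(8e-6, 13e-6, 288.0)   # roughly 0.3 of surface emission
```

Around three-tenths of the surface’s thermal emission falls inside that band, which gives a rough sense of scale for the “hole in the bucket” he describes.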

MikeN
July 16, 2009 2:06 pm

Dave, you are right that Scenario A is the best match for the CO2 trend; however, the other trace gases have a very large effect in scenario A, and overall B could be the better match.
