How Sensitive is the Earth’s Climate?

Guest Post By Steve Fitzpatrick


Introduction

Projections of climate warming from general circulation models (GCM's) are based on a high sensitivity of the Earth's climate to radiative forcing from well mixed greenhouse gases (WMGG's).  This high sensitivity depends mainly on three assumptions:

1. Slow heat accumulation in the world's oceans delays the appearance of the full effect of greenhouse forcing by many (e.g., >20) years.

2. Aerosols (mostly from combustion of carbon based fuels) increase the Earth’s total albedo, and have partially hidden the ‘true’ warming effect of WMGG increases.  Presumably, aerosols will not increase in the future in proportion to increases in WMGG’s, so the net increase in radiative forcing will be larger for future emissions than for past emissions.

3. Radiative forcing from WMGG’s is amplified by strong positive feedbacks due to increases in atmospheric water vapor and high cirrus clouds; in the GCM’s, these positive feedbacks approximately double the expected sensitivity to radiative forcing.

However, there is doubt about each of the above three assumptions.

1.  Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001.  This indicates that a) ocean heat content does not lag many years behind the surface temperature, b) global average temperature and heat accumulation in the top 700 meters of ocean are closely tied, and c) the Hansen et al (4) projection in 2005 of substantial future warming ‘already in the pipeline’ is not supported by recent ocean and surface temperature measurements.  While there is no doubt a very slow accumulation of heat in the deep ocean below 700 meters, this represents only a small fraction of the accumulation expected for the top 700 meters, and should have little or no immediate (century or less) effect on surface temperatures. The heat content in the top 700 meters of ocean and global average surface temperature appear closely linked.  Short ocean heat lags are consistent with relatively low climate sensitivity, and preclude very high sensitivity.

2.  Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings.  There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future.  Considering the large uncertainties in aerosol effects, it is not even clear if the net effect, including black carbon, which reduces rather than increases albedo, is significantly different from zero.

3.  Amplification of radiative forcing by clouds and atmospheric humidity remains poorly defined.  Climate models do not explicitly include the behavior of clouds, which are orders of magnitude smaller than the scale of the models, but instead handle clouds using ‘parameters’ that are adjusted to approximate the expected behavior of clouds.  Adjustable parameters can of course also be tuned to make a model predict whatever warming is expected or desired.  Measured tropospheric warming in the tropics (the infamous ‘hot spot’), which would be caused by increases in atmospheric water content, falls far short of the warming projected for this part of the atmosphere by most GCM's.  This casts doubt on the amplification the GCM's assume from increased water vapor.

Many people, including this author, do not believe the large temperature increases (up to 5+ C for a doubling of CO2) projected by GCM's are credible.  A new paper by Lindzen and Choi (described at WUWT on August 23, 2009) reports that the total outgoing radiation (visible plus infrared) above the tropical ocean increases when the ocean surface warms, which suggests the climate feedback (at least in these tropical ocean areas) is negative, rather than positive as the GCM's all assume.

In spite of the many problems and doubts with GCM’s:

1) It is reasonable to expect that positive forcing, from whatever source, will increase the average temperature of the Earth's surface.

2) Basic physics shows that increasing infrared-absorbing gases in the atmosphere, like CO2, methane, N2O, ozone, and chloro-fluorocarbons, inhibits the escape of infrared radiation to space, and so does provide a positive forcing.

3) There has in fact been significant global warming since the start of the industrial revolution (beginning a little before 1800), concurrent with significant increases in WMGG emissions from human activities.

There really should be an increase in average surface temperature due to forcing from increases in infrared-absorbing gases.  This is not to say that there are no other plausible explanations for some or even most of the increase in global temperatures over the past 100+ years.  For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth's atmosphere) between the Little Ice Age and the late 20th century, which could account for a significant fraction of the observed warming.  But regardless of other possible contributions, it is difficult to dispute that greenhouse gases should lead to increased global average temperatures.  What matters is not whether the Earth will warm from increases in WMGG's, but how much it will warm and over what period.  The uncertainties and dubious assumptions in the GCM's make them not terribly helpful in making reasonable projections of potential warming, even under the worst-case assumption that WMGG's are the principal cause of warming.

Climate Sensitivity

If we knew the true climate sensitivity of the Earth (expressed as degrees increase per watt/square meter of forcing) and we knew the true radiative forcing due to WMGG's, then we could directly calculate the expected temperature rise for any assumed increases in WMGG's.  Fortunately, the radiative forcing effects for WMGG's are pretty accurately known, and these can be used in evaluating climate sensitivity.  An approximate value for climate sensitivity in the absence of any feedbacks, positive or negative, can be estimated from the change in blackbody emission temperature that is required to balance a 1 watt per square meter increase in heat input, using the Stefan-Boltzmann Law.  Assuming solar intensity is 1366 watts/M^2, and assuming the Earth's average albedo is ~0.3, the net solar intensity is ~239 watts/M^2, requiring a blackbody temperature of 254.802 K to balance incoming heat.  With 1 watt/M^2 more input, the required blackbody emission temperature increases to 255.069, so the expected climate sensitivity is (255.069 – 254.802) = 0.267 degree increase for one watt per square meter of added heat.
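The zero-feedback figure above is easy to reproduce. A minimal sketch in Python, using only the values stated in the text (the Stefan-Boltzmann constant is the one added input):

```python
# Zero-feedback climate sensitivity from the Stefan-Boltzmann law.
# Inputs from the text: solar constant 1366 W/m^2, albedo 0.3.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def blackbody_temp(flux_w_m2):
    """Emission temperature (K) needed to radiate the given flux."""
    return (flux_w_m2 / SIGMA) ** 0.25

solar = 1366.0
albedo = 0.3
net_flux = solar / 4.0 * (1.0 - albedo)   # ~239 W/m^2 absorbed

t0 = blackbody_temp(net_flux)        # ~254.8 K
t1 = blackbody_temp(net_flux + 1.0)  # ~255.1 K
sensitivity = t1 - t0                # ~0.267 K per W/m^2
print(f"{t0:.3f} K -> {t1:.3f} K, sensitivity = {sensitivity:.3f} K per W/m^2")
```

Because the flux goes as T^4, the same result follows analytically from dT/dF = T/(4F) ≈ 255/(4 × 239) ≈ 0.267 K per W/m^2.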

But solar intensity and the blackbody emission temperature of the earth both change with latitude, yielding higher emission temperature and much greater heat loss near the equator than near the poles.  The infrared heat loss to space goes as the fourth power of the emission temperature, so the net climate sensitivity will depend on the T^4 weighted contributions from all areas of the Earth.  Feedbacks within the climate system, both positive and negative, including different amounts and types of clouds, water vapor, changes in albedo, and potentially many others, add much uncertainty.

Measuring Earth’s Sensitivity

The only way to accurately determine the Earth’s climate sensitivity is with data.

Bill Illis produced an outstanding guest post on WUWT November 25, 2008, where he presented the results of a simple curve-fit model of the Earth's average surface temperature based on only three parameters:  1) the Atlantic multi-decadal oscillation index (AMO), 2) values of the Nino 3.4 ENSO index, and 3) the log of the ratio of atmospheric CO2 concentration to the starting CO2 concentration.  Bill showed that the best-estimate linear fit of these parameters to the global mean temperature data could account for a large majority of the observed temperature variation from 1871 to 2008.  He also showed that the AMO index and the Nino 3.4 index contributed little to the overall increase in temperature during that period, but did account for much of the variation around the overall temperature trend.  The overall trend correlated well with the log of the CO2 ratio.  In other words, the AMO and Nino 3.4 indexes could hindcast much of the observed variation around the overall trend, and that overall trend could be accurately hindcast by the log of the CO2 ratio.

There are a few implicit assumptions in Bill’s model.  First, the model assumes that all historical warming can be attributed to radiative forcing.  This is a worst case scenario, since other potential causes for warming are not even considered (long term solar effects, long term natural climate variability, etc.).  The climate sensitivity calculated by the model would be lowered if other causes account for some of the measured warming.

Second, the model assumes the global average temperature changes linearly with radiative forcing.  While this is almost certainly not correct for Earth’s climate, it is probably not a bad approximation over a relatively small range of temperatures and total forcings.  That is, a change of a few watts per square meter is small compared to the average solar flux reaching the Earth, and a change of a few degrees in average temperature is small compared to Earth’s average emissive (blackbody) temperature.  So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.

Third, the model assumes that the combined WMGG forcings can be accurately represented by a constant multiplied by the log of the ratio of CO2 to starting CO2.  While this may be a reasonable approximation for some gases, like N2O and methane (at least until ~1995), it is not a good approximation for others, like chloro-fluorocarbons, which did not begin contributing significantly to radiative forcing until after 1950, and which are present in the atmosphere at such low concentration that they absorb linearly (rather than logarithmically) with concentration.  In addition, chloro-fluorocarbon concentrations will decrease in the future rather than increase, since most long lived CFC’s are no longer produced (due to the Montreal Protocol), and what is already in the atmosphere is slowly degrading.

To make Bill’s model more physically accurate, I made the following changes:

1.  Each of the major WMGG’s is separated and treated individually: CO2, N2O, methane, chloro-fluorocarbons, and tropospheric ozone.

2.  Concentrations of each of the above gases are converted to net forcings, using the IPCC's radiation equations for CO2, methane, N2O, and CFC's (6), and an estimated radiative contribution from ozone increases.

3.  The change in solar intensity with the solar cycle is included as a separate forcing, assuming that measured intensity variations for the last three solar cycles (about 1 watt per square meter variation over a base of 1365 watts per square meter) are representative of earlier solar cycles, and assuming that sunspot number can be used to estimate how solar intensity varied in the past.

4.  The grand total forcing (including the solar cycle contribution), a 2-year trailing average of the AMO index, and the Nino 3.4 index are correlated against the HadCRUT3v global average temperature data.

This yields a curve fit model which can be used to estimate future warming by setting the Nino 3.4 and AMO indexes to zero (close to their historical averages) and estimating future changes in atmospheric concentrations for each of the infrared absorbing gases.
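The structure of this curve fit can be sketched with a short regression example. Everything below is illustrative: the series are synthetic stand-ins for the HadCRUT3v record and the forcing/index data, generated with a known sensitivity so the fit can be checked against it:

```python
# Sketch of the curve-fit model: ordinary least squares of global mean
# temperature on (2-yr trailing-average forcing, AMO, Nino 3.4).
# All series are synthetic stand-ins, NOT the real data used in the post.
import numpy as np

rng = np.random.default_rng(42)
n = 138           # years 1871..2008
true_sens = 0.27  # C per W/m^2, the sensitivity the post reports

forcing = np.linspace(0.0, 2.4, n)     # W/m^2, rising WMGG forcing
amo = rng.normal(0.0, 0.2, size=n)     # AMO index (stand-in)
nino34 = rng.normal(0.0, 1.0, size=n)  # Nino 3.4 index (stand-in)

# Two-year trailing average of the forcing (the lag that fit best).
f_avg = np.concatenate([[forcing[0]], (forcing[:-1] + forcing[1:]) / 2.0])

# Synthetic "observed" temperature anomaly with a little noise.
temp = (true_sens * f_avg + 0.10 * amo + 0.08 * nino34
        + rng.normal(0.0, 0.05, size=n))

X = np.column_stack([f_avg, amo, nino34, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(f"fitted sensitivity: {coef[0]:.3f} C per W/m^2")  # recovers ~0.27
```

The real model differs only in its inputs: measured forcings, index values, and temperatures replace the synthetic series, and the fitted coefficient on the forcing column is the climate sensitivity.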

[Fitzpatrick_Image2]
Figure 1: Model results with temperature projection to 2060.

To find the best estimate of lag in the climate (mainly from ocean heat accumulation), the model constants were calculated for different trailing averages of the total radiative forcing.  The best fit to the data (highest R^2) was for a two-year trailing average of the total radiative forcing, which gave a net climate sensitivity of 0.270 (+/-0.021) C per watt/M^2 (+/-2 sigma).  All longer trailing average periods yielded somewhat lower R^2 values and produced somewhat higher estimates of climate sensitivity.  A 5-year trailing average yields a sensitivity of 0.277 (+/- 0.021) C per watt/M^2, a 10-year trailing average yields a sensitivity of 0.289 (+/- 0.022) C per watt/M^2, and a 20-year trailing average yields a sensitivity of 0.318 (+/- 0.025) C per watt/M^2, ~18% higher than the two-year trailing average.  As discussed above, very long lags (e.g., 10-20+ years) appear inconsistent with recent trends in ocean heat content and average surface temperatures.

Oscillation in the radiative forcing curve (the green curve in Figure 1) is due to solar intensity variation over the sunspot cycle.  The assumed total variation in solar intensity at the top of the atmosphere is 1 watt per square meter (approximately the average variation measured over the last three solar cycles) for a change in sunspot number of 140.  Assuming a minimum solar intensity of 1365 watts per square meter and Earth’s albedo at 30%, the average solar intensity over the entire Earth surface at zero sunspots is (1365/4) * 0.7 = 238.875 watts per square meter, while at a sunspot number of 140, the average intensity increases to 239.05 watts per square meter, or an increase of 0.175 watt per square meter.  The expected change in radiative forcing (a “sunspot constant”) is therefore 0.175/140 = 0.00125 watt per square meter per sunspot.  When different values for this constant are tried in the model, the best fit to the data (maximum R^2) is for ~0.0012 watt/M^2 per sunspot, close to the above calculated value of 0.00125 watt/M^2 per sunspot.
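The arithmetic behind this "sunspot constant" can be checked directly; all inputs are the values stated in the paragraph above:

```python
# Deriving the "sunspot constant" used in the model.
# Inputs from the text: TSI 1365 W/m^2 at zero sunspots, +1 W/m^2 at a
# sunspot number of 140, albedo 0.30.
tsi_min = 1365.0
albedo = 0.30
spots_at_max = 140.0

def absorbed(tsi):
    """Global-mean absorbed flux: divide by 4 for the sphere, keep 70%."""
    return tsi / 4.0 * (1.0 - albedo)

f0 = absorbed(tsi_min)         # 238.875 W/m^2 at zero sunspots
f1 = absorbed(tsi_min + 1.0)   # 239.05 W/m^2 at 140 sunspots
per_spot = (f1 - f0) / spots_at_max
print(f"{per_spot:.5f} W/m^2 per sunspot")  # 0.00125
```

The fitted value of ~0.0012 W/m^2 per sunspot then falls out of the regression as a check on this calculation rather than as an independent assumption.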

[Fitzpatrick_Image3]
Figure 2: Scatter plot of the model versus historical temperatures.
[Fitzpatrick_Image4]
Figure 3: Comparison of the model's temperature projection under ‘Business as Usual’ with the IPCC projection of ~0.2 C per decade, consistent with GCM projections.

Regional Sensitivities

Amplification of sensitivity is the ratio of the actual climate sensitivity to the sensitivity expected for a blackbody emitter.  The sensitivity from the model is 0.270 C per watt/M^2, while the expected blackbody sensitivity is 0.267 C per watt/M^2, so the amplification is 1.011.  An amplification very close to 1 suggests that all the negative and positive feedbacks within the climate system are roughly balanced, and that the average surface temperature of the Earth increases or decreases approximately as would a blackbody emitter subjected to small variations around the average solar intensity of ~239 watts/M^2 (that is, as a blackbody would vary in temperature around ~255 K).  This does not preclude a range of sensitivities within the climate system that average out to ~0.270 C per watt/M^2; sensitivity may vary with season, latitude, local geography, albedo/land use, weather patterns, and other factors.  The temperature increase due to WMGG's may, and indeed should, differ significantly by region and season, so the practical importance of WMGG-driven warming will differ regionally and seasonally as well.

Credibility of Model Projections

Some may argue that any curve-fit model based on historical data is likely to fail in making accurate predictions, since the conditions that applied during the hindcast period may be significantly different from those in the future.  But if the curve-fit model includes all important variables, then it ought to make reasonable predictions, at least until/unless important new variables are encountered in the future.  Examples of important new climate variables are a major volcanic eruption or a significant change in ocean circulation.  The probability of encountering important new variables increases with the length of the forecast, of course.  So while a curve-fit climate model's predictions will have considerable uncertainty far in the future (e.g., 100 years or more), forecasts over shorter periods are likely to be more accurate.

To demonstrate this, the model constants were calculated using temperature, WMGG forcings, AMO, and Nino3.4 data for 1871 to 1971, but then applied to all the 1871 to 2008 data (Figure 4).  The model's calculated temperatures represent a ‘forecast’ from 1972 through 2008, or 36 years.  Since the model constants came only from pre-1972 data, the model has no ‘knowledge’ of the temperature history after 1971, and the 1972 to 2008 forecast is a legitimate test of the model's performance.  The model's 1972 to 2008 forecast performance is reasonably good, with very similar deviations between the model and the historical temperature record in the hindcast and forecast periods.

[Fitzpatrick_Image5]
Figure 4: Model temperature forecast for 1972 through 2008, with model constants based on 1871 to 1971 data. The model has no “knowledge” of the temperature record after 1971.

The model fit to the temperature data in the forecast period is no worse than in the hindcast period.  The climate sensitivity calculated using only 1871 to 1971 data is similar to that calculated using the entire data set: 0.255 C per watt/M^2 versus 0.270 C per watt/M^2.  A model forecast starting in 2009 will not be perfect, but the 1972 to 2008 forecast performance suggests that it should be reasonably close to correct over the next 36+ years.

Emissions Scenarios

The model projections in Figure 1 (2009 to 2060) are based on the following assumptions:

a) The year-on-year increase in CO2 concentration in the atmosphere rises to 2.6 PPM per year by 2015 (or about 25% higher than recent rates of increase), and then remains at 2.6 PPM per year through 2060.  Atmospheric concentration reaches ~518 PPM by 2060.

b) N2O concentration increases in proportion to the increase in CO2.

c) CFC's decrease by 0.25% per year.  The actual rate of decline ought to be faster than this, but large increases in releases of short-lived refrigerants like R-134a and non-regulated fluorinated compounds may offset a large portion of the decline in regulated CFC's.

d) The concentration of methane, which has been constant for the last ~7 years at ~1,800 parts per billion, increases by 10 PPB per year, reaching ~2,370 PPB by 2060.

e) Tropospheric ozone (which forms in part from volatile organic compounds, VOC's) increases in proportion to increases in atmospheric CO2.

The above represent pretty much a “business as usual” scenario, with fossil fuel consumption in 2060 more than 70% higher than in 2008, and with no new controls placed on other WMGG’s.  The projected temperature increase from 2008 to 2060 is 0.6834 C, or 0.131 C per decade.  This assumes of course that WMGG’s are responsible for all (or nearly all) the warming since 1871; if a significant amount of the warming since 1871 had other causes, then future warming driven by WMGG’s will be less.
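A rough check of the "business as usual" CO2 path is straightforward. The 2008 starting concentration (~386 PPM) and the shape of the ramp up to 2.6 PPM/year are assumptions of this sketch, not values given in the post:

```python
# Sketch of the "business as usual" CO2 path: the annual increment ramps
# to 2.6 ppm/yr by 2015 and then holds through 2060.
# ASSUMED: 2008 concentration ~386 ppm, 2008 increment ~2.0 ppm/yr,
# linear ramp shape. Only the 2.6 ppm/yr plateau comes from the post.
co2 = 386.0
inc_2008 = 2.0

for year in range(2009, 2061):
    if year <= 2015:
        inc = inc_2008 + (2.6 - inc_2008) * (year - 2008) / 7.0  # ramp
    else:
        inc = 2.6                                                # plateau
    co2 += inc
print(f"CO2 in 2060: ~{co2:.0f} ppm")  # ~519 ppm; the post quotes ~518
```

The result lands within a couple of PPM of the scenario's stated ~518 PPM, so the quoted endpoint is consistent with the stated growth rates.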

Separation of the different contributions to radiative forcing allows projections of future average temperatures under different scenarios for reductions in the growth of fossil fuel usage, with separate efforts to control emissions of methane, N2O, and VOC’s (leading to tropospheric ozone).

[Fitzpatrick_Image7]
Figure 5: Reduced warming via controls on non-CO2 emissions and gradually lower CO2 emissions growth.

One such scenario can be called the “Efficient Controls” scenario.  The year-on-year increase in CO2 in the atmosphere rises to 2.6 PPM by 2014, and then declines starting in 2015 by 0.5% per year (that is, a 2.6 PPM increase in 2014, 2.587 PPM in 2015, 2.574 PPM in 2016, etc.).  Methane concentrations are maintained at current levels via controls installed on known sources, CFC concentration falls by 0.5% per year due to new restrictions on currently non-regulated compounds, and N2O and tropospheric ozone increases are proportional to the (somewhat lower) CO2 increases.  These are far from small changes, but they probably could be achieved without great economic cost by shifting most electric power production to nuclear (or non-fossil alternatives where economically viable), and simultaneously taxing CO2 emissions worldwide at an initially low but gradually increasing rate to promote worldwide improvements in energy efficiency.  Under these conditions, the predicted temperature anomaly in 2060 is 0.91 degree (versus 0.34 degree in 2008), or a rise of 0.109 degree per decade.  Atmospheric CO2 would reach ~507 PPM by 2060, and CO2 emissions in 2060 would be about 50% above 2008 emissions.  By comparison, the “business as usual” case produces a projected increase of 0.131 C per decade through 2060, with atmospheric CO2 reaching ~519 PPM by 2060.  So at (relatively) low cost, warming through 2060 could be reduced by a little over 0.11 C compared to business as usual.

A “Draconian Controls” scenario, with new controls on fluorinated compounds, methane and VOC’s, and with the rate of atmospheric CO2 increase declining by 2% each year, starting in 2015, shows the expected results of a very aggressive worldwide program to control CO2 emissions.  The temperature anomaly in 2060 is projected at 0.8 C, for a rate of temperature rise through 2060 of 0.088 degree per decade, or ~0.11 C lower temperature in 2060 than for the “Efficient Controls” scenario.  Under this scenario, the concentration of CO2 in the atmosphere would reach ~480 PPM by 2060, but would rise only ~25 PPM more between 2060 and 2100.  Total CO2 emissions in 2060 would be ~15% above 2008 emissions, but would have to decline to the 2008 level by 2100.  Whether the potentially large economic costs of draconian emissions reductions are justified by a ~0.11C temperature reduction in 2060 is a political question that should be carefully weighed.
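The CO2 paths for the two control scenarios can be sketched the same way. The 2008 starting concentration and the ramp shape are assumptions of this sketch, so the results only roughly match the ~507 and ~480 PPM figures quoted above:

```python
# Sketch of the control scenarios: the annual CO2 increment ramps to
# 2.6 ppm/yr by 2014, then shrinks each year by a fixed fraction
# (0.5%/yr = "Efficient Controls", 2%/yr = "Draconian Controls").
# ASSUMED: 2008 concentration ~386 ppm, 2008 increment ~2.0 ppm/yr,
# linear ramp shape; the decline rates come from the post.
def co2_in_2060(decline_per_year):
    co2, base_inc = 386.0, 2.0
    for year in range(2009, 2061):
        if year <= 2014:
            step = base_inc + (2.6 - base_inc) * (year - 2008) / 6.0
        else:
            step = 2.6 * (1.0 - decline_per_year) ** (year - 2014)
        co2 += step
    return co2

print(f"Efficient: ~{co2_in_2060(0.005):.0f} ppm")  # ~507; post quotes ~507
print(f"Draconian: ~{co2_in_2060(0.02):.0f} ppm")   # ~477; post quotes ~480
```

Both endpoints come out within a few PPM of the figures in the text, which is as close as this simplified ramp can be expected to get.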

[Fitzpatrick_Image8]
Figure 6: Draconian emissions controls may reduce average temperature in 2060 by ~0.21 C compared to business as usual.

Conclusions

The model shows that the climate sensitivity to radiative forcing is approximately 0.27 degree per watt/M^2, based on the assumption that radiative forcing from WMGG's has caused all or nearly all the measured temperature increase since ~1871.  This corresponds to a response of ~1 C for a doubling of CO2 (with other WMGG's remaining constant).  Much higher climate sensitivities (e.g., 0.5 to >1.0 C per watt/M^2, or 1.85 C to >3.71 C for a doubling of CO2) appear to be inconsistent with the historical record of temperature and measured increases in WMGG's.
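The conversion from the fitted sensitivity to a CO2-doubling response uses the standard IPCC forcing expression for CO2, dF = 5.35 ln(C/C0) W/m^2:

```python
# Converting the fitted sensitivity to a CO2-doubling response using the
# IPCC simplified forcing expression dF = 5.35 * ln(C/C0).
import math

sensitivity = 0.27                  # C per W/m^2, from the model
forcing_2x = 5.35 * math.log(2.0)   # ~3.71 W/m^2 for doubled CO2
dT = sensitivity * forcing_2x
print(f"doubling response: {dT:.2f} C")  # ~1.0 C
```

The same multiplier (~3.71 W/m^2 per doubling) converts the higher sensitivities quoted above into their doubling responses of 1.85 C and up.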

Assuming no significant changes in the growth pattern of fossil fuels, and no additional controls on other WMGG’s, the average temperature in 2060 may reach ~0.68C higher than the 2008 average.  Modest steps to control non-CO2 emissions and gradually reduce the rate of increase in the concentration of CO2 in the atmosphere could yield a reduction in WMGG driven warming between 2008 and 2060 of ~15% compared to no action.  A rapid reduction in the rate of growth of atmospheric CO2 would be required to reduce WMGG driven warming between 2008 and 2060 by ~30% compared to no action.

334 Comments
Jacob Mack
August 10, 2009 5:40 pm

Tom: “Why doesn’t LWR get reemitted equally in all directions? You certainly aren’t implying that gravity comes into play are you?”
No, I am not implying gravity.
There are pressure changes at different altitudes as well as temperature changes, which will influence how a given gas will absorb and emit LWR. Check out Peter Atkins' 2009 book Elements of Physical Chemistry, co-written by Julio de Paula, on Google Books, but first check out the work of Spencer Weart, “The Discovery of Global Warming,” also on Google Books.

Jacob Mack
August 10, 2009 5:44 pm

Pamela Gray (17:08:35) :
“Jacob, shortwave and longwave radiation of Sunlight 101. From the description found here, one can easily reason that these variables create a very noisy data stream of how much gets in, and how much is reflected.”
This site is an oversimplification of the empirical data and the physics, not just of the GCM approximations. You are also neglecting GHG mixing and the reduction in ice cover (albedo).

Steve Fitzpatrick
August 10, 2009 5:45 pm

Jacob Mack (16:28:50) :
“I would suggest that AGW skeptics [snip] see Spencer Weart’s work.”
Not a very constructive start.
Should I reply by sending you to read dissertations by Richard Lindzen? Better that you stick to issues related to the thread. Did you read my post? Did you have doubts, questions, or suggestions? Do you think that there is factually incorrect information presented? If so, then I would be happy to address those subjects.
Sending me to read what “this authority” says is at least a bit odd; I have some 35 years experience in chemistry, physics, and nanotechnology, and do not need you to suggest that I become better informed on the basic technical issues of AGW.
Pleeeease!

DaveE
August 10, 2009 5:50 pm

If I remember correctly, the absorption bands of CO2 are the ~4, 7.5 & 15 µm bands.
4 & 7.5 are pretty well covered by H2O, so that leaves 15!
That’s just the poles, North & South! South cooling, North warming, where’s the CO2?
DaveE.

crosspatch
August 10, 2009 5:50 pm

” Jacob Mack (17:14:41) : ”
I don’t believe I am.
And by that I mean that the pole is at the same altitude now as it was 50 years ago. Any increase in CO2 greenhouse should have a much greater impact in the polar region because the air is so dry. CO2 plays a much greater role in any atmospheric greenhouse at the pole than anywhere else on the planet. Most of Antarctica has been cooling with the exception of the Western peninsula and that is due to wind currents.
The impact of CO2 warming has not been documented anywhere. NONE of the predicted indications have been observed, not a single one.

Jacob Mack
August 10, 2009 5:57 pm

Steve,
the post was not directed at you, for starters. Secondly, I did make a post regarding your work briefly; you grossly underestimate the climate sensitivity. I am still going through your post with a fine-tooth comb, but I will comment directly regarding your calculations, assumptions, and methods. I am referring other posters to this work as it is important work that both you and they neglect to mention. Now, several times I have highlighted the physics and findings that you have neglected to cover; mainly the CO2 in the upper atmosphere, which will most certainly lead to global warming, and the water vapor feedback, which you clearly underestimate. I will be more specific soon and show you where the chemistry and physics do not add up, but that is for when I have more time to give your post my undivided attention. Also, you still hold that GHG's lead to some, albeit mild, global warming, so you are hardly a [snip]; and as far as skepticism goes, your work is not to date repeatable, validated, and subject to peer review, so we shall wait and see how valid many of your claims are. I can tell you, though, that your long career and experience with chemistry and physics is impressive, but you make several minor errors that make the warming look far more negligible in a future prediction than it already is, which is of course impossible. You may need a review in atmospheric physics and physical chemistry, my friend.
Reply: Future use of the term “denier” as a pejorative will lead to deletion of posts without notice or explanation. ~ charles the moderator.

Jacob Mack
August 10, 2009 5:59 pm

I think Steve, that we should discuss this physics and chemistry of AGW in depth here real soon.

Jacob Mack
August 10, 2009 6:15 pm

Crosspatch you are mistaken:
Quote: “Scientists on Wednesday unveiled evidence to suggest global warming is affecting all of Antarctica, home to the world’s mightiest store of ice.
The average temperature across the continent has been rising for the last half century and the finger of blame points at the greenhouse effect, they said.
The research, published in the British journal Nature, takes a fresh look at one of the great unknowns — and dreads — in climate science.” End quote.
dsc.discovery.com/news/2009/01/…/antarctica-warming.html
Also see: http://www.cnn.com/2009/WORLD/…warmingantarctic/index.html
And: BE Barrett, KW Nicholls, T Murray, AM Smith … – Geophysical Research Letters, 2009 – agu.org
Also there are pressure and temperature differences between the Antarctic elevation and the upper atmosphere; there is also more precipitation in the Antarctic than in the upper atmosphere.

George E. Smith
August 10, 2009 6:19 pm

“”” Jacob Mack (16:28:50) :
I would suggest that AGW skeptics [snip] see Spencer Weart’s work. Just google him, and you will find an immense resource of information regarding why AGW is a fact from the standpoing of solid physics. The upper atmosphere contains little to no water vapor and therefore any contribution made by CO2 would have a net warming effect, since it acts as a blanket. Also, the lower and middle troposphere is far from being satuarted as of yet, but even if it were, the C02 in the upper atmosphere where it is cool and dry would absorbs wavelengths at different bands at varying altitudes and thus reflect LWR back down towards Earth. “””
Now why would you say “the upper atmosphere contains little to no water vapor”? Why would that be, other than that the upper atmosphere contains little to no gases of any kind? The upper atmosphere, no matter how rarefied, is perfectly capable of sustaining a water vapor content in accordance with the saturated vapor pressure of water as a function of temperature; and even at -90 C, the earth's atmosphere still contains water vapor, so I don't see why it should disappear with altitude any more than CO2 would.
And as that upper atmosphere becomes ever more rarified so does the density of CO2 molecules up there so the GHG warming effect also diminishes. Oh maybe the local atmospheric temperature still changes somewhat since the reduced amount of captured IR long wave radiation is shared with a reduced mass of atmospheric gases; or when high enough the mean free path may be long enough for the CO2 to simply decay to the ground state and re-emit the absorbed photon. And that re-emission spectrum would be quite narrow, because of the lowered temperature and density so the Doppler and collision broadening would be reduced.
That narrower CO2 absorption/emission spectrum would have quite a chore making it through the denser, warmer lower atmosphere, with its broader CO2 absorption band. Remember that each re-absorption and eventual emission from either the excited GHG or the atmosphere results in an essentially isotropic re-radiation pattern; so roughly half of the total flux can be expected to be up and half down at each such level. The upward path would be expected to be favored over the downward, because of the temperature and density relaxation with altitude.
And for one more time, can I re-iterate that the GHG components of the atmosphere do NOT reflect long wave radiation from the surface. The process is an inelastic scattering process, and not an optical reflection. Reflection does not involve a frequency shift.
As for learning from Spencer Weart; see letters to the editor in “Physics Today” for January 2005; where I casually mentioned that when floating sea ice melts; the laws of physics require that the sea level will fall; not rise, and not stay fixed either.
Weart pooh poohed that idea; and substituted his own problem in place of mine, simply asserting that when the oceans warm the water expands, and the sea level rises. No doubt true; but totally unrelated to my comment about “when floating sea ice melts.”
So I’ll find a more on the ball teacher thank you.

Jacob Mack
August 10, 2009 6:20 pm

Dave,
there is also 17 as well from CO2, and varying behaviors of CO2 under different altitude conditions and mixing ratios in relation to varying amounts of water vapor, N2O and CH4. Of course, 1/2 ρV^2 is the dynamic pressure; at higher altitudes air density decreases (D = M/V) and thus pressure does as well, so gases will tend to spread out more under such circumstances, but become less thermally excited at higher altitudes. And yet, in the absence of other significant GHGs, some of the CO2 radiation does go to space, while the rest is held in and re-radiated to the Earth.
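The pressure and density fall-off with altitude mentioned above can be sketched with the isothermal barometric formula, ρ = ρ0·exp(−h/H). This is only a rough illustration under assumed round-number values (sea-level density 1.225 kg/m³ and an ~8.5 km scale height), not a full atmosphere model:

```python
import math

def air_density(h_m, rho0=1.225, scale_height_m=8500.0):
    """Approximate air density (kg/m^3) at altitude h_m using the
    isothermal barometric formula rho = rho0 * exp(-h / H).
    rho0 and scale_height_m are rough, assumed values."""
    return rho0 * math.exp(-h_m / scale_height_m)

# Density falls to ~1/e of its sea-level value after one scale height
# (~8.5 km); the number density of CO2 molecules falls off the same way,
# since CO2 is well mixed.
print(round(air_density(0), 3))      # 1.225
print(round(air_density(8500), 3))   # 0.451
```

The same exponential thinning applies to every well-mixed gas, which is the point made above: the GHG effect per unit volume weakens with altitude simply because there are fewer molecules.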

Jacob Mack
August 10, 2009 6:23 pm

Quote: “Reply: Future use of the term “denier” as a pejorative will lead to deletion of posts without notice or explanation.” ~ charles the moderator.’
I do not engage in name-calling; that is why I put ” or ‘ around such words, Charles.
I in no way meant it with any intent of contempt; the quotation marks were actually to indicate that I did not mean my statements in a pejorative manner.
[Reply: Best to avoid using the “D” word entirely. ~dbstealey, moderator]

Bill Illis
August 10, 2009 6:26 pm

I’ve redone some charts I posted from above.
Here is how it plays out when you separate the solar forcing from the greenhouse effect. This IS the greenhouse effect.
Each extra Watt of GHG forcing is really only adding 0.18C right now. The 2.4 extra Watts assumed to have occurred since 1850 or so would translate into 0.5C of warming (conveniently close to what has actually occurred).
To get to +3.0C by 2100, GHGs will have to add an extra 13 Watts [which is an impossible amount – you can do your own math for CO2 alone with this formula – Watts = 5.35 ln(CO2future/387)]
http://img524.imageshack.us/img524/6840/sbearthsurfacetemp.png
Each extra watt is now only adding 0.18C.
http://img43.imageshack.us/img43/2608/sbtempcperwatt.png
The climatologists rely on these equations for everything. They underpin most of the physics and the models themselves. As far as I can tell, they have not calculated how each extra Watt will affect temperatures; they are just using the averages over the whole spectrum (surprising, since they should know these are logarithmic/exponential equations).
This is a falsification as far as I am concerned.
This is also more-or-less consistent with Trenberth’s new Earth Radiation Budget paper. He’s bumped the surface Watts from 390 (my charts) to 396 assuming there is a lag in emissions from the surface ocean and deserts but this change also seems like an impossible amount.
http://www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/BAMSmarTrenberth.pdf
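The logarithmic forcing formula quoted above can be checked directly. A minimal sketch of the arithmetic (the 5.35·ln(C/C0) expression and the 387 ppm baseline are taken from the comment; everything else is illustrative):

```python
import math

def co2_forcing(c_future_ppm, c_now_ppm=387.0):
    """Radiative forcing (W/m^2) from the logarithmic approximation
    dF = 5.35 * ln(C / C0)."""
    return 5.35 * math.log(c_future_ppm / c_now_ppm)

def co2_for_forcing(watts, c_now_ppm=387.0):
    """Invert the formula: the CO2 concentration (ppm) needed to
    produce a given forcing."""
    return c_now_ppm * math.exp(watts / 5.35)

# Doubling CO2 gives the familiar ~3.7 W/m^2:
print(round(co2_forcing(2 * 387), 2))   # 3.71
# The 13 W figure in the comment would need roughly 4400 ppm of CO2:
print(round(co2_for_forcing(13.0)))
```

The second result shows why the comment calls 13 W “an impossible amount”: from CO2 alone it would require more than an elevenfold increase over 387 ppm.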

August 10, 2009 6:27 pm

Steve Fitzpatrick (17:45:30),
He’s trolling. And he’s bringing up Spencer Weart, because Weart is realclimate’s tame pet. If Weart had the …um …gumption, he would write an article for the web’s “Best Science” site like you did, and let people try to knock it down if they can. That’s how real scientists do it. Even Dr. Steig wrote an article that was posted here.
But Mr. Weart likes being scratched behind his ears, so he hides out at RC and similar agenda-driven sites — where he never has to face any uncomfortable questions. Because he hides out from answering inconvenient questions, he carries little weight here. Zero, actually.
Kudos to you, BTW, for an interesting article — and for being willing to respond to numerous questions.

Steve Fitzpatrick
August 10, 2009 6:31 pm

Jacob Mack (17:59:21) :
“I think Steve, that we should discuss this physics and chemistry of AGW in depth here real soon.”
I would be happy to do so, if you can keep the conversation civil and constructive.
Most everyone honestly believes what they say, even if they may sometimes be mistaken. A constructive dialog requires that anyone involved enter admitting that they may sometimes be wrong. If you can enter an exchange honestly saying that you may sometimes be wrong, then it will be worthwhile. If you enter certain that you (or worse, some distant authority you will point to) is 100% correct, then any discourse would be a terrible waste of time.

Pamela Gray
August 10, 2009 6:35 pm

Wow. I didn’t know that quotes were so powerful.

crosspatch
August 10, 2009 6:45 pm

“Scientists on Wednesday unveiled evidence to suggest global warming is affecting all of Antarctica, home to the world’s mightiest store of ice.”
Oh, I take it that you are not familiar with the errors that were discovered in that “study”. That is Steig’s paper, I believe. It has been shown to be in error. Steig has produced a corrigendum which you can read about here. Basically, the error bars are so wide now that the result of his study is “temperatures have risen 0.12 degrees +/- 0.12 degrees.”
A station had great weight in the study but the data attributed to that station didn’t come from there. It was actually a splicing of data from several other stations. That study is, at this point, pretty much debunked.
Please, feel free to try again.

Jacob Mack
August 10, 2009 6:48 pm

Steve, no not 100% certainty, but I find your doubts of the “assumptions,” to be questionable, without further reference to data. I am confused as to how you make a confident statement like:
(1.) “Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001.”
Next: (2.) “Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings. There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future.”
(1.) What about the high heat capacity and specific heat of water, the changes in salinity recently noted, the higher ocean CO2 content, and the lagging conduction of heat to the atmosphere from the ocean? (only about 10% of total heat, but evaporation and water vapor feedbacks come into play as well)
I will stop there for now, but it seems to me that the physics and chemistry (and the intricate weather patterns and long-term climate trends, say 50 years to present) indicate otherwise for aerosols. There is significant research on aerosol scattering effects, so I am confused by your statement that there is no evidence regarding their current and future effects.

tallbloke
August 10, 2009 6:50 pm

Steve Fitzpatrick (16:59:37) :
tallbloke (15:45:04) :
I obsess about nothing, and it would help maintain civility to avoid this kind of non-constructive comment.

Apologies, your asking a question and then ignoring the reply annoyed me.
“What would the climate sensitivity to co2 look like if the solar contribution to the warming was, say, 75% ? Simply 1/4 of your figure, or is it more complicated?”
I replied that I knew of no data set that could be used to make solar contribution 75%.

Thus avoiding answering the question. Which you still are… I pointed out the uncertainty in TSI values, and asked you to treat my question as speculative. But instead of giving me a single, clear answer, you have asked another four questions.
When you’ve answered my single reasonable question, I’ll answer your additional ones.

crosspatch
August 10, 2009 6:53 pm

In fact, one might want to peruse the process of reconstructing Steig’s data and methods by perusing these threads (which continue beyond the first page).

August 10, 2009 6:58 pm

Steve Fitzpatrick (15:02:40) :
I was not aware that Lean had changed her mind about the 2 watts change since the little ice age. It certainly was not my intent to misrepresent her current views. The calculations I did were based on recently measured changes in intensity over the solar cycle (peak to valley) of ~1 watt per square meter at the top of the atmosphere, and the model assumed this variation was the same since 1871.
The peak to valley change depends on the size of the solar cycle and varies by a factor of 3 or more. A good median value is 0.1% of TSI, or ~1.4 W/m2; for some cycles larger, for some smaller.
This works out to ~0.7 * 0.25 = 0.175 watt per square meter, and an expected solar signal from the solar cycle of 0.047C (peak to valley) for a sensitivity of 0.27 degree per watt per square meter.
A simpler calculation is that the solar signal would then be 1/4 of 0.1% of the effective temperature or 0.025% of 288K = 0.07K [or C].
What I found interesting was that the best model fit to the temperature data corresponded to ~0.168 watt per square meter, remarkably (at least to me) close to the 0.175 watt per square meter that would be expected based on the measured variation over the last few cycles.
In view of my simple calculation above where the sensitivity doesn’t enter at all I don’t see the relevance of the correspondence.
So for what it is worth: the model is consistent with no substantial change in cyclical variation over the past 130 years.
And I don’t understand this statement. Stefan-Boltzman’s law hasn’t changed. So what is this ‘cyclical variation’?
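Leif’s shortcut above relies on the Stefan–Boltzmann relation T ∝ S^(1/4): a fractional change in solar input maps to one quarter of that fractional change in temperature. A sketch using the numbers from the comment (288 K and a 0.1% peak-to-valley TSI swing):

```python
def solar_cycle_signal(tsi_fraction=0.001, t_surface_k=288.0):
    """Temperature response (K) to a fractional TSI change, from
    T ~ S^(1/4), so dT/T = (1/4) * dS/S.  Defaults are the values
    quoted in the comment: a 0.1% swing and a 288 K surface."""
    return 0.25 * tsi_fraction * t_surface_k

# 0.1% peak-to-valley TSI swing -> ~0.07 K, matching the comment.
print(round(solar_cycle_signal(), 3))   # 0.072
```

Note that no climate sensitivity parameter appears anywhere in this estimate, which is the point of Leif’s objection: the expected solar-cycle signal follows from blackbody scaling alone.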

August 10, 2009 7:04 pm

tallbloke (18:50:01) :
I pointed out the uncertainty in TSI values
The uncertainty is smaller than the solar min to max variation so is hardly relevant. Even a 1 W/m2 uncertainty translates into a 0.05K temperature signal which is negligible in the current context.

Jacob Mack
August 10, 2009 7:25 pm

Fair enough Crosspatch, but here are other recent papers, some preliminary or up for peer review, while others are already published:
http://www.atmos-chem-phys-discuss.net/9/…/2009/acpd-9-1703-2009.pdf
http://www.sciencemag.org/cgi/content/abstract/311/5769/1914
http://www.newscientist.com/article/dn16740-global-warming-reaches-the-antarctic-abyss.html (the attribution to CO2 is not made prematurely; in fact, the researchers state that it is too soon to know)
http://netbnr.net/loc.html?http://www.climatehotmap.org/antarctica.html
http://netbnr.net/loc.html?http://www.sciencedaily.com/releases/2006/03/060330181319.htm
I also want to add that precipitation will slow down Antarctic warming, and some evidence suggests that El Nino will also temporarily suppress warming magnitude; yet the Antarctic is still warming, as is Greenland, and the Arctic is warming at an even faster pace.
More on all this later… I also have preparations to make for Steve, soon as he answers my initial questions.

Steve Fitzpatrick
August 10, 2009 7:40 pm

Jacob Mack (18:48:21) :
“Steve, no not 100% certainty, but I find your doubts of the “assumptions,” to be questionable, without further reference to data.”
I should hope a lot less than 100% certainty. The IPCC models differ by a factor of about 3 in their projections of warming through 2100. At a minimum, that ought to lower the certainty level a fair amount below 100%; they can’t all be correct. If you have one particular model that you think is almost certainly correct, then OK, but please tell me which model that is.
I am confused as to how you make a confident statement like:
(1.) “Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001.”
There have been four published studies (that I am aware of) where total heat content in the top 700 meters of ocean was calculated based on Argo float data (following the correction of errors in a small subset of floats, of course), as well as independent confirmation by ocean mass and altimeter readings (satellite). One showed a modest fall in heat from 2003 to 2008, two showed a very slight fall to flat heat content, and one a slight increase in ocean heat. The best available data is that there has been no heat accumulation (or a very slight fall) in the top 700 meters of ocean since 2003.
“Next: (2.) Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings. There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future.”
The IPCC’s uncertainty limits for net aerosol forcings range from tiny to huge. All global estimates based on measurements I have seen indicate a gradual fall (a total reduction amounting to about 50% of the 1993 value) since the effects of Pinatubo ended in about 1993. Measured aerosol effects declined through at least 2005 (the well-known global brightening), which should have increased solar intensity and heat accumulation in the ocean… it did not happen.
“What about the high heat capacity and specific heat of water, the changes in salinity recently noted, the higher ocean C02 content, and the lagging conduction of heat to the atmosphere from the ocean? (only about 10% of total heat, but evaporation and water vapor feedbacks come into play as well)”
I honestly have no idea what you are trying to say in the above paragraph. Perhaps you could explain in a different way.
Jacob, what I see is that for extreme greenhouse forced warming to be correct, you have to believe 1) that ocean heat accumulation is extremely slow (lagging far behind the surface, if you will), 2) that human generated aerosols have canceled a large fraction of radiative warming, 3) that this “aerosol cancellation” is going to decline in the near future, and 4) that the total of water vapor and cloud feedbacks is strongly positive.
In addition, all these things must be correct for the whole structure to “hold together”; if ocean lags do not extend to hundreds of years, then the forcing can’t be what is claimed, if the forcing is not what is claimed then the feedbacks can’t be right, etc., etc. Simulating the atmosphere and ocean is a remarkably difficult problem, and this explains the wide range of model predictions (produced by groups of dedicated and honest scientists and programmers, no doubt)…. but none of it inspires confidence in their predictions. Finally, please note that most of the models do not even correctly predict the average surface temperature of the Earth…. today.

Jacob Mack
August 10, 2009 7:43 pm

Please see below, regarding the greater volume of fresh meltwater relative to the denser salt water it displaces (D = M/V). Again, the differences in water’s physical characteristics due to salinity cannot be ignored.
“In a paper titled “The Melting of Floating Ice will Raise the Ocean Level” submitted to Geophysical Journal International, Noerdlinger demonstrates that melt water from sea ice and floating ice shelves could add 2.6% more water to the ocean than the water displaced by the ice, or the equivalent of approximately 4 centimeters (1.57 inches) of sea-level rise.
The common misconception that floating ice won’t increase sea level when it melts occurs because the difference in density between fresh water and salt water is not taken into consideration. Archimedes’ Principle states that an object immersed in a fluid is buoyed up by a force equal to the weight of the fluid it displaces. However, Noerdlinger notes that because freshwater is not as dense as saltwater, freshwater actually has greater volume than an equivalent weight of saltwater. Thus, when freshwater ice melts in the ocean, it contributes a greater volume of melt water than it originally displaced.”
Also, this is discussed in General Chemistry (by most college professors), as is thermal expansion. So there is a double net positive effect here; many HS textbooks and 8th-9th grade science websites get this wrong and assert (incorrectly) that sea ice melt would contribute little to nothing to sea level rise. Once you take a college-level physics/chemistry course it becomes clear that salinity levels affect the heat capacity of water and the density/volume of melting ice which displaces the water, and thus sea level rise. If we were discussing melting of fresh ice into fresh water, then the net rise from displacement would be almost zero. I suggest you read http://www.fas.org/spp/military/docops/afwa/ocean-U1.htm and
http://www.fas.org/spp/military/docops/afwa/ocean-U2.htm, U3, etc…
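Noerdlinger’s ~2.6% figure follows from the density difference alone: by Archimedes’ Principle, floating fresh ice of mass m displaces a volume m/ρ_sw of seawater, but melts into a larger volume m/ρ_fw of freshwater. A sketch with typical assumed densities (1026 and 1000 kg/m³; these round values are illustrative, not taken from the paper):

```python
def meltwater_excess_fraction(rho_seawater=1026.0, rho_freshwater=1000.0):
    """Fractional extra volume added when floating fresh ice melts in
    seawater: (m/rho_fw) / (m/rho_sw) - 1 = rho_sw/rho_fw - 1.
    Densities in kg/m^3 are assumed typical values."""
    return rho_seawater / rho_freshwater - 1.0

# ~2.6% extra volume, in line with Noerdlinger's estimate.
print(round(meltwater_excess_fraction() * 100, 1))   # 2.6
```

The excess fraction depends only on the density ratio, not on the ice mass, which is why a single percentage applies to all floating ice and shelves.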

Pamela Gray
August 10, 2009 7:44 pm

1. Jacob, you still haven’t responded to my post other than to say that I oversimplified. Do you have shortwave radiation data over time measured at Earth’s surface before it gets converted to LWR, compared to what hits the outer part of the atmosphere? It appears that your premise is that there is no difference or variation between the two, thus allowing continued increase in CO2 to heat the Earth from here to Armageddon. There is quite a difference and the data is noisy. But if you think not, show me.
2. SST’s and oceanic oscillations oscillate around a rather cherry picked zero. Yes. But over what time scale? And is it an even oscillating swing or one that is predominantly lopsided one century, and lopsided in some other way the next? During the industrial age, are you saying that the swing is even, can be canceled out, and therefore leave us with AGW? Are you kidding? Remember, most average “normal” lines on temperature graphs are no more than 30 years long. Yet we know that some oscillations are at least twice that length and have a rather chaotic swing.
