How Sensitive is the Earth’s Climate?

Guest Post By Steve Fitzpatrick

Fitzpatrick_Image1

Introduction

Projections of climate warming from general circulation models (GCM’s) are based on a high sensitivity of the Earth’s climate to radiative forcing from well mixed greenhouse gases (WMGG’s).  This high sensitivity depends mainly on three assumptions:

1. Slow heat accumulation in the world’s oceans delays the appearance of the full effect of greenhouse forcing by many (e.g., >20) years.

2. Aerosols (mostly from combustion of carbon based fuels) increase the Earth’s total albedo, and have partially hidden the ‘true’ warming effect of WMGG increases.  Presumably, aerosols will not increase in the future in proportion to increases in WMGG’s, so the net increase in radiative forcing will be larger for future emissions than for past emissions.

3. Radiative forcing from WMGG’s is amplified by strong positive feedbacks due to increases in atmospheric water vapor and high cirrus clouds; in the GCM’s, these positive feedbacks approximately double the expected sensitivity to radiative forcing.

However, there is doubt about each of the above three assumptions.

1.  Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001.  This indicates that a) ocean heat content does not lag many years behind the surface temperature, b) global average temperature and heat accumulation in the top 700 meters of ocean are closely tied, and c) the Hansen et al (4) projection in 2005 of substantial future warming ‘already in the pipeline’ is not supported by recent ocean and surface temperature measurements.  While there is no doubt a very slow accumulation of heat in the deep ocean below 700 meters, this represents only a small fraction of the accumulation expected for the top 700 meters, and should have little or no immediate (century or less) effect on surface temperatures. The heat content in the top 700 meters of ocean and global average surface temperature appear closely linked.  Short ocean heat lags are consistent with relatively low climate sensitivity, and preclude very high sensitivity.

2.  Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings.  There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future.  Considering the large uncertainties in aerosol effects, it is not even clear if the net effect, including black carbon, which reduces rather than increases albedo, is significantly different from zero.

3.  Amplification of radiative forcing by clouds and atmospheric humidity remains poorly defined.  Climate models do not explicitly resolve clouds, which are orders of magnitude smaller than the model grid scale, but instead handle them using ‘parameters’ that are adjusted to approximate expected cloud behavior.  Adjustable parameters can, of course, also be tuned to make a model predict whatever warming is expected or desired.  Measured tropospheric warming in the tropics (the infamous ‘hot spot’), which should result from increases in atmospheric water content, falls far short of the warming projected for this part of the atmosphere by most GCM’s.  This casts doubt on the amplification from increased water vapor assumed by the GCM’s.

Many people, including this author, do not believe the large temperature increases (up to 5+ C for a doubling of CO2) projected by GCM’s are credible.  A new paper by Lindzen and Choi (described at WUWT on August 23, 2009) reports that the total outgoing radiation (visible plus infrared) above the tropical ocean increases when the ocean surface warms, which suggests the climate feedback (at least in these tropical ocean areas) is negative, rather than positive as the GCM’s all assume.

In spite of the many problems and doubts with GCM’s:

1.  It is reasonable to expect that positive forcing, from whatever source, will increase the average temperature of the Earth’s surface.

2.  Basic physics shows that increasing infrared absorbing gases in the atmosphere like CO2, methane, N2O, ozone, and chloro-fluorocarbons inhibits the escape of infrared radiation to space, and so does provide a positive forcing.

3.  There has in fact been significant global warming since the start of the industrial revolution (beginning a little before 1800), concurrent with significant increases in WMGG emissions from human activities.

There really should be an increase in average surface temperature due to forcing from increases in infrared absorbing gases.  This is not to say that there are no other plausible explanations for some or even most of the increase in global temperatures over the past 100+ years.  For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the Little Ice Age and the late 20th century, which could account for a significant fraction of the observed warming.  But regardless of other possible contributions, it is difficult to dispute that increased greenhouse gases should lead to increased global average temperatures.  What matters is not whether the Earth will warm from increases in WMGG’s, but how much it will warm and over what period.  The uncertainties and dubious assumptions in the GCM’s make them not terribly helpful for reasonable projections of potential warming, even under the worst-case assumption that WMGG’s are the principal cause of the warming.

Climate Sensitivity

If we knew the true climate sensitivity of the Earth (expressed as degrees of warming per watt/square meter of forcing) and we knew the true radiative forcing due to WMGG’s, then we could directly calculate the expected temperature rise for any assumed increases in WMGG’s.  Fortunately, the radiative forcing effects of WMGG’s are fairly accurately known, and these can be used in evaluating climate sensitivity.  An approximate value for climate sensitivity in the absence of any feedbacks, positive or negative, can be estimated from the change in blackbody emission temperature required to balance a 1 watt per square meter increase in heat input, using the Stefan-Boltzmann Law.  Assuming solar intensity is 1366 watts/M^2, and assuming the Earth’s average albedo is ~0.3, the net solar intensity is ~239 watts/M^2, requiring a blackbody temperature of 254.802 K to balance incoming heat.  With 1 watt/M^2 more input, the required blackbody emission temperature increases to 255.069 K, so the expected climate sensitivity is (255.069 – 254.802) = 0.267 degree of warming per watt per square meter of added heat.
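The arithmetic above is easy to verify in a few lines of Python; this is a sketch using the same assumed values (1366 W/m^2 solar constant, 0.3 albedo) as the text:

```python
# No-feedback climate sensitivity from the Stefan-Boltzmann law.
# Values assumed in the text: S = 1366 W/m^2, albedo = 0.3.
SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def blackbody_temp(absorbed_flux):
    """Emission temperature (K) that balances the absorbed flux (W/m^2)."""
    return (absorbed_flux / SIGMA) ** 0.25

solar = 1366.0
albedo = 0.3
net_flux = solar / 4.0 * (1.0 - albedo)   # ~239 W/m^2 averaged over the sphere

t0 = blackbody_temp(net_flux)             # ~254.8 K
t1 = blackbody_temp(net_flux + 1.0)       # with one extra W/m^2
sensitivity = t1 - t0                     # ~0.27 K per W/m^2
```

Small differences in the last digit versus the text depend on the exact value used for the Stefan-Boltzmann constant and the averaged flux.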

But solar intensity and the blackbody emission temperature of the earth both change with latitude, yielding higher emission temperature and much greater heat loss near the equator than near the poles.  The infrared heat loss to space goes as the fourth power of the emission temperature, so the net climate sensitivity will depend on the T^4 weighted contributions from all areas of the Earth.  Feedbacks within the climate system, both positive and negative, including different amounts and types of clouds, water vapor, changes in albedo, and potentially many others, add much uncertainty.
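Equivalently, differentiating the Stefan-Boltzmann law gives a local sensitivity of dT/dF = 1/(4σT^3), which falls as the emission temperature rises. A short sketch (the regional emission temperatures below are illustrative values, not measurements):

```python
# Local no-feedback sensitivity dT/dF = 1/(4*sigma*T^3) at several
# emission temperatures.  The temperatures are illustrative only.
SIGMA = 5.670374e-8  # W/m^2/K^4

def local_sensitivity(t_emit):
    """Kelvin of warming per W/m^2 of extra forcing at emission temperature t_emit."""
    return 1.0 / (4.0 * SIGMA * t_emit ** 3)

warm = local_sensitivity(270.0)   # tropical-like emission temperature
cold = local_sensitivity(230.0)   # polar-like emission temperature
mean = local_sensitivity(255.0)   # global-average emission temperature
# The colder region is the more sensitive one: same forcing, larger dT.
```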

Measuring Earth’s Sensitivity

The only way to accurately determine the Earth’s climate sensitivity is with data.

Bill Illis produced an outstanding guest post on WUWT November 25, 2008, where he presented the results of a simple curve-fit model of the Earth’s average surface temperature based on only three parameters:  1) the Atlantic multi-decadal oscillation index (AMO), 2) values of the Nino 3.4 ENSO index, and 3) the log of the ratio of atmospheric CO2 concentration to the starting CO2 concentration.  Bill showed that the best-estimate linear fit of these parameters to the global mean temperature data could account for a large majority of the observed temperature variation from 1871 to 2008.  He also showed that the AMO index and the Nino 3.4 index contributed little to the overall increase in temperature during that period, but did account for much of the variation around the overall temperature trend.  The overall trend correlated well with the log of the CO2 ratio.  In other words, the AMO and Nino 3.4 indexes could hindcast much of the observed variation around the overall trend, and that overall trend could be accurately hindcast by the log of the CO2 ratio.
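Bill’s approach amounts to an ordinary multiple linear regression. A minimal sketch with synthetic stand-in data (the coefficients and noise levels below are invented for illustration, not Bill’s actual values):

```python
import numpy as np

# Synthetic stand-ins for the three predictors; the real inputs would be
# the monthly AMO index, Nino 3.4 index, and log(CO2/CO2_start).
rng = np.random.default_rng(0)
n = 1650                                   # roughly 137 years of monthly data
amo = rng.normal(0.0, 0.2, n)
nino34 = rng.normal(0.0, 0.8, n)
log_co2_ratio = np.linspace(0.0, 0.30, n)  # log(385/285) is about 0.30

# Fabricate a "temperature" with known coefficients plus noise,
# then recover the coefficients by least squares.
true_coefs = np.array([0.35, 0.08, 2.5])   # AMO, Nino 3.4, log-CO2 weights
X = np.column_stack([amo, nino34, log_co2_ratio, np.ones(n)])
temp = X[:, :3] @ true_coefs + rng.normal(0.0, 0.1, n)

coefs, *_ = np.linalg.lstsq(X, temp, rcond=None)
# coefs[:3] should land close to true_coefs; coefs[3] is the intercept.
```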

There are a few implicit assumptions in Bill’s model.  First, the model assumes that all historical warming can be attributed to radiative forcing.  This is a worst case scenario, since other potential causes for warming are not even considered (long term solar effects, long term natural climate variability, etc.).  The climate sensitivity calculated by the model would be lowered if other causes account for some of the measured warming.

Second, the model assumes the global average temperature changes linearly with radiative forcing.  While this is almost certainly not correct for Earth’s climate, it is probably not a bad approximation over a relatively small range of temperatures and total forcings.  That is, a change of a few watts per square meter is small compared to the average solar flux reaching the Earth, and a change of a few degrees in average temperature is small compared to Earth’s average emissive (blackbody) temperature.  So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.

Third, the model assumes that the combined WMGG forcings can be accurately represented by a constant multiplied by the log of the ratio of CO2 to starting CO2.  While this may be a reasonable approximation for some gases, like N2O and methane (at least until ~1995), it is not a good approximation for others, like chloro-fluorocarbons, which did not begin contributing significantly to radiative forcing until after 1950, and which are present in the atmosphere at such low concentration that they absorb linearly (rather than logarithmically) with concentration.  In addition, chloro-fluorocarbon concentrations will decrease in the future rather than increase, since most long lived CFC’s are no longer produced (due to the Montreal Protocol), and what is already in the atmosphere is slowly degrading.

To make Bill’s model more physically accurate, I made the following changes:

1.  Each of the major WMGG’s is separated and treated individually: CO2, N2O, methane, chloro-fluorocarbons, and tropospheric ozone.

2.  Concentrations of each of the above gases are converted to net forcings, using the IPCC’s radiation equations for CO2, methane, N2O, and CFC’s (6), and an estimated radiative contribution from ozone increases.

3.  The change in solar intensity with the solar cycle is included as a separate forcing, assuming that measured intensity variations for the last three solar cycles (about 1 watt per square meter variation over a base of 1365 watts per square meter) are representative of earlier solar cycles, and assuming that sunspot number can be used to estimate how solar intensity varied in the past.

4.  The grand total forcing (including the solar cycle contribution), a 2-year trailing average of the AMO index, and the Nino 3.4 index are regressed against the HadCRUT3v global average temperature data.

This yields a curve fit model which can be used to estimate future warming by setting the Nino 3.4 and AMO indexes to zero (close to their historical averages) and estimating future changes in atmospheric concentrations for each of the infrared absorbing gases.
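Step 2 above can be sketched with the IPCC’s simplified forcing expressions: logarithmic for CO2, roughly square-root for methane and N2O (the band-overlap correction terms are omitted here for brevity), and linear in concentration for CFC’s, which are left out of this sketch:

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    """IPCC simplified expression: ~5.35 * ln(C/C0) W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def ch4_forcing(m_ppb, m0_ppb):
    """Square-root form for methane; overlap correction with N2O omitted."""
    return 0.036 * (math.sqrt(m_ppb) - math.sqrt(m0_ppb))

def n2o_forcing(n_ppb, n0_ppb):
    """Square-root form for N2O; overlap correction with CH4 omitted."""
    return 0.12 * (math.sqrt(n_ppb) - math.sqrt(n0_ppb))

# Example: CO2 rising from a pre-industrial ~285 ppm to ~385 ppm
f_co2 = co2_forcing(385.0, 285.0)   # ~1.6 W/m^2
```

Note that a CO2 doubling under this expression gives 5.35 × ln 2 ≈ 3.71 W/m^2, the figure used later in the conclusions.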

Fitzpatrick_Image2
Figure 1 Model results with temperature projection to 2060

To find the best estimate of lag in the climate (mainly from ocean heat accumulation), the model constants were calculated for different trailing averages of the total radiative forcing.  The best fit to the data (highest R^2) was for a two-year trailing average of the total radiative forcing, which gave a net climate sensitivity of 0.270 (+/-0.021) C per watt/M^2 (+/-2 sigma).  All longer trailing average periods yielded somewhat lower R^2 values and produced somewhat higher estimates of climate sensitivity.  A 5-year trailing average yields a sensitivity of 0.277 (+/- 0.021) C per watt/M^2, a 10-year trailing average yields 0.289 (+/- 0.022) C per watt/M^2, and a 20-year trailing average yields 0.318 (+/- 0.025) C per watt/M^2, ~18% higher than the two-year trailing average.  As discussed above, very long lags (e.g., 10-20+ years) appear inconsistent with recent trends in ocean heat content and average surface temperatures.
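The lag search amounts to regressing temperature against trailing averages of total forcing at several window lengths and keeping the window with the highest R². A sketch on synthetic annual data (the series and noise levels are invented; only the procedure matches the text):

```python
import numpy as np

def trailing_average(series, window):
    """Backward-looking moving average; early values use a shorter window."""
    out = np.empty_like(series, dtype=float)
    for i in range(len(series)):
        out[i] = series[max(0, i - window + 1):i + 1].mean()
    return out

def r_squared(x, y):
    """R^2 of a simple least-squares fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

# Synthetic demo: "temperature" responds to a 2-year trailing average
# of a slowly rising forcing series.
rng = np.random.default_rng(1)
years = 138
forcing = np.cumsum(rng.normal(0.02, 0.1, years))
temp = 0.27 * trailing_average(forcing, 2) + rng.normal(0.0, 0.02, years)

scores = {w: r_squared(trailing_average(forcing, w), temp) for w in (2, 5, 10, 20)}
# The generating 2-year window should score at or near the top,
# and clearly beat a 20-year window.
```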

Oscillation in the radiative forcing curve (the green curve in Figure 1) is due to solar intensity variation over the sunspot cycle.  The assumed total variation in solar intensity at the top of the atmosphere is 1 watt per square meter (approximately the average variation measured over the last three solar cycles) for a change in sunspot number of 140.  Assuming a minimum solar intensity of 1365 watts per square meter and Earth’s albedo at 30%, the average solar intensity over the entire Earth surface at zero sunspots is (1365/4) * 0.7 = 238.875 watts per square meter, while at a sunspot number of 140, the average intensity increases to 239.05 watts per square meter, or an increase of 0.175 watt per square meter.  The expected change in radiative forcing (a “sunspot constant”) is therefore 0.175/140 = 0.00125 watt per square meter per sunspot.  When different values for this constant are tried in the model, the best fit to the data (maximum R^2) is for ~0.0012 watt/M^2 per sunspot, close to the above calculated value of 0.00125 watt/M^2 per sunspot.
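The “sunspot constant” arithmetic above can be reproduced directly, using the values assumed in the text:

```python
# Forcing change per unit sunspot number, using the text's assumptions:
# a 1 W/m^2 TSI swing over a 140-sunspot range, 30% albedo.
albedo = 0.3
tsi_min = 1365.0          # W/m^2 at zero sunspots
tsi_swing = 1.0           # W/m^2 over a full solar cycle
sunspot_range = 140.0

f_min = tsi_min / 4.0 * (1.0 - albedo)                # 238.875 W/m^2
f_max = (tsi_min + tsi_swing) / 4.0 * (1.0 - albedo)  # 239.05 W/m^2
sunspot_constant = (f_max - f_min) / sunspot_range    # 0.00125 W/m^2 per sunspot
```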

Fitzpatrick_Image3
Figure 2 Scatter plot of the model versus historical temperatures
Fitzpatrick_Image4
Figure 3 Comparison of the model’s temperature projection under ‘Business as Usual’ with the IPCC projection of ~0.2C per decade, consistent with GCM projections.

Regional Sensitivities

Amplification of sensitivity is the ratio of the actual climate sensitivity to the sensitivity expected for a blackbody emitter.  The sensitivity from the model is 0.270 C per watt/M^2, while the expected blackbody sensitivity is 0.267 C per watt/M^2, so the amplification is 1.011.  An amplification very close to 1 suggests that the negative and positive feedbacks within the climate system are roughly balanced, and that the average surface temperature of the Earth increases or decreases approximately as would a blackbody emitter subjected to small variations around the average solar intensity of ~239 watts/M^2 (that is, as a blackbody would vary in temperature around ~255 K).  This does not preclude a range of sensitivities within the climate system that average out to ~0.270 C per watt/M^2; sensitivity may vary with season, latitude, local geography, albedo/land use, weather patterns, and other factors.  The warming due to WMGG’s may have, and indeed should have, significant regional and temporal differences, so the importance of WMGG-driven warming will likewise differ by region and season.

Credibility of Model Projections

Some may argue that any curve-fit model based on historical data is likely to fail in making accurate predictions, since the conditions that applied during the hindcast period may be significantly different from those in the future.  But if the curve-fit model includes all important variables, then it ought to make reasonable predictions, at least until/unless important new variables are encountered in the future.  Examples of important new climate variables are a major volcanic eruption or a significant change in ocean circulation.  The probability of encountering important new variables increases with the length of the forecast, of course.  So while a curve-fit climate model’s predictions will have considerable uncertainty far in the future (e.g., 100 years or more), forecasts over shorter periods are likely to be more accurate.

To demonstrate this, the model constants were calculated using temperature, WMGG forcings, AMO, and Nino 3.4 data for 1871 to 1971, but then applied to all the 1871 to 2008 data (Figure 4).  The model’s calculated temperatures represent a ‘forecast’ from 1972 through 2008, or 36 years.  Since the model constants came only from pre-1972 data, the model has no ‘knowledge’ of the temperature history after 1971, and the 1972 to 2008 forecast is a legitimate test of the model’s performance.  The model’s 1972 to 2008 forecast performance is reasonably good, with very similar deviations between the model and the historical temperature record in the hindcast and forecast periods.

Fitzpatrick_Image5
Figure 4 Model temperature forecast for 1972 through 2008, with model constants based on 1871 to 1971. The model has no “knowledge” of the temperature record after 1971.

The model fit to the temperature data in the forecast period is no worse than in the hindcast period.  The climate sensitivity calculated using only 1871 to 1971 data is similar to that calculated using the entire data set: 0.255 C per watt/M^2 versus 0.270 C per watt/M^2.  A model forecast starting in 2009 will not be perfect, but the 1972 to 2008 forecast performance suggests that it should be reasonably close to correct over the next 36+ years.
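The holdout test is simple to replicate in outline: calibrate on 1871–1971, then score on the withheld 1972–2008 tail. A sketch with synthetic series (the forcing slope, sensitivity, and noise levels are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1871, 2009)
forcing = 0.012 * (years - 1871) + rng.normal(0.0, 0.05, len(years))  # rising forcing
temp = 0.27 * forcing + rng.normal(0.0, 0.08, len(years))             # synthetic "temperature"

train = years <= 1971          # calibration period: 1871-1971
test = ~train                  # holdout "forecast" period: 1972-2008

slope, intercept = np.polyfit(forcing[train], temp[train], 1)
pred = slope * forcing[test] + intercept
rmse_test = np.sqrt(np.mean((pred - temp[test]) ** 2))
# With a stable relationship, the holdout error stays near the noise level.
```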

Emissions Scenarios

The model projections in Figure 1 (2009 to 2060) are based on the following assumptions:

a)  The year-on-year increase in CO2 concentration in the atmosphere rises to 2.6 PPM per year by 2015 (or about 25% higher than recent rates of increase), and then remains at 2.6 PPM per year through 2060.  Atmospheric concentration reaches ~518 PPM by 2060.

b)  N2O concentration increases in proportion to the increase in CO2.

c)  CFC’s decrease by 0.25% per year.  The actual rate of decline ought to be faster than this, but large increases in releases of short-lived refrigerants like R-134a and non-regulated fluorinated compounds may offset a large portion of the decline in regulated CFC’s.

d)  The concentration of methane, which has been roughly constant for the last ~7 years at ~1,800 parts per billion, increases by 10 PPB per year, reaching ~2,370 PPB by 2060.

e)  Tropospheric ozone (which forms in part from volatile organic compounds, VOC’s) increases in proportion to increases in atmospheric CO2.

The above represent pretty much a “business as usual” scenario, with fossil fuel consumption in 2060 more than 70% higher than in 2008, and with no new controls placed on other WMGG’s.  The projected temperature increase from 2008 to 2060 is 0.6834 C, or 0.131 C per decade.  This assumes of course that WMGG’s are responsible for all (or nearly all) the warming since 1871; if a significant amount of the warming since 1871 had other causes, then future warming driven by WMGG’s will be less.
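The CO2 path in assumption (a) can be reproduced by accumulating annual increments; a sketch assuming a ~385 PPM concentration and a ~2.1 PPM/year increase in 2008 (both assumed starting values, not stated in the text):

```python
# "Business as usual" CO2 path: the annual increment ramps from an
# assumed ~2.1 ppm/yr in 2009 up to 2.6 ppm/yr by 2015, then holds
# at 2.6 ppm/yr through 2060.  Starting concentration ~385 ppm (assumed).
conc = 385.0
for year in range(2009, 2061):
    if year >= 2015:
        step = 2.6
    else:
        step = 2.1 + (2.6 - 2.1) * (year - 2009) / (2015 - 2009)
    conc += step
# conc lands near the ~518 ppm quoted for 2060.
```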

Separation of the different contributions to radiative forcing allows projections of future average temperatures under different scenarios for reductions in the growth of fossil fuel usage, with separate efforts to control emissions of methane, N2O, and VOC’s (leading to tropospheric ozone).

Fitzpatrick_Image7
Figure 5 Reduced warming via controls on non-CO2 emissions and gradually lower CO2 emissions growth.

One such scenario can be called the “Efficient Controls” scenario.  The year-on-year increase in CO2 in the atmosphere rises to 2.6 PPM by 2014, and then declines starting in 2015 by 0.5% per year (that is, a 2.6 PPM increase in 2014, 2.587 PPM in 2015, 2.574 PPM in 2016, etc.).  Methane concentrations are maintained at current levels via controls installed on known sources, CFC concentration falls by 0.5% per year due to new restrictions on currently non-regulated compounds, and N2O and tropospheric ozone increases are proportional to the (somewhat lower) CO2 increases.  These are far from small changes, but probably could be achieved without great economic cost by shifting most electric power production to nuclear (or non-fossil alternatives where economically viable), and simultaneously taxing CO2 emissions worldwide at an initially low but gradually increasing rate to promote worldwide improvements in energy efficiency.  Under these conditions, the predicted temperature anomaly in 2060 is 0.91 degree (versus 0.34 degree in 2008), or a rise of 0.109 degree per decade.  Atmospheric CO2 would reach ~507 PPM by 2060, and CO2 emissions in 2060 would be about 50% above 2008 emissions.  By comparison, the “business as usual” case produces a projected increase of 0.131 C per decade through 2060, with atmospheric CO2 reaching ~518 PPM by 2060.  So at (relatively) low cost, warming through 2060 could be reduced by a little over 0.11 C compared to business as usual.

A “Draconian Controls” scenario, with new controls on fluorinated compounds, methane and VOC’s, and with the rate of atmospheric CO2 increase declining by 2% each year, starting in 2015, shows the expected results of a very aggressive worldwide program to control CO2 emissions.  The temperature anomaly in 2060 is projected at 0.8 C, for a rate of temperature rise through 2060 of 0.088 degree per decade, or ~0.11 C lower temperature in 2060 than for the “Efficient Controls” scenario.  Under this scenario, the concentration of CO2 in the atmosphere would reach ~480 PPM by 2060, but would rise only ~25 PPM more between 2060 and 2100.  Total CO2 emissions in 2060 would be ~15% above 2008 emissions, but would have to decline to the 2008 level by 2100.  Whether the potentially large economic costs of draconian emissions reductions are justified by a ~0.11C temperature reduction in 2060 is a political question that should be carefully weighed.

Fitzpatrick_Image8
Figure 6 Draconian emissions controls may reduce average temperature in 2060 by ~0.21C compared to business as usual.

Conclusions

The model shows that the climate sensitivity to radiative forcing is approximately 0.27 degree per watt/M^2, based on the assumption that radiative forcing from WMGG’s has caused all or nearly all the measured temperature increase since ~1871.  This corresponds to a response of ~1 C for a doubling of CO2 (with other WMGG’s remaining constant).  Much higher climate sensitivities (e.g., 0.5 to >1.0 C per watt/M^2, or 1.85 C to >3.71 C for a doubling of CO2) appear to be inconsistent with the historical record of temperature and measured increases in WMGG’s.
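The conversion from sensitivity per watt/M^2 to warming per CO2 doubling just multiplies by the ~3.71 W/m^2 forcing of a doubling (5.35 × ln 2):

```python
import math

forcing_per_doubling = 5.35 * math.log(2.0)   # ~3.71 W/m^2

def warming_per_doubling(sensitivity):
    """Convert C per W/m^2 into C per CO2 doubling."""
    return sensitivity * forcing_per_doubling

low = warming_per_doubling(0.27)   # the model's estimate: ~1.0 C
hi = warming_per_doubling(1.0)     # a high-sensitivity case: ~3.7 C
```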

Assuming no significant changes in the growth pattern of fossil fuels, and no additional controls on other WMGG’s, the average temperature in 2060 may reach ~0.68C higher than the 2008 average.  Modest steps to control non-CO2 emissions and gradually reduce the rate of increase in the concentration of CO2 in the atmosphere could yield a reduction in WMGG driven warming between 2008 and 2060 of ~15% compared to no action.  A rapid reduction in the rate of growth of atmospheric CO2 would be required to reduce WMGG driven warming between 2008 and 2060 by ~30% compared to no action.

334 Comments
Allen63
August 10, 2009 1:10 pm

Steve Fitzpatrick,
Thanks for your substantial reply to my posting of the obvious. From time to time I post the seemingly obvious to see if others feel the same.
I can accept your reasoning for what it is — an attempt to project warming during the next few decades assuming CO2 has been and will be the sole source of the warming. Your results indicate that some climate models over estimate the impact of CO2. I think your effort adds value to the debate.
In general, I agree with you that CO2 is probably causing some warming — however, not enough to worry about — assuming the “official” historical global temperature anomaly plots are “accurate” (they may not be — and that’s another issue for another time).

George E. Smith
August 10, 2009 1:52 pm

“”” DennisA (02:59:26) :
In 2000, Dr Robert E Stevenson, (now deceased), Oceanographer and one-time Secretary General of the International Association for the Physical Sciences of the Oceans (IAPSO), wrote an assessment of Levitus et al (2000) on global heat budget.
http://www.21stcenturysciencetech.com/articles/ocean.html
Yes, the Ocean Has Warmed; No, It’s Not “Global Warming”
This is a small extract:
“How the Oceans Get Warm
Warming the ocean is not a simple matter, not like heating a small glass of water. The first thing to remember is that the ocean is not warmed by the overlying air.
Let’s begin with radiant energy from two sources: sunlight, and infrared radiation, the latter emitted from the “greenhouse” gases (water vapor, carbon dioxide, methane, and various others) in the lower atmosphere. Sunlight
penetrates the water surface readily, and directly heats the ocean up to a certain depth. Around 3 percent of the radiation from the Sun reaches a depth of about 100 meters.
The top layer of the ocean to that depth warms up easily under sunlight. Below 100 meters, however, little radiant energy remains. The ocean becomes progressively darker and colder as the depth increases.
The infrared radiation penetrates but a few millimeters into the ocean. This means that the greenhouse radiation from the atmosphere affects only the top few millimeters of the ocean. Water just a few centimeters deep receives none of the direct effect of the infrared thermal energy from the atmosphere! Further, it is in those top few millimeters in which evaporation takes places. So whatever infrared energy may reach the ocean as a result of the greenhouse effect is soon dissipated. “””
Well I had to cut and paste this piece of history. I have been harping on this question for some time now; but make no claim to having said so first; although it came to me quite independent of any earlier publications; it’s so obvious that anyone could think of it.
And my only amendment to the late Dr Stevenson’s comments would be to say that the long wave IR from the atmospheric radiation is absorbed in the top ten microns of the ocean water not “a few millimeters”.
So I concur with Dr Stevenson that atmospheric warming of the oceans is a losing thesis; surface evaporation quickly removes any surface energy supplied by the atmospheric long wave downdraft.
Any simple analysis of the up/down propagation of long wave infra-red radiation in a non-uniform atmosphere that has both a density and temperature gradient and a principally CO2 (other than water) GHG component, will clearly demonstrate that upward propagation is favored over downward; simply because of the way that the CO2 absorption band changes in width with altitude (gets narrower at greater heights).
As for high cirrus clouds creating a positive feedback warming of the surface (and the higher and less dense those clouds, the warmer the surface gets): that’s just plain silly. Those high cirrus clouds are there because of the warmer surface; they are not the cause of the warmer surface. Because of the usual temperature relaxation with altitude, the warmer the surface is, the higher the water vapor has to rise (due to convection) before the dew point is reached and clouds can form; and if the water content is lower so the relative humidity is lower, the vapor has to go higher still, so the clouds get less and less dense as a result.
And like ANY other cloud; they still reflect sunlight from the tops (albedo enhancement, and they still block additional solar radiation from the surface; and it still gets colder when one of those clouds passes in front of the sun; it never gets warmer in the shadow zone as a result of those clouds; at any height.
So I didn’t know the late Dr Stevenson; but I’m happy to know there have been others who find the standard line to be ludicrous.
George

tallbloke
August 10, 2009 2:15 pm

George E. Smith (11:39:18) :
“However, there is doubt about each of the above three assumptions. ”
Well I hope to shout there is doubt about those three assumptions that are part of the GCM models.

Don’t waste your time here George. Lip service is paid to doubt, but little heed is given to any serious aberration from the orthodoxy.

Pamela Gray
August 10, 2009 2:21 pm

George, I am with you on your post. CO2 and other greenhouse gases, regardless of source, are poor ways to heat water. Can you imagine using that method on a camping trip to heat water for morning coffee? It is the most amateurish part of global warming notions, let alone that the heat from air is somehow locked away in a vault to be spewed onto land like some B monster movie.

Steve Fitzpatrick
August 10, 2009 2:22 pm

Willis Eschenbach (10:32:08) :
“I say again: including observational data cannot improve the accuracy of model predictions. It can only improve the accuracy of model hindcasts.”
I understand exactly what you are saying and why you think that these indexes do not “improve the model forecast”. As I already said, removing the AMO and Nino 3.4 indexes from the regression does not significantly change the calculated climate sensitivity, nor should they… they are detrended indexes!
Let me try to explain why I use them.
1. If you plot up Nino 3.4 against global average SST, you find virtually no correlation. Yes, Nino 3.4 is an index that comes from measured SST in a certain ocean region. No, it is not a simple proxy for average SST as you appear to be suggesting. Nino 3.4 does provide information about the current state of the Earth’s climate system relative to an “average” state. To be more specific, Nino 3.4 helps us understand if a currently measured “higher than normal” or “lower than normal” global average temperature is a result of specific short term conditions in the ENSO or if that measured average temperature is more likely associated with a “background” trend in temperature. If someone says to me “Look how hot last month was!” and I know that we are in the middle of the biggest El Nino in history, then I can pretty confidently reply: “I’ll bet it will cool off by more than 0.2C in the next year or two.”
2. AMO is a bit more complicated, since it comes from a much larger ocean area, and does correlate with global average SST (R^2 about 0.4), and some of the AMO index is simply a proxy for average SST. But AMO is detrended over 100+ years of temperature records; when the AMO index is well above or well below zero (well above or well below the long term trend line), it is telling us that the current measured global average temperature is not typical in a historical sense, and that the current measured average temperature is probably not an accurate representation of the underlying long term temperature trend. A very high AMO index fairly well screams that the temperature will fall back toward the long term trend line within several years.
AMO and Nino 3.4 are not perfect stand-ins for short term variation, but are much better than nothing. AMO and Nino3.4 are determined each month, just as is the global average temperature, and can be used to better evaluate the ‘true’ underlying warming (or cooling!). Most everyone who thinks about climate change already knows this, and people often use these indexes just like the model does. For example, the current El Nino, which started just a month or two ago, “suggests” that the global average temperature will be above the trend line for at least a few months. Most every climate blog that you can think of has probably discussed this expected “El Nino warming” at least once in the last month or so, and official climate and weather organizations have also had press releases about it.
If they were really just a proxy for the average SST (as you suggest), then why would anyone even bother to calculate them?
If you believe that these indexes are not useful in the model, that is OK with me. But you can count on people continuing to look at these indexes to better understand what is shorter term variation and what is longer term change.
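To make the distinction concrete, here is a minimal sketch (in Python, using synthetic stand-in series, not real Nino 3.4 or SST data) of the two operations described above: correlating a short-term index against global average SST, and detrending a series so that the index measures departure from the long-term trend rather than the trend itself:

```python
import numpy as np

rng = np.random.default_rng(0)
months = 360  # 30 years of monthly values

# Hypothetical stand-ins: a slow "background" warming trend plus
# independent short-term (ENSO-like) variability.
trend = np.linspace(0.0, 0.5, months)                     # long-term warming, deg C
enso = 0.8 * np.sin(2 * np.pi * np.arange(months) / 45)   # ~3.75-year cycle
noise = 0.1 * rng.standard_normal(months)

global_sst = trend + 0.2 * enso + noise                   # global mean responds weakly to ENSO
nino34 = enso + 0.1 * rng.standard_normal(months)         # regional index tracks ENSO closely

# Correlation between the raw index and the global SST anomaly.
r = np.corrcoef(nino34, global_sst)[0, 1]

# "Detrending" in the AMO sense: subtract the long-term linear fit,
# so the index measures departure from the background trend.
coeffs = np.polyfit(np.arange(months), global_sst, 1)
detrended = global_sst - np.polyval(coeffs, np.arange(months))
```

With this synthetic setup the correlation is moderate (the index shares only the short-term component with the global series), and the detrended series has mean zero by construction of the least-squares fit.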

tallbloke
August 10, 2009 2:23 pm

How Sensitive is the Earth’s Climate?
Very.
But not to CO2.
This is because CO2’s puny near-surface radiative activity is completely drowned by far larger atmospheric processes, which only have to make a very minor adjustment to deal with its effect.
However, extra insolation does warm the oceans, and so, the Earth. Enjoy it while it lasts; the oceans are losing heat.

Pamela Gray
August 10, 2009 2:25 pm

I would also add that sunlight is EASILY reflected away from warming the oceans. Its measure at the surface is very noisy, while highly stable just outside our atmosphere! And what varies it? Earth’s atmosphere. Ours is one of the most variable planets in the solar system in terms of its climate and weather.

Steve Fitzpatrick
August 10, 2009 3:08 pm

tallbloke (14:23:29) :
and
Pamela Gray (14:25:11) :
Please tell me which of the following you disagree with:
1. The concentrations of CO2, chlorofluorocarbons, N2O, and methane in the atmosphere have increased by some amount in the last 100 years.
2. All the above gases have well characterized infrared absorption spectra.
3. Based on these known spectra, the escape of infrared radiation from Earth’s surface through the atmosphere to space should be slightly inhibited compared to escape under the same conditions, but with the concentrations of these gases reduced to what they were 100 years ago.
4. It is therefore reasonable that all else being equal, some warming of the Earth’s surface (however small) should result from the increased concentration of these gases in the atmosphere.
My understanding of chemistry and physics suggests that the above statements are not at all speculative. I am really trying to understand why you seem to object so strenuously to my post, which says clearly that the projections of warming based on GCM’s are much too high. What exactly do you take issue with?
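For readers who want a rough number to attach to point 4: a commonly cited simplified fit (Myhre et al. 1998) puts CO2 forcing at 5.35 ln(C/C0) W/m², and the no-feedback (Planck) response is roughly 0.3 K per W/m². Neither the formula nor the concentrations below appear in the comment above; they are standard illustrative values only:

```python
import math

def co2_forcing_w_m2(c_ppm, c_ref_ppm):
    """Simplified CO2 radiative forcing fit (Myhre et al. 1998): 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c_ref_ppm)

# Illustrative concentrations: ~280 ppm pre-industrial, ~385 ppm around 2009.
delta_f = co2_forcing_w_m2(385.0, 280.0)   # W/m^2, roughly 1.7

# Rough no-feedback response, about 0.3 K per W/m^2.
delta_t_no_feedback = 0.3 * delta_f        # deg C
```

This gives a forcing of about 1.7 W/m² and a no-feedback warming of about half a degree; the whole sensitivity debate in this thread is over how feedbacks scale that baseline number up or down.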

Stevo
August 10, 2009 3:23 pm

John Finn,
“Have you a link to your explanation.”
I initially wasn’t going to bother replying (no offence intended, but I didn’t really have the time to go through it all again should anyone want to debate it), but I notice some people have spent time arguing against the wrong model of the greenhouse effect (the radiative one), so I’ll refer to it again for anyone interested. My comment above was meant to be more light-hearted.
The first comment was here. More further down.

tallbloke
August 10, 2009 3:30 pm

Steve Fitzpatrick (14:22:17) :
If you plot up Nino 3.4 against global average SST, you find virtually no correlation.

If you go to Bob Tisdale’s website you’ll find a post on how you can add Nino 3.4 values cumulatively (along the lines of what I did with sunspot numbers, as I described in the post answering your question which you ignored) to get an uncannily accurate history of SSTs.
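For anyone curious what “adding Nino 3.4 values cumulatively” means mechanically, here is a minimal sketch with made-up anomaly values; the scale factor is an arbitrary assumption chosen only to illustrate the running sum, and nothing here shows whether the resulting curve actually matches SST history:

```python
import numpy as np

# Made-up monthly Nino 3.4 anomalies in deg C (illustrative only; real
# values would come from a dataset such as NOAA's Nino 3.4 series).
nino34 = np.array([0.5, 1.2, 0.8, -0.3, -1.1, -0.6, 0.2, 0.9])

# "Adding cumulatively" is a running sum of the anomalies, optionally
# scaled; the claim under discussion is that this integral of ENSO
# activity tracks the long-term SST history.
scale = 0.1  # arbitrary fitting constant, an assumption here
reconstruction = scale * np.cumsum(nino34)
```

The running sum turns a zero-mean oscillation index into a slowly wandering curve, which is why such reconstructions can visually resemble a temperature history regardless of any causal link.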

Jacob Mack
August 10, 2009 3:32 pm

The physics is off in this post; there is no way the increase in global mean temperature would be so low when CO2 is double pre-industrial levels. When I have time I will come back to this point.
Pamela what references are you using? I would love to see those if you would paste them up. You may want to see this:
http://www.fas.org/spp/military/docops/afwa/ocean-U1.htm
here:
http://www.theallineed.com/biology/07052901.htm
and here:
http://www.learner.org/courses/envsci/unit/text.php?unit=12&secNum=0

Jacob Mack
August 10, 2009 3:35 pm

Steve,
they take issue with any global warming due to greenhouse gases from the burning of fossil fuels; some here will agree that man has slightly helped along natural variation, as long as the stated warming is so negligible that it has no potential detrimental effect.

tallbloke
August 10, 2009 3:45 pm

Steve Fitzpatrick (15:08:01) :
4. It is therefore reasonable that all else being equal,

They are not. Lots of other things have changed. The atmosphere is a big place. These gases you obsess about occupy a vanishingly small part of it, and although they may have some properties which might have some effects, they are a drop in the bucket which the massive processes continuing in the atmosphere can shrug off with a tiny average shift of the jet streams towards the poles here, or a changing of the extratropical Hadley cell boundaries there.
This is the second set of questions you’ve asked me that I’ve replied to. Are you going to continue ignoring the first?

crosspatch
August 10, 2009 4:16 pm

” Steve Fitzpatrick (15:08:01) : ”
“3. Based on these known spectra, the escape of infrared radiation from Earth’s surface through the atmosphere to space should be slightly inhibited compared to escape under the same conditions, but with the concentrations of these gases reduced to what they were 100 years ago.”
Maybe, maybe not. What if increased CO2 concentration in the atmosphere displaces H2O and results in lower absolute humidity, so that the total greenhouse impact is reduced? Suddenly what was thought to be a positive feedback turns into a negative feedback as a less absorptive gas displaces one with a wider absorption range.
What if the atmosphere is already practically opaque to IR at the most important wavelengths and adding CO2 is like putting a shade across an already bricked up window?
And the bottom line, based on my understanding of physics, is that if you have this increase in IR absorption, you should see that elusive hot spot. If you are catching radiated heat from the surface and re-radiating it back down, you should be able to measure it. So far that hasn’t happened. That heat isn’t there. There is currently no indication that the atmosphere is increasing its absorption of IR the way the models predict it would.
Then we have the whole problem of convection that the models ignore. If the gasses DID absorb more IR and heat up, they would rise and give the heat up as they do. Direct radiation would convert to convection/radiation and the atmosphere would still give up its heat to space. The models (according to what I have read in the writing of others) seem to rely on a static atmosphere that collects IR, heats up, and just sits there re-radiating the heat back to the ground or is “infinitely thick” and never gives up the heat to space the way a gas would in real life.
Earth has a natural refrigeration system using water vapor as the working fluid. The only other atmospheres we have to look at are made up of mostly CO2 with no water to speak of (Venus and Mars). Hansen’s group was originally formed to model the atmospheres of these (and the other) planets. They have spent a lot of their life modeling CO2 greenhouse atmospheres. But things might not work the same way here. It would take, I believe, a HUGE amount of CO2 increase to even make as much difference as the normal variability of H2O. I believe CO2 adds so little greenhouse warming that it gets “lost in the noise”.
Looking at NCDC’s data for the continental US, we are looking at a warming rate of 1.2 degrees Fahrenheit per century from 1895 to present, and the most recent 12-month period is below the trend line. (Go here and enter “most recent 12-month period” in the “Period” pull-down; it’s the last item in the list. Then select “Submit”.)
From 1999 to present we see a cooling trend of 8.6 degrees Fahrenheit per century. That is quite dramatic cooling over CONUS over the past 10 years.
Why? And why aren’t the global satellite averages tracking with CO2 emissions?
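A trend quoted in “degrees per century” is just a least-squares slope scaled by 100. A short sketch with hypothetical annual CONUS anomalies (illustrative values only; the real numbers come from NCDC’s Climate at a Glance tool):

```python
import numpy as np

# Hypothetical annual CONUS temperature anomalies in deg F (made-up
# values chosen to show a recent cooling slope, not NCDC data).
years = np.arange(1999, 2010)
anoms = np.array([1.2, 1.0, 1.3, 1.1, 0.9, 0.8, 1.0, 0.9, 0.7, 0.2, 0.4])

# Least-squares slope in deg F per year, scaled to deg F per century.
slope_per_year = np.polyfit(years, anoms, 1)[0]
trend_per_century = 100.0 * slope_per_year  # negative here: a cooling trend
```

Note how sensitive a decade-long slope is to the endpoint years chosen, which is one reason short trends like the 8.6 °F/century figure above should be read cautiously.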

Jacob Mack
August 10, 2009 4:28 pm

I would suggest that AGW skeptics [snip] see Spencer Weart’s work. Just google him, and you will find an immense resource of information regarding why AGW is a fact from the standpoint of solid physics. The upper atmosphere contains little to no water vapor, and therefore any contribution made by CO2 would have a net warming effect, since it acts as a blanket. Also, the lower and middle troposphere is far from being saturated as of yet, but even if it were, the CO2 in the upper atmosphere, where it is cool and dry, would absorb wavelengths at different bands at varying altitudes and thus reflect LWR back down towards Earth.

Jacob Mack
August 10, 2009 4:30 pm

Pamela Gray, what are your references regarding: “I would also add that Sunlight is EASILY reflected away from warming the oceans. It’s measure at the surface is very noisy while highly stable just outside our atmosphere! And what varies it? Earth’s atmosphere. One of the most variable…”?

Jacob Mack
August 10, 2009 4:32 pm

I again see a temporary disappearance of my posts, but I will wait…

Pamela Gray
August 10, 2009 4:45 pm

#1. Disagree. We don’t know the long term trend or average during the past 500 years. All we have is AIRS showing us observed measures during a single warm oceanic oscillation. All the rest is proxied calculations, which carry a reasonably large standard deviation compared to a gas chamber. We do not know what happens to CO2 under different oceanic oscillation conditions in terms of measuring small changes in ppm. What we do know is that torrential rains can send tons of CO2 captured by plants out to sea and down to the ocean floor. In essence, bad weather can scrub carbon out of the air. Bad weather comes when oscillations fight each other, i.e. one is cool while the other is warm.
#3. Disagree. Real world conditions include stormy weather and uncooperative jet streams, as well as aerosols that change the amount of shortwave solar radiation reaching the surface and then the longwave radiation available to be absorbed by CO2 and other GHG’s to add warmth. There is no such thing as the same conditions in the actual world, and besides, CO2 is only available to warm the air after the Sun’s rays are changed by natural and highly variable parameters. CO2 cannot overcome that. Its ability to warm is stable. It’s all the other variables that do not let it do its job very well. With the full amount of incoming solar shortwave radiation reaching the ground and the full amount of outgoing longwave radiation reaching greenhouse gas layers, we get about 30 degrees Celsius of warming. But we never actually get that because of the highly variable atmosphere of our planet.
#4. Nothing is equal in the real world.
On the contrary, all the other variables create lots of speculation. See:
http://www.arm.gov/publications/proceedings/conf02/extended_abs/ellingson_rg.pdf
and Anthony, help me understand this ppt:
http://clarreo.larc.nasa.gov/workshops/2009-02-24/docs/Huang_Langley-visit_20090224.ppt

Pamela Gray
August 10, 2009 4:57 pm

BTW, this is an interesting site with data available. Well worth a visit. There may be evidence of Kool-Aid drinking, but they are collecting longwave radiation data, something that is highly variable depending on how much shortwave radiation actually hits the surface.
http://www.arm.gov/acrf/

Steve Fitzpatrick
August 10, 2009 4:59 pm

tallbloke (15:45:04) :
I obsess about nothing, and it would help maintain civility to avoid this kind of non-constructive comment.
“What would the climate sensitivity to co2 look like if the solar contribution to the warming was, say, 75% ? Simply 1/4 of your figure, or is it more complicated?”
I replied that I knew of no data set that could be used to make solar contribution 75%. You replied with an address that held a series of graphs, without any description of those graphs that I could find. I really have no idea how those graphs relate to solar forcing. That is why I did not reply further. I still have no idea what the graphs mean or how they might relate to solar forcing.
With regard to:
“If you go to Bob Tisdale’s website you’ll find a post on how you can add nino3.4 values cumulatively (along the lines of what I did with sunspot numbers as I described in the post answering your question which you ignored), to get an uncannily accurate history of SST’s.”
I have not seen this post, though I have seen Bob’s site a few times. I am not sure I understand what the connection might be between a sum of historical values of Nino 3.4 and historical sea surface temperatures. If you can offer a brief summary it might help. It would also help if you could explain how/why the sum of historical sunspots relates to total solar forcing; I have never heard of this before. How far back do you sum, and how do you choose the starting point for the sum? What information does this sum of sunspots provide about solar forcing?

crosspatch
August 10, 2009 5:05 pm

“The upper atmosphere contains little to no water vapor and therefore any contribution made by CO2 would have a net warming effect,”
The same can be said for polar winter … and no such warming has happened.

Jacob Mack
August 10, 2009 5:05 pm

Crosspatch: “Maybe, maybe not. What if increased CO2 concentrations in the atmosphere displaces H2O and results in lower absolute humidity in response to the increased CO2 content and total greenhouse impact is reduced? Suddenly what was thought to be a positive feedback turns into a negative feedback as a less absorptive gas displaces one with a wider absorption range.”
For one, CO2 does not displace H2O, and the lower atmosphere is not “saturated.” Also, the absolute humidity/specific humidity do fluctuate, but generally the relative humidity remains stable. In the upper atmosphere, it is cooler and dry, and the presence of water vapor begins to fade, but lo and behold, CO2 is still on the incline where it was formerly virtually non-existent. Also keep in mind that water vapor tends towards equilibrium, in relative humidity, but water vapor levels are also currently rising.

Pamela Gray
August 10, 2009 5:08 pm

Jacob, shortwave and longwave radiation of Sunlight 101. From the description found here, one can easily reason that these variables create a very noisy data stream of how much gets in, and how much is reflected.
http://www.physicalgeography.net/fundamentals/7f.html

Jacob Mack
August 10, 2009 5:14 pm

“The same can be said for polar winter … and no such warming has happened.”
You are neglecting altitude dependent changes.

Tom in Florida
August 10, 2009 5:29 pm

Jacob Mack (16:28:50) : ” the CO2 in the upper atmosphere where it is cool and dry would absorb wavelengths at different bands at varying altitudes and thus reflect LWR back down towards Earth.”
Why doesn’t LWR get reemitted equally in all directions? You certainly aren’t implying that gravity comes into play are you?
