How Sensitive is the Earth’s Climate?

Guest Post By Steve Fitzpatrick


Introduction

Projections of climate warming from global circulation models (GCM’s) are based on a high sensitivity of the Earth’s climate to radiative forcing from well mixed greenhouse gases (WMGG’s).  This high sensitivity depends mainly on three assumptions:

1. Slow heat accumulation in the world’s oceans delays the appearance of the full effect of greenhouse forcing by many (eg. >20) years.

2. Aerosols (mostly from combustion of carbon based fuels) increase the Earth’s total albedo, and have partially hidden the ‘true’ warming effect of WMGG increases.  Presumably, aerosols will not increase in the future in proportion to increases in WMGG’s, so the net increase in radiative forcing will be larger for future emissions than for past emissions.

3. Radiative forcing from WMGG’s is amplified by strong positive feedbacks due to increases in atmospheric water vapor and high cirrus clouds; in the GCM’s, these positive feedbacks approximately double the expected sensitivity to radiative forcing.

However, there is doubt about each of the above three assumptions.

1.  Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001.  This indicates that a) ocean heat content does not lag many years behind the surface temperature, b) global average temperature and heat accumulation in the top 700 meters of ocean are closely tied, and c) the Hansen et al (4) projection in 2005 of substantial future warming ‘already in the pipeline’ is not supported by recent ocean and surface temperature measurements.  While there is no doubt a very slow accumulation of heat in the deep ocean below 700 meters, this represents only a small fraction of the accumulation expected for the top 700 meters, and should have little or no immediate (century or less) effect on surface temperatures.  Short ocean heat lags are consistent with relatively low climate sensitivity, and preclude very high sensitivity.

2.  Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings.  There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future.  Considering the large uncertainties in aerosol effects, it is not even clear if the net effect, including black carbon, which reduces rather than increases albedo, is significantly different from zero.

3.  Amplification of radiative forcing by clouds and atmospheric humidity remains poorly defined.  Climate models do not explicitly include the behavior of clouds, which occur at scales orders of magnitude smaller than the model grid, but instead handle clouds using ‘parameters’ that are adjusted to approximate their expected behavior.  Adjustable parameters can of course also be tuned to make a model predict whatever warming is expected or desired.  Measured tropospheric warming in the tropics (the infamous ‘hot spot’), which should result from increases in atmospheric water content, falls far short of the warming projected for this part of the atmosphere by most GCM’s.  This casts doubt on the amplification from increased water vapor assumed by the GCM’s.

Many people, including this author, do not believe the large temperature increases (up to 5+ C for a doubling of CO2) projected by GCM’s are credible.  A new paper by Lindzen and Choi (described at WUWT on August 23, 2009) reports that the total outgoing radiation (visible plus infrared) above the tropical ocean increases when the ocean surface warms, which suggests the climate feedback (at least in these tropical ocean areas) is negative, rather than positive as the GCM’s all assume.

In spite of the many problems and doubts with GCM’s:

1)       It is reasonable to expect that positive forcing, from whatever source, will increase the average temperature of the Earth’s surface.

2)   Basic physics shows that increasing infrared absorbing gases in the atmosphere like CO2, methane, N2O, ozone, and chloro-fluorocarbons, inhibits the escape of infrared radiation to space, and so does provide a positive forcing.

3)   There has in fact been significant global warming since the start of the industrial revolution (beginning a little before 1800), concurrent with significant increases in WMGG emissions from human activities.

There really should be an increase in average surface temperature due to forcing from increases in infrared absorbing gases.  This is not to say that there are no other plausible explanations for some or even most of the increase in global temperatures over the past 100+ years.  For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the Little Ice Age and the late 20th century, which could account for a significant fraction of the observed warming.  But regardless of other possible contributions, it is hard to argue that increasing greenhouse gases should not raise global average temperature.  The question is not whether the Earth will warm from increases in WMGG’s, but how much it will warm and over what period.  The uncertainties and dubious assumptions in the GCM’s make them of little help in producing reasonable projections of potential warming, even under the worst case assumption that WMGG’s are the principal cause of the warming.

Climate Sensitivity

If we knew the true climate sensitivity of the Earth (expressed as degrees increase per watt/square meter forcing) and we knew the true radiative forcing due to WMGG’s, then we could directly calculate the expected temperature rise for any assumed increases in WMGG’s.  Fortunately, the radiative forcing effects for WMGG’s are pretty accurately known, and these can be used in evaluating climate sensitivity.  An approximate value for climate sensitivity in the absence of any feedbacks, positive or negative, can be estimated from the change in blackbody emission temperature that is required to balance a 1 watt per square meter increase in heat input, using the Stefan-Boltzmann law.  Assuming solar intensity is 1366 watts/M^2, and assuming the Earth’s average albedo is ~0.3, the net solar intensity is ~239 watts/M^2, requiring a blackbody temperature of 254.802 K to balance incoming heat.  With 1 watt/M^2 more input, the required blackbody emission temperature increases to 255.069 K, so the expected climate sensitivity is (255.069 – 254.802) = 0.267 degree increase for one watt per square meter of added heat.
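
The no-feedback number above is easy to reproduce.  The short Python sketch below redoes the arithmetic, assuming only the standard Stefan-Boltzmann constant and the solar intensity and albedo values quoted in the text; it is an illustration, not part of the original spreadsheet model.

```python
# No-feedback sensitivity from the Stefan-Boltzmann law, using the values
# quoted above (1366 W/m^2 solar intensity, 0.3 albedo).
SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_temp(absorbed_flux):
    """Emission temperature (K) needed to radiate 'absorbed_flux' W/m^2."""
    return (absorbed_flux / SIGMA) ** 0.25

solar = 1366.0                              # top-of-atmosphere intensity, W/m^2
albedo = 0.30
net_flux = solar / 4.0 * (1.0 - albedo)     # ~239 W/m^2 averaged over the sphere

t0 = blackbody_temp(net_flux)               # ~254.8 K
t1 = blackbody_temp(net_flux + 1.0)         # ~255.07 K
print(f"no-feedback sensitivity: {t1 - t0:.3f} C per W/m^2")   # ~0.267
```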

But solar intensity and the blackbody emission temperature of the earth both change with latitude, yielding higher emission temperature and much greater heat loss near the equator than near the poles.  The infrared heat loss to space goes as the fourth power of the emission temperature, so the net climate sensitivity will depend on the T^4 weighted contributions from all areas of the Earth.  Feedbacks within the climate system, both positive and negative, including different amounts and types of clouds, water vapor, changes in albedo, and potentially many others, add much uncertainty.

Measuring Earth’s Sensitivity

The only way to accurately determine the Earth’s climate sensitivity is with data.

Bill Illis produced an outstanding guest post on WUWT November 25, 2008, where he presented the results of a simple curve-fit model of the Earth’s average surface temperature based on only three parameters:  1) the Atlantic multi-decadal oscillation index (AMO), 2) values of the Nino 3.4 ENSO index, and 3) the log of the ratio of atmospheric CO2 concentration to the starting CO2 concentration.  Bill showed that the best estimate linear fit of these parameters to the global mean temperature data could account for a large majority of the observed temperature variation from 1871 to 2008.  He also showed that the AMO index and the Nino 3.4 index contributed little to the overall increase in temperature during that period, but did account for much of the variation around the overall temperature trend.  The overall trend correlated well with the log of the CO2 ratio.  In other words, the AMO and Nino 3.4 indexes could hindcast much of the observed variation around the overall trend, and that overall trend could be accurately hindcast by the log of the CO2 ratio.
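
Bill’s fit was done in a spreadsheet.  Purely as an illustration, an equivalent ordinary least squares version of the same three-parameter fit might look like the sketch below; the array names and data loading are hypothetical, not taken from his post.

```python
# Minimal sketch of the three-parameter fit described above. The arrays
# temp, amo, nino34, and co2 are assumed to hold aligned annual values from
# 1871 onward; loading them from the source datasets is omitted.
import numpy as np

def fit_three_parameter_model(temp, amo, nino34, co2, co2_start):
    # Regressors: AMO index, Nino 3.4 index, log of the CO2 ratio, intercept
    X = np.column_stack([
        amo,
        nino34,
        np.log(co2 / co2_start),
        np.ones_like(temp),
    ])
    coefs, *_ = np.linalg.lstsq(X, temp, rcond=None)
    fitted = X @ coefs
    r2 = 1.0 - ((temp - fitted) ** 2).sum() / ((temp - temp.mean()) ** 2).sum()
    return coefs, fitted, r2
```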

There are a few implicit assumptions in Bill’s model.  First, the model assumes that all historical warming can be attributed to radiative forcing.  This is a worst case scenario, since other potential causes for warming are not even considered (long term solar effects, long term natural climate variability, etc.).  The climate sensitivity calculated by the model would be lowered if other causes account for some of the measured warming.

Second, the model assumes the global average temperature changes linearly with radiative forcing.  While this is almost certainly not correct for Earth’s climate, it is probably not a bad approximation over a relatively small range of temperatures and total forcings.  That is, a change of a few watts per square meter is small compared to the average solar flux reaching the Earth, and a change of a few degrees in average temperature is small compared to Earth’s average emissive (blackbody) temperature.  So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.

Third, the model assumes that the combined WMGG forcings can be accurately represented by a constant multiplied by the log of the ratio of CO2 to starting CO2.  While this may be a reasonable approximation for some gases, like N2O and methane (at least until ~1995), it is not a good approximation for others, like chloro-fluorocarbons, which did not begin contributing significantly to radiative forcing until after 1950, and which are present in the atmosphere at such low concentration that they absorb linearly (rather than logarithmically) with concentration.  In addition, chloro-fluorocarbon concentrations will decrease in the future rather than increase, since most long lived CFC’s are no longer produced (due to the Montreal Protocol), and what is already in the atmosphere is slowly degrading.

To make Bill’s model more physically accurate, I made the following changes:

1.  Each of the major WMGG’s is separated and treated individually: CO2, N2O, methane, chloro-fluorocarbons, and tropospheric ozone.

2.  Concentrations of each of the above gases are converted to net forcings, using the IPCC’s radiation equations for CO2, methane, N2O, and CFC’s (6), and an estimated radiative contribution from ozone increases (a sketch of these conversions appears below).

3.  The change in solar intensity with the solar cycle is included as a separate forcing, assuming that measured intensity variations for the last three solar cycles (about 1 watt per square meter variation over a base of 1365 watts per square meter) are representative of earlier solar cycles, and assuming that sunspot number can be used to estimate how solar intensity varied in the past.

4.  The grand total forcing (including the solar cycle contribution), a 2-year trailing average of the AMO index, and the Nino 3.4 index are correlated against the HadCRUT3v global average temperature data.

This yields a curve fit model which can be used to estimate future warming by setting the Nino 3.4 and AMO indexes to zero (close to their historical averages) and estimating future changes in atmospheric concentrations for each of the infrared absorbing gases.

Figure 1 Model results with temperature projection to 2060
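
The post does not reproduce the radiation equations used in step 2.  The sketch below uses the widely cited IPCC simplified expressions (Myhre et al. 1998), which may differ in detail from the ones in the spreadsheet; the small CH4/N2O band-overlap correction is dropped for brevity, and the ozone term is left as a user-supplied estimate, as in the text.

```python
# Converting concentrations to radiative forcings (W/m^2). These are the
# common simplified expressions; treat them as an approximation of the
# equations cited in the post, not as the exact formulas used there.
import numpy as np

def forcing_co2(c, c0):                 # concentrations in ppm
    return 5.35 * np.log(c / c0)

def forcing_ch4(m, m0):                 # ppb; overlap with N2O neglected here
    return 0.036 * (np.sqrt(m) - np.sqrt(m0))

def forcing_n2o(n, n0):                 # ppb; overlap with CH4 neglected here
    return 0.12 * (np.sqrt(n) - np.sqrt(n0))

def forcing_cfc(x, x0, per_ppb=0.25):   # ppb; roughly linear at low concentration
    return per_ppb * (x - x0)

def total_wmgg_forcing(co2, ch4, n2o, cfc, baselines, ozone_forcing=0.0):
    c0, m0, n0, x0 = baselines
    return (forcing_co2(co2, c0) + forcing_ch4(ch4, m0) +
            forcing_n2o(n2o, n0) + forcing_cfc(cfc, x0) + ozone_forcing)
```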

To find the best estimate of lag in the climate (mainly from ocean heat accumulation), the model constants were calculated for different trailing averages of the total radiative forcing.  The best fit to the data (highest R^2) was for a two year trailing average of the total radiative forcing, which gave a net climate sensitivity of 0.270 (+/-0.021) C per watt/M^2 (+/-2 sigma).  All longer trailing average periods yielded somewhat lower R^2 values and produced somewhat higher estimates of climate sensitivity.  A 5-year trailing average yields a sensitivity of 0.277 (+/- 0.021) C per watt/M^2, a 10 year trailing average yields a sensitivity of 0.289 (+/- 0.022) C per watt/M^2, and a 20 year trailing average yields a sensitivity of 0.318 (+/- 0.025) C per watt/M^2, ~18% higher than a two year trailing average.  As discussed above, very long lags (eg. 10-20+ years) appear inconsistent with recent trends in ocean heat content and average surface temperatures.
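
As an illustration of the lag search, the sketch below smooths the total forcing with trailing averages of different lengths, refits the regression each time, and reports the fitted sensitivity and R^2.  The array names are hypothetical; the actual fitting was done in a spreadsheet.

```python
# Refit the model for several trailing-average windows and compare R^2.
import numpy as np

def trailing_average(series, window):
    out = np.full(len(series), np.nan)
    for i in range(window - 1, len(series)):
        out[i] = np.mean(series[i - window + 1 : i + 1])
    return out

def sensitivity_and_r2(temp, forcing, amo, nino34, window):
    f_avg = trailing_average(forcing, window)
    ok = ~np.isnan(f_avg)
    X = np.column_stack([f_avg[ok], amo[ok], nino34[ok], np.ones(ok.sum())])
    coefs, *_ = np.linalg.lstsq(X, temp[ok], rcond=None)
    resid = temp[ok] - X @ coefs
    r2 = 1.0 - (resid ** 2).sum() / ((temp[ok] - temp[ok].mean()) ** 2).sum()
    return coefs[0], r2   # coefs[0] is the fitted C per W/m^2 on the forcing term

# for w in (2, 5, 10, 20):
#     print(w, sensitivity_and_r2(temp, forcing, amo, nino34, w))
```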

Oscillation in the radiative forcing curve (the green curve in Figure 1) is due to solar intensity variation over the sunspot cycle.  The assumed total variation in solar intensity at the top of the atmosphere is 1 watt per square meter (approximately the average variation measured over the last three solar cycles) for a change in sunspot number of 140.  Assuming a minimum solar intensity of 1365 watts per square meter and Earth’s albedo at 30%, the average solar intensity over the entire Earth surface at zero sunspots is (1365/4) * 0.7 = 238.875 watts per square meter, while at a sunspot number of 140, the average intensity increases to 239.05 watts per square meter, or an increase of 0.175 watt per square meter.  The expected change in radiative forcing (a “sunspot constant”) is therefore 0.175/140 = 0.00125 watt per square meter per sunspot.  When different values for this constant are tried in the model, the best fit to the data (maximum R^2) is for ~0.0012 watt/M^2 per sunspot, close to the above calculated value of 0.00125 watt/M^2 per sunspot.
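
The arithmetic behind the “sunspot constant” is reproduced below, assuming the 1 watt per square meter cycle amplitude over a 140-sunspot range stated above.

```python
# "Sunspot constant": change in globally averaged absorbed flux per sunspot.
albedo = 0.30
flux_min = 1365.0 / 4.0 * (1.0 - albedo)         # 238.875 W/m^2 at zero sunspots
flux_max = 1366.0 / 4.0 * (1.0 - albedo)         # 239.050 W/m^2 at 140 sunspots
sunspot_constant = (flux_max - flux_min) / 140   # 0.00125 W/m^2 per sunspot
```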

Figure 2 Scatter plot of the model versus historical temperatures
Figure 3 Comparison of the model’s temperature projection under ‘Business as Usual’ with the IPCC projection of ~0.2C per decade, consistent with GCM projections.

Regional Sensitivities

Amplification of sensitivity is the ratio of the actual climate sensitivity to the sensitivity expected for a blackbody emitter.  The sensitivity from the model is 0.270 C per watt/M^2, while the expected blackbody sensitivity is 0.267 C per watt/M^2, so the amplification is 1.011.  An amplification very close to 1 suggests that the negative and positive feedbacks within the climate system are roughly balanced, and that the average surface temperature of the Earth increases or decreases approximately as a blackbody emitter would when subjected to small variations around the average solar intensity of ~239 watts/M^2 (that is, as a blackbody would vary in temperature around ~255 K).  This does not preclude a range of sensitivities within the climate system that average out to ~0.270 C per watt/M^2; sensitivity may vary with season, latitude, local geography, albedo/land use, weather patterns, and other factors.  The temperature increase due to WMGG’s may, and indeed should, differ significantly by region and over time, so the practical importance of WMGG-driven warming will also differ by region and over time.

Credibility of Model Projections

Some may argue that any curve fit model based on historical data is likely to fail in making accurate predictions, since the conditions that applied during the hindcast period may be significantly different from those in the future.  But if the curve fit model includes all important variables, then it ought to make reasonable predictions, at least until/unless important new variables are encountered in the future.  Examples of important new climate variables are a major volcanic eruption or a significant change in ocean circulation.  The probability of encountering important new variables increases with the length of the forecast, of course.  So while a curve-fit climate model’s predictions will have considerable uncertainty far in the future (eg. 100 years or more), forecasts over shorter periods are likely to be more accurate.

To demonstrate this, the model constants were calculated using temperature, WMGG forcings, AMO, and Nino3.4 data for 1871 to 1971, but then applied to all the 1871 to 2008 data (Figure 4).  The model’s calculated temperatures represent a ‘forecast’ from 1972 through 2008, or 36 years.  Since the model constants came only from pre-1972 data, the model has no ‘knowledge’ of the temperature history after 1971, and the 1972 to 2008 forecast is a legitimate test of the model’s performance.  The model’s 1972 to 2008 forecast performance is reasonably good, with very similar deviations between the model and the historical temperature record in the hindcast and forecast periods.

Figure 4 Model temperature forecast for 1972 through 2008, with model constants based on 1871 to 1971. The model has no “knowledge” of the temperature record after 1971.
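
For readers who want to reproduce the split-sample test, a minimal sketch is given below.  It assumes the design matrix for the regression (total forcing, AMO, Nino 3.4, intercept) and the matching year and temperature arrays have already been built; the names are hypothetical.

```python
# Fit on 1871-1971 only, then apply the coefficients to the whole record so
# the post-1971 portion is a genuine out-of-sample forecast.
import numpy as np

def split_sample_test(years, temp, X, cutoff=1971):
    train = years <= cutoff
    coefs, *_ = np.linalg.lstsq(X[train], temp[train], rcond=None)
    modeled = X @ coefs                  # hindcast before 1972, forecast after
    test = ~train
    rmse = np.sqrt(np.mean((modeled[test] - temp[test]) ** 2))
    return coefs, modeled, rmse
```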

The model fit to the temperature data in the forecast period is no worse than in the hindcast period.   The climate sensitivity calculated using only 1871 to 1971 data is similar to that calculated using the entire data set: 0.255 C per watt/M^2 versus 0.270 C per watt/M^2.  A model forecast starting in 2009 will not be perfect, but the 1972 to 2008 forecast performance suggests that it should be reasonably close to correct over the next 36+ years.

Emissions Scenarios

The model projections in Figure 1 (2009 to 2060) are based on the following assumptions:

a)       The year on year increase in CO2 concentration in the atmosphere rises to 2.6 PPM per year by 2015 (or about 25% higher than recent rates of increase), and then remains at 2.6 PPM per year through 2060.  Atmospheric concentration reaches ~518 PPM by 2060.

b)       N2O concentration increases in proportion to the increase in CO2.

c)       CFC’s decrease by 0.25% per year.  The actual rate of decline ought to be faster than this, but large increases in releases of short-lived refrigerants like R-134a and non-regulated fluorinated compounds may offset a large portion of the decline in regulated CFC’s.

d)       The concentration of methane, which has been constant for the last ~7 years at ~1,800 parts per billion, increases by 10 PPB per year, reaching ~2,370 PPB by 2060.

e)       Tropospheric ozone (which forms in part from volatile organic compounds, VOC’s) increases in proportion to increases in atmospheric CO2.

The above represent pretty much a “business as usual” scenario, with fossil fuel consumption in 2060 more than 70% higher than in 2008, and with no new controls placed on other WMGG’s.  The projected temperature increase from 2008 to 2060 is 0.6834 C, or 0.131 C per decade.  This assumes of course that WMGG’s are responsible for all (or nearly all) the warming since 1871; if a significant amount of the warming since 1871 had other causes, then future warming driven by WMGG’s will be less.
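
The CO2 path in assumption (a) can be written out explicitly.  The sketch below assumes a 2008 concentration near 385.6 PPM and a linear ramp of the annual increment up to 2.6 PPM per year by 2015; the post only states the 2.6 PPM per year plateau and the ~518 PPM endpoint, so the ramp shape and starting value are guesses.

```python
# "Business as usual" CO2 concentration path, 2008-2060.
def co2_business_as_usual(c_2008=385.6, start_rate=2.1, final_rate=2.6,
                          ramp_end=2015, last_year=2060):
    conc = {2008: c_2008}
    c = c_2008
    for year in range(2009, last_year + 1):
        if year >= ramp_end:
            rate = final_rate
        else:   # linear ramp from recent rates up to 2.6 PPM/yr by 2015
            frac = (year - 2008) / (ramp_end - 2008)
            rate = start_rate + frac * (final_rate - start_rate)
        c += rate
        conc[year] = c
    return conc

# co2_business_as_usual()[2060] comes out near 518-519 PPM
```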

Separation of the different contributions to radiative forcing allows projections of future average temperatures under different scenarios for reductions in the growth of fossil fuel usage, with separate efforts to control emissions of methane, N2O, and VOC’s (leading to tropospheric ozone).

Figure 5 Reduced warming via controls on non-CO2 emissions and gradually lower CO2 emissions growth.

One such scenario can be called the “Efficient Controls” scenario.  The year on year increase in atmospheric CO2 rises to 2.6 PPM by 2014 and then, starting in 2015, declines by 0.5% per year (that is, a 2.6 PPM increase in 2014, 2.587 PPM in 2015, 2.574 PPM in 2016, etc.).  Methane concentrations are held at current levels via controls installed on known sources, CFC concentration falls by 0.5% per year due to new restrictions on currently non-regulated compounds, and N2O and tropospheric ozone increases are proportional to the (somewhat lower) CO2 increases.  These are far from small changes, but they probably could be achieved without great economic cost by shifting most electric power production to nuclear (or other non-fossil alternatives where economically viable), and by taxing CO2 emissions worldwide at an initially low but gradually increasing rate to promote improvements in energy efficiency.   Under these conditions, the predicted temperature anomaly in 2060 is 0.91 degree (versus 0.34 degree in 2008), or a rise of 0.109 degree per decade.  Atmospheric CO2 would reach ~507 PPM by 2060, and CO2 emissions in 2060 would be about 50% above 2008 emissions.  By comparison, the “business as usual” case produces a projected increase of 0.131 C per decade through 2060, with atmospheric CO2 reaching ~519 PPM by 2060.  So at (relatively) low cost, warming through 2060 could be reduced by a little over 0.11 C compared to business as usual.

A “Draconian Controls” scenario, with new controls on fluorinated compounds, methane, and VOC’s, and with the rate of atmospheric CO2 increase declining by 2% each year starting in 2015, shows the expected results of a very aggressive worldwide program to control CO2 emissions.  The temperature anomaly in 2060 is projected at 0.8 C, for a rate of temperature rise through 2060 of 0.088 degree per decade, or ~0.11 C lower in 2060 than for the “Efficient Controls” scenario.  Under this scenario, the concentration of CO2 in the atmosphere would reach ~480 PPM by 2060, but would rise only ~25 PPM more between 2060 and 2100.  Total CO2 emissions in 2060 would be ~15% above 2008 emissions, but would have to decline to the 2008 level by 2100.  Whether the potentially large economic costs of draconian emissions reductions are justified by a further ~0.11 C temperature reduction in 2060 (relative to the “Efficient Controls” case) is a political question that should be carefully weighed.

Figure 6 Draconian emissions controls may reduce average temperature in 2060 by ~0.21C compared to business as usual.
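
The two control scenarios differ only in how quickly the annual CO2 increment shrinks after 2014, which the sketch below makes explicit.  The assumed 2014 concentration of roughly 400 PPM is an estimate, not a number given in the post, so the endpoints are approximate.

```python
# CO2 concentration in 2060 when the annual increment declines geometrically
# after 2014 (0.5%/yr for "Efficient Controls", 2%/yr for "Draconian Controls").
def co2_with_declining_growth(decline_fraction, c_2014=400.0, last_year=2060):
    concentration, rate = c_2014, 2.6
    for year in range(2015, last_year + 1):
        rate *= (1.0 - decline_fraction)   # 2.587, 2.574, ... for the 0.5% case
        concentration += rate
    return concentration

# co2_with_declining_growth(0.005) -> roughly 507 PPM in 2060 (Efficient Controls)
# co2_with_declining_growth(0.02)  -> roughly 477-480 PPM in 2060 (Draconian Controls)
```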

Conclusions

The model shows that the climate sensitivity to radiative forcing is approximately 0.27 degree per watt/M^2, based on the assumption that radiative forcing from WMGG’s has caused all or nearly all the measured temperature increase since ~1871.  This corresponds to a response of ~1 C for a doubling of CO2 (with other WMGG’s remaining constant).  Much higher climate sensitivities (eg. 0.5 to >1.0 C per watt/M^2, or 1.85 C to >3.71 C for a doubling of CO2) appear to be inconsistent with the historical record of temperature and measured increases in WMGG’s.
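
The conversion between the two ways of stating sensitivity uses the common simplified CO2 forcing of 5.35 ln(2), about 3.71 W/m^2 per doubling:

```python
import math

forcing_per_doubling = 5.35 * math.log(2.0)   # ~3.71 W/m^2 per CO2 doubling
print(0.27 * forcing_per_doubling)            # ~1.0 C, the model's estimate
print(0.50 * forcing_per_doubling)            # ~1.85 C
print(1.00 * forcing_per_doubling)            # ~3.71 C
```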

Assuming no significant changes in the growth pattern of fossil fuel use, and no additional controls on other WMGG’s, the average temperature in 2060 may reach ~0.68 C higher than the 2008 average.  Modest steps to control non-CO2 emissions and to gradually reduce the rate of increase in the concentration of CO2 in the atmosphere could yield a reduction in WMGG-driven warming between 2008 and 2060 of ~15% compared to no action.  A rapid reduction in the rate of growth of atmospheric CO2 would be required to reduce WMGG-driven warming between 2008 and 2060 by ~30% compared to no action.

334 Comments
Bill Illis
August 9, 2009 3:46 pm

Regarding Willis’ comments about the AMO and Nino 3.4 region being part of the temperature record – they are. But they are only a small part of it.
The AMO region represents about 5.5% of the globe (probably less compared to the regions which are actually counted in the global temperature record since Hadcrut3 and GISS don’t use the whole AMO region in their global temperature record.)
So one is using (up to) 5.5% of the dataset to predict 100% of the dataset.
The Nino 3.4 region as well represents about 0.7% of the globe and is fully counted in the global temp record. But the correlation for Nino 3.4 is lagged 3 months if one is using a monthly model so one is using 0.7% of the temperature record of 3 months ago to predict today’s temperature record.
Willis is, thus, partly correct, but if 5% of the globe and 0.7% of the record of 3 months ago can predict up to 70% of the temperature variation, then it is certainly something worth looking at.
If anything, they are responsible for a large part of the “noise” in the temperature record which is rather easy to demonstrate. Take out some of the noise and the underlying trends are more evident.
Both indices are detrended, so over the long-term, they are not adding to the underlying trend. But on short time-scales, they obviously have an impact on the trend. Hadcrut3 increased by 0.6C from the beginning of the 1997-98 El Nino to the end – only 15 months. Why wouldn’t one want to adjust for that?

August 9, 2009 3:54 pm

Nogw (13:41:21) :
If atmospheric CO2 falls to 220 ppm, plants get sick. They die at
160 ppm.…and if plants die..your beautiful lives little global warmies will end too!

Yeah! What you say is true. Warmies cannot understand that plants survive droughts better at higher levels of CO2:
http://www.ars.usda.gov/research/publications/Publications.htm?seq_no_115=220520

Willis Eschenbach
August 9, 2009 3:54 pm

Steve Fitzpatrick, thanks for your reply. You wrote:

If you look at the AMO and Nino 3.4 historical data you will see that in spite of overall warming since 1871, these indexes have shown essentially no net trend, and so appear to have contributed virtually nothing to the observed total warming. Graphs showing the historical trends of these two indexes are included in the spreadsheet. AMO and Nino 3.4 most certainly are related to “climate/weather noise”, and that is the point of including them in the model: these indexes account for most (not all) of the variation around the long term trend. AMO and Nino 3.4 can be measured at any time you want, and their contributions subtracted from the currently measured global average temperature to reveal the “true” temperature trend (or at least a much “truer” trend). Indexes like the AMO and Nino3.4 are well known to capture shorter term climate variation, and I was not suggesting that including them in a model was any kind of “revelation”; they were included in Bill Illis’s model (based on only CO2, AMO, and Nino 3.4) back in 2008.

You are correct that there is no trend in the AMO or the Nino3.4. This is because they are a ratio of SST values, and not SST itself.
However, they are measurements of the climate system, and as such, you can’t use them to reduce the variance in the data. You treat them as though they were external forcings, or new data which you could subtract from the existing measurements to “reveal the true temperature trend”.
But they are not external forcings or new data in any sense. They are temperature measurements of the system. You can’t use them to “reveal the true temperature trend”, that’s simply not possible. You can’t “bootstrap” more information out of measurements by subtracting some subset of those measurements from the data. It’s the same as just smoothing out the data to get rid of short term variability. Makes your data look better … but it doesn’t make your model more accurate in the slightest.
This is a fundamental and central point, which obviates your basic thesis. Please do some research on the question, as your claims as they stand are simply not tenable.
Heck, if you want to take your path to the ultimate, just detrend the SST. This gives you the ultimate measure of the natural variability. Then subtract the detrended SST from the air temperature, and voila!! The true temperature trend is revealed!
But that doesn’t make your model any more or less accurate, not by one bit. It does show the trend … but we knew that already.
w.

Steve Fitzpatrick
August 9, 2009 4:01 pm

Dr A Burns (14:31:15) :
“I wonder what effect man’s deforestation and desertification has had on climate in the past century ? Australia alone has lost 70% of its natural vegetation.”
Very complex question. A dense cover of trees has low albedo, while a desert has much higher albedo, so at first glance you might think that conversion of forest to desert would have a net negative effect on solar heating. However, there are other issues, like rainfall patterns being changed by forests, which could modify the heat balance.

Steve Fitzpatrick
August 9, 2009 4:04 pm

Anthony (or moderator) how can I best send you the spreadsheet so people can play with it if they want? Should I send it to Anthony’s email?
REPLY: WordPress.com does not allow hosting of Excel spreadsheets or Zip files for security. Best to publish it to a 3rd party file service and provide a URL – Anthony

Willis Eschenbach
August 9, 2009 4:08 pm

Bill Illis, good to hear from you. You say:

Regarding Willis’ comments about the AMO and Nino 3.4 region being part of the temperature record – they are. But they are only a small part of it.
The AMO region represents about 5.5% of the globe (probably less compared to the regions which are actually counted in the global temperature record since Hadcrut3 and GISS don’t use the whole AMO region in their global temperature record.)
So one is using (up to) 5.5% of the dataset to predict 100% of the dataset.
The Nino 3.4 region as well represents about 0.7% of the globe and is fully counted in the global temp record. But the correlation for Nino 3.4 is lagged 3 months if one is using a monthly model so one is using 0.7% of the temperature record of 3 months ago to predict today’s temperature record.
Willis is, thus, partly correct, but if 5% of the globe and 0.7% of the record of 3 months ago can predict up to 70% of the temperature variation, then it is certainly something worth looking at.
If anything, they are responsible for a large part of the “noise” in the temperature record which is rather easy to demonstrate. Take out some of the noise and the underlying trends are more evident.
Both indices are detrended, so over the long-term, they are not adding to the underlying trend. But on short time-scales, they obviously have an impact on the trend. Hadcrut3 increased by 0.6C from the beginning of the 1997-98 El Nino to the end – only 15 months. Why wouldn’t one want to adjust for that?

Certainly we can use those measures to reduce the variance of the temperature record. But how does this differ from just smoothing the record? It has the same advantages (reduction of short-term variability) and the same disadvantages (reduction of degrees of freedom). How does it help the modeling effort?
A model contains one or more dependent variables (temperature, precipitation) and a number of independent variables (changes in CO2, aerosols, water vapor, black carbon, and the like). Removing the effect of one of the independent variables helps us to establish the true strength of the remaining independent variables.
However, AMO and Nino3.4 are dependent variables, not independent variables. As such, removing them does not improve the model at all. So yes, you can do what you propose … but how does it help?
w.

Steve Fitzpatrick
August 9, 2009 4:25 pm

Richard Sharpe (07:45:19) :
” A new paper by Lindzen and Choi (described at WUWT on August 23, 2009)
Do you have something scheduled to drop on August 23?”
Sorry, a simple error: July 23, 2009; I guess I was getting ahead of myself.

Steve Fitzpatrick
August 9, 2009 4:36 pm

tallbloke (00:52:31) :
“What would the climate sensitivity to co2 look like if the solar contribution to the warming was, say, 75% ? Simply 1/4 of your figure, or is it more complicated?”
There is no data I am aware of that would lead to assignment of 75% of warming to solar contributions. Let me know if you have this data and where it comes from.

August 9, 2009 4:54 pm

Steve said
“On the other hand, adding any infrared absorbing gas to the atmosphere makes it more difficult for infrared radiation to escape from the Earth’s surface.”
Steve, thanks for an interesting article. Under what circumstances could infrared radiation leave the earth?
On a similar tack I read that co2 molecules could leave the earth provided they attained sufficient velocity but how that was achieved and what % of co2 ‘leaks’ from Earth the article did not say.
Anyone able to elaborate on either of these issues?
tonyb

Steve Fitzpatrick
August 9, 2009 5:10 pm

Allen63 (05:24:10) :
“But, AGW is all about global heat accumulation — for which global temperature is only a proxy.”
Of course.
Unfortunately, we do not have 137 years of Argo heat data (which would completely settle the question of climate sensitivity). We only have temperature data, with all its warts, uncertainties, and problems. We do not have reliable temperature records from before the 1800’s, so it is not possible to verify if the model results would be consistent with earlier periods. Substantial warming and cooling certainly have taken place over very long periods (hundreds to thousands of years), including the medieval warm period, little ice age, Roman warm period, and the Holocene optimum.
My intent was not to explain the recent climate history of the Earth. I was trying only to make a reasonable prediction for the next 50 years (about two generations), assuming that the recorded warming since the 1800’s has been almost all due to greenhouse forcing. Will the prediction be perfect? For sure not. Will the prediction be pretty close? Probably. If I were young enough to have a chance to be around in 30 or 40 years, I would happily take bets on the accuracy of the prediction. The standard error of the temperature estimate is about 0.095C, so there is about a 2/3 chance that the model’s prediction will be within ~+/-0.1C of the measured temperature 30 or 40 years from now.

August 9, 2009 5:25 pm

Bill Illis: A few clarifications.
You wrote, “The AMO region represents about 5.5% of the globe (probably less compared to the regions which are actually counted in the global temperature record since Hadcrut3 and GISS don’t use the whole AMO region in their global temperature record.)”
NOAA ESRL calculates the AMO as detrended North Atlantic SST anomalies from 0 to 70N.
http://www.cdc.noaa.gov/data/timeseries/AMO/
HADSST2 (used by GISS up to November 1981, also used in HADCrut) appears to capture the North Atlantic as far as 80N. It should vary with ice extent:
http://hadobs.metoffice.com/hadsst2/
OI.v2, which GISS has used since December 1981, appears to capture anything that’s not ice. http://i26.tinypic.com/2v0hbid.png
(And, curiously, on occasion appears to indicate SST anomalies where ice exists.)
http://i42.tinypic.com/2ms27a1.jpg
So both GISS and HADCrut should capture all of the AMO. And what’s the surface area of the North Atlantic, about ½ of the Atlantic? So, if the Atlantic represents 30% of the global ocean area, and if the North Atlantic occupies half of it, and if the oceans represent 70% of the global surface area, then the North Atlantic should represent about 10% of the global surface area, should it not?
You wrote, “The Nino 3.4 region as well represents about 0.7% of the globe and is fully counted in the global temp record. But the correlation for Nina 3.4 is lagged 3 months if one is using a monthly model so one is using 0.7% of the temperature record of 3 months ago to predict today’s temperature record. ”
Not to be nitpicky but… The distance between 5N and 5S is 1111 km. The distance between 170W and 120W is 5533 km.
http://www.nhc.noaa.gov/gccalc.shtml
The surface area for the NINO3.4 area is ~6.147 million sq km. And if the surface of the globe is 510.072 million sq km, then the NINO3.4 area represents ~1.2% of the globe.
BUT
El Nino events affect more of the eastern tropical Pacific than the NINO3.4 area:
http://i25.tinypic.com/t8t1lw.png
The SST anomalies of the NINO3.4 area are used for comparison to global temperatures because they agree statistically with global temperature variations better than the SST anomalies of the NINO3, NINO4, and NINO1+2 areas. Thus your reason for using the NINO3.4 region for your predictions.
Regards

Steve Fitzpatrick
August 9, 2009 5:28 pm

Willis Eschenbach (16:08:58) :
“However, AMO and Nino3.4 are dependent variables, not independent variables. As such, removing them does not improve the model at all. So yes, you can do what you propose … but how does it help?”
If you run the regression without the Nino3.4 and AMO indexes, then the reported sensitivity to radiative forcing is just about the same as with them. The R^2 for the model (the quality of its hindcast, if you will) is much worse, since these indexes account for much of the short term variation. These indexes DO NOT change the overall trend in any way, since the long term trend in both indexes is flat since 1871. They were included to improve the accuracy of the model predictions. Yes, the short term variation in global temperature is closely correlated with these indexes, but these indexes are completely independent of radiative forcing and clearly are not responsible for any significant net global warming over the entire period; their trends are completely flat.

Nogw
August 9, 2009 5:36 pm

I was wondering, when reading Ian Plimer’s book “Heaven and Earth”, how is anybody going to enforce tax payment on CO2 emissions made by Mammoth Hot Spring at Yellowstone, which emits from 160 to 190 tonnes per day of CO2?
It will be a bit troublesome, though visitors could be charged instead of the spring itself…

Steve Fitzpatrick
August 9, 2009 5:55 pm

TonyB (16:54:04) :
“Steve, thanks for an interesting article. Under what circumstances could infrared radiation leave the earth?”
Infrared radiation leaves the Earth continuously, with an average rate that is very close to the average rate of solar heating. Any difference shows up as heat gain or loss in the oceans. How much infrared radiation is lost per square meter varies a lot. In general, the rate is highest in the tropics (or close to them), where the rate of solar input is highest, and lowest near the poles, where the solar input is small. The rate of loss also varies with time of day, weather, season, whether over ocean or land, and with local geography on land.

Patrick Davis
August 9, 2009 6:15 pm
Editor
August 9, 2009 6:26 pm

Steve Fitzpatrick (17:28:10) : “If you run the regression without the Nino3.4 and AMO indexes, then the reported sensitivity to radiative forcing is just about the same as with them. The R^2 for the model (the quality of its hindcast, if you will) is much worse, since these indexes account for much of the short term variation. These indexes DO NOT change the overall trend in any way, since the long term trend in both indexes is flat since 1871.”
Well, that’s exactly what I said a while ago. When Nino and AMO indices are taken away, the only factor left is CO2. So all the model is doing is ascribing all the observed temperature trend to CO2.
What value is the model? IMHO precisely zero.

Jim
August 9, 2009 6:30 pm

***************
Nogw (17:36:06) :
I was wondering, when reading in Ian Plimers book “Heaven and earth”, how is anybody going to enforce tax payment on CO2 emissions made by Mammoth Hot Spring at Yellowstone, which emits from 160 to 190 tonnes per day of CO2?
It will be a bit troublesome, though visitors could be charged instead of the spring itself…
****************
What with all the talk about the role of water in general and clouds in particular, I wonder how much CO2 cold rain in the tropics sweeps into the ocean? The cold rain should be pretty efficient at absorbing CO2. I guess being fresh water, it wouldn’t mix well with the ocean water and end up out-gassing pretty quickly.

Steve Fitzpatrick
August 9, 2009 6:33 pm

“Adam from Kansas (14:41:13) :
The paper is interesting, but if the draconian restrictions of emissions isn’t enough to stop AGW like that paper says the models suggest, does that mean the only way would be an unprecedented reduction of global population by more than 90 percent and maybe even 99 percent? Would we have to completely and rapidly de-populate Africa, China, and India and make that whole continent and the whole of those countries massive plant and animal preserves to make the temps. stay even?”
Is your question tongue in cheek? If not, then yes, there are a lot of green crazies out there who advocate drastic reductions in present worldwide populations, numbers like 50% to 80% fewer people than today are kicked about, combined with drastic reductions in per-capita fossil fuel use. They basically want very few babies born over the next 100 years (Worldwide lotteries for the right to have offspring, or will the IPCC just pick the winners? And what to do with those pesky babies born to people who didn’t have permission?).
If you want to get really depressed about what sane people have to overcome in the age of Obama, read a while at the Green Hell Blog. James Hansen (a very mainstream green, not nearly as extreme as many) calls for reducing CO2 to 350 PPM through a combination of herculean efforts over the next 100 years. It is truly mind boggling.

Dr A Burns
August 9, 2009 6:55 pm

“does that mean the only way would be an unprecedented reduction of global population by more than 90 percent and maybe even 99 percent? ”
A back-of-the-envelope calculation shows that body heat from the current 6.7 billion people is enough to heat the atmosphere by 0.8 degrees C in 100 years.
The point is that the changes in temperature being discussed are so small that almost anything can affect them.

August 9, 2009 6:58 pm

John (03:06:17) :
You wrote.
“The author talks of infra-red absorbing gases such as CO2. My understanding is that CO2, on receiving a quantum of infra-red, instantly radiates in a random direction a quantum of infra-red at the same wavelength and energy. If CO2 absorbs IR it must get warm.
I thought this was one of the main misdirections used in the so-called greenhouse gas theory.
Not so?”
Please, anybody correct me if I have got this wrong since I am not a physicist.
If a CO2 molecule is excited to a higher energy level by a photon of radiation and “immediately” re-radiates the same energy photon, there is no heating of the CO2 molecule. This extended to multiple absorptions and emissions is the simple but not very correct model that some use to illustrate the so called greenhouse effect. In other words, these photons with wavelengths corresponding to the absorption bands of CO2 are shown to go ricocheting in random directions and eventually escape to space or collide with the earth where the process starts all over again. The delay in the escape of the photons within the absorption bands is put forth as creating the greenhouse effect.
The above description may well be accurate in a rarefied gas where the decay time for the raised energy state in the CO2 is substantially shorter than the mean time between collisions with other molecules but in the lower atmosphere, this is not the case.
So what we have in the lower atmosphere is the earth emits approximately like a black body with a small portion of this energy being in the absorption bands of CO2. The photon travels only a few metres before exciting a molecule of CO2. Most of these excited CO2 molecules collide with adjacent molecules before the high energy state can decay. The photon energy is converted into heat energy and the heated gases again radiate as a black body with the radiated energy spread out over the infrared spectrum. This “dilution” of the original energy in the absorption bands means that the CO2 has done its thing in the lower atmosphere and is of diminishing importance as the concentration rises and this is exemplified in the logarithmic relation of CO2 concentration to temperature rise.
Of course, the thing is immensely more complicated when you introduce clouds, albedo, convection but I cringe when I hear what I call the ping pong ball explanation of the greenhouse effect.

Kevin Kilty
August 9, 2009 7:04 pm

TonyB (16:54:04) :
Steve said
“On the other hand, adding any infrared absorbing gas to the atmosphere makes it more difficult for infrared radiation to escape from the Earth’s surface.”
Steve, thanks for an interesting article. Under what circumstances could infrared radiation leave the earth?
On a similar tack I read that co2 molecules could leave the earth provided they attained sufficient velocity but how that was achieved and what % of co2 ‘leaks’ from Earth the article did not say.
Anyone able to eleboarate on either of these issues?

Q1: IR leaves Earth surface upward toward space as long any surface material has an absolute temperature above 0K, which includes everything. Certain bands in the IR leave Earth unimpeded simply because no gaseous material in the atmosphere absorbs this radiation. In other bands, however, there are gases that absorb IR strongly. CO2 for instance absorbs strongly in the band from about 12 to 16 micrometers wavelength. IR absorbing gases do not store IR, they absorb then re-emit in new directions including back toward the Earth. It is the radiation emitted back toward Earth that we call the “greenhouse effect.”
Q2: Gases do escape Earth all the time, but do so only up in the exosphere where the atmosphere is so tenuous that the distance between successive collisions of gas molecules with one another is large. If you took any high school chemistry and remember such, you will recall that at any temperature all molecules in a gas possess the same mean kinetic energy, therefore the least massive molecules possess the highest speed. These are those that escape Earth most easily; so, as hydrogen and helium manage to reach the exosphere they will leave the Earth quite quickly. For example, there is no primordial helium left in the atmosphere. A gas like CO2, on the other hand, doesn’t even reach the exosphere because of the cold trap at the mesopause. The principal means by which CO2 leaves our atmosphere is through weathering of surface rocks, which produces bicarbonate and carbonate minerals carried to the oceans in rivers, and the direct solution of CO2 into ocean water. This dissolved CO2, in turn, reacts with oceanic crust and is stored more permanently in minerals there. There are other parts to the carbon cycle as well.

Steve Fitzpatrick
August 9, 2009 7:14 pm

Mike Jonas (18:26:22) :
“Well, that’s exactly what I said a while ago. When Nino and AMO indices are taken away, the only factor left is CO2. So all the model is doing is ascribing all the observed temperature trend to CO2.
What value is the model? IMHO precisely zero.”
Well actually, the model includes trends in radiative forcings from CO2, N2O, chloro-fluorocarbons, and methane. These separate trends were included so that divergence in their trajectories could be considered (instead of just a single trajectory for CO2, which is really not as accurate a representation of radiative forcing). The value of the model is to make reasonable predictions over reasonably long periods, under a worst case assumption that greenhouse forcing has caused all of the observed warming.
“Splitting the time period, and curve-fitting from 1871 to 1971 then comparing “predictions” with post-1971 sounds impressive, but all it means is that if factors which actually caused the overall trend from 1871 to 1971 remained in place from 1971 to 2008, then the model would match neatly. The actual factors could be the sun, clouds, shipping volumes, or world use of soap. The model would still give a good match.”
Of course; and if the world use of soap continues to match the radiative forcing for the next 50 years, then the model will continue to make accurate predictions. Remember this is a WORST CASE prediction. If there are other factors that are:
1) truly “causative”
2) independent of radiative forcing
3) which by coincidence have historically tracked radiative forcing over 100+ years, and
4) which will now no longer do so
then the model predictions will be way too high.
On the other hand, if radiative forcing has caused all or even most of the warming, then the model should make pretty accurate predictions. The US Congress is currently working on an absolutely horrible cap-and-trade scheme, which will cost a fortune, reduce carbon emissions very little, and which is justified only on the basis of extreme predictions of global warming. If a realistic projection of warming in the worst case shows the warming will be lower (eg. <0.7C in 50 years instead of 1.4C), then perhaps this dreadful legislation loses some of its presumed justification. Is this not a good thing?

John S.
August 9, 2009 7:20 pm

Dennis A (02:59:26):
Thank you for unearthing Stevenson’s realistic description of how differently the oceans respond to SW and LW radiation. That description should be studied carefully by all would-be climatologists, too many of whom are still stuck on simplistic blackbody concepts.
Willis Eschenbach (09:13:53):
Thank you for bringing a sorely lacking physical distinction between external sources and internal redistribution of heat into the discussion, which seems myopically centered on curve-fitting. You seem to be one of the few who truly grasps the categorical difference between active forcing and passive system response.

TD
August 9, 2009 7:30 pm

John (03:06:17) :
You need to stop thinking of absorption and emission as yin and yang; they are not that closely related.
Absorption is governed by the number of absorbers and the incoming IR.
Emission is governed by the number of emitters and the temperature of the local gas.

Adam Grey
August 9, 2009 8:45 pm

I’ve read that a climate sensitivity that is too low means that ice age changes are not possible. The ~3 C sensitivity is corroborated in various ways, and one of them is to estimate from large-scale global temp changes – Quaternary ice age cycles serving well because the land masses, and hence distribution of ice sheets, ocean/air currents etc., are very similar to today.
Perhaps the author could make a global model of ice age changes (specifically deglaciation), and plug in the lower climate sensitivity posited here to see if it accommodates (proxy) observations.
This is the main problem I’ve read regarding lower climate sensitivities – whether Lindzen’s Iris effect or whatever. If the climate doesn’t respond as much as is thought, then the extreme swings in the geological record, allegedly, aren’t possible.