How Sensitive is the Earth’s Climate?

Guest Post By Steve Fitzpatrick

Fitzpatrick_Image1

Introduction

Projections of climate warming from general circulation models (GCM’s) are based on a high sensitivity of the Earth’s climate to radiative forcing from well mixed greenhouse gases (WMGG’s).  This high sensitivity depends mainly on three assumptions:

1. Slow heat accumulation in the world’s oceans delays the appearance of the full effect of greenhouse forcing by many (e.g., >20) years.

2. Aerosols (mostly from combustion of carbon based fuels) increase the Earth’s total albedo, and have partially hidden the ‘true’ warming effect of WMGG increases.  Presumably, aerosols will not increase in the future in proportion to increases in WMGG’s, so the net increase in radiative forcing will be larger for future emissions than for past emissions.

3. Radiative forcing from WMGG’s is amplified by strong positive feedbacks due to increases in atmospheric water vapor and high cirrus clouds; in the GCM’s, these positive feedbacks approximately double the expected sensitivity to radiative forcing.

However, there is doubt about each of the above three assumptions.

1.  Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001.  This indicates that a) ocean heat content does not lag many years behind the surface temperature, b) global average temperature and heat accumulation in the top 700 meters of ocean are closely tied, and c) the Hansen et al (4) projection in 2005 of substantial future warming ‘already in the pipeline’ is not supported by recent ocean and surface temperature measurements.  While there is no doubt a very slow accumulation of heat in the deep ocean below 700 meters, this represents only a small fraction of the accumulation expected for the top 700 meters, and should have little or no immediate (century or less) effect on surface temperatures. The heat content in the top 700 meters of ocean and global average surface temperature appear closely linked.  Short ocean heat lags are consistent with relatively low climate sensitivity, and preclude very high sensitivity.

2.  Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings.  There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future.  Considering the large uncertainties in aerosol effects, it is not even clear if the net effect, including black carbon, which reduces rather than increases albedo, is significantly different from zero.

3.  Amplification of radiative forcing by clouds and atmospheric humidity remains poorly defined.  Climate models do not explicitly resolve the behavior of clouds, which occur at scales orders of magnitude smaller than the model grid, but instead handle clouds using ‘parameters’ that are adjusted to approximate the expected behavior of clouds.  Adjustable parameters can of course also be tuned to make a model predict whatever warming is expected or desired.  Measured tropospheric warming in the tropics (the infamous ‘hot spot’), caused by increases in atmospheric water content, falls far short of the warming in this part of the atmosphere projected by most GCM’s.  This casts doubt on the amplification the GCM’s assume from increased water vapor.

Many people, including this author, do not believe the large temperature increases (up to 5+ C for a doubling of CO2) projected by GCM’s are credible.  A new paper by Lindzen and Choi (described at WUWT on August 23, 2009) reports that the total outgoing radiation (visible plus infrared) above the tropical ocean increases when the ocean surface warms, which suggests the climate feedback (at least in these tropical ocean areas) is negative, rather than positive as the GCM’s all assume.

In spite of the many problems and doubts with GCM’s:

1) It is reasonable to expect that positive forcing, from whatever source, will increase the average temperature of the Earth’s surface.

2) Basic physics shows that increasing infrared absorbing gases in the atmosphere like CO2, methane, N2O, ozone, and chloro-fluorocarbons, inhibits the escape of infrared radiation to space, and so does provide a positive forcing.

3) There has in fact been significant global warming since the start of the industrial revolution (beginning a little before 1800), concurrent with significant increases in WMGG emissions from human activities.

There really should be an increase in average surface temperature due to forcing from increases in infrared absorbing gases.  This is not to say that there are no other plausible explanations for some or even most of the increase in global temperatures over the past 100+ years.  For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the Little Ice Age and the late 20th century, which could account for a significant fraction of the observed warming.  But regardless of other possible contributions, it is difficult to dispute that increasing greenhouse gases should lead to increased global average temperatures.  What matters is not whether the Earth will warm from increases in WMGG’s, but how much it will warm and over what period.  The uncertainties and dubious assumptions in the GCM’s make them not terribly helpful in making reasonable projections of potential warming, even if you assume the worst case that WMGG’s are the principal cause of the warming.

Climate Sensitivity

If we knew the true climate sensitivity of the Earth (expressed as degrees increase per watt/square meter of forcing) and we knew the true radiative forcing due to WMGG’s, then we could directly calculate the expected temperature rise for any assumed increases in WMGG’s.  Fortunately, the radiative forcing effects of WMGG’s are fairly accurately known, and these can be used in evaluating climate sensitivity.  An approximate value for climate sensitivity in the absence of any feedbacks, positive or negative, can be estimated from the change in blackbody emission temperature required to balance a 1 watt per square meter increase in heat input, using the Stefan-Boltzmann law.  Assuming solar intensity is 1366 watts/M^2 and the Earth’s average albedo is ~0.3, the net absorbed solar intensity is ~239 watts/M^2, requiring a blackbody temperature of 254.802 K to balance the incoming heat.  With 1 watt/M^2 more input, the required blackbody emission temperature increases to 255.069 K, so the expected climate sensitivity is (255.069 – 254.802) = 0.267 degree increase per watt per square meter of added heat.
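The same arithmetic can be checked in a few lines of Python (a minimal sketch; the TSI and albedo values are simply the assumptions stated above):

    SIGMA = 5.670e-8                           # Stefan-Boltzmann constant, W/(M^2 K^4)

    def blackbody_temp(flux):
        """Temperature (K) of a blackbody emitting the given flux (W/M^2)."""
        return (flux / SIGMA) ** 0.25

    tsi = 1366.0                               # assumed total solar irradiance, W/M^2
    albedo = 0.30                              # assumed average albedo
    absorbed = tsi / 4.0 * (1.0 - albedo)      # ~239 W/M^2 averaged over the sphere

    t0 = blackbody_temp(absorbed)              # ~254.8 K
    t1 = blackbody_temp(absorbed + 1.0)        # ~255.07 K
    print(round(t1 - t0, 3))                   # ~0.267 C per W/M^2, no feedbacks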

But solar intensity and the blackbody emission temperature of the earth both change with latitude, yielding higher emission temperature and much greater heat loss near the equator than near the poles.  The infrared heat loss to space goes as the fourth power of the emission temperature, so the net climate sensitivity will depend on the T^4 weighted contributions from all areas of the Earth.  Feedbacks within the climate system, both positive and negative, including different amounts and types of clouds, water vapor, changes in albedo, and potentially many others, add much uncertainty.

Measuring Earth’s Sensitivity

The only way to accurately determine the Earth’s climate sensitivity is with data.

Bill Illis produced an outstanding guest post on WUWT November 25, 2008, where he presented the results of a simple curve-fit model of the Earth’s average surface temperature based on only three parameters:  1) the Atlantic Multidecadal Oscillation index (AMO), 2) the Nino 3.4 ENSO index, and 3) the log of the ratio of atmospheric CO2 concentration to the starting CO2 concentration.  Bill showed that the best-estimate linear fit of these parameters to the global mean temperature data could account for a large majority of the observed temperature variation from 1871 to 2008.  He also showed that the AMO index and the Nino 3.4 index contributed little to the overall increase in temperature during that period, but did account for much of the variation around the overall temperature trend.  The overall trend correlated well with the log of the CO2 ratio.  In other words, the AMO and Nino 3.4 indexes could hindcast much of the observed variation around the overall trend, and that overall trend could be accurately hindcast by the log of the CO2 ratio.
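For readers who want to reproduce this kind of fit, the calculation is nothing more than an ordinary least-squares regression. The sketch below (Python) shows the idea; the input file and column names are placeholders of my own, not Bill’s actual spreadsheet layout:

    import numpy as np
    import pandas as pd

    df = pd.read_csv("monthly_climate_data.csv")      # hypothetical input file

    X = np.column_stack([
        df["amo"],                                    # AMO index
        df["nino34"],                                 # Nino 3.4 ENSO index
        np.log(df["co2"] / df["co2"].iloc[0]),        # log of CO2 ratio to starting value
        np.ones(len(df)),                             # intercept
    ])
    y = df["temp_anomaly"].to_numpy()                 # global mean temperature anomaly

    coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # best-estimate linear fit
    fitted = X @ coef
    r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
    print("coefficients:", coef, " R^2:", round(r2, 3))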

There are a few implicit assumptions in Bill’s model.  First, the model assumes that all historical warming can be attributed to radiative forcing.  This is a worst case scenario, since other potential causes for warming are not even considered (long term solar effects, long term natural climate variability, etc.).  The climate sensitivity calculated by the model would be lowered if other causes account for some of the measured warming.

Second, the model assumes the global average temperature changes linearly with radiative forcing.  While this is almost certainly not correct for Earth’s climate, it is probably not a bad approximation over a relatively small range of temperatures and total forcings.  That is, a change of a few watts per square meter is small compared to the average solar flux reaching the Earth, and a change of a few degrees in average temperature is small compared to Earth’s average emissive (blackbody) temperature.  So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.

Third, the model assumes that the combined WMGG forcings can be accurately represented by a constant multiplied by the log of the ratio of CO2 to starting CO2.  While this may be a reasonable approximation for some gases, like N2O and methane (at least until ~1995), it is not a good approximation for others, like chloro-fluorocarbons, which did not begin contributing significantly to radiative forcing until after 1950, and which are present in the atmosphere at such low concentration that they absorb linearly (rather than logarithmically) with concentration.  In addition, chloro-fluorocarbon concentrations will decrease in the future rather than increase, since most long lived CFC’s are no longer produced (due to the Montreal Protocol), and what is already in the atmosphere is slowly degrading.

To make Bill’s model more physically accurate, I made the following changes:

1.  Each of the major WMGG’s is separated and treated individually: CO2, N2O, methane, chloro-fluorocarbons, and tropospheric ozone.

2.  Concentrations of each of the above gases are converted to net forcings, using the IPCC’s radiative forcing equations for CO2, methane, N2O, and CFC’s (6), and an estimated radiative contribution from ozone increases (this conversion is sketched in code below).

3.  The change in solar intensity with the solar cycle is included as a separate forcing, assuming that measured intensity variations for the last three solar cycles (about 1 watt per square meter variation over a base of 1365 watts per square meter) are representative of earlier solar cycles, and assuming that sunspot number can be used to estimate how solar intensity varied in the past.

4.  The grand total forcing (including the solar cycle contribution), a 2-year trailing average of the AMO index, and the Nino 3.4 index are correlated against the HadCRUT3v global average temperature data.

This yields a curve fit model which can be used to estimate future warming by setting the Nino 3.4 and AMO indexes to zero (close to their historical averages) and estimating future changes in atmospheric concentrations for each of the infrared absorbing gases.
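A minimal sketch of steps 1 through 4 follows. Only the CO2 term is written out, using the IPCC simplified expression dF = 5.35*ln(C/C0); the methane, N2O, CFC, and ozone terms would use the corresponding expressions from reference (6). The input file, the column names, and the 0.00125 watt/M^2 per sunspot constant (derived later in the text) are assumptions for illustration only:

    import numpy as np
    import pandas as pd

    df = pd.read_csv("forcing_inputs.csv")            # hypothetical input file

    SUNSPOT_CONST = 0.00125                           # W/M^2 per sunspot (see derivation below)

    f_co2 = 5.35 * np.log(df["co2"] / df["co2"].iloc[0])   # IPCC simplified CO2 expression
    f_solar = SUNSPOT_CONST * df["sunspots"]                # solar-cycle forcing from sunspot number
    # f_ch4, f_n2o, f_cfc, f_o3 would be added here from their own IPCC expressions
    total_forcing = f_co2 + f_solar

    amo_2yr = df["amo"].rolling(24).mean()            # 2-year trailing average (monthly data assumed)
    # The regression against HadCRUT3v then proceeds exactly as in the earlier sketch,
    # with [total_forcing, amo_2yr, nino34] as the predictors.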

Fitzpatrick_Image2
Figure 1 Model results with temperature projection to 2060

To find the best estimate of lag in the climate (mainly from ocean heat accumulation), the model constants were calculated for different trailing averages of the total radiative forcing.  The best fit to the data (highest R^2) was for a two-year trailing average of the total radiative forcing, which gave a net climate sensitivity of 0.270 (+/-0.021) C per watt/M^2 (+/-2 sigma).  All longer trailing average periods yielded somewhat lower R^2 values and somewhat higher estimates of climate sensitivity.  A 5-year trailing average yields a sensitivity of 0.277 (+/-0.021) C per watt/M^2, a 10-year trailing average yields 0.289 (+/-0.022) C per watt/M^2, and a 20-year trailing average yields 0.318 (+/-0.025) C per watt/M^2, ~18% higher than the two-year trailing average.  As discussed above, very long lags (e.g., 10-20+ years) appear inconsistent with recent trends in ocean heat content and average surface temperatures.
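The lag search itself is just a loop over candidate smoothing windows, refitting the model each time and keeping the window with the highest R^2. A sketch, assuming monthly data and a hypothetical fit_model helper that wraps the regression sketched above:

    for years in (2, 5, 10, 20):
        smoothed_forcing = total_forcing.rolling(12 * years).mean()   # trailing average of forcing
        r2, sens = fit_model(smoothed_forcing, amo_2yr, df["nino34"], df["temp_anomaly"])
        print(f"{years:>2}-yr trailing average: R^2 = {r2:.3f}, sensitivity = {sens:.3f} C per W/M^2")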

Oscillation in the radiative forcing curve (the green curve in Figure 1) is due to solar intensity variation over the sunspot cycle.  The assumed total variation in solar intensity at the top of the atmosphere is 1 watt per square meter (approximately the average variation measured over the last three solar cycles) for a change in sunspot number of 140.  Assuming a minimum solar intensity of 1365 watts per square meter and Earth’s albedo at 30%, the average solar intensity over the entire Earth surface at zero sunspots is (1365/4) * 0.7 = 238.875 watts per square meter, while at a sunspot number of 140, the average intensity increases to 239.05 watts per square meter, or an increase of 0.175 watt per square meter.  The expected change in radiative forcing (a “sunspot constant”) is therefore 0.175/140 = 0.00125 watt per square meter per sunspot.  When different values for this constant are tried in the model, the best fit to the data (maximum R^2) is for ~0.0012 watt/M^2 per sunspot, close to the above calculated value of 0.00125 watt/M^2 per sunspot.
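The same “sunspot constant” arithmetic in a few lines, using the values assumed in the paragraph above:

    base_tsi, albedo, sunspot_range = 1365.0, 0.30, 140
    f0 = base_tsi / 4 * (1 - albedo)             # 238.875 W/M^2 at zero sunspots
    f1 = (base_tsi + 1.0) / 4 * (1 - albedo)     # 239.05  W/M^2 at 140 sunspots
    print((f1 - f0) / sunspot_range)             # 0.00125 W/M^2 per sunspot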

Fitzpatrick_Image3
Figure 2 Scatter plot of the model versus historical temperatures
Fitzpatrick_Image4
Figure 3 Comparison of the model’s temperature projection under ‘Business as Usual’ with the IPCC projection of ~0.2C per decade, consistent with GCM projections.

Regional Sensitivities

Amplification of sensitivity is the ratio of the actual climate sensitivity to the sensitivity expected for a blackbody emitter.  The sensitivity from the model is 0.270 C per watt/M^2, while the expected blackbody sensitivity is 0.267 C per watt/M^2, so the amplification is 1.011.  An amplification very close to 1 suggests that the negative and positive feedbacks within the climate system are roughly balanced, and that the average surface temperature of the Earth increases or decreases approximately as a blackbody emitter would when subjected to small variations around the average solar intensity of ~239 watts/M^2 (that is, as a blackbody would vary in temperature around ~255 K).  This does not preclude a range of sensitivities within the climate system that average out to ~0.270 C per watt/M^2; sensitivity may vary with season, latitude, local geography, albedo/land use, weather patterns, and other factors.  The temperature increase due to WMGG’s may have, and indeed should have, significant regional and temporal differences, so the practical importance of WMGG-driven warming will also vary by region and over time.

Credibility of Model Projections

Some may argue that any curve-fit model based on historical data is likely to fail in making accurate predictions, since the conditions that applied during the hindcast period may differ significantly from those in the future.  But if the curve-fit model includes all important variables, then it ought to make reasonable predictions, at least until/unless important new variables are encountered in the future.  Examples of important new climate variables are a major volcanic eruption or a significant change in ocean circulation.  The probability of encountering important new variables increases with the length of the forecast, of course.  So while a curve-fit climate model’s predictions will have considerable uncertainty far in the future (e.g., 100 years or more), forecasts over shorter periods are likely to be more accurate.

To demonstrate this, the model constants were calculated using temperature, WMGG forcings, AMO, and Nino 3.4 data for 1871 to 1971 only, and then applied to all of the 1871 to 2008 data (Figure 4).  The model’s calculated temperatures represent a ‘forecast’ from 1972 through 2008, or 36 years.  Since the model constants came only from pre-1972 data, the model has no ‘knowledge’ of the temperature history after 1971, and the 1972 to 2008 forecast is a legitimate test of the model’s performance.  The model’s 1972 to 2008 forecast performance is reasonably good, with very similar deviations between the model and the historical temperature record in the hindcast and forecast periods.

Fitzpatrick_Image5
Figure 4 Model temperature forecast for 1972 through 2008, with model constants based on 1871 to 1971. The model has no “knowledge” of the temperature record after 1971.

The model fit to the temperature data in the forecast period is no worse than in the hindcast period.  The climate sensitivity calculated using only 1871 to 1971 data is similar to that calculated using the entire data set: 0.255 C per watt/M^2 versus 0.270 C per watt/M^2.  A model forecast starting in 2009 will not be perfect, but the 1972 to 2008 forecast performance suggests that it should be reasonably close to correct over the next 36+ years.
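The out-of-sample test amounts to fitting the regression on the pre-1972 rows only and then applying the resulting constants to the whole record. A sketch, where build_design_matrix is a hypothetical helper that assembles the forcing, AMO, and Nino 3.4 columns as above:

    import numpy as np

    train = df[df["year"] <= 1971]                        # constants come only from 1871-1971
    X_train = build_design_matrix(train)
    coef, *_ = np.linalg.lstsq(X_train, train["temp_anomaly"].to_numpy(), rcond=None)

    X_all = build_design_matrix(df)                       # apply to all of 1871-2008
    model_temps = X_all @ coef                            # the 1972-2008 portion is a true forecast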

Emissions Scenarios

The model projections in Figure 1 (2009 to 2060) are based on the following assumptions:

a) The year-on-year increase in CO2 concentration in the atmosphere rises to 2.6 PPM per year by 2015 (about 25% higher than recent rates of increase), and then remains at 2.6 PPM per year through 2060.  Atmospheric concentration reaches ~518 PPM by 2060.

b) N2O concentration increases in proportion to the increase in CO2.

c) CFC’s decrease by 0.25% per year.  The actual rate of decline ought to be faster than this, but large increases in releases of short-lived refrigerants like R-134a and non-regulated fluorinated compounds may offset a large portion of the decline in regulated CFC’s.

d) The concentration of methane, which has been constant for the last ~7 years at ~1,800 parts per billion, increases by 10 PPB per year, reaching ~2,370 PPB by 2060.

e) Tropospheric ozone (which forms in part from volatile organic compounds, VOC’s) increases in proportion to increases in atmospheric CO2.

The above represent pretty much a “business as usual” scenario, with fossil fuel consumption in 2060 more than 70% higher than in 2008, and with no new controls placed on other WMGG’s.  The projected temperature increase from 2008 to 2060 is ~0.68 C, or 0.131 C per decade.  This assumes, of course, that WMGG’s are responsible for all (or nearly all) of the warming since 1871; if a significant amount of the warming since 1871 had other causes, then future warming driven by WMGG’s will be less.
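The CO2 trajectory behind this scenario (and the reduced-emissions scenarios below) is easy to reproduce. The sketch below assumes a 2008 concentration of ~385 PPM and a current increase of ~2.1 PPM per year; those starting values are my own assumptions for illustration, not outputs of the model, so the end points are only approximate:

    def co2_path(start_ppm=385.0, start_rate=2.1, peak_rate=2.6,
                 ramp_end=2015, growth_decline=0.0, end_year=2060):
        """Annual CO2 increment ramps to peak_rate by ramp_end, then shrinks by
        growth_decline per year (0.0 = business as usual)."""
        conc, rate = start_ppm, start_rate
        for year in range(2009, end_year + 1):
            if year <= ramp_end:
                rate = min(peak_rate, rate + (peak_rate - start_rate) / (ramp_end - 2008))
            else:
                rate *= (1.0 - growth_decline)
            conc += rate
        return conc

    print(co2_path())                         # "business as usual": ~518-519 PPM in 2060
    print(co2_path(growth_decline=0.005))     # "Efficient Controls": roughly ~507 PPM
    print(co2_path(growth_decline=0.02))      # "Draconian Controls": roughly ~480 PPM

Roughly speaking, each trajectory is then converted to a forcing and combined with the other gas terms and the fitted sensitivity to give the temperature paths shown in Figures 1, 5, and 6.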

Separation of the different contributions to radiative forcing allows projections of future average temperatures under different scenarios for reductions in the growth of fossil fuel usage, with separate efforts to control emissions of methane, N2O, and VOC’s (leading to tropospheric ozone).

Fitzpatrick_Image7
Figure 5 Reduced warming via controls on non-CO2 emissions and gradually lower CO2 emissions growth.

One such scenario can be called the “Efficient Controls” scenario.  The year-on-year increase in CO2 in the atmosphere rises to 2.6 PPM by 2014, and then declines by 0.5% per year starting in 2015 (that is, a 2.6 PPM increase in 2014, 2.587 PPM in 2015, 2.574 PPM in 2016, etc.).  Methane concentrations are maintained at current levels via controls installed on known sources, CFC concentration falls by 0.5% per year due to new restrictions on currently non-regulated compounds, and N2O and tropospheric ozone increases are proportional to the (somewhat lower) CO2 increases.  These are far from small changes, but they probably could be achieved without great economic cost by shifting most electric power production to nuclear (or non-fossil alternatives where economically viable), and simultaneously taxing CO2 emissions worldwide at an initially low but gradually increasing rate to promote worldwide improvements in energy efficiency.  Under these conditions, the predicted temperature anomaly in 2060 is 0.91 degree (versus 0.34 degree in 2008), or a rise of 0.109 degree per decade.  Atmospheric CO2 would reach ~507 PPM by 2060, and CO2 emissions in 2060 would be about 50% above 2008 emissions.  By comparison, the “business as usual” case produces a projected increase of 0.131 C per decade through 2060, with atmospheric CO2 reaching ~519 PPM by 2060.  So at (relatively) low cost, warming through 2060 could be reduced by a little over 0.11 C compared to business as usual.

A “Draconian Controls” scenario, with new controls on fluorinated compounds, methane, and VOC’s, and with the rate of atmospheric CO2 increase declining by 2% each year starting in 2015, shows the expected results of a very aggressive worldwide program to control CO2 emissions.  The temperature anomaly in 2060 is projected at 0.8 C, for a rate of temperature rise through 2060 of 0.088 degree per decade, or ~0.11 C lower in 2060 than for the “Efficient Controls” scenario.  Under this scenario, the concentration of CO2 in the atmosphere would reach ~480 PPM by 2060, but would rise only ~25 PPM more between 2060 and 2100.  Total CO2 emissions in 2060 would be ~15% above 2008 emissions, but would have to decline to the 2008 level by 2100.  Whether the potentially large economic costs of draconian emissions reductions are justified by a further ~0.11 C temperature reduction in 2060 (beyond the “Efficient Controls” case) is a political question that should be carefully weighed.

Fitzpatrick_Image8
Figure 6 Draconian emissions controls may reduce average temperature in 2060 by ~0.21C compared to business as usual.

Conclusions

The model shows that the climate sensitivity to radiative forcing is approximately 0.27 degree per watt/M^2, based on the assumption that radiative forcing from WMGG’s has caused all or nearly all of the measured temperature increase since ~1871.  This corresponds to a response of ~1 C for a doubling of CO2 (with other WMGG’s remaining constant).  Much higher climate sensitivities (e.g., 0.5 to >1.0 C per watt/M^2, or 1.85 C to >3.71 C for a doubling of CO2) appear to be inconsistent with the historical record of temperature and measured increases in WMGG’s.
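The conversion from sensitivity per watt to degrees per CO2 doubling uses the same IPCC simplified expression as before (a doubling is worth 5.35*ln(2), or ~3.7 watts/M^2):

    import math

    f_doubling = 5.35 * math.log(2.0)            # ~3.71 W/M^2 for doubled CO2
    for s in (0.27, 0.5, 1.0):                   # sensitivities in C per W/M^2
        print(f"{s} C per W/M^2  ->  {s * f_doubling:.2f} C per doubling")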

Assuming no significant changes in the growth pattern of fossil fuels, and no additional controls on other WMGG’s, the average temperature in 2060 may reach ~0.68C higher than the 2008 average.  Modest steps to control non-CO2 emissions and gradually reduce the rate of increase in the concentration of CO2 in the atmosphere could yield a reduction in WMGG driven warming between 2008 and 2060 of ~15% compared to no action.  A rapid reduction in the rate of growth of atmospheric CO2 would be required to reduce WMGG driven warming between 2008 and 2060 by ~30% compared to no action.

Comments
Lachlan O'Dea
August 9, 2009 8:57 pm

Mr Fitzpatrick’s model and the IPCC models seem to both be based on the assumption that GHG increases are the only significant cause of temperature change. Yet, they produce greatly different values for the climate sensitivity. If someone could give a quick summary of how Mr Fitzpatrick’s approach differs from the IPCC’s, that would really help me.

Larry
August 9, 2009 9:31 pm

Steve, thanks for your excellent post. I’m just a poor dumb lawyer, so it was hard for me to sort through some of your discussion, but I appreciate what you were attempting to demonstrate in terms of a “worst case” scenario if what the AGWers are saying proves to be correct (although I doubt all of it can be proven correct, and I also tend to doubt whether your method of calculating climate sensitivity is really useful in making long term climate predictions). I also agree with your policy suggestion concerning making nuclear power the source of our future electricity. When John McCain was looking for an “insurance policy” in connection with this overall question, he didn’t stress this particular solution enough. Your post makes a compelling case for long-term further rational study and gradual action as opposed to short-term hysteria and draconian solutions.

Patrick Davis
August 9, 2009 11:26 pm

It’s funny, last week we had a couple of days which were a little warmer than average and then we get this…
http://www.smh.com.au/environment/chilliest-sydney-morning-for-a-year-20090809-ee8v.html
I can confirm it was cold this morning, and in fact last night, working on my car with a flat battery. Car broke down, left out for about an hour and a half, condensation on the rear windscreen and roof as well as my breath condensing too. This was about 9-10pm on Sunday the 9th.

Allan M R MacRae
August 10, 2009 1:53 am

Building on Willis’ comments, here is an excerpt of a file from 2005.
Ascribing all of the alleged 0.6C rise in global temperature to increased atmospheric CO2 gives a climate sensitivity to CO2 doubling of ~1.2C (1.189C) from a one-line solution.
But there is, imo, no evidence to ascribe such warming to increased CO2.
The global cooling from ~1945 to 1975 and the cooling since ~2000 are not explained by this assumption.
As Willis suggests, such cooling cannot be properly explained by ascribing it to another measured temperature, whether it be PDO, AMO or other.
IPCC modelers have attempted to attribute the 1945-75 cooling to aerosols, principally SO2, but they have had to invent the data to do so – I accept Douglas Hoyt’s comment that there are no such trends in his real measured data, save volcanoes which are clearly apparent.
Here is the 2005 Excel file, copied onto WORD – hope it is legible.
9. EXTRAPOLATING OBSERVED WARMING TRENDS
by Jarl Ahlbeck (Turku, Finland)
We should not confuse the word “possibility” with “probability” as some
people do when they compare different simulated results with each other.
Everything is possible, but probability has a mathematical definition and
should not be used when comparing simulated results. These reported
(Nature, 27 Jan 2005) values of 1.9 to 11.5 deg C warming are
possibilities, computerized speculations, nothing else. Also: Let’s not
to talk about percent possibilities. All possibilities are
100% possible.
But of course, a kind of reality check can be made very easily: Say that
half of the observed 20th century warming of 0.8 deg is due to greenhouse
gases (CO2 increase from 280 to 370 ppm) and half is due to increased sun
activity. As the relation is logarithmic, 0.4 deg=k*ln(370/280), giving
k=1.435. For 2*CO2 (560 ppm), an additional warming of 1.435*ln(560/370)
=0.59 deg C could be expected. This is a speculation as good as any
produced by a computer climate entertainment program.
In fact, 0.59 deg may be an overprediction as the observed warming has been
partly caused by CFCs and CH4. As we know, the atmospheric concentration of
CFC has decreased, and there is no more increase in CH4. This means that
the k-value for CO2 should be lower than 1.435.
k = deltaT/ln(CO2b/CO2a)
deltaT = k*ln(CO2b/CO2a)
For various % of 0.8 degree C temp rise in 20th century ascribed to CO2:
(MacRae calculations and comments below)
k      CO2a  CO2b  deltaT  As Above  Case
1.435  280   370   0.4     checks    Assumes 50% deltaT due to >CO2
1.435  370   560   0.595   checks
2.870  280   370   0.8               Assumes 100% deltaT due to >CO2
2.870  370   560   1.189
Both 50% and 100% seem too high, given the better correlations below.

crosspatch
August 10, 2009 1:56 am

I think our climate can be very sensitive but maybe not to the things people have put forth so far. Take the Bering Strait as an example. It is pretty shallow, only about 55m at its deepest point. The average depth is about 40m. Maximum ice thickness in winter has approached 15m in a cold year in the 20th century. General flow is Northward from the Pacific into the Arctic.
Now imagine we have some really cold years and the ice freezes to 20 or 25m. That would mean that the ice will freeze completely to the bottom over a larger area and will freeze the mud under it. Ice floating on water mainly melts from the bottom up. While this is going on, the amount of water transported North will decrease because the size of the channel available will decrease. This could act as a positive feedback that causes the Arctic waters to become even colder. So now we have a situation where it takes much longer to melt the ice as it must melt from top down or from the edges in rather than bottom up as is the case with floating ice. If combined with that, we see an accumulation of ice on land, sea levels could begin to drop exacerbating the problem even more.
At some point you don’t need to wait for sea levels to drop completely to expose the “land bridge” between Asia and North America. Once the Bering Strait freezes all the way to the bottom, the cutoff of warm water to the Arctic could trigger a situation where ice accumulation in the Northern hemisphere rapidly increases and you get rapid sea level decline. In fact, it might not need to freeze all the way to the bottom to have a significant impact. Simply freezing 5 more meters of depth might be enough to reduce the volume of Pacific flow to have an impact on Arctic ocean temperatures.
Once the Arctic cools significantly and we see ocean levels begin to drop or more area freezing all the way to ground, we begin to see other bodies begin to constrict such as the English Channel, North Sea, and Irish Sea further modifying the amount or exchange between colder and warmer regions of water.
And you can have an alignment of several things that could combine to produce a larger impact. For example, the mean flow through the Bering is Northward but sometimes weather patterns and pressure gradients can set up in such a way as to create a mean Southerly flow instead. If such a pattern was unusually persistent, it could be the first domino that sets things in motion. So a combination of declining insolation due to orbital change, an unusual weather/wind pattern, and a colder than normal winter or series of winters could result in a dramatic change in climate that might cause the Northern Hemisphere to tip rapidly into glacial conditions.
So yeah, I believe Earth’s climate can be very sensitive but I also believe that changes in the ocean temperatures, currents, and flow volumes might have a much greater impact on climate than human emissions. Reduction of exchange between Pacific and Arctic might be enough to start the ball rolling and the freezing of the Bering to the bottom would be the final step in tipping the Northern Hemisphere into glaciation.
Once that freezes all the way to bottom, it would be difficult to get open again but once it did open, it could also mean a very quick transition to interglacial conditions again.

Allan M R MacRae
August 10, 2009 2:15 am

Further to the above post:
Another issue is the divergence between satellite and Hadcrut3.
I estimate that Hadcrut3 ST has a warming bias of 0.07C per decade over UAH LT, since ~1979.
See the first graph at
http://www.iberica2000.org/Es/Articulo.asp?Id=3774
Furthermore, it is clear that CO2 lags temperature at all measured time scales, from ice core data spanning thousands of years to sub-decadal trends – the latter as stated in my 2008 paper, and previously by Kuo (1990) and Keeling (1995).
My 2008 paper is located at
http://icecap.us/images/uploads/CO2vsTMacRae.pdf
Considering all the evidence, and the work of Roy Spencer, Richard Lindzen and others, it is difficult to attribute more than 0.3C average global warming to a hypothetical doubling of atmospheric CO2.
The actual sensitivity could be much less, approaching 0.0.
In any case, less than 0.3C seems inconsequential to me.
Those who feel the need to panic should find something more credible to panic about.
Regards, Allan

Allan M R MacRae
August 10, 2009 2:23 am

Sorry – typo in the above, should be 0.8C not 0.6C – correction reads:
“Ascribing all of the alleged 0.8C rise in global temperature to increased atmospheric CO2 gives a climate sensitivity to CO2 doubling of ~1.2C (1.189C) from a one-line solution.”

tallbloke
August 10, 2009 2:24 am

Steve Fitzpatrick (16:36:50) :
There is no data I am aware of that would lead to assignment of 75% of warming to solar contributions. Let me know if you have this data and where it comes from.

Hi Steve, it’s my data. Like all of us who are doing a bit of home-brewed modelling, I’m working on scenarios in which the parameters are a bit different to the ones generally accepted by the modelers whose models don’t work.
In my case, I’ve worked out some values from the satellite altimetry showing sea level rise, the amount of solar energy retained in the oceans needed to get the thermal expansion component of that rise, and a value I’ve estimated for the level of solar activity at which the oceans neither gain nor lose heat. Coupled with a correlation I’ve found between small changes in the length of day and changes in global temperature, I’ve come up with this graph:
http://s630.photobucket.com/albums/uu21/stroller-2009/?action=view&current=temp-hist-80.gif
The mismatch around the war years is due to the LOD proxy not capturing el nino events very well, plus a well known bias introduced to the SST data by the engine cooling water inlet sensors used by military vessels.
I haven’t yet worked out all the energy relationships, but given the uncertainty over TSI measurements and the poor state of knowledge regarding the amount of heat coming through the relatively thin seabed from changing and overturning currents of radioactive molten stuff in the Earth’s mantle, which are responsible for about 90% of changes in LOD, it seems plausible to me at the moment.
So I’m just asking a hypothetical question for now, if you don’t mind a little speculation:
“What would the climate sensitivity to co2 look like if the solar contribution to the warming was, say, 75% ? Simply 1/4 of your figure, or is it more complicated?”

Patrick Davis
August 10, 2009 4:14 am

IPCC mass extinctions due to “climate change”….NOT.
http://news.smh.com.au/breaking-news-world/flying-frog-among-353-new-himalayan-species-wwf-20090810-efjo.html
Until our Sun decides to “destroy the Earth”, our paltry efforts, in terms of emissions and control of said emissions, are so trivial it beggars belief.

timetochooseagain
August 10, 2009 6:08 am

Adam Grey (20:45:45) : The problem with that is that it assumes that 1. The forcings that control the glaciations are all known and determined 2. That the sensitivity is independent of the climate state and timescale and most importantly 3. That the influence of Milankovitch effects are adequately described by the very small net changes in received solar radiation at the top of the atmosphere.
You see, the concept of climate sensitivity is really only appropriate for dealing with effects of forcings which are more or less spatially homogeneous (that is, the global mean forcing is essentially the same as that anywhere else). This is the case with CO2, essentially, but it is NOT true of orbital effects, which vary strongly not only with latitude but also season. One reason this matters is that, as was pointed out by Lindzen and Pan, 1994, such variations would mean that heat fluxes between the Equator and Poles would be greatly altered by Milankovitch effects-if, as the Iris hypothesis suggests, Tropical climate is strongly constrained, then such changes in transport would lead to large changes in global mean temperatures, and Polar temperatures would be additionally boosted in variation by ice-albedo feedback.
So the Ice Age comparison is ill-posed (not to mention that arguing against observed negative feedbacks because it becomes too hard for you to explain ice ages is pretty silly).
Lindzen, R.S., and W. Pan (1994) A note on orbital control of equator-pole heat fluxes. Clim. Dyn., 10, 49-57.

Fred Middleton
August 10, 2009 6:10 am

Common Sense and Politics are opposing electrical view points.
The complexity of Climate is confusing to those of us that are told to shut up and sit down by government. No easy pill to swallow if the Captain, leaning over the rail, throws you a brick and says “catch this, it will keep you afloat”. The guy floating next to me says “don’t believe, its just a brick”. I can see its just a brick, but why does the Captain (if the ship is really sinking) keep throwing a brick?
Spotted owl: “experts (gov’t)” say that it is endangered (30 years ago) Habitat Loss. But wait, another expert, private biologist says ” no, there are sick communities, that is what needs to be studied”. Today, the Feds have hired shooters. Yep. To shoot the invading eastern Barred Owl that says to the Spotted Owl, “move out or I will kill you”. The Spotted Owl says to self, move out this guy is tougher than me” and too many Spotted Owls were then living in the same location, creating sickness loss in hatch survival.
Watts the point: This site is progressively being attacked (my suspicion is by an intentional seed plant) at chit-chat blogs of the common citizen type, on an increasing frequency. Name calling, sometimes with foul language. Constantly citing this expert and that expert (no names or data) but an identical government sermon similar to the 30 yr ago sick owls, with repeated talking points condemning the Surface Station project. The Fed grants have not begun to study with conviction the migration over the Rocky Mt. of the Barred Owl.
Blow vi ate is akin to Bovine chewing cud and expelling (burp) the large quantities of greenhouse gas. Shame on me.

Steve Fitzpatrick
August 10, 2009 6:20 am

Adam Grey (20:45:45) :
“I’ve read that a climate sensitivity that is too low means that ice age changes are not possible. The ~3c sensitivity is corroborated in various ways, and one of them is to estimate from large-scale global temp changes – quaternary ice age cycles serving well because the land masses, and hence distribution of ice sheets, ocean/air currents etc are very similar to today.”
I think it is fair to say that nobody knows for certain all the causes of ice ages, which, by the way, the Earth is currently in if you consider the history over the last hundred million years; significant ice was not present over much of the last 100 million years. Over the past ~3 million years ice has always been present at high latitudes, with relatively rapid advances and retreats of ice sheets and significant (e.g. 3-6 C) shifts in temperature. For the vast majority of the last 3 million years, Earth was substantially less hospitable to land animals than it is today, because a significant fraction of the total land area was covered with ice sheets. It is clear that variations in the shape of Earth’s orbit and axial inclination are correlated with the repeated ice ages of the last few million years.
Some people have suggested that climate sensitivity to radiative forcing is not a fixed value, but rather depends on feed-backs from albedo changes caused by ice sheets and the sea level drops that go along with ice sheets. The atmospheric concentrations of CO2 and methane were also substantially lower 25,000 years ago than at any time during the holocene. Since the net forcing from these gases goes as the log of the concentration, the additional forcing for a fixed change in concentration from a lower base (200 to 225 ppm for example) would be larger than the additional forcing for the same change from a higher base (375 to 400 ppm for example). For these example numbers, the change at the lower level has ~83% more net radiative forcing than the change at the higher level, independent of any ice sheet or sea level feed-backs. So net sensitivity could have been substantially higher 25,000 years ago (maximum ice sheet coverage) than today, allowing relatively small forcings to make relatively big net changes in climate.
Whatever causes substantial long term (glacial/interglacial) variation, it is not possible to reliably infer from these variations that the climate sensitivity to radiative forcing is high today.
GCM’s have quite a large range of climate sensitivities (http://en.wikipedia.org/wiki/File:Global_Warming_Predictions.png), and a similarly wide range of projected warming, with the least sensitive models predicting only about 50% more warming than my curve fit model; the GCM’s can’t all be right, and it’s quite possible that none of them are right. The temperature history of the past 100+ years is consistent with relatively low sensitivity.

M. Simon
August 10, 2009 7:15 am

Steve Fitzpatrick (17:28:10) : “If you run the regression without the Nino3.4 and AMO indexes, then the reported sensitivity to radiative forcing is just about the same as with them. The R^2 for the model (the quality of its hindcast, if you will) is much worse, since these indexes account for much of the short term variation. These indexes DO NOT change the overall trend in any way, since the long term trend in both indexes is flat since 1871.”
Well, that’s exactly what I said a while ago. When Nino and AMO indices are taken away, the only factor left is CO2. So all the model is doing is ascribing all the observed temperature trend to CO2.

So let me see if I can parse this.
the long term trend in both indexes is flat since 1871.
When Nino and AMO indices are taken away, the only factor left is CO2.
I believe that makes the contribution of CO2 approximately zero.

August 10, 2009 7:24 am

I like the Bering Strait hypothesis.
Once that freezes all the way to bottom, it would be difficult to get open again but once it did open, it could also mean a very quick transition to interglacial conditions again.
The answer is high explosives. Lots of them. And nuclear powered icebreakers.

Tenuc
August 10, 2009 7:24 am

crosspatch (01:56:05) :
“…And you can have an alignment of several things that could combine to produce a larger impact…”
Yes, I think this is the point which is missed by many climatologists who forget that climate is a chaotic system and who try to extract bits of the ‘machine’ to treat in a linear way. Observing historic behaviour, our climate seems to have warm and cool periods, with cool being the dominant trend. We need to treat the sun and planets (including Earth) as one complete system so that bifurcation points can be better predicted.
Doing this will allow us to plan how to mitigate the effects of change before crisis point is reached, rather than reacting to red-herrings like GHGs.

Steve Fitzpatrick
August 10, 2009 9:18 am

M. Simon (07:15:12) :
“So let me see if I can parse this.
the long term trend in both indexes is flat since 1871.
When Nino and AMO indices are taken away, the only factor left is CO2.
I believe that makes the contribution of CO2 approximately zero.”
Let me say one time more:
1. The calculated sensitivity (0.27 degree per watt) is based on the ASSUMPTION that all net warming since 1871 was due to radiative forcing. I did not say that radiative forcing is the only cause for observed warming, nor even that it is the most important cause. The calculated sensitivity represents a worst case estimate for sensitivity to radiative forcing. The measured variation in TSI over the last three solar cycles (about 1 watt per sq. meter) shows up in the temperature record quite clearly over the last 100+ years, with a best estimate solar cycle effect that is almost exactly what would be expected for a radiative sensitivity of 0.27 degree/watt. This does not prove 0.27 is the correct sensitivity, but it certainly shows the measured solar cycle signal is at least consistent with a sensitivity in this range.
2. Total radiative forcing is not the same as forcing from CO2. Total forcing includes radiative forcing from N2O, methane, chloro-fluorocarbons, tropospheric ozone from VOC’s, and solar cycle forcing based on measured variation in TSI over the last three solar cycles. The reason for including all these forcings individually is that they may not (indeed, we already know they do not) follow the trajectory of forcing from CO2, and any projection of warming based only on forcing from CO2 ignores that some of these other forcings are likely not to increase as much as CO2 in the future, and may actually decrease, canceling some expected forcing from CO2.
3. You may believe that all observed warming has been caused by other factors. My post does not and was never intended to address other possible causes, nor to suggest that greenhouse gases have caused a known X% of total warming. It was intended to place a realistic ceiling on the warming that could possibly be attributed to greenhouse gases.
If you do not believe there is any possibility that radiative forcing has contributed to observed warming, that is OK with me, but this really has nothing to do with my post.

Steve Fitzpatrick
August 10, 2009 10:03 am

Dr A Burns (18:55:43) :
“A back-of-the-envelope calculation shows that body heat from the current 6.7 billion people is enough to heat the atmosphere by 0.8 degrees C in 100 years.
The point is that the changes in temperature being discussed as so small that almost anything can affect them.”
No doubt true if you fed everyone food from somewhere outside the earth or made all food using only fossil fuels (with no sunlight involved). But since essentially all caloric value in the foods people eat (animal or plant) comes from chemical conversion of the energy in sunlight to the energy in carbohydrates, the heat given off by people’s bodies can have no effect on the whole of the climate, not even a tiny one. The total heat input to the earth is unchanged by our food’s caloric value.
The inside of a plane filled with people, sitting at the gate for an hour, presents a micro-climate with a very different response to body heat….

SteveSadlov
August 10, 2009 10:13 am

This appears to be an excellent step ahead in the quest to better define the positive forcing factors. Now we need to get a handle on negative forcings and feedback loops, both positive and negative. Perhaps a better model may be possible within the next 5 years.

crosspatch
August 10, 2009 10:20 am

Hmmm I wonder if that is real.
““A back-of-the-envelope calculation shows that body heat from the current 6.7 billion people is enough to heat the atmosphere by 0.8 degrees C in 100 years.”
What is the heating caused by several hundred million automobiles sitting in the sun heating up the air inside them? Open a window a little and you have enough circulation so you have hundreds of millions of little solar heaters sitting in the sun.

Jim
August 10, 2009 10:31 am

*****************
Steve Fitzpatrick (06:20:56) :
Adam Grey (20:45:45) :
Some people have suggested that climate sensitivity to radiative forcing is not a fixed value, but rather depends on feed-backs from albedo changes caused by ice sheets and the sea level drops that go along with ice sheets. The atmospheric concentrations of CO2 and methane were also substantially lower 25,000 years ago than at any time during the holocene.
****************
It seems even with less ocean liquid water volume, the colder ocean water due to the ice age could soak up a lot more CO2.

Willis Eschenbach
August 10, 2009 10:32 am

Steve, thanks for your response. You say:

If you run the regression without the Nino3.4 and AMO indexes, then the reported sensitivity to radiative forcing is just about the same as with them. The R^2 for the model (the quality of its hindcast, if you will) is much worse, since these indexes account for much of the short term variation. These indexes DO NOT change the overall trend in any way, since the long term trend in both indexes is flat since 1871. They were included to improve the accuracy of the model predictions. Yes, the short term variation in global temperature is closely correlated with these indexes, but these indexes are completely independent of radiative forcing and clearly are not responsible for any significant net global warming over the entire period…. their trends are completely flat.

The misunderstanding seems to be in this statement:

[Nino3.4 and AMO] were included to improve the accuracy of the model predictions.

I say again: including observational data cannot improve the accuracy of model predictions. It can only improve the accuracy of model hindcasts.
But improving the accuracy of your hindcasts by including actual observations is a mug’s game. Sure, you can improve hindcast accuracy by including PDO (Pacific Decadal Oscillation), or AMO (Atlantic Multi-Decadal Oscillation), or MJO (Madden-Julian Oscillation), or SOI (Southern Oscillation Index), or NAO (North Atlantic Oscillation), or any other index you choose. Any one of these, or any combination of them, will improve the accuracy of your hindcast. See the NOAA Climate Indices web page for a complete list; you can play with and compare any and all of these indices to any other or to global temperature.
But using your method as shown in the head post of this thread, all you have proven is that model estimates of past observations can be improved by using past observations in creating your model estimates …
Surely you can see how pointless that exercise is.
w.

tallbloke
August 10, 2009 10:54 am

Steve Fitzpatrick (09:18:45) :
1. The measured variation in TSI over the last three solar cycles (about 1 watt per sq. meter) shows up in the temperature record quite clearly over the last 100+ years, with a best estimate solar cycle effect that is almost exactly what would be expected for a radiative sensitivity of 0.27 degree/watt. This does not prove 0.27 is the correct sensitivity, but it certainly shows the measured solar cycle signal is at least consistent with a sensitivity in this range.

Hmmm.
1) El nino tends to occur at solar min, and is the manifestation of solar input to the oceans at solar max. This masks some of the true solar input to the climate system by ‘flattening’ the temperature curve.
2) PMOD data uses a model to calculate TSI, based on the splicing together of records from several satellites used to measure irradiance over the last 30 years. PMOD and the IPCC prefer the use of ERBS data to calibrate the change during the ‘ACRIM gap’. The ACRIM team maintain this is not as good as the data from the other satellite, NEPTUNE, which was working when the gap occurred, and that consequently TSI shows little trend when it should show a rising trend at the end of the C20th.
3) Additionally, the ACRIM data shows that cycle 21 had a difference between solar max and min of nearer 2 W/m^2 than 1 W/m^2.
All this adds up to a spread of uncertainty about the effect of Solar max-solar min on temperatures.
PMOD says 0.05 to 0.1C
I say it could be more like 0.35-0.4C depending how you account for heat storage in the oceans and heat energy release in el nino.
If correct, this means the temp change over the C20th can mostly be explained by the sun, as the lower, longer cycles with longer minima of the early part of the C20th average out to a lot less TSI received at Earth.
By the way, I replied to your question earlier.

George E. Smith
August 10, 2009 11:39 am

“”” Projections of climate warming from global circulation models (GCM’s) are based on high sensitivity for the Earth’s climate to radiative forcing from well mixed greenhouse gases (WMGG’s). This high sensitivity depends mainly on three assumptions:
1. Slow heat accumulation in the world’s oceans delays the appearance of the full effect of greenhouse forcing by many (eg. >20) years.
2. Aerosols (mostly from combustion of carbon based fuels) increase the Earth’s total albedo, and have partially hidden the ‘true’ warming effect of WMGG increases. Presumably, aerosols will not increase in the future in proportion to increases in WMGG’s, so the net increase in radiative forcing will be larger for future emissions than for past emissions.
3. Radiative forcing from WMGG’s is amplified by strong positive feedbacks due to increases in atmospheric water vapor and high cirrus clouds; in the GCM’s, these positive feedbacks approximately double the expected sensitivity to radiative forcing.
However, there is doubt about each of the above three assumptions. “””
Well I haven’t read the paper yet; but I printed it out so I can study it and let it sink in.
But I couldn’t easily get past the introduction; pasted above.
Well I hope to shout there is doubt about those three assumptions that are part of the GCM models.
#1 The “slow heat accumulation” in the oceans. What is so darn slow about it? A solar photon is going to get absorbed in those oceans in less than one millisecond; not >20 yrs; and the “heat” in the ocean comes from those solar photons. The long wavelength re-radiation from a solar and GHG warmed atmosphere is completely absorbed in the top ten microns or less of the ocean surface, and that promotes prompt evaporation from the locally warmed surface. I wouldn’t be hunting for a lot of that long wave energy being stored in the oceans for any length of time.
#2 Cut with the cloaking hypotheses; aerosols (aka clouds) have always been an integral part of the earth’s atmosphere and always will; so stop making up excuses, and properly account for clouds in those silly GCM computer models; they aren’t cloaking anything, they are a part and parcel of the water NEGATIVE feedback effect.
#3 Balderdash! Water vapor is a GHG; the most prominent GHG, and it doesn’t need any other GHG to spur it into action; it is perfectly capable of causing all the warming the atmosphere needs, and in cloud form of causing all the cooling the earth surface needs. The notion of H2O positive feedback “enhancement” of some other GHG caused atmospheric warming is simply a crutch to ignore the fact that it is the water that is controlling the whole temperature balance; not the GHGs.
And as for buying any notion that a linear approximation to a highly non-linear process is valid; don’t count on planet earth approximating the heating effect of incoming radiant energy, and cooling from outgoing IR by any linear approximation. The earth will apply the correct physics to the situations, and compute the correct answer, not some linear guess of unreality.
The problem with the predictions of the GCMs is simply the GCMs themselves; use the Earth’s own GCM and then you will get the right answers.

Rik Gheysens
August 10, 2009 12:15 pm

Kevin Kilty (06:59:28) :
“By the way, we can arrive at roughly the same value of sensitivity in three more different ways. Set W=e(sigma)T^4 (Stefan Equation), differentiate T with respect to W, and the result is the sensitivity. If one plugs in e=0.98, sigma = 5.67×10^(-8), and T as 288K, then one gets 0.25.”
Your view is correct! May be there is a fault in the calculation.
sensitivity = (dW/dT)^-1 = (4 ε σ T^3)^-1
If one plugs in the given values, one gets 0.188 (not 0.25). Do you agree with this?
I found a remarkable article, where much is explained: http://www.webcommentary.com/climate/monckton.php
I have not read it yet entirely because it requires some attention…

Steve Fitzpatrick
August 10, 2009 12:54 pm

Rik Gheysens (12:15:39) :
“sensitivity = (dW/dT)^-1 = (4 ε σ T^3)^-1
If one plugs in the given values, one gets 0.188 (not 0.25). Do you agree with this?”
The value of T in the above equation is the effective emitting temperature of the Earth, not its surface temperature. The infrared headed out to space is emitted over a range of effective temperatures (this is clear from looking at the NASA graphic of infrared intensity at the beginning of the post), but all these emissions are at blackbody-equivalent temperatures well below the surface temperature that lies under the emitting atmosphere.
The average emission temperature is the blackbody temperature which will balance the solar energy absorbed by the Earth’s surface: about 0.7 * 0.25 * 1365 watts/M^2 = 239 watts/M^2. The blackbody temperature in equilibrium with the absorbed solar energy is ~255 K, and the corresponding blackbody sensitivity is about 0.266 degree per watt/M^2. If you assumed 288 K as the average emission temperature, the associated sensitivity would be 0.185 degree per watt/M^2, but the heat loss to space would then be (288/255)^4 = ~1.63 times the solar energy that is actually absorbed.
