How Sensitive is the Earth’s Climate?

Guest Post By Steve Fitzpatrick


Introduction

Projections of climate warming from general circulation models (GCM’s) are based on a high sensitivity of the Earth’s climate to radiative forcing from well mixed greenhouse gases (WMGG’s). This high sensitivity depends mainly on three assumptions:

1. Slow heat accumulation in the world’s oceans delays the appearance of the full effect of greenhouse forcing by many (e.g., >20) years.

2. Aerosols (mostly from combustion of carbon based fuels) increase the Earth’s total albedo, and have partially hidden the ‘true’ warming effect of WMGG increases.  Presumably, aerosols will not increase in the future in proportion to increases in WMGG’s, so the net increase in radiative forcing will be larger for future emissions than for past emissions.

3. Radiative forcing from WMGG’s is amplified by strong positive feedbacks due to increases in atmospheric water vapor and high cirrus clouds; in the GCM’s, these positive feedbacks approximately double the expected sensitivity to radiative forcing.

However, there is doubt about each of the above three assumptions.

1.  Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after the average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001.  This indicates that a) ocean heat content does not lag many years behind the surface temperature, b) global average temperature and heat accumulation in the top 700 meters of ocean are closely tied, and c) the Hansen et al (4) projection in 2005 of substantial future warming ‘already in the pipeline’ is not supported by recent ocean and surface temperature measurements.  While there is no doubt a very slow accumulation of heat in the deep ocean below 700 meters, this represents only a small fraction of the accumulation expected for the top 700 meters, and should have little or no immediate (century or less) effect on surface temperatures.  Short ocean heat lags are consistent with relatively low climate sensitivity, and preclude very high sensitivity.

2.  Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings.  There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future.  Considering the large uncertainties in aerosol effects, it is not even clear if the net effect, including black carbon, which reduces rather than increases albedo, is significantly different from zero.

3.  Amplification of radiative forcing by clouds and atmospheric humidity remains poorly defined.  Climate models do not explicitly resolve the behavior of clouds, which are orders of magnitude smaller than the grid scale of the models, but instead handle clouds using ‘parameters’ that are adjusted to approximate their expected behavior.  Adjustable parameters can of course also be tuned to make a model predict whatever warming is expected or desired.  Measured tropospheric warming in the tropics (the infamous ‘hot spot’), attributed to increases in atmospheric water content, falls far short of the warming projected for this part of the atmosphere by most GCM’s.  This casts doubt on the amplification from increased water vapor assumed by the GCM’s.

Many people, including this author, do not believe the large temperature increases (up to 5+ C for a doubling of CO2) projected by GCM’s are credible.  A new paper by Lindzen and Choi (described at WUWT on August 23, 2009) reports that the total outgoing radiation (visible plus infrared) above the tropical ocean increases when the ocean surface warms, which suggests the climate feedback (at least in these tropical ocean areas) is negative, rather than positive as the GCM’s all assume.

In spite of the many problems and doubts with GCM’s:

1) It is reasonable to expect that positive forcing, from whatever source, will increase the average temperature of the Earth’s surface.

2) Basic physics shows that increasing infrared absorbing gases in the atmosphere like CO2, methane, N2O, ozone, and chloro-fluorocarbons, inhibits the escape of infrared radiation to space, and so does provide a positive forcing.

3) There has in fact been significant global warming since the start of the industrial revolution (beginning a little before 1800), concurrent with significant increases in WMGG emissions from human activities.

There really should be an increase in average surface temperature due to forcing from increases in infrared absorbing gases.  This is not to say that there are no other plausible explanations for some or even most of the increases in global temperatures over the past 100+ years.  For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the little ice age and the late 20th century, which could account for a significant fraction of the observed warming.  But regardless of other possible contributions, it is impossible to refute that greenhouse gases should lead to increased global average temperatures.  What matters is not whether the earth will warm from increases in WMGG’s, but how much it will warm and over what period.  The uncertainties and dubious assumptions in the GCM’s make them not terribly helpful in making reasonable projections of potential warming, even under the worst-case assumption that WMGG’s are the principal cause of warming.

Climate Sensitivity

If we knew the true climate sensitivity of the Earth (expressed as degrees increase per watt/square meter of forcing) and we knew the true radiative forcing due to WMGG’s, then we could directly calculate the expected temperature rise for any assumed increases in WMGG’s.  Fortunately, the radiative forcing effects for WMGG’s are pretty accurately known, and these can be used in evaluating climate sensitivity.  An approximate value for climate sensitivity in the absence of any feedbacks, positive or negative, can be estimated from the change in blackbody emission temperature that is required to balance a 1 watt per square meter increase in heat input, using the Stefan-Boltzmann Law.  Assuming solar intensity is 1366 watts/M^2, and assuming the Earth’s average albedo is ~0.3, the net solar intensity is ~239 watts/M^2, requiring a blackbody temperature of 254.802 K to balance incoming heat.  With 1 watt/M^2 more input, the required blackbody emission temperature increases to 255.069, so the expected climate sensitivity is (255.069 – 254.802) = 0.267 degree increase for one watt per square meter of added heat.
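For readers who want to check this arithmetic, here is a minimal Python sketch (not the code behind the model; the solar constant and albedo are simply the values quoted above):

```python
# Minimal sketch of the no-feedback sensitivity calculation above.
# Assumed values: S = 1366 W/m^2 and albedo = 0.3, as quoted in the text.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def blackbody_temp(absorbed_flux):
    """Emission temperature (K) that balances a given absorbed flux (W/m^2)."""
    return (absorbed_flux / SIGMA) ** 0.25

S, albedo = 1366.0, 0.3
F = (S / 4.0) * (1.0 - albedo)      # ~239 W/m^2, averaged over the sphere

T0 = blackbody_temp(F)              # ~254.8 K
T1 = blackbody_temp(F + 1.0)        # ~255.07 K
print(f"no-feedback sensitivity ~ {T1 - T0:.3f} C per W/m^2")  # ~0.267
```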

But solar intensity and the blackbody emission temperature of the earth both change with latitude, yielding a higher emission temperature and much greater heat loss near the equator than near the poles.  The infrared heat loss to space goes as the fourth power of the emission temperature, so the net climate sensitivity will depend on the T^4 weighted contributions from all areas of the Earth.  Feedbacks within the climate system, both positive and negative, including different amounts and types of clouds, water vapor, changes in albedo, and potentially many others, add much uncertainty.
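A short sketch makes the latitude point concrete: the local no-feedback sensitivity is dT/dF = 1/(4*sigma*T^3), so it falls as the local emission temperature rises. The three temperatures below are illustrative assumptions, not measured values:

```python
# Illustrative only: the local no-feedback sensitivity dT/dF = 1/(4*sigma*T^3)
# at three assumed emission temperatures (roughly polar, global mean, tropical).
SIGMA = 5.67e-8

for T in (230.0, 255.0, 270.0):
    print(f"T = {T:.0f} K -> {1.0 / (4.0 * SIGMA * T**3):.3f} C per W/m^2")
# prints ~0.362, ~0.266, ~0.224: cold regions respond more per W/m^2
```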

Measuring Earth’s Sensitivity

The only way to accurately determine the Earth’s climate sensitivity is with data.

Bill Illis produced an outstanding guest post on WUWT November 25, 2008, where he presented the results of a simple curve-fit model of the Earth’s average surface temperature based on only three parameters:  1) the Atlantic Multidecadal Oscillation index (AMO), 2) values of the Nino 3.4 ENSO index, and 3) the log of the ratio of atmospheric CO2 concentration to the starting CO2 concentration.  Bill showed that the best-estimate linear fit of these parameters to the global mean temperature data could account for a large majority of the observed temperature variation from 1871 to 2008.  He also showed that the AMO index and the Nino 3.4 index contributed little to the overall increase in temperature during that period, but did account for much of the variation around the overall temperature trend.  The overall trend correlated well with the log of the CO2 ratio.  In other words, the AMO and Nino 3.4 indexes could hindcast much of the observed variation around the overall trend, and that overall trend could be accurately hindcast by the log of the CO2 ratio.

There are a few implicit assumptions in Bill’s model.  First, the model assumes that all historical warming can be attributed to radiative forcing.  This is a worst case scenario, since other potential causes for warming are not even considered (long term solar effects, long term natural climate variability, etc.).  The climate sensitivity calculated by the model would be lowered if other causes account for some of the measured warming.

Second, the model assumes the global average temperature changes linearly with radiative forcing.  While this is almost certainly not correct for Earth’s climate, it is probably not a bad approximation over a relatively small range of temperatures and total forcings.  That is, a change of a few watts per square meter is small compared to the average solar flux reaching the Earth, and a change of a few degrees in average temperature is small compared to Earth’s average emissive (blackbody) temperature.  So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.

Third, the model assumes that the combined WMGG forcings can be accurately represented by a constant multiplied by the log of the ratio of CO2 to starting CO2.  While this may be a reasonable approximation for some gases, like N2O and methane (at least until ~1995), it is not a good approximation for others, like chloro-fluorocarbons, which did not begin contributing significantly to radiative forcing until after 1950, and which are present in the atmosphere at such low concentration that they absorb linearly (rather than logarithmically) with concentration.  In addition, chloro-fluorocarbon concentrations will decrease in the future rather than increase, since most long lived CFC’s are no longer produced (due to the Montreal Protocol), and what is already in the atmosphere is slowly degrading.

To make Bill’s model more physically accurate, I made the following changes:

1.  Each of the major WMGG’s is separated and treated individually: CO2, N2O, methane, chloro-fluorocarbons, and tropospheric ozone.

2.  Concentrations of each of the above gases are converted to net forcings, using the IPCC’s radiative forcing equations for CO2, methane, N2O, and CFC’s (6), and an estimated radiative contribution from ozone increases (see the sketch below).

3.  The change in solar intensity with the solar cycle is included as a separate forcing, assuming that measured intensity variations for the last three solar cycles (about 1 watt per square meter variation over a base of 1365 watts per square meter) are representative of earlier solar cycles, and assuming that sunspot number can be used to estimate how solar intensity varied in the past.

4.  The grand total forcing (including the solar cycle contribution), a 2-year trailing average of the AMO index, and the Nino 3.4 index are correlated against the Hadley Centre HadCRUT3v global average temperature data.

This yields a curve fit model which can be used to estimate future warming by setting the Nino 3.4 and AMO indexes to zero (close to their historical averages) and estimating future changes in atmospheric concentrations for each of the infrared absorbing gases.
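For reference, here is a sketch of the forcing calculation in step 2, using the IPCC’s simplified expressions (TAR Table 6.2). The pre-industrial baseline concentrations below are common defaults and are my assumptions, not necessarily the values used in the model:

```python
import math

# Sketch of step 2 using the IPCC simplified forcing expressions (TAR Table 6.2).
# CO2 in ppm; CH4 and N2O in ppb. The baselines (280 ppm, 700 ppb, 270 ppb) are
# common pre-industrial defaults and are assumptions here.
def f_overlap(M, N):
    """CH4/N2O band-overlap term (M, N in ppb)."""
    return 0.47 * math.log(1 + 2.01e-5 * (M * N) ** 0.75
                             + 5.31e-15 * M * (M * N) ** 1.52)

def forcing_co2(C, C0=280.0):
    return 5.35 * math.log(C / C0)                      # W/m^2

def forcing_ch4(M, M0=700.0, N0=270.0):
    return (0.036 * (math.sqrt(M) - math.sqrt(M0))
            - (f_overlap(M, N0) - f_overlap(M0, N0)))   # W/m^2

def forcing_n2o(N, N0=270.0, M0=700.0):
    return (0.12 * (math.sqrt(N) - math.sqrt(N0))
            - (f_overlap(M0, N) - f_overlap(M0, N0)))   # W/m^2

print(f"{forcing_co2(560.0):.2f} W/m^2 for a CO2 doubling")  # ~3.71
```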

Figure 1 Model results with temperature projection to 2060

To find the best estimate of lag in the climate (mainly from ocean heat accumulation), the model constants were calculated for different trailing averages of the total radiative forcing.  The best fit to the data (highest R^2) was for a two-year trailing average of the total radiative forcing, which gave a net climate sensitivity of 0.270 (+/- 0.021) C per watt/M^2 (+/- 2 sigma).  All longer trailing average periods yielded somewhat lower R^2 values and produced somewhat higher estimates of climate sensitivity.  A 5-year trailing average yields a sensitivity of 0.277 (+/- 0.021) C per watt/M^2, a 10-year trailing average yields a sensitivity of 0.289 (+/- 0.022) C per watt/M^2, and a 20-year trailing average yields a sensitivity of 0.318 (+/- 0.025) C per watt/M^2, ~18% higher than the two-year trailing average.  As discussed above, very long lags (e.g., 10-20+ years) appear inconsistent with recent trends in ocean heat content and average surface temperatures.
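The fitting step can be sketched as an ordinary least squares regression. This is a minimal illustration, not the author’s actual spreadsheet; it assumes annual NumPy arrays temp (HadCRUT3v anomaly), forcing (total W/m^2, including the solar-cycle term), amo, and nino34:

```python
import numpy as np

# Minimal OLS sketch of the fitting step (not the author's actual code).
# Assumes annual arrays of equal length: temp, forcing, amo, nino34.
def trailing_avg(x, n):
    """n-year trailing average; drops the first n-1 years."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

def fit_model(temp, forcing, amo, nino34, lag):
    """Regress temperature on lagged forcing, AMO, and Nino 3.4 (lag >= 2)."""
    F = trailing_avg(forcing, lag)
    k = len(forcing) - len(F)               # years lost to the trailing window
    X = np.column_stack([F, trailing_avg(amo, 2)[-len(F):],
                         nino34[k:], np.ones(len(F))])
    beta, *_ = np.linalg.lstsq(X, temp[k:], rcond=None)
    resid = temp[k:] - X @ beta
    r2 = 1.0 - resid.var() / temp[k:].var()
    return beta, r2                         # beta[0] ~ sensitivity, C per W/m^2

# scan the lag as described above, e.g.:
# for lag in (2, 5, 10, 20):
#     print(lag, fit_model(temp, forcing, amo, nino34, lag)[1])
```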

Oscillation in the radiative forcing curve (the green curve in Figure 1) is due to solar intensity variation over the sunspot cycle.  The assumed total variation in solar intensity at the top of the atmosphere is 1 watt per square meter (approximately the average variation measured over the last three solar cycles) for a change in sunspot number of 140.  Assuming a minimum solar intensity of 1365 watts per square meter and Earth’s albedo at 30%, the average solar intensity over the entire Earth surface at zero sunspots is (1365/4) * 0.7 = 238.875 watts per square meter, while at a sunspot number of 140, the average intensity increases to 239.05 watts per square meter, or an increase of 0.175 watt per square meter.  The expected change in radiative forcing (a “sunspot constant”) is therefore 0.175/140 = 0.00125 watt per square meter per sunspot.  When different values for this constant are tried in the model, the best fit to the data (maximum R^2) is for ~0.0012 watt/M^2 per sunspot, close to the above calculated value of 0.00125 watt/M^2 per sunspot.

Figure 2 Scatter plot of the model versus historical temperatures
Figure 3 Comparison of the model’s temperature projection under ‘Business as Usual’ with the IPCC projection of ~0.2C per decade, consistent with GCM projections.

Regional Sensitivities

Amplification of sensitivity is the ratio of the actual climate sensitivity to the sensitivity expected for a blackbody emitter.  The sensitivity from the model is 0.270 C per watt/M^2, while the expected blackbody sensitivity is 0.267 C per watt/M^2, so the amplification is 1.011.  An amplification very close to 1 suggests that all the negative and positive feedbacks within the climate system are roughly balanced, and that the average surface temperature of the Earth increases or decreases approximately as would a blackbody emitter subjected to small variations around the average solar intensity of ~239 watts/M^2 (that is, as a blackbody would vary in temperature around ~255 K).  This does not preclude a range of sensitivities within the climate system that average out to ~0.270 C per watt/M^2; sensitivity may vary with season, latitude, local geography, albedo/land use, weather patterns, and other factors.  The temperature increase due to WMGG’s may have, and indeed should have, significant regional and temporal differences, so the importance of WMGG-driven warming should vary regionally and over time as well.

Credibility of Model Projections

Some may argue that any curve-fit model based on historical data is likely to fail in making accurate predictions, since the conditions that applied during the hindcast period may be significantly different from those in the future.  But if the curve-fit model includes all important variables, then it ought to make reasonable predictions, at least until/unless important new variables are encountered in the future.  Examples of important new climate variables are a major volcanic eruption or a significant change in ocean circulation.  The probability of encountering important new variables increases with the length of the forecast, of course.  So while a curve-fit climate model’s predictions will have considerable uncertainty far in the future (e.g., 100 years or more), forecasts over shorter periods are likely to be more accurate.

To demonstrate this, the model constants were calculated using temperature, WMGG forcing, AMO, and Nino 3.4 data for 1871 to 1971, but then applied to all the 1871 to 2008 data (Figure 4).  The model’s calculated temperatures represent a ‘forecast’ from 1972 through 2008, or 36 years.  Since the model constants came only from pre-1972 data, the model has no ‘knowledge’ of the temperature history after 1971, and the 1972 to 2008 forecast is a legitimate test of the model’s performance.  The model’s 1972 to 2008 forecast performance is reasonably good, with very similar deviations between the model and the historical temperature record in the hindcast and forecast periods.

Figure 4 Model temperature forecast for 1972 through 2008, with model constants based on 1871 to 1971. The model has no “knowledge” of the temperature record after 1971.

The model fit to the temperature data in the forecast period is no worse than in the hindcast period.  The climate sensitivity calculated using only 1871 to 1971 data is similar to that calculated using the entire data set: 0.255 C per watt/M^2 versus 0.270 C per watt/M^2.  A model forecast starting in 2009 will not be perfect, but the 1972 to 2008 forecast performance suggests that it should be reasonably close to correct over the next 36+ years.
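In outline, the test looks like this (a sketch reusing fit_model() from the earlier snippet; the index arithmetic assumes one value per year starting in 1871):

```python
# Sketch of the out-of-sample test, reusing fit_model() from the earlier
# snippet and assuming the same annual arrays starting in 1871.
split = 1971 - 1871 + 1   # number of training years (1871..1971)

# beta, r2_train = fit_model(temp[:split], forcing[:split],
#                            amo[:split], nino34[:split], lag=2)
# To score the 'forecast', apply the frozen pre-1972 constants (beta) to the
# full 1871-2008 record and compare residual scatter before and after 1971;
# similar scatter in both periods is the result described in the text.
```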

Emissions Scenarios

The model projections in Figure 1 (2009 to 2060) are based on the following assumptions:

a) The year on year increase in CO2 concentration in the atmosphere rises to 2.6 PPM per year by 2015 (or about 25% higher than recent rates of increase), and then remains at 2.6 PPM per year through 2060.  Atmospheric concentration reaches ~518 PPM by 2060.

b) N2O concentration increases in proportion to the increase in CO2.

c) CFC’s decrease by 0.25% per year.  The actual rate of decline ought to be faster than this, but large increases in releases of short-lived refrigerants like R-134a and non-regulated fluorinated compounds may offset a large portion of the decline in regulated CFC’s.

d) The concentration of methane, which has been constant for the last ~7 years at ~1,800 parts per billion, increases by 10 PPB per year, reaching ~2,370 PPB by 2060.

e) Tropospheric ozone (which forms in part from volatile organic compounds, VOC’s) increases in proportion to increases in atmospheric CO2.

The above represent pretty much a “business as usual” scenario, with fossil fuel consumption in 2060 more than 70% higher than in 2008, and with no new controls placed on other WMGG’s.  The projected temperature increase from 2008 to 2060 is 0.6834 C, or 0.131 C per decade.  This assumes of course that WMGG’s are responsible for all (or nearly all) the warming since 1871; if a significant amount of the warming since 1871 had other causes, then future warming driven by WMGG’s will be less.
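The CO2 path in assumption a) is easy to verify; here is a sketch, where the ~385 PPM concentration and ~2.0 PPM/year growth rate assumed for 2008 are my estimates, not figures from the text:

```python
# Sketch of the 'business as usual' CO2 path in a). The ~385 ppm concentration
# and ~2.0 ppm/yr growth rate assumed for 2008 are estimates.
conc, rate = 385.0, 2.0
for year in range(2009, 2061):
    rate = min(2.6, rate + 0.1)   # assumed ramp to 2.6 ppm/yr, then hold
    conc += rate
print(round(conc))  # ~519 ppm by 2060, close to the ~518 PPM quoted above
```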

Separation of the different contributions to radiative forcing allows projections of future average temperatures under different scenarios for reductions in the growth of fossil fuel usage, with separate efforts to control emissions of methane, N2O, and VOC’s (leading to tropospheric ozone).

Figure 5 Reduced warming via controls on non-CO2 emissions and gradually lower CO2 emissions growth.

One such scenario can be called the “Efficient Controls” scenario.  The year on year increase in CO2 in the atmosphere rises to 2.6 PPM by 2014, and then declines starting in 2015 by 0.5% per year (that is, a 2.6 PPM increase in 2014, 2.587 PPM in 2015, 2.574 PPM in 2016, etc.); methane concentrations are maintained at current levels via controls installed on known sources; CFC concentration falls by 0.5% per year due to new restrictions on currently non-regulated compounds; and N2O and tropospheric ozone increases are proportional to the (somewhat lower) CO2 increases.  These are far from small changes, but they probably could be achieved without great economic cost by shifting most electric power production to nuclear (or non-fossil alternatives where economically viable), and simultaneously taxing CO2 emissions worldwide at an initially low but gradually increasing rate to promote worldwide improvements in energy efficiency.  Under these conditions, the predicted temperature anomaly in 2060 is 0.91 degree (versus 0.34 degree in 2008), or a rise of 0.109 degree per decade.  Atmospheric CO2 would reach ~507 PPM by 2060, and CO2 emissions in 2060 would be about 50% above 2008 emissions.  By comparison, the “business as usual” case produces a projected increase of 0.131 C per decade through 2060, with atmospheric CO2 reaching ~519 PPM by 2060.  So at (relatively) low cost, warming through 2060 could be reduced by a little over 0.11 C compared to business as usual.

A “Draconian Controls” scenario, with new controls on fluorinated compounds, methane and VOC’s, and with the rate of atmospheric CO2 increase declining by 2% each year, starting in 2015, shows the expected results of a very aggressive worldwide program to control CO2 emissions.  The temperature anomaly in 2060 is projected at 0.8 C, for a rate of temperature rise through 2060 of 0.088 degree per decade, or ~0.11 C lower temperature in 2060 than for the “Efficient Controls” scenario.  Under this scenario, the concentration of CO2 in the atmosphere would reach ~480 PPM by 2060, but would rise only ~25 PPM more between 2060 and 2100.  Total CO2 emissions in 2060 would be ~15% above 2008 emissions, but would have to decline to the 2008 level by 2100.  Whether the potentially large economic costs of draconian emissions reductions are justified by a ~0.11C temperature reduction in 2060 is a political question that should be carefully weighed.
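The concentration arithmetic behind both control scenarios is a simple geometric decay of the annual CO2 increment; a sketch (the ~400 PPM starting concentration for 2014 is an assumed value):

```python
# Sketch of the scenario arithmetic: after 2014 the annual CO2 increment decays
# geometrically, by 0.5%/yr (Efficient) or 2%/yr (Draconian). The ~400 ppm
# starting concentration assumed for 2014 is an estimate.
def co2_2060(start_2014=400.0, decay=0.005):
    conc, rate = start_2014, 2.6
    for year in range(2015, 2061):
        rate *= 1.0 - decay       # increment shrinks each year
        conc += rate
    return conc

print(round(co2_2060(decay=0.005)))  # ~507 ppm: Efficient Controls
print(round(co2_2060(decay=0.02)))   # ~477 ppm: close to the ~480 quoted
```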

Figure 6 Draconian emissions controls may reduce average temperature in 2060 by ~0.21C compared to business as usual.

Conclusions

The model shows that the climate sensitivity to radiative forcing is approximately 0.27 degree per watt/M^2, based on the assumption that radiative forcing from WMGG’s has caused all or nearly all the measured temperature increase since ~1871.  This corresponds to a response of ~1 C for a doubling of CO2 (with other WMGG’s remaining constant).  Much higher climate sensitivities (e.g., 0.5 to >1.0 C per watt/M^2, or 1.85 C to >3.71 C for a doubling of CO2) appear to be inconsistent with the historical record of temperature and measured increases in WMGG’s.
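As a final arithmetic check, the conversion between the two ways of expressing sensitivity uses the IPCC forcing of 5.35*ln(2), about 3.71 watts/M^2, for a CO2 doubling:

```python
import math

# Arithmetic check of the conclusions: convert sensitivity (C per W/m^2) to
# warming per CO2 doubling via the IPCC forcing 5.35*ln(2) ~ 3.71 W/m^2.
dF_2x = 5.35 * math.log(2.0)
for lam in (0.27, 0.5, 1.0):
    print(f"{lam} C per W/m^2 -> {lam * dF_2x:.2f} C per doubling")
# 0.27 -> ~1.0 C; 0.5 -> ~1.85 C; 1.0 -> ~3.71 C, as stated above
```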

Assuming no significant changes in the growth pattern of fossil fuels, and no additional controls on other WMGG’s, the average temperature in 2060 may reach ~0.68C higher than the 2008 average.  Modest steps to control non-CO2 emissions and gradually reduce the rate of increase in the concentration of CO2 in the atmosphere could yield a reduction in WMGG driven warming between 2008 and 2060 of ~15% compared to no action.  A rapid reduction in the rate of growth of atmospheric CO2 would be required to reduce WMGG driven warming between 2008 and 2060 by ~30% compared to no action.

anna v
August 8, 2009 9:02 pm

It is evident that a lot of work and thought is being presented with this entry.
This caught my attention:
Second, the model assumes the global average temperature changes linearly with radiative forcing. While this is almost certainly not correct for Earth’s climate, it is probably not a bad approximation over a relatively small range of temperatures and total forcings. That is, a change of a few watts per square meter is small compared to the average solar flux reaching the Earth, and a change of a few degrees in average temperature is small compared to Earth’s average emissive (blackbody) temperature. So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.
The habit of thinking of the “average” earth makes it easy to forget that responses to “radiative forcings” in this average scenario depend on real temperatures that on a daily basis may change from 0C to over 60C between day and night in some deserts, for example. That is the temperature that the black body emissivity sees, not small at all for T**4 changes, with all the variations of the earth’s surface.

AnonyMoose
August 8, 2009 9:30 pm

“… based on the assumption that radiative forcing from WMGG’s has caused all or nearly all the measured temperature increase since ~1871.”
So WMGG’s caused the warming from 1871 to 1945, before we started burning petroleum?

Kum Dollison
August 8, 2009 10:13 pm

Atmospheric CO2 was 314.69 ppm (seasonally adjusted) in Mar. 1958.
It’s 387.11 ppm (seasonally adjusted) today.
That’s a 72.42 ppm increase since 1958 (or 51 yrs.).
We know the Logarithmic function of CO2, so, here’s the question:
What was the CO2 ppm of the atmosphere needed in 1871 to make this work?
ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_mlo.txt

Kum Dollison
August 8, 2009 10:20 pm

One more question. If it’s, simply, increasing CO2 dragging us out of the LIA, what put us in the LIA? And was it CO2 that put us into the Medieval Warm Period? What about the Holocene Optimum?
I’m confused.

Richard111
August 8, 2009 10:42 pm

George E. Smith posted a simplified calculation method for obtaining an approximate surface emission in watts per square metre which showed how T**4 changed the outgoing long wave radiation.
Anyone keep a copy please? (I wiped mine accidentally and now can’t remember the formulae :()

August 8, 2009 10:57 pm

“For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the little ice age and the late 20th century, which could account for a significant fraction of the observed warming”
She doesn’t believe that anymore, neither does anybody else [except some climatologists].

August 8, 2009 11:04 pm

For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter
Using your own numbers this gives an increase of 2*(239/1366)*0.267 = 0.09 degrees, hardly a “significant fraction of the observed warming”.

Richard111
August 8, 2009 11:45 pm

I am sure there was a post earlier today about the Sun and 30 days spotless!!!
The “Watts effect” was mentioned. 🙂
Is this the reason the post was removed?
http://sohowww.nascom.nasa.gov/data/realtime/mdi_mag/512/

Editor
August 8, 2009 11:55 pm

Maybe I am misunderstanding something, but Bill Illis’ model looks very suspect.
He created a “simple curve-fit model of the Earth’s average surface temperature based on only three parameters”: 1) AMO, 2) ENSO, and 3) CO2. He adjusted parameters until he got a fit to “observed temperature variation” [isn’t that what “simple curve-fit model” means?] and – surprise surprise – found that factors 1 and 2 contributed to fluctuations over the period, while 3 provided all the underlying increase.
I would argue that no other result is possible. AMO and ENSO are fluctuating phenomena so can only provide fluctuations and by their very nature cannot provide any long-term trend. CO2 concentration, however, increased monotonically over the whole period, so is the only factor capable of providing a long-term trend, and by its very nature cannot provide fluctuations.
Splitting the time period, and curve-fitting from 1871 to 1971 then comparing “predictions” with post-1971 sounds impressive, but all it means is that if factors which actually caused the overall trend from 1871 to 1971 remained in place from 1971 to 2008, then the model would match neatly. The actual factors could be the sun, clouds, shipping volumes, or world use of soap. The model would still give a good match.

stumpy
August 9, 2009 12:31 am

Thanks for an excellent and very balanced post. The IPCC could learn something from this method of observation-based prediction, which is likely far more accurate.
My only comment would be this…
In 1850 we were in the middle of a cold period; what caused this is another issue, but I doubt it was CO2. Let’s assume it was due to GCR increasing low level clouds which led to reduced SST (which if correct would throw out the whole model anyway). My question is this: if the earth was out of equilibrium at this time due to an external forcing, the earth’s sensitivity (which I believe is dynamic, i.e. either positive or negative depending on the energy imbalance) would have to be increased to enable the system to return to the “normal” climate state (if there is such a thing). I.e. a cooler sea meant less cloud cover and more incoming solar energy to warm the sea. I believe once the system reaches a certain point, the sensitivity will change, i.e. a warmer sea will cause increased low level cloud reducing incoming solar energy. This helps to maintain the earth’s climate around an ideal point.
I think for this reason assuming the sensitivity will remain constant over long periods of time is unlikely to be correct and may throw out hindcasts and predictions.
In order to confirm this theory we would need long term satellite data of cloud cover %, but unfortunately this is unavailable. However, Lindzen’s recent paper attempts to address this using recent data and yields a lower sensitivity; this could imply that the climate sensitivity is reducing as the sea / earth warms.
I am wondering, what happens if you derive your climate sensitivity over say 3 distinct periods, does it reduce with time? In your post you mentioned that the 1980 – 2008 data yields a lower value. If a trend was apparent, this could better apportion the role of the various forcings and enable more reliable forecasts assuming a linear relationship of sensitivity with global temperature.

August 9, 2009 12:41 am

For ease of referral, the original model by Bill Illis was presented here:
http://wattsupwiththat.com/2008/11/25/adjusting-temperatures-for-the-enso-and-the-amo/
That post generated ~300 comments, most of them very perceptive (or at least interesting). Many of the comments dealt with weaknesses in the assumptions of the model, not the least of which are the artificial adjustments in the Hadley Centre historical temperature data. The limited thermal absorption capacity of CO2 is another weakness in the assumptions, and a corollary to that is the assumption that CO2 is the likely “unknown” forcing agent.
Fitzpatrick’s adjustments to Illis’ original model use these questionable assumptions, but he states (more or less) that they represent a “worst case” scenario. That is, if one provisionally accepts all the assumptions (that point to CO2 as the primary global warming forcing agent), then the worst case is that a doubling of CO2 concentration will force a global temperature rise of ~1C. If one does not accept all the assumptions, then the sensitivity of the global climate to CO2 is something less than that.
That is my (condensed) interpretation, at any rate.

michel
August 9, 2009 12:41 am

Congratulations both to WUWT and the writer. It’s in the great tradition of the informal publication of amateur (i.e. outside the mainstream of academic research) science resulting from people seriously trying to come to grips with the core of a problem, rather than publishing articles in the tenure obstacle course. Whether it’s right? Well, a different issue.
It also reveals, along with the use of R on CA, what a revolution has occurred due to the availability of what by previous generations’ standards were supercomputers on our desktops. When the study of global warming started, to do this sort of work on the desktop would have been inconceivable.
Is the model source code available someplace for other interested tinkerers? Hopefully the author will not deliver a freezing cold shower suddenly by revealing that it is written in Excel! But even if it is, it can always be rewritten in something more sensible.


tallbloke
August 9, 2009 12:52 am

What would the climate sensitivity to CO2 look like if the solar contribution to the warming was, say, 75%? Simply 1/4 of your figure, or is it more complicated?
Thanks

Mac
August 9, 2009 1:02 am

What is clear is that both these curve-fit models, with and without decadal events, underestimate the rate of increase and decrease in temperatures, most evidently from 1910 to 1940. There is something else missing from the overall picture: a natural forcing.

Larry Poe
August 9, 2009 1:04 am

“A new paper by Lindzen and Choi (described at WUWT on August 23, 2009)”
August 23, 2009?

John Peter
August 9, 2009 1:10 am

Well, it would seem to me as an interested observer that the infamous Mann Hockey Stick is still alive and well despite the hard work produced by Steve McIntyre. This whole document seems to implicitly assume a steady global temperature until man began to emit CO2 in increasing volumes through industrialisation.

August 9, 2009 1:10 am

I take issue with this statement:
“it is impossible to refute that greenhouse gases should lead to increased global average temperatures.”
For the rationale see http://climatechange1.wordpress.com/2009/04/24/the-gaping-hole-in-greenhouse-theory/
Before we attribute a change in surface temperature to a process that can not be demonstrated, or evidence found of its existence, we should investigate the simple stuff, the coming and going of clouds, and in particular ice clouds that are highly reflective of incoming short wave radiation.
There is a strong flux in the temperature of the atmosphere above 200hPa (including therefore the top third of the troposphere and the stratosphere) associated with seasonal, biennial, decadal and centennial change in its ozone content. As the temperature in this layer rises (due to increasing ozone content) so does the temperature of the surface of the sea.
That is the sort of observation that should excite our attention if we wish to explain the ups and downs of surface temperature over short and long time scales.
Manifestly, we do not understand the warming and cooling of the sea. Until we do, we should steer away from attributing change to the activities of man. To say that we can’t account for change and therefore it must be due to man is just stupid. The fact is, we don’t understand the simplest processes that bring about the change that we observe on a seasonal and inter-annual basis.
More about the symmetry between the temperature of the tropical stratosphere/upper troposphere and sea surface temperature at: http://climatechange1.wordpress.com/2009/06/29/the-southern-oscillation-the-young-persons-guide-to-climate-change/

Neven
August 9, 2009 1:26 am

This looks very interesting. Have you considered writing a paper and submitting it for peer review?

Stevo
August 9, 2009 2:22 am

“2) Basic physics shows that increasing infrared absorbing gases in the atmosphere like CO2, methane, N2O, ozone, and chloro-fluorocarbons, inhibits the escape of infrared radiation to space, and so does provide a positive forcing.”
Dear oh dear. And after I had spent all that time last week explaining why that mechanism was wrong! Does nobody listen? 🙂
(I expected no different, of course. Everybody gets it wrong. We must take this sort of thing in good humour, but Al Gore’s movie has a lot to answer for…)
With careful parsing, the statement can be interpreted in such a way that it is technically true, but I’m not totally convinced that was deliberate, and it’s terribly misleading. It doesn’t affect the rest of the post at all, but even so it would be nice if we could get the “Basic physics” right.

Rhys Jaggar
August 9, 2009 2:32 am

What’s actually CRITICAL for global policy right now is not the carbon dioxide-only projection but the REAL projection for the next 25 years.
There are many predicting:
i. Decreased solar output.
ii. At a time of cool phase PDO and AMO also declining.
If that’s the case, would your model predict little if any warming for the next 25 years?
Because if so, I’d say it was a time window to really understand the total interplay of climate forces, whilst generating new CCS technologies and trialling them in ways which don’t bankrupt the economic system.
At the same time, switching all homes to energy-neutral running, both through new stock and retrofitting old stock, would be important.
Finally, you’d force industry to retrofit CCS and aerosol/particulate control technologies to existing power stations by, say 2030.
What’s clear though is that all this Armageddon doomsday stuff without showing you understand the system won’t work.
This paper is a welcome addition to furthering the COMMUNICATION of understanding beyond Goreisms.
Gore needs to ground his Learjet. And to do that, the first thing to do is to impose 50 times the amount of green taxes on corporate jets that you do on Joe Schmo’s car. Because the high rollers aren’t going to continue their giddy life of pleasure whilst imposing sanctions on the rest of us………..
And you make it a condition of working in Cleantech PE funds that you don’t use Learjets. At all.
That’d make Gore practice what he preaches, eh?

August 9, 2009 2:46 am

Good Post – thanks.
I need to read it again but it seems to sum up pretty much where I stand.
A couple of points on the solar effect, though. Steve F writes
For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the little ice age and the late 20th century, which could account for a significant fraction of the observed warming.
1. It should be made clear that the 2 w/m2 increase is the increase in solar intensity at the “top of the atmosphere” and not the average increase received by the earth’s surface. Due to albedo and the earth’s geometry this will be ~0.35 w/m2 (i.e. 2 * 0.25 * 0.7). There’s lots of confusion here, particularly as the current ghg forcing is reckoned to be ~1.6 w/m2.
2. Leif Svalgaard may be the best person to answer this question. Does Judith Lean still stand by her TSI reconstruction. I know there are a number of other reconstructions, including one by LS himself, which show much less variability. Has a consensus (I tried to avoid this word, but …) been established.

August 9, 2009 2:51 am

Stevo (02:22:38) :
.
.
Dear oh dear. And after I had spent all that time last week explaining why that mechanism was wrong! Does nobody listen? 🙂

Have you a link to your explanation.

DennisA
August 9, 2009 2:59 am

In 2000, Dr Robert E Stevenson, (now deceased), Oceanographer and one-time Secretary General of the International Association for the Physical Sciences of the Oceans (IAPSO), wrote an assessment of Levitus et al (2000) on global heat budget.
http://www.21stcenturysciencetech.com/articles/ocean.html
Yes, the Ocean Has Warmed; No, It’s Not “Global Warming”
This is a small extract:
“How the Oceans Get Warm
Warming the ocean is not a simple matter, not like heating a small glass of water. The first thing to remember is that the ocean is not warmed by the overlying air.
Let’s begin with radiant energy from two sources: sunlight, and infrared radiation, the latter emitted from the “greenhouse” gases (water vapor, carbon dioxide, methane, and various others) in the lower atmosphere. Sunlight penetrates the water surface readily, and directly heats the ocean up to a certain depth. Around 3 percent of the radiation from the Sun reaches a depth of about 100 meters.
The top layer of the ocean to that depth warms up easily under sunlight. Below 100 meters, however, little radiant energy remains. The ocean becomes progressively darker and colder as the depth increases.
The infrared radiation penetrates but a few millimeters into the ocean. This means that the greenhouse radiation from the atmosphere affects only the top few millimeters of the ocean. Water just a few centimeters deep receives none of the direct effect of the infrared thermal energy from the atmosphere! Further, it is in those top few millimeters in which evaporation takes places. So whatever infrared energy may reach the ocean as a result of the greenhouse effect is soon dissipated.
The concept proposed in some predictive models is that any anomalous heat in the mixed layer of the ocean (the upper 100 meters) might be lost to the deep ocean. It is clear that solar-related variations in mixed-layer temperatures penetrate to between 80 to 160 meters, the average depth of the main pycnocline (density discontinuity) in the global ocean. Below these depths, temperature fluctuations become uncorrelated with solar signals, deeper penetration being restrained by the stratified barrier of the pycnocline.
Consequently, anomalous heat associated with changing solar irradiance is stored in the upper 100 meters. The heat balance is maintained by heat loss to the atmosphere, not to the deep ocean. “

John
August 9, 2009 3:06 am

The author talks of infra-red absorbing gases such as CO2. My understanding is that CO2, on receiving a quantum of infra-red, instantly radiates in a random direction a quantum of infra-red at the same wavelength and energy. If CO2 absorbs IR it must get warm.
I thought this was one of the main misdirections used in the so-called greenhouse gas theory.
Not so?

Richard
August 9, 2009 3:14 am

Steve Fitzpatrick: The model shows that the climate sensitivity to radiative forcing is approximately 0.27 degree per watt/M^2, BASED ON THE ASSUMPTION THAT RADIATIVE FORCING FROM WMGG’S HAS CAUSED ALL OR NEARLY ALL THE MEASURED TEMPERATURE INCREASE SINCE ~1871.
The question I would like to ask is: what would be the climate sensitivity if the radiative forcing from WMGG’s caused only 50% of the warming, as this is the approximate position of the IPCC? What would it be if it caused only 25% of the warming, and if it caused only 10%?
In each of the above scenarios what would be the average temperature in 2060 compared to the 2008 average? (And I presume this assumes that the average solar intensity and the Earth’s average albedo do not change?)

Barry R.
August 9, 2009 3:18 am

A couple of things common to most models:
1) This all assumes that the concentration of greenhouse gases is the same over every part of the planet. Once you state that assumption it becomes obvious that it isn’t going to be completely true since there are both sources and sinks for the emissions, especially for CO2. How big are the variations? I don’t know. It could be that they are insignificant compared to the overall ratio. However, I suspect that you will find that concentrations of any man-made greenhouse gas will be highest in the likely source areas–Northern hemisphere over land primarily, and lower in probable sink areas like forested tropical areas and over tropical oceans. What does that do to the overall impact? It would be interesting to model that.
2) This all assumes that the measured temperatures are reasonably accurate representations of overall temperatures for the planet. There are a lot of reasons to doubt that. Temperatures tend to get measured in areas where it’s convenient for people to measure them. That often means in cities or near airports. However, you have to be careful about measurements from rural areas too. How many of the sensors are close to hog or cattle confinement operations? Both are major producers of Methane, CO2, Ammonia, and Hydrogen Sulfide. If any of those gases have an impact on temperature they would have their greatest impact near the source. Anthony’s surface station project should probably look at how close rural temperature sensors are to confinement operations.
3) This all assumes that there will be no biological response to increased CO2. It’s more likely that after a lag of a few decades there will be biological shifts that favor plants (especially microscopic ones) capable of exploiting higher CO2 levels, reducing or at least partially balancing increased emissions.

August 9, 2009 3:19 am

FWIW, use of the outdated Lean 2000 paper should be avoided, as Leif stated. Also a minor typo: “described at WUWT on August 23, 2009”.

M White
August 9, 2009 3:22 am

“World temperatures are set to rise much faster than expected as a result of climate change over the next ten years, according to meteorologists.”
This is how the people in power see the earth’s climate sensitivity. Unfortunately they seem to share a lot in common with an end-of-the-world cult; the day after the day of judgement, another future date will be picked.
http://www.telegraph.co.uk/earth/earthnews/5925523/World-temperatures-set-for-record-highs.html
“a new study by Nasa said the warming effect of greenhouse gases has been masked since 1998 because of a downward phase in the cycles of the sun that means there is less incoming sunlight and the El Nino weather pattern causing a cooling period over the Pacific.”
“The new study adds the effect of El Nino, which is entering a new warm phase and of the impact of the solar cycle.”
“Gareth Jones, a climate research scientist at the Met Office, said the effect of global warming is unlikely to be masked by shorter term weather patterns in the future. He said that 50 per cent of the 10 years after 2011 will be warmer than 1998. After that any year cooler than 1998 will be considered unusual.”

tallbloke
August 9, 2009 3:40 am

Leif Svalgaard (23:04:09) :
For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter
Using your own numbers this gives an increase of 2*(239/1366)*0.267 = 0.09 degrees, hardly a “significant fraction of the observed warming”.
Blimey, Groundhog day again.

Allan M R MacRae
August 9, 2009 3:43 am

So according to the above, the Worst Case Sensitivity for a doubling of CO2 is ~1.0 degree C.
Other analyses, and the current cooling, suggest a lower figure, between 0.0 and 0.3 C; either way, so low as to be inconsequential.
I accept 0.0 to 0.3C.
I accept inconsequential.

Curiousgeorge
August 9, 2009 3:53 am

What’s really important with all this is not the absolute precision of any estimate of warming or cooling or sea levels, etc.
What is important is whether the majority of people in major countries (and therefore their political leadership ) believe one position or another. Those beliefs will drive political, economic, demographic and military decisions and actions regardless of any scientific pronouncements that contradict those beliefs. Some countries may decide that their survival hinges on preparations for repelling a perceived political, economic, demographic or military threat directly related to a belief in global warming and institute policies and actions that exacerbate tensions that already exist either internally or externally. Those actions inevitably result in other countries developing countermeasures to the above to ensure their own survival and prosperity. It quickly becomes a sort of arms race, in which the actual behavior of the climate over time is irrelevant. We have seen the beginnings of this in the recent disputes over Arctic oil and gas resources, as well as other natural resources around the world.
Perception is everything, and if the future is perceived as a zero sum game then we are all in a lot more trouble than anything that could be brought about by a few degrees of climate change.

tallbloke
August 9, 2009 3:57 am

DennisA (02:59:26) :
In 2000, Dr Robert E Stevenson, (now deceased), Oceanographer and one-time Secretary General of the International Association for the Physical Sciences of the Oceans (IAPSO), wrote an assessment of Levitus et al (2000) on global heat budget.
http://www.21stcenturysciencetech.com/articles/ocean.html
It is clear that solar-related variations in mixed-layer temperatures penetrate to between 80 to 160 meters, the average depth of the main pycnocline (density discontinuity) in the global ocean. Below these depths, temperature fluctuations become uncorrelated with solar signals, deeper penetration being restrained by the stratified barrier of the pycnocline.
Consequently, anomalous heat associated with changing solar irradiance is stored in the upper 100 meters.

While I agree with Stevenson on his analysis of the non-heating of the ocean by greenhouse gases, I take issue with him on this part.
My calculations (confirmed by Leif Svalgaard) show that to account for the thermal expansion component of sea level rise between 1993 and 2003, the oceans must have retained around 14×10^23J from the sun over and above the energy they receive and retransmit. This is equivalent to a 4W/m^2 forcing and must be solar in origin, plus cloud modulation. There was less cloud in the tropics during the period in question according to ISCCP data.
The global rise in SST over the same period was around 0.3C. The falloff of temperature to the thermocline is approximately linear below the mixed surface layer and this is consistent with an average increase in the temperature of 0.15C for the top 700m of ocean.
I asked James Annan, an oceanologist, how warmth got mixed down to those depths, far below the mixing in the top layer performed by waves. He replied that tidal action, and strongly subducting currents at high latitudes taking down warm water arriving from the tropics, explained the deeper mixing.
This is consistent with extra warmth at deeper levels apparently unconnected with solar forcing, but still leaves room for another possibility: that the amount of heat coming through the thin seabed from the earth’s interior may vary over time due to changes in the subcrustal currents within the earth’s mantle. These appear to be connected to variations in Length of Day.

Chris Wright
August 9, 2009 4:07 am

Steve Fitzpatrick:
“There has in fact been significant global warming since the start of the industrial revolution (beginning a little before 1800), concurrent with significant increases in WMGG emissions from human activities.”
This makes no sense at all. The consensus appears to admit that all warming up to about 1970 was natural. There simply wasn’t enough CO2 to have any effect. In fact the Hadley two-graphs ‘proof’ depends on this.
I’m sorry, but anyone who publishes a graph showing the global temperature over the next fifty years is probably deluded. No one knows what the climate will be in 2060. It may be warmer. But it may also be colder. About the only thing we can agree on is that the IPCC projections are wildly exaggerated, probably for political reasons.
Probably the best long-term climate records are provided by the ice cores. They appear to show that CO2 is an effect and not a cause. And over hundreds of millions of years there is essentially no correlation between CO2 and temperature. This strongly suggests that the effect of CO2 on climate is negligible. We were fortunate to have enjoyed a modest warming during the 20th century, but it may not last.
Although I’m sure that the author is correct when he says the IPCC projections are too high, he may have fallen into the same trap as many other modellers. It’s pretty easy to predict what has already happened. The trick is to accurately forecast the future. Due to the chaotic nature of weather and climate, it’s probably impossible to predict beyond a few weeks. The ridiculous Met Office quarterly forecasts are a good example of this.
It will be interesting to see how well that straight green line predicts the global temperature over the coming years. My guess is that it’s probably wrong. Sorry.
Also, congratulations to WUWT for publishing this essentially pro-AGW article. It shows a good sense of balance, something lacking in some other web sites we could name!
Chris

Charlie
August 9, 2009 4:29 am

Some editing comments —
Fig 4 is a duplicate of Fig 3. The other figures are all moved down one. The real Figure 6 is missing.

Bill Illis
August 9, 2009 4:35 am

This is a very good paper.
One of the most important insights is that under the Stefan-Boltzmann equations, the very equations that underpin most of climate science, the surface temperature should only increase 0.27C per watt/metre^2 increase in forcing. Stefan-Boltzmann is actually a logarithmic equation (like the ln(GHG) versus temperature is a logarithmic equation).
Global warming theory for the long-term climate sensitivity uses 0.75C per watt/metre^2, but the point we are at on the Stefan-Boltzmann curve will only result in 0.27C per watt/metre^2. It is also interesting that the climate models also use 0.27C to 0.32C per watt/metre^2 for hindcasts and short-term predictions, but over time the number is increased.
I like the addition of the other GHGs (which I didn’t do); the expected lags in the climate system are not occurring, and I think the conclusions presented by Steve are quite accurate.

August 9, 2009 4:44 am

In the report I read: “With 1 watt/M^2 more input, the required blackbody emission temperature increases to 255.069, so the expected climate sensitivity is (255.069 – 254.802) = 0.267 degree increase for one watt per square meter of added heat.”
I am not quite sure how to interpret this. It seems to assume that one can solve all there is to solve about greenhouse gases by assuming that one need only consider radiation in the way energy moves through the atmosphere. This ignores the effects of conduction, convection and the latent heat of water. I sort of find this simplification, if that is what it is, to be not very believable.

pinkisbrain
August 9, 2009 4:54 am

OK, and what was the CO2 concentration in 1850?
IPCC says 280, others 320 to 345 ppm… big differences!
Nobody knows how long human CO2 emissions will last in the atmosphere. 10y? 50y? 200y?
Why is the CH4 curve flat for some years now?

Jimmy Haigh
August 9, 2009 5:07 am

M White (03:22:28) :
Gareth Jones also said:
“The amount of warming we expect from human impacts is so huge that any natural phenomenon in the future is unlikely to counteract it in the long term”.
We’ll see…

Allen63
August 9, 2009 5:24 am

A very thoughtful article. It makes a plausible case. My comments should not be taken as negative towards the author. Rather, these are thoughts I often have regarding any models I have seen.
As someone above mentioned, the forcing phenomena/mechanisms proposed include cyclic ones and one that increases monotonically, and the fit is to a temperature series that shows an increase; thus, the predicted temperature must increase. This and other models seem to say — it must be CO2 because I can’t think of anything else it could be, given our lack of understanding.
What I question with any model is:
Start point — By chance, good temperature records and good CO2 records begin at a local minimum — so only net increase is shown. Better if a model could go back a couple thousand years and work its way up to the present. In the process showing how it would account for historical heating and cooling in the absence of anthropogenic CO2.
The basic historical temperature data itself — The accuracy of the historical temperature data is suspect due to many factors, and the Hadley data is manipulated. Can one actually believe the Hadley (or GISS, etc.) anomalies are accurate and precise representations of the actual historical global temperature? I honestly don’t think that has been definitively shown.
And, lastly (for today at least), the global land and sea temperatures are measured at sites and in ways that may not accurately indicate the heat accumulation (or lack thereof) over the entire earth land surfaces to a depth and throughout the ocean depths. But, AGW is all about global heat accumulation — for which global temperature is only a proxy.

Tom in Florida
August 9, 2009 5:39 am

Has anyone ever tried to model a future where CO2 begins to decrease at the current rate of increase? What would happen if we took this steady decrease down to 0? If everything else remained the same, at what level of CO2 would the “tipping point” of unstoppable cooling be achieved? Would that be a reverse demonstration of whether CO2 drives climate or not? Would that be of any use?

August 9, 2009 5:46 am

“congratulations to WUWT for publishing this essentially pro-AGW article. It shows a good sense of balance, something lacking in some other web sites we could name!
Chris”
Hear, Hear!

ecarreras
August 9, 2009 6:18 am

Is the increase of CO2 due to the burning of “fossil” fuels? When I look at the CO2 data over the last 10 years, the increase in atmospheric CO2 concentrations looks almost linear. When I look at the last 10 years of energy consumption (excluding nuclear, solar, hydroelectric, wind and geothermal), the increases in consumption are anything but linear. How correlated are atmospheric CO2 increases with changes in energy consumption? Can anyone point me to any papers on this subject?

Terry
August 9, 2009 6:27 am

Please check the figures; I believe a number of them are incorrect. Fig 4 appears to be a repeat of Fig 3, and that may be what is throwing the others off.

August 9, 2009 6:36 am

Leif Svalgaard (22:57:52) :
“For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the little ice age and the late 20th century, which could account for a significant fraction of the observed warming”
She doesn’t believe that anymore, neither does anybody else [except some climatologists].

How did you do that? You answered my question before I’d asked it.
John Finn (02:46:54) :
2. Leif Svalgaard may be the best person to answer this question. Does Judith Lean still stand by her TSI reconstruction. I know there are a number of other reconstructions, including one by LS himself, which show much less variability. Has a consensus (I tried to avoid this word, but …) been established.

Jens
August 9, 2009 6:38 am

Leif Svalgaard (23:04:09) :
“For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter. Using your own numbers this gives an increase of 2*(239/1366)*0.267 = 0.09 degrees, hardly a “significant fraction of the observed warming”.”
Please forgive a comment from someone with absolutely no credentials in this field whatsoever, but haven’t you just plugged the 0.267 answer back into the calculation? I took the 0.267 to be the sensitivity of temperature in C per W per m2. For a 2W increase, the temperature would rise 2*0.267 = 0.534C. Or am I doing something fundamentally daft (wouldn’t be the first time)?
I do disagree that a rise of 0.09C is ‘hardly a significant fraction’. If my “one minute Google” research of a 1C rise since the Little Ice Age is correct, then the sun has caused 9% of the temperature rise since then. I call that significant, from my point of view.
Thanks for your attention,
Jens.

eric
August 9, 2009 6:40 am

Steve Fitzgerald,
One of the key problems with an analysis of climate sensitivity from temperature data, such as you have performed, is the estimation of the lag time for the ocean surfaces to heat up. The use of the solar cycle versus temperature data is problematic. It is OK if the system consists of only one heat reservoir, so that a single time constant is appropriate. The problem is that the ocean has a shallow and a deep reservoir with different time constants, and the easily observed smaller reservoir, which has a 2-year time constant, will give you too small an answer for the climate sensitivity.
A more complex model is required to get a correct answer.
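eric's two-reservoir objection can be sketched in a few lines. Below is a minimal Python illustration, not a fitted model: the 40/60 split between shallow and deep boxes and the 2-year and 35-year time constants are assumed round numbers in the spirit of the figures discussed later in this thread.

```python
import numpy as np

# Minimal two-box ocean sketch (illustrative assumptions, not fitted values).
# A step forcing F is shared between a shallow reservoir with a short time
# constant and a deep reservoir with a long one; the surface response is the
# weighted sum of the two exponential approaches to equilibrium.

def two_box_response(t, F=1.0, sensitivity=0.27,
                     frac_shallow=0.4, tau_shallow=2.0, tau_deep=35.0):
    """Warming (C) at time t (years) after a step forcing F (W/m^2)."""
    shallow = frac_shallow * (1.0 - np.exp(-t / tau_shallow))
    deep = (1.0 - frac_shallow) * (1.0 - np.exp(-t / tau_deep))
    return sensitivity * F * (shallow + deep)

print(two_box_response(np.array([2.0, 10.0, 50.0])))
# After a couple of years the fast box has mostly responded while the deep
# box has barely moved, so a fit to a short record sees mainly the fast
# component and can underestimate the equilibrium sensitivity.
```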

Kevin Kilty
August 9, 2009 6:59 am

Barry R. (03:18:11) :
A couple of things common to most models:
1) This all assumes that the concentration of greenhouse gases is the same over every part of the planet. Once you state that assumption it becomes obvious that it isn’t going to be completely true since there are both sources and sinks for the emissions, especially for CO2. How big are the variations? I don’t know. It could be that they are insignificant compared to the overall ratio. However, I suspect that you will find that concentrations of any man-made greenhouse gas will be highest in the likely source areas–Northern hemisphere over land primarily, and lower in probable sink areas like forested tropical areas and over tropical oceans. What does that do to the overall impact? It would be interesting to model that.

NASA mid-troposphere measurements (worldwide) in July are roughly 375 ppm +/- 10 ppm, with the highest values over western North America, the western Atlantic basin adjacent to North America, and the Middle East. So exactly the regions you expected.
Chris Wright (04:07:24) :
Also, congratulations to WUWT for publishing this essentially pro-AGW article. It shows a good sense of balance, something lacking in some other web sites we could name!

Congratulations to this site for welcoming all sorts of opinions and thinking, but I don’t see this as necessarily pro-AGW. The real issue, it seems to me, is not whether CO2 or temperature has increased over the past 50 years (we can see the measurements ourselves), but rather what these observations mean for the future. In this case climate sensitivity is very important, and people who are quite alarmed by the situation see a sensitivity above 0.5C/(W/m^2), while this result places the value at half that. Much less alarming.
By the way, we can arrive at roughly the same value of sensitivity in three more ways. Set W = e(sigma)T^4 (the Stefan-Boltzmann equation), differentiate T with respect to W, and the result is the sensitivity. If one plugs in e = 0.98, sigma = 5.67×10^(-8), and T as 288K, then one gets 0.25. If one takes the estimated cooling of Earth from Pinatubo and the estimated insolation change, one gets about 0.5K/2.7(W/m^2) = 0.19. If one takes the mean Earth temperature change over the glacial cycle (10C or so) and divides by the change in insolation (50 W/m^2), then one finds 0.2 as the implied sensitivity. Of course these are simplified “models” and would be ridiculed in debate… why use a simple idea when a complicated one will do as well? (I jest).
The issue is more complex, of course, with people worrying about the effect of all the feedback influences. These feedbacks have all sorts of varying time scales, and perhaps regional influences (which makes me wonder about the usefulness of a mean Earth temperature in the first place), and this is what makes the issue interesting.
Since longwave radiation depends on temperature to the fourth power, the mean temperature in our radiation equations should more meaningfully be the fourth root of the mean of temperature to the fourth power. Does anyone know if someone has made such a calculation?
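Kevin Kilty's three back-of-the-envelope estimates, and his closing question about averaging T^4, are easy to check numerically. A minimal sketch follows; note that the derivative evaluates to ~0.19 with his surface inputs (e = 0.98, T = 288 K) and ~0.27 at the 255 K effective radiating temperature, so his quoted 0.25 lies between the two. The three sample temperatures in the averaging demo are assumed values for illustration only.

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def sensitivity(T, e=1.0):
    """dT/dW from W = e*sigma*T^4, i.e. 1/(4*e*sigma*T^3), in K per W/m^2."""
    return 1.0 / (4.0 * e * SIGMA * T**3)

print(sensitivity(288.0, e=0.98))  # ~0.19, surface temperature inputs
print(sensitivity(255.0))          # ~0.27, effective radiating temperature
print(0.5 / 2.7)                   # ~0.19, the Pinatubo estimate
print(10.0 / 50.0)                 # 0.20, the glacial-cycle estimate

# The closing question: a radiatively meaningful mean is the fourth root of
# the mean of T^4, which exceeds the plain arithmetic mean of T.
T = np.array([263.0, 288.0, 313.0])  # assumed cold/mid/hot sample
print(T.mean())                      # 288.0
print(np.mean(T**4) ** 0.25)         # ~290, slightly higher
```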

John G
August 9, 2009 7:04 am

I read this as an attempt at assuming the AGWarmers are right (that the warming of the last century and a half is due to greenhouse gases), then making simplifying assumptions that won’t distort that hypothesis too much and seeing how bad it gets in fifty years. Further, check the numbers under the assumption that we make some civilization-bending efforts to reduce greenhouse gases, and under the assumption that we make some civilization-ending efforts, and see how we do. The answer seems to be: if we do nothing it gets a little warmer (~1C), if we bust our collective butts it gets a little less warm (~0.9C), and if we destroy civilization it will be less warm still (~0.8C). He leaves it up to the politicians to decide which course of action makes the most sense . . . in which case we’ll likely take the civilization-ending route.

August 9, 2009 7:32 am

Minor typo alert, beginning of third paragraph: “his” should be “this”:
Many people, including his author, do not believe the large temperature increases (up to 5+ C for doubling of CO2) projected by GCM’s are credible.

August 9, 2009 7:35 am

The map of the globe at the head of this post is too important to ignore.
One must understand the geographical aspects of climatology if one is to come to grips with how the globe warms and cools. One can then appreciate the real place of long wave radiation, cloud cover and the Southern Oscillation in this fascinating process. Greenhouse theory can then be put in its proper context. It is almost totally irrelevant.
Referring now to the map, notice the low levels of long wave radiation from the three centers of strong convection, namely the Amazon, the Congo, and the Indian Ocean between India and New Guinea. The air above these regions is characterized by de-compressive cooling associated with strong uplift. The amount of long wave radiation emanating from these regions is slight: as slight as that from the coolest parts of the globe. In these locations the air cools via the same de-compressive mechanism that is utilized in a domestic refrigerator. It does not cool by emitting long wave radiation.
What goes up must come down. If the air rises in strong centers of convection it must fall somewhere else. Where the air descends it will warm via compression and in so doing it loses cloud. That descending air emits high levels of long wave radiation. Notice that high levels of radiation are associated with dry cloud free air and low levels are associated with wet cloudy air. Without water vapor, the presence of the so called greenhouse gas has little effect in trapping outgoing long wave energy. To the extent that a ‘greenhouse gas’ is present it will have the effect of reducing cloud cover in relatively cloud free zones. (No amplifier).
Notice the extent of the oceans of the southern hemisphere where outgoing long wave radiation is relatively more intense. ( This is a simple function of the shortage of land mass in the Southern Hemisphere). The atmosphere above these areas is relatively cloud free because the air is descending and warming. This ocean accordingly receives a lot of direct sunlight.
The flux in cloud cover above the southern oceans is the basis of the Southern Oscillation. Cirrus cloud forms on the margins of, and between, the zones of descending air over the southern hemisphere oceans. There is a strong seasonal warming of the entire atmosphere between April and September due to radiation of solar energy by the land masses of the northern hemisphere. This causes a loss of cloud cover globally (about 3%) and a strong loss of cloud in the southern tropics. Superimposed on this seasonal oscillation there is a warming and a cooling of the stratosphere and the upper troposphere down to about 200 hPa, based on a flux in ozone content associated with the relative strength of the polar vortexes. The Arctic vortex is weak, operates only in winter, and fluctuates in its strength on decadal and longer time scales. The resulting flux in stratospheric ozone causes a parallel change in the extent and opacity of high altitude cirrus cloud above 200 hPa. It has long been known that a sudden stratospheric warming is associated with warming of the tropical ocean, and it has become apparent in recent times that this warming is most intense in the southern hemisphere between 20° and 40° of latitude between November and March. Some three or four months following the sudden stratospheric warming, the sea at the equator reflects the intensity and timing of that stratospheric warming. By that time, the stratospheric warming responsible for the sea surface warming is well past.
On long time scales one must look to the forces that determine the concentration of ozone in the stratosphere if one wants to explain surface warming. Chief amongst these is the flux of nitrogen oxides from the mesosphere, a factor that relates directly to solar activity.
The change in the temperature of the stratosphere/upper troposphere is a fascinating area of study. Good data is available from 1948. The Southern stratosphere is the most volatile. It warmed strongly up to 1978 and has cooled since that time. That trend continues. Our globe is gaining cloud in direct relation to the diminishing temperature of the upper troposphere/lower stratosphere where cirrus cloud forms. The warming between 1948 and 1978 was abrupt. The cooling since that time has been slow but relentless. In response, the atmospheric windows that allow solar radiation to reach the surface of the ocean in the southern hemisphere are gradually shrinking in extent.
The forces that determine the character of the Southern Oscillation operate on very long time scales. The Oscillation is constantly changing. A recent study suggested that 70% of the variability in global temperature could be attributed to the Southern Oscillation. On the basis of my knowledge of the temperature of the stratosphere I would guess that this figure is an underestimate.
A mathematician who does not understand the dynamics of climate is in no position to predict anything. His tools are of no value.
Climatology, as we know it today, has nothing to say about the causes of the Southern Oscillation. Until this phenomenon is understood we will be at the mercy of snake oil salesmen, charlatans and cranks.

Richard Sharpe
August 9, 2009 7:45 am

A new paper by Lindzen and Choi (described at WUWT on August 23, 2009)

Do you have something scheduled to drop on August 23?

Pamela Gray
August 9, 2009 7:51 am

There is another assumption that needs to be adjusted to reality: the notion of “well mixed”. GHG’s are not well mixed. Water vapor is not well mixed. Aerosols are not well mixed. And salt spray is the leading contender for aerosols by lengths. Storms, jet streams, and just plain ol’ wind can knock down both GHG’s and aerosols as well as remix them in ever-changing globby concentrations. The rotation away from the Sun can change ozone amounts here and there. Seasons change the globby mix. Ocean conditions change the globby mix. If we were to color each of these components of our atmosphere and then take a video of our planet from the outside looking in, we would find a swirling ball of ribboned colors. Quite pretty, actually. But well mixed it ain’t.

Pamela Gray
August 9, 2009 7:55 am

The other thing I have sticking in my craw has to do with the dynamical nature of models. They have yet to be proven in chaotic systems with multiple variables that are not readily predicted. The proper use of dynamical models is to include a control set of statistical models. Whatever happened to the idea of comparing something new to a gold standard? Did that get nixed from proper scientific research too?

JP
August 9, 2009 8:07 am

If the models work so well, why can’t they render a decent temperature forecast beyond 30-60 days? The GCMs (using the assumption that WMGHG are the driving mechanism for climate) cannot provide a decent forecast of changes in ENSO, the AMO, etc. Neither NASA, NOAA, nor HadCrut have any skill in predicting even regional short-term climatic changes. Almost invariably they come off too warm.
To make matters worse, they are trapping themselves in a corner by shortening the time periods in question. Climate science deals with anomalies that cover hundreds of years, but these people are now in the business of making seasonal variation predictions, and in the process are getting burned. Climate is now defined (according to our experts) as changes from year to year. When they come out wrong (which is quite often), they put it down to just “weather”; when their predictions come out correct, it is AGW or climate change.

August 9, 2009 8:25 am

John Finn (06:36:29) :
Does Judith Lean still stand by her TSI reconstruction.
Lean is continuously updating her reconstruction [as is proper when new data or insight comes along]. Her latest view [which I share and which most other researchers are coming around to] was expressed by her at the TSI meeting in 2008 [SORCE, Santa Fe]: “no long-term variation has been detected – do they occur?”. All reconstructions have been converging to the ‘flat’ version with little or no long-term variation.
It is unfortunate that people [deliberately?] confuse the top of atmosphere and actual insolation at ground level. The TOA value is 1361 [or 1366 if you like that better – it makes no difference]. Lean’s obsolete 2 W/m2 was for this figure, so translated to the ground it would only be 239/1361 times 2 W/m2, corresponding to 0.09 degrees. But even that 0.09 did not occur, as TSI [at TOA] did not change by the 2 W/m2. So, if you want to ascribe the LIA to the Sun, you can’t invoke TSI. Cosmic ray proxies do not show any marked change over that time either.
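The arithmetic at issue between Jens and Leif can be restated in two lines, using only the numbers quoted in this exchange: the 2 W/m^2 is a top-of-atmosphere figure, so it must be scaled down to the globally averaged absorbed flux before the sensitivity is applied.

```python
# The disputed calculation, with the thread's own numbers.
TSI_CHANGE_TOA = 2.0  # W/m^2, Lean's (since-revised) LIA-to-present estimate
SENSITIVITY = 0.267   # K per W/m^2, the article's fitted value

# Jens's reading: apply the sensitivity to the TOA number directly.
print(TSI_CHANGE_TOA * SENSITIVITY)                     # 0.534 K, too large

# Leif's reading: scale the TOA change by the ratio of globally averaged
# absorbed flux (~239 W/m^2) to TOA insolation (1361 W/m^2) first.
print(TSI_CHANGE_TOA * (239.0 / 1361.0) * SENSITIVITY)  # ~0.09 K
```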

timetochooseagain
August 9, 2009 8:40 am

Interesting. Speculative, but interesting.
I once created a simple EBM which could explain the effect of the eruption of Pinatubo with a very low sensitivity (about 0.6 K per CO2 doubling). Maybe I can dig out the Excel file…

jh
August 9, 2009 8:57 am

Kum Dollison (22:13:46) : asked
What was the CO2 ppm of the atmosphere needed in 1871 to make this work?
Seems to me this comment deserves an answer.
Also, this model is a tad different from this one:
http://wattsupwiththat.com/2009/05/22/a-look-at-human-co2-emissions-vs-ocean-absorption/

Willis Eschenbach
August 9, 2009 9:13 am

Steve, thanks for your contribution to the discussion. However, I don’t understand why you think you can use AMO and ENSO in a model. You say:

4. The grand total forcing (including the solar cycle contribution), a 2-year trailing average of the AMO index, and the Nino 3.4 index are correlated against the Hadley Crut3V global average temperature data.

The problem is that the AMO Index and the Nino 3.4 index are both measurements of sea surface temperature (SST). You present it as some kind of revelation that changes in SST today will affect the global temperature in the near future … but this is trivially true for any autocorrelated dataset, and is particularly true for SST and global temperature.
Since AMO and Nino3.4 are measurements of today’s temperatures, they are absolutely useless as model “input”. It’s like saying “I can hindcast yesterday’s temperature with success way better than chance … if I can use day before yesterday’s temperature as input.” Yes, this is also trivially true, and works very well on hindcasts … but if you think that will allow you to forecast the next decade you are very mistaken.
So no, the AMO and the Nino 3.4 do not predict the drop in temperature 1940 – 1970, nor the drop 1875 – 1900. They do not forecast those temperature drops at all, they measure those drops. So claiming that they make your model more accurate, while true, is meaningless.
You present it as if it were a surprising result that if you remove measured temperature variations for the Atlantic and the Pacific oceans, the variance in global temperature is reduced … but doing that has no effect on the accuracy of any model. It also means nothing about the credibility of any model.
Absent the AMO and the Nino 3.4, you are simply asserting (without providing a scintilla of evidence) that temperature will rise with log CO2 … but that is what the debate is about, you can’t just assume that.
w.
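Willis's autocorrelation point is easy to demonstrate: hindcasting a series from its own lagged values looks impressively skillful even when the series contains no physics at all. A toy Python sketch with purely synthetic data (nothing here is real climate data):

```python
import numpy as np

rng = np.random.default_rng(0)
temp = np.cumsum(rng.normal(size=1000))  # a random walk: no physics at all

# "Hindcast" each value using the previous value as the model input.
pred, obs = temp[:-1], temp[1:]
r = np.corrcoef(pred, obs)[0, 1]
print(round(r**2, 3))  # R^2 near 1: impressive-looking, zero forecast skill
```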

August 9, 2009 9:34 am

Quote:
1) It is reasonable to expect that positive forcing, from whatever source, will increase the average temperature of the Earth’s surface.
Isn’t this the definition of positive forcing?
The article mentions the effect of aerosols and carbon particles in the atmosphere. I have never seen any discussion of the effect of the various “Clean Air Acts” introduced over the last 60+ years.

Jim
August 9, 2009 10:04 am

Steve F. – I have a full-time job, kids, wife, house, etc., and would like to understand more about climate models. I really don’t have time to learn everything necessary to start from scratch. I was wondering if you could share your model. It might help those of us who are not professionals, but who have a background in science that would allow us to (eventually) come to a better understanding of climate models.

Jim
August 9, 2009 10:07 am

Steve – does your model take into account the night side of the Earth, which is exposed to the near-zero radiative temperature of space?

AlanG
August 9, 2009 11:03 am

Thanks for the great post. It’s posts like this that reinforce my rejection of positive feedback and Hansen’s ‘x3 CO2 forcing factor’, a factor that was decided before most of the science was even started. Zero feedback fits past temperature better, so a 1-1.2C rise for a doubling of CO2 is the default case.
CO2 can only warm the oceans if it warms the atmosphere first. The atmosphere has no thermal inertia, so warming from CO2 must happen every time the sun comes up – something we would have seen by now. How can warming be hiding in the oceans if warming of the atmosphere is the very mechanism that is supposed to warm the oceans?
Whole-world cloud feedback is near to zero: strongly negative in the tropics, less so at middle latitudes, and positive at the poles. Roy Spencer has intimated that negative feedback lessens away from the tropics, and we know that clouds warm the poles, so it’s only a matter of degree in between.
The whole positive feedback case is based upon an increase of CO2 causing a small increase in temperature, which causes an increase of absolute humidity, which in turn leads to a second round of warming. But the 1998 El Nino raised temperatures quite a bit. If positive feedback were true, the temperature should have stayed permanently higher. It didn’t, so how can positive feedback possibly be right? The reason it didn’t is that the atmosphere isn’t saturated with humidity: the extra humidity gets taken out of the atmosphere.
Climate isn’t complicated. It’s actually very simple. The problem we are up against is, it doesn’t matter what is said, only who says it.

Nogw
August 9, 2009 11:15 am

My granddaughter (she is three years old) also plays computer games for fun.

Pamela Gray
August 9, 2009 11:21 am

Willis, I disagree. Using statistical models you can get in the ballpark with predictions. What you can’t predict as well is whether or not the conditions will happen as they have happened in the past. So you run several statistical models. One will win out by pulling ahead of the others. Dynamical models don’t work so well because we don’t know how the chaotic system works. Statistical models don’t care. They just spit out what happened in the past given the current set of conditions. And usually a number of things happened more than others under the same setting in the past so you set your confidence level and hit the button to get the most likely temp outcome.

Bill Illis
August 9, 2009 11:37 am

Just to add to Steve’s point about the 0.27C per Watt/metre^2.
I built a few charts of how temperatures are related to the forcings and the change in the forcing using the Stefan-Boltzmann equations.
Here is how Earth’s surface temperature changes with changes in solar radiation (at the top of the atmosphere). The point of this is to show that the curve flattens: as one goes up in Watts, each successive temperature increase is smaller. [I used solar since so many people are interested in that.]
http://img34.imageshack.us/img34/3668/sbsolarforcing.png
Now here is how the surface temperature will change for each 1 Watt/metre^2 of forcing at the surface, over the range of Watts we are concerned about (this is now the surface, so it is solar/4 plus the greenhouse effect).
http://img33.imageshack.us/img33/2608/sbtempcperwatt.png
This really means global warming is only about one-third of that estimated, give or take other changes such as albedo which could happen with global warming. It might also explain why temperatures have not kept up with the predictions of the models: whatever forcing is being absorbed in the oceans is actually taking away Watts, so there is less forcing at the surface than predicted, not less temperature response per Watt than expected. The limit is still 0.27C (which declines to 0.26C in a few more Watts of increase).
I think many people have been using averages for these calculations, and one really needs to get down to “each successive Watt equals Y temp increase”. The Sun is giving us 240 Watts at the surface, which translates into 255K, or 1.06C per Watt on average. But actually, the first Watt was worth 64C, the next one 34C, and so on. It is only 0.27C now.
[Steve actually tuned me onto this in another venue which is going to help me finish another project I’m working on. Thanks.]
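Bill Illis's per-watt framing follows directly from T = (W/sigma)^(1/4): the increment for the next watt is dT/dW = T/(4W), which shrinks as W grows. A quick check of the endpoint figures he quotes:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def T_of_W(W):
    """Blackbody temperature (K) sustained by an absorbed flux W (W/m^2)."""
    return (W / SIGMA) ** 0.25

print(T_of_W(1.0))                   # ~64.8 K: the "first watt"
print(T_of_W(240.0))                 # ~255 K at today's ~240 W/m^2
print(T_of_W(240.0) / (4 * 240.0))   # dT/dW = T/(4W), ~0.27 K per W/m^2
print(T_of_W(241.0) - T_of_W(240.0)) # the same increment by direct difference
```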

Willis Eschenbach
August 9, 2009 12:58 pm

Pamela, I guess my writing is not clear. The problem is not that it is a statistical model. It is the idea that you can remove variation by using AMO and Nino3.4.
In general, you can’t use observations to remove variance from observations. In particular, you can’t use sea temperature observations to remove variance from air temperature observations.
Or to be more accurate, you can do that, but you add absolutely nothing to the predictive ability of a model by doing so.
Sea surface temperature (SST) is very, very closely related to air temperature. The Hadley sea surface temperature (HadSST) enjoys a correlation of 0.93 with the Hadley global surface air temperature (HadCRUT). Steve is using a subset of the SST (Nino3.4 and AMO) to remove variations from the air temperature … I hope you can see the huge problems with that approach.
If Steve’s approach worked, we could use the SST to remove 93% of the variance from the air temperature, leaving a straight line … but how on earth would that improve the model?
His model is nothing but “Temperature is proportional to CO2” dressed up in fancy mathematical clothes. Open your eyes, folks… it doesn’t help to do that, it makes no difference; at the end of the day you’re just left with “temp ~ CO2”.
w.

Nogw
August 9, 2009 1:41 pm

If atmospheric CO2 falls to 220 ppm, plants get sick. They die at 160 ppm.
…and if plants die, your beautiful little lives, global warmies, will end too!

Alex Harvey
August 9, 2009 2:10 pm

Steve Fitzpatrick,
You Wrote:
“So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.”
There is a real puzzle here. Throughout the course of the year each hemisphere cycles between hot summers and cold winters. The non-linearity, if present, might show in the climatology. Basically, the individual local climatologies (e.g. as in HadCRU’s abstem3) ought to show a marked asymmetry between (hottest month – annual average) and (annual average – coldest month). Given that the range of temperatures in some continental areas is extreme (+/- 30C), the asymmetry ought to be marked. But the asymmetry is not apparent in the climatologies.
Now I do not know why this should be, so it puzzles me. It could look horribly like negative feedback. Over the oceans and in maritime areas one can rightly argue that the oceans are exporting their low annual range to the land and somehow this removes the asymmetry, but I cannot see how this applies to areas like Eastern Siberia that have the largest annual range, which implies that they are largely cut off from the oceans.
As best as I can calculate from the S-B equation, the asymmetry there should be around +9% of the annual range, but it is actually very small, ~1%, and in the opposite sense.
Now this may all be just rubbish, so if anyone knows better please tell.
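For what it is worth, the expected asymmetry can be reproduced with a short Stefan-Boltzmann calculation: drive a symmetric flux cycle and see where the temperature at the midpoint flux falls. A rough Python sketch, in which the 263 K annual mean and the ±30 K swing are assumed Siberia-like values:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

T_mean, half_range = 263.0, 30.0             # assumed Siberia-like climatology
F_hot = SIGMA * (T_mean + half_range) ** 4   # flux sustaining the hottest month
F_cold = SIGMA * (T_mean - half_range) ** 4  # flux sustaining the coldest month

# Temperature at the midpoint flux sits above the arithmetic mean of the
# extremes, because T ~ F^(1/4) is concave.
T_mid = ((F_hot + F_cold) / 2.0 / SIGMA) ** 0.25
print(T_mid - T_mean)                        # ~5 K offset
print((T_mid - T_mean) / (2 * half_range))   # ~8-9% of the annual range
```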
*******
On another tack, one area that I feel is not much commented on is the ability of the GCMs to reproduce Earth-like climatologies, that is, getting the mean temperature, annual range, and annual phase lags correct for each location. As I recall they do not do very well. There really ought to be a climatology test, such as: if I were to wake up in a modelled world, could I tell within the course of a year whether the climatology was right or wrong? Now I am not talking about weather, which nobody can predict, but the climate (strictly speaking, the climatology).
On yet another tack, a guy from the Met Office was interviewed on the box a week or so back over the seasonal forecast for the UK and its failure. He said that it was easier to predict the climate in the distant future (I cannot remember, but I think ~30 years out) than it is to predict the coming seasons. Could that be because it takes a lot longer before you can be judged?
Alexander Harvey

Rob R
August 9, 2009 2:30 pm

What happens to the scenario presented by Steve Fitzpatrick if one factors out the overestimation of warming that is likely to be part of the CRUT3V temperature anomaly? The CRUT3V anomaly does not fully account for socioeconomic contamination of the temperature record, as demonstrated by McKitrick and others. The real post-Little Ice Age global temperature anomaly trend is probably lower than that used in the post under discussion here.

Dr A Burns
August 9, 2009 2:31 pm

I wonder what effect man’s deforestation and desertification has had on climate in the past century? Australia alone has lost 70% of its natural vegetation.

Adam from Kansas
August 9, 2009 2:41 pm

The paper is interesting, but if draconian restrictions on emissions aren’t enough to stop AGW, as the paper says the models suggest, does that mean the only way would be an unprecedented reduction of global population by more than 90 percent, and maybe even 99 percent? Would we have to completely and rapidly de-populate Africa, China, and India, and turn that whole continent and those countries into massive plant and animal preserves, to make the temps stay even?

cba
August 9, 2009 3:02 pm

One should bear in mind, when playing with the simple Stefan’s-law balance sensitivity, that the average surface radiation is around 390 W/m^2 and the average radiated power at the top of the atmosphere is 240 W/m^2. Take the difference and you find that about 150 W/m^2 is lost going through the atmosphere and not made up for by radiation from higher altitudes. If you divide 240 by 390 you get 0.61 as the fraction of power that is radiated. For a small change, the increase in surface radiation via Stefan’s law required to achieve balance after a 1 W/m^2 increase in absorption is therefore more like 1.67 W/m^2. The result is more like 0.31 K per W/m^2 for a radiative-only sensitivity (in clear sky). Making the further stretch that this is valid for all forcings, one sees that 150*0.31 = 46 Kelvins above that of a blackbody, which is far greater than the observed 33K rise caused by all GHGs. The implied sensitivity is the 33K rise divided by the 150 W/m^2 of GHG absorption forcing, which amounts to 0.22 K per W/m^2. That places us in the position that real-world forcings are subject to net negative feedback relative to simple radiative transfer. The only thing this doesn’t provide for is the additional ghg forcing change caused by a change in temperature, i.e. the water vapor feedback. Note that the 0.22 K per W/m^2 is the actual Earth-system average sensitivity to a CO2-only (or any other forcing-only) rise of 1 W/m^2. Of course, if sensitivity varies as ghg absorption increases, then the current sensitivity must be lower than this average if earlier forcings had higher sensitivities. For current levels to be at higher sensitivities, the earlier ones had to have a lower effect, which makes little sense, as the power absorption is itself a function of diminishing effect.
What this cannot show directly is a feedback that changes the total W/m^2 forcing from another gas, such as how many W/m^2 of increase in h2o vapor forcing happen when T rises from, say, a 1 W/m^2 increase in CO2 forcing. A CO2 doubling of 3.6 W/m^2 should then result in 0.22 x 3.6 = 0.8 Kelvins of increase in T. If a 0.8 Kelvin rise in T creates an increase in h2o vapor forcing in some sort of positive feedback mechanism, its effect must definitely be less than 0.8 Kelvins; otherwise any small variation in upward T would result in a complete runaway of h2o vapor driving itself upward. Also, any net positive feedback of h2o vapor forcing with T leads to wild swings and variations. Net negative feedback still results in variations, but they are stable and the total swing is reduced from what it would be otherwise.
Note that a modest increase in T can only produce an h2o ‘feedback’ where there is liquid h2o available to be brought into the atmosphere; some areas do not have this ready reservoir. Also, bringing more h2o vapor into the atmosphere increases convective power transfer and can bring in that real unknown, additional cloud cover, which can push the added h2o vapor forcing from possibly positive to seriously negative. This is the area that is poorly known and practically ignored in modeling.
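cba's chain of figures can be retraced numerically; a minimal check, using only the round numbers given in the comment:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

surface_emission = 390.0  # W/m^2, mean surface radiation (~288 K blackbody)
toa_emission = 240.0      # W/m^2, mean outgoing longwave at top of atmosphere

escaping_fraction = toa_emission / surface_emission
print(escaping_fraction)                  # ~0.61, so ~150 W/m^2 is absorbed

# Surface Stefan-Boltzmann sensitivity, boosted by the escaping fraction:
dT_dW_surface = 1.0 / (4.0 * SIGMA * 288.0**3)
print(dT_dW_surface / escaping_fraction)  # ~0.30 K per W/m^2, his "0.31"

# Observed greenhouse warming per watt of longwave absorption:
print(33.0 / 150.0)                       # 0.22 K per W/m^2 system average
```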

Steve Fitzpatrick
August 9, 2009 3:02 pm

It would unfortunately take more time to answer all the comments/questions posted so far than it took to prepare the post, so I can’t possibly address them all. I will try to address at least some:
With regard to this being a ‘pro-AGW’ post, I want to point out that I am very skeptical of the large temperature increases projected by GCM’s and the IPCC, and that the net climate sensitivity the model suggests (~0.27 degree per watt) is about 1/3 of the sensitivities used in making those much larger projections. On the other hand, adding any infrared absorbing gas to the atmosphere makes it more difficult for infrared radiation to escape from the Earth’s surface. The net effect of these gases has been pretty well studied, and their “radiative forcings” are reasonably well known, so the addition of infrared absorbing gases to the atmosphere ought to increase the surface temperature. The key issue is the magnitude of surface warming that might reasonably be expected from this forcing: is the feedback that operates on it negative, positive, or near zero? My post was an effort to define the warming that might take place due to greenhouse gases IF they were the only cause of warming since the mid 1800’s. The several comments suggesting that part/most/all of the warming was due to other causes seem to miss the point I was trying to make: this is pretty much a worst case scenario.
With regard to the model itself, I was not aware of when Anthony would place the post on WUWT; I had hoped to have the spreadsheet (sorry, it is not R or something else that some might prefer) available at the time of the post. I will ask Anthony to make the spreadsheet available.
Leif: I was not aware that Lean had changed her mind about the 2 watts change since the little ice age. It certainly was not my intent to misrepresent her current views. The calculations I did were based on recently measured changes in intensity over the solar cycle (peak to valley) of ~1 watt per square meter at the top of the atmosphere, and the model assumed this variation was the same since 1871. This works out to ~0.7 * 0.25 = 0.175 watt per square meter, and an expected solar signal from the solar cycle of 0.047C (peak to valley) for a sensitivity of 0.27 degree per watt per square meter. What I found interesting was that the best model fit to the temperature data corresponded to ~0.168 watt per square meter, remarkably (at least to me) close to the 0.175 watt per square meter that would be expected based on the measured variation over the last few cycles. So for what it is worth: the model is consistent with no substantial change in cyclical variation over the past 130 years.
Willis Eschenbach (09:13:53) : “The problem is that the AMO Index and the Nino 3.4 index are both measurements of sea surface temperature (SST). You present it as some kind of revelation that changes in SST today will affect the global temperature in the near future … but this is trivially true for any autocorrelated dataset, and is particularly true for SST and global temperature.”
If you look at the AMO and Nino 3.4 historical data you will see that in spite of overall warming since 1871, these indexes have shown essentially no net trend, and so appear to have contributed virtually nothing to the observed total warming. Graphs showing the historical trends of these two indexes are included in the spreadsheet. AMO and Nino 3.4 most certainly are related to “climate/weather noise”, and that is the point of including them in the model: these indexes account for most (not all) of the variation around the long term trend. AMO and Nino 3.4 can be measured at any time you want, and their contributions subtracted from the currently measured global average temperature to reveal the “true” temperature trend (or at least a much “truer” trend). Indexes like the AMO and Nino3.4 are well known to capture shorter term climate variation, and I was not suggesting that including them in a model was any kind of “revelation”; they were included in Bill Illis’s model (based on only CO2, AMO, and Nino 3.4) back in 2008.
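The solar-cycle arithmetic in the reply to Leif above is compact enough to restate in code. In this sketch the 0.3 albedo and the factor-of-4 spherical-geometry dilution are the standard assumptions behind the quoted 0.7 * 0.25 scaling:

```python
# Expected global-temperature signal from the ~1 W/m^2 solar-cycle swing.
TSI_SWING = 1.0     # W/m^2 peak-to-valley at the top of the atmosphere
ALBEDO = 0.3        # assumed planetary albedo
GEOMETRY = 0.25     # a sphere intercepts pi*r^2 but radiates over 4*pi*r^2
SENSITIVITY = 0.27  # K per W/m^2, the model's fitted value

forcing = TSI_SWING * (1.0 - ALBEDO) * GEOMETRY
print(forcing)                # 0.175 W/m^2 of effective surface forcing
print(forcing * SENSITIVITY)  # ~0.047 K expected peak-to-valley signal
```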

Steve Fitzpatrick
August 9, 2009 3:44 pm

eric (06:40:53) :
“Steve Fitzgerald,
One of the key problems with an analysis of climate sensitivity from temperature data, such as you have performed, is the estimation of the lag time for the ocean surfaces to heat up. The use of the solar cycle versus temperature data is problematic. It is OK if the system consists of only one heat reservoir, so that a single time constant is appropriate. The problem is that the ocean has a shallow and a deep reservoir with different time constants, and the easily observed smaller reservoir, which has a 2-year time constant, will give you too small an answer for the climate sensitivity.
A more complex model is required to get a correct answer.”
I do not know who this Steve Fitzgerald person is, but he is not me.
If you assume that the “short” time constant is <2 years and that this shallow reservoir represents only ~40%, and you also assume that there is a larger (~60%) reservoir with a time constant of ~30-40 years, then the model will calculate (with a lower R^2) a sensitivity of about 0.37 degree per watt per square meter, or ~1.37C for a doubling of CO2. To reach a sensitivity in the range of the IPCC projections, you need BOTH much longer ocean lags (which do not appear consistent with recent ARGO data) AND to assume that man-made aerosols have “canceled” much of the radiative forcing (once again, not consistent with the ‘global brightening’ observed since the early 1990’s). The model will also suggest that the solar cycle forcing has to be substantially bigger than has been observed by satellites.
The key point is: what will happen in the next 50 years? A relatively straightforward (and simple) curve fit analysis suggests that warming may continue, but at much less than the IPCC projected rate. Please look at the accuracy of the 1972 to 2008 model projection, and then compare it with the accuracy of GCM projections since at least 2000 (Lucia has many relevant posts on this subject); the GCM’s consistently predict more warming than actually happened. Do you honestly think that the prediction accuracy of the model I showed will change from good to poor starting in 2009, and the GCM’s will suddenly become more accurate? If so, what change(s) in the sun/oceans/atmosphere do you think is(are) happening right now that will make the curve fit model less accurate than it was for 1972 to 2008, and make the GCM’s more accurate?
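For readers converting between the two conventions used in this exchange, a sensitivity in degrees per W/m^2 maps to a per-doubling figure via the standard 5.35*ln(C/C0) forcing expression, about 3.7 W/m^2 for doubled CO2. A small sketch:

```python
import math

def doubling_warming(sensitivity_K_per_Wm2):
    """Warming for doubled CO2, using F = 5.35*ln(2) ~ 3.7 W/m^2."""
    return sensitivity_K_per_Wm2 * 5.35 * math.log(2.0)

print(doubling_warming(0.27))  # ~1.0 C, the article's central estimate
print(doubling_warming(0.37))  # ~1.4 C, the two-reservoir variant above
print(doubling_warming(0.8))   # ~3 C, typical of IPCC-range sensitivities
```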

Bill Illis
August 9, 2009 3:46 pm

Regarding Willis’ comments about the AMO and Nino 3.4 region being part of the temperature record – they are. But they are only a small part of it.
The AMO region represents about 5.5% of the globe (probably less compared to the regions which are actually counted in the global temperature record since Hadcrut3 and GISS don’t use the whole AMO region in their global temperature record.)
So one is using (up to) 5.5% of the dataset to predict 100% of the dataset.
The Nino 3.4 region as well represents about 0.7% of the globe and is fully counted in the global temp record. But the correlation for Nino 3.4 is lagged 3 months if one is using a monthly model, so one is using 0.7% of the temperature record of 3 months ago to predict today’s temperature record.
Willis is thus partly correct, but if 5% of the globe, plus 0.7% of the record of 3 months ago, can predict up to 70% of the temperature variation, then it is certainly something worth looking at.
If anything, they are responsible for a large part of the “noise” in the temperature record which is rather easy to demonstrate. Take out some of the noise and the underlying trends are more evident.
Both indices are detrended, so over the long-term, they are not adding to the underlying trend. But on short time-scales, they obviously have an impact on the trend. Hadcrut3 increased by 0.6C from the beginning of the 1997-98 El Nino to the end – only 15 months. Why wouldn’t one want to adjust for that?

August 9, 2009 3:54 pm

Nogw (13:41:21) :
If atmospheric CO2 falls to 220 ppm, plants get sick. They die at 160 ppm.
…and if plants die, your beautiful little lives, global warmies, will end too!

Yeah! What you say is true. Warmies cannot understand that plants survive droughts better at higher levels of CO2:
http://www.ars.usda.gov/research/publications/Publications.htm?seq_no_115=220520

Willis Eschenbach
August 9, 2009 3:54 pm

Steve Fitzpatrick, thanks for your reply. You wrote:

If you look at the AMO and Nino 3.4 historical data you will see that in spite of overall warming since 1871, these indexes have shown essentially no net trend, and so appear to have contributed virtually nothing to the observed total warming. Graphs showing the historical trends of these two indexes are included in the spreadsheet. AMO and Nino 3.4 most certainly are related to “climate/weather noise”, and that is the point of including them in the model: these indexes account for most (not all) of the variation around the long term trend. AMO and Nino 3.4 can be measured at any time you want, and their contributions subtracted from the currently measured global average temperature to reveal the “true” temperature trend (or at least a much “truer” trend). Indexes like the AMO and Nino3.4 are well known to capture shorter term climate variation, and I was not suggesting that including them in a model was any kind of “revelation”; they were included in Bill Illis’s model (based on only CO2, AMO, and Nino 3.4) back in 2008.

You are correct that there is no trend in the AMO or the Nino3.4. This is because they are detrended SST values, not SST itself.
However, they are measurements of the climate system, and as such, you can’t use them to reduce the variance in the data. You treat them as though they were external forcings, or new data which you could subtract from the existing measurements to “reveal the true temperature trend”.
But they are not external forcings or new data in any sense. They are temperature measurements of the system. You can’t use them to “reveal the true temperature trend”, that’s simply not possible. You can’t “bootstrap” more information out of measurements by subtracting some subset of those measurements from the data. It’s the same as just smoothing out the data to get rid of short term variability. Makes your data look better … but it doesn’t make your model more accurate in the slightest.
This is a fundamental and central point, which obviates your basic thesis. Please do some research on the question, as your claims as they stand are simply not tenable.
Heck, if you want to take your path to the ultimate, just detrend the SST. This gives you the ultimate measure of the natural variability. Then subtract the detrended SST from the air temperature, and voila!! The true temperature trend is revealed!
But that doesn’t make your model any more or less accurate, not by one bit. It does show the trend … but we knew that already.
w.

Steve Fitzpatrick
August 9, 2009 4:01 pm

Dr A Burns (14:31:15) :
“I wonder what effect man’s deforestation and desertification has had on climate in the past century ? Australia alone has lost 70% of its natural vegetation.”
Very complex question. A dense cover of trees has low albedo, while a desert has much higher albedo, so at first glance you might think that conversion of forest to desert would have a net negative effect on net solar heating. However, there are other issues like rainfall patterns being changed by forests which could modify the heat balance.

Steve Fitzpatrick
August 9, 2009 4:04 pm

Anthony (or moderator), how can I best send you the spreadsheet so people can play with it if they want? Should I send it to Anthony’s email?
REPLY: WordPress.com does not allow hosting of Excel spreadsheets or Zip files for security. Best to publish it to a 3rd party file service and provide a URL – Anthony

Willis Eschenbach
August 9, 2009 4:08 pm

Bill Illis, good to hear from you. You say:

Regarding Willis’ comments about the AMO and Nino 3.4 region being part of the temperature record – they are. But they are only a small part of it.
The AMO region represents about 5.5% of the globe (probably less compared to the regions which are actually counted in the global temperature record since Hadcrut3 and GISS don’t use the whole AMO region in their global temperature record.)
So one is using (up to) 5.5% of the dataset to predict 100% of the dataset.
The Nino 3.4 region as well represents about 0.7% of the globe and is fully counted in the global temp record. But the correlation for Nino 3.4 is lagged 3 months if one is using a monthly model, so one is using 0.7% of the temperature record of 3 months ago to predict today’s temperature record.
Willis is thus partly correct, but if 5% of the globe, plus 0.7% of the record of 3 months ago, can predict up to 70% of the temperature variation, then it is certainly something worth looking at.
If anything, they are responsible for a large part of the “noise” in the temperature record which is rather easy to demonstrate. Take out some of the noise and the underlying trends are more evident.
Both indices are detrended, so over the long-term, they are not adding to the underlying trend. But on short time-scales, they obviously have an impact on the trend. Hadcrut3 increased by 0.6C from the beginning of the 1997-98 El Nino to the end – only 15 months. Why wouldn’t one want to adjust for that?

Certainly we can use those measures to reduce the variance of the temperature record. But how does this differ from just smoothing the record? It has the same advantages (reduction of short-term variability) and the same disadvantages (reduction of degrees of freedom). How does it help the modeling effort?
A model contains one or more dependent variables (temperature, precipitation) and a number of independent variables (changes in CO2, aerosols, water vapor, black carbon, and the like). Removing the effect of one of the independent variables helps us to establish the true strength of the remaining independent variables.
However, AMO and Nino3.4 are dependent variables, not independent variables. As such, removing them does not improve the model at all. So yes, you can do what you propose … but how does it help?
w.

Steve Fitzpatrick
August 9, 2009 4:25 pm

Richard Sharpe (07:45:19) :
” A new paper by Lindzen and Choi (described at WUWT on August 23, 2009)
Do you have something scheduled to drop on August 23?”
Sorry, a simple error: July 23, 2009; I guess I was getting ahead of myself.

Steve Fitzpatrick
August 9, 2009 4:36 pm

tallbloke (00:52:31) :
“What would the climate sensitivity to co2 look like if the solar contribution to the warming was, say, 75% ? Simply 1/4 of your figure, or is it more complicated?”
There is no data I am aware of that would lead to assignment of 75% of warming to solar contributions. Let me know if you have this data and where it comes from.

August 9, 2009 4:54 pm

Steve said
“On the other hand, adding any infrared absorbing gas to the atmosphere makes it more difficult for infrared radiation to escape from the Earth’s surface.”
Steve, thanks for an interesting article. Under what circumstances could infrared radiation leave the earth?
On a similar tack, I read that CO2 molecules could leave the earth provided they attained sufficient velocity, but the article did not say how that was achieved or what % of CO2 ‘leaks’ from Earth.
Anyone able to elaborate on either of these issues?
tonyb

Steve Fitzpatrick
August 9, 2009 5:10 pm

Allen63 (05:24:10) :
“But, AGW is all about global heat accumulation — for which global temperature is only a proxy.”
Of course.
Unfortunately, we do not have 137 years of Argo heat data (which would completely settle the question of climate sensitivity). We only have temperature data, with all its warts, uncertainties, and problems. We do not have reliable temperature records from before the 1800’s, so it is not possible to verify whether the model results would be consistent with earlier periods. Substantial warming and cooling certainly have taken place over very long periods (hundreds to thousands of years), including the medieval warm period, little ice age, Roman warm period, and the Holocene optimum.
My intent was not to explain the recent climate history of the Earth. I was trying only to make a reasonable prediction for the next 50 years (about two generations), assuming that the recorded warming since the 1800’s has been almost all due to greenhouse forcing. Will the prediction be perfect? For sure not. Will the prediction be pretty close? Probably. If I were young enough to have a chance to be around in 30 or 40 years, I would happily take bets on the accuracy of the prediction. The standard error of the temperature estimate is about 0.095C, so there is about a 2/3 chance that the model’s prediction will be within ~+/-0.1C of the measured temperature 30 or 40 years from now.

August 9, 2009 5:25 pm

Bill Illis: A few clarifications.
You wrote, “The AMO region represents about 5.5% of the globe (probably less compared to the regions which are actually counted in the global temperature record since Hadcrut3 and GISS don’t use the whole AMO region in their global temperature record.)”
NOAA ESRL calculates the AMO as detrended North Atlantic SST anomalies from 0 to 70N.
http://www.cdc.noaa.gov/data/timeseries/AMO/
HADSST2 (used by GISS up to November 1981, also used in HADCrut) appears to capture the North Atlantic as far as 80N. It should vary with ice extent:
http://hadobs.metoffice.com/hadsst2/
OI.v2, which GISS has used since December 1981, appears to capture anything that’s not ice. http://i26.tinypic.com/2v0hbid.png
(And, curiously, on occasion appears to indicate SST anomalies where ice exists.)
http://i42.tinypic.com/2ms27a1.jpg
So both GISS and HADCrut should capture all of the AMO. And what’s the surface area of the North Atlantic, about ½ of the Atlantic? So, if the Atlantic represents 30% of the global ocean area, and if the North Atlantic occupies half of it, and if the oceans represent 70% of the global surface area, then the North Atlantic should represent about 10% of the global surface area, should it not?
You wrote, “The Nino 3.4 region as well represents about 0.7% of the globe and is fully counted in the global temp record. But the correlation for Nina 3.4 is lagged 3 months if one is using a monthly model so one is using 0.7% of the temperature record of 3 months ago to predict today’s temperature record. ”
Not to be nitpicky but… The distance between 5N and 5S is 1111 km. The distance between 170W and 120W is 5533 km.
http://www.nhc.noaa.gov/gccalc.shtml
The surface area for the NINO3.4 area is ~6.147 million sq km. And if the surface of the globe is 510.072 million sq km, then the NINO3.4 area represents ~1.2% of the globe.
BUT
El Nino events affect more of the eastern tropical Pacific than the NINO3.4 area:
http://i25.tinypic.com/t8t1lw.png
The SST anomalies of the NINO3.4 area are used for comparison to global temperatures because they agree statistically with global temperature variations better than the SST anomalies of the NINO3, NINO4, and NINO1+2 areas. Thus your reason for using the NINO3.4 region for your predictions.
Regards
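The area figures traded in this comment can be checked with the spherical-patch formula A = R^2 * dlon * (sin(lat2) - sin(lat1)); a minimal sketch:

```python
import math

R_EARTH = 6371.0                         # km
GLOBE_AREA = 4.0 * math.pi * R_EARTH**2  # ~510 million sq km

def patch_area(lat_s, lat_n, lon_w, lon_e):
    """Area (sq km) of a latitude/longitude box on a spherical Earth."""
    dlon = math.radians(lon_e - lon_w)
    return R_EARTH**2 * dlon * (math.sin(math.radians(lat_n)) -
                                math.sin(math.radians(lat_s)))

nino34 = patch_area(-5.0, 5.0, -170.0, -120.0)  # 5S-5N, 170W-120W
print(nino34 / 1e6)                             # ~6.2 million sq km
print(100.0 * nino34 / GLOBE_AREA)              # ~1.2% of the globe
```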

Steve Fitzpatrick
August 9, 2009 5:28 pm

Willis Eschenbach (16:08:58) :
“However, AMO and Nino3.4 are dependent variables, not independent variables. As such, removing them does not improve the model at all. So yes, you can do what you propose … but how does it help?”
If you run the regression without the Nino3.4 and AMO indexes, then the reported sensitivity to radiative forcing is just about the same as with them. The R^2 for the model (the quality of its hindcast, if you will) is much worse, since these indexes account for much of the short term variation. These indexes DO NOT change the overall trend in any way, since the long term trend in both indexes is flat since 1871. They were included to improve the accuracy of the model predictions. Yes, the short term variation in global temperature is closely correlated with these indexes, but these indexes are completely independent of radiative forcing and clearly are not responsible for any significant net global warming over the entire period… their trends are completely flat.

Nogw
August 9, 2009 5:36 pm

I was wondering, when reading Ian Plimer’s book “Heaven and Earth”, how is anybody going to enforce tax payment on the CO2 emissions made by Mammoth Hot Spring at Yellowstone, which emits 160 to 190 tonnes of CO2 per day?
It will be a bit troublesome, though visitors could be charged instead of the spring itself…

Steve Fitzpatrick
August 9, 2009 5:55 pm

TonyB (16:54:04) :
“Steve, thanks for an interesting article. Under what circumstances could infrared radiation leave the earth?”
Infrared radiation leaves the Earth continuously, at an average rate that is very close to the average rate of solar heating. Any difference shows up as heat gain or loss in the oceans. How much infrared radiation is lost per square meter varies a lot. In general, the rate is highest in the tropics (or close to them), where the rate of solar input is highest, and lowest near the poles, where the solar input is small. The rate of loss also varies with time of day, weather, season, whether over ocean or land, and with local geography on land.

Editor
August 9, 2009 6:26 pm

Steve Fitzpatrick (17:28:10) : “If you run the regression without the Nino3.4 and AMO indexes, then the reported sensitivity to radiative forcing is just about the same as with them. The R^2 for the model (the quality of its hindcast, if you will) is much worse, since these indexes account for much of the short term variation. These indexes DO NOT change the overall trend in any way, since the long term trend in both indexes is flat since 1871.”
Well, that’s exactly what I said a while ago. When Nino and AMO indices are taken away, the only factor left is CO2. So all the model is doing is ascribing all the observed temperature trend to CO2.
What value is the model? IMHO precisely zero.

Jim
August 9, 2009 6:30 pm

***************
Nogw (17:36:06) :
I was wondering, when reading in Ian Plimers book “Heaven and earth”, how is anybody going to enforce tax payment on CO2 emissions made by Mammoth Hot Spring at Yellowstone, which emits from 160 to 190 tonnes per day of CO2?
It will be a bit troublesome, though visitors could be charged instead of the spring itself…
****************
What with all the talk about the role of water in general and clouds in particular, I wonder how much CO2 cold rain in the tropics sweeps into the ocean? Cold rain should be pretty efficient at absorbing CO2. I guess, being fresh water, it wouldn’t mix well with the ocean water and would end up out-gassing pretty quickly.

Steve Fitzpatrick
August 9, 2009 6:33 pm

“Adam from Kansas (14:41:13) :
The paper is interesting, but if the draconion restrictions of emissions isn’t enough to stop AGW like that paper says the models suggest, does that mean the only way would be an unprecedented reduction of global population by more than 90 percent and maybe even 99 percent? Would we have to completely and rapidly de-populate Africa, China, and India and make that whole continent and the whole of those countries massive plant and animal preserves to make the temps. stay even?”
Is your question tongue in cheek? If not, then yes, there are a lot of green crazies out there who advocate drastic reductions in present worldwide populations; numbers like 50% to 80% fewer people than today are kicked about, combined with drastic reductions in per-capita fossil fuel use. They basically want very few babies born over the next 100 years (worldwide lotteries for the right to have offspring, or will the IPCC just pick the winners? And what to do with those pesky babies born to people who didn’t have permission?).
If you want to get really depressed about what sane people have to overcome in the age of Obama, read a while at the Green Hell Blog. James Hansen (a very mainstream green, not nearly as extreme as many) calls for reducing CO2 to 350 PPM through a combination of herculean efforts over the next 100 years. It is truly mind boggling.

Dr A Burns
August 9, 2009 6:55 pm

“does that mean the only way would be an unprecedented reduction of global population by more than 90 percent and maybe even 99 percent? ”
A back-of-the-envelope calculation shows that body heat from the current 6.7 billion people is enough to heat the atmosphere by 0.8 degrees C in 100 years.
The point is that the changes in temperature being discussed are so small that almost anything can affect them.
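That back-of-the-envelope number can be checked against the heat capacity of the atmosphere. With an assumed ~100 W of metabolic output per person the answer comes out nearer 0.4 C per century, so the quoted 0.8 C implies a higher assumed output, but the order of magnitude holds; and of course in reality this heat is continuously radiated away rather than accumulated:

```python
POPULATION = 6.7e9
METABOLIC_W = 100.0   # W per person, an assumed round number
ATM_MASS = 5.14e18    # kg, approximate mass of the atmosphere
CP_AIR = 1004.0       # J/(kg K), specific heat of air
SECONDS = 100 * 365.25 * 24 * 3600  # one century, in seconds

energy = POPULATION * METABOLIC_W * SECONDS  # joules released over 100 years
print(energy / (ATM_MASS * CP_AIR))          # ~0.4 K if none of it escaped
```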

Robert Austin
August 9, 2009 6:58 pm

John (03:06:17) :
You wrote.
“The author talks of infra-red absorbing gases such as CO2. My understanding is that CO2, on receiving a quantum of infra-red, instantly radiates a quantum of infra-red in a random direction at the same wavelength and energy. If CO2 absorbs IR it must get warm.
I thought this was one of the main misdirections used in the so-called greenhouse gas theory.
Not so?”
Please, anybody correct me if I have got this wrong since I am not a physicist.
If a CO2 molecule is excited to a higher energy level by a photon of radiation and “immediately” re-radiates the same energy photon, there is no heating of the CO2 molecule. This extended to multiple absorptions and emissions is the simple but not very correct model that some use to illustrate the so called greenhouse effect. In other words, these photons with wavelengths corresponding to the absorption bands of CO2 are shown to go ricocheting in random directions and eventually escape to space or collide with the earth where the process starts all over again. The delay in the escape of the photons within the absorption bands is put forth as creating the greenhouse effect.
The above description may well be accurate in a rarefied gas where the decay time for the raised energy state in the CO2 is substantially shorter than the mean time between collisions with other molecules but in the lower atmosphere, this is not the case.
So what we have in the lower atmosphere is this: the earth emits approximately like a black body, with a small portion of this energy being in the absorption bands of CO2. A photon travels only a few metres before exciting a molecule of CO2. Most of these excited CO2 molecules collide with adjacent molecules before the high-energy state can decay. The photon energy is converted into heat energy, and the heated gases again radiate as a black body with the radiated energy spread out over the infrared spectrum. This “dilution” of the original energy in the absorption bands means that the CO2 has done its thing in the lower atmosphere and is of diminishing importance as the concentration rises; this is exemplified in the logarithmic relation of CO2 concentration to temperature rise.
Of course, the thing is immensely more complicated when you introduce clouds, albedo, and convection, but I cringe when I hear what I call the ping-pong-ball explanation of the greenhouse effect.
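The timescale comparison at the heart of that explanation can be put in rough numbers. A minimal sketch with assumed order-of-magnitude values (a radiative lifetime near one second for the excited CO2 state and a collision time near 10^-10 s at sea-level density; neither number comes from the comment):

```python
# Ratio of radiative decay time to mean time between collisions near the surface.
tau_radiative = 1.0     # s, assumed order-of-magnitude radiative lifetime of excited CO2
tau_collision = 2e-10   # s, assumed mean time between molecular collisions at 1 atm
print(f"collisions per radiative lifetime: ~{tau_radiative / tau_collision:.0e}")
# ~5e9: an excited CO2 molecule almost always hands its energy to neighboring
# N2/O2 by collision (thermalization) before it has a chance to re-radiate.
```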

Kevin Kilty
August 9, 2009 7:04 pm

TonyB (16:54:04) :
Steve said
“On the other hand, adding any infrared absorbing gas to the atmosphere makes it more difficult for infrared radiation to escape from the Earth’s surface.”
Steve, thanks for an interesting article. Under what circumstances could infrared radiation leave the earth?
On a similar tack I read that CO2 molecules could leave the earth provided they attained sufficient velocity, but how that was achieved and what % of CO2 ‘leaks’ from Earth the article did not say.
Anyone able to elaborate on either of these issues?

Q1: IR leaves Earth surface upward toward space as long as any surface material has an absolute temperature above 0K, which includes everything. Certain bands in the IR leave Earth unimpeded simply because no gaseous material in the atmosphere absorbs this radiation. In other bands, however, there are gases that absorb IR strongly. CO2 for instance absorbs strongly in the band from about 12 to 16 micrometers wavelength. IR absorbing gases do not store IR; they absorb then re-emit in new directions, including back toward the Earth. It is the radiation emitted back toward Earth that we call the “greenhouse effect.”
Q2: Gases do escape Earth all the time, but do so only up in the exosphere, where the atmosphere is so tenuous that the distance between successive collisions of gas molecules with one another is large. If you took any high school chemistry and remember it, you will recall that at any temperature all molecules in a gas possess the same mean kinetic energy; therefore the least massive molecules possess the highest speed. These are the ones that escape Earth most easily; so, as hydrogen and helium manage to reach the exosphere, they will leave the Earth quite quickly. For example, there is no primordial helium left in the atmosphere. A gas like CO2, on the other hand, doesn’t even reach the exosphere because of the cold trap at the mesopause. The principal means by which CO2 leaves our atmosphere is through weathering of surface rocks, which produces bicarbonate and carbonate minerals carried to the oceans in rivers, and the direct solution of CO2 into ocean water. This dissolved CO2, in turn, reacts with oceanic crust and is stored more permanently in minerals there. There are other parts to the carbon cycle as well.
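A small sketch of the Q2 argument (the exospheric temperature of ~1000 K is an assumed round value, not a figure from the comment): compare rms thermal speeds with Earth’s escape velocity.

```python
# Why hydrogen escapes and CO2 does not: rms thermal speed vs. escape velocity.
import math

k_B = 1.380649e-23      # J/K, Boltzmann constant
T_exo = 1000.0          # K, assumed exospheric temperature (round value)
v_escape = 11200.0      # m/s, Earth's escape velocity at the surface

amu = 1.66054e-27       # kg per atomic mass unit
for name, mass_amu in [("H", 1.0), ("He", 4.0), ("CO2", 44.0)]:
    v_rms = math.sqrt(3 * k_B * T_exo / (mass_amu * amu))
    print(f"{name:>3}: v_rms = {v_rms:5.0f} m/s ({v_rms / v_escape:.2f} of escape)")
# H reaches ~45% of escape speed, so the fast tail of its Maxwell-Boltzmann
# distribution leaks away over time; CO2's rms speed is only ~7% of escape.
```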

Steve Fitzpatrick
August 9, 2009 7:14 pm

Mike Jonas (18:26:22) :
“Well, that’s exactly what I said a while ago. When Nino and AMO indices are taken away, the only factor left is CO2. So all the model is doing is ascribing all the observed temperature trend to CO2.
What value is the model? IMHO precisely zero.”
Well actually, the model includes trends in radiative forcings from CO2, N2O, chloro-fluorocarbons, and methane. These separate trends were included so that divergence in their trajectories could be considered (instead of just a single trajectory for CO2, which is really not as accurate a representation of radiative forcing). The value of the model is to make reasonable predictions over reasonably long periods, under a worst case assumption that greenhouse forcing has caused all of the observed warming.
“Splitting the time period, and curve-fitting from 1871 to 1971 then comparing “predictions” with post-1971 sounds impressive, but all it means is that if factors which actually caused the overall trend from 1871 to 1971 remained in place from 1971 to 2008, then the model would match neatly. The actual factors could be the sun, clouds, shipping volumes, or world use of soap. The model would still give a good match.”
Of course; and if the world use of soap continues to match the radiative forcing for the next 50 years, then the model will continue to make accurate predictions. Remember this is a WORST CASE prediction. If there are other factors that are:
1) truly “causative”
2) independent of radiative forcing
3) which by coincidence have historically tracked radiative forcing over 100+ years, and
4) which will now no longer do so
then the model predictions will be way too high.
On the other hand, if radiative forcing has caused all or even most of the warming, then the model should make pretty accurate predictions. The US Congress is currently working on an absolutely horrible cap-and-trade scheme, which will cost a fortune, reduce carbon emissions very little, and which is justified only on the basis of extreme predictions of global warming. If a realistic projection of warming in the worst case shows the warming will be lower (eg. <0.7C in 50 years instead of 1.4C), then perhaps this dreadful legislation loses some of its presumed justification. Is this not a good thing?

John S.
August 9, 2009 7:20 pm

Dennis A (02:59:26):
Thank you for unearthing Stevenson’s realistic description of how differently the oceans respond to SW and LW radiation. That description should be studied carefully by all would-be climatologists, too many of whom are still stuck on simplistic blackbody concepts.
Willis Eschenbach (09:13:53):
Thank you for bringing a sorely lacking physical distinction between external sources and internal redistribution of heat into the discussion, which seems myopically centered on curve-fitting. You seem to be one of the few who truly grasps the categorical difference between active forcing and passive system response.

TD
August 9, 2009 7:30 pm

John (03:06:17) :
You need to stop thinking of absorption and emission as yin and yang; they are not that closely related.
Absorption is governed by the number of absorbers and the incoming IR.
Emission is governed by the number of emitters and the temperature of the local gas.

Adam Grey
August 9, 2009 8:45 pm

I’ve read that a climate sensitivity that is too low means that ice age changes are not possible. The ~3 C sensitivity is corroborated in various ways, and one of them is to estimate from large-scale global temp changes – Quaternary ice age cycles serving well because the land masses, and hence distribution of ice sheets, ocean/air currents etc are very similar to today.
Perhaps the author could make a global model of ice age changes (specifically deglaciation), and plug in the lower climate sensitivity posited here to see if it accommodates (proxy) observations.
This is the main problem I’ve read regarding lower climate sensitivities – whether Lindzen’s Iris effect or whatever. If the climate doesn’t respond as much as is thought, then the extreme swings in the geological record, allegedly, aren’t possible.

Lachlan O'Dea
August 9, 2009 8:57 pm

Mr Fitzpatrick’s model and the IPCC models seem to both be based on the assumption that GHG increases are the only significant cause of temperature change. Yet, they produce greatly different values for the climate sensitivity. If someone could give a quick summary of how Mr Fitzpatrick’s approach differs from the IPCC’s, that would really help me.

Larry
August 9, 2009 9:31 pm

Steve, thanks for your excellent post. I’m just a poor dumb lawyer, so it was hard for me to sort through some of your discussion, but I appreciate what you were attempting to demonstrate in terms of a “worst case” scenario if what the AGWers are saying proves to be correct (although I doubt all of it can be proven correct, and I also tend to doubt whether your method of calculating climate sensitivity is really useful in making long term climate predictions). I also agree with your policy suggestion concerning making nuclear power the source of our future electricity. When John McCain was looking for an “insurance policy” in connection with this overall question, he didn’t stress this particular solution enough. Your post makes a compelling case for long-term further rational study and gradual action as opposed to short-term hysteria and draconian solutions.

Patrick Davis
August 9, 2009 11:26 pm

It’s funny, last week we had a couple of days which were a little warmer than average and then we get this…
http://www.smh.com.au/environment/chilliest-sydney-morning-for-a-year-20090809-ee8v.html
I can confirm it was cold this morning, and in fact last night, while working on my car with a flat battery. The car broke down and was left out for about an hour and a half; there was condensation on the rear windscreen and roof, and my breath was condensing too. This was about 9-10pm on Sunday the 9th.

Allan M R MacRae
August 10, 2009 1:53 am

Building on Willis’ comments, here is an excerpt of a file from 2005.
Ascribing all of the alleged 0.6C rise in global temperature to increased atmospheric CO2 gives a climate sensitivity to CO2 doubling of ~1.2C (1.189C) from a one-line solution.
But there is, imo, no evidence to ascribe such warming to increased CO2.
The global cooling from ~1945 to 1975 and the cooling since ~2000 are not explained by this assumption.
As Willis suggests, such cooling cannot be properly explained by ascribing it to another measured temperature, whether it be PDO, AMO or other.
IPCC modelers have attempted to attribute the 1945-75 cooling to aerosols, principally SO2, but they have had to invent the data to do so – I accept Douglas Hoyt’s comment that there are no such trends in his real measured data, save volcanoes, which are clearly apparent.
Here is the 2005 Excel file, copied onto WORD – hope it is legible.
9. EXTRAPOLATING OBSERVED WARMING TRENDS
by Jarl Ahlbeck (Turku, Finland)
We should not confuse the word “possibility” with “probability” as some people do when they compare different simulated results with each other. Everything is possible, but probability has a mathematical definition and should not be used when comparing simulated results. These reported (Nature, 27 Jan 2005) values of 1.9 to 11.5 deg C warming are possibilities, computerized speculations, nothing else. Also: let’s not talk about percent possibilities. All possibilities are 100% possible.
But of course, a kind of reality check can be made very easily: say that half of the observed 20th century warming of 0.8 deg is due to greenhouse gases (CO2 increase from 280 to 370 ppm) and half is due to increased sun activity. As the relation is logarithmic, 0.4 deg = k*ln(370/280), giving k = 1.435. For 2*CO2 (560 ppm), an additional warming of 1.435*ln(560/370) = 0.59 deg C could be expected. This is a speculation as good as any produced by a computer climate entertainment program.
In fact, 0.59 deg may be an overprediction, as the observed warming has been partly caused by CFCs and CH4. As we know, the atmospheric concentration of CFC has decreased, and there is no more increase in CH4. This means that the k-value for CO2 should be lower than 1.435.
k = deltaT / ln(CO2b/CO2a)
deltaT = k * ln(CO2b/CO2a)
For various % of the 0.8 degree C temp rise in the 20th century ascribed to CO2 (MacRae calculations and comments below):
k       CO2a   CO2b   deltaT   Case
1.435   280    370    0.400    checks – assumes 50% of deltaT due to increased CO2
1.435   370    560    0.595    checks
2.870   280    370    0.800    assumes 100% of deltaT due to increased CO2
2.870   370    560    1.189
Both 50% and 100% seem too high, given the better correlations below.
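For anyone who wants to check the table, a minimal sketch reproducing it (the concentrations and the 0.8 deg C total are the figures quoted above):

```python
# Reproduce the k-table: k = deltaT / ln(CO2b/CO2a), then project to 2x pre-industrial CO2.
import math

total_warming = 0.8                   # deg C, 20th century rise quoted above
for fraction in (0.5, 1.0):           # share of the warming ascribed to CO2
    k = fraction * total_warming / math.log(370 / 280)
    future = k * math.log(560 / 370)  # additional warming from 370 to 560 ppm
    print(f"{fraction:.0%} due to CO2: k = {k:.3f}, further warming = {future:.3f} C")
# 50%:  k = 1.435, further warming = 0.595 C
# 100%: k = 2.870, further warming = 1.190 C (the table's 1.189, to rounding)
```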

crosspatch
August 10, 2009 1:56 am

I think our climate can be very sensitive but maybe not to the things people have put forth so far. Take the Bering Strait as an example. It is pretty shallow, only about 55m at its deepest point. The average depth is about 40m. Maximum ice thickness in winter has approached 15m in a cold year in the 20th century. General flow is Northward from the Pacific into the Arctic.
Now imagine we have some really cold years and the ice freezes to 20 or 25m. That would mean that the ice will freeze completely to the bottom over a larger area and will freeze the mud under it. Ice floating on water mainly melts from the bottom up. While this is going on, the amount of water transported North will decrease because the size of the channel available will decrease. This could act as a positive feedback that causes the Arctic waters to become even colder. So now we have a situation where it takes much longer to melt the ice as it must melt from top down or from the edges in rather than bottom up as is the case with floating ice. If combined with that, we see an accumulation of ice on land, sea levels could begin to drop exacerbating the problem even more.
At some point you don’t need to wait for sea levels to drop completely to expose the “land bridge” between Asia and North America. Once the Bering Strait freezes all the way to the bottom, the cutoff of warm water to the Arctic could trigger a situation where ice accumulation in the Northern hemisphere rapidly increases and you get rapid sea level decline. In fact, it might not need to freeze all the way to the bottom to have a significant impact. Simply freezing 5 more meters of depth might be enough to reduce the volume of Pacific flow to have an impact on Arctic ocean temperatures.
Once the Arctic cools significantly and we see ocean levels begin to drop or more area freezing all the way to ground, we begin to see other bodies begin to constrict, such as the English Channel, North Sea, and Irish Sea, further modifying the amount of exchange between colder and warmer regions of water.
And you can have an alignment of several things that could combine to produce a larger impact. For example, the mean flow through the Bering is Northward but sometimes weather patterns and pressure gradients can set up in such a way as to create a mean Southerly flow instead. If such a pattern was unusually persistent, it could be the first domino that sets things in motion. So a combination of declining insolation due to orbital change, an unusual weather/wind pattern, and a colder than normal winter or series of winters could result in a dramatic change in climate that might cause the Northern Hemisphere to tip rapidly into glacial conditions.
So yeah, I believe Earth’s climate can be very sensitive but I also believe that changes in the ocean temperatures, currents, and flow volumes might have a much greater impact on climate than human emissions. Reduction of exchange between Pacific and Arctic might be enough to start the ball rolling and the freezing of the Bering to the bottom would be the final step in tipping the Northern Hemisphere into glaciation.
Once that freezes all the way to bottom, it would be difficult to get open again but once it did open, it could also mean a very quick transition to interglacial conditions again.

Allan M R MacRae
August 10, 2009 2:15 am

Further to the above post:
Another issue is the divergence between satellite and Hadcrut3.
I estimate that Hadcrut3 ST has a warming bias of 0.07C per decade over UAH LT, since ~1979.
See the first graph at
http://www.iberica2000.org/Es/Articulo.asp?Id=3774
Furthermore, it is clear that CO2 lags temperature at all measured time scales, from ice core data spanning thousands of years to sub-decadal trends – the latter as stated in my 2008 paper, and previously by Kuo (1990) and Keeling (1995).
My 2008 paper is located at
http://icecap.us/images/uploads/CO2vsTMacRae.pdf
Considering all the evidence, and the work of Roy Spencer, Richard Lindzen, and others, it is difficult to attribute more than 0.3C average global warming to a hypothetical doubling of atmospheric CO2.
The actual sensitivity could be much less, approaching 0.0.
In any case, less than 0.3C seems inconsequential to me.
Those who feel the need to panic should find something more credible to panic about.
Regards, Allan

Allan M R MacRae
August 10, 2009 2:23 am

Sorry – typo in the above, should be 0.8C not 0.6C – correction reads:
“Ascribing all of the alleged 0.8C rise in global temperature to increased atmospheric CO2 gives a climate sensitivity to CO2 doubling of ~1.2C (1.189C) from a one-line solution.”

tallbloke
August 10, 2009 2:24 am

Steve Fitzpatrick (16:36:50) :
There is no data I am aware of that would lead to assignment of 75% of warming to solar contributions. Let me know if you have this data and where it comes from.

Hi Steve, it’s my data. Like all of us who are doing a bit of home brewed modelling, I’m working on scenarios in which the parameters are a bit different to the ones generally accepted by the modelers whose models don’t work.
In my case, I’ve worked out some values from the satellite altimetry showing sea level rise, the amount of solar energy retained in the oceans needed to get the thermal expansion component of that rise, and a value I’ve estimated for the level of solar activity at which the oceans neither gain nor lose heat. Coupled with a correlation I’ve found between small changes in the length of day and changes in global temperature, I’ve come up with this graph:
http://s630.photobucket.com/albums/uu21/stroller-2009/?action=view&current=temp-hist-80.gif
The mismatch around the war years is due to the LOD proxy not capturing El Nino events very well, plus a well-known bias introduced to the SST data by the engine cooling water inlet sensors used by military vessels.
I haven’t yet worked out all the energy relationships, but given the uncertainty over TSI measurements and the poor state of knowledge regarding the amount of heat coming through the relatively thin seabed from changing and overturning currents of radioactive molten stuff in the Earth’s mantle, which are responsible for about 90% of changes in LOD, it seems plausible to me at the moment.
So I’m just asking a hypothetical question for now, if you don’t mind a little speculation:
“What would the climate sensitivity to CO2 look like if the solar contribution to the warming was, say, 75%? Simply 1/4 of your figure, or is it more complicated?”

Patrick Davis
August 10, 2009 4:14 am

IPCC mass extinctions due to “climate change”….NOT.
http://news.smh.com.au/breaking-news-world/flying-frog-among-353-new-himalayan-species-wwf-20090810-efjo.html
Until our Sun decides to “destroy the Earth”, our paltry efforts, in terms of emissions and control of said emissions, are so trivial it beggars belief.

timetochooseagain
August 10, 2009 6:08 am

Adam Grey (20:45:45) : The problem with that is that it assumes that 1. the forcings that control the glaciations are all known and determined, 2. the sensitivity is independent of the climate state and timescale, and most importantly 3. the influence of Milankovitch effects is adequately described by the very small net changes in received solar radiation at the top of the atmosphere.
You see, the concept of climate sensitivity is really only appropriate for dealing with effects of forcings which are more or less spatially homogeneous (that is, the global mean forcing is essentially the same as that anywhere else). This is the case with CO2, essentially, but it is NOT true of orbital effects, which vary strongly not only with latitude but also season. One reason this matters is that, as was pointed out by Lindzen and Pan, 1994, such variations would mean that heat fluxes between the Equator and Poles would be greatly altered by Milankovitch effects; if, as the Iris hypothesis suggests, tropical climate is strongly constrained, then such changes in transport would lead to large changes in global mean temperatures, and polar temperatures would be additionally boosted in variation by ice-albedo feedback.
So the Ice Age comparison is ill-posed (not to mention that arguing against observed negative feedbacks because it becomes too hard to explain ice ages is pretty silly).
Lindzen, R.S., and W. Pan (1994) A note on orbital control of equator-pole heat fluxes. Clim. Dyn., 10, 49-57.

Fred Middleton
August 10, 2009 6:10 am

Common Sense and Politics are opposing electrical viewpoints.
The complexity of climate is confusing to those of us who are told to shut up and sit down by government. No easy pill to swallow if the Captain, leaning over the rail, throws you a brick and says “catch this, it will keep you afloat”. The guy floating next to me says “don’t believe it, it’s just a brick”. I can see it’s just a brick, but why does the Captain (if the ship is really sinking) keep throwing a brick?
Spotted owl: government “experts” said 30 years ago that it was endangered by habitat loss. But wait, another expert, a private biologist, said “no, there are sick communities; that is what needs to be studied”. Today, the Feds have hired shooters. Yep. To shoot the invading eastern Barred Owl that says to the Spotted Owl, “move out or I will kill you”. The Spotted Owl says to itself, “move out, this guy is tougher than me”, and too many Spotted Owls were then living in the same location, creating sickness loss in hatch survival.
Watts the point: this site is progressively being attacked (my suspicion: by intentional seed-planting) at common-citizen chat blogs, with increasing frequency. Name calling, sometimes with foul language. Constantly citing this expert and that expert (no names or data), but an identical government sermon, similar to the sick owls of 30 years ago, with repeated talking points condemning the Surface Stations project. The Fed grants have not begun to study with conviction the migration of the Barred Owl over the Rocky Mountains.
Bloviate is akin to a bovine chewing cud and expelling (burp) large quantities of greenhouse gas. Shame on me.

Steve Fitzpatrick
August 10, 2009 6:20 am

Adam Grey (20:45:45) :
“I’ve read that a climate sensitivity that is too low means that ice age changes are not possible. The ~3c sensitivity is corroborated in various ways, and one of them is to estimate from large-scale global temp changes – quaternary ice age cycles serving well because the land masses, and hence distribution of ice sheets, ocean/air currents etc are very similar to today.”
I think it is fair to say that nobody knows for certain all the causes of ice ages, which, by the way, the Earth is currently in if you consider the history of the last hundred million years; significant ice was not present over much of the last 100 million years. Over the past ~3 million years ice has always been present at high latitudes, with relatively rapid advances and retreats of ice sheets and significant (eg 3-6 C) shifts in temperature. For the vast majority of the last 3 million years, Earth was substantially less hospitable to land animals than it is today, because a significant fraction of the total land area was covered with ice sheets. It is clear that variations in the shape of Earth’s orbit and axial inclination are correlated with the repeated ice ages of the last few million years.
Some people have suggested that climate sensitivity to radiative forcing is not a fixed value, but rather depends on feed-backs from albedo changes caused by ice sheets and the sea level drops that go along with ice sheets. The atmospheric concentrations of CO2 and methane were also substantially lower 25,000 years ago than at any time during the Holocene. Since the net forcing from these gases goes as the log of the concentration, the additional forcing for a fixed change in concentration from a lower base (200 to 225 ppm for example) would be larger than the additional forcing for the same change from a higher base (375 to 400 ppm for example). For these example numbers, the change at the lower level has ~83% more net radiative forcing than the change at the higher level, independent of any ice sheet or sea level feed-backs. So net sensitivity could have been substantially higher 25,000 years ago (maximum ice sheet coverage) than today, allowing relatively small forcings to make relatively big net changes in climate.
Whatever causes substantial long term (glacial/interglacial) variation, it is not possible to reliably infer from these variations that the climate’s sensitivity to radiative forcing is high today.
GCM’s have quite a large range of climate sensitivities (http://en.wikipedia.org/wiki/File:Global_Warming_Predictions.png), and a similarly wide range of projected warming, with the least sensitive models predicting only about 50% more warming than my curve fit model; the GCM’s can’t all be right, and it’s quite possible that none of them are right. The temperature history of the past 100+ years is consistent with relatively low sensitivity.
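The ~83% figure above is a one-line check (a minimal sketch; only the logarithmic forcing law and the example concentrations from the comment are used):

```python
# Forcing of a +25 ppm CO2 step from a low vs. a high base; forcing ~ ln(after/before),
# so the proportionality constant cancels out of the ratio.
import math

glacial = math.log(225 / 200)    # low-base step quoted above
modern = math.log(400 / 375)     # high-base step quoted above
print(f"ratio = {glacial / modern:.2f}")   # ~1.83, i.e. ~83% more forcing at the low base
```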

M. Simon
August 10, 2009 7:15 am

Steve Fitzpatrick (17:28:10) : “If you run the regression without the Nino3.4 and AMO indexes, then the reported sensitivity to radiative forcing is just about the same as with them. The R^2 for the model (the quality of its hindcast, if you will) is much worse, since these indexes account for much of the short-term variation. These indexes DO NOT change the overall trend in any way, since the long term trend in both indexes is flat since 1871.”
Well, that’s exactly what I said a while ago. When Nino and AMO indices are taken away, the only factor left is CO2. So all the model is doing is ascribing all the observed temperature trend to CO2.

So let me see if I can parse this.
the long term trend in both indexes is flat since 1871.
When Nino and AMO indices are taken away, the only factor left is CO2.
I believe that makes the contribution of CO2 approximately zero.

August 10, 2009 7:24 am

I like the Bering Strait hypothesis.
Once that freezes all the way to bottom, it would be difficult to get open again but once it did open, it could also mean a very quick transition to interglacial conditions again.
The answer is high explosives. Lots of them. And nuclear powered icebreakers.

Tenuc
August 10, 2009 7:24 am

crosspatch (01:56:05) :
“…And you can have an alignment of several things that could combine to produce a larger impact…”
Yes, I think this is the point which is missed by many climatologists who forget that climate is a chaotic system and who try to extract bits of the ‘machine’ to treat in a linear way. Observing historic behaviour, our climate seems to have warm and cool periods, with cool being the dominant trend. We need to treat the sun and planets (including Earth) as one complete system so that bifurcation points can be better predicted.
Doing this will allow us to plan how to mitigate the effects of change before crisis point is reached, rather than reacting to red herrings like GHGs.

Steve Fitzpatrick
August 10, 2009 9:18 am

M. Simon (07:15:12) :
“So let me see if I can parse this.
the long term trend in both indexes is flat since 1871.
When Nino and AMO indices are taken away, the only factor left is CO2.
I believe that makes the contribution of CO2 approximately zero.”
Let me say it one more time:
1. The calculated sensitivity (0.27 degree per watt) is based on the ASSUMPTION that all net warming since 1871 was due to radiative forcing. I did not say that radiative forcing is the only cause for observed warming, nor even that it is the most important cause. The calculated sensitivity represents a worst case estimate for sensitivity to radiative forcing. The measured variation in TSI over the last three solar cycles (about 1 watt per sq. meter) shows up in the temperature record quite clearly over the last 100+ years, with a best estimate solar cycle effect that is almost exactly what would be expected for a radiative sensitivity of 0.27 degree/watt. This does not prove 0.27 is the correct sensitivity, but it certainly shows the measured solar cycle signal is at least consistent with a sensitivity in this range.
2. Total radiative forcing is not the same as forcing from CO2. Total forcing includes radiative forcing from N2O, methane, chloro-fluorocarbons, tropospheric ozone from VOC’s, and solar cycle forcing based on measured variation in TSI over the last three solar cycles. The reason for including all these forcings individually is that they may not (indeed, we already know they do not) follow the trajectory of forcing from CO2, and any projection for warming based only on forcing from CO2 ignores that some of these other forcings are likely to not increase as much as CO2 in the future, and may actually decrease, canceling some expected forcing from CO2.
3. You may believe that all observed warming has been caused by other factors. My post does not and was never intended to address other possible causes, nor to suggest that greenhouse gases have caused a known X% of total warming. It was intended to place a realistic ceiling on the warming that could possibly be attributed to greenhouse gases.
If you do not believe there is any possibility that radiative forcing has contributed to observed warming, that is OK with me, but this really has nothing to do with my post.
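For point 1, the size of the expected solar-cycle signal is itself a one-liner. A sketch, noting that the division by 4 (sphere vs. disk) and the 0.7 absorbed fraction are the standard TSI-to-forcing conversion, assumed here rather than stated in the comment:

```python
# Expected peak-to-trough solar cycle temperature signal at 0.27 K per W/m^2.
delta_tsi = 1.0                # W/m^2, TSI swing over a solar cycle (quoted above)
forcing = delta_tsi * 0.7 / 4  # spread over the sphere, less ~30% albedo
sensitivity = 0.27             # K per W/m^2, the post's worst-case value
print(f"expected cycle signal: ~{sensitivity * forcing:.2f} C")   # ~0.05 C
```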

Steve Fitzpatrick
August 10, 2009 10:03 am

Dr A Burns (18:55:43) :
“A back-of-the-envelope calculation shows that body heat from the current 6.7 billion people is enough to heat the atmosphere by 0.8 degrees C in 100 years.
The point is that the changes in temperature being discussed are so small that almost anything can affect them.”
No doubt true if you fed everyone food from somewhere outside the earth or made all food using only fossil fuels (with no sunlight involved). But since essentially all caloric value in the foods people eat (animal or plant) comes from chemical conversion of the energy in sunlight to the energy in carbohydrates, the heat given off by people’s bodies can have no effect on the whole of the climate, not even a tiny one. The total heat input to the earth is unchanged by our food’s caloric value.
The inside of a plane filled with people, sitting at the gate for an hour, presents a micro-climate with a very different response to body heat….

SteveSadlov
August 10, 2009 10:13 am

This appears to be an excellent step ahead in the quest to better define the positive forcing factors. Now we need to get a handle on negative forcings and feedback loops, both positive and negative. Perhaps a better model may be possible within the next 5 years.

crosspatch
August 10, 2009 10:20 am

Hmmm I wonder if that is real.
“A back-of-the-envelope calculation shows that body heat from the current 6.7 billion people is enough to heat the atmosphere by 0.8 degrees C in 100 years.”
What is the heating caused by several hundred million automobiles sitting in the sun heating up the air inside them? Open a window a little and you have enough circulation to make hundreds of millions of little solar heaters sitting in the sun.

Jim
August 10, 2009 10:31 am

*****************
Steve Fitzpatrick (06:20:56) :
Adam Grey (20:45:45) :
Some people have suggested that climate sensitivity to radiative forcing is not a fixed value, but rather depends on feed-backs from albedo changes caused by ice sheets and the sea level drops that go along with ice sheets. The atmospheric concentrations of CO2 and methane were also substantially lower 25,000 years ago than at any time during the holocene.
****************
It seems even with less ocean liquid water volume, the colder ocean water due to the ice age could soak up a lot more CO2.

Willis Eschenbach
August 10, 2009 10:32 am

Steve, thanks for your response. You say:

If you run the regression without the Nino3.4 and AMO indexes, then the reported sensitivity to radiative forcing is just about the same as with them. The R^2 for the model (the quality of its hindcast, if you will) is much worse, since these indexes account for much of the short-term variation. These indexes DO NOT change the overall trend in any way, since the long term trend in both indexes is flat since 1871. They were included to improve the accuracy of the model predictions. Yes, the short term variation in global temperature is closely correlated with these indexes, but these indexes are completely independent of radiative forcing and clearly are not responsible for any significant net global warming over the entire period…. their trends are completely flat.

The misunderstanding seems to be in this statement:

[Nino3.4 and AMO] were included to improve the accuracy of the model predictions.

I say again: including observational data cannot improve the accuracy of model predictions. It can only improve the accuracy of model hindcasts.
But improving the accuracy of your hindcasts by including actual observations is a mug’s game. Sure, you can improve hindcast accuracy by including PDO (Pacific Decadal Oscillation), or AMO (Atlantic Multi-Decadal Oscillation), or MJO (Madden-Julian Oscillation), or SOI (Southern Oscillation Index), or NAO (North Atlantic Oscillation), or any other index you choose. Any one of these, or any combination of them, will improve the accuracy of your hindcast. See the NOAA Climate Indices web page for a complete list; you can play with and compare any and all of these indices to any other or to global temperature.
But using your method as shown in the head post of this thread, all you have proven is that model estimates of past observations can be improved by using past observations in creating your model estimates …
Surely you can see how pointless that exercise is.
w.

tallbloke
August 10, 2009 10:54 am

Steve Fitzpatrick (09:18:45) :
1. The measured variation in TSI over the last three solar cycles (about 1 watt per sq. meter) shows up in the temperature record quite clearly over the last 100+ years, with a best estimate solar cycle effect that is almost exactly what would be expected for a radiative sensitivity of 0.27 degree/watt. This does not prove 0.27 is the correct sensitivity, but it certainly shows the measured solar cycle signal is at least consistent with a sensitivity in this range.

Hmmm.
1) El Nino tends to occur at solar min, and is the manifestation of solar input to the oceans at solar max. This masks some of the true solar input to the climate system by ‘flattening’ the temperature curve.
2) PMOD data uses a model to calculate TSI, based on the splicing together of records from several satellites used to measure irradiance over the last 30 years. PMOD and the IPCC prefer the use of ERBS data to calibrate the change during the ‘ACRIM gap’. The ACRIM team maintain this is not as good as the data from the other satellite, NEPTUNE, which was working when the gap occurred, and that consequently TSI shows little trend when it should show a rising trend at the end of the C20th.
3) Additionally, the ACRIM data show that cycle 21 had a difference between solar max and min nearer 2 W/m^2 than 1 W/m^2.
All this adds up to a spread of uncertainty about the effect of Solar max-solar min on temperatures.
PMOD says 0.05 to 0.1C
I say it could be more like 0.35-0.4C depending how you account for heat storage in the oceans and heat energy release in el nino.
If correct, this means the temp change over the C20th can mostly be explained by the sun, as the lower, longer cycles with longer minima in the early part of the C20th averaged out to a lot less TSI received at Earth.
By the way, I replied to your question earlier.

George E. Smith
August 10, 2009 11:39 am

“”” Projections of climate warming from global circulation models (GCM’s) are based on high sensitivity for the Earth’s climate to radiative forcing from well mixed greenhouse gases (WMGG’s). This high sensitivity depends mainly on three assumptions:
1. Slow heat accumulation in the world’s oceans delays the appearance of the full effect of greenhouse forcing by many (eg. >20) years.
2. Aerosols (mostly from combustion of carbon based fuels) increase the Earth’s total albedo, and have partially hidden the ‘true’ warming effect of WMGG increases. Presumably, aerosols will not increase in the future in proportion to increases in WMGG’s, so the net increase in radiative forcing will be larger for future emissions than for past emissions.
3. Radiative forcing from WMGG’s is amplified by strong positive feedbacks due to increases in atmospheric water vapor and high cirrus clouds; in the GCM’s, these positive feedbacks approximately double the expected sensitivity to radiative forcing.
However, there is doubt about each of the above three assumptions. “””
Well I haven’t read the paper yet; but I printed it out so I can study it and let it sink in.
But I couldn’t easily get past the introduction; pasted above.
Well I hope to shout there is doubt about those three assumptions that are part of the GCM models.
#1 The “slow heat accumulation” in the oceans. What is so darn slow about it? A solar photon is going to get absorbed in those oceans in less than one millisecond, not >20 yrs; and the “heat” in the ocean comes from those solar photons. The long wavelength re-radiation from a solar and GHG warmed atmosphere is completely absorbed in the top ten microns or less of the ocean surface, and that promotes prompt evaporation from the locally warmed surface. I wouldn’t be hunting for a lot of that long wave energy being stored in the oceans for any length of time.
#2 Cut with the cloaking hypotheses; aerosols (aka clouds) have always been an integral part of the earth’s atmosphere and always will be; so stop making up excuses, and properly account for clouds in those silly GCM computer models; they aren’t cloaking anything, they are part and parcel of the water NEGATIVE feedback effect.
#3 Balderdash! Water vapor is a GHG; the most prominent GHG, and it doesn’t need any other GHG to spur it into action; it is perfectly capable of causing all the warming the atmosphere needs, and in cloud form of causing all the cooling the earth surface needs. The notion of H2O positive feedback “enhancement” of some other GHG caused atmospheric warming is simply a crutch to ignore the fact that it is the water that is controlling the whole temperature balance; not the GHGs.
And as for buying any notion that a linear approximation to a highly non-linear process is valid; don’t count on planet earth approximating the heating effect of incoming radiant energy, and cooling from outgoing IR by any linear approximation. The earth will apply the correct physics to the situations, and compute the correct answer, not some linear guess of unreality.
The problem with the predictions of the GCMs is simply the GCMs themselves; use the earth’s own GCM and then you will get the right answers.

Rik Gheysens
August 10, 2009 12:15 pm

Kevin Kilty (06:59:28) :
“By the way, we can arrive at roughly the same value of sensitivity in three more different ways. Set W=e(sigma)T^4 (Stefan Equation), differentiate T with respect to W, and the result is the sensitivity. If one plugs in e=0.98, sigma = 5.67×10^(-8), and T as 288K, then one gets 0.25.”
Your view is correct! Maybe there is a fault in the calculation.
sensitivity = (dW/dT)^-1 = (4 ε σ T^3)^-1
If one plugs in the given values, one gets 0.188 (not 0.25). Do you agree with this?
I found a remarkable article, where much is explained: http://www.webcommentary.com/climate/monckton.php
I have not read it yet entirely because it requires some attention…

Steve Fitzpatrick
August 10, 2009 12:54 pm

Rik Gheysens (12:15:39) :
“sensitivity = (dW/dT)^–1=(4 ε σ T^3)^–1
If one plugs in the given values, one gets 0.188 (not 0.25). Do you agree with this?”
The value of T in the above equation is the effective emitting temperature of the Earth, not its surface temperature. The infrared headed out to space is emitted over a range of effective temperatures (this is clear from looking at the NASA graphic of infrared intensity at the beginning of the post), but all these emissions are at blackbody equivalent temperatures well below the surface temperature that lies under the emitting atmosphere.
The average emission temperature is the blackbody temperature which will balance the solar energy absorbed by the Earth’s surface: about 0.7 * 0.25 * 1365 W/m^2 = 239 W/m^2. The blackbody temperature in equilibrium with the absorbed solar energy is ~255K, and the corresponding blackbody sensitivity is about 0.266 degree per W/m^2. If you assumed 288K as the average emission temperature, the associated sensitivity would be 0.185 degree per W/m^2, but the heat loss to space would then be (288/255)^4 = ~1.63 times the solar energy that is actually absorbed.
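Both temperatures in this exchange, and both sensitivities, follow from two lines of algebra. A minimal sketch (1365 W/m^2 TSI and the 0.7 absorbed fraction are the figures used above):

```python
# Blackbody sensitivity 1/(4*sigma*T^3) at the effective emission temperature
# versus at the surface temperature, using the numbers quoted in this exchange.
sigma = 5.67e-8                      # W/(m^2 K^4), Stefan-Boltzmann constant
absorbed = 0.7 * 0.25 * 1365         # ~239 W/m^2 of absorbed solar flux
T_eff = (absorbed / sigma) ** 0.25   # effective emission temperature
print(f"T_eff = {T_eff:.0f} K")                                                  # ~255 K
print(f"sensitivity at T_eff:  {1 / (4 * sigma * T_eff ** 3):.3f} K per W/m^2")  # ~0.266
print(f"sensitivity at 288 K:  {1 / (4 * sigma * 288.0 ** 3):.3f} K per W/m^2")  # ~0.185
print(f"(288 / T_eff)^4 = {(288.0 / T_eff) ** 4:.2f}")                           # ~1.63
```

(The 0.188 upthread is the same 288 K calculation with an emissivity of 0.98 folded in.)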

Allen63
August 10, 2009 1:10 pm

Steve Fitzpatrick,
Thanks for your substantial reply to my posting of the obvious. From time to time I post the seemingly obvious to see if others feel the same.
I can accept your reasoning for what it is — an attempt to project warming during the next few decades assuming CO2 has been and will be the sole source of the warming. Your results indicate that some climate models overestimate the impact of CO2. I think your effort adds value to the debate.
In general, I agree with you that CO2 is probably causing some warming — however, not enough to worry about — assuming the “official” historical global temperature anomaly plots are “accurate” (they may not be — and that’s another issue for another time).

George E. Smith
August 10, 2009 1:52 pm

“”” DennisA (02:59:26) :
In 2000, Dr Robert E Stevenson (now deceased), oceanographer and one-time Secretary General of the International Association for the Physical Sciences of the Oceans (IAPSO), wrote an assessment of Levitus et al (2000) on the global heat budget.
http://www.21stcenturysciencetech.com/articles/ocean.html
Yes, the Ocean Has Warmed; No, It’s Not “Global Warming”
This is a small extract:
“How the Oceans Get Warm
Warming the ocean is not a simple matter, not like heating a small glass of water. The first thing to remember is that the ocean is not warmed by the overlying air.
Let’s begin with radiant energy from two sources: sunlight, and infrared radiation, the latter emitted from the “greenhouse” gases (water vapor, carbon dioxide, methane, and various others) in the lower atmosphere. Sunlight penetrates the water surface readily, and directly heats the ocean up to a certain depth. Around 3 percent of the radiation from the Sun reaches a depth of about 100 meters.
The top layer of the ocean to that depth warms up easily under sunlight. Below 100 meters, however, little radiant energy remains. The ocean becomes progressively darker and colder as the depth increases.
The infrared radiation penetrates but a few millimeters into the ocean. This means that the greenhouse radiation from the atmosphere affects only the top few millimeters of the ocean. Water just a few centimeters deep receives none of the direct effect of the infrared thermal energy from the atmosphere! Further, it is in those top few millimeters in which evaporation takes place. So whatever infrared energy may reach the ocean as a result of the greenhouse effect is soon dissipated. “””
Well I had to cut and paste this piece of history. I have been harping on this question for some time now; but make no claim to having said so first; although it came to me quite independent of any earlier publications; it’s so obvious that anyone could think of it.
And my only amendment to the late Dr Stevenson’s comments would be to say that the long wave IR from the atmospheric radiation is absorbed in the top ten microns of the ocean water not “a few millimeters”.
So I concur with Dr Stevenson that atmospheric warming of the oceans is a losing thesis; surface evaporation quickly removes any surface energy supplied by the atmospheric long wave downdraft.
Any simple analysis of the up/down propagation of long wave infra-red radiation in a non-uniform atmosphere that has both a density and temperature gradient and a principally CO2 (other than water) GHG component, will clearly demonstrate that upward propagation is favored over downward; simply because of the way that the CO2 absorption band changes in width with altitude (gets narrower at greater heights).
As for high cirrus clouds creating a positive feedback warming of the surface (and the higher and less dense those clouds, the warmer the surface gets): that’s just plain silly. Those high cirrus clouds are there because of the warmer surface; they are not the cause of the warmer surface. Because of the usual temperature relaxation with altitude, the warmer the surface is, the higher the water vapor has to rise (due to convection) before the dew point is reached and clouds can form; and if the water content is lower, so the relative humidity is lower, the vapor has to go higher still, so the clouds get less and less dense as a result.
And like ANY other cloud, they still reflect sunlight from their tops (albedo enhancement), and they still block additional solar radiation from the surface; it still gets colder when one of those clouds passes in front of the sun. It never gets warmer in the shadow zone as a result of those clouds, at any height.
So I didn’t know the late Dr Stevenson; but I’m happy to know there have been others who find the standard line to be ludicrous.
George

tallbloke
August 10, 2009 2:15 pm

George E. Smith (11:39:18) :
“However, there is doubt about each of the above three assumptions. ”
Well I hope to shout there is doubt about those three assumptions that are part of the GCM models.

Don’t waste your time here George. Lip service is paid to doubt, but little heed is given to any serious aberration from the orthodoxy.

Pamela Gray
August 10, 2009 2:21 pm

George, I am with you on your post. CO2 and other greenhouse gases, regardless of source, are a poor way to heat water. Can you imagine using that method on a camping trip to heat water for morning coffee? It is the most amateurish part of global warming notions, let alone that the heat from air is somehow locked away in a vault to be spewed onto land like some B monster movie.

Steve Fitzpatrick
August 10, 2009 2:22 pm

Willis Eschenbach (10:32:08) :
“I say again: including observational data cannot improve the accuracy of model predictions. It can only improve the accuracy of model hindcasts.”
I understand exactly what you are saying and why you think that these indexes do not “improve the model forecast”. As I already said, removing the AMO and Nino 3.4 indexes from the regression does not significantly change the calculated climate sensitivity, nor should it… they are detrended indexes!
Let me try to explain why I use them.
1. If you plot up Nino 3.4 against global average SST, you find virtually no correlation. Yes, Nino 3.4 is an index that comes from measured SST in a certain ocean region. No, it is not a simple proxy for average SST as you appear to be suggesting. Nino 3.4 does provide information about the current state of the Earth’s climate system relative to an “average” state. To be more specific, Nino 3.4 helps us understand if a currently measured “higher than normal” or “lower than normal” global average temperature is a result of specific short term conditions in the ENSO or if that measured average temperature is more likely associated with a “background” trend in temperature. If someone says to me “Look how hot last month was!” and I know that we are in the middle of the biggest El Nino in history, then I can pretty confidently reply: “I’ll bet it will cool off by more than 0.2C in the next year or two.”
2. AMO is a bit more complicated, since it comes from a much larger ocean area, and does correlate with global average SST (R^2 about 0.4), and some of the AMO index is simply a proxy for average SST. But AMO is detrended over 100+ years of temperature records; when the AMO index is well above or well below zero (well above or well below the long term trend line), it is telling us that the current measured global average temperature is not typical in a historical sense, and that the current measured average temperature is probably not an accurate representation of the underlying long term temperature trend. A very high AMO index fairly well screams that the temperature will fall back toward the long term trend line within several years.
AMO and Nino 3.4 are not perfect stand-ins for short term variation, but are much better than nothing. AMO and Nino3.4 are determined each month, just as is the global average temperature, and can be used to better evaluate the ‘true’ underlying warming (or cooling!). Most everyone who thinks about climate change already knows this, and people often use these indexes just like the model does. For example, the current El Nino, which started just a month or two ago, “suggests” that the global average temperature will be above the trend line for at least a few months. Most every climate blog that you can think of has probably discussed this expected “El Nino warming” at least once in the last month or so, and official climate and weather organizations have also had press releases about it.
If they were really just a proxy for the average SST (as you suggest), then why would anyone even bother to calculate them?
If you believe that these indexes are not useful in the model, that is OK with me. But you can count on people to continuing to look at these indexes to better understand what is shorter term variation and what is longer term change.
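The disagreement in this exchange is easy to test on synthetic data. A minimal sketch (entirely made-up series, not the post’s actual regression): add a detrended “index” that tracks the short-term noise and watch the hindcast R^2 rise while the fitted trend coefficient, the analogue of the reported sensitivity, stays essentially unchanged.

```python
# Synthetic demo: a detrended index improves the hindcast R^2 but does not
# move the fitted "sensitivity" (the coefficient on the trending forcing).
import numpy as np

rng = np.random.default_rng(0)
n = 140                                  # "years"
t = np.arange(n, dtype=float)
forcing = 0.01 * t                       # steadily rising radiative forcing
wiggle = rng.normal(size=n).cumsum()
wiggle -= np.polyval(np.polyfit(t, wiggle, 1), t)   # detrend the "index"
temp = 0.27 * forcing + 0.1 * wiggle + rng.normal(scale=0.05, size=n)

def fit(X):
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
    resid = temp - X @ beta
    return beta[1], 1 - (resid**2).sum() / ((temp - temp.mean())**2).sum()

ones = np.ones(n)
s1, r1 = fit(np.column_stack([ones, forcing]))           # forcing only
s2, r2 = fit(np.column_stack([ones, forcing, wiggle]))   # forcing + index
print(f"forcing only:    sensitivity = {s1:.3f}, R^2 = {r1:.2f}")
print(f"forcing + index: sensitivity = {s2:.3f}, R^2 = {r2:.2f}")
```

Because the index is detrended it is orthogonal to the trend, so it soaks up short-term variance without touching the trend coefficient, which is consistent with both halves of the argument here.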

tallbloke
August 10, 2009 2:23 pm

How Sensitive is the Earth’s Climate?
Very.
But not to CO2.
This is because CO2’s puny near-surface radiative activity is completely drowned by far larger atmospheric processes, which only have to make a very minor adjustment to deal with its effect.
However extra insolation does warm the oceans, and so the Earth. Enjoy it while it lasts; the oceans are losing heat.

Pamela Gray
August 10, 2009 2:25 pm

I would also add that sunlight is EASILY reflected away from warming the oceans. Its measure at the surface is very noisy, while highly stable just outside our atmosphere! And what varies it? Earth’s atmosphere. One of the most variable planets in the solar system in terms of its climate and weather.

Steve Fitzpatrick
August 10, 2009 3:08 pm

tallbloke (14:23:29) :
and
Pamela Gray (14:25:11) :
Please tell me which of the following you disagree with:
1. The concentrations of CO2, chloro-fluorocarbons, N2O, and methane in the atmosphere have increased by some amount in the last 100 years.
2. All the above gases have well characterized infrared absorption spectra.
3. Based on these known spectra, the escape of infrared radiation from Earth’s surface through the atmosphere to space should be slightly inhibited compared to escape under the same conditions, but with the concentrations of these gases reduced to what they were 100 years ago.
4. It is therefore reasonable that all else being equal, some warming of the Earth’s surface (however small) should result from the increased concentration of these gases in the atmosphere.
My understanding of chemistry and physics suggests that the above statements are not at all speculative. I am really trying to understand why you seem to object so strenuously to my post, which says clearly that the projections of warming based on GCM’s are much too high. What exactly do you take issue with?

Stevo
August 10, 2009 3:23 pm

John Finn,
“Have you a link to your explanation.”
I initially wasn’t going to bother replying (no offence intended, but I didn’t really have the time to go through it all again should anyone want to debate it), but I notice some people have spent time arguing against the wrong model of the greenhouse effect (the radiative one), so I’ll refer to it again for anyone interested. My comment above was meant to be more light-hearted.
The first comment was here. More further down.

tallbloke
August 10, 2009 3:30 pm

Steve Fitzpatrick (14:22:17) :
If you plot up Nino 3.4 against global average SST, you find virtually no correlation.

If you go to Bob Tisdale’s website you’ll find a post on how you can add Nino3.4 values cumulatively (along the lines of what I did with sunspot numbers, as I described in the post answering your question, which you ignored) to get an uncannily accurate history of SSTs.

Jacob Mack
August 10, 2009 3:32 pm

The physics is off on this post; there is no way the increase in global mean temp would be so low when CO2 is double that of pre-industrial levels. When I have time I will come back to this point.
Pamela what references are you using? I would love to see those if you would paste them up. You may want to see this:
http://www.fas.org/spp/military/docops/afwa/ocean-U1.htm
here:
http://www.theallineed.com/biology/07052901.htm
and here:
http://www.learner.org/courses/envsci/unit/text.php?unit=12&secNum=0

Jacob Mack
August 10, 2009 3:35 pm

Steve,
they take issue with any global warming due to greenhouse gases from the burning of fossil fuels; some here will agree that man has slightly helped along natural variation, so long as the warming stated is so negligible that it has no potential detrimental effect.

tallbloke
August 10, 2009 3:45 pm

Steve Fitzpatrick (15:08:01) :
4. It is therefore reasonable that all else being equal,

They are not. Lots of other things have changed. The atmosphere is a big place. These gases you obsess about occupy a vanishingly small part of it, and although they may have some properties which might have some effects, they are a drop in the bucket which the massive processes ongoing in the atmosphere can shrug off with a tiny average shift of the jetstreams towards the poles here, or a changing of the extratropical Hadley cell boundaries there.
This is the second set of questions you’ve asked me that I’ve replied to. Are you going to continue ignoring the first?

crosspatch
August 10, 2009 4:16 pm

” Steve Fitzpatrick (15:08:01) : ”
“3. Based on these known spectra, the escape of infrared radiation from Earth’s surface through the atmosphere to space should be slightly inhibited compared to escape under the same conditions, but with the concentrations of these gases reduced to what they were 100 years ago.”
Maybe, maybe not. What if increased CO2 concentrations in the atmosphere displace H2O and result in lower absolute humidity in response to the increased CO2 content, and the total greenhouse impact is reduced? Suddenly what was thought to be a positive feedback turns into a negative feedback as a less absorptive gas displaces one with a wider absorption range.
What if the atmosphere is already practically opaque to IR at the most important wavelengths and adding CO2 is like putting a shade across an already bricked up window?
And the bottom line, based on my understanding of physics, is that if you have this increase in IR absorption, you should see that elusive hot spot. If you are catching radiated heat from the surface and re-radiating it back down, you should be able to measure it. So far that hasn’t happened. That heat isn’t there. There is currently no indication that the atmosphere is increasing its absorption of IR the way the models predict it would.
Then we have the whole problem of convection that the models ignore. If the gases DID absorb more IR and heat up, they would rise and give the heat up as they rose. Direct radiation would convert to convection/radiation and the atmosphere would still give up its heat to space. The models (according to what I have read in the writing of others) seem to rely on a static atmosphere that collects IR, heats up, and just sits there re-radiating the heat back to the ground, or is “infinitely thick” and never gives up the heat to space the way a gas would in real life.
Earth has a natural refrigeration system using water vapor as the working fluid. The only other atmospheres we have to look at are made up of mostly CO2 with no water to speak of (Venus and Mars). Hansen’s group was originally formed to model the atmospheres of these (and the other) planets. They have spent a lot of their lives modeling CO2 greenhouse atmospheres. But things might not work the same way here. It would take, I believe, a HUGE amount of CO2 increase to even make as much difference as the normal variability of H2O. I believe CO2 adds so little greenhouse warming that it gets “lost in the noise”.
Looking at NCDC’s data for the continental US, we are looking at a warming rate of 1.2 degrees Fahrenheit per century from 1895 to present, and the most recent 12-month period is below the trend line. (Go here, enter “most recent 12-month period” in the “Period” pull-down, it’s the last item in the list, and select “Submit”.)
From 1999 to present we see a cooling trend of 8.6 degrees Fahrenheit per century. That is quite dramatic cooling over CONUS over the past 10 years.
Why? And why aren’t the global satellite averages tracking with CO2 emissions?
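For readers who want to check trend figures like these: a per-century trend is just the least-squares slope of an annual series, scaled by 100. A minimal Python sketch; the annual values below are invented for illustration and are not the NCDC data.

import numpy as np

# Hypothetical CONUS annual mean temperatures (degrees F), illustration only.
years = np.arange(1999, 2010)
temps = np.array([54.5, 54.9, 54.6, 54.8, 54.3, 54.6,
                  54.7, 54.9, 54.4, 53.9, 54.0])

slope, _ = np.polyfit(years, temps, 1)             # slope in degrees F per year
print(f"Trend: {slope * 100:+.2f} F per century")  # scale to per-century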

Jacob Mack
August 10, 2009 4:28 pm

I would suggest that AGW skeptics [snip] see Spencer Weart’s work. Just google him, and you will find an immense resource of information regarding why AGW is a fact from the standpoint of solid physics. The upper atmosphere contains little to no water vapor, and therefore any contribution made by CO2 would have a net warming effect, since it acts as a blanket. Also, the lower and middle troposphere is far from being saturated as of yet, but even if it were, the CO2 in the upper atmosphere, where it is cool and dry, would absorb wavelengths at different bands at varying altitudes and thus reflect LWR back down towards Earth.

Jacob Mack
August 10, 2009 4:30 pm

Pamela Gray, what are your references regarding: “I would also add that Sunlight is EASILY reflected away from warming the oceans. It’s measure at the surface is very noisy while highly stable just outside our atmosphere! And what varies it? Earth’s atmosphere. One of the most variable…”?

Jacob Mack
August 10, 2009 4:32 pm

I again see a temporary disappearance of my posts, but I will wait…

Pamela Gray
August 10, 2009 4:45 pm

#1. Disagree. We don’t know the long term trend or average during the past 500 years. All we have is AIRS showing us observed measures during a single warm oceanic oscillation. All the rest is proxied calculations, which have a reasonably large standard deviation compared to a gas chamber. We do not know what happens to CO2 under different oceanic oscillation conditions in terms of measuring small changes in ppm. What we do know is that torrential rains can send tons of CO2 captured by plants out to sea and down to the ocean floor. In essence, bad weather can scrub carbon out of the air. Bad weather comes when oscillations fight each other, i.e. one is cool while the other is warm.
#3. Disagree. Real world conditions include stormy weather and uncooperative jet streams, as well as aerosols that change the amount of shortwave solar radiation reaching the surface and then the longwave radiation available to be absorbed by CO2 and other GHG’s to add warmth. There is no such thing as “same conditions” in the actual world, and besides, CO2 is only available to warm the air after the Sun’s rays are changed by natural and highly variable parameters. CO2 cannot overcome that. Its ability to warm is stable; it’s all the other variables that do not let it do its job very well. With the full amount of incoming solar shortwave radiation reaching the ground and the full amount of outgoing longwave radiation reaching greenhouse gas layers, we would get about 30 degrees Celsius of warming. But we never actually get that, because of the highly variable atmosphere of our planet.
#4. Nothing is equal in the real world.
On the contrary, all the other variables create lots of speculation. See:
http://www.arm.gov/publications/proceedings/conf02/extended_abs/ellingson_rg.pdf
and Anthony, help me understand this ppt:
http://clarreo.larc.nasa.gov/workshops/2009-02-24/docs/Huang_Langley-visit_20090224.ppt

Pamela Gray
August 10, 2009 4:57 pm

BTW, this is an interesting site with data available. Well worth a visit. There may be evidence of Kool-Aid drinking, but they are collecting longwave radiation data, something that is highly variable depending on how much shortwave radiation actually hits the surface.
http://www.arm.gov/acrf/

Steve Fitzpatrick
August 10, 2009 4:59 pm

tallbloke (15:45:04) :
I obsess about nothing, and it would help maintain civility if you did not make this kind of non-constructive comment.
“What would the climate sensitivity to co2 look like if the solar contribution to the warming was, say, 75% ? Simply 1/4 of your figure, or is it more complicated?”
I replied that I knew of no data set that could be used to make solar contribution 75%. You replied with an address that held a series of graphs, without any description of those graphs that I could find. I really have no idea how those graphs relate to solar forcing. That is why I did not reply further. I still have no idea what the graphs mean or how they might relate to solar forcing.
With regard to:
“If you go to Bob Tisdale’s website you’ll find a post on how you can add nino3.4 values cumulatively (along the lines of what I did with sunspot numbers as I described in the post answering your question which you ignored), to get an uncannily accurate history of SST’s.”
I have not seen this post, though I have seen Bob’s site a few times. I am not sure I understand what the connection might be between a sum of historical values of Nino 3.4 and historical sea surface temperatures. If you can offer a brief summary it might help. It would also help if you could explain how/why the sum of historical sunspots relates to total solar forcing; I have never heard of this before. How far back do you sum, and how do you choose the starting point for the sum? What information does this sum of sunspots provide about solar forcing?

crosspatch
August 10, 2009 5:05 pm

“The upper atmosphere contains little to no water vapor and therefore any contribution made by CO2 would have a net warming effect,”
The same can be said for polar winter … and no such warming has happened.

Jacob Mack
August 10, 2009 5:05 pm

Crosspatch: “Maybe, maybe not. What if increased CO2 concentrations in the atmosphere displaces H2O and results in lower absolute humidity in response to the increased CO2 content and total greenhouse impact is reduced? Suddenly what was thought to be a positive feedback turns into a negative feedback as a less absorptive gas displaces one with a wider absorption range.”
For one, CO2 does not displace H2O, and the lower atmosphere is not “saturated.” Also, the absolute humidity/specific humidity do fluctuate, but generally the relative humidity remains stable. In the upper atmosphere it is cooler and dry, and the presence of water vapor begins to fade, but lo and behold, CO2 is still on the incline where it was formerly virtually non-existent. Also keep in mind that water vapor tends towards equilibrium in relative humidity, but water vapor levels are also currently rising.

Pamela Gray
August 10, 2009 5:08 pm

Jacob, shortwave and longwave radiation of Sunlight 101. From the description found here, one can easily reason that these variables create a very noisy data stream of how much gets in, and how much is reflected.
http://www.physicalgeography.net/fundamentals/7f.html

Jacob Mack
August 10, 2009 5:14 pm

“The same can be said for polar winter … and no such warming has happened.”
You are neglecting altitude dependent changes.

Tom in Florida
August 10, 2009 5:29 pm

Jacob Mack (16:28:50) : ” the CO2 in the upper atmosphere, where it is cool and dry, would absorb wavelengths at different bands at varying altitudes and thus reflect LWR back down towards Earth.”
Why doesn’t LWR get reemitted equally in all directions? You certainly aren’t implying that gravity comes into play are you?

Jacob Mack
August 10, 2009 5:40 pm

Tom: “Why doesn’t LWR get reemitted equally in all directions? You certainly aren’t implying that gravity comes into play are you?”
No, I am not implying gravity.
There are pressure changes at different altitudes, as well as temperature changes, which will influence how a given gas will absorb and emit LWR. Check out Peter Atkins’ 2009 book Elements of Physical Chemistry on Google Books, co-written by Julio de Paula, but first check out the work of Spencer Weart, “The Discovery of Global Warming,” also on Google Books.

Jacob Mack
August 10, 2009 5:44 pm

Pamela Gray (17:08:35) :
“Jacob, shortwave and longwave radiation of Sunlight 101. From the description found here, one can easily reason that these variables create a very noisy data stream of how much gets in, and how much is reflected.”
This site is an oversimplification of the empirical data and the physics, not just the GCM approximations. You are also neglecting GHG mixing and the reduction in ice cover and albedo.

Steve Fitzpatrick
August 10, 2009 5:45 pm

Jacob Mack (16:28:50) :
“I would suggest that AGW skeptics [snip] see Spencer Weart’s work.”
Not a very constructive start.
Should I reply by sending you to read dissertations by Richard Lindzen? Better that you stick to issues related to the thread. Did you read my post? Did you have doubts, questions, or suggestions? Do you think that there is factually incorrect information presented? If so, then I would be happy to address those subjects.
Sending me to read what “this authority” says is at least a bit odd; I have some 35 years experience in chemistry, physics, and nanotechnology, and do not need you to suggest that I become better informed on the basic technical issues of AGW.
Pleeeease!

DaveE
August 10, 2009 5:50 pm

If I remember correctly, the absorption bands of CO2 are the ~4, 7.5 & 15 µm bands.
4 & 7.5 are pretty well covered by H2O, so that leaves 15!
And the air is only dry enough for the 15 µm band to matter at the poles, North & South! South cooling, North warming, so where’s the CO2?
DaveE.

crosspatch
August 10, 2009 5:50 pm

” Jacob Mack (17:14:41) : ”
I don’t believe I am.
And by that I mean that the pole is at the same altitude now as it was 50 years ago. Any increase in CO2 greenhouse should have a much greater impact in the polar region because the air is so dry. CO2 plays a much greater role in any atmospheric greenhouse at the pole than anywhere else on the planet. Most of Antarctica has been cooling with the exception of the Western peninsula and that is due to wind currents.
The impact of CO2 warming has not been documented anywhere. NONE of the predicted indications have been observed, not a single one.

Jacob Mack
August 10, 2009 5:57 pm

Steve,
the post was not directed at you for starters. Secondly, I did make a post regarding your work briefly; you grossly underestimate the climate sensitivity. I am still going through your post with a fine-tooth comb, but I will comment directly regarding your calculations, assumptions, and methods. I am referring other posters to this work, as it is important work that both you and they neglect to mention. Now, several times I have highlighted the physics and findings that you have neglected to cover, mainly the CO2 in the upper atmosphere, which will most certainly lead to global warming, and the water vapor feedback, which you clearly underestimate. I will be more specific soon and show you where the chemistry and physics do not add up, but that is for when I have more time to give your post my undivided attention. Also, you still hold that GHGs lead to some, albeit mild, global warming, so you are hardly a [snip]; and as far as skepticism goes, your work is not to date repeated, validated, and subject to peer review, so we shall wait and see how valid many of your claims are. Your long career and experience with chemistry and physics are impressive, but you make several minor errors that make the warming look far more negligible in your future prediction than it already is, which is of course impossible. You may need a review in atmospheric physics and physical chemistry, my friend.
Reply: Future use of the term “denier” as a pejorative will lead to deletion of posts without notice or explanation. ~ charles the moderator.

Jacob Mack
August 10, 2009 5:59 pm

I think Steve, that we should discuss this physics and chemistry of AGW in depth here real soon.

Jacob Mack
August 10, 2009 6:15 pm

Crosspatch you are mistaken:
Quote: “Scientists on Wednesday unveiled evidence to suggest global warming is affecting all of Antarctica, home to the world’s mightiest store of ice.
The average temperature across the continent has been rising for the last half century and the finger of blame points at the greenhouse effect, they said.
The research, published in the British journal Nature, takes a fresh look at one of the great unknowns — and dreads — in climate science.” End quote.
dsc.discovery.com/news/2009/01/…/antarctica-warming.html
Also see: http://www.cnn.com/2009/WORLD/…warmingantarctic/index.html
And: BE Barrett, KW Nicholls, T Murray, AM Smith … – Geophysical Research Letters, 2009 – agu.org
Also, there are pressure and temperature differences between the Antarctic at elevation and the upper atmosphere; there is also more precipitation in the Antarctic than in the upper atmosphere.

George E. Smith
August 10, 2009 6:19 pm

“”” Jacob Mack (16:28:50) :
I would suggest that AGW skeptics [snip] see Spencer Weart’s work. Just google him, and you will find an immense resource of information regarding why AGW is a fact from the standpoint of solid physics. The upper atmosphere contains little to no water vapor, and therefore any contribution made by CO2 would have a net warming effect, since it acts as a blanket. Also, the lower and middle troposphere is far from being saturated as of yet, but even if it were, the CO2 in the upper atmosphere, where it is cool and dry, would absorb wavelengths at different bands at varying altitudes and thus reflect LWR back down towards Earth. “””
Now why would you say “the upper atmosphere contains little to no water vapor”? Why would that be, other than that the upper atmosphere contains little to no gases of any kind? The upper atmosphere, no matter how rarefied, is perfectly capable of sustaining a water vapor content in accordance with the saturated vapor pressure of water as a function of temperature; and even at -90 C, the earth’s atmosphere still contains water vapor; so I don’t see why it should disappear with altitude any more than CO2 would.
And as that upper atmosphere becomes ever more rarefied, so does the density of CO2 molecules up there, so the GHG warming effect also diminishes. Oh, maybe the local atmospheric temperature still changes somewhat, since the reduced amount of captured IR long wave radiation is shared with a reduced mass of atmospheric gases; or, when high enough, the mean free path may be long enough for the CO2 to simply decay to the ground state and re-emit the absorbed photon. And that re-emission spectrum would be quite narrow, because of the lowered temperature and density, so the Doppler and collision broadening would be reduced.
That narrower CO2 absorption/emission spectrum would have quite a chore making it through the denser, warmer lower atmosphere, with its broader CO2 absorption band. Remember that each re-absorption and eventual emission from either the excited GHG or the atmosphere results in an essentially isotropic re-radiation pattern; so roughly half of the total flux can be expected to be up and half down at each such level. The upward path would be expected to be favored over the downward, because of the temperature and density relaxation with altitude.
And for one more time, can I re-iterate that the GHG components of the atmosphere do NOT reflect long wave radiation from the surface. The process is an inelastic scattering process, and not an optical reflection. Reflection does not involve a frequency shift.
As for learning from Spencer Weart: see the letters to the editor in “Physics Today” for January 2005, where I casually mentioned that when floating sea ice melts, the laws of physics require that the sea level will fall; not rise, and not stay fixed either.
Weart pooh-poohed that idea and substituted his own problem in place of mine, simply asserting that when the oceans warm the water expands and the sea level rises. No doubt true, but totally unrelated to my comment about “when floating sea ice melts.”
So I’ll find a more on-the-ball teacher, thank you.

Jacob Mack
August 10, 2009 6:20 pm

Dave,
there is also 17 µm as well from CO2, and varying behaviors of CO2 under different altitude conditions and mixing ratios in relation to varying amounts of water vapor, N2O, and CH4. And of course ½ρv² is the dynamic pressure; at higher altitudes air density decreases (D = M/V), and thus pressure does as well, so gases will tend to spread out more under such circumstances but become less thermally excited at higher altitudes. And yet in the absence of other significant GHGs, some of the CO2 emission does go to space, while the rest is held in and re-radiated to the Earth.

Jacob Mack
August 10, 2009 6:23 pm

Quote: “Reply: Future use of the term “denier” as a pejorative will lead to deletion of posts without notice or explanation.” ~ charles the moderator.
I do not engage in name calling, hence why I put “ or ‘ around such words, Charles.
I in no way meant it with any intent of contempt; it was actually to indicate that I did not mean my statements in a pejorative manner.
[Reply: Best to avoid using the “D” word entirely. ~dbstealey, moderator]

Bill Illis
August 10, 2009 6:26 pm

I’ve redone some charts I posted from above.
Here is how it plays out when you separate the solar forcing from the greenhouse effect. This IS the greenhouse effect.
Each extra Watt of GHG forcing is really only adding 0.18C right now. The 2.4 extra Watts assumed to have occurred since 1850 or so would translate into 0.5C of warming (conveniently close to what has actually occurred).
To get to +3.0C by 2100, GHGs will have to add an extra 13 Watts [which is an impossible amount – you can do your own math for CO2 alone with this formula – Watts = 5.35 ln(CO2future/387)]
http://img524.imageshack.us/img524/6840/sbearthsurfacetemp.png
Each extra watt is now only adding 0.18C.
http://img43.imageshack.us/img43/2608/sbtempcperwatt.png
The climatologists rely on these equations for everything. They underpin most of the physics and the models themselves. As far as I can tell, they have not calculated how each extra Watt will affect temperatures; they are just using the averages over the whole spectrum (surprising, since they should know these are logarithmic/exponential equations).
This is a falsification as far as I am concerned.
This is also more-or-less consistent with Trenberth’s new Earth Radiation Budget paper. He’s bumped the surface Watts from 390 (my charts) to 396, assuming there is a lag in emissions from the surface ocean and deserts, but this change also seems like an impossible amount.
http://www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/BAMSmarTrenberth.pdf
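To make Bill Illis’s arithmetic easy to check, here is a minimal Python sketch of the simplified forcing expression he quotes, Watts = 5.35 ln(CO2future/387). The 0.18C per watt figure is his; the last line simply inverts the formula to show the CO2 level that 13 extra watts would require.

import math

def co2_forcing(c_future_ppm, c_now_ppm=387.0):
    # Simplified CO2-only forcing (W/m^2), the expression quoted above.
    return 5.35 * math.log(c_future_ppm / c_now_ppm)

print(co2_forcing(2 * 387))            # a doubling from today: ~3.7 W/m^2
print(0.18 * co2_forcing(2 * 387))     # at his 0.18C per watt: ~0.67C
print(387 * math.exp(13 / 5.35))       # ppm needed for 13 extra W: ~4,400 ppm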

Smokey
August 10, 2009 6:27 pm

Steve Fitzpatrick (17:45:30),
He’s trolling. And he’s bringing up Spencer Weart, because Weart is realclimate’s tame pet. If Weart had the …um …gumption, he would write an article for the web’s “Best Science” site like you did, and let people try to knock it down if they can. That’s how real scientists do it. Even Dr. Steig wrote an article that was posted here.
But Mr. Weart likes being scratched behind his ears, so he hides out at RC and similar agenda-driven sites — where he never has to face any uncomfortable questions. Because he hides out from answering inconvenient questions, he carries little weight here. Zero, actually.
Kudos to you, BTW, for an interesting article — and for being willing to respond to numerous questions.

Steve Fitzpatrick
August 10, 2009 6:31 pm

Jacob Mack (17:59:21) :
“I think Steve, that we should discuss this physics and chemistry of AGW in depth here real soon.”
I would be happy to do so, if you can keep the conversation civil and constructive.
Most everyone honestly believes what they say, even if they may sometimes be mistaken. A constructive dialog requires that anyone involved enter the discussion admitting that they may sometimes be wrong. If you can enter an exchange honestly saying that you may sometimes be wrong, then it will be worthwhile. If you enter certain that you (or worse, some distant authority you will point to) are 100% correct, then any discourse would be a terrible waste of time.

Pamela Gray
August 10, 2009 6:35 pm

Wow. I didn’t know that quotes were so powerful.

crosspatch
August 10, 2009 6:45 pm

““Scientists on Wednesday unveiled evidence to suggest global warming is affecting all of Antarctica, home to the world’s mightiest store of ice.”
Oh, I take it that you are not familiar with the errors that were discovered in that “study”. That is Steig’s paper, I believe. It has been shown to be in error. Steig has produced a corrigendum which you can read about here. Basically, the error bars are so wide now that the result of his study is “temperatures have risen 0.12 degrees +/- 0.12 degrees.”
A station had great weight in the study, but the data attributed to that station didn’t come from there. It was actually a splicing of data from several other stations. That study is, at this point, pretty much debunked.
Please, feel free to try again.

Jacob Mack
August 10, 2009 6:48 pm

Steve, no, not 100% certainty, but I find your doubts about the “assumptions” to be questionable without further reference to data. I am confused as to how you make a confident statement like:
(1.) “Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001.”
Next: (2.) “Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings. There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future.”
(1.) What about the high heat capacity and specific heat of water, the changes in salinity recently noted, the higher ocean CO2 content, and the lagging conduction of heat to the atmosphere from the ocean? (only about 10% of total heat, but evaporation and water vapor feedbacks come into play as well)
I will stop there for now, but it seems to me that the physics and chemistry (and the intricate weather patterns and long term climate trends, say 50 years to present?) indicate otherwise for aerosols. There is significant research on aerosol scattering effects, so I am confused by your statement that there is no evidence regarding their current and future effects.

tallbloke
August 10, 2009 6:50 pm

Steve Fitzpatrick (16:59:37) :
tallbloke (15:45:04) :
I obsess about nothing, and it would help maintain civility by not making this kind of non-constructiive comment.

Apologies, your asking a question and then ignoring the reply annoyed me.
“What would the climate sensitivity to co2 look like if the solar contribution to the warming was, say, 75% ? Simply 1/4 of your figure, or is it more complicated?”
I replied that I knew of no data set that could be used to make solar contribution 75%.

Thus avoiding answering the question. Which you still are… I pointed out the uncertainty in TSI values, and asked you to treat my question as speculative. But instead of giving me a single, clear answer, you have asked another four questions.
When you’ve answered my single reasonable question, I’ll answer your additional ones.

crosspatch
August 10, 2009 6:53 pm

In fact, one might want to peruse the process of reconstructing Steig’s data and methods by perusing these threads (which continue beyond the first page).

Leif Svalgaard
August 10, 2009 6:58 pm

Steve Fitzpatrick (15:02:40) :
I was not aware that Lean had changed her mind about the 2 watts change since the little ice age. It certainly was not my intent to misrepresent her current views. The calculations I did were based on recently measured changes in intensity over the solar cycle (peak to valley) of ~1 watt per square meter at the top of the atmosphere, and the model assumed this variation was the same since 1871.
The peak to valley change depends on the size of the solar cycle and varies by a factor of 3 or more. A good median value is 0.1% of TSI or ~1.4 W/m2, larger for some cycles, smaller for others.
This works out to ~0.7 * 0.25 = 0.175 watt per square meter, and an expected solar signal from the solar cycle of 0.047C (peak to valley) for a sensitivity of 0.27 degree per watt per square meter.
A simpler calculation is that the solar signal would then be 1/4 of 0.1% of the effective temperature or 0.025% of 288K = 0.07K [or C].
What I found interesting was that the best model fit to the temperature data corresponded to ~0.168 watt per square meter, remarkably (at least to me) close to the 0.175 watt per square meter that would be expected based on the measured variation over the last few cycles.
In view of my simple calculation above where the sensitivity doesn’t enter at all I don’t see the relevance of the correspondence.
So for what it is worth: the model is consistent with no substantial change in cyclical variation over the past 130 years.
And I don’t understand this statement. Stefan-Boltzman’s law hasn’t changed. So what is this ‘cyclical variation’?
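Leif’s “simpler calculation” is just the Stefan-Boltzmann scaling T ~ S^(1/4), which gives dT/T = (1/4)(dS/S) with no sensitivity factor at all. A minimal Python sketch of both routes; the 0.27C per W/m^2 value is Steve’s model sensitivity from upthread, and the TSI value is a round number.

T_mean = 288.0            # K, global mean surface temperature
frac = 0.001              # ~0.1% TSI change over a solar cycle
dTSI = frac * 1365.0      # ~1.4 W/m^2 at the top of the atmosphere

# Leif's route: fourth-root scaling, no climate sensitivity involved.
print(0.25 * frac * T_mean)            # ~0.07 K

# Steve's route: spherical averaging, albedo, then a sensitivity factor.
forcing = dTSI * (1 - 0.3) * 0.25      # ~0.24 W/m^2 of surface forcing
print(forcing * 0.27)                  # ~0.065 K at 0.27C per W/m^2

That the two routes land in the same neighborhood is the correspondence the two comments above are arguing over.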

Leif Svalgaard
August 10, 2009 7:04 pm

tallbloke (18:50:01) :
I pointed out the uncertainty in TSI values
The uncertainty is smaller than the solar min to max variation so is hardly relevant. Even a 1 W/m2 uncertainty translates into a 0.05K temperature signal which is negligible in the current context.

Jacob Mack
August 10, 2009 7:25 pm

Fair enough Crosspatch, but here are other recent papers, some preliminary or up for peer review, while others are already published:
http://www.atmos-chem-phys-discuss.net/9/…/2009/acpd-9-1703-2009.pdf
http://www.sciencemag.org/cgi/content/abstract/311/5769/1914
http://www.newscientist.com/article/dn16740-global-warming-reaches-the-antarctic-abyss.html (the attribution is not made prematurely to CO2; in fact, the researchers state that it is too soon to know)
http://www.climatehotmap.org/antarctica.html
http://www.sciencedaily.com/releases/2006/03/060330181319.htm
I also want to add that precipitation will slow down Antarctic warming, and some evidence suggests that El Nino will also temporarily suppress the warming magnitude, and yet the Antarctic is still warming, as is Greenland, while the Arctic is warming at an even faster pace.
More on all this later… I also have preparations to make for Steve, as soon as he answers my initial questions.

Steve Fitzpatrick
August 10, 2009 7:40 pm

Jacob Mack (18:48:21) :
“Steve, no not 100% certainty, but I find your doubts of the “assumptions,” to be questionable, without further reference to data.”
I should hope a lot less than 100% certainty. The IPCC models differ by a factor of about 3 in their projections of warming through 2100. At a minimum that ought to lower the certainty level a fair amount below 100%; they can’t all be correct. If you have one particular model that you think is almost certainly correct, then OK, but please tell me which model that is.
I am confused as to how you make a confident statement like:
(1.) “Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001.”
There have been four published studies (that I am aware of) where total heat content in the top 700 meters of ocean was calculated based on Argo float data (following the correction of errors in a small subset of floats, of course), as well as independent confirmation by ocean mass and altimeter readings (satellite). One showed a modest fall in heat from 2003 to 2008, two showed heat roughly flat to very slightly falling, and one a slight increase in ocean heat. The best available data is that there has been no heat accumulation (or a very slight fall) in the top 700 meters of ocean since 2003.
“Next: (2.) Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings. There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future.”
The IPCC’s uncertainty limits for net aerosol forcings range from tiny to huge. All global estimates based on measurements I have seen indicate a gradual fall (a total reduction amounting to about 50% of the 1993 value) since the effects of Pinatubo ended in about 1993. Measured aerosol effects declined through at least 2005 (the well known global brightening), which should have increased solar intensity and heat accumulation in the ocean… it did not happen.
“What about the high heat capacity and specific heat of water, the changes in salinity recently noted, the higher ocean C02 content, and the lagging conduction of heat to the atmosphere from the ocean? (only about 10% of total heat, but evaporation and water vapor feedbacks come into play as well)”
I honestly have no idea what you are trying to say in the above paragraph. Perhaps you could explain in a different way.
Jacob, what I see is that for extreme greenhouse forced warming to be correct, you have to believe 1) that ocean heat accumulation is extremely slow (lagging far behind the surface, if you will), 2) that human generated aerosols have canceled a large fraction of radiative warming, 3) that this “aerosol cancellation” is going to decline in the near future, and 4) that the total of water vapor and cloud feedbacks is strongly positive.
In addition, all these things must be correct for the whole structure to “hold together”; if ocean lags do not extend to hundreds of years, then the forcing can’t be what is claimed; if the forcing is not what is claimed, then the feedbacks can’t be right, etc., etc. Simulating the atmosphere and ocean is a remarkably difficult problem, and this explains the wide range of model predictions (produced by groups of dedicated and honest scientists and programmers, no doubt)… but none of it inspires confidence in their predictions. Finally, please note that most of the models do not even correctly predict the average surface temperature of the Earth… today.

Jacob Mack
August 10, 2009 7:43 pm

Please see below regarding the greater volume of fresh meltwater relative to the saltwater it displaces, due to saltwater’s higher density (D = M/V). Again, the differences in water’s physical characteristics due to salinity cannot be ignored.
“In a paper titled “The Melting of Floating Ice will Raise the Ocean Level” submitted to Geophysical Journal International, Noerdlinger demonstrates that melt water from sea ice and floating ice shelves could add 2.6% more water to the ocean than the water displaced by the ice, or the equivalent of approximately 4 centimeters (1.57 inches) of sea-level rise.
The common misconception that floating ice won’t increase sea level when it melts occurs because the difference in density between fresh water and salt water is not taken into consideration. Archimedes’ Principle states that an object immersed in a fluid is buoyed up by a force equal to the weight of the fluid it displaces. However, Noerdlinger notes that because freshwater is not as dense as saltwater, freshwater actually has greater volume than an equivalent weight of saltwater. Thus, when freshwater ice melts in the ocean, it contributes a greater volume of melt water than it originally displaced.”
Also, this is discussed in General Chemistry (by most college professors), as is thermal expansion. So there is a double net positive effect here; many HS textbooks and 8th-9th grade science websites get this wrong and assert (incorrectly) that sea ice melt would contribute little to nothing to sea level rise. Once you take a college level physics/chemistry course, it becomes clear that salinity levels affect the heat capacity of water and the density/volume of the melting ice relative to the water it displaces, and thus sea level rise. If we were discussing the melting of fresh ice into fresh water, then the net rise from displacement would be almost zero. I suggest you read http://www.fas.org/spp/military/docops/afwa/ocean-U1.htm and
http://www.fas.org/spp/military/docops/afwa/ocean-U2.htm, U3, etc…
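The quoted Noerdlinger result can be verified with simple density bookkeeping. A minimal Python sketch, using round textbook densities rather than the paper’s own values:

rho_fresh = 1000.0    # kg/m^3, fresh meltwater
rho_sea = 1026.0      # kg/m^3, typical seawater

mass = 1.0                      # kg of floating fresh-water ice
displaced = mass / rho_sea      # seawater volume displaced while afloat
meltwater = mass / rho_fresh    # fresh-water volume after melting

print((meltwater - displaced) / displaced)   # ~0.026, i.e. the ~2.6% figure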

Pamela Gray
August 10, 2009 7:44 pm

1. Jacob, you still haven’t responded to my post other than to say that I oversimplified. Do you have shortwave radiation data over time measured at Earth’s surface before it gets converted to LWR, compared to what hits the outer part of the atmosphere? It appears that your premise is that there is no difference or variation between the two, thus allowing continued increase in CO2 to heat the Earth from here to Armageddon. There is quite a difference and the data is noisy. But if you think not, show me.
2. SST’s and oceanic oscillations oscillate around a rather cherry picked zero. Yes. But over what time scale? And is it an even oscillating swing or one that is predominantly lopsided one century, and lopsided in some other way the next? During the industrial age, are you saying that the swing is even, can be canceled out, and therefore leave us with AGW? Are you kidding? Remember, most average “normal” lines on temperature graphs are no more than 30 years long. Yet we know that some oscillations are at least twice that length and have a rather chaotic swing.

crosspatch
August 10, 2009 7:59 pm

“and yet the Anartic is still warming”
No. It isn’t.
The articles you link to are full of the usual buzz words such as:
“He says the changes could be responsible for up to 20% of the observed global sea-level rise.”
Which is pure blither. The oceans have been rising at a fairly steady rate for about the last thousand years, long before we started using fossil fuels. Oceans reached their maximum height about 7,000 years ago, when they were about 2 meters higher than now. In fact, the trend in sea level since 2006 is flat; a rise which had been going along at a fairly steady rate has suddenly stopped.
The oceans aren’t warming either. Data from the Argo project show flat to slightly cooling ocean temperatures since the project started.
The articles you provide are basically not true but have become “conventional wisdom”. The seas are not rising, Antarctica is accumulating ice, the oceans are not warming, the atmosphere is not warming … and you will see 2009 end up with more ice than 2008 if the current trend holds. It looks like the melt is coming to an end early this year … before the Northwest passage has had a chance to open.
If you dig into those stories that you read, you will learn that they are not backed up by the data. They pretty much all simply repeat each other’s data, and since that is what people are taught is “fact” these days, you are primed to receive it as such. Basically you are being hoodwinked in order to get you to agree to give up a good portion of your cash in order to “save the planet”.

Jacob Mack
August 10, 2009 8:05 pm

I will continue my responses tomorrow to Steve and Pamela.

Pamela Gray
August 10, 2009 8:36 pm

Jacob, are you referring to floating ice calved from glaciers or floating ice from freezing ocean saltwater seas? Floating ice calved from glaciers is indeed freshwater sourced. Floating ice from freezing seas is salt water to begin with, and when melted returns to salt water. Unless this kind of ice is capable of creation, it cannot have a greater volume as a liquid than it began with before it froze and then reintegrated back into the sea from whence it came. If there is a common misconception, it is this: that the Arctic ice cap is frozen water from river sources, accumulated snow, and ice from rain.

Jacob Mack
August 10, 2009 9:20 pm

Pamela,
I noticed your latest response in my email, so I decided to reply now. The floating ice calved from glaciers has some salinity, albeit lower; however, the change in density as the salt water dissolves the mostly fresh water ice sourced from the glaciers will increase the volume of the water, in addition to thermal expansion. So let us say thermal expansion is currently flat; well, the salinity of the water would still change the density, which would change the volume of the water. Since I am posting, I will also ask you to respond to a past post where you asked me about heat transfer of oceans and terrestrial land masses in a recent thread, since you never got back to me there either.
Now, regarding incoming SW radiation and LW radiation, there is a lot of literature showing good approximations and results of LW radiation trapping, and I will be more than happy to post links up tomorrow.
Steve, in the interest of being more efficient in my posts: the IPCC reports are an analysis of past and recent literature, data, GCM’s and so forth, and of course low, median and high end estimates are made regarding such things as aerosol effects, incoming/outgoing radiation and so forth; however, there is recent and well validated research highlighting aerosol effects and water vapor rise, which I will post tomorrow. You may find plenty of your own at NOAA, NASA GISS, NASA.GOV, AOS Princeton and so forth, but I will paste the most relevant links tomorrow.
I will say now, however, that ocean cooling does not invalidate water vapor feedbacks whatsoever, as this is a natural process through conduction, for example. As sea surfaces cool and heat is transferred vertically, more heat is added to the atmosphere, where increased levels of H2O/CO2 can trap it. Keep in mind that evaporation has about a 50% cooling effect on the seas whereas conduction has only about a 10% total effect, so the escaping heat is more easily trapped by higher GHG atmospheric levels, and coupled with the high residence time of CO2 this provides a forcing upon the water vapor, which is itself a positive feedback, and in turn more heat returns to the planet surface. Since land is more greatly affected by heat transfer due to temperature differences, the land fluctuates greatly, while even as the sea absorbs more heat, its temperature will not change much, due to the high heat capacity and specific heat of water. Stratospheric warming has been shown in several studies, along with increases in water vapor in the middle to upper troposphere, even as the Argo floats show some ephemeral cooling.
Recently, the Argo floats had to be recalibrated, and when they were, it was revealed that the level of cooling was not as significant as was previously being recorded. Also, ocean cooling has been predicted by the models for the past 25 years and has been well expected and predicted in several papers since the 1980’s; it was first hypothesized in the late 1970’s. Heat over time can also travel to deeper depths of the sea and mix, as can CO2, which in combination with various natural processes can halt warming of water and/or create a cooling period, though there is no cooling trend of the bodies of water on this planet either.
The doubling of CO2 will not necessarily lead to equilibrium immediately, and in fact several papers cite that it may be some time after the doubling of CO2 from pre-industrial levels that equilibrium may be reached and the full climate sensitivity may be realized through a global mean temperature increase. The range is between 2 degrees C and 4.6 (some ranges say 1.5 as a starting point, and 5-6 degrees is not ruled out), but the median clustering is about 3 degrees C, which has been demonstrated to be an accurate approximation in far more than 4 peer reviewed papers. So, if we forget about the IPCC for a minute and Hansen’s estimates, we see strong agreement with high confidence that the temperature increase will be greater than 2 degrees C, and the clustering is at about 3 degrees C. Now, averaging in the IPCC report and looking at Hansen’s median (or even low) projections, as well as GCM’s with more conservative predictions based upon the physics and the central limit theorem, we are looking at about a 3.5 degree C increase in global mean temperature, or so. It is impossible to get from the IPCC report, the peer reviewed literature (99.5% of it), and the most recent updated data to a prediction of a global mean temperature increase of less than 2 degrees C. The statistics show 3 degree C clustering from many sources, repeated many times. You are ignoring the impact of short term sea lag and transfer of heat due to temperature differences; admittedly it is very complex in such a chaotic system (as Pamela is quick and correct to point out), but it seems to me you have not considered the system enough in your analysis. I am not predicting a 5 degree C increase or immediate catastrophe at the doubling of CO2 over pre-industrial levels, but once equilibration has occurred we are looking at several catastrophic events, and prior to this many citizens of third and second world countries will die and become deathly ill as a result of global warming due to anthropogenic means and the natural variability response, as well as dimming/cooling as seen in the brown cloud.
The 1 degree C warming you predict is over halfway here now, as evidenced by global mean temperature analysis which has been repeated at least a hundred times. Hence a total 1 degree increase without considering equilibrium is impossible. AGW is >99% certain (really 100%, but all measurements contain uncertainty), a future of increased droughts and floods due to AGW is >90% certain (around 95%), and predictions for greatly detrimental weather patterns and climate disruptions are >66% (AR4), but in light of recent literature and empirical observations are >70%. I, for one, do not want to wager on those odds. We are approaching 1 degree of global mean temperature warming now (or in the next 10 years), so I am confused as to how we can have only a 1 degree increase at equilibrium. The reaction will shift to the right, and even with chaotic weather, ENSO, and the like, the warming effect of GHGs will continue to be part of the trend.
The GCM’s are amazingly accurate and precise at this point, and in conjunction with real world data input/updated in them, and satellite/proxy data, it is clear that 1 degree of warming is a gross underestimate. I will paste the links tomorrow and, if I have time, some of my own calculations as well; if not, then within a few days I will show where I see you miscalculating and show my proofs.
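Jacob’s equilibrium numbers can be laid out with the same logarithmic forcing relation Bill Illis quoted upthread. A minimal Python sketch; the 280 ppm pre-industrial baseline is the usual round value, the sensitivities are the range he cites, and the linear scaling of warming with forcing is an assumption of the sketch.

import math

def warming(c_ppm, ecs, c0_ppm=280.0):
    # Equilibrium warming (C) above pre-industrial for CO2 at c_ppm,
    # given a sensitivity of ecs degrees C per doubling of CO2.
    return ecs * math.log(c_ppm / c0_ppm) / math.log(2.0)

for ecs in (1.5, 2.0, 3.0, 4.6):
    print(f"ECS {ecs} C/doubling: {warming(560, ecs):.1f}C at 560 ppm, "
          f"{warming(387, ecs):.2f}C at today's ~387 ppm")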

Jacob Mack
August 10, 2009 9:27 pm

Oh, and interestingly enough, the planet has still been warming since 1998, even with 1998 being an exceptionally warm year… go figure…

Jacob Mack
August 10, 2009 9:31 pm

XBT showed exaggerated warming, and Argo showed cooling due to bad sensors and underestimating how long it would take for the floats to reach depth; once the corrections were made, the cooling was shown not to exist and the warming was shown to be on the incline. Okay, now I will digress for the night.

Smokey
August 10, 2009 9:48 pm

Jacob Mack:

I also want to add that precipitation will slow down Antarctic warming, and some evidence suggests that El Nino will also temporarily suppress the warming magnitude, and yet the Antarctic is still warming… I also have preparations to make for Steve, as soon as he answers my initial questions.

How about you answering the question I’ve been asking you through a number of threads.
OK, here’s the question: You previously have stated, explicitly, that you had a B.S. in Chemistry. But then another poster provided educational links, none of which stated that Jacob Mack had been awarded a degree in Chemistry. Given those facts, my question is which school did you graduate from with a Chemistry degree, and what year was that?
Simple questions, and your two-part answer can be posted here in under a minute.
What are you waiting for?

Jacob Mack
August 10, 2009 9:55 pm

Smokey, one, you have been misquoting me the entire time, and two, the links you gave me used faulty methods, did not use proper references, and have not been repeated and validated; come to think of it, you have not answered any of my questions, and you did not falsify AGW whatsoever.

Smokey
August 10, 2009 10:20 pm

No, no, I’m not misquoting you at all. I’m just asking two simple questions.
Seems a guy would be proud of the school he graduated from. With his degree in… Chemistry.
So, what school and what year?

tallbloke
August 11, 2009 12:40 am

Leif Svalgaard (19:04:55) :
tallbloke (18:50:01) :
I pointed out the uncertainty in TSI values
The uncertainty is smaller than the solar min to max variation so is hardly relevant. Even a 1 W/m2 uncertainty translates into a 0.05K temperature signal which is negligible in the current context.

Hi Leif, we’ve rehearsed the argument elsewhere and we disagree on this. As far as I can see, your application of the Stefan-Boltzmann law doesn’t fit the context of a planet with a dynamic atmospheric system. Earth is not a snowball or a lump of coal, and doesn’t behave like either of them.
The calculations I did on ocean heat content changes due to insolation, which you confirmed, show that sunlight plus terrestrial atmospheric factors can lead to a 4 W/m^2 ‘forcing’ on the decadal scale. And that wasn’t ‘peak to trough’ either.
On century long timescales, I think we need to take into account the various issues with projecting the PMOD model beyond the data, the ACRIM data’s higher peak-trough amplitude, and the bigger than expected fall in TSI from the peak of cycle 23 to now. I think there is a higher climate sensitivity to changes in TSI than a simplistic analysis of the temperature data would indicate. This is due to the curve-flattening effect of the ocean’s ability to store solar energy (hiding it temporarily from the surface record at the top of the solar cycle), and El Nino’s tendency to occur at solar minimum (raising SST’s at the bottom of the solar cycle and thus further flattening the signal).
Outgoing longwave radiation from the surface jumped 4W/m^2 after 2000, and has stayed at that higher level since. The oceans are shedding some of the heat which my calcs show they have been gaining since the end of the little ice age some 300 years ago, barring some minor downtrends along the way.
This has kept things warmer than they would otherwise be for some 8 years now, which shows what a vast reservoir of heating energy the oceans contain. However, the ocean heat content is consequently diminishing, despite what Josh Willis’ refudging of the ARGO data says, and even he has since admitted (and then recanted) that there has been a “slight fall” since 2003.
Since there is no big tropical tropospheric hotspot developing as a result of this increased OLR, the puny effect of the change in concentration of the trace gases the IPCC worry about is, well, puny, and nothing to worry about.
The bottom line is that in my opinion, your estimate of change in ‘effective temperature’ for a black body earth for a 1W/m^2 change in TSI may be correct, but your estimate of the climate sensitivity to that change is well off the mark.
This is why I say that the climate is very sensitive. But not to CO2. The evidence is to be found not in the atmosphere, but in the changes to ocean heat content (which are hidden from the surface record). Because of the ocean’s vast heat capacity (the top two metres can store as much as the entire atmosphere above it, as Bob Stevenson pointed out), and the ocean’s ability to move heat from the tropics towards the poles, the earth is a well moderated place to live. This gives the false impression that the climate is insensitive to changes in insolation levels.
Nothing could be further from the truth.

TonyB
August 11, 2009 2:06 am

Jacob said
“The 1 degree C warming you predict is over halfway there, now as evidenced by global mean temperature analysis which is repeated at least a hundred times.”
I note with interest the precision with which you believe we can analyse modern temperatures and compare them to older ones.
So are you talking about modern warming whose temperatures are based on the Hadley 20 global stations in the year 1850 (which reflects the little ice age), or James Hansen’s innovative 1880 figures based on a novel grid system, also referencing a small number of stations, which became the basis for the first IPCC report?
Both of these databases of course bear no relation to today’s stations in terms of subsequent changes in location, numbers, UHI effect or poor siting.
Perhaps you are referring to modern cooler temperatures in relation to previous warmer eras, as evidenced by the MWP, the Roman optimum or the Holocene optimum?
If you can confirm that, we can then compare like for like, rather than cite figures which assume greater accuracy than is evidenced by the methodology employed, or which ignore past warming episodes.
Tonyb

Leif Svalgaard
August 11, 2009 6:10 am

tallbloke (00:40:03) :
On century long timescales, I think we need to take into account the various issues with projecting the PMOD model beyond the data, and the ACRIM data’s higher peak-trough amplitude, and the bigger than expected fall in TSI from the peak of cycle 23 to now.
Whatever details you may ponder, the uncertainty is still less than the solar cycle variation. And I don’t know what PMOD ‘issues’ you are talking about going back in time [and I know PMOD quite well]. The ACRIM data seems to have a smaller peak-trough amplitude because the trough in 1996 was less deep. The very earliest data before 1980 I’d not make much of. The larger than expected PMOD drop is due to calibration errors, and ACRIM does not have a deeper minimum now than in 1986. But all of this doesn’t matter: There are good reasons to believe that the magnetic field is responsible for TSI variation and since the magnetic field now is just what it was 108 years ago, there is no reason to believe [and no evidence for it] that TSI was any different back then than now.

tallbloke
August 11, 2009 7:32 am

Leif Svalgaard (06:10:11) :
since the magnetic field now is just what it was 108 years ago, there is no reason to believe [and no evidence for it] that TSI was any different back then than now.

Yebbut, the sun’s just gone into a once-in-200-year funk, hasn’t it? Before then, the magnetic field (I assume you are talking about the solar dipole field?) shows a more or less steadily rising trend throughout the C20th.

Steve Fitzpatrick
August 11, 2009 7:41 am

Leif Svalgaard (18:58:51) :
“The peak to valley change depends on the size of the solar cycle and varies by a factor of 3 or more. A good median value is 0.1% of TSI or ~1.4 W/m2, larger for some cycles, smaller for others.”
Does the mean TSI over a whole cycle remain more or less constant from cycle to cycle, or does the mean TSI for a whole cycle depend on the level of solar activity? For example, if the peak of one cycle has a sunspot number of 75, and the peak of the next 150, would the average TSI over each cycle be the same, or would the cycle with lower peak activity have a lower average TSI?
“A simpler calculation is that the solar signal would then be 1/4 of 0.1% of the effective temperature or 0.025% of 288K = 0.07K [or C].”
Does the climate sensitivity not enter into the expected temperature change? If the climate sensitivity were 0.75 degree per watt, then a top of atmosphere variation of 1.4 watts per M^2 would give about 0.25 * 0.7 * 1.4 * 0.75 = 0.184K change, not close to the 0.07K you note above. It seems to me that the above calculation implicitly assumes a sensitivity of about 0.21K per watt/M^2. Am I missing something?
“So for what it is worth: the model is consistent with no substantial change in cyclical variation over the past 130 years.
And I don’t understand this statement. Stefan-Boltzmann’s law hasn’t changed. So what is this ‘cyclical variation’?”
What I meant was that there was no obvious evidence for a big overall trend in TSI; past cycles were not very different from recent ones.
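The dependence Steve is pointing at can be shown in a few lines: the same top-of-atmosphere cycle amplitude implies very different surface signals depending on the assumed sensitivity. A minimal Python sketch; the three sensitivity values are the ones traded in this exchange (his model’s 0.27, the ~0.21 he infers from Leif’s figure, and the 0.75 he poses):

dTSI = 1.4                        # W/m^2 peak to valley over a cycle
forcing = 0.25 * 0.7 * dTSI       # spherical averaging and albedo: ~0.245

for lam in (0.21, 0.27, 0.75):    # sensitivity in C per W/m^2
    print(f"lambda = {lam:.2f} -> cycle signal = {forcing * lam:.3f} C")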

Leif Svalgaard
August 11, 2009 7:45 am

tallbloke (07:32:58) :
Yebbut, the sun’s just gone into a once in 200 year funk hasn’t it?.
No, not at all, a 100 year funk, not 200. Cycle 23 was much like cycle 13, and cycle 24 is forecast to be like cycle 14.
Before then the magnetic field (I assume you are talking about the solar dipole field?) shows a more or less steadily rising trend throughout the C20th.
No, not at all. The heliomagnetic field shows the same ~100 year variation as solar activity. Here http://www.leif.org/research/Heliospheric-Magnetic-Field-Since-1835.png
In http://www.leif.org/research/Reply%20to%20Lockwood%20IDV%20Comment.pdf we debunk the idea of steady increase. Lockwood et al have come around to our view and now agree with our reconstruction. On http://www.leif.org/research/Heliospheric-Magnetic-Field-Since-1900.png we show their reconstruction as the green curve compared with ours [blue curve] and observations by spacecraft [red curve].

tallbloke
August 11, 2009 7:46 am

I should clarify that: the magnetic field at minimum shows a more or less steadily rising trend through the C20th. And it should be noted that the shorter, more vigorous cycles of the latter half of the C20th, with their steep up- and down-ramps, meant a lot less downtime for the sun, and a lot more TSI overall, right up to 2003 or so.
In fairness to Steve, I don’t think we should swamp his thread with a continuation of our debate on all this stuff here, though he may want to consider these two points if he has previously simply accepted the facile argument that the last three solar cycles had lower maximum amplitudes than the highest one ever recorded, so therefore the sun is out of step with the temperature record.

Leif Svalgaard
August 11, 2009 8:08 am

tallbloke (07:46:31) :
I should clarify that, the magnetic field at minimum shows a more or less steadily rising trend through the C20th.
I thought that this
http://www.leif.org/research/Reply%20to%20Lockwood%20IDV%20Comment.pdf made it clear that there is no such rise.
The minimum in 2008 is on par with that in 1901. The minimum in 1996 with that in 1933. The minimum in 1965 on par with that in 1933 as well. The minimum in 1986 on par with 1945. The minima in the 19th century very much like the ones in the 20th: http://www.leif.org/research/Heliospheric-Magnetic-Field-Since-1835.png and the maxima too, BTW.
In other words, the variations of TSI minima have been much less than 1 W/m2.

tallbloke
August 11, 2009 8:10 am

Steve Fitzpatrick (07:41:12) :
What I meant was that there was no obvious evidence for a big overall trend in TSI; past cycles were not very different from recent ones.

See my points above. To illustrate them and their implications, some facts:
1) Sunspot numbers correlate well with TSI.
2) The average sunspot number from 1875 to 1935 was 42
3) The average sunspot number from 1945 to 2005 was 73
4) The average sunspot number from 1975 to 2005 was 68, so it didn’t drop much after the highest solar cycle ever recorded.
5) Above around 40, the oceans start to gain heat content.
6) Ocean heat content is the main driver of the sea surface and therefore air temperature on all timescales over a couple of months.
7) Ocean heat content is driven by the sun, not co2, because longwave radiation doesn’t penetrate the ocean, it just causes more evaporation at the surface.
I’ll leave you to join the dots.
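The cumulative model tallbloke alludes to in points 5-7 can be written in a few lines: integrate the yearly sunspot number’s departure from the threshold at which the oceans are assumed to switch between gaining and shedding heat. A minimal Python sketch; the yearly values are invented stand-ins, and whether such a running sum actually tracks ocean heat content is exactly what is in dispute on this thread.

THRESHOLD = 40    # the 'oceans start gaining heat' level from point 5

# Hypothetical yearly mean sunspot numbers, for illustration only.
ssn = {1995: 18, 1996: 9, 1997: 21, 1998: 64,
       1999: 93, 2000: 120, 2001: 111, 2002: 104}

running = 0.0
for year in sorted(ssn):
    running += ssn[year] - THRESHOLD   # active years add, quiet years shed
    print(year, running)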

tallbloke
August 11, 2009 8:24 am

Leif Svalgaard (07:45:07) :
tallbloke (07:32:58) :
Yebbut, the sun’s just gone into a once in 200 year funk hasn’t it?.
No, not at all, a 100 year funk, not 200. Cycle 23 was much like cycle 13, and cycle 24 is forecast to be like cycle 14.

We’ll see soon enough. 🙂
Before then the magnetic field (I assume you are talking about the solar dipole field?) shows a more or less steadily rising trend throughout the C20th.
No, not at all. The heliomagnetic field shows the same ~100 year variation as solar activity.
Well your theory of a 100 year cycle is an interesting one, but I don’t think the data supports it all that well. As you said before, your re-evaluation of C19th solar activity is still in the works. Until it’s done, I’ll carry on using the sunspot numbers, which were generally lower in the C18th and C19th than they have been in the C20th since 1935. By the way, did you see my offer of help with the solar magnetic data digitisation on the NASA admits possibility of Dalton minimum thread?

Leif Svalgaard
August 11, 2009 8:30 am

tallbloke (08:10:12) :
I’ll leave you to join the dots.
Some more dots to connect:
Average TSI for:
1830-1875: 1365.98
1875-1930: 1365.78
1930-1975: 1365.94
1975-2009: 1365.94

tallbloke
August 11, 2009 8:32 am

Leif Svalgaard (08:08:50) :
The minimum in 2008 is on par with that in 1901. The minimum in 1996 with that in 1933. The minimum in 1965 on par with that in 1933 as well. The minimum in 1986 on par with 1945. The minima in the 19th century very much like the ones in the 20th: http://www.leif.org/research/Heliospheric-Magnetic-Field-Since-1835.png and the maxima too, BTW.
In other words, the variations of TSI minima have been much less than 1 W/m2.

Well, maybe. Then again, maybe not. It depends whose data you use, and how you interpret it.
And regardless of all that, my observations about the overall high levels of TSI in the latter C20th, due to short minima, swift up- and down-ramps etc., still stand. You never refute them, but you always obfuscate them with an avalanche of links and other matters.
For these simple and indisputable reasons, TSI in the second half of the C20th was much higher than in the first. End of.

Leif Svalgaard
August 11, 2009 8:36 am

tallbloke (08:24:08) :
Well your theory of a 100 year cycle is an interesting one, but I don’t think the data supports it all that well.
This is not a theory; it is derived from the data. The good news is that HMF B is very well determined for the past 170+ years. Even our harshest critics now agree with us. So your statement that the data doesn’t support it all that well is unfounded.
I’ll carry on using the sunspot numbers
Leif’s law: if data agree with your view they are good.
By the way, did you see my offer of help with the solar magnetic data digitisation on the NASA admits possibility of Dalton minimum thread?
I estimate the work to be of the order of 10 man-years. How many will you contribute? Anything helps.

Leif Svalgaard
August 11, 2009 8:39 am

tallbloke (08:32:10) :
For these simple and indisputable reasons, TSI in the second half of the C20th was much higher than in the first. End of.
1st half average 1365.85
2nd half average 1365.98
Much higher?
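For scale, the standard conversion of a TSI change into a globally averaged radiative forcing is delta-F = delta-TSI x (1 - albedo) / 4. A minimal sketch applying it to the two half-century averages above; the albedo is a typical assumed value, not part of Leif's figures.

# Convert a TSI difference into a globally averaged radiative forcing:
# divide by 4 (sphere vs. intercepting disk) and scale by (1 - albedo).

ALBEDO = 0.30             # typical planetary albedo (assumed)
tsi_first_half = 1365.85  # W/m^2, from the figures above
tsi_second_half = 1365.98

delta_tsi = tsi_second_half - tsi_first_half
delta_forcing = delta_tsi * (1.0 - ALBEDO) / 4.0

print(f"delta TSI:     {delta_tsi:.2f} W/m^2")      # 0.13
print(f"delta forcing: {delta_forcing:.3f} W/m^2")  # ~0.023, vs ~3.7 for 2xCO2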

tallbloke
August 11, 2009 8:41 am

Leif Svalgaard (08:30:46) :
tallbloke (08:10:12) :
I’ll leave you to join the dots.
Some more dots to connect:
Average TSI for:
1830-1875: 1365.98
1875-1930: 1365.78
1930-1975: 1365.94
1975-2009: 1365.94

I thought you said there was still much work to do on the C19th, which is why I offered to help with the digitisation of the records. Why not put the data and methodology up for discussion in a post so we can all discuss it properly on a separate thread?

tallbloke
August 11, 2009 8:49 am

Leif Svalgaard (08:36:21) :
tallbloke (08:24:08) :
I’ll carry on using the sunspot numbers
Leif’s law: if data agree with your view they are good.

There are a couple of different ways that comment can be understood.
Your way, and my way. 😉

Leif Svalgaard
August 11, 2009 8:52 am

tallbloke (08:41:07) :
I thought you said there was still much work to do on the C19th, which is why I offered to help with the digitisation of the records. Why not put the data and methodology up for discussion in a post so we can all discuss it properly on a separate thread?
I may not have been clear. It is not that there are holes in the coverage. The HMF B can be determined from only a few stations, and the correction to the sunspot numbers can also be done with only a few stations [because all stations basically show the same thing]. My goal is to digitize ALL stations anyway, so that the data do not disappear [as they are beginning to do: yearbooks crumble, libraries burn or are flooded, or old books are just thrown out].

Leif Svalgaard
August 11, 2009 9:04 am

tallbloke (08:41:07) :
Why not put the data and methodology up for discussion in a post so we can all discuss it properly on a separate thread?
There are two separate issues: HMF B and the Sunspot number.
The HMF B methodology is described here:
http://www.leif.org/research/The%20IDV%20index%20-%20its%20derivation%20and%20use.pdf
The sunspot methodology was described by Rudolf Wolf in the 1850s. A modern version is here:
http://www.leif.org/research/Napa%20Solar%20Cycle%2024.pdf
The data are very voluminous. Some are in public archives, others only in my 5 Gigabyte database. You can see some of the original data here: http://www.leif.org/research/todo/
The problem is not only in entering or OCRing the data, but also in correcting them [yes] and interpreting them correctly. As an example of the subtle issues, see http://www.leif.org/research/todo/api_1905_11_h.PDF
Look at row 15 over to the right, where the value 596 looks like an error, but really isn’t. The secret is in the 0.356 near the top, just under Tagesm. It means that the real data value is 0.356 + the table entry/100,000, except that for row 15 it should be interpreted as 0.35 + the table entry/100,000.
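On my reading of that description, a hypothetical decoder for such a column would look like the sketch below; the function name and example values are illustrative only, not Leif's actual tooling.

# Hypothetical decoder for a yearbook table column, per the description
# above: true value = printed base + table entry / 100,000, where the
# base under "Tagesm." (0.356 here) must be read as 0.35 for rows like
# row 15, whose entry (596) would otherwise look like an error.

def decode(entry, base=0.356, short_base=0.35, use_short_base=False):
    b = short_base if use_short_base else base
    return b + entry / 100_000.0

print(f"{decode(96):.5f}")                        # ordinary row: 0.35696
print(f"{decode(596, use_short_base=True):.5f}")  # row 15: 0.35596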

tallbloke
August 11, 2009 9:14 am

Leif Svalgaard (08:39:09) :
1875-1930 1365.78
1930-2009 1365.94

Well, since the sunspot number averages went from 42 to 73 over the same periods, maybe TSI doesn’t match the sunspot numbers too well after all. Then again, your TSI numbers are a back extrapolation, whereas the sunspot numbers are the sunspot numbers. If you’re right, it’s amazing that such a small increase in TSI could cause such an acceleration in the thermal expansion of the oceans; it would imply a truly remarkable sensitivity of the Earth to small changes in insolation. It would certainly put CO2’s claimed sensitivity in the shade. I’ll run some calcs on your figures and see what they imply.
To be continued on your own thread I hope.

tallbloke
August 11, 2009 9:21 am

Leif, sorry: not a back extrapolation, but a reconstruction using the magnetic field measurements as a proxy. I’d very much like to learn more about the interaction of the Earth’s magnetic field with the solar fields so I can better understand your methodology.
I agree the old records need preserving. I’ll email you soon about my offer of help.

Pamela Gray
August 11, 2009 9:52 am

Jacob, as you can see, discussions are best kept to the thread topic and narrowly focused. Your last post was a jumbled mess when judged against more successful debating techniques. Just pick one thing from your post, one component of AGW that you feel strongly about. Keep it tied to the thread, which has to do with how sensitive the earth’s natural climate is to small changes. CO2 is a very small change. Pick something about the warming cycle of CO2. And we will go from there. Bear in mind that we like to stick with observed and measured data, not modeled scenarios. However, modeled scenarios can be compared to measured data. There are sources for clouds, vapor, CO2, LWR, SWR, SST, ENSO, you name it, that can be compared to the modeled scenarios. To remind you, the least amount of temperature rise in the scenarios provided by Hansen, the IPCC, and the like assumed strictly controlled reduction of CO2, which of course has not occurred. So the model we will compare to will be the one that is based on and closest to the current estimated CO2 level.

Leif Svalgaard
August 11, 2009 9:53 am

tallbloke (08:49:26) :
“Leif’s law: if data agree with your view they are good.”
There are a couple of different ways that comment can be understood.

I think not. What I have found [empirically] is that people [like you] would gladly use a dataset [even if dubious] if the data agree with their own pet theory, and tend to spread FUD on other data.
I have even come across the following argument: ‘since it is obvious that variation of the Sun is the main [perhaps, sole] driver of climate, the fact that there is climate change proves that the Sun changes accordingly’. From your posts one can only conclude that you wholeheartedly subscribe to that argument, e.g.: “This gives the false impression that the climate is insensitive to changes in insolation levels. Nothing could be farther from the truth.”
BTW, you used the weasel word ‘insolation levels’. Solar insolation drives glaciations, so you are correct about that straw man, but it is irrelevant, because the discussion was not about the large changes in insolation, but about the minute changes in irradiance.

Leif Svalgaard
August 11, 2009 10:11 am

tallbloke (09:14:32) :
Well since the sunspot number averages went from 42 to 73 over the same periods, maybe TSI doesn’t match the sunspot numbers too well after all.
When the sunspot number changes by 150, TSI changes by 1.3 W/m2, so 1 spot means a change of 0.009 W/m2, and 73 - 42 = 31 spots equates to a change of 0.27 W/m2 [but I also think that the SSN should be 50, not 42]. I had 0.16; Preminger has 0.22 W/m2 [to 2004, so a tad smaller if to 2009], based on the Greenwich sunspot areas.
So tiny tiny numbers.
If you’re right, it’s amazing that such a small increase in TSI could cause such an acceleration in the thermal expansion of the oceans, and would imply a truly remarkable sensitivity of the Earth to small changes in insolation
A simpler and much more likely explanation [without truly remarkable amazement] is that there is no causal relation [except at the hundredth-of-a-degree level] between irradiance [don’t use insolation, as that changes a lot, ~20 W/m2, during a year] and temperatures.
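The scaling quoted above is easy to reproduce; a minimal sketch using only the figures from this exchange.

# Reproduce the scaling above: ~1.3 W/m^2 of TSI per 150 units of
# sunspot number, applied to the 73 - 42 = 31 spot difference.

TSI_PER_SPOT = 1.3 / 150.0  # ~0.009 W/m^2 per spot

ssn_early = 42  # 1875-1935 average (tallbloke's figure)
ssn_late = 73   # 1945-2005 average

delta_tsi = (ssn_late - ssn_early) * TSI_PER_SPOT
print(f"{delta_tsi:.2f} W/m^2")                  # ~0.27
print(f"{delta_tsi / 1366.0:.3%} of total TSI")  # ~0.020%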

George E. Smith
August 11, 2009 10:53 am

“”” Jacob Mack (19:43:19) :
Please see below, regarding the higher volume of salt water due to its higher density. (D=M/V) Again the differences in water’s physical characteristics due to salinity cannot be ignored.
“In a paper titled “The Melting of Floating Ice will Raise the Ocean Level” submitted to Geophysical Journal International, Noerdlinger demonstrates that melt water from sea ice and floating ice shelves could add 2.6% more water to the ocean than the water displaced by the ice, or the equivalent of approximately 4 centimeters (1.57 inches) of sea-level rise.
The common misconception that floating ice won’t increase sea level when it melts occurs because the difference in density between fresh water and salt water is not taken into consideration. Archimedes’ Principle states that an object immersed in a fluid is buoyed up by a force equal to the weight of the fluid it displaces. However, Noerdlinger notes that because freshwater is not as dense as saltwater, freshwater actually has greater volume than an equivalent weight of saltwater. Thus, when freshwater ice melts in the ocean, it contributes a greater volume of melt water than it originally displaced.” “””
Well somehow, I don’t think your predictions will come about. Sea water is certainly denser than fresh water, which in turn is denser than ice. But there’s that slight problem of cooling. Each gram of ice that melts extracts 80 calories from the surrounding ocean water (remember most of the floating ice is actually submerged and surrounded by sea water, so a huge volume of sea water is cooled as that ice melts).
If you mix equal masses of zero deg C ice and 80 deg C hot water (both fresh), the entire mass of water will be at zero deg C when all the ice has melted.
Also the interior of the sea ice (which initially froze out of the sea) contains pockets of salty brines, so as the ice melts a lot of salt is added to the solution, so it doesn’t stay fresh very long.
So long as the salinity remains above 2.47% (normal is about 3.5%), the sea water has a positive temperature coefficient of expansion, so cooling it reduces its volume; the sea level will actually go down when the floating sea ice melts, not up.
I made such a prediction in mid 2004, which was published in January 2005. In mid 2006 a British-Dutch team using a European polar satellite reported on ten years of measurements of Arctic Ocean sea levels; they reported that it was dropping at a rate of 2 mm per year. They also said they didn’t know why, but they were very confident of their numbers.
So now you know why: that period was a period of Arctic sea ice retreat, and the sea level dutifully declined.
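Two of the numbers in that argument can be checked from textbook values; a minimal sketch, using standard densities and the latent heat of fusion rather than figures from any particular study.

# Two checks on the points above (textbook values; a sketch, not a model).

RHO_SEAWATER = 1026.0  # kg/m^3, typical surface seawater
RHO_FRESH = 1000.0     # kg/m^3, fresh meltwater
L_FUSION = 80.0        # cal/g, latent heat of melting ice
C_WATER = 1.0          # cal/(g K), specific heat of liquid water

# 1) Noerdlinger's effect: 1 kg of floating fresh ice displaces
#    1/RHO_SEAWATER m^3 of seawater but melts into 1/RHO_FRESH m^3.
extra_volume = RHO_SEAWATER / RHO_FRESH - 1.0
print(f"extra melt volume: {extra_volume:.1%}")  # ~2.6%

# 2) Latent heat: equal masses of 0 C ice and 80 C water end at 0 C,
#    because cooling 1 g of water by 80 K releases exactly the 80 cal
#    needed to melt 1 g of ice.
print(C_WATER * 80.0 == L_FUSION)  # True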

Pamela Gray
August 11, 2009 11:18 am

Jacob, maybe you were confusing ice sheets on land that melt into the sea? Greenland ice sheets and Antarctic ice sheets would raise sea levels if they melted completely off and into the sea. The stuff that is floating, be it land-attached floating ice shelves, icebergs from glaciers, or sea ice, just does not have the potential for catastrophic sea level rise. And of the two, Greenland is the one that historically has had the greater potential of actually melting.

George E. Smith
August 11, 2009 11:25 am

The pretty colored global long wave radiation map at the top of this essay covers a range from 100 to 350 W/m^2.
This is a bit weird, since the NOAA official earth energy budget says the average for the globe is 390 W/m^2, corresponding to a +15 deg C blackbody temperature.
Actually, the 100 to 350 W/m^2 range corresponds to blackbody temperatures from 204.9 K up to 280.3 K; that is, -68.2 deg C up to a whopping +7.2 deg C.
If you instead cover the entire observed earth surface temperature range, from about -90 deg C to +60 deg C, the LWIR emission would run from 63.8 W/m^2 up to 698.5 W/m^2 at the extremes, which is about an 11 to 1 range.
It would be nice if they could measure and report the extreme values, instead of some homogenized average values. That would be good if for no other reason than that the spectral peak wavelength changes by a factor of 1.82 over that temperature range, which has a decided influence on the impact that CO2 has.
If they do enough averaging, pretty soon nothing at all ever changes.
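The blackbody numbers in that comment follow directly from the Stefan-Boltzmann law, F = sigma * T^4, and Wien's displacement law; a minimal sketch to check them.

# Check the blackbody conversions above with the Stefan-Boltzmann law.

SIGMA = 5.67e-8  # W/(m^2 K^4)

def temp_from_flux(f_wm2):
    return (f_wm2 / SIGMA) ** 0.25

def flux_from_temp(t_celsius):
    return SIGMA * (t_celsius + 273.15) ** 4

print(f"{temp_from_flux(100):.1f} K")      # 204.9 K (-68.2 C)
print(f"{temp_from_flux(350):.1f} K")      # 280.3 K (+7.2 C)
print(f"{flux_from_temp(-90):.1f} W/m^2")  # 63.8
print(f"{flux_from_temp(60):.1f} W/m^2")   # 698.5, ~11x the minimum

# Wien's law: peak wavelength ~ 2898/T micrometers, so the peak shifts by
print(f"{(60 + 273.15) / (-90 + 273.15):.2f}")  # ~1.82 across that range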

Steve Fitzpatrick
August 11, 2009 11:26 am

Jacob Mack (21:20:39) :
It is honestly difficult for me to follow what this post was trying to say, except that you appear to believe the results of all the different climate models at the same time (even though those results differ substantially from each other), and everything the IPCC says as well. Let’s try to narrow it down to a couple of issues at a time.
1. Let’s start with ocean heat accumulation. Do you believe that the multiple publications showing essentially flat to very slightly falling heat content in the top 700 meters of ocean from 2003 to 2008 are incorrect? If so, please explain why you think that. If you accept the results of these studies, then do you agree that ocean heat accumulation is the most accurate way to measure global warming over any specified period of time? If you do not think so, then please explain why not, and what other metric you think might be a more reliable gauge of warming.
2. Now please consider aerosol effects. Do you believe the multiple studies that have shown a significant reduction in atmospheric aerosols since about 1993? If you do not believe these studies, then please explain why not. If you accept that there has in fact been a significant “global brightening” since the early 1990’s (the net intensity of full sunlight reaching the Earth’s surface has increased), then does this not suggest that, whatever the net “canceling” effect of aerosols on radiative forcing may have been in the early 1990’s, that effect has already declined significantly? If not, please explain why.
If we can limit discussion to at most a few topics at once, it will be easier to make progress. There is no reason not to cover a wide range of topics, but it is all but impossible to address them all at once. It is more efficient to explore specific differences in understanding and try to work out where those differences arise.

tallbloke
August 11, 2009 11:53 am

Leif Svalgaard (10:11:09) :
A simpler and much likely explanation [without truly remarkable amazement] is that there is no causal relation [apart at the hundredth of a degree level] between irradiance [don’t use insolation as that changes a lot ~20W/m2 during a year] and temperatures.

Well I already explained why surface temperature isn’t such a good indicator. Let’s stick with oceanic thermal expansion for a while, because there’s no argument about what causes that.
When I last had the calculator hot on this, I seem to remember I reckoned that the ocean retained something like 2.5% of the insolation over the 1993-2003 period. So, if you are right, there is a pretty big terrestrial factor to be accounted for. Nir Shaviv suspects decadal changes in cloud cover, and the ISCCP data hints at multidecadal changes too. There seems to be scope for an accommodation of both our data and theories.

Pamela Gray
August 11, 2009 12:19 pm

One of the best proxies for warming would be how much SWR gets in, how much LWR gets out, and what the net radiation budget is. I would caution here to use actual values, not anomalies. And don’t compare to a mean. Just use the actual data.
Here are the combinations:
1. If more SWR gets in and less LWR gets out we get warmer.
2. If more SWR gets in and more LWR gets out we might stay the same.
3. If less SWR gets in and less LWR gets out we might stay the same.
4. If less SWR gets in and more LWR gets out we get colder.
What are the entities that influence how much gets in and how much gets out? That depends on whether it is short wave or long wave. I have started the list below. By the way, did you know that, taken as a whole, clouds have a cooling effect at least three times greater than the warming from a doubling of CO2 concentration? Think what it would be if cloud cover were doubled.
“The latest results from ERBE indicate that in the global mean, clouds reduce the radiative heating of the planet. This cooling is a function of season and ranges from approximately -13 to -21 Wm-2. While these values may seem small, they should be compared with the 4 Wm-2 heating predicted by a doubling of carbon dioxide concentration.”
A. SWR
1. reflecting particles, such as water droplets in clouds
B. LWR
1. absorbing substances, such as GHGs, water vapor being one
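The sign logic of the four combinations above is just the net budget: absorbed SWR in minus emitted LWR out. A minimal sketch; the numbers are placeholders, not data.

# Sign logic of the four combinations above. Inputs are *changes* in
# absorbed SWR and outgoing LWR, in W/m^2 (placeholder values below).

def budget_tendency(delta_swr_in, delta_lwr_out):
    net = delta_swr_in - delta_lwr_out  # positive = energy gained
    if net > 0:
        return "warming"
    if net < 0:
        return "cooling"
    return "roughly steady"

print(budget_tendency(+2.0, -1.0))  # more in, less out -> warming
print(budget_tendency(-2.0, +1.0))  # less in, more out -> cooling
print(budget_tendency(+1.0, +1.0))  # both up equally -> roughly steady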

tallbloke
August 11, 2009 12:23 pm

Leif Svalgaard (09:53:09) :
What I have found [empirically] is that people [like you] would gladly use a dataset [even if dubious] if the data agree with their own pet theory, and tend to spread FUD on other data.

People like you say to-mate-o
I say to-mart-o
Let’s agree to disagree rather than resort to incivilities and accusations about motivation for which you have not one jot of evidence. We are both seeking scientific truth, I would hope.
the discussion was not about the large changes in insolation, but about the minute changes in irradiance.
The way I thought it went was that irradiance was what arrived at the top of the atmosphere, and insolation was what hit the weasel on the ground, whatever the timescale.
By the way, how do you tell the difference between a weasel and a stoat?
Reply: A weasel’s weaselly recognized. A stoat’s stoatally different. ~ ctm

Tenuc
August 11, 2009 12:29 pm


Instead of looking for a link to TSI, it is perhaps worth your while trying to quantify how changes, from solar max to min, in the proportions of the different wavelengths that make up TSI affect ocean heating.
High-energy UV, which increases at solar max, penetrates the ocean quite deeply. I think X-rays and other highly energetic frequencies may also be worth a look.
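One way to make that quantitative is Beer-Lambert attenuation, I(z) = I0 * exp(-k*z). A minimal sketch; the attenuation coefficients are rough assumed values for clear ocean water, chosen only to illustrate how strongly penetration depends on wavelength.

import math

# Beer-Lambert attenuation of downwelling light: I(z) = I0 * exp(-k z).
# The coefficients below are rough assumed values for clear ocean water.

K_PER_M = {  # diffuse attenuation coefficients, 1/m (assumed)
    "UV-B (305 nm)": 0.5,
    "UV-A (380 nm)": 0.1,
    "blue (450 nm)": 0.02,
    "red  (650 nm)": 0.35,
}

def fraction_remaining(k, depth_m):
    return math.exp(-k * depth_m)

for band, k in K_PER_M.items():
    print(f"{band}: {fraction_remaining(k, 10):.1%} remains at 10 m")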

tallbloke