How Sensitive is the Earth’s Climate?

Guest Post By Steve Fitzpatrick

[Image: Fitzpatrick_Image1]

Introduction

Projections of climate warming from general circulation models (GCM’s) are based on a high sensitivity of the Earth’s climate to radiative forcing from well mixed greenhouse gases (WMGG’s).  This high sensitivity depends mainly on three assumptions:

1. Slow heat accumulation in the world’s oceans delays the appearance of the full effect of greenhouse forcing by many (eg. >20) years.

2. Aerosols (mostly from combustion of carbon based fuels) increase the Earth’s total albedo, and have partially hidden the ‘true’ warming effect of WMGG increases.  Presumably, aerosols will not increase in the future in proportion to increases in WMGG’s, so the net increase in radiative forcing will be larger for future emissions than for past emissions.

3. Radiative forcing from WMGG’s is amplified by strong positive feedbacks due to increases in atmospheric water vapor and high cirrus clouds; in the GCM’s, these positive feedbacks approximately double the expected sensitivity to radiative forcing.

However, there is doubt about each of the above three assumptions.

1.  Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001.  This indicates that a) ocean heat content does not lag many years behind the surface temperature, b) global average temperature and heat accumulation in the top 700 meters of ocean are closely tied, and c) the Hansen et al (4) projection in 2005 of substantial future warming ‘already in the pipeline’ is not supported by recent ocean and surface temperature measurements.  While there is no doubt a very slow accumulation of heat in the deep ocean below 700 meters, this represents only a small fraction of the accumulation expected for the top 700 meters, and should have little or no immediate (century or less) effect on surface temperatures. The heat content in the top 700 meters of ocean and global average surface temperature appear closely linked.  Short ocean heat lags are consistent with relatively low climate sensitivity, and preclude very high sensitivity.

2.  Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings.  There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future.  Considering the large uncertainties in aerosol effects, it is not even clear if the net effect, including black carbon, which reduces rather than increases albedo, is significantly different from zero.

3.  Amplification of radiative forcing by clouds and atmospheric humidity remains poorly defined.  Climate models do not explicitly resolve the behavior of clouds, which occur at scales orders of magnitude smaller than the model grid, but instead handle clouds using ‘parameters’ that are adjusted to approximate the expected behavior of clouds.  Adjustable parameters can of course also be tuned to make a model predict whatever warming is expected or desired.  Measured tropospheric warming in the tropics (the infamous ‘hot spot’) caused by increases in atmospheric water content falls far short of the warming in this part of the atmosphere projected by most GCM’s.  This casts doubt on the amplification the GCM’s assume from increased water vapor.

Many people, including this author, do not believe the large temperature increases (up to 5+ C for a doubling of CO2) projected by GCM’s are credible.  A new paper by Lindzen and Choi (described at WUWT on August 23, 2009) reports that the total outgoing radiation (visible plus infrared) above the tropical ocean increases when the ocean surface warms, which suggests the climate feedback (at least in these tropical ocean areas) is negative, rather than positive as the GCM’s all assume.

In spite of the many problems and doubts with GCM’s:

1) It is reasonable to expect that positive forcing, from whatever source, will increase the average temperature of the Earth’s surface.

2) Basic physics shows that increasing infrared absorbing gases in the atmosphere like CO2, methane, N2O, ozone, and chloro-fluorocarbons, inhibits the escape of infrared radiation to space, and so does provide a positive forcing.

3) There has in fact been significant global warming since the start of the industrial revolution (beginning a little before 1800), concurrent with significant increases in WMGG emissions from human activities.

There really should be an increase in average surface temperature due to forcing from increases in infrared absorbing gases.  This is not to say that there are no other plausible explanations for some or even most of the increases in global temperatures over the past 100+ years.  For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the little ice age and the late 20th century, which could account for a significant fraction of the observed warming.  But regardless of other possible contributions, it is impossible to refute that greenhouse gases should lead to increased global average temperatures.  What matters is not whether the earth will warm from increases in WMGG’s, but how much it will warm and over what period.  The uncertainties and dubious assumptions in the GCM’s make them of limited help in projecting potential warming, even under the worst-case assumption that WMGG’s are the principal cause of the warming.

Climate Sensitivity

If we knew the true climate sensitivity of the Earth (expressed as degrees increase per watt/square meter forcing) and we knew the true radiative forcing due to WMGG’s, then we could directly calculate the expected temperature rise for any assumed increases in WMGG’s.  Fortunately, the radiative forcing effects for WMGG’s are fairly accurately known, and these can be used in evaluating climate sensitivity.   An approximate value for climate sensitivity in the absence of any feedbacks, positive or negative, can be estimated from the change in blackbody emission temperature required to balance a 1 watt per square meter increase in heat input, using the Stefan-Boltzmann law.  Assuming solar intensity is 1366 watts/M^2, and assuming the Earth’s average albedo is ~0.3, the net solar intensity is ~239 watts/M^2, requiring a blackbody temperature of 254.802 K to balance incoming heat.  With 1 watt/M^2 more input, the required blackbody emission temperature increases to 255.069 K, so the expected climate sensitivity is (255.069 – 254.802) = 0.267 degree increase for one watt per square meter of added heat.
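The arithmetic above can be checked in a few lines (a sketch in Python; the solar constant and albedo are the values quoted in the text):

```python
# Back-of-envelope check of the no-feedback sensitivity derived in the text,
# using the solar constant (1366 W/m^2) and albedo (~0.3) given above.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_temp(flux):
    """Emission temperature (K) required to radiate `flux` W/m^2 to space."""
    return (flux / SIGMA) ** 0.25

net = 1366.0 / 4 * (1 - 0.3)        # ~239 W/m^2 absorbed, sphere-averaged
t0 = blackbody_temp(net)            # ~254.8 K
t1 = blackbody_temp(net + 1.0)      # one extra W/m^2
sensitivity = t1 - t0               # ~0.267 K per W/m^2, matching the text
```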

But solar intensity and the blackbody emission temperature of the earth both change with latitude, yielding higher emission temperature and much greater heat loss near the equator than near the poles.  The infrared heat loss to space goes as the fourth power of the emission temperature, so the net climate sensitivity will depend on the T^4 weighted contributions from all areas of the Earth.  Feedbacks within the climate system, both positive and negative, including different amounts and types of clouds, water vapor, changes in albedo, and potentially many others, add much uncertainty.

Measuring Earth’s Sensitivity

The only way to accurately determine the Earth’s climate sensitivity is with data.

Bill Illis produced an outstanding guest post on WUWT November 25, 2008, where he presented the results of a simple curve-fit model of the Earth’s average surface temperature based on only three parameters:  1) the Atlantic multi-decadal oscillation index (AMO), 2) values of the Nino 3.4 ENSO index, and 3) the log of the ratio of atmospheric CO2 concentration to the starting CO2 concentration.  Bill showed that the best-estimate linear fit of these parameters to the global mean temperature data could account for a large majority of the observed temperature variation from 1871 to 2008.  He also showed that the AMO index and the Nino 3.4 index contributed little to the overall increase in temperature during that period, but did account for much of the variation around the overall temperature trend.  The overall trend correlated well with the log of the CO2 ratio.  In other words, the AMO and Nino 3.4 indexes could hindcast much of the observed variation around the overall trend, and that overall trend could be accurately hindcast by the log of the CO2 ratio.

There are a few implicit assumptions in Bill’s model.  First, the model assumes that all historical warming can be attributed to radiative forcing.  This is a worst case scenario, since other potential causes for warming are not even considered (long term solar effects, long term natural climate variability, etc.).  The climate sensitivity calculated by the model would be lowered if other causes account for some of the measured warming.

Second, the model assumes the global average temperature changes linearly with radiative forcing.  While this is almost certainly not correct for Earth’s climate, it is probably not a bad approximation over a relatively small range of temperatures and total forcings.  That is, a change of a few watts per square meter is small compared to the average solar flux reaching the Earth, and a change of a few degrees in average temperature is small compared to Earth’s average emissive (blackbody) temperature.  So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.

Third, the model assumes that the combined WMGG forcings can be accurately represented by a constant multiplied by the log of the ratio of CO2 to starting CO2.  While this may be a reasonable approximation for some gases, like N2O and methane (at least until ~1995), it is not a good approximation for others, like chloro-fluorocarbons, which did not begin contributing significantly to radiative forcing until after 1950, and which are present in the atmosphere at such low concentration that they absorb linearly (rather than logarithmically) with concentration.  In addition, chloro-fluorocarbon concentrations will decrease in the future rather than increase, since most long lived CFC’s are no longer produced (due to the Montreal Protocol), and what is already in the atmosphere is slowly degrading.
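The contrast between logarithmic and linear absorption can be sketched as follows. The 5.35 coefficient is the IPCC simplified expression for CO2; the per-ppb radiative efficiency for a trace gas is left as a free parameter here, since actual values vary by compound:

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    """IPCC simplified expression for CO2: dF = 5.35 * ln(C/C0), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def linear_forcing(conc_ppb, efficiency_w_per_ppb):
    """Gases at trace concentrations (e.g. CFC's) absorb roughly linearly,
    so their forcing is just concentration times a per-ppb efficiency."""
    return conc_ppb * efficiency_w_per_ppb

f_2x = co2_forcing(560.0, 280.0)   # doubling CO2: ~3.71 W/m^2
```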

To make Bill’s model more physically accurate, I made the following changes:

1.  Each of the major WMGG’s is separated and treated individually: CO2, N2O, methane, chloro-fluorocarbons, and tropospheric ozone.

2.  Concentrations of each of the above gases are converted to net forcings, using the IPCC’s radiation equations for CO2, methane, N2O, and CFC’s (6), and an estimated radiative contribution from ozone increases.

3.  The change in solar intensity with the solar cycle is included as a separate forcing, assuming that measured intensity variations for the last three solar cycles (about 1 watt per square meter variation over a base of 1365 watts per square meter) are representative of earlier solar cycles, and assuming that sunspot number can be used to estimate how solar intensity varied in the past.

4.  The grand total forcing (including the solar cycle contribution), a 2-year trailing average of the AMO index, and the Nino 3.4 index are correlated against the HadCRUT3v global average temperature data.

This yields a curve fit model which can be used to estimate future warming by setting the Nino 3.4 and AMO indexes to zero (close to their historical averages) and estimating future changes in atmospheric concentrations for each of the infrared absorbing gases.
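The mechanics of such a curve fit amount to an ordinary least-squares regression of temperature on the three predictors. A minimal sketch with synthetic stand-in data (the real model uses the HadCRUT3v, AMO, and Nino 3.4 series; the numbers below are illustrative only):

```python
import numpy as np

# Synthetic 138-year "history" (1871-2008): a forcing trend, an AMO-like
# oscillation, and ENSO-like noise, combined with a known sensitivity.
rng = np.random.default_rng(0)
n = 138
forcing = np.linspace(0.0, 2.5, n)           # W/m^2, illustrative
amo = np.sin(np.linspace(0, 4 * np.pi, n))   # stand-in oscillation
nino = rng.standard_normal(n)
temp = 0.27 * forcing + 0.10 * amo + 0.07 * nino + rng.normal(0, 0.05, n)

# Least-squares fit of temperature on (forcing, AMO, Nino 3.4, intercept):
X = np.column_stack([forcing, amo, nino, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
sensitivity = coef[0]   # recovers roughly the 0.27 C per W/m^2 built in
```

The fitted coefficient on the forcing column is the climate sensitivity; the AMO and Nino coefficients absorb the variation around the trend, just as described for the real data.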

[Image: Fitzpatrick_Image2]

Figure 1 Model results with temperature projection to 2060

To find the best estimate of lag in the climate (mainly from ocean heat accumulation), the model constants were calculated for different trailing averages of the total radiative forcing.  The best fit to the data (highest R^2) was for a two year trailing average of the total radiative forcing, which gave a net climate sensitivity of 0.270 (+/-0.021) C per watt/M^2 (+/-2 sigma).  All longer trailing average periods yielded somewhat lower R^2 values and produced somewhat higher estimates of climate sensitivity.  A 5-year trailing average yields a sensitivity of 0.277 (+/- 0.021) C per watt/M^2, a 10 year trailing average yields a sensitivity of 0.289 (+/- 0.022) C per watt/M^2, and a 20 year trailing average yields a sensitivity of 0.318 (+/- 0.025) C per watt/M^2, ~18% higher than a two year trailing average.  As discussed above, very long lags (eg. 10-20+ years) appear inconsistent with recent trends in ocean heat content and average surface temperatures.
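The lag-selection procedure can be illustrated with synthetic data: build a temperature series that responds to a known trailing average of forcing, then compare R^2 across candidate windows. Everything below is illustrative; only the method mirrors the text:

```python
import numpy as np

def trailing_average(x, years):
    """Backward-looking mean over `years` points, same length as x."""
    out = np.empty(len(x))
    for i in range(len(x)):
        out[i] = x[max(0, i - years + 1): i + 1].mean()
    return out

def r_squared(y, X):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ coef).var() / y.var()

# Synthetic forcing history: the "climate" below responds to a 2-year
# trailing average, so the fit should score best at a 2-year window and
# drift to lower R^2 at longer windows.
rng = np.random.default_rng(1)
forcing = np.cumsum(rng.normal(0.02, 0.1, 138))   # random-walk forcing
temp = 0.27 * trailing_average(forcing, 2) + rng.normal(0, 0.01, 138)

fits = {w: r_squared(temp, np.column_stack([trailing_average(forcing, w),
                                            np.ones(138)]))
        for w in (2, 5, 10, 20)}
best = max(fits, key=fits.get)   # 2-year window wins for this construction
```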

Oscillation in the radiative forcing curve (the green curve in Figure 1) is due to solar intensity variation over the sunspot cycle.  The assumed total variation in solar intensity at the top of the atmosphere is 1 watt per square meter (approximately the average variation measured over the last three solar cycles) for a change in sunspot number of 140.  Assuming a minimum solar intensity of 1365 watts per square meter and Earth’s albedo at 30%, the average solar intensity over the entire Earth surface at zero sunspots is (1365/4) * 0.7 = 238.875 watts per square meter, while at a sunspot number of 140, the average intensity increases to 239.05 watts per square meter, or an increase of 0.175 watt per square meter.  The expected change in radiative forcing (a “sunspot constant”) is therefore 0.175/140 = 0.00125 watt per square meter per sunspot.  When different values for this constant are tried in the model, the best fit to the data (maximum R^2) is for ~0.0012 watt/M^2 per sunspot, close to the above calculated value of 0.00125 watt/M^2 per sunspot.
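The sunspot-constant arithmetic in this paragraph, reproduced step by step:

```python
# The "sunspot constant" derived in the text.
albedo = 0.3
quiet_sun = 1365.0     # W/m^2 at zero sunspots
cycle_amp = 1.0        # W/m^2 swing over a change of ~140 sunspots

base = quiet_sun / 4 * (1 - albedo)                 # 238.875 W/m^2
peak = (quiet_sun + cycle_amp) / 4 * (1 - albedo)   # 239.05 W/m^2
per_sunspot = (peak - base) / 140                   # 0.00125 W/m^2 per sunspot
```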

[Image: Fitzpatrick_Image3]

Figure 2 Scatter plot of the model versus historical temperatures

[Image: Fitzpatrick_Image4]

Figure 3 Comparison of the model’s temperature projection under ‘Business as Usual’ with the IPCC projection of ~0.2C per decade, consistent with GCM projections.

Regional Sensitivities

Amplification of sensitivity is the ratio of the actual climate sensitivity to the sensitivity expected for a blackbody emitter.  The sensitivity from the model is 0.270 C per watt/M^2, while the expected blackbody sensitivity is 0.267 C per watt/M^2, so the amplification is 1.011.  An amplification very close to 1 suggests that all the negative and positive feedbacks within the climate system are roughly balanced, and that the average surface temperature of the Earth increases or decreases approximately as would a blackbody emitter subjected to small variations around the average solar intensity of ~239 watts/M^2 (that is, as a blackbody would vary in temperature around ~255 K).  This does not preclude a range of sensitivities within the climate system that average out to ~0.270 C per watt/M^2; sensitivity may vary with season, latitude, local geography, albedo/land use, weather patterns, and other factors.  The temperature increase due to WMGG’s may, and indeed should, show significant regional and temporal differences, so the impact of WMGG-driven warming should likewise vary by region and season.

Credibility of Model Projections

Some may argue that any curve fit model based on historical data is likely to fail in making accurate predictions, since the conditions that applied during the hind cast period may be significantly different from those in the future.  But if the curve fit model includes all important variables, then it ought to make reasonable predictions, at least until/unless important new variables are encountered in the future. Examples of important new climate variables are a major volcanic eruption or a significant change in ocean circulation.  The probability of encountering important new variables increases with the length of the forecast, of course.  So while a curve-fit climate model’s predictions will have considerable uncertainty far in the future (eg 100 years or more), forecasts of shorter periods are likely to be more accurate.

To demonstrate this, the model constants were calculated using temperature, WMGG forcings, AMO, and Nino3.4 data for 1871 to 1971, but then applied to all the 1871 to 2008 data (Figure 4).  The model’s calculated temperatures represent a ‘forecast’ from 1972 through 2008, or 36 years.  Since the model constants came only from pre-1972 data, the model has no ‘knowledge’ of the temperature history after 1971, and the 1972 to 2008 forecast is a legitimate test of the model’s performance.  The model’s 1972 to 2008 forecast performance is reasonably good, with very similar deviations between the model and the historical temperature record in the hind cast and forecast periods.

[Image: Fitzpatrick_Image5]

Figure 4 Model temperature forecast for 1972 through 2008, with model constants based on 1871 to 1971. The model has no “knowledge” of the temperature record after 1971.

The model fit to the temperature data in the forecast period is no worse than in the hind cast period.   The climate sensitivity calculated using only 1871 to 1971 data is similar to that calculated using the entire data set: 0.255 C per watt/M^2 versus 0.270 C per watt/M^2.  A model forecast starting in 2009 will not be perfect, but the 1972 to 2008 forecast performance suggests that it should be reasonably close to correct over the next 36+ years.

Emissions Scenarios

The model projections in Figure 1 (2009 to 2060) are based on the following assumptions:

a) The year on year increase in CO2 concentration in the atmosphere rises to 2.6 PPM per year by 2015 (or about 25% higher than recent rates of increase), and then remains at 2.6 PPM per year through 2060.  Atmospheric concentration reaches ~518 PPM by 2060.

b) N2O concentration increases in proportion to the increase in CO2.

c) CFC’s decrease by 0.25% per year.  The actual rate of decline ought to be faster than this, but large increases in releases of short-lived refrigerants like R-134a and non-regulated fluorinated compounds may offset a large portion of the decline in regulated CFC’s.

d) The concentration of methane, which has been constant for the last ~7 years at ~1,800 parts per billion, increases by 10 PPB per year, reaching ~2,370 PPB by 2060.

e) Tropospheric ozone (which forms in part from volatile organic compounds, VOC’s) increases in proportion to increases in atmospheric CO2.

The above represent pretty much a “business as usual” scenario, with fossil fuel consumption in 2060 more than 70% higher than in 2008, and with no new controls placed on other WMGG’s.  The projected temperature increase from 2008 to 2060 is 0.6834 C, or 0.131 C per decade.  This assumes of course that WMGG’s are responsible for all (or nearly all) the warming since 1871; if a significant amount of the warming since 1871 had other causes, then future warming driven by WMGG’s will be less.
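As a rough check on the ~518 PPM figure, the “business as usual” CO2 path can be integrated directly. The 2008 starting concentration (~385.6 PPM) and the linear ramp of the annual increment are my assumptions; the text specifies only the 2.6 PPM/year plateau:

```python
# Sketch of the "business as usual" CO2 path: annual increments ramp from
# roughly the recent ~2.1 ppm/yr up to 2.6 ppm/yr by 2015, then hold at
# 2.6 through 2060. Starting value and ramp shape are assumptions.
co2 = 385.6  # ppm in 2008 (assumed)
for year in range(2009, 2061):
    if year <= 2015:
        rate = 2.1 + (2.6 - 2.1) * (year - 2009) / (2015 - 2009)
    else:
        rate = 2.6
    co2 += rate
# co2 ends near the ~518-519 ppm quoted in the text
```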

Separation of the different contributions to radiative forcing allows projections of future average temperatures under different scenarios for reductions in the growth of fossil fuel usage, with separate efforts to control emissions of methane, N2O, and VOC’s (leading to tropospheric ozone).

[Image: Fitzpatrick_Image7]

Figure 5 Reduced warming via controls on non-CO2 emissions and gradually lower CO2 emissions growth.

One such scenario can be called the “Efficient Controls” scenario.  The year on year increase in CO2 in the atmosphere rises to 2.6 PPM by 2014, then declines by 0.5% per year starting in 2015 (that is, a 2.6 PPM increase in 2014, 2.587 PPM in 2015, 2.574 PPM in 2016, etc.).  Methane concentrations are maintained at current levels via controls installed on known sources, CFC concentration falls by 0.5% per year due to new restrictions on currently non-regulated compounds, and N2O and tropospheric ozone increases are proportional to the (somewhat lower) CO2 increases.  These are far from small changes, but they probably could be achieved without great economic cost by shifting most electric power production to nuclear (or non-fossil alternatives where economically viable), while taxing CO2 emissions worldwide at an initially low but gradually increasing rate to promote worldwide improvements in energy efficiency.   Under these conditions, the predicted temperature anomaly in 2060 is 0.91 degree (versus 0.34 degree in 2008), or a rise of 0.109 degree per decade.  Atmospheric CO2 would reach ~507 PPM by 2060, and CO2 emissions in 2060 would be about 50% above 2008 emissions.  By comparison, the “business as usual” case produces a projected increase of 0.131 C per decade through 2060, with atmospheric CO2 reaching ~519 PPM by 2060.  So at (relatively) low cost, warming through 2060 could be reduced by a little over 0.11 C compared to business as usual.

A “Draconian Controls” scenario, with new controls on fluorinated compounds, methane and VOC’s, and with the rate of atmospheric CO2 increase declining by 2% each year, starting in 2015, shows the expected results of a very aggressive worldwide program to control CO2 emissions.  The temperature anomaly in 2060 is projected at 0.8 C, for a rate of temperature rise through 2060 of 0.088 degree per decade, or ~0.11 C lower temperature in 2060 than for the “Efficient Controls” scenario.  Under this scenario, the concentration of CO2 in the atmosphere would reach ~480 PPM by 2060, but would rise only ~25 PPM more between 2060 and 2100.  Total CO2 emissions in 2060 would be ~15% above 2008 emissions, but would have to decline to the 2008 level by 2100.  Whether the potentially large economic costs of draconian emissions reductions are justified by a ~0.11C temperature reduction in 2060 is a political question that should be carefully weighed.

[Image: Fitzpatrick_Image8]

Figure 6 Draconian emissions controls may reduce average temperature in 2060 by ~0.21C compared to business as usual.

Conclusions

The model shows that the climate sensitivity to radiative forcing is approximately 0.27 degree per watt/M^2, based on the assumption that radiative forcing from WMGG’s has caused all or nearly all the measured temperature increase since ~1871.  This corresponds to a response of ~1C for a doubling of CO2 (with other WMGG’s remaining constant).  Much higher climate sensitivities (eg. 0.5 to >1.0 C per watt/M^2, or 1.85 C to >3.71 C for a doubling of CO2) appear to be inconsistent with the historical record of temperature and measured increases in WMGG’s.
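The conversion between the two ways of stating sensitivity uses the standard ~3.71 W/M^2 forcing for a CO2 doubling:

```python
import math

# Converting sensitivity in C per W/m^2 into degrees per CO2 doubling,
# using the simplified CO2 forcing 5.35 * ln(2) ~ 3.71 W/m^2.
f_2x = 5.35 * math.log(2.0)

def per_doubling(sens_c_per_wm2):
    return sens_c_per_wm2 * f_2x

model = per_doubling(0.27)    # ~1.0 C: this model's estimate
high = per_doubling(1.0)      # ~3.71 C: the high-sensitivity end cited
```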

Assuming no significant changes in the growth pattern of fossil fuels, and no additional controls on other WMGG’s, the average temperature in 2060 may reach ~0.68C higher than the 2008 average.  Modest steps to control non-CO2 emissions and gradually reduce the rate of increase in the concentration of CO2 in the atmosphere could yield a reduction in WMGG driven warming between 2008 and 2060 of ~15% compared to no action.  A rapid reduction in the rate of growth of atmospheric CO2 would be required to reduce WMGG driven warming between 2008 and 2060 by ~30% compared to no action.


334 thoughts on “How Sensitive is the Earth’s Climate?”

  1. It is evident that a lot of work and thought is being presented with this entry.

    This caught my attention:

    Second, the model assumes the global average temperature changes linearly with radiative forcing. While this is almost certainly not correct for Earth’s climate, it is probably not a bad approximation over a relatively small range of temperatures and total forcings. That is, a change of a few watts per square meter is small compared to the average solar flux reaching the Earth, and a change of a few degrees in average temperature is small compared to Earth’s average emissive (blackbody) temperature. So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.

    The habit of thinking of the “average” earth makes it easy to forget that responses to “radiative forcings” in this average scenario depend on real temperatures that on a daily basis may change from 0C to over 60C between day and night in some deserts, for example. That is the temperature that the black body emissivity sees, not small at all for T**4 changes, with all the variations of the earth’s surface.

  2. “… based on the assumption that radiative forcing from WMGG’s has caused all or nearly all the measured temperature increase since ~1871.”

    So WMGG’s caused the warming from 1871 to 1945, before we started burning petroleum?

  3. One more question. If it’s simply increasing CO2 dragging us out of the LIA, what put us in the LIA? And was it CO2 that put us into the Medieval Warm Period? What about the Holocene Optimum?

    I’m confused.

  4. George E. Smith posted a simplified calculation method for obtaining an approximate surface emission in watts per square metre which showed how T**4 changed the outgoing long wave radiation.
    Anyone keep a copy please? (I wiped mine accidentally and now can’t remember the formulae :()

  5. “For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the little ice age and the late 20th century, which could account for a significant fraction of the observed warming”
    She doesn’t believe that anymore, neither does anybody else [except some climatologists].

  6. For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter
    Using your own numbers this gives an increase of 2*(239/1366)*0.267 = 0.09 degrees, hardly a “significant fraction of the observed warming”.

  7. Maybe I am misunderstanding something, but Bill Illis’ model looks very suspect.

    He created a “simple curve-fit model of the Earth’s average surface temperature based on only three parameters”: 1) AMO, 2) ENSO, and 3) CO2. He adjusted parameters until he got a fit to “observed temperature variation” [isn’t that what “simple curve-fit model” means?] and – surprise surprise – found that factors 1 and 2 contributed to fluctuations over the period, while 3 provided all the underlying increase.

    I would argue that no other result is possible. AMO and ENSO are fluctuating phenomena so can only provide fluctuations and by their very nature cannot provide any long-term trend. CO2 concentration, however, increased monotonically over the whole period, so is the only factor capable of providing a long-term trend, and by its very nature cannot provide fluctuations.

    Splitting the time period, and curve-fitting from 1871 to 1971 then comparing “predictions” with post-1971 sounds impressive, but all it means is that if factors which actually caused the overall trend from 1871 to 1971 remained in place from 1971 to 2008, then the model would match neatly. The actual factors could be the sun, clouds, shipping volumes, or world use of soap. The model would still give a good match.

  8. Thanks for an excellent and very balanced post. The IPCC could learn something from this method of observation based prediction which is likely far more accurate.

    My only comment would this…

    In 1850 we were in the middle of a cold period; what caused this is another issue, but I doubt it was CO2. Let’s assume it was due to GCR increasing low level clouds which led to reduced SST (which if correct would throw out the whole model anyways). My question is this: if the earth was out of equilibrium at this time due to an external forcing, the earth’s sensitivity (which I believe is dynamic, i.e. either positive or negative depending on the energy imbalance) would have to be increased to enable the system to return to the “normal” climate state (if there is such a thing). I.e. a cooler sea meant less cloud cover and more incoming solar energy to warm the sea. I believe once the system reaches a certain point, the sensitivity will change, i.e. a warmer sea will cause increased low level cloud reducing incoming solar energy. This helps to maintain the earth’s climate around an ideal point.

    I think for this reason assuming the sensitivity will remain constant over long periods of time is unlikely to be correct and may throw out hindcasts and predictions.

    In order to confirm this theory we would need long term satellite data of cloud cover %, but unfortunately this is unavailable. However, Lindzen’s recent paper attempts to address this using recent data and yields a lower sensitivity; this could imply that the climate sensitivity is reducing as the sea / earth warms.

    I am wondering, what happens if you derive your climate sensitivity over say 3 distinct periods, does it reduce with time? In your post you mentioned that the 1980 – 2008 data yields a lower value. If a trend was apparent, this could better apportion the role of the various forcings and enable more reliable forecasts assuming a linear relationship of sensitivity with global temperature.

  9. For ease of referral, the original model by Bill Illis was presented here:

    https://wattsupwiththat.com/2008/11/25/adjusting-temperatures-for-the-enso-and-the-amo/

    That post generated ~300 comments, most of them very perceptive (or at least interesting). Many of the comments dealt with weaknesses in the assumptions of the model, not the least of which are the artificial adjustments in the Hadley Centre historical temperature data. The limited thermal absorption capacity of CO2 is another weakness in the assumptions, and a corollary to that is the assumption that CO2 is the likely “unknown” forcing agent.

    Fitzpatrick’s adjustments to Illis’ original model use these questionable assumptions, but he states (more or less) that they represent a “worst case” scenario. That is, if one provisionally accepts all the assumptions (that point to CO2 as the primary global warming forcing agent), then the worst case is that a doubling of CO2 concentration will force a global temperature rise of ~1C. If one does not accept all the assumptions, then the sensitivity of the global climate to CO2 is something less than that.

    That is my (condensed) interpretation, at any rate.

  10. Congratulations both to WUWT and the writer. It’s in the great tradition of the informal publication of amateur (ie outside the mainstream of academic research) science resulting from people seriously trying to come to grips with the core of a problem, rather than publish articles in the tenure obstacle course. Whether it’s right? Well, a different issue.

    Also reveals, along with the use of R on CA, what a revolution has occurred due to the availability of what by previous generations’ standards were super computers on our desktops. When the study of global warming started, to do this sort of work on the desktop would have been inconceivable.

    Is the model source code available someplace for other interested tinkerers? Hopefully the author will not deliver a freezing cold shower suddenly by revealing that it is written in Excel! But even if it is, it can always be rewritten in something more sensible.

  11. What would the climate sensitivity to co2 look like if the solar contribution to the warming was, say, 75% ? Simply 1/4 of your figure, or is it more complicated?

    Thanks

  12. What is clear is that both these curve-fit models, with and without decadal events, underestimate the rate of increase and decrease in temperatures, most evidently from 1910 to 1940. There is something else missing, a natural forcing, from the overall picture.

  13. “A new paper by Lindzen and Choi (described at WUWT on August 23, 2009)”

    August 23, 2009?

  14. Well, it would seem to me as an interested observer that the infamous Mann Hockey Stick is still alive and well despite the hard work produced by Steve McIntyre. This whole document seems to implicitly assume a steady global temperature until man began to emit CO2 in increasing volumes through industrialisation.

  15. I take issue with this statement:

    “it is impossible to refute that greenhouse gases should lead to increased global average temperatures.”

    For the rationale see http://climatechange1.wordpress.com/2009/04/24/the-gaping-hole-in-greenhouse-theory/

    Before we attribute a change in surface temperature to a process that cannot be demonstrated, nor evidence found of its existence, we should investigate the simple stuff: the coming and going of clouds, and in particular ice clouds, which are highly reflective of incoming short-wave radiation.

    There is a strong flux in the temperature of the atmosphere above 200hPa (including therefore the top third of the troposphere and the stratosphere) associated with seasonal, biennial, decadal and centennial change in its ozone content. As the temperature in this layer rises (due to increasing ozone content) so does the temperature of the surface of the sea.

    That is the sort of observation that should excite our attention if we wish to explain the ups and downs of surface temperature over short and long time scales.

    Manifestly, we do not understand the warming and cooling of the sea. Until we do, we should steer away from attributing change to the activities of man. To say that we can’t account for change and therefore it must be due to man is just stupid. The fact is, we don’t understand the simplest processes that bring about the change that we observe on a seasonal and inter-annual basis.

    More about the symmetry between the temperature of the tropical stratosphere/upper troposphere and sea surface temperature at: http://climatechange1.wordpress.com/2009/06/29/the-southern-oscillation-the-young-persons-guide-to-climate-change/

  16. This looks very interesting. Have you considered writing a paper and submitting it for peer review?

  17. “2) Basic physics shows that increasing infrared absorbing gases in the atmosphere like CO2, methane, N2O, ozone, and chloro-fluorocarbons, inhibits the escape of infrared radiation to space, and so does provide a positive forcing.”

    Dear oh dear. And after I had spent all that time last week explaining why that mechanism was wrong! Does nobody listen? :-)
    (I expected no different, of course. Everybody gets it wrong. We must take this sort of thing in good humour, but Al Gore’s movie has a lot to answer for…)

    With careful parsing, the statement can be interpreted in such a way that it is technically true, but I’m not totally convinced that was deliberate, and it’s terribly misleading. It doesn’t affect the rest of the post at all, but even so it would be nice if we could get the “Basic physics” right.

  18. What’s actually CRITICAL for global policy right now is not the carbon dioxide-only projection but the REAL projection for the next 25 years.

    There are many predicting:
    i. Decreased solar output.
    ii. At a time of cool phase PDO and AMO also declining.

    If that’s the case, would your model predict little if any warming for the next 25 years?

    Because if so, I’d say it was a time window to really understand the total interplay of climate forces, whilst generating new CCS technologies and trialling them in ways which don’t bankrupt the economic system.

    At the same time, switching all homes to energy-neutral running, both through new stock and retrofitting old stock, would be important.

    Finally, you’d force industry to retrofit CCS and aerosol/particulate control technologies to existing power stations by, say 2030.

    What’s clear though is that all this Armageddon doomsday stuff without showing you understand the system won’t work.

    This paper is a welcome addition to furthering the COMMUNICATION of understanding beyond Goreisms.

    Gore needs to ground his Learjet. And to do that, the first thing to do is to impose 50 times the amount of green taxes on corporate jets that you do on Joe Schmo’s car. Because the high rollers aren’t going to continue their giddy life of pleasure whilst imposing sanctions on the rest of us………..

    And you make it a condition of working in Cleantech PE funds that you don’t use Learjets. At all.

    That’d make Gore practice what he preaches, eh?

  19. Good Post – thanks.

    I need to read it again but it seems to sum up pretty much where I stand.

    A couple of points on the solar effect, though. Sean F writes

    For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the little ice age and the late 20th century, which could account for a significant fraction of the observed warming.

    1. It should be made clear that the 2 w/m2 increase is the increase in solar intensity at the “top of the atmosphere” and not the average increase received by the earth’s surface. Due to albedo and the earth’s geometry this will be ~0.35 w/m2 (i.e. 2 * 0.25 *0.7). There’s lots of confusion here particularly as the current ghg forcing is reckoned to be ~1.6 w/m2.

    2. Leif Svalgaard may be the best person to answer this question. Does Judith Lean still stand by her TSI reconstruction? I know there are a number of other reconstructions, including one by LS himself, which show much less variability. Has a consensus (I tried to avoid this word, but …) been established?
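The top-of-atmosphere-to-surface conversion in point 1 can be written out explicitly. A minimal sketch, using the same figures the comment uses (0.25 is the disc-to-sphere geometric factor; 0.7 assumes the standard planetary albedo of 0.3):

```python
DELTA_TSI = 2.0    # W/m^2 change at top of atmosphere (Lean's estimate)
GEOMETRY = 0.25    # a sphere intercepts pi*r^2 of sunlight but has 4*pi*r^2 of surface
CO_ALBEDO = 0.70   # fraction absorbed, for a planetary albedo of 0.3

surface_forcing = DELTA_TSI * GEOMETRY * CO_ALBEDO
print(surface_forcing)  # 0.35 W/m^2, versus ~1.6 W/m^2 quoted for current GHG forcing
```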

  20. Stevo (02:22:38) :
    .
    .

    Dear oh dear. And after I had spent all that time last week explaining why that mechanism was wrong! Does nobody listen? :-)

    Have you a link to your explanation?

  21. In 2000, Dr Robert E Stevenson, (now deceased), Oceanographer and one-time Secretary General of the International Association for the Physical Sciences of the Oceans (IAPSO), wrote an assessment of Levitus et al (2000) on global heat budget.

    http://www.21stcenturysciencetech.com/articles/ocean.html

    Yes, the Ocean Has Warmed; No, It’s Not “Global Warming”

    This is a small extract:

    “How the Oceans Get Warm
    Warming the ocean is not a simple matter, not like heating a small glass of water. The first thing to remember is that the ocean is not warmed by the overlying air.

    Let’s begin with radiant energy from two sources: sunlight, and infrared radiation, the latter emitted from the “greenhouse” gases (water vapor, carbon dioxide, methane, and various others) in the lower atmosphere. Sunlight
    penetrates the water surface readily, and directly heats the ocean up to a certain depth. Around 3 percent of the radiation from the Sun reaches a depth of about 100 meters.

    The top layer of the ocean to that depth warms up easily under sunlight. Below 100 meters, however, little radiant energy remains. The ocean becomes progressively darker and colder as the depth increases.

    The infrared radiation penetrates but a few millimeters into the ocean. This means that the greenhouse radiation from the atmosphere affects only the top few millimeters of the ocean. Water just a few centimeters deep receives none of the direct effect of the infrared thermal energy from the atmosphere! Further, it is in those top few millimeters in which evaporation takes places. So whatever infrared energy may reach the ocean as a result of the greenhouse effect is soon dissipated.

    The concept proposed in some predictive models is that any anomalous heat in the mixed layer of the ocean (the upper 100 meters) might be lost to the deep ocean. It is clear that solar-related variations in mixed-layer temperatures penetrate to between 80 to 160 meters, the average depth of the main pycnocline (density discontinuity) in the global ocean. Below these depths, temperature fluctuations become uncorrelated with solar signals, deeper penetration being restrained by the stratified barrier of the pycnocline.

    Consequently, anomalous heat associated with changing solar irradiance is stored in the upper 100 meters. The heat balance is maintained by heat loss to the atmosphere, not to the deep ocean. “

  22. The author talks of infra-red absorbing gases such as CO2. My understanding is that CO2, on receiving a quantum of infra-red, instantly radiates in a random direction a quantum of infra-red at the same wavelength and energy. If CO2 absorbs IR, it must get warm.

    I thought this was one of the main misdirections used in the so-called greenhouse gas theory.

    Not so?

  23. Steve Fitzpatrick: The model shows that the climate sensitivity to radiative forcing is approximately 0.27 degree per watt/M^2, BASED ON THE ASSUMPTION THAT RADIATIVE FORCING FROM WMGG’S HAS CAUSED ALL OR NEARLY ALL THE MEASURED TEMPERATURE INCREASE SINCE ~1871.

    The question I would like to ask is: what would be the climate sensitivity if the radiative forcing from WMGG’s caused only 50% of the warming, as this is the approximate position of the IPCC? What would it be if it caused only 25% of the warming, and if it caused only 10%?

    In each of the above scenarios what would be the average temperature in 2060 compared to the 2008 average? (And I presume this assumes that the average solar intensity and the Earth’s average albedo do not change?)
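Assuming, crudely, that the fitted sensitivity scales linearly with the fraction of the observed warming attributed to WMGG’s (a real answer would need the model refit, so treat this as illustration only):

```python
BASE_SENSITIVITY = 0.27  # deg C per W/m^2, fitted assuming WMGGs caused ~all warming

scaled = {}
for fraction in (1.00, 0.50, 0.25, 0.10):
    # linear-scaling assumption: half the attributed warming -> half the sensitivity
    scaled[fraction] = BASE_SENSITIVITY * fraction
    print(f"WMGG share {fraction:.0%}: ~{scaled[fraction]:.3f} C per W/m^2")
```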

  24. A couple of things common to most models:

    1) This all assumes that the concentration of greenhouse gases is the same over every part of the planet. Once you state that assumption it becomes obvious that it isn’t going to be completely true since there are both sources and sinks for the emissions, especially for CO2. How big are the variations? I don’t know. It could be that they are insignificant compared to the overall ratio. However, I suspect that you will find that concentrations of any man-made greenhouse gas will be highest in the likely source areas–Northern hemisphere over land primarily, and lower in probable sink areas like forested tropical areas and over tropical oceans. What does that do to the overall impact? It would be interesting to model that.

    2) This all assumes that the measured temperatures are reasonably accurate representations of overall temperatures for the planet. There are a lot of reasons to doubt that. Temperatures tend to get measured in areas where it’s convenient for people to measure them. That often means in cities or near airports. However, you have to be careful about measurements from rural areas too. How many of the sensors are close to hog or cattle confinement operations? Both are major producers of Methane, CO2, Ammonia, and Hydrogen Sulfide. If any of those gases have an impact on temperature they would have their greatest impact near the source. Anthony’s surface station project should probably look at how close rural temperature sensors are to confinement operations.

    3) This all assumes that there will be no biological response to increased CO2. It’s more likely that after a lag of a few decades there will be biological shifts that favor plants (especially microscopic ones) capable of exploiting higher CO2 levels, reducing or at least partially balancing increased emissions.

  25. “World temperatures are set to rise much faster than expected as a result of climate change over the next ten years, according to meteorologists.”

    This is how the people in power see the earth’s climate sensitivity. Unfortunately they seem to share a lot in common with an end-of-the-world cult: the day after the day of judgement, another future date will be picked.

    http://www.telegraph.co.uk/earth/earthnews/5925523/World-temperatures-set-for-record-highs.html

    “a new study by Nasa said the warming effect of greenhouse gases has been masked since 1998 because of a downward phase in the cycles of the sun that means there is less incoming sunlight and the El Nino weather pattern causing a cooling period over the Pacific.”

    “The new study adds the effect of El Nino, which is entering a new warm phase and of the impact of the solar cycle.”

    “Gareth Jones, a climate research scientist at the Met Office, said the effect of global warming is unlikely to be masked by shorter term weather patterns in the future. He said that 50 per cent of the 10 years after 2011 will be warmer than 1998. After that any year cooler than 1998 will be considered unusual.”

  26. Leif Svalgaard (23:04:09) :

    For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter
    Using your own numbers this gives an increase of 2*(239/1366)*0.267 = 0.09 degrees, hardly a “significant fraction of the observed warming”.

    Blimey, Groundhog day again.

  27. So according to the above, the Worst Case Sensitivity for a doubling of CO2 is ~1.0 degree C.

    Other analyses, and the current cooling, suggest a lower figure, between 0.0 and 0.3 C; either way, so low as to be inconsequential.

    I accept 0.0 to 0.3C.

    I accept inconsequential.

  28. What’s really important with all this is not the absolute precision of any estimate of warming or cooling or sea levels, etc.

    What is important is whether the majority of people in major countries (and therefore their political leadership ) believe one position or another. Those beliefs will drive political, economic, demographic and military decisions and actions regardless of any scientific pronouncements that contradict those beliefs. Some countries may decide that their survival hinges on preparations for repelling a perceived political, economic, demographic or military threat directly related to a belief in global warming and institute policies and actions that exacerbate tensions that already exist either internally or externally. Those actions inevitably result in other countries developing countermeasures to the above to ensure their own survival and prosperity. It quickly becomes a sort of arms race, in which the actual behavior of the climate over time is irrelevant. We have seen the beginnings of this in the recent disputes over Arctic oil and gas resources, as well as other natural resources around the world.

    Perception is everything, and if the future is perceived as a zero sum game then we are all in a lot more trouble than anything that could be brought about by a few degrees of climate change.

  29. DennisA (02:59:26) :

    In 2000, Dr Robert E Stevenson, (now deceased), Oceanographer and one-time Secretary General of the International Association for the Physical Sciences of the Oceans (IAPSO), wrote an assessment of Levitus et al (2000) on global heat budget.

    http://www.21stcenturysciencetech.com/articles/ocean.html
    It is clear that solar-related variations in mixed-layer temperatures penetrate to between 80 to 160 meters, the average depth of the main pycnocline (density discontinuity) in the global ocean. Below these depths, temperature fluctuations become uncorrelated with solar signals, deeper penetration being restrained by the stratified barrier of the pycnocline.

    Consequently, anomalous heat associated with changing solar irradiance is stored in the upper 100 meters.

    While I agree with Stevenson on his analysis of the non-heating of the ocean by greenhouse gases, I take issue with him on this part.

    My calculations (confirmed by Leif Svalgaard) show that to account for the thermal expansion component of sea level rise between 1993 and 2003, the oceans must have retained around 14×10^23J from the sun over and above the energy they receive and retransmit. This is equivalent to a 4W/m^2 forcing and must be solar in origin, plus cloud modulation. There was less cloud in the tropics during the period in question according to ISCCP data.

    The global rise in SST over the same period was around 0.3C. The falloff of temperature to the thermocline is approximately linear below the mixed surface layer and this is consistent with an average increase in the temperature of 0.15C for the top 700m of ocean.

    I asked James Annan, an oceanologist, how warmth got mixed down to those depths, far below the mixing in the top layer performed by waves. He replied that tidal action, and strongly subducting currents at high latitudes taking down warm water arriving from the tropics, explained the deeper mixing.

    This is consistent with extra warmth at deeper levels apparently unconnected with solar forcing, but it still leaves room for another possibility: that the amount of heat coming through the thin seabed from the earth’s interior may vary over time due to changes in the subcrustal currents within the earth’s mantle. These appear to be connected to variations in Length of Day.

  30. Steve Fitzpatrick:
    “There has in fact been significant global warming since the start of the industrial revolution (beginning a little before 1800), concurrent with significant increases in WMGG emissions from human activities.”

    This makes no sense at all. The consensus appears to admit that all warming up to about 1970 was natural. There simply wasn’t enough CO2 to have any effect. In fact the Hadley two-graphs ‘proof’ depends on this.

    I’m sorry, but anyone who publishes a graph showing the global temperature over the next fifty years is probably deluded. No one knows what the climate will be in 2060. It may be warmer. But it may also be colder. About the only thing we can agree on is that the IPCC projections are wildly exaggerated, probably for political reasons.

    Probably the best long-term climate records are provided by the ice cores. They appear to show that CO2 is an effect and not a cause. And over hundreds of millions of years there is essentially no correlation between CO2 and temperature. This strongly suggests that the effect of CO2 on climate is negligible. We were fortunate to have enjoyed a modest warming during the 20th century, but it may not last.

    Although I’m sure that the author is correct when he says the IPCC projections are too high, he may have fallen into the same trap as many other modellers. It’s pretty easy to predict what has already happened. The trick is to accurately forecast the future. Due to the chaotic nature of weather and climate, it’s probably impossible to predict beyond a few weeks. The ridiculous Met Office quarterly forecasts are a good example of this.

    It will be interesting to see how well that straight green line predicts the global temperature over the coming years. My guess is that it’s probably wrong. Sorry.

    Also, congratulations to WUWT for publishing this essentially pro-AGW article. It shows a good sense of balance, something lacking in some other web sites we could name!
    Chris

  31. Some editing comments —

    Fig 4 is a duplicate of Fig 3. The other figures are all moved down one. The real Figure 6 is missing.

  32. This is a very good paper.

    One of the most important insights is that under the Stefan-Boltzmann equation, the very equation that underpins most of climate science, the surface temperature should only increase 0.27C per watt/metre^2 increase in forcing. Stefan-Boltzmann is actually a diminishing-returns relationship (temperature rises only with the fourth root of emitted power), much as the ln(GHG) versus temperature relationship is logarithmic.

    Global warming theory uses 0.75C per watt/metre^2 for the long-term climate sensitivity, but the point we are at on the Stefan-Boltzmann curve will only yield 0.27C per watt/metre^2. It is also interesting that the climate models use 0.27C to 0.32C per watt/metre^2 for hindcasts and short-term predictions, but over time the number is increased.

    I like adding in the other GHGs (which I didn’t do), the expected lags in the climate system are not occurring, and I think the conclusions presented by Steve are quite accurate.

  33. In the report I read ” With 1 watt/M^2 more input, the required blackbody emission temperature increases to 255.069, so the expected climate sensitivity is (255.069 – 254.802) = 0.267 degree increase for one watt per square meter of added heat.”

    I am not sure quite how to interpret this. It seems to assume that one can solve all there is to solve about greenhouse gases by considering only radiation as the way energy moves through the atmosphere. This ignores the effects of conduction, convection, and the latent heat of water. I find this simplification, if that is what it is, hard to believe.
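The quoted calculation is easy to reproduce. Below is a blackbody-only sketch of the number in question; it deliberately ignores conduction, convection, and latent heat, which is exactly the simplification the comment worries about:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def blackbody_temp(flux_w_m2: float) -> float:
    """Temperature a blackbody needs to emit the given flux: T = (W/sigma)**0.25."""
    return (flux_w_m2 / SIGMA) ** 0.25

t0 = blackbody_temp(239.0)  # ~254.80 K, Earth's effective emission temperature
t1 = blackbody_temp(240.0)  # ~255.07 K, with one extra W/m^2
print(round(t1 - t0, 3))    # close to the 0.267 K per W/m^2 quoted above
```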

  34. OK, and what was the CO2 concentration in 1850?

    IPCC says 280 ppm; others say 320 to 345 ppm. Big differences!

    Nobody knows how long human CO2 emissions will last in the atmosphere. 10 years? 50? 200?

    Why has the CH4 curve been flat for some years now?

  35. M White (03:22:28) :

    Gareth Jones also said:

    “The amount of warming we expect from human impacts is so huge that any natural phenomenon in the future is unlikely to counteract it in the long term”.

    We’ll see…

  36. A very thoughtful article. It makes a plausible case. My comments should not be taken as negative towards the author. Rather, these are thoughts I often have regarding any models I have seen.

    As someone above mentioned, the forcing phenomena/mechanisms proposed include several cyclic ones and one that is monotonically increasing, and the fit is to a temperature series that shows an increase; thus, the predicted temperature must increase. This and other models seem to say: it must be CO2, because I can’t think of anything else it could be, given our lack of understanding.

    What I question with any model is:

    Start point — By chance, good temperature records and good CO2 records begin at a local minimum — so only a net increase is shown. Better if a model could go back a couple of thousand years and work its way up to the present, in the process showing how it would account for historical heating and cooling in the absence of anthropogenic CO2.

    The basic historical temperature data itself — The accuracy of the historical temperature data is suspect due to many factors, and the Hadley data are manipulated. Can one actually believe the Hadley (or GISS, etc.) anomalies are accurate and precise representations of the actual historical global temperature? I honestly don’t think that has been definitively shown.

    And, lastly (for today at least), the global land and sea temperatures are measured at sites and in ways that may not accurately indicate the heat accumulation (or lack thereof) over the entire earth’s land surfaces to depth and throughout the ocean depths. But AGW is all about global heat accumulation — for which global temperature is only a proxy.

  37. Has anyone ever tried to model a future where CO2 begins to decrease at the current rate of increase? What would happen if we took this steady decrease down to 0? If everything else remained the same, at what level of CO2 would the “tipping point” of unstoppable cooling be achieved? Would that be a reverse demonstration of whether CO2 drives climate or not? Would that be of any use?
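One way to sketch that thought experiment is with the standard logarithmic forcing approximation, delta-F = 5.35·ln(C/C0) (Myhre et al. 1998), combined with the ~0.27 C per W/m^2 sensitivity discussed in the post. The starting concentration and the decline rate below are hypothetical placeholders:

```python
import math

SENSITIVITY = 0.27   # deg C per W/m^2 (the post's fitted value)
ALPHA = 5.35         # W/m^2 per ln(C/C0), standard logarithmic approximation
C0 = 385.0           # assumed present-day CO2, ppm (placeholder)
RATE = 2.0           # hypothetical decline, ppm per year

for years in (0, 25, 50, 100):
    c = C0 - RATE * years
    dF = ALPHA * math.log(c / C0)  # negative: forcing falls as CO2 falls
    dT = SENSITIVITY * dF          # implied equilibrium temperature change
    print(f"year {years:3d}: CO2 {c:5.1f} ppm, dT {dT:+.2f} C")
```

Nothing in this simple picture produces a tipping point; the response is smooth and logarithmic in both directions, which is of course part of what is in dispute.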

  38. “congratulations to WUWT for publishing this essentially pro-AGW article. It shows a good sense of balance, something lacking in some other web sites we could name!
    Chris”

    Hear, Hear!

  39. Is the increase of CO2 due to the burning of “fossil” fuels? When I look at the CO2 data over the last 10 years, the increase in atmospheric CO2 concentration looks almost linear. When I look at the last 10 years of energy consumption (excluding nuclear, solar, hydroelectric, wind and geothermal), the increases in consumption are anything but linear. How correlated are atmospheric CO2 increases with changes in energy consumption? Can anyone point me to any papers on this subject?

  40. Please check the figures; I believe a number of them are incorrect. Fig 4 appears to be a repeat of Fig 3, and that may be what is throwing the others off.

  41. Leif Svalgaard (22:57:52) :

    “For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the little ice age and the late 20th century, which could account for a significant fraction of the observed warming”
    She doesn’t believe that anymore, neither does anybody else [except some climatologists].

    How did you do that? You answered my question before I’d asked it.

    John Finn (02:46:54) :

    2. Leif Svalgaard may be the best person to answer this question. Does Judith Lean still stand by her TSI reconstruction? I know there are a number of other reconstructions, including one by LS himself, which show much less variability. Has a consensus (I tried to avoid this word, but …) been established?

  42. Leif Svalgaard (23:04:09) :

    “For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter. Using your own numbers this gives an increase of 2*(239/1366)*0.267 = 0.09 degrees, hardly a “significant fraction of the observed warming”.”

    Please forgive a comment from someone with absolutely no credentials in this field whatsoever, but haven’t you just plugged the 0.267 answer back into the calculation? I took the 0.267 to be the sensitivity of temperature in C per W per m^2. For a 2 W increase, the temperature would rise 2*0.267 = 0.534C. Or am I doing something fundamentally daft (wouldn’t be the first time)?

    I do disagree that a rise of 0.09C is ‘hardly a significant fraction’. If my “one minute Google” research of a 1C rise since the Little Ice Age is correct, then the sun has caused 9% of the temperature rise since then. I call that significant, from my point of view.

    Thanks for your attention,
    Jens.

  43. Steve Fitzpatrick,
    One of the key problems with an analysis of climate sensitivity from temperature data, such as the one you have performed, is the estimation of the lag time for the ocean surfaces to heat up. The use of the solar cycle versus temperature data is problematic. It is fine if the system consists of only one heat reservoir, so that a single time constant is appropriate. The problem is that the ocean has a shallow and a deep reservoir with different time constants, and the easily observed smaller reservoir, which has a 2-year time constant, will give you too small an answer for the climate sensitivity.
    A more complex model is required to get a correct answer.
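The two-reservoir caution can be made concrete with a minimal two-box energy-balance sketch (forward Euler integration; every parameter here is illustrative, chosen only so the shallow box has roughly the 2-year time constant the comment mentions):

```python
# Minimal two-box energy balance model: shallow mixed layer + deep ocean.
# Illustrative parameters only -- not a fit to any data set.
DT = 0.01            # time step, years
FORCING = 1.0        # step forcing, W/m^2
LAMBDA = 3.7         # radiative restoring, W/m^2 per K (illustrative)
K_EXCHANGE = 0.7     # shallow-deep heat exchange, W/m^2 per K (illustrative)
C_SHALLOW = 9.0      # mixed-layer heat capacity, W yr/m^2 per K (~2 yr tau)
C_DEEP = 100.0       # deep-reservoir heat capacity (much slower)

t_shallow = t_deep = 0.0
for _ in range(int(50 / DT)):  # integrate 50 years
    flux_down = K_EXCHANGE * (t_shallow - t_deep)
    t_shallow += DT * (FORCING - LAMBDA * t_shallow - flux_down) / C_SHALLOW
    t_deep += DT * flux_down / C_DEEP

print(round(t_shallow, 3), round(t_deep, 3))
```

With these numbers the shallow box settles near F/(lambda+k) ≈ 0.23 K within a few years, while the true equilibrium F/lambda ≈ 0.27 K is approached only on the deep box's century-scale time constant, which is the commenter's point: a fit that sees only the fast response underestimates the sensitivity.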

  44. Barry R. (03:18:11) :

    A couple of things common to most models:

    1) This all assumes that the concentration of greenhouse gases is the same over every part of the planet. Once you state that assumption it becomes obvious that it isn’t going to be completely true since there are both sources and sinks for the emissions, especially for CO2. How big are the variations? I don’t know. It could be that they are insignificant compared to the overall ratio. However, I suspect that you will find that concentrations of any man-made greenhouse gas will be highest in the likely source areas–Northern hemisphere over land primarily, and lower in probable sink areas like forested tropical areas and over tropical oceans. What does that do to the overall impact? It would be interesting to model that.

    NASA mid-troposphere measurements (worldwide) in July are roughly 375 ppm +/- 10 ppm, with the highest values over western North America, the western Atlantic basin adjacent to North America, and the Middle East. So exactly the regions you expected.

    Chris Wright (04:07:24) :

    Also, congratulations to WUWT for publishing this essentially pro-AGW article. It shows a good sense of balance, something lacking in some other web sites we could name!

    Congratulations to this site for welcoming all sorts of opinions and thinking, but I don’t see this as necessarily pro AGW. The real issue it seems to me is not whether CO2 or temperature has increased over the past 50 years…we can see the measurements ourselves, but rather, what these observations mean for the future. In this case climate sensitivity is very important, and people who are quite alarmed by the situation see a sensitivity above 0.5C/(W/m^2), while this result places the value at half that. Much less alarming.

    By the way, we can arrive at roughly the same value of sensitivity in three more different ways. Set W=e(sigma)T^4 (Stefan Equation), differentiate T with respect to W, and the result is the sensitivity. If one plugs in e=0.98, sigma = 5.67×10^(-8), and T as 288K, then one gets 0.25. If one takes the estimated cooling of Earth from Pinatubo and the estimated insolation change one gets about 0.5K/2.7(W/m^2)=0.19. If one takes the mean Earth temperature change over the glacial cycle (10C or so), and divides by the change in insolation (50W/m^2), then one finds 0.2 as the implied sensitivity. Of course these are simplified “models” and would be ridiculed in debate…why use a simple idea when a complicated one will do as well? (I jest).

    The issue is more complex, of course, with people worrying about the effect of all the feedback influences. These feedback signals have all sorts of varying time scales, and perhaps regional influences (which makes me wonder the usefulness of mean Earth temperature in the first place), and this is what makes the issue interesting.

    Since longwave radiation depends on temperature to the fourth power, the mean temperature in our radiation equations should more meaningfully be the fourth root of the mean of temperature to the fourth power. Does anyone know if someone has made such a calculation?
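The closing question is easy to demonstrate numerically: because T^4 is convex, the radiatively effective mean (fourth root of the mean of T^4) always meets or exceeds the arithmetic mean of a non-uniform temperature field (Jensen's inequality). A toy two-point sketch:

```python
temps = [250.0, 300.0]  # toy temperature field, K (illustrative values)

arithmetic_mean = sum(temps) / len(temps)  # 275.0 K
radiative_mean = (sum(t**4 for t in temps) / len(temps)) ** 0.25

print(arithmetic_mean, round(radiative_mean, 1))  # 275.0 vs ~278.4 K
```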

  45. I read this as an attempt to assume the AGWarmers are right, that the warming in the last century and a half is due to greenhouse gases, then make simplifying assumptions that won’t distort that hypothesis too much and see how bad it gets in fifty years. Further, check the numbers under the assumption we make some civilization-bending efforts to reduce greenhouse gases, and under the assumption we make some civilization-ending efforts to reduce greenhouse gases, and see how we do. The answer seems to be: if we do nothing it gets a little warmer (~1C), if we bust our collective butts it gets a little less warm (~0.9C), and if we destroy civilization it will be less warm still (~0.8C). He leaves it up to the politicians to decide which course of action makes the most sense . . . in which case we’ll likely take the civilization-ending route.

  46. Minor typo alert, beginning of third paragraph. “his” should be “this”:

    Many people, including his author, do not believe the large temperature increases (up to 5+ C for doubling of CO2) projected by GCM’s are credible.

  47. The map of the globe at the head of this post is too important to ignore.

    One must understand the geographical aspects of climatology if one is to come to grips with how the globe warms and cools. One can then appreciate the real place of long wave radiation, cloud cover and the Southern Oscillation in this fascinating process. Greenhouse theory can then be put in its proper context. It is almost totally irrelevant.

    Referring now to the map, notice the low levels of long-wave radiation from the three centers of strong convection, namely the Amazon, the Congo, and the Indian Ocean between India and New Guinea. The air above these regions is characterized by decompressive cooling associated with strong uplift. The amount of long-wave radiation emanating from these regions is slight; as slight as that from the coolest parts of the globe. In these locations the air cools via the same decompressive mechanism that is utilized in a domestic refrigerator. It does not cool by emitting long-wave radiation.

    What goes up must come down. If the air rises in strong centers of convection it must fall somewhere else. Where the air descends it will warm via compression, and in so doing it loses cloud. That descending air emits high levels of long-wave radiation. Notice that high levels of radiation are associated with dry, cloud-free air and low levels with wet, cloudy air. Without water vapor, the presence of the so-called greenhouse gas has little effect in trapping outgoing long-wave energy. To the extent that a ‘greenhouse gas’ is present, it will have the effect of reducing cloud cover in relatively cloud-free zones. (No amplifier.)

    Notice the extent of the oceans of the southern hemisphere where outgoing long wave radiation is relatively more intense. ( This is a simple function of the shortage of land mass in the Southern Hemisphere). The atmosphere above these areas is relatively cloud free because the air is descending and warming. This ocean accordingly receives a lot of direct sunlight.

    The flux in cloud cover above the southern oceans is the basis of the Southern Oscillation. Cirrus cloud forms on the margins of, and between, the zones of descending air over the southern hemisphere oceans. There is a strong seasonal warming of the entire atmosphere between April and September due to radiation of solar energy by the land masses of the northern hemisphere. This causes a loss of cloud cover globally (about 3%) and a strong loss of cloud in the southern tropics. Superimposed on this seasonal oscillation there is a warming and a cooling of the stratosphere and the upper troposphere down to about 200 hPa based on a flux in ozone content associated with the relative strength of the polar vortices. The Arctic vortex is weak, operates only in winter and fluctuates in its strength on decadal and longer time scales. The resulting flux in stratospheric ozone causes a parallel change in the extent and opacity of high altitude cirrus cloud above 200 hPa. It has long been known that a sudden stratospheric warming is associated with warming of the tropical ocean, and it has become apparent in recent times that this warming is most intense in the southern hemisphere between 20° and 40° of latitude between November and March. Some three or four months following the sudden stratospheric warming, the sea at the equator reflects the intensity and timing of that stratospheric warming. By that time, the stratospheric warming responsible for the sea surface warming is well past.

    On long time scales one must look to the forces that determine the concentration of ozone in the stratosphere if one wants to explain surface warming. Chief amongst these is the flux of nitrogen oxides from the mesosphere, a factor that relates directly to solar activity.

    The change in the temperature of the stratosphere/upper troposphere is a fascinating area of study. Good data is available from 1948. The Southern stratosphere is the most volatile. It warmed strongly up to 1978 and has cooled since that time. That trend continues. Our globe is gaining cloud in direct relation to the diminishing temperature of the upper troposphere/lower stratosphere where cirrus cloud forms. The warming between 1948 and 1978 was abrupt. The cooling since that time has been slow but relentless. In response, the atmospheric windows that allow solar radiation to reach the surface of the ocean in the southern hemisphere are gradually shrinking in extent.

    The forces that determine the character of the Southern Oscillation operate on very long time scales. The Oscillation is constantly changing. A recent study suggested that 70% of the variability in global temperature could be attributed to the Southern Oscillation. On the basis of my knowledge of the temperature of the stratosphere I would guess that this figure is an underestimate.

    A mathematician who does not understand the dynamics of climate is in no position to predict anything. His tools are of no value.

    Climatology, as we know it today, has nothing to say about the causes of the Southern Oscillation. Until this phenomenon is understood we will be at the mercy of snake oil salesmen, charlatans and cranks.

  48. A new paper by Lindzen and Choi (described at WUWT on August 23, 2009)

    Do you have something scheduled to drop on August 23?

  49. There is another assumption that needs to be adjusted to reality: the notion of “well mixed”. GHG’s are not well mixed. Water vapor is not well mixed. Aerosols are not well mixed. And salt spray is the leading contender for aerosols by lengths. Storms, jet streams, and just plain ol’ wind can knock down both GHG’s and aerosols as well as remix them in ever changing globby concentrations. The rotation away from the Sun can change ozone amounts here and there. Seasons change the globby mix. Ocean conditions change the globby mix. If we were to color each of these components of our atmosphere and then take a video of our planet from the outside looking in, we would find a swirling ball of ribboned colors. Quite pretty, actually. But well mixed it ain’t.

  50. The other thing I have sticking in my craw has to do with the dynamical nature of models. They have yet to be proven in chaotic systems with multiple variables that are not readily predicted. The proper use of dynamical models is to include a control set of statistical models. Whatever happened to the idea of comparing something new to a gold standard? Did that get nixed from proper scientific research too?

  51. If the models work so well, why can’t they render a decent temperature forecast with any precision beyond 30-60 days? The GCMs (using the assumption that WMGHGs are the driving mechanism for climate) cannot provide a decent forecast of changes in ENSO, the AMO, etc. Neither NASA, NOAA, nor HadCRUT has any skill in predicting even regional short-term climatic changes. Almost invariably they come off too warm.

    To make matters worse, they are trapping themselves in a corner by shortening the time periods in question. Climate science deals with anomalies that cover hundreds of years, but these people are now in the business of making seasonal-variation predictions, and in the process are getting burned. Climate is now defined (according to our experts) as changes from year to year. When they come out wrong (which is quite often), they write it off as just “weather”; when their predictions come out correct, it is AGW or Climate Change.

  52. John Finn (06:36:29) :
    Does Judith Lean still stand by her TSI reconstruction.
    Lean is continuously updating her reconstruction [as is proper when new data or insight comes along]. Her latest view [which I share and which most other researchers are coming around to] was expressed by her at the TSI meeting in 2008 [SORCE, Santa Fe]: “no long-term variation has been detected – do they occur?”. All reconstructions have been converging to the ‘flat’ version with little or no long-term variation.

    It is unfortunate that people [deliberately?] confuse the Top of Atmosphere and actual insolation at ground level. The TOA is 1361 [or 1366 if you like that better – it makes no difference]. Lean’s obsolete 2 W/m2 were for this figure, so translated to the ground it would only be 239/1361 times 2 W/m2, corresponding to 0.09 degrees. But even that 0.09 did not occur, as TSI [at TOA] did not change the 2 W/m2. So, if you want to ascribe the LIA to the Sun, you can’t invoke TSI. Cosmic ray proxies do not show any marked change over that time either.

  53. Interesting. Speculative, but interesting.

    I once created a simple EBM which could explain the effect of the eruption of Pinatubo with a very low sensitivity (about 0.6 K per CO2 doubling). Maybe I can dig out the Excel file…

  54. Steve, thanks for your contribution to the discussion. However, I don’t understand why you think you can use AMO and ENSO in a model. You say:

    4. The grand total forcing (including the solar cycle contribution), a 2-year trailing average of the AMO index, and the Nino 3.4 index are correlated against the Hadley Crut3V global average temperature data.

    The problem is that the AMO Index and the Nino 3.4 index are both measurements of sea surface temperature (SST). You present it as some kind of revelation that changes in SST today will affect the global temperature in the near future … but this is trivially true for any autocorrelated dataset, and is particularly true for SST and global temperature.

    Since AMO and Nino3.4 are measurements of today’s temperatures, they are absolutely useless as model “input”. It’s like saying “I can hindcast yesterday’s temperature with success way better than chance … if I can use day before yesterday’s temperature as input.” Yes, this is also trivially true, and works very well on hindcasts … but if you think that will allow you to forecast the next decade you are very mistaken.

    So no, the AMO and the Nino 3.4 do not predict the drop in temperature 1940 – 1970, nor the drop 1875 – 1900. They do not forecast those temperature drops at all, they measure those drops. So claiming that they make your model more accurate, while true, is meaningless.

    You present it as if it were a surprising result that if you remove measured temperature variations for the Atlantic and the Pacific oceans, the variance in global temperature is reduced … but doing that has no effect on the accuracy of any model. It also means nothing about the credibility of any model.

    Absent the AMO and the Nino 3.4, you are simply asserting (without providing a scintilla of evidence) that temperature will rise with log CO2 … but that is what the debate is about, you can’t just assume that.

    w.

  55. Quote:
    1) It is reasonable to expect that positive forcing, from whatever source, will increase the average temperature of the Earth’s surface.

    Isn’t this the definition of positive forcing?

    The article mentions the effect of aerosols and carbon particles in the atmosphere. I have never seen any discussion of the effect of the various “Clean Air Acts” introduced over the last 60+ years.

  56. Steve F. – I have a full-time job, kids, wife, house, etc. and would like to understand more about climate models. I really don’t have time to learn everything necessary to start from scratch. I was wondering if you could share your model. It might help those of us who are not professionals, but have a background in science that would allow us to (eventually) come to a better understanding of climate models.

  57. Steve – does your model take into account the dark side of the Earth, which is exposed to a near-zero radiative temperature?

  58. Thanks for the great post. It’s posts like this that reinforce my rejection of positive feedback and Hansen’s ‘x3 CO2 forcing factor’, a factor that was decided before most of the science was even started. No feedback has a better fit with past temperature. So a 1-1.2C rise for a doubling of CO2 is the default case.

    CO2 can only warm the oceans if it warms the atmosphere first. The atmosphere has no thermal inertia so warming from CO2 must happen every time the sun comes up – something we would see by now. How can warming be hiding in the oceans if the warming of the atmosphere is the very mechanism that is supposed to warm the oceans?

    Whole-world cloud feedback is near to zero – strongly negative in the tropics, less so at middle latitudes, and positive at the poles. Roy Spencer has intimated that negative feedback lessens away from the tropics, and we know that clouds warm the poles, so it’s only a matter of degree in between.

    The whole positive feedback case is based upon an increase of CO2 causing a small increase in temperature which causes an increase of absolute humidity. This in turn leads to a second round of warming. But the 1998 El Nino raised temperatures quite a bit. If positive feedback was true then the temperature should have stayed permanently higher. It didn’t so how can positive feedback possibly be right? The reason it didn’t is because the atmosphere isn’t saturated with humidity – the extra humidity gets taken out of the atmosphere.

    Climate isn’t complicated. It’s actually very simple. The problem we are up against is, it doesn’t matter what is said, only who says it.

  59. Willis, I disagree. Using statistical models you can get in the ballpark with predictions. What you can’t predict as well is whether or not the conditions will happen as they have happened in the past. So you run several statistical models. One will win out by pulling ahead of the others. Dynamical models don’t work so well because we don’t know how the chaotic system works. Statistical models don’t care. They just spit out what happened in the past given the current set of conditions. And usually a number of things happened more than others under the same setting in the past so you set your confidence level and hit the button to get the most likely temp outcome.

  60. Just to add to Steve’s point about the 0.27C per Watt/metre^2.

    I built a few charts of how temperatures are related to the forcings and the change in the forcing using the Stefan-Boltzmann equations.

    Here is how Earth’s surface temperature changes with changes in solar radiation (at the top of the atmosphere). The point of this is to show that the relationship is a fourth-root curve, and as one goes up in Watts, each successive temperature increase is smaller and smaller. [I used solar since so many people are interested in that.]

    Now here is how the surface temperature will change for each 1 Watt/metre^2 of forcing at the surface over the range of Watts we are concerned about. (This is now the surface, so it is solar/4 plus the greenhouse effect.)

    This really means global warming is only one-third of that estimated, give or take other changes (such as albedo) which could happen with global warming. It might also explain why temperatures have not kept up with the predictions of the models: whatever forcing is being absorbed in the oceans is actually taking away Watts, so there is less forcing at the surface than predicted, not less Temp C response per Watt than expected. The limit is still 0.27C (which declines to 0.26C after a few more Watts of increase).

    I think many people have been using averages for these calculations and one really needs to get down to “each successive Watt equals Y temp increase”. The Sun is giving us 240 Watts at the surface, which translates into 255K, or 1.06C per Watt on average. But actually, the first Watt was worth 64C, the second added only ~12C more, and so on. It is only 0.27C now.

    [Steve actually tuned me onto this in another venue which is going to help me finish another project I’m working on. Thanks.]
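The successive-Watt arithmetic above can be checked in a few lines; this is just the Stefan-Boltzmann relation inverted, a sketch using the same round numbers as the comment (240 W/m^2 absorbed, ~255 K effective temperature):

```python
# Marginal Stefan-Boltzmann sensitivity: how much one more W/m^2 warms a
# black body, at different absolute flux levels.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def t_bb(w):
    """Black-body temperature (K) that radiates w W/m^2."""
    return (w / SIGMA) ** 0.25

first_watt = t_bb(1.0)                     # ~64.8 K for the very first Watt
marginal_240 = t_bb(240.0) - t_bb(239.0)   # ~0.27 K for the 240th Watt
average_240 = t_bb(240.0) / 240.0          # ~1.06 K/W average, as in the comment
```

The marginal value falls out of the derivative dT/dW = T/(4W), which is why the average (1.06 K/W) and the marginal (0.27 K/W) figures differ by a factor of four at the same flux.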

  61. Pamela, I guess my writing is not clear. The problem is not that it is a statistical model. It is the idea that you can remove variation by using AMO and Nino3.4.

    In general, you can’t use observations to remove variance from observations. In particular, you can’t use sea temperature observations to remove variance from air temperature observations.

    Or to be more accurate, you can do that, but you add absolutely nothing to the predictive ability of a model by doing so.

    Sea surface temperature (SST) is very, very closely related to air temperature. The Hadley sea surface temperature (HadSST) enjoys a correlation of 0.93 with the Hadley global surface air temperature (HadCRUT). Steve is using a subset of the SST (Nino3.4 and AMO) to remove variations from the air temperature … I hope you can see the huge problems with that approach.

    If Steve’s approach worked, we could use the SST to remove some 86% of the variance (r = 0.93) from the air temperature, leaving a nearly straight line … but how on earth would that improve the model?

    His model is nothing but “Temperature is proportional to CO2” dressed up in fancy mathematical clothes. Open your eyes, folk … it doesn’t help to do that, it makes no difference, at the end of the day you’re just left with “temp ~ CO2”.

    w.

  62. If atmospheric CO2 falls to 220 ppm, plants get sick. They die at 160 ppm.
    …and if plants die, your beautiful little lives, global warmies, will end too!

  63. Steve Fitzpatrick,

    You Wrote:

    “So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.”

    There is a real puzzle here. Throughout the course of the year each hemisphere cycles between hot summers and cold winters. The non-linearity, if present, might show in the climatology. Basically, the individual local climatologies (e.g. as in HadCRU’s abstem3) ought to show a marked asymmetry between the (hottest month – annual average) and the (annual average – coldest month). Given that the range of temperatures in some continental areas is extreme (+/- 30C), the asymmetry ought to be marked. But the asymmetry is not apparent in the climatologies.

    Now I do not know why this should be, so it puzzles me. It could look horribly like negative feedback. Over the oceans and in maritime areas one can rightly argue that the oceans are exporting their low annual range to the land and somehow this removes the asymmetry, but I cannot see how this applies to areas like Eastern Siberia that have the largest annual range, which implies that they are largely cut off from the oceans.

    As best as I can calculate from the S-B equation the asymmetry there should be around +9% of the annual range, but is actually very small ~1% and in the opposite sense.

    Now this may all be just rubbish, so if anyone knows better please tell.

    *******

    On another tack, one area that I feel is not much commented on is the ability of the GCMs to reproduce Earth-like climatologies. That is, getting the mean temperature, annual range, and annual phase lags correct for each location. As I recall they do not do very well. There really ought to be a climatology test. Such as: if I were to wake up in a modelled world, could I tell within the course of a year whether the climatology was right or wrong? Now I am not talking about the weather, which nobody can predict, but the climate (strictly speaking, the climatology).

    On yet another tack, a guy from the Met Office was interviewed on the box a week or so back over the seasonal forecast for the UK and its failure. He said that it was easier to predict the climate in the distant future (I cannot remember exactly, but I think ~30 years out) than it is to predict the coming seasons. Could that be because it takes a lot longer before you can be judged?

    Alexander Harvey
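Alexander’s asymmetry estimate can be sketched numerically. Under the assumptions that local forcing cycles symmetrically over the year and the surface responds purely via Stefan-Boltzmann (both simplifications, and the 233 K / 293 K extremes below are illustrative stand-ins for a +/- 30C continental range, not HadCRU values), the warm excursion comes out smaller than the cold one, and the half-difference lands near his ~9% of the annual range:

```python
# Seasonal asymmetry implied by a pure Stefan-Boltzmann response to a
# symmetric forcing cycle: T ~ W^(1/4) is concave, so equal forcing
# swings give unequal temperature swings.
SIGMA = 5.67e-8  # W/m^2/K^4

def flux(t):   # W/m^2 radiated at temperature t (K)
    return SIGMA * t ** 4

def temp(w):   # inverse relation, K
    return (w / SIGMA) ** 0.25

t_cold, t_hot = 233.0, 293.0
w_mid = 0.5 * (flux(t_cold) + flux(t_hot))  # midpoint of a symmetric forcing swing
t_mid = temp(w_mid)                         # ~268 K, above (233 + 293) / 2 = 263 K

warm_excursion = t_hot - t_mid              # ~25 K
cold_excursion = t_mid - t_cold             # ~35 K
# half the excursion difference, as a fraction of the annual range:
asymmetry = 0.5 * (cold_excursion - warm_excursion) / (t_hot - t_cold)
```

With these invented numbers the asymmetry comes out around 8-9% of the range, consistent with the back-of-envelope figure in the comment; the puzzle is why nothing like it shows in the observed climatologies.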

  64. What happens to the scenario presented by Steve Fitzpatrick if one factors out the overestimation of warming that is likely to be part of the CRUT3V temperature anomaly? The CRUT3V anomaly does not fully account for socioeconomic contamination of the temperature record, as demonstrated by McKitrick and others. The real post-Little Ice Age global temperature anomaly trend is probably lower than that used in the post under discussion here.

  65. I wonder what effect man’s deforestation and desertification have had on climate in the past century? Australia alone has lost 70% of its natural vegetation.

  66. The paper is interesting, but if the draconian restriction of emissions isn’t enough to stop AGW, as that paper says the models suggest, does that mean the only way would be an unprecedented reduction of global population by more than 90 percent, and maybe even 99 percent? Would we have to completely and rapidly de-populate Africa, China, and India and make that whole continent and those countries massive plant and animal preserves to make the temps stay even?

  67. One should bear in mind, when playing with the simple Stefan’s-law balance sensitivity, that the average surface radiation is around 390 W/m^2 and the average radiated power is 240 W/m^2. Take the difference and you find that about 150 W/m^2 is lost going through the atmosphere and not made up for by radiation from higher altitudes. If you divide 240 by 390 you get 0.61 as the fraction of power that is radiated. For a small change, the increase in surface emission via Stefan’s law required to achieve balance after a 1 W/m^2 increase in absorption is actually more like 1.67 W/m^2. The result of this is more like 0.31 K per W/m^2 for a radiative-only sensitivity (in clear sky). Making the further stretch that this is valid for all W/m^2 forcings, one sees that 150*0.31 = 46 Kelvins above that of a black body, which is far greater than the observed 33 K rise caused by all GHGs. The sensitivity here is the 33 K rise divided by the 150 W/m^2 GHG absorption forcing, which amounts to 0.22 K per W/m^2. That places us in the position that real-world forcings are subject to net negative feedback relative to a simple radiative transfer. The only thing this doesn’t provide for is the additional amount of other GHG forcing changes caused by a change in temperature – i.e. the water vapor feedback.

    Note that the 0.22 K per W/m^2 is the actual Earth-system average sensitivity to a CO2-only (or any other forcing-only) rise of 1 W/m^2. Of course, if sensitivity varies as GHG absorption increases, then the current sensitivity must be lower than this average if earlier forcings had higher sensitivity levels. For current levels to be at higher sensitivities, the earlier ones would have to have had a lower effect – which makes little sense, as the power absorption attenuation itself is a log function of decreasing effect.

    What this cannot show directly is a feedback that changes the total W/m^2 forcing from another gas – for example, how many W/m^2 of increased H2O vapor forcing occur when T rises in response to, say, a 1 W/m^2 increase in CO2 forcing. A CO2 doubling of 3.6 W/m^2 should then result in 0.22 × 3.6 = 0.8 Kelvins increase in T. If a 0.8 Kelvin rise in T creates an increase in H2O vapor forcing in some sort of positive feedback mechanism, its effect must definitely be less than 0.8 Kelvins. Otherwise any small variation in upward T would result in the complete runaway of H2O vapor driving itself upward. Also, any net positive feedback of H2O vapor forcing with T leads to wild swings and variations. Net negative feedback still results in variations, but they are both stable and smaller in total swing than they would be otherwise.

    Note that a modest increase in T can only produce the possibility of H2O ‘feedback’ where there is liquid H2O available to be brought into the atmosphere; some areas do not have this ready reservoir. Also, bringing more H2O vapor into the atmosphere increases convective power transfer and can bring in that real unknown, additional cloud cover, which can shift the added H2O vapor forcing from possibly positive to seriously negative – and this is the area that is poorly known and practically ignored in modeling.
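The chain of figures in the comment above is easy to lose track of, so here is the same arithmetic laid out as a sketch. All inputs are the comment’s own round numbers, so this only checks internal consistency, not the physics:

```python
# Surface energy-budget arithmetic from the comment, step by step.
surface_flux = 390.0   # W/m^2 radiated by the surface (~288 K)
toa_flux = 240.0       # W/m^2 leaving at the top of the atmosphere

ghg_absorbed = surface_flux - toa_flux     # 150 W/m^2 kept by the atmosphere
escape_fraction = toa_flux / surface_flux  # ~0.61 of surface emission escapes

# Marginal Stefan-Boltzmann sensitivity at the surface (dT/dW = T / 4W),
# boosted because only ~61% of an extra surface Watt escapes to space:
t_surface = 288.0
marginal = t_surface / (4.0 * surface_flux)  # ~0.185 K per W/m^2
boosted = marginal / escape_fraction         # ~0.30 K per W/m^2, clear-sky

# Average realized sensitivity implied by the observed 33 K greenhouse effect:
average = 33.0 / ghg_absorbed                # 0.22 K per W/m^2
```

The gap between the ~0.30 K/W radiative-only figure and the 0.22 K/W realized average is the comment’s basis for inferring net negative feedback.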

  68. It would unfortunately take more time to answer all the comments/questions posted until now than it took to prepare the post, so I can’t possibly address all. I will try to address at least some:

    With regard to this being a ‘pro-AGW’ post, I want to point out that I am very skeptical of the large temperature increases projected by GCM’s and the IPCC, and that the net climate sensitivity the model suggests (~0.27 degree per watt) is in the range of 1/3 of the sensitivities used in making those much larger projections. On the other hand, adding any infrared absorbing gas to the atmosphere makes it more difficult for infrared radiation to escape from the Earth’s surface. The net effect of these gases has been pretty well studied, and their “radiative forcings” are reasonably well known, so the addition of infrared absorbing gases to the atmosphere ought to increase the surface temperature. The key issue is the magnitude of surface warming that might reasonably be expected from this forcing: is the feedback that operates on it negative, positive, or near zero? My post was an effort to define the warming that might take place due to greenhouse gases IF they were the only cause of warming since the mid 1800’s. The several comments suggesting that part/most/all the warming was due to other causes seem to miss the point I was trying to make: this is pretty much a worst case scenario.

    With regard to the model itself, I was not aware of when Anthony would place the post on WUWT; I had hoped to have the spreadsheet (sorry, it is not R or something else that some might prefer) available at the time of the post. I will ask Anthony to make the spreadsheet available.

    Leif: I was not aware that Lean had changed her mind about the 2 watts change since the Little Ice Age. It certainly was not my intent to misrepresent her current views. The calculations I did were based on recently measured changes in intensity over the solar cycle (peak to valley) of ~1 watt per square meter at the top of the atmosphere, and the model assumed this variation was the same since 1871. This works out to ~0.7 * 0.25 = 0.175 watt per square meter, and an expected solar signal from the solar cycle of 0.047C (peak to valley) for a sensitivity of 0.27 degree per watt per square meter. What I found interesting was that the best model fit to the temperature data corresponded to ~0.168 watt per square meter, remarkably (at least to me) close to the 0.175 watt per square meter that would be expected based on the measured variation over the last few cycles. So, for what it is worth: the model is consistent with no substantial change in cyclical variation over the past 130 years.

    Willis Eschenbach (09:13:53) : “The problem is that the AMO Index and the Nino 3.4 index are both measurements of sea surface temperature (SST). You present it as some kind of revelation that changes in SST today will affect the global temperature in the near future … but this is trivially true for any autocorrelated dataset, and is particularly true for SST and global temperature.”

    If you look at the AMO and Nino 3.4 historical data you will see that in spite of overall warming since 1871, these indexes have shown essentially no net trend, and so appear to have contributed virtually nothing to the observed total warming. Graphs showing the historical trends of these two indexes are included in the spreadsheet. AMO and Nino 3.4 most certainly are related to “climate/weather noise”, and that is the point of including them in the model: these indexes account for most (not all) of the variation around the long term trend. AMO and Nino 3.4 can be measured at any time you want, and their contributions subtracted from the currently measured global average temperature to reveal the “true” temperature trend (or at least a much “truer” trend). Indexes like the AMO and Nino3.4 are well known to capture shorter term climate variation, and I was not suggesting that including them in a model was any kind of “revelation”; they were included in Bill Illis’s model (based on only CO2, AMO, and Nino 3.4) back in 2008.
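The solar-cycle arithmetic in the reply to Leif above can be written out explicitly. Every number below is quoted from that reply (1 W/m^2 peak-to-valley TSI swing, 0.7 co-albedo, 0.25 geometric factor, 0.27 K per W/m^2 sensitivity):

```python
# Expected solar-cycle temperature signal from the quoted figures.
tsi_swing = 1.0     # W/m^2 at top of atmosphere, peak to valley
co_albedo = 0.7     # fraction of sunlight absorbed (1 - albedo)
geometry = 0.25     # disc-to-sphere averaging factor
sensitivity = 0.27  # K per W/m^2, the model's fitted value

surface_forcing = tsi_swing * co_albedo * geometry  # 0.175 W/m^2
expected_signal = surface_forcing * sensitivity     # ~0.047 K peak to valley
```

The fitted value of ~0.168 W/m^2 mentioned in the reply is then directly comparable to the 0.175 W/m^2 computed here.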

  69. eric (06:40:53) :

    “Steve Fitzgerald,
    One of the key problems with an analysis of climate sensitivity from temperature data, such as you have performed, is the estimation of lag time for the ocean surfaces to heat up. The use of the solar cycle versus temperature data is problematic. It is ok if the system consists of only one heat reservoir, so a single time constant is appropriate. The problem is that the ocean has a shallow and a deep reservoir with different time constants, and the easily observed smaller reservoir, which has a 2 year time constant, will give you too small an answer for the climate sensitivity.
    A more complex model is required to get a correct answer.”

    I do not know who this Steve Fitzgerald person is, but he is not me.

    If you assume that the “short” time constant is <2 years and this shallow reservoir represents only ~40% of the response, and you also assume that there is a larger (~60%) reservoir with a time constant of ~30-40 years, then the model will calculate (with a lower R^2) a sensitivity of about 0.37 degree per watt per square meter, or ~1.37C for a doubling of CO2. To reach a sensitivity in the range of the IPCC projections, you need BOTH much longer ocean lags (which do not appear consistent with recent ARGO data) AND to assume that man-made aerosols have "canceled" much of the radiative forcing (once again, not consistent with 'global brightening' observed since the early 1990's). The model will also suggest that the solar cycle forcing has to be substantially bigger than has been observed by satellites.

    The key point is: what will happen in the next 50 years? A relatively straightforward (and simple) curve fit analysis suggests that warming may continue, but at much less than the IPCC projected rate. Please look at the accuracy of the 1972 to 2008 model projection, and then compare with the accuracy of GCM projections since at least 2000 (Lucia has many relevant posts on this subject); the GCM's consistently predict more warming than actually happened. Do you honestly think that the prediction accuracy of the model I showed will change from good to poor starting in 2009, and the GCM's will suddenly become more accurate? If so, what change(s) in the sun/oceans/atmosphere do you think is(are) happening right now that will make the curve fit model less accurate than it was for 1972 to 2008, and make the GCM's more accurate?
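A minimal sketch of the two-reservoir response being discussed, using the assumed split from the reply above (~40% fast with a ~2-year time constant, ~60% slow at roughly 35 years). It shows what fraction of the equilibrium warming from a step forcing would be realized after a few decades, which is the crux of the “warming in the pipeline” question; the split and time constants are assumptions for illustration, not fitted values:

```python
import math

def realized_fraction(t_years, fast_share=0.4, tau_fast=2.0,
                      slow_share=0.6, tau_slow=35.0):
    """Fraction of equilibrium warming realized t_years after a step forcing,
    for a two-box (shallow + deep ocean) exponential response."""
    return (fast_share * (1.0 - math.exp(-t_years / tau_fast)) +
            slow_share * (1.0 - math.exp(-t_years / tau_slow)))

after_5 = realized_fraction(5.0)    # fast box nearly equilibrated, ~45% total
after_30 = realized_fraction(30.0)  # ~75% of the equilibrium response
```

On these assumptions roughly three quarters of the equilibrium response is realized within 30 years; only much longer deep-ocean time constants leave a large fraction of the warming still “in the pipeline”.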

  70. Regarding Willis’ comments about the AMO and Nino 3.4 region being part of the temperature record – they are. But they are only a small part of it.

    The AMO region represents about 5.5% of the globe (probably less compared to the regions which are actually counted in the global temperature record since Hadcrut3 and GISS don’t use the whole AMO region in their global temperature record.)

    So one is using (up to) 5.5% of the dataset to predict 100% of the dataset.

    The Nino 3.4 region likewise represents about 0.7% of the globe and is fully counted in the global temp record. But the correlation for Nino 3.4 is lagged 3 months if one is using a monthly model, so one is using 0.7% of the temperature record of 3 months ago to predict today’s temperature record.

    Willis is thus partly correct, but if 5% of the globe, and 0.7% of the record of 3 months ago, can predict up to 70% of the temperature variation, then it is certainly something worth looking at.

    If anything, they are responsible for a large part of the “noise” in the temperature record which is rather easy to demonstrate. Take out some of the noise and the underlying trends are more evident.

    Both indices are detrended, so over the long-term, they are not adding to the underlying trend. But on short time-scales, they obviously have an impact on the trend. Hadcrut3 increased by 0.6C from the beginning of the 1997-98 El Nino to the end – only 15 months. Why wouldn’t one want to adjust for that?
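For readers who want to see the mechanics being argued over, here is a deliberately synthetic sketch of the kind of regression in question: global temperature against log(CO2) plus AMO- and Nino-style indices. This is NOT the author’s spreadsheet; the series and coefficients are invented. It only illustrates Bill’s point that detrended indices can soak up short-term variance around the trend (and it does not settle Willis’s objection that the indices are themselves temperature measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 140  # years, 1871-2010-ish

co2 = 290.0 * 1.0035 ** np.arange(n)   # assumed smooth exponential CO2 rise
amo = np.sin(np.arange(n) / 10.0)      # stand-in detrended multidecadal index
nino = rng.standard_normal(n)          # stand-in noisy interannual index

# Synthetic "temperature": a log-CO2 trend plus index-driven wiggles plus noise.
temp = (2.0 * np.log(co2 / 290.0) + 0.3 * amo + 0.1 * nino
        + 0.05 * rng.standard_normal(n))

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

X_co2 = np.column_stack([np.log(co2), np.ones(n)])
X_full = np.column_stack([np.log(co2), amo, nino, np.ones(n)])

r2_co2_only = r_squared(X_co2, temp)       # trend alone
r2_with_indices = r_squared(X_full, temp)  # indices absorb the "weather noise"
```

Because the indices are detrended, they raise the R^2 without changing the fitted long-term CO2 coefficient much, which is exactly the behavior Bill describes; whether that constitutes model skill is the point under dispute.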

  71. Steve Fitzpatrick, thanks for your reply. You wrote:

    If you look at the AMO and Nino 3.4 historical data you will see that in spite of overall warming since 1871, these indexes have shown essentially no net trend, and so appear to have contributed virtually nothing to the observed total warming. Graphs showing the historical trends of these two indexes are included in the spreadsheet. AMO and Nino 3.4 most certainly are related to “climate/weather noise”, and that is the point of including them in the model: these indexes account for most (not all) of the variation around the long term trend. AMO and Nino 3.4 can be measured at any time you want, and their contributions subtracted from the currently measured global average temperature to reveal the “true” temperature trend (or at least a much “truer” trend). Indexes like the AMO and Nino3.4 are well known to capture shorter term climate variation, and I was not suggesting that including them in a model was any kind of “revelation”; they were included in Bill Illis’s model (based on only CO2, AMO, and Nino 3.4) back in 2008.

    You are correct that there is no trend in the AMO or the Nino3.4. This is because they are a ratio of SST values, and not SST itself.

    However, they are measurements of the climate system, and as such, you can’t use them to reduce the variance in the data. You treat them as though they were external forcings, or new data which you could subtract from the existing measurements to “reveal the true temperature trend”.

    But they are not external forcings or new data in any sense. They are temperature measurements of the system. You can’t use them to “reveal the true temperature trend”, that’s simply not possible. You can’t “bootstrap” more information out of measurements by subtracting some subset of those measurements from the data. It’s the same as just smoothing out the data to get rid of short term variability. Makes your data look better … but it doesn’t make your model more accurate in the slightest.

    This is a fundamental and central point, which undercuts your basic thesis. Please do some research on the question, as your claims as they stand are simply not tenable.

    Heck, if you want to take your path to the ultimate, just detrend the SST. This gives you the ultimate measure of the natural variability. Then subtract the detrended SST from the air temperature, and voila!! The true temperature trend is revealed!

    But that doesn’t make your model any more or less accurate, not by one bit. It does show the trend … but we knew that already.

    w.

  72. Dr A Burns (14:31:15) :

    “I wonder what effect man’s deforestation and desertification has had on climate in the past century ? Australia alone has lost 70% of its natural vegetation.”

    Very complex question. A dense cover of trees has low albedo, while a desert has much higher albedo, so at first glance you might think that conversion of forest to desert would have a net negative effect on solar heating. However, there are other issues, like rainfall patterns being changed by forests, which could modify the heat balance.

  73. Anthony (or moderator), how can I best send you the spreadsheet so people can play with it if they want? Should I send it to Anthony’s email?

    REPLY: WordPress.com does not allow hosting of Excel spreadsheets or Zip files for security. Best to publish it to a 3rd party file service and provide a URL – Anthony

  74. Bill Illis, good to hear from you. You say:

    Regarding Willis’ comments about the AMO and Nino 3.4 region being part of the temperature record – they are. But they are only a small part of it.

    The AMO region represents about 5.5% of the globe (probably less compared to the regions which are actually counted in the global temperature record since Hadcrut3 and GISS don’t use the whole AMO region in their global temperature record.)

    So one is using (up to) 5.5% of the dataset to predict 100% of the dataset.

    The Nino 3.4 region as well represents about 0.7% of the globe and is fully counted in the global temp record. But the correlation for Nino 3.4 is lagged 3 months if one is using a monthly model, so one is using 0.7% of the temperature record of 3 months ago to predict today’s temperature record.

    Willis is, thus, partly correct. But if 5% of the globe, and 0.7% of the record of 3 months ago, can predict up to 70% of the temperature variation, then it is certainly something worth looking at.

    If anything, they are responsible for a large part of the “noise” in the temperature record which is rather easy to demonstrate. Take out some of the noise and the underlying trends are more evident.

    Both indices are detrended, so over the long-term, they are not adding to the underlying trend. But on short time-scales, they obviously have an impact on the trend. Hadcrut3 increased by 0.6C from the beginning of the 1997-98 El Nino to the end – only 15 months. Why wouldn’t one want to adjust for that?

    Certainly we can use those measures to reduce the variance of the temperature record. But how does this differ from just smoothing the record? It has the same advantages (reduction of short-term variability) and the same disadvantages (reduction of degrees of freedom). How does it help the modeling effort?

    A model contains one or more dependent variables (temperature, precipitation) and a number of independent variables (changes in CO2, aerosols, water vapor, black carbon, and the like). Removing the effect of one of the independent variables helps us to establish the true strength of the remaining independent variables.

    However, AMO and Nino3.4 are dependent variables, not independent variables. As such, removing them does not improve the model at all. So yes, you can do what you propose … but how does it help?

    w.

  75. Richard Sharpe (07:45:19) :

    ” A new paper by Lindzen and Choi (described at WUWT on August 23, 2009)

    Do you have something scheduled to drop on August 23?”

    Sorry, a simple error: July 23, 2009; I guess I was getting ahead of myself.

  76. tallbloke (00:52:31) :

    “What would the climate sensitivity to co2 look like if the solar contribution to the warming was, say, 75% ? Simply 1/4 of your figure, or is it more complicated?”

    There is no data I am aware of that would lead to assignment of 75% of warming to solar contributions. Let me know if you have this data and where it comes from.

  77. Steve said

    “On the other hand, adding any infrared absorbing gas to the atmosphere makes it more difficult for infrared radiation to escape from the Earth’s surface.”

    Steve, thanks for an interesting article. Under what circumstances could infrared radiation leave the earth?

    On a similar tack, I read that CO2 molecules could leave the Earth provided they attained sufficient velocity, but how that was achieved and what % of CO2 ‘leaks’ from Earth the article did not say.

    Anyone able to elaborate on either of these issues?

    tonyb

  78. Allen63 (05:24:10) :
    “But, AGW is all about global heat accumulation — for which global temperature is only a proxy.”

    Of course.

    Unfortunately, we do not have 137 years of Argo heat data (which would completely settle the question of climate sensitivity). We only have temperature data, with all its warts, uncertainties, and problems. We do not have reliable temperature records from before the 1800’s, so it is not possible to verify if the model results would be consistent with earlier periods. Substantial warming and cooling certainly have taken place over very long periods (hundreds to thousands of years), including the Medieval Warm Period, Little Ice Age, Roman Warm Period, and the Holocene optimum.

    My intent was not to explain the recent climate history of the Earth. I was trying only to make a reasonable prediction for the next 50 years (about two generations), assuming that the recorded warming since the 1800’s has been almost all due to greenhouse forcing. Will the prediction be perfect? For sure not. Will the prediction be pretty close? Probably. If I were young enough to have a chance to be around in 30 or 40 years, I would happily take bets on the accuracy of the prediction. The standard error of the temperature estimate is about 0.095C, so there is about a 2/3 chance that the model’s prediction will be within ~+/-0.1C of the measured temperature 30 or 40 years from now.

  79. Bill Illis: A few clarifications.

    You wrote, “The AMO region represents about 5.5% of the globe (probably less compared to the regions which are actually counted in the global temperature record since Hadcrut3 and GISS don’t use the whole AMO region in their global temperature record.)”

    NOAA ESRL calculates the AMO as detrended North Atlantic SST anomalies from 0 to 70N.
    http://www.cdc.noaa.gov/data/timeseries/AMO/

    HADSST2 (used by GISS up to November 1981, also used in HADCrut) appears to capture the North Atlantic as far as 80N. It should vary with ice extent:
    http://hadobs.metoffice.com/hadsst2/

    OI.v2, which GISS has used since December 1981, appears to capture anything that’s not ice. http://i26.tinypic.com/2v0hbid.png

    (And, curiously, on occasion appears to indicate SST anomalies where ice exists.)

    So both GISS and HADCrut should capture all of the AMO. And what’s the surface area of the North Atlantic, about ½ of the Atlantic? So, if the Atlantic represents 30% of the global ocean area, and if the North Atlantic occupies half of it, and if the oceans represent 70% of the global surface area, then the North Atlantic should represent about 10% of the global surface area, should it not?

    You wrote, “The Nino 3.4 region as well represents about 0.7% of the globe and is fully counted in the global temp record. But the correlation for Nina 3.4 is lagged 3 months if one is using a monthly model so one is using 0.7% of the temperature record of 3 months ago to predict today’s temperature record. ”

    Not to be nitpicky but… The distance between 5N and 5S is 1111 km. The distance between 170W and 120W is 5533 km.
    http://www.nhc.noaa.gov/gccalc.shtml

    The surface area for the NINO3.4 area is ~6.147 million sq km. And if the surface of the globe is 510.072 million sq km, then the NINO3.4 area represents ~1.2% of the globe.
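    These area figures are easy to sanity-check. A short sketch (the round fractions for ocean, Atlantic, and North Atlantic are the ones quoted in the comment above; the NINO3.4 box is computed exactly from its latitude/longitude bounds):

```python
import math

R = 6371.0  # mean Earth radius, km
earth_area = 4 * math.pi * R**2  # ~510 million sq km

# North Atlantic: oceans ~70% of the surface, Atlantic ~30% of the
# ocean, North Atlantic ~half the Atlantic (round figures from the text)
north_atlantic_frac = 0.70 * 0.30 * 0.5  # ~10.5% of the globe

# NINO3.4 box: 5N-5S, 170W-120W, treated as a spherical lat/lon box.
# Area of such a box = R^2 * dlon * (sin(lat2) - sin(lat1))
dlon = math.radians(50)
nino34_area = R**2 * dlon * (math.sin(math.radians(5)) - math.sin(math.radians(-5)))
nino34_frac = nino34_area / earth_area  # ~1.2% of the globe

print(f"Earth area: {earth_area / 1e6:.1f} million sq km")
print(f"North Atlantic: ~{north_atlantic_frac * 100:.1f}% of the globe")
print(f"NINO3.4: {nino34_area / 1e6:.3f} million sq km = {nino34_frac * 100:.2f}%")
```

    The spherical-box formula gives ~6.2 million sq km for NINO3.4, close to the ~6.147 quoted (the small difference comes from the radius used), and the North Atlantic does come out near 10% of the globe under those round fractions.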

    BUT

    El Nino events affect more of the eastern tropical Pacific than the NINO3.4 area:

    The SST anomalies of the NINO3.4 area are used for comparison to global temperatures because they agree statistically with global temperature variations better than the SST anomalies of the NINO3, NINO4, and NINO1+2 areas. Thus your reason for using the NINO3.4 region for your predictions.

    Regards

  80. Willis Eschenbach (16:08:58) :

    “However, AMO and Nino3.4 are dependent variables, not independent variables. As such, removing them does not improve the model at all. So yes, you can do what you propose … but how does it help?”

    If you run the regression without the Nino3.4 and AMO indexes, then the reported sensitivity to radiative forcing is just about the same as with them. The R^2 for the model (the quality of its hindcast, if you will) is much worse, since these indexes account for much of the short term variation. These indexes DO NOT change the overall trend in any way, since the long term trend in both indexes is flat since 1871. They were included to improve the accuracy of the model predictions. Yes, the short term variation in global temperature is closely correlated with these indexes, but these indexes are completely independent of radiative forcing and clearly are not responsible for any significant net global warming over the entire period…. their trends are completely flat.
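    For readers who want to see why a detrended index can raise R^2 without touching the trend coefficient, here is a minimal synthetic sketch (made-up series, not the actual spreadsheet data): a rising "forcing", a detrended oscillating "index", and an ordinary least-squares fit with and without the index.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1644  # monthly points, roughly 1871-2007 (illustrative only)
t = np.arange(n, dtype=float)

forcing = 0.001 * t                                   # slowly rising "radiative forcing"
index = np.sin(2 * np.pi * t / 60) + 0.3 * rng.standard_normal(n)
index -= np.polyval(np.polyfit(t, index, 1), t)       # detrend: zero long-term trend

temp = 0.5 * forcing + 0.1 * index + 0.02 * rng.standard_normal(n)

def fit(predictors, y):
    """OLS via lstsq; returns coefficients (intercept first) and R^2."""
    X = np.column_stack([np.ones_like(y)] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, 1 - resid.var() / y.var()

beta1, r2_without = fit([forcing], temp)
beta2, r2_with = fit([forcing, index], temp)

print(f"forcing coeff without index: {beta1[1]:.3f}, R^2 = {r2_without:.3f}")
print(f"forcing coeff with index:    {beta2[1]:.3f}, R^2 = {r2_with:.3f}")
```

    Because the index has no long-term trend, it is nearly orthogonal to the forcing: the forcing coefficient (the quantity that sets the inferred sensitivity) barely moves, while the index soaks up short-term variance and raises R^2. That is the behavior Steve describes; whether doing so is legitimate for a measured (rather than external) variable is the point Willis disputes.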

  81. I was wondering, when reading in Ian Plimer’s book “Heaven and Earth”, how is anybody going to enforce tax payment on CO2 emissions made by Mammoth Hot Spring at Yellowstone, which emits from 160 to 190 tonnes per day of CO2?
    It will be a bit troublesome, though visitors could be charged instead of the spring itself…

  82. TonyB (16:54:04) :

    “Steve, thanks for an interesting article. Under what circumstances could infrared radiation leave the earth?”

    Infrared radiation leaves the Earth continuously, with an average rate that is very close to the average rate of solar heating. Any difference shows up as heat gain or loss in the oceans. How much infrared radiation is lost per square meter varies a lot. In general, the rate is highest in the tropics (or close to them), where the rate of solar input is highest, and lowest near the poles, where the solar input is small. The rate of loss also varies with time of day, weather, season, whether ocean or land area, and with local geography on land.

  83. Steve Fitzpatrick (17:28:10) : “If you run the regression without the Nino3.4 and AMO indexes, then the reported sensitivity to radiative forcing is just about the same as with them. The R^2 for the model (the quality of its hindcast, if you will) is much worse, since these indexes account for much of the short term variation. These indexes DO NOT change the overall trend in any way, since the long term trend in both indexes is flat since 1871.”

    Well, that’s exactly what I said a while ago. When Nino and AMO indices are taken away, the only factor left is CO2. So all the model is doing is ascribing all the observed temperature trend to CO2.

    What value is the model? IMHO precisely zero.

  84. ***************
    Nogw (17:36:06) :
    I was wondering, when reading in Ian Plimer’s book “Heaven and Earth”, how is anybody going to enforce tax payment on CO2 emissions made by Mammoth Hot Spring at Yellowstone, which emits from 160 to 190 tonnes per day of CO2?
    It will be a bit troublesome, though visitors could be charged instead of the spring itself…
    ****************
    What with all the talk about the role of water in general and clouds in particular, I wonder how much CO2 cold rain in the tropics sweeps into the ocean? The cold rain should be pretty efficient at absorbing CO2. I guess being fresh water, it wouldn’t mix well with the ocean water and end up out-gassing pretty quickly.

  85. “Adam from Kansas (14:41:13) :

    The paper is interesting, but if the draconian restrictions on emissions aren’t enough to stop AGW like that paper says the models suggest, does that mean the only way would be an unprecedented reduction of global population by more than 90 percent and maybe even 99 percent? Would we have to completely and rapidly de-populate Africa, China, and India and make that whole continent and the whole of those countries massive plant and animal preserves to make the temps. stay even?”

    Is your question tongue in cheek? If not, then yes, there are a lot of green crazies out there who advocate drastic reductions in present worldwide populations, numbers like 50% to 80% fewer people than today are kicked about, combined with drastic reductions in per-capita fossil fuel use. They basically want very few babies born over the next 100 years (Worldwide lotteries for the right to have offspring, or will the IPCC just pick the winners? And what to do with those pesky babies born to people who didn’t have permission?).

    If you want to get really depressed about what sane people have to overcome in the age of Obama, read a while at the Green Hell Blog. James Hansen (a very mainstream green, not nearly as extreme as many) calls for reducing CO2 to 350 PPM through a combination of herculean efforts over the next 100 years. It is truly mind boggling.

  86. “does that mean the only way would be an unprecedented reduction of global population by more than 90 percent and maybe even 99 percent? ”

    A back-of-the-envelope calculation shows that body heat from the current 6.7 billion people is enough to heat the atmosphere by 0.8 degrees C in 100 years.

    The point is that the changes in temperature being discussed are so small that almost anything can affect them.
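    For what it’s worth, the back-of-the-envelope above can be redone explicitly. The assumed values are mine, not the commenter’s: ~100 W metabolic output per person, every joule retained by the atmosphere (none radiated to space), and standard figures for the atmosphere’s mass and heat capacity.

```python
# Rough check of the body-heat back-of-the-envelope. Assumptions:
# ~100 W metabolic output per person, all of it retained by the
# atmosphere, atmosphere mass ~5.15e18 kg, cp of air ~1005 J/(kg K).
people = 6.7e9
power_per_person = 100.0               # W, assumed
seconds = 100 * 365.25 * 24 * 3600     # 100 years

m_atm = 5.15e18    # kg, mass of the atmosphere
cp_air = 1005.0    # J/(kg K), specific heat of air at constant pressure

energy = people * power_per_person * seconds   # total joules in 100 years
delta_t = energy / (m_atm * cp_air)            # implied warming, K

print(f"Body-heat warming over 100 years: {delta_t:.2f} C")
```

    With these assumptions the answer is ~0.4 C; roughly doubling the per-capita power reproduces the quoted 0.8 C. Since in reality nearly all of this heat is radiated away, either way it is an upper bound on a tiny effect, which only reinforces the commenter’s point about how small the discussed changes are.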

  87. John (03:06:17) :

    You wrote.

    “The author talks of infra-red absorbing gases such as CO2. My understanding is CO2, on receiving a quantum of infra-red, instantly radiates in a random direction a quantum of infra-red at the same wavelength and energy. If CO2 absorbs IR it must get warm.

    I thought this was one of the main misdirections used in the so-called greenhouse gas theory.

    Not so?”

    Please, anybody correct me if I have got this wrong since I am not a physicist.

    If a CO2 molecule is excited to a higher energy level by a photon of radiation and “immediately” re-radiates the same energy photon, there is no heating of the CO2 molecule. This extended to multiple absorptions and emissions is the simple but not very correct model that some use to illustrate the so called greenhouse effect. In other words, these photons with wavelengths corresponding to the absorption bands of CO2 are shown to go ricocheting in random directions and eventually escape to space or collide with the earth where the process starts all over again. The delay in the escape of the photons within the absorption bands is put forth as creating the greenhouse effect.
    The above description may well be accurate in a rarefied gas, where the decay time for the raised energy state in the CO2 is substantially shorter than the mean time between collisions with other molecules, but in the lower atmosphere this is not the case.

    So what we have in the lower atmosphere is the earth emits approximately like a black body with a small portion of this energy being in the absorption bands of CO2. The photon travels only a few metres before exciting a molecule of CO2. Most of these excited CO2 molecules collide with adjacent molecules before the high energy state can decay. The photon energy is converted into heat energy and the heated gases again radiate as a black body with the radiated energy spread out over the infrared spectrum. This “dilution” of the original energy in the absorption bands means that the CO2 has done its thing in the lower atmosphere and is of diminishing importance as the concentration rises and this is exemplified in the logarithmic relation of CO2 concentration to temperature rise.

    Of course, the thing is immensely more complicated when you introduce clouds, albedo, convection but I cringe when I hear what I call the ping pong ball explanation of the greenhouse effect.
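    The claim that collisions win in the dense lower atmosphere can be made quantitative with a kinetic-theory sketch. The numbers below are rough textbook values I have supplied, and the radiative lifetime of the CO2 bending mode is taken as order 1 s; the exact value does not matter for the comparison.

```python
import math

# Mean time between molecular collisions near the surface, from kinetic
# theory, compared against the radiative lifetime of the excited CO2
# bending mode (order 1 s). All inputs are rough textbook values.
k_B = 1.380649e-23     # J/K, Boltzmann constant
T = 288.0              # K, near-surface temperature
P = 101325.0           # Pa, surface pressure
sigma = 5e-19          # m^2, rough collision cross-section for air
m = 4.8e-26            # kg, mean mass of an air molecule

n = P / (k_B * T)                                  # number density, ~2.5e25 m^-3
v_mean = math.sqrt(8 * k_B * T / (math.pi * m))    # mean molecular speed
tau_collision = 1.0 / (math.sqrt(2) * n * sigma * v_mean)

tau_radiative = 1.0    # s, order of magnitude for the CO2 15-um band

print(f"mean time between collisions: {tau_collision:.2e} s")
print(f"collisions per radiative decay: {tau_radiative / tau_collision:.1e}")
```

    An excited CO2 molecule near the surface suffers billions of collisions per radiative lifetime, so the absorbed energy is overwhelmingly thermalized rather than immediately re-radiated, exactly as the comment argues against the “ping pong ball” picture.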

  88. TonyB (16:54:04) :

    Steve said

    “On the other hand, adding any infrared absorbing gas to the atmosphere makes it more difficult for infrared radiation to escape from the Earth’s surface.”

    Steve, thanks for an interesting article. Under what circumstances could infrared radiation leave the earth?

    On a similar tack, I read that CO2 molecules could leave the Earth provided they attained sufficient velocity, but how that was achieved and what % of CO2 ‘leaks’ from Earth the article did not say.

    Anyone able to elaborate on either of these issues?

    Q1: IR leaves the Earth’s surface upward toward space as long as any surface material has an absolute temperature above 0K, which includes everything. Certain bands in the IR leave Earth unimpeded simply because no gaseous material in the atmosphere absorbs this radiation. In other bands, however, there are gases that absorb IR strongly. CO2, for instance, absorbs strongly in the band from about 12 to 16 micrometers wavelength. IR absorbing gases do not store IR; they absorb, then re-emit in new directions, including back toward the Earth. It is the radiation emitted back toward Earth that we call the “greenhouse effect.”

    Q2: Gases do escape Earth all the time, but do so only up in the exosphere, where the atmosphere is so tenuous that the distance between successive collisions of gas molecules with one another is large. If you remember any high school chemistry, you will recall that at any temperature all molecules in a gas possess the same mean kinetic energy; therefore the least massive molecules possess the highest speeds. These are the ones that escape Earth most easily; so, as hydrogen and helium manage to reach the exosphere, they will leave the Earth quite quickly. For example, there is no primordial helium left in the atmosphere. A gas like CO2, on the other hand, doesn’t even reach the exosphere because of the cold trap at the mesopause. The principal means by which CO2 leaves our atmosphere are the weathering of surface rocks, which produces bicarbonate and carbonate minerals carried to the oceans in rivers, and the direct solution of CO2 into ocean water. This dissolved CO2, in turn, reacts with oceanic crust and is stored more permanently in minerals there. There are other parts to the carbon cycle as well.
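    The point about molecular mass and escape can be illustrated with rms thermal speeds. This is only a sketch: the exospheric temperature is an assumed round number, and real thermal (Jeans) escape depends on the high-speed tail of the speed distribution, not the mean.

```python
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant
T_exo = 1000.0       # K, assumed exospheric temperature
v_escape = 11186.0   # m/s, Earth's escape velocity
amu = 1.66054e-27    # kg per atomic mass unit

gases = {"H2": 2, "He": 4, "N2": 28, "CO2": 44}

# rms thermal speed: v = sqrt(3 k T / m)
speeds = {name: math.sqrt(3 * k_B * T_exo / (m_amu * amu))
          for name, m_amu in gases.items()}

for name, v in speeds.items():
    print(f"{name:>3}: {v:5.0f} m/s ({v / v_escape:.2f} of escape velocity)")
```

    Even the mean speed of H2 is well below escape velocity, but the Maxwell–Boltzmann tail lets the lightest gases leak away over time; for a heavy molecule like CO2 the tail is negligible, even before the mesopause cold trap is considered.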

  89. Mike Jonas (18:26:22) :
    “Well, that’s exactly what I said a while ago. When Nino and AMO indices are taken away, the only factor left is CO2. So all the model is doing is ascribing all the observed temperature trend to CO2.

    What value is the model? IMHO precisely zero.”

    Well actually, the model includes trends in radiative forcings from CO2, N2O, chlorofluorocarbons, and methane. These separate trends were included so that divergence in their trajectories could be considered (instead of just a single trajectory for CO2, which is really not as accurate a representation of radiative forcing). The value of the model is to make reasonable predictions over reasonably long periods, under a worst case assumption that greenhouse forcing has caused all of the observed warming.

    “Splitting the time period, and curve-fitting from 1871 to 1971 then comparing “predictions” with post-1971 sounds impressive, but all it means is that if factors which actually caused the overall trend from 1871 to 1971 remained in place from 1971 to 2008, then the model would match neatly. The actual factors could be the sun, clouds, shipping volumes, or world use of soap. The model would still give a good match.”

    Of course; and if the world use of soap continues to match the radiative forcing for the next 50 years, then the model will continue to make accurate predictions. Remember this is a WORST CASE prediction. If there are other factors that are:

    1) truly “causative”
    2) independent of radiative forcing
    3) which by coincidence have historically tracked radiative forcing over 100+ years, and
    4) which will now no longer do so

    then the model predictions will be way too high.

    On the other hand, if radiative forcing has caused all or even most of the warming, then the model should make pretty accurate predictions. The US Congress is currently working on an absolutely horrible cap-and-trade scheme, which will cost a fortune, reduce carbon emissions very little, and which is justified only on the basis of extreme predictions of global warming. If a realistic projection of warming in the worst case shows the warming will be lower (eg. <0.7C in 50 years instead of 1.4C), then perhaps this dreadful legislation loses some of its presumed justification. Is this not a good thing?

  90. Dennis A (02:59:26):

    Thank you for unearthing Stevenson’s realistic description of how differently the oceans respond to SW and LW radiation. That description should be studied carefully by all would-be climatologists, too many of whom are still stuck on simplistic blackbody concepts.

    Willis Eschenbach (09:13:53):

    Thank you for bringing a sorely lacking physical distinction between external sources and internal redistribution of heat into the discussion, which seems myopically centered on curve-fitting. You seem to be one of the few who truly grasps the categorical difference between active forcing and passive system response.

  91. John (03:06:17) :

    You need to stop thinking of absorption and emission as yin and yang; they are not that closely related.

    Absorption is governed by the number of absorbers and the incoming IR.

    Emission is governed by the number of emitters and the temperature of the local gas.

  92. I’ve read that a climate sensitivity that is too low means that ice age changes are not possible. The ~3C sensitivity is corroborated in various ways, and one of them is to estimate from large-scale global temp changes – Quaternary ice age cycles serving well because the land masses, and hence distribution of ice sheets, ocean/air currents etc., are very similar to today.

    Perhaps the author could make a global model of ice age changes (specifically deglaciation), and plug in the lower climate sensitivity posited here to see if it accommodates (proxy) observations.

    This is the main problem I’ve read regarding lower climate sensitivities – whether Lindzen’s Iris effect or whatever. If the climate doesn’t respond as much as is thought, then the extreme swings in the geological record, allegedly, aren’t possible.
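    For reference, the ice-age consistency check described above is usually sketched like this: divide an estimated Last Glacial Maximum cooling by an estimated total LGM forcing (ice-sheet albedo, lower greenhouse gases, dust), then scale to the forcing of a CO2 doubling. The input numbers below are commonly quoted estimates with large uncertainties, not values from this post.

```python
# Commonly quoted LGM estimates (illustrative; real uncertainties are large):
delta_t_lgm = -5.0     # C, global cooling at the Last Glacial Maximum
forcing_lgm = -7.5     # W/m^2, total LGM forcing (ice sheets, GHGs, dust)
forcing_2xco2 = 3.7    # W/m^2, forcing for a doubling of CO2

lam = delta_t_lgm / forcing_lgm    # climate sensitivity, C per W/m^2
s_2x = lam * forcing_2xco2         # implied warming per CO2 doubling, C

print(f"lambda = {lam:.2f} C/(W/m^2); doubling sensitivity ~ {s_2x:.1f} C")
```

    With these numbers the doubling sensitivity comes out near 2.5 C; halving the assumed forcing or deepening the assumed cooling moves the answer a lot, which is why such checks bound sensitivity only loosely rather than pinning it down.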

  93. Mr Fitzpatrick’s model and the IPCC models seem to both be based on the assumption that GHG increases are the only significant cause of temperature change. Yet, they produce greatly different values for the climate sensitivity. If someone could give a quick summary of how Mr Fitzpatrick’s approach differs from the IPCC’s, that would really help me.

  94. Steve, thanks for your excellent post. I’m just a poor dumb lawyer, so it was hard for me to sort through some of your discussion, but I appreciate what you were attempting to demonstrate in terms of a “worst case” scenario if what the AGWers are saying proves to be correct (although I doubt all of it can be proven correct, and I also tend to doubt whether your method of calculating climate sensitivity is really useful in making long term climate predictions). I also agree with your policy suggestion concerning making nuclear power the source of our future electricity. When John McCain was looking for an “insurance policy” in connection with this overall question, he didn’t stress this particular solution enough. Your post makes a compelling case for long-term further rational study and gradual action as opposed to short-term hysteria and draconian solutions.

  95. It’s funny, last week we had a couple of days which were a little warmer than average and then we get this…

    http://www.smh.com.au/environment/chilliest-sydney-morning-for-a-year-20090809-ee8v.html

    I can confirm it was cold this morning, and in fact last night, working on my car with a flat battery. Car broke down, left out for about an hour and a half, condensation on the rear windscreen and roof as well as my breath condensing too. This was about 9-10pm on Sunday the 9th.

  96. Building on Willis’ comments, here is an excerpt of a file from 2005.

    Ascribing all of the alleged 0.6C rise in global temperature to increased atmospheric CO2 gives a climate sensitivity to CO2 doubling of ~1.2C (1.189C) from a one-line solution.

    But there is, imo, no evidence to ascribe such warming to increased CO2.

    The global cooling from ~1945 to 1975 and the cooling since ~2000 are not explained by this assumption.

    As Willis suggests, such cooling cannot be properly explained by ascribing it to another measured temperature, whether it be PDO, AMO or other.

    IPCC modelers have attempted to attribute the 1945-75 cooling to aerosols, principally SO2, but they have had to invent the data to do so – I accept Douglas Hoyt’s comment that there are no such trends in his real measured data, save volcanoes which are clearly apparent.

    Here is the 2005 Excel file, copied onto WORD – hope it is legible.

    9. EXTRAPOLATING OBSERVED WARMING TRENDS
    by Jarl Ahlbeck (Turku, Finland)

    We should not confuse the word “possibility” with “probability” as some
    people do when they compare different simulated results with each other.
    Everything is possible, but probability has a mathematical definition and
    should not be used when comparing simulated results. These reported
    (Nature, 27 Jan 2005) values of 1.9 to 11.5 deg C warming are
    possibilities, computerized speculations, nothing else. Also: let’s not
    talk about percent possibilities. All possibilities are
    100% possible.

    But of course, a kind of reality check can be made very easily: Say that
    half of the observed 20th century warming of 0.8 deg is due to greenhouse
    gases (CO2 increase from 280 to 370 ppm) and half is due to increased sun
    activity. As the relation is logarithmic, 0.4 deg=k*ln(370/280), giving
    k=1.435. For 2*CO2 (560 ppm), an additional warming of 1.435*ln(560/370)
    =0.59 deg C could be expected. This is a speculation as good as any
    produced by a computer climate entertainment program.

    In fact, 0.59 deg may be an overprediction as the observed warming has been
    partly caused by CFCs and CH4. As we know, the atmospheric concentration of
    CFC has decreased, and there is no more increase in CH4. This means that
    the k-value for CO2 should be lower than 1.435.

    k = deltaT/ln(CO2b/CO2a)
    deltaT = k*ln(CO2b/CO2a)

    For various % of 0.8 degree C temp rise in 20th century ascribed to CO2:
    (MacRae calculations and comments below)

    k      CO2a  CO2b  deltaT   Case
    1.435  280   370   0.4      (checks)  Assumes 50% of deltaT due to >CO2
    1.435  370   560   0.595    (checks)
    2.870  280   370   0.8      Assumes 100% of deltaT due to >CO2
    2.870  370   560   1.189
    Both 50% and 100% seem too high, given the better correlations below.
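    The whole table above follows from the single logarithmic relation deltaT = k·ln(CO2b/CO2a); a few lines reproduce Ahlbeck’s and MacRae’s numbers:

```python
import math

def delta_t(k, co2_a, co2_b):
    """Warming implied by the logarithmic relation deltaT = k * ln(b/a)."""
    return k * math.log(co2_b / co2_a)

# 50% case: 0.4 C of 20th-century warming attributed to CO2 (280 -> 370 ppm)
k50 = 0.4 / math.log(370 / 280)       # ~1.435
extra50 = delta_t(k50, 370, 560)      # further warming by 560 ppm, ~0.59 C

# 100% case: all 0.8 C attributed to CO2
k100 = 0.8 / math.log(370 / 280)      # ~2.870
extra100 = delta_t(k100, 370, 560)    # further warming by 560 ppm, ~1.19 C

print(f"k (50% case)  = {k50:.3f}, further warming to 560 ppm: {extra50:.2f} C")
print(f"k (100% case) = {k100:.3f}, further warming to 560 ppm: {extra100:.2f} C")
```

    Note that the k values here are purely empirical curve-fit constants: the whole calculation stands or falls on how much of the observed warming one attributes to CO2 in the first place, which is the point both Ahlbeck and MacRae make.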

  97. I think our climate can be very sensitive but maybe not to the things people have put forth so far. Take the Bering Strait as an example. It is pretty shallow, only about 55m at its deepest point. The average depth is about 40m. Maximum ice thickness in winter has approached 15m in a cold year in the 20th century. General flow is Northward from the Pacific into the Arctic.

    Now imagine we have some really cold years and the ice freezes to 20 or 25m. That would mean that the ice will freeze completely to the bottom over a larger area and will freeze the mud under it. Ice floating on water mainly melts from the bottom up. While this is going on, the amount of water transported North will decrease because the size of the channel available will decrease. This could act as a positive feedback that causes the Arctic waters to become even colder. So now we have a situation where it takes much longer to melt the ice as it must melt from top down or from the edges in rather than bottom up as is the case with floating ice. If combined with that, we see an accumulation of ice on land, sea levels could begin to drop exacerbating the problem even more.

    At some point you don’t need to wait for sea levels to drop completely to expose the “land bridge” between Asia and North America. Once the Bering Strait freezes all the way to the bottom, the cutoff of warm water to the Arctic could trigger a situation where ice accumulation in the Northern hemisphere rapidly increases and you get rapid sea level decline. In fact, it might not need to freeze all the way to the bottom to have a significant impact. Simply freezing 5 more meters of depth might be enough to reduce the volume of Pacific flow to have an impact on Arctic ocean temperatures.

    Once the Arctic cools significantly and we see ocean levels begin to drop or more area freezing all the way to ground, we begin to see other bodies begin to constrict such as the English Channel, North Sea, and Irish Sea further modifying the amount or exchange between colder and warmer regions of water.

    And you can have an alignment of several things that could combine to produce a larger impact. For example, the mean flow through the Bering is Northward but sometimes weather patterns and pressure gradients can set up in such a way as to create a mean Southerly flow instead. If such a pattern was unusually persistent, it could be the first domino that sets things in motion. So a combination of declining insolation due to orbital change, an unusual weather/wind pattern, and a colder than normal winter or series of winters could result in a dramatic change in climate that might cause the Northern Hemisphere to tip rapidly into glacial conditions.

    So yeah, I believe Earth’s climate can be very sensitive but I also believe that changes in the ocean temperatures, currents, and flow volumes might have a much greater impact on climate than human emissions. Reduction of exchange between Pacific and Arctic might be enough to start the ball rolling and the freezing of the Bering to the bottom would be the final step in tipping the Northern Hemisphere into glaciation.

    Once that freezes all the way to bottom, it would be difficult to get open again but once it did open, it could also mean a very quick transition to interglacial conditions again.

  98. Further to the above post:

    Another issue is the divergence between satellite and Hadcrut3.

    I estimate that Hadcrut3 ST has a warming bias of 0.07C per decade over UAH LT, since ~1979.

    See the first graph at
    http://www.iberica2000.org/Es/Articulo.asp?Id=3774

    Furthermore, it is clear that CO2 lags temperature at all measured time scales, from ice core data spanning thousands of years to sub-decadal trends – the latter as stated in my 2008 paper, and previously by Kuo (1990) and Keeling (1995).

    My 2008 paper is located at
    http://icecap.us/images/uploads/CO2vsTMacRae.pdf

    Considering all the evidence, and the work of Roy Spencer , Richard Lindzen and others, it is difficult to attribute more than 0.3C average global warming to a hypothetical doubling of atmospheric CO2.

    The actual sensitivity could be much less, approaching 0.0.

    In any case, less than 0.3C seems inconsequential to me.

    Those who feel the need to panic should find something more credible to panic about.

    Regards, Allan

  99. Sorry – typo in the above, should be 0.8C not 0.6C – correction reads:

    “Ascribing all of the alleged 0.8C rise in global temperature to increased atmospheric CO2 gives a climate sensitivity to CO2 doubling of ~1.2C (1.189C) from a one-line solution.”

  100. Steve Fitzpatrick (16:36:50) :

    There is no data I am aware of that would lead to assignment of 75% of warming to solar contributions. Let me know if you have this data and where it comes from.

    Hi Steve, it’s my data. Like all of us who are doing a bit of home-brewed modelling, I’m working on scenarios in which the parameters are a bit different from the ones generally accepted by the modelers whose models don’t work.

    In my case, I’ve worked out some values from the satellite altimetry showing sea level rise, the amount of solar energy retained in the oceans needed to produce the thermal expansion component of that rise, and a value I’ve estimated for the level of solar activity at which the oceans neither gain nor lose heat. Coupled with a correlation I’ve found between small changes in the length of day and changes in global temperature, I’ve come up with this graph:

    The mismatch around the war years is due to the LOD proxy not capturing El Niño events very well, plus a well known bias introduced into the SST data by the engine cooling water inlet sensors used by military vessels.

    I haven’t yet worked out all the energy relationships, but given the uncertainty over TSI measurements, and the poor state of knowledge regarding the amount of heat coming through the relatively thin seabed from the changing and overturning currents of radioactive molten stuff in the earth’s mantle, which are responsible for about 90% of changes in LOD, it seems plausible to me at the moment.

    So I’m just asking a hypothetical question for now, if you don’t mind a little speculation:

    “What would the climate sensitivity to co2 look like if the solar contribution to the warming was, say, 75% ? Simply 1/4 of your figure, or is it more complicated?”

  101. Adam Grey (20:45:45) : The problem with that is that it assumes: 1. that the forcings that control the glaciations are all known and determined; 2. that the sensitivity is independent of the climate state and timescale; and, most importantly, 3. that the influence of Milankovitch effects is adequately described by the very small net changes in received solar radiation at the top of the atmosphere.

    You see, the concept of climate sensitivity is really only appropriate for dealing with effects of forcings which are more or less spatially homogeneous (that is, the global mean forcing is essentially the same as that anywhere else). This is the case with CO2, essentially, but it is NOT true of orbital effects, which vary strongly not only with latitude but also season. One reason this matters is that, as was pointed out by Lindzen and Pan, 1994, such variations would mean that heat fluxes between the Equator and Poles would be greatly altered by Milankovitch effects; if, as the Iris hypothesis suggests, Tropical climate is strongly constrained, then such changes in transport would lead to large changes in global mean temperatures, and Polar temperatures would additionally be boosted in variation by ice-albedo feedback.

    So the Ice Age comparison is ill-posed (not to mention that arguing against observed negative feedbacks because it becomes too hard for you to explain ice ages is pretty silly).

    Lindzen, R.S., and W. Pan (1994) A note on orbital control of equator-pole heat fluxes. Clim. Dyn., 10, 49-57.

  102. Common Sense and Politics are opposing electrical viewpoints.

    The complexity of Climate is confusing to those of us who are told to shut up and sit down by government. It is no easy pill to swallow if the Captain, leaning over the rail, throws you a brick and says “catch this, it will keep you afloat”. The guy floating next to me says “don’t believe it, it’s just a brick”. I can see it’s just a brick, but why does the Captain (if the ship is really sinking) keep throwing bricks?

    Spotted Owl: thirty years ago, government “experts” said it was endangered due to habitat loss. But wait, another expert, a private biologist, said “no, there are sick communities; that is what needs to be studied”. Today, the Feds have hired shooters. Yep. To shoot the invading eastern Barred Owl, which says to the Spotted Owl, “move out or I will kill you”. The Spotted Owl says to itself, “move out, this guy is tougher than me”, and too many Spotted Owls ended up living in the same location, creating sickness and losses in hatch survival.

    Watts the point: This site is progressively being attacked (my suspicion: an intentional seed plant) at common-citizen chit-chat blogs, with increasing frequency. Name calling, sometimes with foul language. Constantly citing this expert and that expert (no names or data), but an identical government sermon similar to the sick-owl story of 30 years ago, with repeated talking points condemning the Surface Stations project. The Fed grants have not begun to study with conviction the migration of the Barred Owl over the Rocky Mountains.

    To bloviate is akin to a bovine chewing cud and expelling (burp) large quantities of greenhouse gas. Shame on me.

  103. Adam Grey (20:45:45) :

    “I’ve read that a climate sensitivity that is too low means that ice age changes are not possible. The ~3c sensitivity is corroborated in various ways, and one of them is to estimate from large-scale global temp changes – quaternary ice age cycles serving well because the land masses, and hence distribution of ice sheets, ocean/air currents etc are very similar to today.”

    I think it is fair to say that nobody knows for certain all the causes of ice ages, which, by the way, the Earth is currently in if you consider the history of the last hundred million years; significant ice was not present over much of the last 100 million years. Over the past ~3 million years ice has always been present at high latitudes, with relatively rapid advances and retreats of ice sheets and significant (eg 3-6 C) shifts in temperature. For the vast majority of the last 3 million years, Earth was substantially less hospitable to land animals than it is today, because a significant fraction of the total land area was covered with ice sheets. It is clear that variations in the shape of Earth’s orbit and axial inclination are correlated with the repeated ice ages of the last few million years.

    Some people have suggested that climate sensitivity to radiative forcing is not a fixed value, but rather depends on feedbacks from albedo changes caused by ice sheets and the sea level drops that go along with ice sheets. The atmospheric concentrations of CO2 and methane were also substantially lower 25,000 years ago than at any time during the Holocene. Since the net forcing from these gases goes as the log of the concentration, the additional forcing for a fixed change in concentration from a lower base (200 to 225 ppm for example) would be larger than the additional forcing for the same change from a higher base (375 to 400 ppm for example). For these example numbers, the change at the lower level has ~83% more net radiative forcing than the change at the higher level, independent of any ice sheet or sea level feedbacks. So net sensitivity could have been substantially higher 25,000 years ago (maximum ice sheet coverage) than today, allowing relatively small forcings to make relatively big net changes in climate.
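    The ~83% figure can be checked with the standard simplified logarithmic CO2 forcing expression, dF = 5.35 * ln(C/C0) W/m^2; the constant cancels out of the comparison, so only the log ratios matter. A quick sketch:

```python
import math

# Simplified logarithmic CO2 forcing: dF = 5.35 * ln(C_new / C_old), in W/m^2.
def co2_forcing(c_old_ppm, c_new_ppm):
    return 5.35 * math.log(c_new_ppm / c_old_ppm)

low = co2_forcing(200, 225)    # a 25 ppm step from a glacial-era base
high = co2_forcing(375, 400)   # the same 25 ppm step from a modern base

print(low / high)  # ~1.83, i.e. ~83% more forcing from the lower base
```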

    Whatever causes the substantial long term (glacial/interglacial) variation, it is not possible to reliably infer from these variations that the climate’s sensitivity to radiative forcing is high today.

    GCM’s have quite a large range of climate sensitivities (http://en.wikipedia.org/wiki/File:Global_Warming_Predictions.png), and a similarly wide range of projected warming, with the least sensitive models predicting only about 50% more warming than my curve fit model; the GCM’s can’t all be right, and it’s quite possible that none of them are right. The temperature history of the past 100+ years is consistent with relatively low sensitivity.

  104. Steve Fitzpatrick (17:28:10) : “If you run the regression without the Nino3.4 and AMO indexes, then the reported sensitivity to radiative forcing is just about the same as with them. The R^2 for the model (the quality of its hindcast, if you will) is much worse, since these indexes account for much of the short term variation. These indexes DO NOT change the overall trend in any way, since the long term trend in both indexes is flat since 1871.”

    Well, that’s exactly what I said a while ago. When Nino and AMO indices are taken away, the only factor left is CO2. So all the model is doing is ascribing all the observed temperature trend to CO2.

    So let me see if I can parse this.

    the long term trend in both indexes is flat since 1871.

    When Nino and AMO indices are taken away, the only factor left is CO2.

    I believe that makes the contribution of CO2 approximately zero.

  105. I like the Bering Strait hypothesis.

    Once that freezes all the way to bottom, it would be difficult to get open again but once it did open, it could also mean a very quick transition to interglacial conditions again.

    The answer is high explosives. Lots of them. And nuclear powered icebreakers.

  106. crosspatch (01:56:05) :

    “…And you can have an alignment of several things that could combine to produce a larger impact…”

    Yes, I think this is the point which is missed by many climatologists who forget that climate is a chaotic system and who try to extract bits of the ‘machine’ to treat in a linear way. Observing historic behaviour, our climate seems to have warm and cool periods, with cool being the dominant trend. We need to treat the sun and planets (including Earth) as one complete system so that bifurcation points can be better predicted.

    Doing this will allow us to plan how to mitigate the effects of change before a crisis point is reached, rather than reacting to red herrings like GHGs.

  107. M. Simon (07:15:12) :

    “So let me see if I can parse this.

    the long term trend in both indexes is flat since 1871.

    When Nino and AMO indices are taken away, the only factor left is CO2.

    I believe that makes the contribution of CO2 approximately zero.”

    Let me say it one more time:

    1. The calculated sensitivity (0.27 degree per watt) is based on the ASSUMPTION that all net warming since 1871 was due to radiative forcing. I did not say that radiative forcing is the only cause for observed warming, nor even that it is the most important cause. The calculated sensitivity represents a worst case estimate for sensitivity to radiative forcing. The measured variation in TSI over the last three solar cycles (about 1 watt per sq. meter) shows up in the temperature record quite clearly over the last 100+ years, with a best estimate solar cycle effect that is almost exactly what would be expected for a radiative sensitivity of 0.27 degree/watt. This does not prove 0.27 is the correct sensitivity, but it certainly shows the measured solar cycle signal is at least consistent with a sensitivity in this range.

    2. Total radiative forcing is not the same as forcing from CO2. Total forcing includes radiative forcing from N2O, methane, chlorofluorocarbons, tropospheric ozone from VOCs, and solar cycle forcing based on the measured variation in TSI over the last three solar cycles. The reason for including all these forcings individually is that they may not (indeed, we already know they do not) follow the trajectory of forcing from CO2, and any projection for warming based only on forcing from CO2 ignores that some of these other forcings are likely not to increase as much as CO2 in the future, and may actually decrease, canceling some expected forcing from CO2.

    3. You may believe that all observed warming has been caused by other factors. My post does not and was never intended to address other possible causes, nor to suggest that greenhouse gases have caused a known X% of total warming. It was intended to place a realistic ceiling on the warming that could possibly be attributed to greenhouse gases.

    If you do not believe there is any possibility that radiative forcing has contributed to observed warming, that is OK with me, but this really has nothing to do with my post.
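    The consistency check in point 1 can be sketched numerically. This is a minimal back-of-the-envelope, assuming a planetary albedo of ~0.3 and the usual factor of 4 for sphere/disk geometry (both values are my assumptions, not from the thread):

```python
# Back-of-envelope: solar-cycle temperature signal implied by a 1 W/m^2
# TSI swing at a sensitivity of 0.27 C per W/m^2 (the post's worst case).
albedo = 0.3          # assumed planetary albedo
delta_tsi = 1.0       # peak-to-trough TSI variation over a solar cycle, W/m^2
sensitivity = 0.27    # C per W/m^2

# Globally averaged forcing: divide by 4 (sphere vs. intercepting disk),
# then scale by (1 - albedo) for the fraction actually absorbed.
forcing = delta_tsi * (1 - albedo) / 4.0   # ~0.175 W/m^2
delta_t = sensitivity * forcing            # ~0.047 C peak-to-trough

print(delta_t)
```

    An amplitude of a few hundredths of a degree is then the scale of solar-cycle signal one would expect to find in the surface record under these assumptions.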

  108. Dr A Burns (18:55:43) :

    “A back-of-the-envelope calculation shows that body heat from the current 6.7 billion people is enough to heat the atmosphere by 0.8 degrees C in 100 years.

    The point is that the changes in temperature being discussed as so small that almost anything can affect them.”

    No doubt true if you fed everyone food from somewhere outside the earth or made all food using only fossil fuels (with no sunlight involved). But since essentially all caloric value in the foods people eat (animal or plant) comes from chemical conversion of the energy in sunlight to the energy in carbohydrates, the heat given off by people’s bodies can have no effect on the whole of the climate, not even a tiny one. The total heat input to the earth is unchanged by our food’s caloric value.

    The inside of a plane filled with people, sitting at the gate for an hour, presents a micro-climate with a very different response to body heat….

  109. This appears to be an excellent step ahead in the quest to better define the positive forcing factors. Now we need to get a handle on negative forcings and feedback loops, both positive and negative. Perhaps a better model may be possible within the next 5 years.

  110. Hmmm I wonder if that is real.

    ““A back-of-the-envelope calculation shows that body heat from the current 6.7 billion people is enough to heat the atmosphere by 0.8 degrees C in 100 years.”

    What is the heating caused by several hundred million automobiles sitting in the sun heating up the air inside them? Open a window a little and you have enough circulation so you have hundreds of millions of little solar heaters sitting in the sun.

  111. *****************
    Steve Fitzpatrick (06:20:56) :
    Adam Grey (20:45:45) :
    Some people have suggested that climate sensitivity to radiative forcing is not a fixed value, but rather depends on feed-backs from albedo changes caused by ice sheets and the sea level drops that go along with ice sheets. The atmospheric concentrations of CO2 and methane were also substantially lower 25,000 years ago than at any time during the holocene.
    ****************
    It seems that even with less liquid ocean water volume, the colder ocean water during an ice age could soak up a lot more CO2.

  112. Steve, thanks for your response. You say:

    If you run the regression without the Nino3.4 and AMO indexes, then the reported sensitivity to radiative forcing is just about the same as with them. The R^2 for the model (the quality of its hindcast, if you will) is much worse, since these indexes account for much of the short term variation. These indexes DO NOT change the overall trend in any way, since the long term trend in both indexes is flat since 1871. They were included to improve the accuracy of the model predictions. Yes, the short term variation in global temperature is closely correlated with these indexes, but these indexes are completely independent of radiative forcing and clearly are not responsible for any significant net global warming over the entire period…. their trends are completely flat.

    The misunderstanding seems to be in this statement:

    [Nino3.4 and AMO] were included to improve the accuracy of the model predictions.

    I say again: including observational data cannot improve the accuracy of model predictions. It can only improve the accuracy of model hindcasts.

    But improving the accuracy of your hindcasts by including actual observations is a mug’s game. Sure, you can improve hindcast accuracy by including PDO (Pacific Decadal Oscillation), or AMO (Atlantic Multi-Decadal Oscillation), or MJO (Madden-Julian Oscillation), or SOI (Southern Oscillation Index), or NAO (North Atlantic Oscillation), or any other index you choose. Any one of these, or any combination of them, will improve the accuracy of your hindcast. See the NOAA Climate Indices web page for a complete list; you can play with and compare any and all of these indices to any other or to global temperature.

    But using your method as shown in the head post of this thread, all you have proven is that model estimates of past observations can be improved by using past observations in creating your model estimates …

    Surely you can see how pointless that exercise is.

    w.

  113. Steve Fitzpatrick (09:18:45) :

    1. The measured variation in TSI over the last three solar cycles (about 1 watt per sq. meter) shows up in the temperature record quite clearly over the last 100+ years, with a best estimate solar cycle effect that is almost exactly what would be expected for a radiative sensitivity of 0.27 degree/watt. This does not prove 0.27 is the correct sensitivity, but it certainly shows the measured solar cycle signal is at least consistent with a sensitivity in this range.

    Hmmm.

    1) El Niño tends to occur at solar min, and is the manifestation of solar input to the oceans at solar max. This masks some of the true solar input to the climate system by ‘flattening’ the temperature curve.

    2) PMOD data uses a model to calculate TSI, based on splicing together records from several satellites used to measure irradiance over the last 30 years. PMOD and the IPCC prefer to use ERBS data to calibrate the change during the ‘ACRIM gap’. The ACRIM team maintain this is not as good as the data from the other satellite, Nimbus-7, which was working when the gap occurred, and that consequently TSI shows little trend when it should show a rising trend at the end of the C20th.

    3) Additionally, the ACRIM data shows that cycle 21 had a difference nearer 2 W/m^2 between solar max and min than 1 W/m^2.

    All this adds up to a spread of uncertainty about the effect of Solar max-solar min on temperatures.

    PMOD says 0.05 to 0.1C

    I say it could be more like 0.35-0.4C depending how you account for heat storage in the oceans and heat energy release in el nino.

    If correct, this means the temperature change over the C20th can mostly be explained by the sun, as the lower, longer cycles with longer minima in the early part of the C20th meant a lot less TSI received at Earth on average.

    By the way, I replied to your question earlier.

  114. “”” Projections of climate warming from global circulation models (GCM’s) are based on high sensitivity for the Earth’s climate to radiative forcing from well mixed greenhouse gases (WMGG’s). This high sensitivity depends mainly on three assumptions:

    1. Slow heat accumulation in the world’s oceans delays the appearance of the full effect of greenhouse forcing by many (eg. >20) years.

    2. Aerosols (mostly from combustion of carbon based fuels) increase the Earth’s total albedo, and have partially hidden the ‘true’ warming effect of WMGG increases. Presumably, aerosols will not increase in the future in proportion to increases in WMGG’s, so the net increase in radiative forcing will be larger for future emissions than for past emissions.

    3. Radiative forcing from WMGG’s is amplified by strong positive feedbacks due to increases in atmospheric water vapor and high cirrus clouds; in the GCM’s, these positive feedbacks approximately double the expected sensitivity to radiative forcing.

    However, there is doubt about each of the above three assumptions. “””

    Well I haven’t read the paper yet; but I printed it out so I can study it and let it sink in.

    But I couldn’t easily get past the introduction; pasted above.

    Well I hope to shout there is doubt about those three assumptions that are part of the GCM models.

    #1 The “slow heat accumulation” in the oceans. What is so darn slow about it? A solar photon is going to get absorbed in those oceans in less than one millisecond; not >20 yrs; and the “heat” in the ocean comes from those solar photons. The long wavelength re-radiation from a solar and GHG warmed atmosphere is completely absorbed in the top ten microns or less of the ocean surface, and that promotes prompt evaporation from the locally warmed surface. I wouldn’t be hunting for a lot of that long wave energy being stored in the oceans for any length of time.

    #2 Cut with the cloaking hypotheses; aerosols (aka clouds) have always been an integral part of the earth’s atmosphere and always will; so stop making up excuses, and properly account for clouds in those silly GCM computer models; they aren’t cloaking anything, they are a part and parcel of the water NEGATIVE feedback effect.

    #3 Balderdash! Water vapor is a GHG; the most prominent GHG, and it doesn’t need any other GHG to spur it into action; it is perfectly capable of causing all the warming the atmosphere needs, and in cloud form of causing all the cooling the earth’s surface needs. The notion of H2O positive feedback “enhancement” of some other GHG-caused atmospheric warming is simply a crutch to ignore the fact that it is the water that is controlling the whole temperature balance; not the GHGs.

    And as for buying any notion that a linear approximation to a highly non-linear process is valid; don’t count on planet earth approximating the heating effect of incoming radiant energy, and cooling from outgoing IR by any linear approximation. The earth will apply the correct physics to the situations, and compute the correct answer, not some linear guess of unreality.

    The problem with the predictions of the GCMs is simply the GCMs themselves; use the earth’s own GCM and then you will get the right answers.

  115. Kevin Kilty (06:59:28) :

    “By the way, we can arrive at roughly the same value of sensitivity in three more different ways. Set W=e(sigma)T^4 (Stefan Equation), differentiate T with respect to W, and the result is the sensitivity. If one plugs in e=0.98, sigma = 5.67×10^(-8), and T as 288K, then one gets 0.25.”

    Your view is correct! May be there is a fault in the calculation.
    sensitivity = (dW/dT)^-1 = (4 ε σ T^3)^-1
    If one plugs in the given values, one gets 0.188 (not 0.25). Do you agree with this?

    I found a remarkable article, where much is explained: http://www.webcommentary.com/climate/monckton.php
    I have not read it yet entirely because it requires some attention…

  116. Rik Gheysens (12:15:39) :

    “sensitivity = (dW/dT)^–1=(4 ε σ T^3)^–1
    If one plugs in the given values, one gets 0.188 (not 0.25). Do you agree with this?”

    The value of T in the above equation is the effective emitting temperature of the Earth, not its surface temperature. The infrared headed out to space is emitted over a range of effective temperatures (this is clear from looking at the NASA graphic of infrared intensity at the beginning of the post), but all these emissions are at blackbody equivalent temperatures well below the surface temperature that lies under the emitting atmosphere.

    The average emission temperature is the blackbody temperature which will balance the solar energy absorbed by the Earth’s surface: about 0.7 * 0.25 * 1365 W/m^2 = 239 W/m^2. The blackbody temperature in equilibrium with the absorbed solar energy is ~255K, and the corresponding blackbody sensitivity is about 0.266 degree per W/m^2. If you assumed 288K as the average emission temperature, the associated sensitivity would be 0.185 degree per W/m^2, but the heat loss to space would then be (288/255)^4 = ~1.63 times the solar energy that is actually absorbed.
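    A minimal sketch reproducing the numbers above (σ = 5.67e-8 W/m^2K^4; the emissivity of 0.98 is applied only in the 288K case, as in the earlier comment):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def blackbody_sensitivity(t_kelvin, emissivity=1.0):
    # dT/dW = 1 / (4 * eps * sigma * T^3), in degrees C per W/m^2
    return 1.0 / (4.0 * emissivity * SIGMA * t_kelvin ** 3)

absorbed = 0.7 * 0.25 * 1365            # ~239 W/m^2 of absorbed solar energy
t_emit = (absorbed / SIGMA) ** 0.25     # ~255 K effective emission temperature

print(t_emit)                            # ~255
print(blackbody_sensitivity(t_emit))     # ~0.266 C per W/m^2 at the emission level
print(blackbody_sensitivity(288, 0.98))  # ~0.188 if the surface T is (mis)used
print((288 / t_emit) ** 4)               # ~1.63: implied excess emission to space
```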

  117. Steve Fitzpatrick,

    Thanks for your substantial reply to my posting of the obvious. From time to time I post the seemingly obvious to see if others feel the same.

    I can accept your reasoning for what it is — an attempt to project warming during the next few decades assuming CO2 has been and will be the sole source of the warming. Your results indicate that some climate models overestimate the impact of CO2. I think your effort adds value to the debate.

    In general, I agree with you that CO2 is probably causing some warming — however, not enough to worry about — assuming the “official” historical global temperature anomaly plots are “accurate” (they may not be — and that’s another issue for another time).

  118. “”” DennisA (02:59:26) :

    In 2000, Dr Robert E Stevenson, (now deceased), Oceanographer and one-time Secretary General of the International Association for the Physical Sciences of the Oceans (IAPSO), wrote an assessment of Levitus et al (2000) on global heat budget.

    http://www.21stcenturysciencetech.com/articles/ocean.html

    Yes, the Ocean Has Warmed; No, It’s Not “Global Warming”

    This is a small extract:

    “How the Oceans Get Warm
    Warming the ocean is not a simple matter, not like heating a small glass of water. The first thing to remember is that the ocean is not warmed by the overlying air.

    Let’s begin with radiant energy from two sources: sunlight, and infrared radiation, the latter emitted from the “greenhouse” gases (water vapor, carbon dioxide, methane, and various others) in the lower atmosphere. Sunlight
    penetrates the water surface readily, and directly heats the ocean up to a certain depth. Around 3 percent of the radiation from the Sun reaches a depth of about 100 meters.

    The top layer of the ocean to that depth warms up easily under sunlight. Below 100 meters, however, little radiant energy remains. The ocean becomes progressively darker and colder as the depth increases.

    The infrared radiation penetrates but a few millimeters into the ocean. This means that the greenhouse radiation from the atmosphere affects only the top few millimeters of the ocean. Water just a few centimeters deep receives none of the direct effect of the infrared thermal energy from the atmosphere! Further, it is in those top few millimeters in which evaporation takes places. So whatever infrared energy may reach the ocean as a result of the greenhouse effect is soon dissipated. “””

    Well I had to cut and paste this piece of history. I have been harping on this question for some time now; but make no claim to having said so first; although it came to me quite independent of any earlier publications; it’s so obvious that anyone could think of it.

    And my only amendment to the late Dr Stevenson’s comments would be to say that the long wave IR from the atmospheric radiation is absorbed in the top ten microns of the ocean water not “a few millimeters”.

    So I concur with Dr Stevenson that atmospheric warming of the oceans is a losing thesis; surface evaporation quickly removes any surface energy supplied by the atmospheric long wave downdraft.

    Any simple analysis of the up/down propagation of long wave infra-red radiation in a non-uniform atmosphere that has both a density and temperature gradient and a principally CO2 (other than water) GHG component, will clearly demonstrate that upward propagation is favored over downward; simply because of the way that the CO2 absorption band changes in width with altitude (gets narrower at greater heights).

    As for high cirrus clouds creating a positive feedback warming of the surface (and the higher and less dense those clouds, the warmer the surface gets): that’s just plain silly. Those high cirrus clouds are there because of the warmer surface; they are not the cause of the warmer surface. Because of the usual temperature relaxation with altitude, the warmer the surface is, the higher the water vapor has to rise (due to convection) before the dew point is reached and clouds can form; and if the water content is lower, so the relative humidity is lower, the vapor has to go higher still, so the clouds get less and less dense as a result.

    And like ANY other cloud, they still reflect sunlight from their tops (albedo enhancement), and they still block additional solar radiation from reaching the surface; and it still gets colder when one of those clouds passes in front of the sun; it never gets warmer in the shadow zone as a result of those clouds, at any height.

    So I didn’t know the late Dr Stevenson; but I’m happy to know there have been others who find the standard line to be ludicrous.

    George

  119. George E. Smith (11:39:18) :

    “However, there is doubt about each of the above three assumptions. ”

    Well I hope to shout there is doubt about those three assumptions that are part of the GCM models.

    Don’t waste your time here George. Lip service is paid to doubt, but little heed is given to any serious aberration from the orthodoxy.

    George, I am with you on your post. CO2 and other greenhouse gases, regardless of source, are poor ways to heat water. Can you imagine using that method on a camping trip to heat water for morning coffee? It is the most amateurish part of global warming notions, let alone the idea that the heat from air is somehow locked away in a vault to be spewed onto land like some B monster movie.

  121. Willis Eschenbach (10:32:08) :

    “I say again: including observational data cannot improve the accuracy of model predictions. It can only improve the accuracy of model hindcasts.”

    I understand exactly what you are saying and why you think that these indexes do not “improve the model forecast”. As I already said, removing the AMO and Nino 3.4 indexes from the regression does not significantly change the calculated climate sensitivity, nor should they… they are detrended indexes!

    Let me try to explain why I use them.

    1. If you plot up Nino 3.4 against global average SST, you find virtually no correlation. Yes, Nino 3.4 is an index that comes from measured SST in a certain ocean region. No, it is not a simple proxy for average SST as you appear to be suggesting. Nino 3.4 does provide information about the current state of the Earth’s climate system relative to an “average” state. To be more specific, Nino 3.4 helps us understand if a currently measured “higher than normal” or “lower than normal” global average temperature is a result of specific short term conditions in the ENSO, or if that measured average temperature is more likely associated with a “background” trend in temperature. If someone says to me “Look how hot last month was!” and I know that we are in the middle of the biggest El Niño in history, then I can pretty confidently reply: “I’ll bet it will cool off by more than 0.2C in the next year or two.”

    2. AMO is a bit more complicated, since it comes from a much larger ocean area, and does correlate with global average SST (R^2 about 0.4), and some of the AMO index is simply a proxy for average SST. But AMO is detrended over 100+ years of temperature records; when the AMO index is well above or well below zero (well above or well below the long term trend line), it is telling us that the current measured global average temperature is not typical in a historical sense, and that the current measured average temperature is probably not an accurate representation of the underlying long term temperature trend. A very high AMO index fairly well screams that the temperature will fall back toward the long term trend line within several years.

    AMO and Nino 3.4 are not perfect stand-ins for short term variation, but are much better than nothing. AMO and Nino3.4 are determined each month, just as is the global average temperature, and can be used to better evaluate the ‘true’ underlying warming (or cooling!). Most everyone who thinks about climate change already knows this, and people often use these indexes just like the model does. For example, the current El Nino, which started just a month or two ago, “suggests” that the global average temperature will be above the trend line for at least a few months. Most every climate blog that you can think of has probably discussed this expected “El Nino warming” at least once in the last month or so, and official climate and weather organizations have also had press releases about it.

    If they were really just a proxy for the average SST (as you suggest), then why would anyone even bother to calculate them?

    If you believe that these indexes are not useful in the model, that is OK with me. But you can count on people continuing to look at these indexes to better understand what is shorter term variation and what is longer term change.
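The detrending logic in the two points above can be sketched in a few lines. This is a minimal illustration only: the trend slope, the ~65-year cycle, and the noise level are invented stand-ins, not real AMO or SST data; the point is just the arithmetic (index = SST minus its long-term OLS trend line, so a strongly positive index flags temperatures above the underlying trend).

```python
import math
import random

# Synthetic North-Atlantic-like SST anomaly: an assumed background trend,
# an assumed ~65-year oscillation, and weather noise (all made up here).
random.seed(0)
years = list(range(1900, 2010))
sst = [0.007 * (y - 1900)                                   # assumed trend, C/yr
       + 0.2 * math.sin(2 * math.pi * (y - 1900) / 65.0)    # multidecadal cycle
       + random.gauss(0.0, 0.05)                            # weather noise
       for y in years]

# Ordinary least-squares fit of sst against years (the "detrending")
n = len(years)
mean_y = sum(years) / n
mean_s = sum(sst) / n
slope = (sum((y - mean_y) * (s - mean_s) for y, s in zip(years, sst))
         / sum((y - mean_y) ** 2 for y in years))
intercept = mean_s - slope * mean_y

# The detrended, AMO-like index: positive values mean the raw series
# currently sits above its long-term trend line.
amo_like = [s - (slope * y + intercept) for y, s in zip(years, sst)]
```

By construction the residuals of an OLS fit average to zero, so the index measures deviation from the trend, exactly the "well above or well below the long term trend line" reading described above.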

  122. How Sensitive is the Earth’s Climate?

    Very.

    But not to co2.

    This is because CO2’s puny near surface radiative activity is completely drowned out by far larger atmospheric processes, which only have to make a very minor adjustment to deal with its effect.

    However, extra insolation does warm the oceans, and so, the Earth. Enjoy it while it lasts; the oceans are losing heat.

  123. I would also add that sunlight is EASILY reflected away from warming the oceans. Its measurement at the surface is very noisy, while it is highly stable just outside our atmosphere! And what varies it? Earth’s atmosphere. Ours is one of the most variable planets in the solar system in terms of its climate and weather.

  124. tallbloke (14:23:29) :
    and
    Pamela Gray (14:25:11) :

    Please tell me which of the following you disagree with:

    1. The concentrations of CO2, chlorofluorocarbons, N2O, and methane in the atmosphere have increased by some amount in the last 100 years.

    2. All the above gases have well characterized infrared absorption spectra.

    3. Based on these known spectra, the escape of infrared radiation from Earth’s surface through the atmosphere to space should be slightly inhibited compared to escape under the same conditions, but with the concentrations of these gases reduced to what they were 100 years ago.

    4. It is therefore reasonable that all else being equal, some warming of the Earth’s surface (however small) should result from the increased concentration of these gases in the atmosphere.

    My understanding of chemistry and physics suggests that the above statements are not at all speculative. I am really trying to understand why you seem to object so strenuously to my post, which says clearly that the projections of warming based on GCM’s are much too high. What exactly do you take issue with?

  125. John Finn,

    “Have you a link to your explanation.”

    I initially wasn’t going to bother replying (no offence intended, but I didn’t really have the time to go through it all again should anyone want to debate it), but I notice some people have spent time arguing against the wrong model of the greenhouse effect (the radiative one), so I’ll refer to it again for anyone interested. My comment above was meant to be more light-hearted.

    The first comment was here. More further down.

  126. Steve Fitzpatrick (14:22:17) :
    If you plot up Nino 3.4 against global average SST, you find virtually no correlation.

    If you go to Bob Tisdale’s website you’ll find a post on how you can add nino3.4 values cumulatively (along the lines of what I did with sunspot numbers as I described in the post answering your question which you ignored), to get an uncannily accurate history of SST’s.

  127. The physics is off on this post; there is no way the increase in global mean temp would be so low when CO2 is double that of pre-industrial levels; when I have time I will come back to this point.
    Pamela what references are you using? I would love to see those if you would paste them up. You may want to see this:

    http://www.fas.org/spp/military/docops/afwa/ocean-U1.htm
    here:
    http://www.theallineed.com/biology/07052901.htm

    and here:
    http://www.learner.org/courses/envsci/unit/text.php?unit=12&secNum=0

  128. Steve,
    they take issue with any global warming due to greenhouse gases from the burning of fossil fuels; some here will agree that man has slightly helped along natural variation, so long as the warming stated is so negligible that it has no potential detrimental effect.

  129. Steve Fitzpatrick (15:08:01) :

    4. It is therefore reasonable that all else being equal,

    They are not. Lots of other things have changed. The atmosphere is a big place. These gases you obsess about occupy a vanishingly small part of it, and although they may have some properties which might have some effects, they are a drop in the bucket which the massive processes continuing in the atmosphere can shrug off with a tiny average shift of the jetstreams towards the poles here, or a changing of the extratropical Hadley cell boundaries there.

    This is the second set of questions you’ve asked me that I’ve replied to. Are you going to continue ignoring the first?

  130. ” Steve Fitzpatrick (15:08:01) : ”

    “3. Based on these known spectra, the escape of infrared radiation from Earth’s surface through the atmosphere to space should be slightly inhibited compared to escape under the same conditions, but with the concentrations of these gases reduced to what they were 100 years ago.”

    Maybe, maybe not. What if increased CO2 concentrations in the atmosphere displace H2O and result in lower absolute humidity in response to the increased CO2 content, and the total greenhouse impact is reduced? Suddenly what was thought to be a positive feedback turns into a negative feedback as a less absorptive gas displaces one with a wider absorption range.

    What if the atmosphere is already practically opaque to IR at the most important wavelengths and adding CO2 is like putting a shade across an already bricked up window?

    And the bottom line, based on my understanding of physics, is that if you have this increase in IR absorption, you should see that elusive hot spot. If you are catching radiated heat from the surface and re-radiating it back down, you should be able to measure it. So far that hasn’t happened. That heat isn’t there. There is currently no indication that the atmosphere is increasing its absorption of IR the way the models predict it would.

    Then we have the whole problem of convection, which the models ignore. If the gases DID absorb more IR and heat up, they would rise and give the heat up as they do so. Direct radiation would convert to convection/radiation, and the atmosphere would still give up its heat to space. The models (according to what I have read in the writing of others) seem to rely on a static atmosphere that collects IR, heats up, and just sits there re-radiating the heat back to the ground, or is “infinitely thick” and never gives up the heat to space the way a gas would in real life.

    Earth has a natural refrigeration system using water vapor as the working fluid. The only other atmospheres we have to look at are made up of mostly CO2 with no water to speak of (Venus and Mars). Hansen’s group was originally formed to model the atmospheres of these (and the other) planets. They have spent a lot of their lives modeling CO2 greenhouse atmospheres. But things might not work the same way here. It would take, I believe, a HUGE amount of CO2 increase to even make as much difference as the normal variability of H2O. I believe CO2 adds so little greenhouse warming that it gets “lost in the noise”.

    Looking at NCDC’s data for the continental US, we are looking at a warming rate of 1.2 degrees Fahrenheit per century from 1895 to present, and the most recent 12 month period is below the trend line. (Go here, enter “most recent 12-month period” in the “Period” pull-down (it’s the last item in the list), and select “Submit”.)

    Since 1999 to present we see a cooling trend of 8.6 degrees Fahrenheit per century. That is quite dramatic cooling over CONUS over the past 10 years.

    Why? And why aren’t the global satellite averages tracking with CO2 emissions?

  131. I would suggest that AGW skeptics [snip] see Spencer Weart’s work. Just google him, and you will find an immense resource of information regarding why AGW is a fact from the standpoint of solid physics. The upper atmosphere contains little to no water vapor, and therefore any contribution made by CO2 would have a net warming effect, since it acts as a blanket. Also, the lower and middle troposphere is far from being saturated as of yet, but even if it were, the CO2 in the upper atmosphere, where it is cool and dry, would absorb wavelengths at different bands at varying altitudes and thus reflect LWR back down towards Earth.

  132. Pamela Gray, what are your references regarding: “I would also add that Sunlight is EASILY reflected away from warming the oceans. It’s measure at the surface is very noisy while highly stable just outside our atmosphere! And what varies it? Earth’s atmosphere. One of the most variable…”?

  133. #1. Disagree. We don’t know the long term trend or average during the past 500 years. All we have is AIRS showing us observed measures during a single warm oceanic oscillation. All the rest is proxied calculations, which have a reasonably large standard deviation compared to a gas chamber. We do not know what happens to CO2 under different oceanic oscillation conditions in terms of measuring small changes in ppm. What we do know is that torrential rains can send tons of CO2 captured by plants out to sea and down to the ocean floor. In essence, bad weather can scrub carbon out of the air. Bad weather comes when oscillations fight each other, i.e., one is cool while the other is warm.

    #3. Disagree. Real world conditions include stormy weather and uncooperative jet streams, as well as aerosols that change the amount of shortwave solar radiation reaching the surface and then the longwave radiation available to be absorbed by CO2 and other GHG’s to add warmth. There is no such thing as same conditions in the actual world, and besides, CO2 is only available to warm the air after the Sun’s rays are changed by natural and highly variable parameters. CO2 cannot overcome that. Its ability to warm is stable. It’s all the other variables that do not let it do its job very well. With the full amount of incoming solar shortwave radiation reaching the ground and the full amount of outgoing longwave radiation reaching greenhouse gas layers, we get about 30 degrees Celsius of warming. But we never actually get that because of the highly variable atmosphere of our planet.

    #4. Nothing is equal in the real world.

    On the contrary, all the other variables create lots of speculation. See:

    http://www.arm.gov/publications/proceedings/conf02/extended_abs/ellingson_rg.pdf

    and Anthony, help me understand this ppt:

    http://clarreo.larc.nasa.gov/workshops/2009-02-24/docs/Huang_Langley-visit_20090224.ppt

  134. BTW, this is an interesting site with data available. Well worth a visit. There may be evidence of Kool-Aid drinking, but they are collecting longwave radiation data, something that is highly variable depending on how much shortwave radiation actually hits the surface.

    http://www.arm.gov/acrf/

  135. tallbloke (15:45:04) :

    I obsess about nothing, and it would help maintain civility if you did not make this kind of non-constructive comment.

    “What would the climate sensitivity to co2 look like if the solar contribution to the warming was, say, 75% ? Simply 1/4 of your figure, or is it more complicated?”

    I replied that I knew of no data set that could be used to make solar contribution 75%. You replied with an address that held a series of graphs, without any description of those graphs that I could find. I really have no idea how those graphs relate to solar forcing. That is why I did not reply further. I still have no idea what the graphs mean or how they might relate to solar forcing.

    With regard to:

    “If you go to Bob Tisdale’s website you’ll find a post on how you can add nino3.4 values cumulatively (along the lines of what I did with sunspot numbers as I described in the post answering your question which you ignored), to get an uncannily accurate history of SST’s.”

    I have not seen this post, though I have seen Bob’s site a few times. I am not sure I understand what the connection might be between a sum of historical values of Nino 3.4 and historical sea surface temperatures. If you can offer a brief summary it might help. It would also help if you could explain how/why the sum of historical sunspots relates to total solar forcing; I have never heard of this before. How far back do you sum, and how do you choose the starting point for the sum? What information does this sum of sunspots provide about solar forcing?

  136. “The upper atmosphere contains little to no water vapor and therefore any contribution made by CO2 would have a net warming effect,”

    The same can be said for polar winter … and no such warming has happened.

  137. Crosspatch: “Maybe, maybe not. What if increased CO2 concentrations in the atmosphere displace H2O and result in lower absolute humidity in response to the increased CO2 content, and the total greenhouse impact is reduced? Suddenly what was thought to be a positive feedback turns into a negative feedback as a less absorptive gas displaces one with a wider absorption range.”
    For one, CO2 does not displace H2O, and the lower atmosphere is not “saturated.” Also, the absolute humidity/specific humidity do fluctuate, but generally the relative humidity remains stable. In the upper atmosphere, it is cooler and dry, and the presence of water vapor begins to fade, but lo and behold, CO2 is still on the incline where it was formerly virtually non-existent. Also keep in mind that water vapor tends towards equilibrium in relative humidity, but water vapor levels are also currently rising.

  138. “The same can be said for polar winter … and no such warming has happened.”
    You are neglecting altitude dependent changes.

  139. Jacob Mack (16:28:50) : “the CO2 in the upper atmosphere where it is cool and dry would absorb wavelengths at different bands at varying altitudes and thus reflect LWR back down towards Earth.”

    Why doesn’t LWR get reemitted equally in all directions? You certainly aren’t implying that gravity comes into play are you?

  140. Tom: “Why doesn’t LWR get reemitted equally in all directions? You certainly aren’t implying that gravity comes into play are you?”
    No, I am not implying gravity.
    There are pressure changes at different altitudes, as well as temperature, which will influence how a given gas will absorb and emit LWR. Check out Peter Atkins’ 2009 book Elements of Physical Chemistry on Google Books, co-written by Julio de Paula, but first check out the work of Spencer Weart, The Discovery of Global Warming, also on Google Books.

  141. Pamela Gray (17:08:35) :

    “Jacob, shortwave and longwave radiation of Sunlight 101. From the description found here, one can easily reason that these variables create a very noisy data stream of how much gets in, and how much is reflected.”

    This site is an oversimplification of the empirical data and the physics, not just the GCM approximations. You are also neglecting GHG mixing and the reduction in ice cover and albedo.

  142. Jacob Mack (16:28:50) :

    “I would suggest that AGW skeptics [snip] see Spencer Weart’s work.”

    Not a very constructive start.

    Should I reply by sending you to read dissertations by Richard Lindzen? Better that you stick to issues related to the thread. Did you read my post? Did you have doubts, questions, or suggestions? Do you think that there is factually incorrect information presented? If so, then I would be happy to address those subjects.

    Sending me to read what “this authority” says is at least a bit odd; I have some 35 years experience in chemistry, physics, and nanotechnology, and do not need you to suggest that I become better informed on the basic technical issues of AGW.

    Pleeeease!

  143. If I remember correctly, the absorption bands of CO2 are the ~4, 7.5 & 15 µm bands.

    4 & 7.5 are pretty well covered by H2O so that leaves 15!

    That’s just the poles, North & South! South cooling, North warming, where’s the CO2?

    DaveE.

  144. ” Jacob Mack (17:14:41) : ”

    I don’t believe I am.

    And by that I mean that the pole is at the same altitude now as it was 50 years ago. Any increase in CO2 greenhouse should have a much greater impact in the polar region because the air is so dry. CO2 plays a much greater role in any atmospheric greenhouse at the pole than anywhere else on the planet. Most of Antarctica has been cooling with the exception of the Western peninsula and that is due to wind currents.

    The impact of CO2 warming has not been documented anywhere. NONE of the predicted indications have been observed, not a single one.

  145. Steve,
    the post was not directed at you for starters. Secondly, I did make a brief post regarding your work; you grossly underestimate the climate sensitivity. I am still going through your post with a fine-tooth comb, but I will comment directly regarding your calculations, assumptions, and methods. I am referring other posters to this work as it is important work that both you and they neglect to mention. Now, several times I have highlighted the physics and findings that you have neglected to cover; mainly the CO2 in the upper atmosphere, which will most certainly lead to global warming, and the water vapor feedback, which you clearly underestimate. I will be more specific soon and show you where the chemistry and physics do not add up, but that is for when I have more time to give your post my undivided attention. Also, you still hold that GHG’s lead to some, albeit mild, global warming, so you are hardly a [snip]; and as far as skepticism goes, your work is not to date repeatable, validated, and subject to peer review, so we shall wait and see how valid many of your claims are. I can tell you, though, that your long career and experience with chemistry and physics are impressive, but you make several minor errors that make the projected warming look even more negligible than your already low estimate. You may need a review in atmospheric physics and physical chemistry, my friend.

    Reply: Future use of the term “denier” as a pejorative will lead to deletion of posts without notice or explanation. ~ charles the moderator.

  146. I think Steve, that we should discuss this physics and chemistry of AGW in depth here real soon.

  147. Crosspatch you are mistaken:
    Quote: “Scientists on Wednesday unveiled evidence to suggest global warming is affecting all of Antarctica, home to the world’s mightiest store of ice.

    The average temperature across the continent has been rising for the last half century and the finger of blame points at the greenhouse effect, they said.

    The research, published in the British journal Nature, takes a fresh look at one of the great unknowns — and dreads — in climate science.” End quote.
    dsc.discovery.com/news/2009/01/…/antarctica-warming.html
    Also see: http://www.cnn.com/2009/WORLD/…warmingantarctic/index.html
    And: BE Barrett, KW Nicholls, T Murray, AM Smith … – Geophysical Research Letters, 2009 – agu.org

    Also, there are pressure and temperature differences between the Antarctic surface elevation and the upper atmosphere, and there is also more precipitation in the Antarctic than in the upper atmosphere.

  148. “”” Jacob Mack (16:28:50) :

    I would suggest that AGW skeptics [snip] see Spencer Weart’s work. Just google him, and you will find an immense resource of information regarding why AGW is a fact from the standpoint of solid physics. The upper atmosphere contains little to no water vapor and therefore any contribution made by CO2 would have a net warming effect, since it acts as a blanket. Also, the lower and middle troposphere is far from being saturated as of yet, but even if it were, the CO2 in the upper atmosphere where it is cool and dry would absorb wavelengths at different bands at varying altitudes and thus reflect LWR back down towards Earth. “””

    Now why would you say “the upper atmosphere contains little to no water vapor”? Why would that be, other than that the upper atmosphere contains little to no gases of any kind? The upper atmosphere, no matter how rarefied, is perfectly capable of sustaining a water vapor content in accordance with the saturated vapor pressure of water as a function of temperature; and even at -90 C, the earth’s atmosphere still contains water vapor; so I don’t see why it should disappear with altitude, any more than CO2 would.

    And as that upper atmosphere becomes ever more rarefied, so does the density of CO2 molecules up there, so the GHG warming effect also diminishes. Oh, maybe the local atmospheric temperature still changes somewhat, since the reduced amount of captured IR long wave radiation is shared with a reduced mass of atmospheric gases; or, when high enough, the mean free path may be long enough for the CO2 to simply decay to the ground state and re-emit the absorbed photon. And that re-emission spectrum would be quite narrow, because of the lowered temperature and density, so the Doppler and collision broadening would be reduced.

    That narrower CO2 absorption/emission spectrum would have quite a chore making it through the denser, warmer lower atmosphere, with its broader CO2 absorption band. Remember that each re-absorption and eventual emission from either the excited GHG or the atmosphere results in an essentially isotropic re-radiation pattern; so roughly half of the total flux can be expected to be up and half down at each such level. The upward path would be expected to be favored over the downward, because of the temperature and density relaxation with altitude.

    And for one more time, can I re-iterate that the GHG components of the atmosphere do NOT reflect long wave radiation from the surface. The process is an inelastic scattering process, and not an optical reflection. Reflection does not involve a frequency shift.

    As for learning from Spencer Weart; see letters to the editor in “Physics Today” for January 2005; where I casually mentioned that when floating sea ice melts; the laws of physics require that the sea level will fall; not rise, and not stay fixed either.

    Weart pooh poohed that idea; and substituted his own problem in place of mine, simply asserting that when the oceans warm the water expands, and the sea level rises. No doubt true; but totally unrelated to my comment about “when floating sea ice melts.”

    So I’ll find a more on the ball teacher thank you.

  149. Dave,
    there is also 17 µm as well from CO2, and varying behavior of CO2 under different altitude conditions and mixing ratios in relation to varying amounts of water vapor, N2O, and CH4. At higher altitudes air pressure decreases, and density decreases (D = M/V), so gases will tend to spread out more under such circumstances, but become less thermally excited at higher altitudes; and yet, in the absence of other significant GHG’s, some of the CO2 emission does go to space, while the rest is held in and re-radiated to the Earth.
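The altitude dependence appealed to in this comment can be sketched with the isothermal barometric formula; note this is a simplified illustration only, the 8.5 km scale height is an assumed round number, and real density profiles depend on the temperature structure of the atmosphere.

```python
import math

# Isothermal barometric approximation: density falls off exponentially
# with altitude, rho(h) = rho0 * exp(-h / H). The scale height H here
# is an assumed round value (~8.5 km), not a measured one.
def density_ratio(h_m, scale_height_m=8500.0):
    """Fraction of surface air density remaining at altitude h_m (meters)."""
    return math.exp(-h_m / scale_height_m)

# At roughly 17 km (near the tropical tropopause), air density, and with it
# the number density of a well-mixed gas like CO2, is down to ~14% of its
# surface value.
ratio_17km = density_ratio(17000.0)
```

Since CO2 is well mixed, its absolute number density tracks this same exponential fall-off, which is the sense in which "gases spread out" at altitude.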

  150. Quote: “Reply: Future use of the term “denier” as a pejorative will lead to deletion of posts without notice or explanation.” ~ charles the moderator.’
    I do not engage in name calling, hence why I put ” or ‘ around such words, Charles.
    I in no way meant it with any intent of contempt; it was actually to indicate that I did not mean my statements in a pejorative manner.

    [Reply: Best to avoid using the “D” word entirely. ~dbstealey, moderator]

  151. I’ve redone some charts I posted from above.

    Here is how it plays out when you separate the solar forcing from the greenhouse effect. This IS the greenhouse effect.

    Each extra Watt of GHG forcing is really only adding 0.18C right now. The 2.4 extra Watts assumed to have occurred since 1850 or so would translate into 0.5C of warming (conveniently close to what has actually occurred).

    To get to +3.0C by 2100, GHGs will have to add an extra 13 Watts [which is an impossible amount – you can do your own math for CO2 alone with this formula – Watts = 5.35 ln(CO2future/387)]

    Each extra watt is now only adding 0.18C.

    The climatologists rely on these equations for everything. It underpins most of the physics and the models themselves. As far as I can tell, they have not calculated how each extra Watt will affect temperatures, they are just using the averages over the whole spectrum (surprising since they should know these are logarithmic/exponential equations).

    This is a falsification as far as I am concerned.

    This is also more-or-less consistent with Trenberth’s new Earth Radiation Budget paper. He’s bumped the surface Watts from 390 (my charts) to 396 assuming there is a lag in emissions from the surface ocean and deserts but this change also seems like an impossible amount.

    http://www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/BAMSmarTrenberth.pdf
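The arithmetic in comment 151 can be checked directly with the simplified forcing expression it quotes, Watts = 5.35 ln(CO2future/387). This is a minimal sketch only: the 387 ppm baseline, the 0.18 C per W/m² sensitivity, and the 13 W target are all the commenter's figures, not established values.

```python
import math

# Simplified CO2 radiative forcing expression quoted in the comment:
# extra forcing (W/m^2) from raising CO2 from c_now to c_future.
def co2_forcing(c_future_ppm, c_now_ppm=387.0):
    return 5.35 * math.log(c_future_ppm / c_now_ppm)

sensitivity = 0.18  # C per W/m^2, the commenter's assumed figure

# Forcing from a doubling of the 387 ppm baseline, and the implied warming
f_doubling = co2_forcing(2 * 387.0)      # 5.35 * ln(2), about 3.7 W/m^2
warming = sensitivity * f_doubling       # about 0.67 C at this sensitivity

# Inverting the formula: the CO2 concentration that would supply the
# 13 W/m^2 the comment says +3.0 C would require
c_needed = 387.0 * math.exp(13.0 / 5.35)  # several thousand ppm
```

Inverting the logarithm is what makes the comment's point: each successive watt of forcing requires a multiplicative, not additive, increase in concentration.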

  152. Steve Fitzpatrick (17:45:30),

    He’s trolling. And he’s bringing up Spencer Weart, because Weart is realclimate’s tame pet. If Weart had the …um …gumption, he would write an article for the web’s “Best Science” site like you did, and let people try to knock it down if they can. That’s how real scientists do it. Even Dr. Steig wrote an article that was posted here.

    But Mr. Weart likes being scratched behind his ears, so he hides out at RC and similar agenda-driven sites — where he never has to face any uncomfortable questions. Because he hides out from answering inconvenient questions, he carries little weight here. Zero, actually.

    Kudos to you, BTW, for an interesting article — and for being willing to respond to numerous questions.

  153. Jacob Mack (17:59:21) :

    “I think Steve, that we should discuss this physics and chemistry of AGW in depth here real soon.”

    I would be happy to do so, if you can keep the conversation civil and constructive.

    Most everyone honestly believes what they say, even if they may sometimes be mistaken. A constructive dialog requires that anyone involved enter admitting that they may sometimes be wrong. If you can enter an exchange honestly saying that you may sometimes be wrong, then it will be worthwhile. If you enter certain that you (or worse, some distant authority you will point to) is 100% correct, then any discourse would be a terrible waste of time.

  154. ““Scientists on Wednesday unveiled evidence to suggest global warming is affecting all of Antarctica, home to the world’s mightiest store of ice.”

    Oh, I take it that you are not familiar with the errors that were discovered in that “study”. That is Steig’s paper, I believe. It has been shown to be in error. Steig has produced a corrigendum, which you can read about here. Basically, the error bars are so wide now that the result of his study is “temperatures have risen 0.12 degrees +/- 0.12 degrees.”

    A station had great weight in the study but the data attributed to that station didn’t come from there. It was actually a splicing of data from several other stations. That study is, at this point, pretty much debunked.

    Please, feel free to try again.

  155. Steve, no, not 100% certainty, but I find your doubts of the “assumptions” to be questionable without further reference to data. I am confused as to how you make a confident statement like:
    (1.) “Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001.”
    Next: (2.) “Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings. There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future.”
    (1.) What about the high heat capacity and specific heat of water, the changes in salinity recently noted, the higher ocean CO2 content, and the lagging conduction of heat to the atmosphere from the ocean? (only about 10% of total heat, but evaporation and water vapor feedbacks come into play as well)
    I will stop there for now, but it seems to me that the physics and chemistry (and the intricate weather patterns and long term climate trends, say 50 years to present) indicate otherwise for aerosols. There is significant research on aerosol scattering effects, so I am confused by your statement that there is no evidence regarding their current and future effects.

  156. Steve Fitzpatrick (16:59:37) :

    tallbloke (15:45:04) :

    I obsess about nothing, and it would help maintain civility by not making this kind of non-constructiive comment.

    Apologies, your asking a question and then ignoring the reply annoyed me.

    “What would the climate sensitivity to co2 look like if the solar contribution to the warming was, say, 75% ? Simply 1/4 of your figure, or is it more complicated?”

    I replied that I knew of no data set that could be used to make solar contribution 75%.

    Thus avoiding answering the question. Which you still are… I pointed out the uncertainty in TSI values, and asked you to treat my question as speculative. But instead of giving me a single, clear answer, you have asked another four questions.

    When you’ve answered my single reasonable question, I’ll answer your additional ones.

  157. Steve Fitzpatrick (15:02:40) :
    I was not aware that Lean had changed her mind about the 2 watts change since the little ice age. It certainly was not my intent to misrepresent her current views. The calculations I did were based on recently measured changes in intensity over the solar cycle (peak to valley) of ~1 watt per square meter at the top of the atmosphere, and the model assumed this variation was the same since 1871.

    The peak to valley change depends on the size of the solar cycle and varies by a factor of 3 or more. A good median value is 0.1% of TSI, or ~1.4 W/m2; for some cycles larger, for some smaller.

    This works out to ~0.7 * 0.25 = 0.175 watt per square meter, and an expected solar signal from the solar cycle of 0.047C (peak to valley) for a sensitivity of 0.27 degree per watt per square meter.
    A simpler calculation is that the solar signal would then be 1/4 of 0.1% of the effective temperature or 0.025% of 288K = 0.07K [or C].

    What I found interesting was that the best model fit to the temperature data corresponded to ~0.168 watt per square meter, remarkably (at least to me) close to the 0.175 watt per square meter that would be expected based on the measured variation over the last few cycles.
    In view of my simple calculation above, where the sensitivity doesn’t enter at all, I don’t see the relevance of the correspondence.

    So for what it is worth: the model is consistent with no substantial change in cyclical variation over the past 130 years.
    And I don’t understand this statement. Stefan-Boltzman’s law hasn’t changed. So what is this ‘cyclical variation’?
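The "simpler calculation" in this reply rests on the Stefan-Boltzmann scaling T ∝ S^(1/4), which implies dT/T = (1/4) dS/S. Written out directly, with the 288 K mean surface temperature and 0.1% peak-to-valley TSI variation quoted in the exchange:

```python
# Stefan-Boltzmann scaling: T ~ S^(1/4), so dT/T = (1/4) * dS/S.
T_mean = 288.0      # K, mean surface temperature used in the comment
ds_over_s = 0.001   # 0.1% peak-to-valley TSI variation over a solar cycle

dT = T_mean * ds_over_s / 4.0   # about 0.07 K, the figure quoted above
```

This is why the reply says the sensitivity "doesn't enter at all": the black-body scaling fixes the expected solar-cycle signal at about 0.07 K without any assumption about feedbacks.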

  158. tallbloke (18:50:01) :
    I pointed out the uncertainty in TSI values
    The uncertainty is smaller than the solar min to max variation so is hardly relevant. Even a 1 W/m2 uncertainty translates into a 0.05K temperature signal which is negligible in the current context.

  159. Fair enough Crosspatch, but here are other recent papers, some preliminary or up for peer review, while others are already published:

    http://www.atmos-chem-phys-discuss.net/9/…/2009/acpd-9-1703-2009.pdf

    http://www.sciencemag.org/cgi/content/abstract/311/5769/1914

    http://www.newscientist.com/article/dn16740-global-warming-reaches-the-antarctic-abyss.html (the attribution is not made prematurely to CO2; in fact, it is stated by the researchers that it is too soon to know)

    http://www.climatehotmap.org/antarctica.html

    http://www.sciencedaily.com/releases/2006/03/060330181319.htm

    I also want to add that precipitation will slow down Antarctic warming, and some evidence suggests that El Niño will also temporarily suppress warming magnitude, and yet the Antarctic is still warming, as is Greenland, while the Arctic is warming at an even faster pace.
    More on all this later… I also have preparations to make for Steve, soon as he answers my initial questions.

  160. Jacob Mack (18:48:21) :

    “Steve, no not 100% certainty, but I find your doubts of the “assumptions,” to be questionable, without further reference to data.”

    I should hope a lot less than 100% certainty. The IPCC models differ by a factor of about 3 in their projections of warming through 2100. At a minimum that ought to lower the certainty level a fair amount below 100%; they can’t all be correct. If you have one particular model that you think is almost certainly correct, then OK, but please tell me which model that is.

    I am confused as to how you make a confident statement like:
    (1.) “Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001.”

    There have been four published studies (that I am aware of) where total heat content in the top 700 meters of ocean was calculated based on Argo float data (following the correction of errors in a small subset of floats, of course) as well as independent confirmation by ocean mass and altimeter readings (satellite). One showed a modest fall in heat from 2003 to 2008, two showed a very slight fall to flat heat content, and one a slight increase in ocean heat. The best available data indicate that there has been no heat accumulation (or a very slight fall) in the top 700 meters of ocean since 2003.

    “Next: (2.) Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings. There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future.”

    The IPCC’s uncertainty limits for net aerosol forcings range from tiny to huge. All global estimates based on measurements I have seen indicate a gradual fall (a total reduction amounting to about 50% of the 1993 value) since the effects of Pinatubo ended in about 1993. Measured aerosol effects declined through at least 2005 (the well-known global brightening), which should have increased solar intensity and heat accumulation in the ocean… it did not happen.

    “What about the high heat capacity and specific heat of water, the changes in salinity recently noted, the higher ocean CO2 content, and the lagging conduction of heat to the atmosphere from the ocean? (only about 10% of total heat, but evaporation and water vapor feedbacks come into play as well)”

    I honestly have no idea what you are trying to say in the above paragraph. Perhaps you could explain in a different way.

    Jacob, what I see is that for extreme greenhouse forced warming to be correct, you have to believe that 1) ocean heat accumulation is extremely slow (lagging far behind the surface, if you will), that 2) human generated aerosols have canceled a large fraction of radiative warming, 3) that this “aerosol cancellation” is going to decline in the near future, and 4) that the total of water vapor and cloud feedbacks is strongly positive.

    In addition, all these things must be correct for the whole structure to “hold together”; if ocean lags do not extend to hundreds of years, then the forcing can’t be what is claimed, if the forcing is not what is claimed then the feedbacks can’t be right, etc., etc. Simulating the atmosphere and ocean is a remarkably difficult problem, and this explains the wide range of model predictions (produced by groups of dedicated and honest scientists and programmers, no doubt)…. but none of it inspires confidence in their predictions. Finally, please note that most of the models do not even correctly predict the average surface temperature of the Earth…. today.

  161. Please see below, regarding the greater volume of fresh meltwater due to its lower density. (D=M/V) Again, the differences in water’s physical characteristics due to salinity cannot be ignored.

    “In a paper titled “The Melting of Floating Ice will Raise the Ocean Level” submitted to Geophysical Journal International, Noerdlinger demonstrates that melt water from sea ice and floating ice shelves could add 2.6% more water to the ocean than the water displaced by the ice, or the equivalent of approximately 4 centimeters (1.57 inches) of sea-level rise.

    The common misconception that floating ice won’t increase sea level when it melts occurs because the difference in density between fresh water and salt water is not taken into consideration. Archimedes’ Principle states that an object immersed in a fluid is buoyed up by a force equal to the weight of the fluid it displaces. However, Noerdlinger notes that because freshwater is not as dense as saltwater, freshwater actually has greater volume than an equivalent weight of saltwater. Thus, when freshwater ice melts in the ocean, it contributes a greater volume of melt water than it originally displaced.”
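    The ~2.6% figure in the quoted passage follows from the density difference alone. A minimal sketch, with typical density values assumed (1000 kg/m3 fresh, 1026 kg/m3 seawater; the exact values vary with salinity and temperature):

```python
# Why floating-ice melt adds volume: Archimedes vs. density (illustrative values).
rho_fresh = 1000.0  # kg/m^3, fresh meltwater (assumed)
rho_sea = 1026.0    # kg/m^3, typical seawater (assumed)

# A floating ice mass m displaces a seawater volume m / rho_sea,
# but melts into a (larger) freshwater volume m / rho_fresh.
extra_fraction = rho_sea / rho_fresh - 1.0
print(round(100 * extra_fraction, 1))  # ~2.6 % more melt volume than displaced
```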

    This is also discussed in General Chemistry (by most college professors), as is thermal expansion. So there is a double net positive effect here; many HS textbooks and 8th-9th grade science websites get this wrong and assert (incorrectly) that sea ice melt would contribute little to nothing to sea level rise. Once you take a college level physics/chemistry course it becomes clear that salinity levels affect the heat capacity of water, and the density/volume of melting ice which displaces the water, and thus sea level rise. If we were discussing melting of fresh ice into fresh water, then the net water rise from displacement would be almost zero. I suggest you read http://www.fas.org/spp/military/docops/afwa/ocean-U1.htm and
    http://www.fas.org/spp/military/docops/afwa/ocean-U2.htm, U3, etc…

  162. 1. Jacob, you still haven’t responded to my post other than to say that I oversimplified. Do you have shortwave radiation data over time measured at Earth’s surface before it gets converted to LWR, compared to what hits the outer part of the atmosphere? It appears that your premise is that there is no difference or variation between the two, thus allowing continued increase in CO2 to heat the Earth from here to Armageddon. There is quite a difference and the data is noisy. But if you think not, show me.

    2. SST’s and oceanic oscillations oscillate around a rather cherry picked zero. Yes. But over what time scale? And is it an even oscillating swing or one that is predominantly lopsided one century, and lopsided in some other way the next? During the industrial age, are you saying that the swing is even, can be canceled out, and therefore leave us with AGW? Are you kidding? Remember, most average “normal” lines on temperature graphs are no more than 30 years long. Yet we know that some oscillations are at least twice that length and have a rather chaotic swing.

  163. “and yet the Anartic is still warming”

    No. It isn’t.

    The articles you link to are full of the usual buzz words such as:

    “He says the changes could be responsible for up to 20% of the observed global sea-level rise.”

    Which is pure blither. The oceans have been rising at a fairly steady rate for about the last thousand years, long before we started using fossil fuels. Oceans reached their maximum height about 7,000 years ago when they were about 2 meters higher than now. In fact, the trend in sea level since 2006 is flat, with a rise that had been proceeding at a fairly steady rate suddenly stopping.

    The oceans aren’t warming either. Data from the ARGO project show flat to slightly cooling ocean temperatures since the project started.

    The articles you provide are basically not true but have become “conventional wisdom”. The seas are not rising, Antarctica is accumulating ice, the oceans are not warming, the atmosphere is not warming … and you will see 2009 end up with more ice than 2008 if the current trend holds. It looks like the melt is coming to an end early this year … before the Northwest passage has had a chance to open.

    If you dig into those stories that you read, you will learn that they are not backed up by the data. They pretty much all simply repeat each other’s data, and since that is what people are taught is “fact” these days, you are primed to receive it as such. Basically you are being hoodwinked in order to get you to agree to give up a good portion of your cash in order to “save the planet”.

  164. Jacob, are you referring to floating ice calved from glaciers, or floating ice from freezing ocean saltwater seas? Floating ice calved from glaciers is indeed freshwater sourced. Floating ice from freezing seas is salt water to begin with, and when melted returns to salt water. Unless this kind of ice is capable of creation, it cannot have a greater volume as a liquid than it began with before it froze and then reintegrated back into the sea from whence it came. If there is a common misconception, it is this: that the Arctic ice cap is frozen water from river sources, accumulated snow, and ice from rain.

  165. Pamela,
    I noticed your latest response in my email, so I decided to reply now. The floating ice calved from glaciers has some salinity, albeit lower; however, the salt water dissolving the mostly fresh ice sourced from the glaciers will change the density and increase the volume of the water, in addition to thermal expansion. So let us say thermal expansion is currently flat: the salinity of the water would still change the density, which would change the volume of the water. Since I am posting, I will also ask you to respond to a past post in a recent thread where you asked me about heat transfer between oceans and terrestrial land masses, since you never got back to me there either.
    Now, regarding incoming SW radiation and LW radiation, there is a lot of literature showing good approximations and results of LW radiation trapping, and I will be more than happy to post links up tomorrow.
    Steve, in the interest of being more efficient in my posts: the IPCC reports are an analysis of past and recent literature, data, GCM’s and so forth, and of course low, median and high end estimates are made regarding such things as aerosol effects, incoming/outgoing radiation and so forth. However, there is recent and well validated research highlighting aerosol effects and water vapor rise, which I will post tomorrow. You may find plenty of your own at NOAA, NASA GISS, NASA.GOV, AOS Princeton and so forth, but I will paste the most relevant links tomorrow.
    I will say now, however, that ocean cooling does not invalidate water vapor feedbacks whatsoever, as this is a natural process through conduction, for example. As sea surfaces cool and heat is transferred vertically, more heat is added to the atmosphere, where increased levels of H2O/CO2 can trap it. Keep in mind that evaporation has about a 50% cooling effect on the seas, whereas conduction has only about a 10% total effect; so the escaping heat is more easily trapped by higher atmospheric GHG levels, and coupled with the long residence time of CO2 this provides a forcing upon the water vapor, which is itself a positive feedback, and in turn more heat returns to the planet surface. Since land is more greatly affected by heat transfer due to temperature differences, land temperatures fluctuate greatly, while even as the sea absorbs more heat, its temperature changes little due to water’s high heat capacity and specific heat. Stratospheric warming has been shown in several studies, along with increases in water vapor in the middle to upper troposphere, even as the Argo floats show some ephemeral cooling.
    Recently, the Argo floats had to be recalibrated, and when they were, it was revealed that the level of cooling was not as significant as previously recorded. Also, ocean cooling has been predicted by the models for the past 25 years and has been well expected and predicted in several papers since the 1980’s; it was first hypothesized in the late 1970’s. Heat can also travel over time to deeper depths of the sea and mix, as can CO2, which in combination with various natural processes can halt warming of the water and/or create a cooling period, though there is no cooling trend of the bodies of water on this planet either.
    The doubling of CO2 will not necessarily lead to equilibrium immediately, and in fact several papers state that it may be some time after the doubling of CO2 from pre-industrial levels that equilibrium may be reached and the full climate sensitivity realized through a global mean temperature increase. The range is between 2 degrees C and 4.6 degrees C (some ranges give 1.5 as a starting point, and 5-6 degrees is not ruled out), but the median clustering is about 3 degrees C, which has been demonstrated to be an accurate approximation in far more than 4 peer reviewed papers. So, if we forget about the IPCC for a minute and Hansen’s estimates, we see a strong agreement with high confidence that the temperature increase will be greater than 2 degrees C, with the clustering at about 3 degrees C. Now, averaging in the IPCC report and looking at Hansen’s median (or even low) projections, as well as GCM’s with more conservative predictions based upon the physics and the central theorem, we are looking at about a 3.5 degree C increase in global mean temperature, or so. It is impossible, from the IPCC report, the peer reviewed literature (99.5% of it), and the most recent updated data, to predict a global mean temperature increase of less than 2 degrees C. The statistics show 3 degree C clustering from many sources, repeated many times. You are ignoring the impact of short term sea lag and transfer of heat due to temperature differences; admittedly it is very complex in such a chaotic system (as Pamela is quick and correct to point out), but it seems to me you have not considered the system enough in your analysis.
I am not predicting a 5 degree C increase or immediate catastrophe at the doubling of CO2 over pre-industrial levels, but once equilibration has occurred we are looking at several catastrophic events, and prior to this many citizens of third and second world countries will die or become deathly ill as a result of global warming due to anthropogenic means and the natural variability response, as well as dimming/cooling as seen in the brown cloud.
    The 1 degree C warming you predict is over halfway there now, as evidenced by global mean temperature analysis which has been repeated at least a hundred times. Hence a total 1 degree increase without considering equilibrium is impossible. AGW is >99% certain (really 100%, but all measurements contain uncertainty); a future of increased droughts and floods due to AGW is >90% certain (around 95%); and predictions for greatly detrimental weather patterns and climate disruptions are >66% certain (AR4), but in light of recent literature and empirical observations >70%. I, for one, do not want to wager on those odds. We are approaching 1 degree global mean temperature warming now (or will in the next 10 years), so I am confused as to how we can have only a 1 degree increase at equilibrium. The reaction will shift to the right, and even with chaotic weather, ENSO, and the like, the warming effect of GHG’s will continue to be part of the trend.
    The GCM’s are amazingly accurate and precise at this point, and in conjunction with real world data input/updated in them, and satellite/proxy data, it is clear that 1 degree warming is a gross underestimate. I will paste the links tomorrow and if I have time some of my own calculations as well; if not then within a few days I will show where I see you miscalculating and show my proofs.

  166. Oh, and interestingly enough, the planet has still been warming since 1998, even with 1998 being an unusually warm year… go figure…

  167. The XBT’s showed exaggerated warming and the Argo floats showed cooling due to bad sensors and underestimates of how long it would take them to sink to depth; once the corrections were made, the cooling was shown not to exist and the warming was shown to be on the incline. Okay, now I will digress for the night.

  168. Jacob Mack:

    I also want to add that precipitation will slow down Anartic (sic) warming and some evidence suggests that El Nino will also temporarily suppress warming magnitude, and yet the Anartic (sic) is still warming… I also have preparations to make for Steve, soon as he answers my initial questions.

    How about you answering the question I’ve been asking you through a number of threads.

    OK, here’s the question: You previously have stated, explicitly, that you had a B.S. in Chemistry. But then another poster provided educational links, none of which stated that Jacob Mack had been awarded a degree in Chemistry. Given those facts, my question is which school did you graduate from with a Chemistry degree, and what year was that?

    Simple questions, and your two-part answer can be posted here in under a minute.

    What are you waiting for?

  169. Smokey, one, you have been misquoting me the entire time, and two, the links you gave me used faulty methods, did not use proper references, and have not been repeated and validated; come to think of it, you have not answered any of my questions, and you did not falsify AGW whatsoever.

  170. No, no, I’m not misquoting you at all. I’m just asking two simple questions.

    Seems a guy would be proud of the school he graduated from. With his degree in… Chemistry.

    So, what school and what year?

  171. Leif Svalgaard (19:04:55) :

    tallbloke (18:50:01) :
    I pointed out the uncertainty in TSI values

    The uncertainty is smaller than the solar min to max variation so is hardly relevant. Even a 1 W/m2 uncertainty translates into a 0.05K temperature signal which is negligible in the current context.

    Hi Leif, we’ve rehearsed the argument elsewhere and we disagree on this. As far as I can see, your application of the Stefan-Boltzmann law doesn’t fit the context of a planet with a dynamic atmospheric system. Earth is not a snowball or a lump of coal, and doesn’t behave like either of them.

    The calculations I did on ocean heat content changes due to insolation, which you confirmed, show that sunlight plus terrestrial atmospheric factors can lead to a 4W/m^2 ‘forcing’ on the decadal scale. And that wasn’t ‘peak to trough’ either.

    On century long timescales, I think we need to take into account the various issues with projecting the PMOD model beyond the data, the ACRIM data’s higher peak-trough amplitude, and the bigger than expected fall in TSI from the peak of cycle 23 to now. I think there is a higher climate sensitivity to changes in TSI than a simplistic analysis of the temperature data would indicate. This is due to the curve flattening effect of the ocean’s ability to store solar energy (hiding it temporarily from the surface record at the top of the solar cycle), and El Niño’s tendency to occur at solar minimum (raising SST’s at the bottom of the solar cycle and thus further flattening the signal).

    Outgoing longwave radiation from the surface jumped 4W/m^2 after 2000, and has stayed at that higher level since. The oceans are shedding some of the heat which my calcs show they have been gaining since the end of the little ice age some 300 years ago, barring some minor downtrends along the way.

    This has kept things warmer than they would otherwise be for some 8 years now, which shows what a vast reservoir of heating energy the oceans contain. However, the ocean heat content is consequently diminishing, despite what Josh Willis’ refudging of the ARGO data says, and even he has since admitted (and then recanted) that there has been a “slight fall” since 2003.

    Since there is no big tropical tropospheric hotspot developing as a result of this increased OLR, the puny effect of the change in concentration of the trace gases the IPCC worry about is, well, puny, and nothing to worry about.

    The bottom line is that in my opinion, your estimate of change in ‘effective temperature’ for a black body earth for a 1W/m^2 change in TSI may be correct, but your estimate of the climate sensitivity to that change is well off the mark.

    This is why I say that the climate is very sensitive. But not to CO2. The evidence is to be found not in the atmosphere, but in the changes to ocean heat content (which are hidden from the surface record). Because of the ocean’s vast heat capacity (the top two metres can store as much heat as the entire atmosphere above it, as Bob Stevenson pointed out), and the ocean’s ability to move heat from the tropics towards the poles, the earth is a well moderated place to live. This gives the false impression that the climate is insensitive to changes in insolation levels.

    Nothing could be further from the truth.

  172. Jacob said

    “The 1 degree C warming you predict is over halfway there, now as evidenced by global mean temperature analysis which is repeated at least a hundred times.”

    I note with interest the precision with which you believe we can analyse modern temperatures and compare them to older ones.

    So are you talking about modern warming whose temperatures are based on the Hadley 20 global stations in the year 1850 (which reflects the little ice age), or James Hansen’s innovative 1880 figures based on a novel grid system, also referencing a small number of stations, which became the basis for the first IPCC report?

    Both of these databases of course bear no relation to today’s stations in terms of subsequent changes in location, numbers, UHI effect or poor siting.

    Perhaps you are referring to modern cooler temperatures in relation to previous warmer eras, as evidenced by the MWP, the Roman optimum or the Holocene optimum?

    If you can confirm, we can then compare like for like, rather than cite figures which assume greater accuracy than is evidenced by the methodology employed, or ignore past warming episodes.

    Tonyb

  173. tallbloke (00:40:03) :
    On century long timescales, I think we need to take into account the various issues with projecting the PMOD model beyond the data, and the ACRIM data’s higher peak-trough amplitude, and the bigger than expected fall in TSI from the peak of cycle 23 to now.
    Whatever details you may ponder, the uncertainty is still less than the solar cycle variation. And I don’t know what PMOD ‘issues’ you are talking about going back in time [and I know PMOD quite well]. The ACRIM data seems to have a smaller peak-trough amplitude because the trough in 1996 was less deep. The very earliest data before 1980 I’d not make much of. The larger than expected PMOD drop is due to calibration errors, and ACRIM does not have a deeper minimum now than in 1986. But all of this doesn’t matter: There are good reasons to believe that the magnetic field is responsible for TSI variation and since the magnetic field now is just what it was 108 years ago, there is no reason to believe [and no evidence for it] that TSI was any different back then than now.

  174. Leif Svalgaard (06:10:11) :
    since the magnetic field now is just what it was 108 years ago, there is no reason to believe [and no evidence for it] that TSI was any different back then than now.

    Yebbut, the sun’s just gone into a once in 200 year funk, hasn’t it? Before then the magnetic field (I assume you are talking about the solar dipole field?) shows a more or less steadily rising trend throughout the C20th.

  175. Leif Svalgaard (18:58:51) :

    “The peak to valley change depends on the size of the solar cycle and varies by a factor of 3 or more. A good median value is 0.1% of TSI or ~1.4 W/m2, for some cycles larger, for some smaller.”

    Does the mean TSI over a whole cycle remain more or less constant from cycle to cycle, or does the mean TSI for a whole cycle depend on the level of solar activity. For example, if the peak of one cycle has a sunspot number of 75, and the peak of the next 150, would the average TSI over each cycle be the same, or would the cycle with lower peak activity have a lower average TSI?

    “A simpler calculation is that the solar signal would then be 1/4 of 0.1% of the effective temperature or 0.025% of 288K = 0.07K [or C].”

    Does the climate sensitivity not enter into the expected temperature change? If the climate sensitivity were 0.75 degree per watt, then a top of atmosphere variation of 1.4 watts per M^2 would give about 0.25 * 0.7 * 1.4 * 0.75 = 0.184K change, not close to the 0.07K you note above. It seems to me that the above calculation implicitly assumes a sensitivity of about 0.21K per watt/M^2. Am I missing something?
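    For clarity, the arithmetic behind the 0.184K figure, using the numbers as stated (the 0.75 degree per watt sensitivity is the hypothetical value posed in the question, not a measured one):

```python
# Expected solar-cycle temperature signal if sensitivity were 0.75 K per W/m^2.
geo = 0.25          # spherical averaging factor
absorbed = 0.7      # fraction absorbed (1 - albedo)
tsi_swing = 1.4     # W/m^2 top-of-atmosphere, peak to valley
sensitivity = 0.75  # K per (W/m^2), hypothetical value from the question
print(round(geo * absorbed * tsi_swing * sensitivity, 3))  # ~0.184 K
```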

    “So for what it is worth: the model is consistent with no substantial change in cyclical variation over the past 130 years.
    And I don’t understand this statement. Stefan-Boltzmann’s law hasn’t changed. So what is this ‘cyclical variation’?”

    What I meant was that there was no obvious evidence for a big overall trend in TSI, past cycles were not very different from recent.

  176. tallbloke (07:32:58) :
    Yebbut, the sun’s just gone into a once in 200 year funk hasn’t it?.
    No, not at all, a 100 year funk, not 200. Cycle 23 was much like cycle 13, and cycle 24 is forecast to be like cycle 14.

    Before then the magnetic field (I assume you are talking about the solar dipole field?) shows a more or less steadily rising trend throughout the C20th.
    No, not at all. The heliomagnetic field shows the same ~100 year variation as solar activity. Here http://www.leif.org/research/Heliospheric-Magnetic-Field-Since-1835.png
    In http://www.leif.org/research/Reply%20to%20Lockwood%20IDV%20Comment.pdf we debunk the idea of steady increase. Lockwood et al have come around to our view and now agree with our reconstruction. On http://www.leif.org/research/Heliospheric-Magnetic-Field-Since-1900.png we show their reconstruction as the green curve compared with ours [blue curve] and observations by spacecraft [red curve].

  177. I should clarify that: the magnetic field at minimum shows a more or less steadily rising trend through the C20th. And it should be noted that the shorter, more vigorous cycles of the latter half of the C20th, with their steep up- and down-ramps, meant a lot less downtime for the sun, and a lot more TSI overall, right up to 2003 or so.

    In fairness to Steve, I don’t think we should swamp his thread with a continuation of our debate on all this stuff here, though he may want to consider these two points if he has previously simply accepted the facile argument that the last three solar cycles had lower maximum amplitudes than the highest one ever recorded, and therefore the sun is out of step with the temperature record.

  178. tallbloke (07:46:31) :
    I should clarify that, the magnetic field at minimum shows a more or less steadily rising trend through the C20th.
    I thought that this
    http://www.leif.org/research/Reply%20to%20Lockwood%20IDV%20Comment.pdf made it clear that there is no such rise.
    The minimum in 2008 is on par with that in 1901. The minimum in 1996 with that in 1933. The minimum in 1965 on par with that in 1933 as well. The minimum in 1986 on par with 1945. The minima in the 19th century very much like the ones in the 20th: http://www.leif.org/research/Heliospheric-Magnetic-Field-Since-1835.png and the maxima too, BTW.
    In other words, the variations of TSI minima have been much less than 1 W/m2.

  179. Steve Fitzpatrick (07:41:12) :

    What I meant was that there was no obvious evidence for a big overall trend in TSI, past cycles were not very different from recent.

    See my points above. To illustrate them and their implications, some facts:

    1) Sunspot numbers correlate well with TSI.
    2) The average sunspot number from 1875 to 1935 was 42
    3) The average sunspot number from 1945 to 2005 was 73
    4) The average sunspot number from 1975 to 2005 was 68, so it didn’t drop much after the highest solar cycle ever recorded.
    5) Above around 40, the oceans start to gain heat content.
    6) Ocean heat content is the main driver of the sea surface and therefore air temperature on all timescales over a couple of months.
    7) Ocean heat content is driven by the sun, not CO2, because longwave radiation doesn’t penetrate the ocean; it just causes more evaporation at the surface.

    I’ll leave you to join the dots.

  180. Leif Svalgaard (07:45:07) :

    tallbloke (07:32:58) :
    Yebbut, the sun’s just gone into a once in 200 year funk hasn’t it?.
    No, not at all, a 100 year funk, not 200. Cycle 23 was much like cycle 13, and cycle 24 is forecast to be like cycle 14.

    We’ll see soon enough. :-)

    Before then the magnetic field (I assume you are talking about the solar dipole field?) shows a more or less steadily rising trend throughout the C20th.
    No, not at all. The heliomagnetic field shows the same ~100 year variation as solar activity.

    Well your theory of a 100 year cycle is an interesting one, but I don’t think the data supports it all that well. As you said before, your re-evaluation of C19th solar activity is still in the works. Until it’s done, I’ll carry on using the sunspot numbers, which were generally lower in the C18th and C19th than they have been in the C20th since 1935. By the way, did you see my offer of help with the solar magnetic data digitisation on the NASA admits possibility of Dalton minimum thread?

  181. tallbloke (08:10:12) :
    I’ll leave you to join the dots.
    Some more dots to connect:
    Average TSI for
    1830-1875 1365.98
    1875-1930 1365.78
    1930-1975 1365.94
    1975-2009 1365.94

  182. Leif Svalgaard (08:08:50) :
    The minimum in 2008 is on par with that in 1901. The minimum in 1996 with that in 1933. The minimum in 1965 on par with that in 1933 as well. The minimum in 1986 on par with 1945. The minima in the 19th century very much like the ones in the 20th: http://www.leif.org/research/Heliospheric-Magnetic-Field-Since-1835.png and the maxima too, BTW.
    In other words, the variations of TSI minima have been much less than 1 W/m2.

    Well, maybe. Then again, maybe not. It depends whose data you use, and how you interpret it.

    And regardless of all that, my observations of the overall high levels of TSI in the latter C20th due to short minima, swift up and downramps etc still stand. You never refute them, but you always obfuscate them with an avalanche of links and other matters.

    For these simple and indisputable reasons, TSI in the second half of the C20th was much higher than in the first. End of.

  183. tallbloke (08:24:08) :
    Well your theory of a 100 year cycle is an interesting one, but I don’t think the data supports it all that well.
    This is not a theory, but derived from the data. The good news is that HMF B is very well determined for the past 170+ years. Even our harshest critics now agree with us. So your statement that the data doesn’t support it all that well is unfounded.

    I’ll carry on using the sunspot numbers
    Leif’s law: if data agree with your view they are good.

    By the way, did you see my offer of help with the solar magnetic data digitisation on the NASA admits possibility of Dalton minimum thread?
    I estimate the work to be of the order of 10 man-years. How many will you contribute? Anything helps.

  184. tallbloke (08:32:10) :
    For these simple and indisputable reasons, TSI in the second half of the C20th was much higher than in the first. End of.
    1st half average 1365.85
    2nd half average 1365.98
    Much higher?

  185. Leif Svalgaard (08:30:46) :

    tallbloke (08:10:12) :
    I’ll leave you to join the dots.
    Some more dots to connect:
    Average TSI for
    1830-1875 1365.98
    1875-1930 1365.78
    1930-1975 1365.94
    1975-2009 1365.94

    I thought you said there was still much work to do on the C19th, which is why I offered to help with the digitisation of the records. Why not put the data and methodology up for discussion in a post so we can all discuss it properly on a separate thread?

  186. Leif Svalgaard (08:36:21) :

    tallbloke (08:24:08) :

    I’ll carry on using the sunspot numbers

    Leif’s law: if data agree with your view they are good.

    There are a couple of different ways that comment can be understood.

    Your way, and my way. ;-)

  187. tallbloke (08:41:07) :
    I thought you said there was still much work to do on the C19th, which is why I offered to help with the digitisation of the records. Why not put the data and methodology up for discussion in a post so we can all discuss it properly on a separate thread?
    I may not have been clear. It is not that there are holes that are not covered. The HMF B can be determined from only a few stations, and the correction to the sunspot numbers also can be done with only a few stations [because all stations basically show the same]. My goal is to digitize ALL stations anyway, so that the data does not disappear [as it is beginning to do: yearbooks crumble, libraries burn or are flooded, or old books are just thrown out].

  188. tallbloke (08:41:07) :
    Why not put the data and methodology up for discussion in a post so we can all discuss it properly on a separate thread?
    There are two separate issues: HMF B and the Sunspot number.
    The HMF B methodology is described here:
    http://www.leif.org/research/The%20IDV%20index%20-%20its%20derivation%20and%20use.pdf
    The sunspot methodology was described by Rudolf Wolf in the 1850s. A modern version is here:
    http://www.leif.org/research/Napa%20Solar%20Cycle%2024.pdf

    The data are very voluminous. Some are in public archives, others only in my 5 Gigabyte database. You can see some of the original data here: http://www.leif.org/research/todo/
    The problem is not only in entering or OCRing the data, but also in correcting them [yes] and interpreting them correctly. As an example of the subtle issues, see http://www.leif.org/research/todo/api_1905_11_h.PDF
    look at row 15 over to the right, where the value 596 looks like an error, but really isn’t. The secret is in the 0.356 near the top just under Tagesm. It means that the real data value is 0.356 + the table entry/100,000, except for row 15, where it should be interpreted as 0.35 + the table entry/100,000.

  189. Leif Svalgaard (08:39:09) :
    1875-1930 1365.78
    1930-2009 1365.94

    Well since the sunspot number averages went from 42 to 73 over the same periods, maybe TSI doesn’t match the sunspot numbers too well after all. There again, your TSI numbers are a back extrapolation, whereas the sunspot numbers are the sunspot numbers. If you’re right, it’s amazing that such a small increase in TSI could cause such an acceleration in the thermal expansion of the oceans, and would imply a truly remarkable sensitivity of the Earth to small changes in insolation. It would certainly put CO2’s claimed sensitivity in the shade. I’ll run some calcs on your figures and see what it implies.

    To be continued on your own thread I hope.

  190. Leif, sorry, not a back extrapolation, a reconstruction using the magnetic field measurements as a proxy. I’d very much like to learn more about the interaction of the Earth’s magnetic field with the solar fields so I can better understand your methodology.

    I agree the old records need preserving. I’ll email you soon about my offer of help.

  191. Jacob, as you can see, discussions are best kept to the thread topic and narrowly focused. Your last post was a jumbled mess when judged against more successful debating techniques. Just pick one thing from your post, one component of AGW that you feel strongly about. Keep it tied to the thread, which has to do with how sensitive the Earth’s natural climate is to small changes. CO2 is a very small change. Pick something about the warming cycle of CO2. And we will go from there. Bear in mind that we like to stick with observed and measured data, not modeled scenarios. However, modeled scenarios can be compared to measured data. There are sources for clouds, vapor, CO2, LWR, SWR, SST, ENSO, you name it, that can be compared to the modeled scenarios. To remind you, the least amount of temperature rise in the scenarios provided by Hansen, the IPCC, and the like assumed strictly controlled reduction of CO2, which of course has not occurred. So the model we will compare to will be the one that is based on and closest to the current estimated CO2 level.

  192. tallbloke (08:49:26) :
    “Leif’s law: if data agree with your view they are good.”
    There are a couple of different ways that comment can be understood.

    I think not. What I have found [empirically] is that people [like you] would gladly use a dataset [even if dubious] if the data agree with their own pet theory, and tend to spread FUD on other data.

    I have even come across the following argument: ‘since it is obvious that variation of the Sun is the main [perhaps, sole] driver of climate, the fact that there is climate change proves that the Sun changes accordingly’. From your posts one can only conclude that you wholeheartedly subscribe to that argument, e.g.: “This gives the false impression that the climate is insensitive to changes in insolation levels. Nothing could be farther from the truth.”

    BTW, you used the weasel word ‘insolation levels’. Solar insolation drives glaciations, so you are correct about that straw man, but it is irrelevant, because the discussion was not about the large changes in insolation, but about the minute changes in irradiance.

  193. tallbloke (09:14:32) :
    Well since the sunspot number averages went from 42 to 73 over the same periods, maybe TSI doesn’t match the sunspot numbers too well after all.
    When the sunspot number changes by 150, TSI changes by 1.3 W/m2, so 1 spot means a change of 0.009 W/m2, so 73-42 = 31 spots equates to a change of 0.27 W/m2 [but I also think that the SSN should be 50 not 42]. I had 0.16, Preminger has 0.22 W/m2 [to 2004, so a tad smaller if to 2009] based on Greenwich Sunspot Area.
    So tiny tiny numbers.
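As a quick sanity check, the conversion above can be reproduced in a few lines; this is only a sketch, and the 1.3 W/m^2-per-150-spots scaling is taken directly from the comment, not independently sourced:

```python
# Back-of-envelope check of the sunspot-number-to-TSI conversion quoted above.
# Assumed scaling (from the comment): 150 sunspots ~ 1.3 W/m^2 of TSI change.
TSI_PER_SPOT = 1.3 / 150  # ~0.009 W/m^2 per spot

def tsi_change(delta_ssn):
    """TSI change (W/m^2) implied by a change in mean sunspot number."""
    return delta_ssn * TSI_PER_SPOT

# Difference between the two period averages cited in the thread (73 vs 42):
print(round(tsi_change(73 - 42), 2))  # -> 0.27
```

With the alternative SSN of 50 mentioned in the comment, the same sketch gives about 0.20 W/m^2, still a tiny number by this scaling.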

    If you’re right, it’s amazing that such a small increase in TSI could cause such an acceleration in the thermal expansion of the oceans, and would imply a truly remarkable sensitivity of the Earth to small changes in insolation

    A simpler and much more likely explanation [without truly remarkable amazement] is that there is no causal relation [apart from at the hundredth-of-a-degree level] between irradiance [don’t use insolation, as that changes a lot, ~20 W/m2, during a year] and temperatures.

  194. “”” Jacob Mack (19:43:19) :

    Please see below, regarding the higher volume of salt water due to its higher density. (D=M/V) Again the differences in water’s physical characteristics due to salinity cannot be ignored.

    “In a paper titled “The Melting of Floating Ice will Raise the Ocean Level” submitted to Geophysical Journal International, Noerdlinger demonstrates that melt water from sea ice and floating ice shelves could add 2.6% more water to the ocean than the water displaced by the ice, or the equivalent of approximately 4 centimeters (1.57 inches) of sea-level rise.

    The common misconception that floating ice won’t increase sea level when it melts occurs because the difference in density between fresh water and salt water is not taken into consideration. Archimedes’ Principle states that an object immersed in a fluid is buoyed up by a force equal to the weight of the fluid it displaces. However, Noerdlinger notes that because freshwater is not as dense as saltwater, freshwater actually has greater volume than an equivalent weight of saltwater. Thus, when freshwater ice melts in the ocean, it contributes a greater volume of melt water than it originally displaced.” “””
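The quoted ~2.6% figure can be checked with a rough Archimedes calculation; this is only a sketch using typical textbook densities, which are my assumption rather than Noerdlinger’s exact values:

```python
# Floating ice of mass m displaces a volume m / rho_seawater (Archimedes),
# but melts into a volume m / rho_fresh of fresh water. Densities below are
# typical textbook values (an assumption, not taken from the paper).
RHO_SEAWATER = 1026.0  # kg/m^3, typical surface seawater
RHO_FRESH = 1000.0     # kg/m^3, fresh melt water

def excess_volume_percent():
    """Extra melt-water volume relative to the seawater displaced, in %."""
    return 100.0 * (RHO_SEAWATER / RHO_FRESH - 1.0)

print(round(excess_volume_percent(), 1))  # -> 2.6
```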

    Well somehow, I don’t think your predictions will come about. Sea water is certainly denser than fresh water, which in turn is denser than ice. But there’s that slight problem of cooling. Each gram of ice that melts extracts 80 calories from the surrounding ocean water (remember most of the floating ice is actually submerged and surrounded by sea water, so a huge volume of sea water is cooled as that ice melts).

    If you mix equal masses of zero deg C ice and 80 deg C hot water (both fresh), the entire mass of water will reach zero deg C when all the ice is melted.
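That mixing example works because the latent heat of fusion of ice (~80 cal/g) exactly matches the heat released by a gram of water cooling from 80 deg C to 0 deg C; a minimal energy-balance check:

```python
# Energy balance for mixing equal masses of 0 degC ice and 80 degC water.
L_FUSION = 80.0  # cal/g, latent heat of fusion of ice
C_WATER = 1.0    # cal/(g*degC), specific heat of liquid water

mass = 1.0  # grams of each component
heat_needed_to_melt = mass * L_FUSION          # 80 cal absorbed by the ice
heat_released_cooling = mass * C_WATER * 80.0  # 80 cal given up by the water

# The two exactly balance, so the final mixture sits at 0 degC.
print(heat_needed_to_melt == heat_released_cooling)  # -> True
```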

    Also the interior of the sea ice (that initially froze out of the sea) contains pockets of salty brines, so as the ice melts a lot of salt is added to the solution, so it doesn’t stay fresh very long.

    So long as the salinity remains above 2.47% (normal is about 3.5%), the sea water has a positive temperature coefficient of expansion so cooling it reduces the volume, and the sea level will actually go down when the floating sea ice melts; not up.

    I made such a prediction in mid 2004, which was published in Jan 2005. In mid 2006 a British-Dutch team using a European polar satellite reported on ten years of measurements of Arctic Ocean sea levels, and they reported that it was dropping at a rate of 2 mm per year. They also said they didn’t know why, but they were very confident of their numbers.

    So now you know why; that period was a period of Arctic sea ice retreat, and the sea level dutifully declined.

  195. Jacob, maybe you were confusing ice sheets on land that melt into the sea? Greenland ice sheets and Antarctic ice sheets would raise sea levels if they melted completely off and into the sea. The stuff that is floating, be it land-attached floating ice sheets, ice bergs from glaciers, or sea ice, just does not have the potential for catastrophic sea level rise. And of the two, Greenland is the only one historically that has the greater potential of actually melting.

  196. The pretty colored global long wave radiation map at the top of this essay covers a range from 100 to 350 W/m^2.

    This is a bit weird since the NOAA official Earth energy budget says the average for the globe is 390 W/m^2, corresponding to a +15 deg C BB temperature number.

    Actually the 100 to 350 range corresponds to BB temperatures from 204.9 K up to 280.3 K; -68.2 deg C up to a whopping +7.2 deg C

    If you actually cover the entire observed Earth surface temperature range from about -90 deg C to +60 deg C, then the LWIR emission would be more like 63.8 W/m^2 up to 698.5 W/m^2 for the extremes; which is about an 11 to 1 range.
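All of the blackbody figures in this comment can be reproduced from the Stefan-Boltzmann law, F = sigma * T^4; a quick sketch:

```python
# Stefan-Boltzmann check of the blackbody numbers quoted in this comment.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def bb_temp_k(flux):
    """Blackbody temperature (K) that emits the given flux (W/m^2)."""
    return (flux / SIGMA) ** 0.25

def bb_flux(temp_c):
    """Blackbody emission (W/m^2) at the given Celsius temperature."""
    return SIGMA * (temp_c + 273.15) ** 4

print(round(bb_temp_k(100), 1))  # -> 204.9 (K)
print(round(bb_temp_k(350), 1))  # -> 280.3 (K)
print(round(bb_flux(-90), 1))    # -> 63.8 (W/m^2)
print(round(bb_flux(60), 1))     # -> 698.5 (W/m^2)
```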

    It would be nice if they could measure and report the extreme values, instead of some homogenized average values. That would be good; if for no other reason than the actual spectral peak wavelength changes by a factor of 1.82 over that temperature extreme range, which has a decided influence on the impact that CO2 has.

    If they do enough averaging; pretty soon nothing at all ever changes.

  197. Jacob Mack (21:20:39) :

    It is honestly difficult for me to follow what this post was trying to say, except that you appear to believe the results of all the different climate models at the same time (even though those results are substantially different from each other), and everything the IPCC says as well. Let’s try to narrow it down to a couple of issues at a time.

    1. Let’s start with ocean heat accumulation. Do you believe the multiple publications showing essentially flat to very slightly falling heat in the top 700 meters of ocean from 2003 to 2008 are not correct? If so, please explain why you think that. If you accept the results of these studies, then do you agree that ocean heat accumulation is the most accurate way to measure global warming over any specified period of time? If you do not think so, then please explain why not and what other metric you think might be a more reliable gauge of warming.

    2. Now please consider aerosol effects. Do you believe the multiple studies that have shown a significant reduction in atmospheric aerosols since about 1993? If you do not believe these studies, then please explain why not. If you accept that there has in fact been a significant “global brightening” since the early 1990’s (net intensity of full sunlight reaching the Earth’s surface has increased), then does this not suggest that, whatever the net “canceling” of aerosols on radiative forcing may have been in the early 1990’s, that canceling effect has already declined significantly? If not, please explain why.

    If we can limit discussion to at most a few topics at once, it will be easier to make progress. There is no reason not to cover a wide range of topics, but it is almost impossible to address them all at once. It is more efficient to explore specific differences in understanding and try to understand where those differences arise.

  198. Leif Svalgaard (10:11:09) :
    A simpler and much likely explanation [without truly remarkable amazement] is that there is no causal relation [apart at the hundredth of a degree level] between irradiance [don’t use insolation as that changes a lot ~20W/m2 during a year] and temperatures.

    Well I already explained why surface temperature isn’t such a good indicator. Let’s stick with oceanic thermal expansion for a while, because there’s no argument about what causes that.

    When I last had the calculator hot on this, I seem to remember I summed that the ocean retained something like 2.5% of the insolation over the 1993-2003 period. So, if you are right, there is a pretty big terrestrial factor to be accounted for. Nir Shaviv suspects decadal changes in cloud cover, and the ISCCP data hints at multidecadal changes too. There seems to be scope for an accommodation of both our data and theories.

  199. One of the best proxies for warming would be how much SWR gets in, how much LWR gets out, and what the Net Radiation Budget is. I would caution here to get actual values, not anomalies. And don’t compare to a mean. Just use actual data.

    Here are the combinations:

    1. If more SWR gets in and less LWR gets out we get warmer.
    2. If more SWR gets in and more LWR gets out we might stay the same.
    3. If less SWR gets in and less LWR gets out we might stay the same.
    4. If less SWR gets in and more LWR gets out we get colder.
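The four cases above reduce to the sign of the net radiation budget (absorbed SWR in minus LWR out). A minimal sketch, with purely illustrative fluxes rather than measured values:

```python
# Classify the tendency implied by a net radiation budget (all in W/m^2).
def budget_tendency(swr_in, lwr_out):
    """Return 'warming', 'cooling', or 'steady' from the net budget sign."""
    net = swr_in - lwr_out
    if net > 0:
        return "warming"
    if net < 0:
        return "cooling"
    return "steady"

# Illustrative numbers only (not observations):
print(budget_tendency(240, 235))  # -> warming
print(budget_tendency(235, 240))  # -> cooling
print(budget_tendency(240, 240))  # -> steady
```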

    What are the entities that influence how much gets in and how much gets out? That depends on whether it is short wave or long wave. I have started the list. By the way, did you know that taken as a whole, clouds have a cooling effect roughly three times greater than the warming predicted for a doubling of CO2 concentration? Think what it would be if clouds were doubled.

    “The latest results from ERBE indicate that in the global mean, clouds reduce the radiative heating of the planet. This cooling is a function of season and ranges from approximately -13 to -21 Wm-2. While these values may seem small, they should be compared with the 4 Wm-2 heating predicted by a doubling of carbon dioxide concentration.”

    A. SWR
    1. reflecting particles such as water droplets in clouds

    B. LWR
    2. absorbing substances such as GHG’s, with water vapor being one

  200. Leif Svalgaard (09:53:09) :
    What I have found [empirically] is that people [like you] would gladly use a dataset [even if dubious] if the data agree with their own pet theory, and tend to spread FUD on other data.

    People like you say to-mate-o
    I say to-mart-o

    Let’s agree to disagree rather than resort to incivilities and accusations about motivation for which you have not one jot of evidence. We are both seeking scientific truth, I would hope.

    the discussion was not about the large changes in insolation, but about the minute changes in irradiance.

    The way I thought it went was that irradiance was what arrived at the top of the atmosphere, and insolation was what hit the weasel on the ground, whatever the timescale.

    By the way, how do you tell the difference between a weasel and a stoat?

    Reply: A weasel’s weaselly recognized. A stoat’s stoatally different. ~ ctm

  201. @Tallbloke

    Instead of looking for a link to TSI, it’s perhaps worth your while trying to quantify how changes in the proportions of the different wavelengths that make up TSI, from solar max to min, affect ocean heating.

    High-energy UV, which increases at solar max, penetrates the ocean quite deeply. I think X-rays and other highly energetic frequencies may also be worth a look?

  202. Pamela Gray (12:19:39) :

    1. If more SWR gets in and less LWR gets out we get warmer.

    What are the entities that influence how much gets in and how much gets out?

    Sound analysis Pamela.

    Looking at the graphs produced by Bob Tisdale of the way outgoing LWR plunges by 70 W/m^2 in the tropics when big El Niños occur, it looks to me like water vapour is the major player in town.

  203. If anomalies are the only way to go, there is a tremendous seasonal influence in the Net Radiative Budget, so 3-month data sets, like those used with ONI SST measures, would be appropriate. For example JFM, FMA, MAM, AMJ, MJJ, JJA, etc. That is because the Earth does not care to adhere to the 1st through the 30th dates. Overlapping 3-month averages do a better job of taking into account that Mother Earth isn’t on a strict 30-day calendar. If all the data sets could thus be arranged, this would make them comparable across sets. Wonder who we could get to do that for the temperature series and all the other series we like to talk about. The thing that makes this valuable is that the results don’t care how much the Sun puts out or varies. The analysis cares about how much of the Sun’s rays we actually get at the surface where it counts.

    By the way, the wrong reference I put in my above post 12:27:47 is a VERY interesting read on how satellites can screw up in more than just their drift.

  204. To further clarify terminology in case we want to discuss this:

    Positive Net Radiative Budget: Less LWR than SWR.

    Negative Net Radiative Budget: More LWR than SWR.

    However, the overall forcing should be measured as well as the difference. That is because if two negative budgets have the same difference but are greater (or lesser) in absolute value, that might make a difference in just how warm or cold we get. This kind of measure would also be useful as a double check of surface station test results. If we can ever get the eyes in the skies to do it right. When measuring LWR and SWR, the angle of the device as well as its drift makes a difference in terms of what it measures and introduces artifact that has to be calculated out of the resultant data being beamed back to Earth.

  205. I am on a learning curve here so if you see something I have posted that is in error, please correct me.

  206. The premise I believe regarding anthropogenic CO2 warming is that less LWR will escape past GHG’s. That means that while SWR may have a free pass, LWR will never have a free pass. There seems to be an opportunity to clearly and finally refute the idea that, according to AGW theory, we should be seeing less and less LWR escaping out to space in lock step with proxied CO2 increases. Can we get a graph going of CO2 (anthropogenic only if possible please) increases and LWR measures since the satellite record began?

  207. Steve, some bodies of water are cooling while others are not. The magnitude of cooling previously reported is far higher than what has now been found. Ocean cooling is normal and was predicted many years ago. The Earth is still warming right now. More concise and to the point.

  208. Jacob Mack (13:46:36) :

    “Steve, some bodies of water are cooling while others are not. The magnitude of cooling previously reported is far higher than what has now been found. Ocean cooling is normal and was predicted many years ago. The Earth is still warming right now. More concise and to the point.”

    So do you think the reported slight net cooling from 2003 to 2008 actually happened or not? How can the Earth be warming if the oceans are not accumulating heat? On what basis can you make that claim?

  209. The above author (who I believe is Monckton) appears to have used Wong’s 2002 satellite data that was then resubmitted in 2004 (Revisited…) with corrections related to satellite drift and measurement angle. I don’t know if those corrections affected the LWR data or not. But it is interesting that LWR increased rather dramatically during the 98 El Nino, even though CO2-modeled data should have demonstrated a reduction in LWR. That reminds me of another premise of AGW that says that CO2 absorption during El Ninos will make the warming trend worse. But the data says the Earth simply spat out as much of the warming as it could, ignoring CO2 on its way out to space.

  210. Jacob Mack (13:46:36) :

    On second thought, no need to reply to my above post; it is clear that would be a waste of your time and mine.

    Cheers.

  211. Steve Fitzpatrick (14:57:03) :

    So do you think the reported slight net cooling from 2003 to 2008 actually happened or not? How can the Earth be warming if the oceans are not accumulating heat?

    How did “lack of significant heat accumulation in the upper ocean” morph into “net cooling”?

  212. oms (15:39:25) :

    How did “lack of significant heat accumulation in the upper ocean” morph into “net cooling”?

    Because I was in a hurry. A better choice of words is:

    “So do you think the reported slight loss of heat from the oceans from 2003 to 2008 actually happened or not? How can the Earth be warming if the oceans are not accumulating heat?”

    Sorry if you thought my original choice of words was not clear.

  213. Steve Fitzpatrick (14:57:03) :
    “So do you think the reported slight net cooling from 2003 to 2008 actually happened or not? How can the Earth be warming if the oceans are not accumulating heat?”

    Sorry, this is not very helpful, but I don’t think there are enough data points or sufficient data accuracy to answer your question.

    The amount of heat energy in the oceans is vast and it will take a long time before any trend becomes apparent unless methods and accuracy of measurement improve.

  214. Tenuc (12:29:27) :

    @Tallbloke

    Instead of looking for a link to TSI, it’s perhaps worth your while trying to quantify how changes in the proportions of the different wavelengths that make up TSI, from solar max to min, affect ocean heating.

    High-energy UV, which increases at solar max, penetrates the ocean quite deeply. I think X-rays and other highly energetic frequencies may also be worth a look?

    I think the idea of TSI is that it encompasses all frequencies, but I’m sure you’re right in that UV varies more than good ol’ visible light does. Leif will tell you that x-ray flares, although spectacular, carry about as much energy in terrestrial terms as a falling snowflake, or some such. Anyway, I’m all ears for any energy data, but I’m trying to focus on my own line of research at the moment, so I’ll leave it to you to furnish me with some facts.

    Cheers

  215. Steve Fitzpatrick (16:04:20) :

    A better choice of words is:

    “So do you think the reported slight loss of heat from the oceans from 2003 to 2008 actually happened or not? How can the Earth be warming if the oceans are not accumulating heat?”

    The point is that from 2003 to 2008 there has not been a slight loss of heat; instead there has been a lack of observed heating, which is to say the temperature in the upper ocean has leveled off (not cooled).

  216. Tenuc (16:10:04) :

    “The amount of heat energy in the oceans is vast and it will take a long time before any trend becomes apparent unless methods and accuracy of measurement improve.”

    Argo represents an enormous improvement in measurement accuracy compared to pre-2003 data. The reported ocean heat trends in the several published papers based on Argo data are pretty clear: the best estimate trends are slightly downward or flat. This is independently confirmed by satellite altimetry/ocean mass measurements that show sea level rise over the 2003 to 2008 period was caused almost entirely by increases in ocean water content, not thermal expansion. Prior to 2003, there was significant heat accumulation in the ocean, especially between 1985 and 2002, but this trend stopped in 2003. I do not understand why you think further improvements in measurement accuracy and/or much longer measurement times are needed to prove this rather obvious change from the earlier trend. The published uncertainty estimates for ocean heat content are really not that wide; the ocean heat content is known today with more accuracy than ever before.

    The fact that this conflicts with projections of rapid warming may not please you, but the data is clear.

  217. FWIW I think that “Global OHC” is a metric which will suffer from some of the same problems as “Global Surface Temp”. I could be wrong, but even though Argo is in place now and seems to be a good system, it still leaves gaps (e.g. not many buoys in the Arctic last time I looked) and does not provide full-depth profiling etc. Historical sea temp records seem to be just as problematic as historical air temps in terms of sparsity and accuracy.

  218. oms (16:29:44) :

    “The point is that from 2003 to 2008 there has not been a slight loss of heat; instead there has been a lack of observed heating, which is to say the temperature in the upper ocean has leveled off (not cooled).”

    The above statement is clearly contrary to well-known published studies like Willis et al (2008), since reaffirmed informally by Willis, and Cazenave et al (2008). The trend in ocean heat content shows very clearly that at least for the 2003 to 2008 period, there was no net heat accumulation, and a best estimate of a slight loss in heat.

    This is clearly in conflict with many predictions of rapid heat accumulation in the ocean (such as Hansen et al, 2005).

  219. If there is stored heat in the oceans it does not come from increased CO2 warming. It comes from direct SWR. Sunlight. Why do AGW’ers keep saying that there is heat in the pipeline (i.e. the oceans) from CO2 warming? That line of reasoning in a debate is easily refuted and can be demonstrated with a 5th grade science experiment.

  220. Steve, the Earth is in fact still warming; all that a leveling off of ocean heating means is that the heat is being redistributed. As Peter Atkins points out in his textbook Physical Chemistry, heat is not energy itself, but a transfer of energy due to a temperature difference (a modified definition from an earlier quotation from a first year chemistry textbook), and this in fact is observed in recent peer reviewed papers as well, from 2008-2009. They can be found easily on Google Scholar.
    Also, reduced heating is not the same as cooling, and a few years of some cooling is not considered a trend by the World Meteorological Association, physical geography textbooks, or most climatologists either; the time period 1998-2008 is just too short, and it also does not reduce the 30 year warming trend. No more LW is escaping to space than before, and this is clear from measurements from NASA and NOAA, among others.
    Keep in mind that some heat loss or reduced heating of one area does not change the fact that the globe is warming; rather, there is a redistribution of heat around the globe since the El Nino in 1998.

  221. Pamela,
    recent satellite data is clear on LW and SW dynamics; more LW is being bounced back to Earth; take a look at infrared satellite data for the past year; it is very compelling and robust.

  222. Pamela Gray (17:12:10) :

    If there is stored heat in the oceans it does not come from increased CO2 warming. It comes from direct SWR. Sunlight.

    And as I replied to you in a different thread, the ocean does in fact receive heating from the atmosphere in some areas, so your statement is not correct.

  223. I hope you are reading your graphs correctly. There is a huge seasonal variation along with a direct and easily identifiable response to warm or cool oceans. What is compelling and robust about your information? Have you subtracted seasonal variation? Have you controlled for cloud cooling? More clouds means less SWR. With less SWR getting in, it doesn’t take much for LWR to be more significant and create a positive net balance. That does not prove a CO2 effect. Tell me more about your data.

  224. Jacob Mack (17:27:33) :

    Steve, the Earth is in fact still warming; all that a leveling off of ocean heating means is that the heat is being redistributed.

    The question is, redistributed to where? I don’t think it’s plausible that the heat flux out of the upper ocean into the abyssal ocean can suddenly jump to some level that “masks” heating.

  225. Oops. Said that wrong. With increased LWR, that means that more is getting past GHG’s. So in order for current LW and SW radiation to create a positive net balance, LWR has to be less than SWR. My bad.

  226. This website provides daily and monthly values. The absorbed data is the Net Radiation Budget (first frame) which is the difference of incoming shortwave (second frame) and outgoing longwave (third frame) radiation measured at the outer edge of the atmosphere. So far, every published study I have been able to find on the web is saying that the net budget seems to conform to natural warm and cold drivers, not AGW CO2 data. Please direct me to a study that says different using real data.

    http://www.osdpd.noaa.gov/ml/air/rad_budget.html

  227. oms, where does the ocean receive enough warmth from warm air that it can store it away instead of evaporate it?

  228. Let’s be clear on the above LWR graphic supplied at the top of the post. The hotter the color the more LWR is escaping GHG’s and making it to the outer edge of the atmosphere. Please correct me if I am wrong.

  229. Jacob Mack (17:27:33) :

    “Steve, the Earth is in fact still warming; all that a leveling off of ocean heating means is that the heat is being redistributed.”

    The ocean’s heat capacity dwarfs all other heat capacities on Earth combined. If the ocean is not currently accumulating heat, then this means the Earth is not currently warming. This is widely understood, and has been included in many well known publications, including Hansen et al (2005) for example. That you are either unaware of this equivalency or choose to ignore it, even though it is widely accepted on all sides, makes further conversation between us on global warming pointless.

    Cheers.

  230. Longwave radiation, which is the stuff reflected off the surface of the Earth, absorbed, and re-emitted by GHG’s, is a very poor way to boil water. It is much more efficient to use SWR to boil water.

  231. Steve, (continued)
    You should check out Cazenave (2009), where two reconstructions show a warming trend. Also Leuliette (2009) shows a warming trend.
    Since Cazenave uses two reconstructions, treats the error analysis more thoroughly, and is not based upon Willis or Leuliette, I will summarize Cazenave briefly:
    Cazenave uses two reconstructions which give independent estimates of ocean heat. The first uses satellite gravity measurements to determine the change in ocean mass. They then subtract the ocean-mass sea level rise from the total sea level rise to obtain the steric sea level rise. The second reconstruction uses satellite gravity measurements to calculate the change in mass of land ice and land water. The sea level rise from this contribution is subtracted from the total sea level rise to obtain another estimate of steric sea level rise. Both reconstructions show a significant warming trend. Keep in mind that sea level rise consists of 2 components: mass change due to melting ice and steric sea level rise due to changes in ocean density.
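The decomposition described above can be sketched in a few lines; the numbers below are made-up placeholders for illustration, not values from Cazenave:

```python
# Sea level rise splits into a mass component (added water from melting
# land ice) and a steric component (density change, mostly thermal
# expansion). Subtracting the gravity-derived mass component from the
# altimetry total leaves the steric part.
def steric_component(total_rise_mm_yr, mass_rise_mm_yr):
    """Steric sea level rise = total rise minus mass-driven rise."""
    return total_rise_mm_yr - mass_rise_mm_yr

# Placeholder inputs for illustration only:
print(steric_component(3.0, 2.0))  # -> 1.0 (mm/yr)
```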

    See: M Ablain, A Cazenave, G Valladeau, S … – Ocean Sci, 2009 – ocean-sci.net
    “A new assessment of the error budget of global mean sea level rate estimated by satellite altimetry over 1993-2008.
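    The two reconstructions described above reduce to simple subtractions. A minimal sketch follows, with illustrative placeholder numbers rather than values from Cazenave et al.:

```python
# Sketch of the two independent steric (thermal-expansion) estimates described
# above. All input numbers are illustrative placeholders, not published values.

def steric_from_ocean_mass(total_slr_mm_yr, ocean_mass_slr_mm_yr):
    """Reconstruction 1: steric rise = total altimetric rise minus the
    sea level equivalent of the ocean-mass change from satellite gravity."""
    return total_slr_mm_yr - ocean_mass_slr_mm_yr

def steric_from_land_sources(total_slr_mm_yr, land_ice_mm_yr, land_water_mm_yr):
    """Reconstruction 2: steric rise = total rise minus the contribution of
    land-ice and land-water mass entering the ocean."""
    return total_slr_mm_yr - (land_ice_mm_yr + land_water_mm_yr)

# Illustrative numbers only (mm/yr):
print(steric_from_ocean_mass(2.5, 2.0))         # -> 0.5
print(steric_from_land_sources(2.5, 1.5, 0.5))  # -> 0.5
```

If the two independently derived steric estimates agree, that is taken as evidence the budget is internally consistent.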

  232. Steve, see my last post in response to you; the oceans are warming, but if Hansen says that the oceans warm monotonically, then he is mistaken.
    The warming of the oceans will not be the same from year to year or decade to decade, and a pause in temperature and even an ephemeral cooling are discussed by Weart, among others, in the 1980’s and 1990’s.
    I wonder if you are ignoring some key physics and chemistry here… perhaps you need a refresher in both; let me know. I would be happy to educate you.

  233. Quote: “Longwave radiation, which is the stuff reflected off the surface of the Earth…” If this is so, why is LWR good enough to create natural warming from GHG atmospheric content? Also, no one is saying that oceans are boiling… what we are saying is that LWR increases due to GHG and holds more heat in, which in turn is transferring energy, raising over time the average kinetic energy which is temperature.
    I think it is important here, Pamela, that you see the distinction between temperature and heat. Temperature is the average kinetic energy of all motion and oscillations, while heat is all the internal energy due to motion and oscillations. So, over a period of 150 years of increasing GHG emissions, and about 200 years of research on weather and climate, we see a warming trend, where the correlation is so high, and the observations in physics, chemistry, environmental science, and climatology/meteorology are so clear, there is no doubt AGW is real. Then here we are discussing magnitude; the problem with Steve’s assumptions is that he neglects the physics and chemistry while ignoring the sheer number of peer reviewed papers (in comparison to Willis’ paper and papers like it) showing ocean warming.
    He is also ignoring the XBT errors reported on, as well as the ARGO sensor errors recently fixed. However, even if there was a temporary cooling of the oceans, this has been predicted for over 20 years, and in fact it has occurred in the past in the 30-year trend.

  234. Finally, before I get back to work, Steve I am disputing the heat capacity of the oceans, but you are neglecting temperature changes caused by El Nino, La Nina, chaotic weather systems in general, and mixing changes at the surface down to a meter or so in the oceans. Heat capacity is extremely important, but we are not discussing a closed system, and just as Angstrom made severe errors in his experiments by neglecting real world, dynamic, open systems, we must be careful here not to make the same error.

  235. Edit: “Finally, before I get back to work, Steve I am ‘NOT’ disputing the heat capacity of the oceans…” (I am also in a rush and these keys stick).

  236. Also Jacob, not a small amount of incoming SWR is used by the planet to live and is not reflected back as LWR. The net budget may or may not calculate this proxy measure. Over the long term (multiple years), the net budget will show warming or cooling tied to natural processes. At issue are the well-controlled studies that all say this: a CO2 signal CANNOT be found in the Net Radiative Budget.

  237. Pamela,

    The ocean is constantly radiating LW upward and receiving back radiation from the atmosphere (including clouds). If there is more back radiation then the net LWR flux is reduced.

    Evaporation (latent heat flux) is large but not infinite, so increased downward flux (both radiant and sensible) can matter.

    Where can it matter? Everywhere.

  238. Jacob. No. LWR should be DECREASING under your AGW scenario. LWR is measured at the outer edge of the atmosphere, AFTER it has passed by GHG land. If it is increasing (and I am thinking you typed too fast), we are cooling if the net budget is negative. Please correct me if I am wrong.

  239. Jacob Mack:

    M Ablain, A Cazenave, G Valladeau, S (2009)

    Their abstract reads:

    “A new error budget assessment of the global Mean Sea Level (MSL) determined by TOPEX/Poseidon and Jason-1 altimeter satellites between January 1993 and June 2008 is presented using last altimeter standards. We discuss all potential errors affecting the calculation of the global MSL rate. We also compare altimetry-based sea level with tide gauge measurements over the altimetric period. Applying a statistical approach, this allows us to provide a realistic error budget of the MSL rise measured by satellite altimetry. These new calculations highlight a reduction in the rate of sea level rise since 2005, by 2 mm/yr. This represents a 60% reduction compared to the 3.3 mm/yr sea level rise (glacial isostatic adjustment correction applied) measured between 1993 and 2005. Since November 2005, MSL is accurately measured by a single satellite, Jason-1. However the error analysis performed here indicates that the recent reduction in MSL rate is real.”

    This paper has nothing to do with measured ocean heat. It may have been (this is only a guess) a response to claims that the satellite altimeter data (which agree with the Argo ocean heat data which Cazenave et al had earlier reported) may have been incorrect. The authors carefully show that the measured fall in the rate of sea level rise is real.

    This article is TOTALLY irrelevant to the discussion of ocean heat, except that it reinforces the author’s earlier publication of no net increase in heat in the oceans from 2003 to 2008. I am most puzzled that you would bring it up, unless you simply did not understand what the paper says.

  240. oms, The oceans do this even at night? That would be a good trick. The oceans can only go so far in releasing their heat at night before giving up all their LWR, since there isn’t much SWR at night to reflect back out into the air, is there?

  241. Pamela,

    The oceans have some radiating temperature (and emissivity around 0.985). So, unless you think the SST is close to absolute zero when the sun goes down, then yes, it keeps doing it all night long.

    And no, black body radiation is not the same thing as reflection.

  242. tallbloke (12:23:18) :
    “What I have found [empirically] is that people [like you] would gladly use a dataset [even if dubious] if the data agree with their own pet theory, and tend to spread FUD on other data.”

    Let’s agree to disagree rather than resort to incivilities and accusations about motivation for which you have not one jot of evidence.

    Regardless of your motivation, my empirical observation still stands. You are, it seems to me, living proof of its validity. There is nothing uncivil in making an observation. And it is a normal human reaction. If you started with a dataset and it agreed nicely with your carefully reasoned theory, you would not start out by immediately distrusting the data. You would find that the agreement is strong support for both your theory and the data. Newton originally rejected his own theory [and didn’t publish it] because it disagreed with the data about the distance to the Sun. Then, when he got new measurements that made everything fit, he took that as strong support and validation. Who wouldn’t?

    Imagine then that some time after that, even newer data showed that the distance that agreed with the theory was, in fact, wrong. He would, rightfully be suspicious of the revision, same as you.

    The way I thought it went was that irradiance was what arrived at the top of the atmosphere, and insolation was what hit the weasel on the ground, whatever the timescale.
    Irradiance is what the Sun puts out per unit of time [and area]. Insolation is what the measuring device [the ‘surface’ of the Earth in this case] receives per unit of time [and area]. It is usually not a good idea to mix the two in the same paragraph [or even paper].

  243. Given that trade winds and thunderstorms don’t die down at sundown, how does evaporative heat loss from the oceans at night compare with LWR?

  244. Sandy (20:03:17), having lived many years in the deep tropical Pacific, and spent untold hours on and in and under the ocean, I can assure you that both thunderstorms and trade winds die away during the night. Dawn in the tropical oceans is typically calm and clear.

    You can see this in the excellent link given by Pamela Gray above. Take a look at paired observations of OLR from any of the satellites. During the day, OLR (outgoing longwave radiation) comes more from the cloud tops, which are cooler. At night, the clouds dissipate, and the radiation comes more from the surface, which is warmer.

    Night-time evaporation is much smaller than daytime evaporation, but still goes on. How does night time evaporation compare with OLR (either day or night)? Couldn’t say, but we can assume that OLR is greater. This is because surface radiation (not that seen by the satellites, but from the surface) depends only on temperature. At typical tropical sea surface temperatures (SSTs), this is over 300 w/m2. Evaporation is much smaller than that during the night, in part because the sea is overturning and bringing cooler water to the surface. This reduces OLR, but also reduces evaporation.

    w.

  245. Just to add to what Willis Eschenbach wrote above, the latent heat flux through the sea surface has been observed in the tropics to be of order 100-150 W/m^2 (peak around noon, as one might expect).
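For scale, the Stefan-Boltzmann emission from a tropical sea surface can be computed directly, using the ~0.985 emissivity mentioned up-thread. Note this gives the gross upward flux; the net LW loss is smaller once downwelling back radiation from the atmosphere is subtracted:

```python
# Gross longwave emission from a tropical sea surface via Stefan-Boltzmann.
# Emissivity 0.985 is the figure quoted up-thread; 300 K (~27 C) is a
# typical tropical SST (an assumption for illustration).
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.985
sst_k = 300.0            # sea surface temperature, K

gross_lw = EMISSIVITY * SIGMA * sst_k**4   # gross upward LW flux, W/m^2
print(f"Gross surface LW emission at {sst_k} K: {gross_lw:.0f} W/m^2")
```

This comes out around 450 W/m^2 gross, comfortably above the 100-150 W/m^2 latent heat flux quoted in the comment; the net LW loss after back radiation is much smaller than the gross figure.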

  246. Steve Fitzpatrick makes an all too ubiquitous error :

    Assuming solar intensity is 1366 watts/M^2, and assuming the Earth’s average albedo is ~0.3, the net solar intensity is ~239 watts/M^2, requiring a blackbody temperature of 254.802 K to balance incoming heat.

    This ignores Kirchhoff’s 150 year old insight that for a gray body, absorptivity must equal emissivity. It has the earth absorbing as a gray body of absorptivity 0.7, but emitting as a black body, 1.0. This is un-physical and produces an error of 1 − absorptivity^(1/4) to the down side, the infamous missing 33 C in the case of the earth. See http://cosy.com/views/warm.htm for a full discussion and correct implementation of Stefan-Boltzmann/Kirchhoff.

    I don’t know what to make of any calculations after that because they are working from a false assumption .

  247. Willis Eschenbach (22:30:15) :

    Evaporation is much smaller… during the night, in part because the sea is overturning and bringing cooler water to the surface. This reduces OLR, but also reduces evaporation.

    Hi Willis, I bow to your superior knowledge of swimming in the tropical ocean regularly, you lucky man, but I’d have thought this would be the other way about.

    Won’t the surface cool at night due to conduction/convection, and won’t the overturning action of the waves bring water warmed by the sun during the day to the surface as well as the cooled surface water sinking?

    I ask because it seems to me the oceans lose heat to the air all the time. By radiation and latent heat of evaporation in daytime, and by radiation and conduction/convection at night time.

    Thanks

  248. Bob Armstrong (00:07:37), “ubiquitous” means appearing or present everywhere at once, so an error could be ubiquitous, but not “all too ubiquitous”. It’s like “more unique”, you can’t get there from here.

    In any case, it appears you are confusing the albedo of the earth including clouds with the absorptivity of the earth’s surface. Total albedo is 0.3, which you incorrectly assume means a surface absorptivity of 0.7. It does not.

    The temperature of the earth without greenhouse gases depends on what assumptions you make. Steve is calculating it in the common way, using the total albedo of the system, neglecting absorption by the air.

    Note that this total albedo already includes the albedo of the surface. Since the absorptivity “e” of the surface is 1 – surface albedo, he has already included the surface absorptivity in his calculations.

    Now, you can do it your way, using the emissivity. Start with the top of atmosphere insolation 1366/4 = 341 w/m2. Remove the amount absorbed by the clouds (75 w/m2) and the atmosphere (67 w/m2). This leaves 199 w/m2 that are actually striking the surface.

    The average surface emissivity of the earth is about 0.85. Using Stefan-Boltzmann with this surface emissivity, this gives us a surface temperature of 254 K … Steve’s number again. We can verify this by noting that surface reflection = 1 − emissivity, or about 0.15. This gives about 30 w/m2 reflected by the surface, which is in agreement with observations.

    So no, Steve’s calculation is not wrong, ubiquitously or otherwise.

    Best to all,

    w.

    PS – dear friends, remember significant digits. We can’t say the earth blackbody temperature is “254.802 K”, that’s a bridge too far.
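To make the two calculations in this exchange concrete, here is a minimal sketch using only the figures quoted in the comments above (this is an illustration of the arithmetic, not either commenter's own code):

```python
# The two effective-temperature calculations discussed above:
# (a) the common top-of-atmosphere version using total planetary albedo, and
# (b) the surface version using absorbed flux and a surface emissivity.
# Input values are those quoted in the comments.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

# (a) TOA: T = ((1 - albedo) * S / 4 / sigma)^(1/4)
S = 1366.0               # solar constant, W/m^2
ALBEDO = 0.3             # total planetary albedo, clouds included
t_toa = ((1 - ALBEDO) * S / 4 / SIGMA) ** 0.25

# (b) Surface: 199 W/m^2 reaching the surface, emissivity ~0.85
absorbed = 199.0
emissivity = 0.85
t_surface = (absorbed / (emissivity * SIGMA)) ** 0.25

print(f"TOA effective temperature: {t_toa:.1f} K")
print(f"Surface-based estimate:    {t_surface:.1f} K")
```

Both routes land near 254-255 K, which is why the familiar ~33 C greenhouse gap follows either way (and why quoting it to three decimal places overstates the precision).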

  249. Leif Svalgaard (19:04:25) :

    Irradiance is what the Sun puts out per unit of time [and area]. Insolation is what the measuring device [the ‘surface’ of the Earth in this case] receives per unit of time [and area]. It is usually not a good idea to mix the two in the same paragraph [or even paper].

    I agree with that. I have been trying to get over to you two main points which involve each of these quantities, and maybe I should have separated the points better for clarity.

    Point one. Whichever of the calibrations and assessments of TSI (irradiance) you believe in, the difference in TSI levels between the first and second half of the C20th is more than enough energy to account for the empirically observed acceleration and ensuing deceleration of the thermal expansion of the oceans. Since the air doesn’t heat the ocean, it must be the sun wot done it. To understand and accept how this is necessarily true without having to adopt an unrealistically high sensitivity of the climate to TSI (irradiance) you also need to appreciate…..

    *****Gap to separate irradiance from insolation******

    ……Point two. It appears that there is in addition some kind of terrestrial amplification of changes in TSI, possibly due in large part to long term changes in cloud type, location and cover. This affects surface received insolation.

    Corollary to point two: Because of the very dynamic nature of the feedback and exchange processes going on between the biosphere, oceans, atmosphere and Earth surface, solar energy absorbed can get ‘hidden’ from the surface temperature record and redistributed in ways which make an analogy with a black body radiating spaceborne lump of coal or a snowball inadequate to the correct understanding of the Earth.

    Regardless of your motivation, my empirical observation still stands. You are, it seems to me, living proof of its validity. There is nothing uncivil in making an observation. And it is a normal human reaction. If you started with a dataset and it agreed nicely with your carefully reasoned theory, you would not start out by immediately distrusting the data. You would find that the agreement is strong support for both your theory and the data. Newton originally rejected his own theory [and didn’t publish it] because it disagreed with the data about the distance to the Sun. Then, when he got new measurements that made everything fit, he took that as strong support and validation. Who wouldn’t?
    Imagine then that some time after that, even newer data showed that the distance that agreed with the theory was, in fact, wrong. He would, rightfully be suspicious of the revision, same as you.

    I think you are mischaracterising the situation in order to cast doubt on my ability to apply the scientific method and to bolster the status of your own hypothesis, which is strongly contested by other scientists apart from those you list as converts to your cause.

    My reconstruction of SST’s from sunspot numbers and LOD variation is unquantified, but scalable and proportionate. This means it can accommodate whatever quantities and magnitudes you care to throw at it. So adjust, revise and minimise variation in TSI as much as you like, my method can cope, just as long as you don’t reduce solar variation to zero.

  250. Steve Fitzpatrick (16:39:46) :

    [Tenuc (16:10:04) :
    “The amount of heat energy in the oceans is vast and it will take a long time before any trend becomes apparent unless methods and accuracy of measurement improve.”]

    Steve Fitzpatrick
    “Argo represents an enormous improvement in measurement accuracy compared to pre-2003 data.”

    No it doesn’t. Here’s a quote from the Argo website:-
    ‘Argo has the potential, after careful data assessment, to provide salinity / temperature / pressure profiles that approach ship-based data accuracy.’

    So, at best, the accuracy of the Argo data could be approaching that of the poor quality data coming from the ship based method.

    Steve Fitzpatrick
    “The reported ocean heat trend in the several published papers based on Argo data are pretty clear: the best estimate trends are slightly downward or flat.”

    Argument fails as data poor.

    Steve Fitzpatrick
    “This is independently confirmed by satellite altimetry/ocean mass measurements that show sea level rise over the 2003 to 2008 period was caused almost entirely by increases in ocean water content, not thermal expansion. Prior to 2003, there was significant heat accumulation in the ocean, especially between 1985 and 2002, but this trend stopped in 2003.”

    Satellite altimetry/ocean mass measurements are estimates, with the usual problems of orbital decay and instrument drift associated with this data capture method. Also, gravity anomalies are difficult to remove. Again, data accuracy is in dispute.

    Steve Fitzpatrick
    “I do not understand why you think further improvements in measurement accuracy and/or much longer measurement times are needed to prove this rather obvious change from the earlier trend. The published uncertainly estimates for ocean heat content are really not that wide; the ocean heat content is known today with more accuracy that ever before.”

    The ocean is an inhomogeneous, dynamic, chaotic system – sea surface measurements are a poor proxy for the state of ocean energy at any moment in time.

    There are only about 3000 Argo buoys (only 1 buoy per roughly 30,000 square miles of ocean – about the size of the Czech Republic). Is it logical to think that with ocean currents on all scales – tidal effects – local variations in insolation due to cloud cover – variations in salinity – effects of local storms – effect of marine life…etc, that total energy estimates are accurate enough to measure a small short-term trend? I don’t think so.

    To get better numbers for ocean energy trends, we need better ‘ARGO type’ buoys, each covering 25 square miles of ocean, along with a system which can measure deep ocean currents on a continuous basis.

    The data is far from clear.
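For what it's worth, the per-float arithmetic can be checked directly. With round-number assumptions for ocean area and float count, it comes out nearer 46,000 square miles per float; the ~30,000 square mile figure quoted above is close to the area of the Czech Republic itself rather than the per-float share:

```python
# Quick check of the per-float coverage figure. Ocean area ~3.61e8 km^2 and
# ~3000 active floats are round-number assumptions, not official counts.
OCEAN_AREA_KM2 = 3.61e8
N_FLOATS = 3000
KM2_PER_SQMI = 2.58999   # square kilometres per square mile

area_per_float_km2 = OCEAN_AREA_KM2 / N_FLOATS
area_per_float_sqmi = area_per_float_km2 / KM2_PER_SQMI
print(f"~{area_per_float_km2:,.0f} km^2 (~{area_per_float_sqmi:,.0f} sq mi) per float")
```

Either way, each float samples an area at least the size of the Czech Republic, which is the substance of the coverage objection.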

  251. tallbloke, yes, living in the tropical Pacific has been berry berry good to me.

    You are correct that the ocean loses heat all the time. However, it loses more during the day than the night.

    The ocean is opposite to the atmosphere, in that the atmosphere overturns thermally during the day and is thermally stratified at night. The ocean, on the other hand, is stratified during the day and overturns at night.

    During the day, the uppermost layer of the ocean warms from sun and downwelling longwave radiation (DLR). The heated water rises because it is less dense, and the warmest water is at the very top. This encourages both evaporation and radiation (OLR).

    During the night, you are correct that the surface continues to lose energy through evaporation and radiation. While it still is getting DLR, the lack of sunshine makes for a net heat loss, so it cools overall. As soon as it becomes cooler than the underlying layers, however, the top layer starts sinking, and cooler water is brought to the surface by thermal convection.

    Hope this clarifies my often vague writing …

    w.

  252. Tenuc (16:10:04), you say:

    The amount of heat energy in the oceans is vast and it will take a long time before any trend becomes apparent unless methods and accuracy of measurement improve.

    and

    There are only about only c3000 Argo bouys [sic] (only 1 bouy [sic] per c30,000 square miles of ocean – about the size of the Czech Republic). Is it logical to think that with ocean currents on all scales – tidal effects – local variations in insolation due to cloud cover – variations in salinity – effects of local storms – effect of marine life…etc, that total energy estimates are accurate enough to measure a small short-term trend? I don’t think so.

    Well … the people doing the research in the field like Josh Willis and Anny Cazenave would beg to differ. They analyze and write about those trends, and find them statistically significant. If you disagree that they are producing statistically significant results, please point to their errors, as a blanket denial doesn’t get you much traction on a scientific site.

    The number of ARGO floats (which you decry as inadequate) is on the order of three times as large as the number of ground temperature stations. Since the ocean is a bit more than twice as large as the land, this means ocean coverage is better than ground station coverage.

    In addition, the ARGO floats measure temperature in three dimensions rather than two. They measure the vertical temperature profile of the ocean, where ground stations only measure air temperature at a single point.

    And while ARGO floats are not error-free (nothing is), they are not subject to the siting errors and barbecues and air-conditioner vent and UHI errors which plague the ground station record.

    As a result, at present we are getting more, and much more accurate, information about the temperature of the ocean from the ARGO system than about the temperature of the atmosphere from the ground stations.

    Note also that, unlike the atmosphere, the ocean temperature can be independently verified in three ways: sea level height, length of the day, and satellites.

    Sea level goes up and down with temperature. The recent flat-lining of the atmospheric temperature has been matched by the flat-lining of the oceanic temperature. This is verified by the flat-lining of the sea level height.

    In addition, as water warms it expands. This makes the equator bulge outwards. Like an ice skater extending her arms, this slows the rotation of the earth. This is used as an indirect measurement of ocean temperature.

    Finally, because of the homogeneous nature of water compared with land, we can measure SST by satellite. Once again, this provides us with an independent confirmation of the ARGOS system.

    Finally, you seem to be confounding two types of ship-based systems when you say:

    No it doesn’t. Here’s a quote from the Argo website:-
    ‘Argo has the potential, after careful data assessment, to provide salinity / temperature / pressure profiles that approach ship-based data accuracy.’

    The type of ship-based system they are referring to is not the “measure the sea temperature in the cooling water inlet” system used to construct global temperature estimates. This system is subject to a host of errors, such as warming due to the inlet pipe and the huge variation in the depth of inlet pipes in various ships.

    They are referring to the system of scientific point-sampling of vertical temperature profiles in the ocean. These are done by lowering a thermometer (actually a thermistor) into the ocean and recording the temperature at each depth as they go down. This system is extremely accurate, and the fact that the ARGOS system can approach this accuracy is a tribute to the ARGOS system.

    w.

  253. TENUC: You write all of this crap about unreliable ARGO data but are perfectly happy to accept ground based temperature data despite a continually changing number of stations, location changes, changes in UHI, thermometer quality changes, different methods of recording, etc..? Unbelievable!

  254. Tenuc (04:36:40) :

    I am surprised that you think the Argo data (and apparently satellite altimetry and/or satellite ocean mass data as well) are very uncertain. As far as I can tell, this uncertainty is based on your personal evaluation of the data, rather than on a series of published studies which show why the Argo and satellite data are uncertain. If your analysis clearly shows high uncertainty in these data, then I urge you to consider publishing, or at the very least consider offering a post to Real Climate.

    However, after reading your last and several earlier comments again, I wonder if you would assign the same level of doubt to these measurements had they shown a rapid accumulation of heat in the world’s oceans rather than a slight loss of heat. The researchers and groups of researchers who have actually used the Argo and satellite data to calculate the evolution of total ocean heat content since 2003 appear to have a somewhat different view on the quality of this data, since the uncertainty estimates in their published studies have been consistently low.

    Since my curve fit model predicts the possibility of radiatively forced warming over the next couple of decades, and since I believe the existing studies have accurately evaluated the reliability of the Argo data, then just for fun, I want to go way out on a limb with a bold prediction:

    I predict the Argo based slight decline in ocean heat content trend through 2008 will reverse and turn positive if there is a significant increase in average surface temperature.

  255. Michael Jennings (09:45:02) :
    TENUC: You write all of this crap about unreliable ARGO data but are perfectly happy to accept ground based temperature data despite a continually changing number of stations, location changes, changes in UHI, thermometer quality changes, different methods of recording, etc..? Unbelievable!

    Leif’s law: “if data support your pet idea, they are good, otherwise bad or uncertain or doctored or …”

  256. Re: comment up thread – I stand corrected re: Argo.

    I had it in mind it only covered the top 300m but following on from comments above just checked the Argo site where it shows top 2000m are profiled.

  257. Leif Svalgaard (10:48:46) :

    Leif’s law: “if data support your pet idea, they are good, otherwise bad or uncertain or doctored or …”

    ——————————–

    And who’s Leif?

    [Reply: I suggest you do a search for “Leif Svalgaard”. You might be surprised. ~dbstealey, mod.]

  258. Willis Eschenbach (08:21:54) :

    During the night, you are correct that the surface continues to lose energy through evaporation and radiation. While it still is getting DLR, the lack of sunshine makes for a net heat loss, so it cools overall. As soon as it becomes cooler than the underlying layers, however, the top layer starts sinking, and cooler water is brought to the surface by thermal convection.

    Hope this clarifies my often vague writing …

    Willis, thank you. I am with you right up to the last line:
    “and cooler water is brought to the surface by thermal convection.”

    Surely thermal convection will bring warmer water to the surface, which then cools by convection to the air and radiation of heat, and then sinks to be replaced by warmer waters again ?

  259. tallbloke (13:38:46), again my apologies for lack of clarity. You are correct when you say:

    Surely thermal convection will bring warmer water to the surface, which then cools by convection to the air and radiation of heat, and then sinks to be replaced by warmer waters again ?

    At the end of the day, the nearer to the surface, the warmer the water. The top layer cools and sinks. When I said that “cooler water is brought to the surface”, I meant cooler than the top layer before it started cooling.

    As this process continues, the water reaching to the surface is cooler and cooler as the night progresses, which is what I was trying to say.

    w.

  260. ” As soon as it becomes cooler than the underlying layers, however, the top layer starts sinking, and cooler water is brought to the surface by thermal convection.”

    Not sure that follows. In gliding one gets a layer of hot air close to the ground which then needs a trigger to form a thermal. So a layer of cool surface water especially if only a degree or two cooler and 10 x wave amplitude thick would probably need a trigger to form reverse thermals. Whether this would clear the temp. inversion before the next days solar heating seems unclear.

  261. Willis Eschenbach (08:58:20) :
    Michael Jennings (09:45:02) :
    Steve Fitzpatrick (10:14:25) :

    You may be surprised to hear that I am a skeptic regarding AGW, and if asked to vote on the balance of evidence and data I have seen to date, would have to come down on the side of natural rather than man-made climate change.

    However, I think the biggest problem facing climate research is lack of good quality data at sufficient granularity to reach meaningful conclusions. I also feel that many climatologists try to break down bits of our dynamic chaotic system and treat them as individual linear systems instead of taking the necessary holistic approach.

    Regarding land based temperature, I agree that the data is even worse than for the sea. I’m not even sure that trying to use average global surface temperature has any real meaning, and some other more useful KPI’s should be developed to see how climate is progressing.

  262. Pamela, much SWR is absorbed by land regions… some remains, yes… LWR is trapped by GHG’s, and less SWR is reflected back as ice cover melts… to keep it out of contention, say the Arctic, as opposed to the Antarctic, where there is no controversy over the quickly melting ice and increases in Methane loss there.
    As SWR and LWR accumulate in conjunction, the Earth holds in more transfer energy, which is defined as heat, and the global mean temperatures go up.

  263. So, Steve,
    if it is not melting ice or thermal expansion causing sea level rise, what is it? Clearly you are not seeing the connection between sea level rise and heat energy transfer. Also check out the other author I pointed out and the recent corrections made by NASA showing ocean heat content still going up.

  264. tallbloke (13:38:46) :

    Perhaps Willis should have said, “formerly cooler” to avoid confusion

    DaveE

  265. Sandy (14:21:26), thanks for your comment, you raise an interesting issue. I had said:

    As soon as it becomes cooler than the underlying layers, however, the top layer starts sinking, and [formerly, as Dave E. says] cooler water is brought to the surface by thermal convection.

    You replied:

    Not sure that follows. In gliding one gets a layer of hot air close to the ground which then needs a trigger to form a thermal. So a layer of cool surface water especially if only a degree or two cooler and 10 x wave amplitude thick would probably need a trigger to form reverse thermals. Whether this would clear the temp. inversion before the next days solar heating seems unclear.

    Well, I guess my advantage in this arena is that I’ve done a whole lot of day diving and night diving in the tropical ocean, plus I fly ultralights … so I can testify to both phenomena from personal experience.

    I suspect that the difference in part is the method of heating. In the atmosphere, at dawn the earth’s surface warms up. This in turn must heat up the overlying fluid (air). Only when that happens does the daily overturning start to occur.

    In the ocean, on the other hand, at sundown the fluid itself is cooling. It is cooled directly by radiation and evaporation from the top surface, rather than indirectly as is the case with the atmosphere. This means that, unlike in the atmosphere where heating starts only after dawn, the ocean surface cooling can actually start before sunset. And as soon as an appreciable amount of the surface cools, it starts to sink.

    In part because of the much greater viscosity of water compared to air, this rapidly organizes into smaller areas of descending cooler water, in the middle of larger areas of more slowly ascending warmer water. In the atmosphere, this process of self-organized circulation takes longer to form.

    There is often a clear sensible temperature difference involved between the rising and sinking water. The temperature difference, as well as the fact that the warmer water is rising and the cooler sinking, makes them clearly perceptible when diving at night. It is most pronounced in the morning before dawn, as you might expect.

    My best to you,

    w.

  266. Jacob Mack (15:05:53), you say:

    So, Steve,
    if it is not melting ice or thermal expansion causing sea level rise, what is it? Clearly you are not seeing the connection between sea level rise and heat energy transfer. Also check out the other author I pointed out and the recent corrections made by NASA showing ocean heat content still going up.

    You have put your finger on one of the unsolved mysteries of climate science. The various measurements of sea level rise do not agree with the total of the known sources of sea level rise. Why? Well, not to put too fine a point on it … we don’t know.

    Or as Willis et al. put it:

    Analysis of ocean temperature and salinity data from profiling floats along with satellite measurements of sea surface height and the time variable gravity field are used to investigate the causes of global mean sea level rise between mid-2003 and mid-2007. The observed interannual and seasonal fluctuations in sea level can be explained as the sum of a mass component and a steric (or density related) component to within the error bounds of each observing system. During most of 2005, seasonally adjusted sea level was approximately 5 mm higher than in 2004 due primarily to a sudden increase in ocean mass in late 2004 and early 2005, with a negligible contribution from steric variability. Despite excellent agreement of seasonal and interannual sea level variability, the 4-year trends do not agree suggesting that systematic long-period errors remain in one or more of these observing systems.

  267. Jacob Mack (15:05:53) :

    “So, Steve,
    if it is not melting ice or thermal expansion causing sea level rise, what is it?”

    I never suggested that there had not been sea level rise since 2003; the satellite altimetry data is pretty clear that there has been.

    The rate of sea level rise has slowed, but sea level continues to rise. The best evidence (from ocean mass determination via satellite) is that melting glacial ice (not floating ice, which would have little effect on sea level) has contributed nearly all of the observed sea level rise in recent years (Cazenave et al). The best available evidence is that the oceans have gained mass, but not accumulated any heat since about 2003, and may have lost a bit. I am astounded that you continue to doubt this in the face of all the published studies to the contrary.

  268. Steve Fitzpatrick
    “I am astounded that you continue to doubt this in the face of all the published studies to the contrary.”
    I am equally astounded you are not aware of all the peer-reviewed literature showing that warming is causing the glacial ice to melt in the first place. If the oceans slightly cool due to more glacial ice melt, this would not show that AGW is somehow in decline, or that there is a cooling either.

  269. Steve Fitzpatrick (18:47:36) :
    The best available evidence is that the oceans have gained mass, but not accumulated any heat since about 2003
    No wonder, as Nasif claims that heat cannot be stored, hence not accumulated.

  270. We are more-or-less at equilibrium. Maybe a 0.1 watt here and a 0.1 watt there is still being absorbed into the heat sinks of the ocean and the ice-sheets, but no increasing ocean heat content means we are at equilibrium (right now).

    What does that mean? It means the CO2 doubling sensitivity is less than 3.0C. It is probably only about 1.0C to 1.5C per doubling.

    We can actually prove this mathematically, something the climate scientists have never done so far. There is no Stefan-Boltzmann-like equation for the greenhouse effect. It is still just a guess.

    So let’s do some simple math for them.

    Let’s say the 33C greenhouse effect corresponds to 280 ppm CO2 (or 32.5C, if you think it was 0.5C cooler at 280 ppm in 1750).

    11 halvings of 280 ppm results in a number that is effectively zero. 11 times 3C per doubling/halving and the Earth’s temperature is back to 255K or -18C and the greenhouse effect is gone.

    Mathematically, the doubling/halving number cannot be more than 3C. 11 halvings and the greenhouse effect is zero. In fact, the 3.0C per doubling builds in the assumption that water vapour is 100% controlled by GHGs as well. No GHGs, no water vapour. All/100% of the greenhouse effect is controlled by CO2/GHGs.

    Now let’s take all the CO2/GHGs out of the atmosphere, will there still be some water vapour left? Most definitely. Even at the South Pole at -50C and 3 kms high, there is still some water vapour.

    Will there still be a latent heat effect provided by the non-GHG atmosphere constituents of nitrogen and oxygen? Most definitely. It might be small, but the energy from the Earth’s surface is not just going to fly directly off to space in a nano-second without any GHGs at all.

    Will the Earth’s surface temperature be higher than 255K? How much higher? Well, it certainly would be since the small amount of water vapour remaining and the latent heat of the non-GHG atmosphere would provide a greater relative impact at that temperature level and the greenhouse effect remaining would still be 5C to 15C (or let’s say 15% to 40% of the greenhouse effect would still remain).

    Therefore, mathematically, the CO2/GHG doubling is less than 3.0C per doubling (and certainly not more than 3.0C per doubling) and it is probably 1.5C or so per doubling.
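    For readers who want to follow the arithmetic, the halvings argument above reduces to a few lines of Python (the 33 C total and the 3 C per doubling/halving are the commenter's assumptions, not established values):

```python
# Sketch of the "halvings" argument above: if every halving of CO2 removed
# a fixed 3 C, the whole 33 C greenhouse effect would be used up after
# 33 / 3 = 11 halvings, by which point CO2 is effectively zero.
# (Both the 33 C and 3 C figures are the commenter's assumptions.)

ghe_total_c = 33.0       # claimed total greenhouse effect, C
per_halving_c = 3.0      # assumed change per doubling/halving, C
c0_ppm = 280.0           # pre-industrial CO2 concentration, ppm

halvings_to_zero = ghe_total_c / per_halving_c   # = 11
ppm_remaining = c0_ppm / 2 ** halvings_to_zero

print(f"halvings to exhaust the effect: {halvings_to_zero:.0f}")
print(f"CO2 left after that many halvings: {ppm_remaining:.3f} ppm")  # ~0.137 ppm
```

    The sketch only shows that the linear-in-halvings bookkeeping is internally consistent; it says nothing about whether a constant per-doubling response is physically right, which is exactly the point disputed in the replies below.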

    Math is the only pure science. It does not lie.

  271. Bill Illis (21:42:17), your math is excellent … but there is a flaw.

    An example I have used before to illustrate the flaw is this: suppose we take a 75 kg block of, say, copper, and stick one end into hot water. After a while, the heat is transferred to the other end of the copper block. I propose a theory that if you stick one end of something into hot water, the other end will heat up at a certain rate. Simple math, no problem. I try a block of wood. I notice that it works just the same, except it heats up more slowly. I try a steel block, same thing, it heats up, just takes a different time.

    Finally, having proven my theory, I decide to test it on the 75 kg of myself. I put my feet into the hot water, and I wait for my head to heat up … and wait … and wait.

    The moral is, complex systems don’t obey simple math.

    In addition, there is no reason to believe that the effect of say 1 w/m2 of additional forcing will be the same at different temperatures. Suppose we start with a world just like ours, but somehow cooled to just above freezing. As it begins to warm up, initially the added energy causes a large temperature increase.

    But as temperatures warm, the radiation goes up by T^4 and the evaporation goes up by T^2. So each additional unit of energy causes less and less temperature increase.

    Nor is this all. As the temperature warms, at some point cumulus clouds begin to form in the tropics. As the temperature increases, the cumulus clouds increase, reflecting more and more sunlight away from the surface.

    Finally, at a higher temperature, cumulonimbus (thunderstorm) clouds form. These cool the surface through a variety of mechanisms.

    The result of all of these (increasing radiation, increasing evaporation, increasing cumulus, increasing cumulonimbus) put a limit on how warm the earth gets.

    The result of this is that the climate sensitivity (the change in temperature from a given change in forcing) is not constant with temperature. It decreases with increasing temperature until it reaches a point where another watt per square metre is totally counterbalanced by increasing radiation, evaporation, cumulus, and thunderstorms.
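    The radiation part of this declining-sensitivity point can be illustrated with the bare Stefan-Boltzmann relation alone (a minimal sketch; the evaporation, cumulus, and thunderstorm effects discussed above are not modelled):

```python
# For a bare blackbody, F = sigma * T^4, so the warming per extra W/m^2
# is dT/dF = 1 / (4 * sigma * T^3): it shrinks as temperature rises.
# This shows only the radiation term; evaporation and clouds would
# shrink the response further.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def blackbody_sensitivity(t_kelvin):
    """K of warming per additional W/m^2 for a blackbody at t_kelvin."""
    return 1.0 / (4.0 * SIGMA * t_kelvin ** 3)

for t in (255.0, 288.0, 310.0):
    print(f"T = {t:.0f} K -> {blackbody_sensitivity(t):.3f} K per W/m^2")
```

    Even in this simplest case, the same extra watt buys noticeably less warming at 310 K than at 255 K.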

    Sorry to mess up your math like that, but the nature of the system is that for the reasons above, the system does not respond linearly to forcing increases.

    w.

  272. Willis Eschenbach (14:08:57) :

    At the end of the day, the nearer to the surface, the warmer the water. The top layer cools and sinks. When I said that “cooler water is brought to the surface”, I meant cooler than the top layer before it started cooling.

    As this process continues, the water reaching to the surface is cooler and cooler as the night progresses, which is what I was trying to say.

    w.

    Thanks again Willis, that all makes sense to me now.

  273. Willis,

    I agree with you. I posted charts above exactly along those lines. I’ve done the math along those lines as well (give or take the cloud and evaporation impacts).

    But the science of the greenhouse effect, what little there is, is not really based on the stefan-boltzmann world. It is based on what Steve McIntyre calls “the higher the colder” argument.

    Basically, the Earth’s surface is 33C higher than it should be from the impact of the Sun (288K – 255K). It is colder as one goes higher up in the atmosphere, and there is a point where the atmosphere is radiating back to space at the solar equilibrium level of 255K (there are actually four different levels where that is happening, but they seem to ignore that and just use the one at the tropopause).

    Below that level, we are not operating in a Stefan-Boltzmann world anymore; we are working in an atmospheric physics radiation transfer world with GHGs (and, more accurately, GHGs with 100% impact on water vapour) controlling the energy/radiation flows from the tropopause and the surface. We are operating in a world of cumulonimbus clouds and different layers of the atmosphere having different temperatures and climates.

    Various arm-wavings later and they believe only a climate model can simulate the atmospheric energy/radiation flows properly and a guess is made that 3.0C per doubling of CO2/GHG controls how the 33C greenhouse effect occurs.

    It seems strange to me that we have such successful formulae like the Stefan-Boltzmann series but we have to switch to a climate model when the atmosphere is warmer than 255K.

    Let’s say the Stefan-Boltzmann world still governs what is going on. Then your statements about the declining temperature impact are correct, and so are my charts. There will be very little warming beyond where we are now. 13 extra watts of GHGs are required for the next 3.0C increase, which is an impossible amount. But part of your comments say evaporation and cumulonimbus clouds are then in control, and we are back to a world of climate models again.

  274. Jacob Mack (19:27:34) :

    “I am equally astounded you are not aware of all the peer review literature showing that warming is causing the glacial ice to melt in the first place. If the oceans slightly cool due to more glacial ice melt, this would not show that AGW is somehow in decline, or that there is a cooling either.”

    I am quite aware of the literature. The average global temperature for certain increased in the period between the late 1970’s and about 2001. I do not dispute this. The average global temperature also increased between about 1915 and 1945, and declined slightly between 1945 and the late 1970’s.

    Between 2003 and 2008, there was clearly a net increase in ocean mass, due mainly to net melting of glaciers, as shown by the measured sea level increase and measured increase in ocean mass during that period, consistent with a measured lack of heat accumulation in the oceans. The measured increase in ocean mass is also consistent with measured declines in glacial volumes. The cooling associated with melting of glaciers is extremely small, and does not explain the lack of ocean heat accumulation between 2003 and 2008. You can do the calculation yourself: how much change in average temperature takes place when you melt ~10 mm of ice in the top 700 meters of ocean? Answer: about 0.0011C, which does not significantly change the Argo results for 2003 to 2008.
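    That back-of-envelope check can be reproduced with standard physical constants (a sketch; the ~10 mm melt figure is the comment's assumption):

```python
# How much does melting ~10 mm (water equivalent) of glacial ice cool the
# top 700 m of ocean? Uses standard constants; the 10 mm value is the
# comment's assumption.

L_FUSION = 3.34e5    # J/kg, latent heat of fusion of ice
CP_WATER = 4186.0    # J/(kg K), specific heat of water (approx.)
RHO_WATER = 1000.0   # kg/m^3 (approx.)

melt_m = 0.010       # ~10 mm of melt, water equivalent, per m^2 of ocean

heat_to_melt = melt_m * RHO_WATER * L_FUSION       # J per m^2 of ocean
column_capacity = 700.0 * RHO_WATER * CP_WATER     # J per m^2 per K

cooling = heat_to_melt / column_capacity
print(f"cooling of top 700 m: {cooling:.4f} C")    # ~0.0011 C
```

    The result matches the ~0.0011 C figure quoted above: far too small to explain the flat Argo heat-content record.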

    That the average global temperature produces a net melting of glaciers has nothing at all to do with the question of whether or not the oceans are accumulating heat. If the oceans did not accumulate heat from 2003 to 2008, then the average global temperature could not have been increasing during that period, since any increase in average global temperature requires that the ocean be accumulating heat.

    I think this is a pretty well known and broadly accepted concept on all sides of the AGW issue. Even James Hansen (along with many others) has said the same in published papers. I remain very puzzled by your position on this.

  275. Leif Svalgaard (21:03:32) :

    “Steve Fitzpatrick (18:47:36) :
    The best available evidence is that the oceans have gained mass, but not accumulated any heat since about 2003
    No wonder, as Nasif claims that heat cannot be stored, hence not accumulated.”

    I do hope Nasif does not become involved. Semantics matter not at all.

    BTW, did you see the questions I asked earlier?

    **************************************************

    Steve Fitzpatrick (07:41:12) :

    Leif Svalgaard (18:58:51) :

    “The peak to valley change depends on the size of the solar cycle and varies by a factor or 3 or more. A good median value is 0.1% of TSI or ~1.4 W/m2, for some cycles larger, for some smaller.”

    Does the mean TSI over a whole cycle remain more or less constant from cycle to cycle, or does the mean TSI for a whole cycle depend on the level of solar activity. For example, if the peak of one cycle has a sunspot number of 75, and the peak of the next 150, would the average TSI over each cycle be the same, or would the cycle with lower peak activity have a lower average TSI?

    “A simpler calculation is that the solar signal would then be 1/4 of 0.1% of the effective temperature or 0.025% of 288K = 0.07K [or C].”

    Does the climate sensitivity not enter into the expected temperature change? If the climate sensitivity were 0.75 degree per watt, then a top of atmosphere variation of 1.4 watts per M^2 would give about 0.25 * 0.7 * 1.4 * 0.75 = 0.184K change, not close to the 0.07K you note above. It seems to me that the above calculation implicitly assumes a sensitivity of about 0.21K per watt/M^2. Am I missing something?

  276. Steve Fitzpatrick (09:59:01) :
    Does the mean TSI over a whole cycle remain more or less constant from cycle to cycle, or does the mean TSI for a whole cycle depend on the level of solar activity.
    To first order, TSI [W/m2] = 1365 + SSN/100. The constant 1365 is the nominal value; the real value might be about 1360.5, but the difference matters not.

    “A simpler calculation is that the solar signal would then be 1/4 of 0.1% of the effective temperature or 0.025% of 288K = 0.07K [or C].”

    Does the climate sensitivity not enter into the expected temperature change?

    My assumption is that what goes in must come out [sooner or later], so the radiation balance does not involve climate sensitivity. If you want to introduce such an animal, then the question is whether the sensitivity is constant in time [I think not – might depend on land-sea distribution, for example] or varies. To a certain extent I have taken some sensitivity into account by using 288K as the temperature rather than 255K.
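    For readers following along, the two rules of thumb in this exchange reduce to one-liners (both are the comment's stated first-order approximations, not precise solar physics):

```python
# Two first-order rules of thumb from the exchange above, as stated there.

def tsi_from_ssn(ssn):
    """TSI in W/m^2 from sunspot number, to first order: 1365 + SSN/100."""
    return 1365.0 + ssn / 100.0

def radiative_temp_signal(frac_tsi_change, t_kelvin=288.0):
    """Temperature signal from pure radiative balance: dT/T = (1/4) dS/S."""
    return 0.25 * frac_tsi_change * t_kelvin

print(f"TSI at SSN 150: {tsi_from_ssn(150):.1f} W/m^2")
print(f"signal for a 0.1% TSI swing: {radiative_temp_signal(0.001):.3f} K")  # ~0.07 K
```

    The 0.07 K figure quoted above follows directly from the second rule; a larger answer, as the earlier question notes, requires assuming some climate sensitivity beyond bare radiative balance.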

  277. tallbloke (09:14:32) :
    If you’re right, it’s amazing that such a small increase in TSI could cause such an acceleration in the thermal expansion of the oceans,
    ARGO [and other recent data] finds that the ~90 W/m2 annual swing in TOA TSI corresponds to ~7 mm of steric sea-level change [that due to temperature and density variation]. This means a sensitivity of 7/90 = 0.08 mm/[W/m2]. So even if TSI increased 1 W/m2 over a century [which I don’t think, but let’s assume that for the argument], then the total sea-level rise over a century would be 0.08 mm, right?
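    Written out, the arithmetic in that comment is just a ratio (the 90 W/m2 and 7 mm figures are taken from the comment as given):

```python
# The steric-response arithmetic from the comment above, as stated there.
annual_swing_wm2 = 90.0   # ~annual TOA TSI swing, W/m^2
steric_response_mm = 7.0  # corresponding annual steric sea-level change, mm

sens_mm_per_wm2 = steric_response_mm / annual_swing_wm2
print(f"sensitivity: {sens_mm_per_wm2:.3f} mm per W/m^2")         # ~0.078
print(f"for a 1 W/m^2 change: {1.0 * sens_mm_per_wm2:.2f} mm")    # ~0.08 mm
```

    Whether a sustained century-long forcing really scales the same way as an annual swing is the contested step, not the division itself.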

  278. Jacob, you said,
    “Pamela, much SWR is absorbed by land regions… some remains, yes… LWR is trapped by GHG’s, and less SWR is reflected back as ice cover melts… to keep it out of contention, say the Artic (sic), as opposed to the Antartic, where there is no controversy over the quickly meltng (sic) ice and increases in Methane loss there. As SWR and LWR accumulate in conjunction the Earth holds in more transfer energy which is defined as heat and the global mean temperatures go up.”

    70 to 72% of the Earth is covered with oceans. SWR reaches into water much better than it does land and land emits LWR much faster than water, which is one of the reasons why land cools faster. Therefore the majority of SWR that gets through the atmosphere without being reflected back into space sinks into the oceans.

    LWR is not necessarily trapped by GHG’s. If it was, we would not be able to measure it at the outer edge of the atmosphere.

    Given the degree of ice melt, just how much do you think SWR reflection has been reduced? It can be calculated. Determine insolation (the amount of SWR that gets through the atmosphere) for Summer over the Arctic. Obtain the calculation for snow reflection of SWR. Calculate the change in amount reflected by the greatest Summer ice cover and the least Summer ice cover. What do you think the % decrease will be and is that significant in terms of climate?

    Methane monitoring is not picking up an increase in Arctic or even subarctic methane increases. Where did you get your data from?

    Based on incoming SWR and outgoing LWR, there has been no measured accumulation of either type inside our GHG blanket. Where did you get your data from?

  279. Pamela Gray (13:50:16) :

    “Where did you get your data from?”

    Pretty clear that he pulled it out of……. um… the air.

  280. Pamela, my data comes from NASA GISS, NOAA, Nature, and Scientific American 3.0, where Kathy was actually there measuring methane levels; just look it up. Pick up the summer edition of 3.0, or go to the Scientific American site. I forget her last name, but there are other researchers as well who show first hand heightened methane emissions due to melting snow and ice releasing from plants in the soil. This is 100% fact, not for debate.

  281. Dear friends, please let us play nicely. Although Jacob may not agree with us, he has been responsive to requests for data, and has worked at supporting his arguments. If we hope to convince people, it is counterproductive to antagonize them.

  282. There has been an uptick in Methane levels after it looked like they might actually stabilize. The highest Methane numbers seem to be in the high Arctic (where the highest annual variation in CO2 levels is also seen.)

    It still looks like Methane is following a logarithmic trendline and will stabilize soon. The oil and gas industry used to be the biggest contributors to the increase (not rice paddies), and now that Natural Gas (which is 98% Methane) has a higher value, they are plugging up the leaks and not burning it inefficiently/just releasing it to the atmosphere as they once did.

  283. Jacob, citing Scientific American on a scientific blog is about a half step above citing the National Enquirer (a tabloid rag filled with aliens and Elvis sightings, for those non-Americans). You’d get a lot more traction if you took the time to look up the original report, which seems to be this one here. Then ignore all of the words and think about the numbers. Find a long-term series to compare them to. Then give us your thoughts about the numbers. Not what others have said. What you think the numbers mean.

    Yes, methane did increase in the arctic last year, at least in Svalbard. It went up by a stunning 0.6% … EVERYBODY PANIC is how SciAm treats it. Me, I look at the longer term records. Atmospheric methane sources and sinks are poorly understood. After a decade of stability, world methane levels also rose after 2006 … by 0.6%.

    Why? Well, we don’t know. All climate scientists should practice those words several times a day. We don’t know why methane went up by 0.6%, either in the Arctic or globally. Now, if you want to read some dark future into that, it is your right. Me … no thanks. We don’t have enough data or understanding to say much of anything about that.

  284. Willis Eschenbach:

    You are right, I should not have commented about Jacob’s data sources, even if he does not seem willing to critically evaluate those sources.

    You are right too about Scientific American. 30 years ago it was actually a pretty good magazine that covered real science, and contained decent summary articles from a range of fields, but now it is just a dumbed-down mouthpiece for a particular political viewpoint. I haven’t purchased a copy in over 10 years. A little sad, when you consider that it has been around since well before 1900, but now is basically worthless.

  285. Steve, thank you for your gentlemanly response. We can only sway public opinion if we treat it seriously.

    Losing Scientific American, the bible of my childhood in the 1950s, was a tragedy. As you say, at this point it is just a publicity-driven rag.

    w.

  286. http://eprints.soton.ac.uk/64607/

    http://www.sciencedaily.com/releases/2008/12/081217203407.htm

    “Dirk Wagner and Susanne Liebner, Research Unit Potsdam, Alfred Wegener Institute for Polar and Marine Research, Telegrafenberg A45, 14473 Potsdam, Germany:

    The Arctic plays a key role in the Earth’s climate system, because global warming is predicted to be most pronounced at high latitudes, and one third of the global carbon pool is stored in ecosystems of the northern latitudes. The degradation of permafrost and the associated intensified release of methane, a climate-relevant trace gas, represent potential environmental hazards. The microorganisms driving methane production and oxidation in Arctic permafrost soils have remained poorly investigated. Their population structure and reaction to environmental change is largely unknown, which means that also an important part of the process knowledge on methane fluxes in permafrost ecosystems is far from completely understood. This hampers prediction of the effects of climate warming on Arctic methane fluxes. Further research on the stability of the methane cycling communities is therefore highly important for understanding the effects of a warming Arctic on the global climate. This review first examines the methane cycle in permafrost soils and the involved microorganisms. It then describes some aspects of the potential impact of global warming on the methanogenic and methanotrophic communities.”
    http://www.springerlink.com/content/xq5x2k8l4n686r26/
    No doubt some things are poorly understood guys, but even infinitesimal rises in CH4 in conjunction with CO2 greatly influence climate and can lead to more progressive warming. Since CH4, N2O, and CO2 are trace gases, we must be very cautious about what we consider to be a small % increase in light of the climate influences these gases exert. Nowhere in recent history have these levels been so high, and after the last ice age, these GHG’s led to drastic climate changes, and ultimately global warming.
    http://blogs.nature.com/news/thegreatbeyond/2008/04/gloomy_emissions_data_shows_me.html

    “Large uncertainties in the budget of atmospheric methane, an important greenhouse gas, limit the accuracy of climate change projections. Thaw lakes in North Siberia are known to emit methane, but the magnitude of these emissions remains uncertain because most methane is released through ebullition (bubbling), which is spatially and temporally variable. Here we report a new method of measuring ebullition and use it to quantify methane emissions from two thaw lakes in North Siberia. We show that ebullition accounts for 95 per cent of methane emissions from these lakes, and that methane flux from thaw lakes in our study region may be five times higher than previously estimated. Extrapolation of these fluxes indicates that thaw lakes in North Siberia emit 3.8 teragrams of methane per year, which increases present estimates of methane emissions from northern wetlands (6–40 teragrams per year) by between 10 and 63 per cent. We find that thawing permafrost along lake margins accounts for most of the methane released from the lakes, and estimate that an expansion of thaw lakes between 1974 and 2000, which was concurrent with regional warming, increased methane emissions in our study region by 58 per cent. Furthermore, the Pleistocene age (35,260–42,900 years) of methane emitted from hotspots along thawing lake margins indicates that this positive feedback to climate warming has led to the release of old carbon stocks previously stored in permafrost.” – http://www.nature.com/nature/journal/v443/n7107/full/nature05040.html
    I have no problem acknowledging uncertainties, hence why more of these studies are performed, more observations of the Arctic and Antarctic, etc… however, the linkage between gases like methane and carbon dioxide is quite clear.

  287. I will digress here with this final post for the night; I need my sleep as well. But Methane is about 25 times more efficient a greenhouse gas than carbon dioxide per unit mass, so keep that in mind; as reported in environmental chemistry textbooks, the journal Nature, ScienceDaily, and many other publications. A small increase in CH4 also leads to more CO2 formation in the atmosphere, and the remaining CH4 leads to a greater forcing upon CO2, which forces upon the water vapor, and CO2 provides a positive feedback upon CH4, and H-O-H is a positive feedback upon CO2. Yes, some microorganisms feed on CH4, which reduces the total emissions of this GHG, but NH4 is released along with CH4 from various bacteria and plant interactions, including at compost piles in addition to soil exposed in the Tundra and elsewhere in the globe. Also see NOAA paleoclimate data.

  288. Jacob, you truly, truly need to learn to distinguish between observations and hypotheses. I can sympathize with you, as many climate “scientists” are incapable of doing it as well.

    For example, you say:

    No doubt some things are poorly understood guys, but even infinitesimal rises in CH4 in conjunction with CO2 greatly influence climate and can lead to more progressive warming. Since CH4, N2O, and CO2 are trace gases, we must be very cautious about what we consider to be a small % increase in light of the climate influences these gases exert. Nowhere in recent history have these levels been so high, and after the last ice age, these GHG’s led to drastic climate changes, and ultimately global warming.

    We don’t know what effect small changes in trace gases have upon the climate. There is no evidence that they have led to “drastic changes” or “global warming” after the last ice age. Greenland ice cores show that since emerging from the last ice age, the earth has cooled. While we know that CO2 and the others are greenhouse gases and raise temperatures in theory, that’s just simple math, we don’t know what effect they have on the extremely complex climate system. Let me repeat my example from above:

    An example I have used before to illustrate the flaw is this: suppose we take a 75 kg block of, say, copper, and stick one end into hot water. After a while, the heat is transferred to the other end of the copper block. I propose a theory that if you stick one end of something into hot water, the other end will heat up at a certain rate. Simple math, no problem. I try a block of wood. I notice that it works just the same, except it heats up more slowly. I try a steel block, same thing, it heats up, just takes a different time.

    Finally, having proven my theory, I decide to test it on the 75 kg of myself. I put my feet into the hot water, and I wait for my head to heat up … and wait … and wait.

    The moral is, complex systems don’t obey simple math.

    Simple math and lab experiments and basic physics of IR absorption don’t help us here. Climate models are also of no help in this regard, as they can only do what their programmers program them to do.

    The thing that model programmers love to overlook is that climate is a hugely complex, tera-watt scale, driven, multi-stable, chaotic, resonant constructal planetary heat engine. It consists of 5 separate subsystems (atmosphere, biosphere, cryosphere, lithosphere, and ocean). None of these subsystems is well understood. Each of these subsystems has its own known and unknown forcings, resonances, and responses. In addition, each subsystem interacts with the other subsystems in the form of positive and negative forcings and feedbacks, again both known and unknown. It is without a doubt the most complex system we have ever tried to model, and we’ve only been at it for a couple of decades. Our observational datasets are fragmentary, short, and subject to a host of problems. Even our satellite datasets, which at least have global coverage, are the subject of intense scientific dispute.

    With such poor data, such a complex system, and such rudimentary models, the idea that we can currently say “a 1% change in methane will lead to warming” is hubris of the highest order. We can’t predict next month’s weather, we don’t even know if the earth has a thermostat, and you seriously claim that after the last ice age “GHGs lead to drastic climate changes”? … sorry, but we’re not there yet. We can’t even say that the climate that we have seen in recent years is anything but natural variation. The world cooled fairly radically from about 1400 to 1650, and has been generally warming since then. In the last century alone, it warmed from 1910 to 1940, cooled to 1970, and warmed to 1998 … which of these are you claiming is human-caused, and why not the others?

    There is nothing, I repeat nothing, in the observational record that can be shown statistically to be significantly different from earlier parts of the record. Most statewide high temperature records in the US were set in the 1930s … and at that time, people were making very similar statements to yours, that we were all going to fry. The recent warming is no different, either in length, steepness, or hype.

    Finally, it is obvious that there are many things which we still don’t know. Contrary to IPCC claims and climate models, the world has not warmed in the last decade. Don’t know why. CO2 is still rising, but temperature is not. Don’t know why. We recently discovered that plankton make clouds to cool them down when they are too hot. How much does that affect the climate? We don’t know. We don’t understand why the sea level took a jump in 2004. We can’t explain a whole host of things about climate, and you want to tell us that methane is teh suxxor, and we should be “very cautious”?

    Like I said above, you are free to worry about that change in methane if you wish. But trying to pass it off as science won’t work on this site. Our scientific understanding of climate is nowhere near that level, and likely won’t be for decades.

    w.

    PS – That’s just this web site. You can sell your particular brand of worry about trace gases at the Scientific American blog and find plenty of agreement. Here … we prefer facts, science, and people honest enough to say “we don’t know”. I laud you for your efforts here, but if you want to get traction, you need to present facts. Evidence. Observations. Then you need to draw supportable inferences and conclusions from those facts and data. Repeating someone else’s hysteria about arctic methane is meaningless.

  289. Willis,

    “PS – dear friends, remember significant digits. We can’t say the earth blackbody temperature is “254.802 K”, that’s a bridge too far.”

    True of course. However I was forced to use too many digits in the calculation of blackbody temperature because I was trying to show the expected blackbody sensitivity value without using the inverse of the first derivative of the Stefan-Boltzmann equation with respect to temperature. To get a reasonably accurate sensitivity value without calculus, I had to compare the equilibrium temperatures at two closely spaced radiative energy levels (I used a 1 watt difference), do the calculation to a crazy number of digits, then subtract one from the other. Perhaps I should have just presented the correct sensitivity equation derived from Stefan-Boltzmann, and not worried about losing some readers by using calculus.
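The finite-difference approach Steve describes is easy to reproduce; here is a minimal Python sketch comparing it against the calculus version. The 239 W/m^2 flux is an illustrative round number for Earth's absorbed sunlight, not a value taken from the comment.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def bb_temp(flux):
    """Equilibrium blackbody temperature (K) for a given radiative flux (W/m^2)."""
    return (flux / SIGMA) ** 0.25

F = 239.0  # roughly Earth's absorbed solar flux, W/m^2

# Finite-difference sensitivity: equilibrium temperatures at two closely
# spaced flux levels (a 1 W/m^2 difference, as in the comment), subtracted.
sens_fd = bb_temp(F + 1.0) - bb_temp(F)

# Analytic sensitivity from the derivative: dT/dF = 1/(4*sigma*T^3) = T/(4F).
sens_analytic = bb_temp(F) / (4.0 * F)

print(round(sens_fd, 2), round(sens_analytic, 2))  # prints 0.27 0.27
```

Both routes give roughly 0.27 K per W/m^2, confirming that the no-calculus shortcut and the derivative agree to the precision that matters.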

    Thanks for your many constructive comments on this thread.

    Steve

  290. Just noting that Methane levels are now well below the IPCC’s A1B forecast, and it seems likely that global levels will stay below 1900 ppb versus the peak in the A1B forecast of 2400 ppb.

    So 20% less than the +3.0C forecast, and trends that indicate Methane will only add a negligible amount further to global temperatures (barely measurable really), should give Jacob and Scientific American some comfort.

    (But then, there will be another story soon about how Methane has spiked another 2 ppb last year and there is a danger of a runaway Methane clathrate release like in the PETM).

  291. >> Willis Eschenbach (01:37:58) :

    The average surface albedo of the earth is about 0.85 <<

    I think you mean emissivity, not albedo.

    Jim

  292. Willis,

    It’s interesting that you are using an emissivity of 0.85. I’ve tried looking up this number and most consider that the Earth’s surface emissivity is closer to 0.95. From my (greatly simplified) calculations, I get numbers below 0.7. Emissivity is frequency dependent. The numbers used in climate are in the IR range. I think it makes more sense to use the total emissivity; however, I appear to be a minority of one. Where do you get your value for the surface emissivity?

    Thanks,
    Jim

  293. The best estimate for Albedo is 0.298

    The best estimate for Emissivity is 1.0 for ice and snow, 0.9907 for water, 0.98 for sand and desert and the overall Earth value ranges from 0.983 to 0.989

  294. >> Bill Illis (11:45:03) :

    The best estimate for Albedo is 0.298 <<

    The error bars are around plus or minus 0.03 and may be (considerably) larger.

    >> The best estimate for Emissivity is 1.0 for ice and snow, 0.9907 for water, 0.98 for sand and desert and the overall Earth value ranges from 0.983 to 0.989 <<

    These are for IR frequencies. The emissivity for new fallen snow is not 1.0 in the visible light range. However, snow is essentially black in the IR range. I saw a US Navy report that says the surface emissivity of the ocean is 0.5. Unfortunately I don’t have a reference.

    Jim

  295. Jacob, why I pay no attention to what the IPCC and other “experts” have to say is condensed in this quote from 1989:

    A senior U.N. environmental official says entire nations could be wiped off the face of the Earth by rising sea levels if the global warming trend is not reversed by the year 2000. Coastal flooding and crop failures would create an exodus of “eco-refugees,” threatening political chaos, said Noel Brown, director of the New York office of the U.N. Environment Program, or UNEP. He said governments have a 10-year window of opportunity to solve the problem.

    You can see how well that prediction turned out. Tell me the difference between that one and this one from 2009:

    Ban Ki-moon, the United Nations Secretary-General, has warned of “catastrophic consequences” unless a new international agreement on greenhouse gas emissions is reached. . . “The world has less than 10 years to halt the global rise in greenhouse gas emissions if we are to avoid catastrophic consequences for people and the planet.”

    Thanks to Roger Pielke Jr., see his excellent site for real science.

    w.

  296. Jim Masterson (10:15:38), thanks for pointing out my error.

    It’s interesting that you are using an emissivity of 0.85. I’ve tried looking up this number and most consider that the Earth’s surface emissivity is closer to 0.95. From my (greatly simplified) calculations, I get numbers below 0.7. Emissivity is frequency dependent. The numbers used in climate are in the IR range. I think it makes more sense to use the total emissivity; however, I appear to be a minority of one. Where do you get your value for the surface emissivity?

    You are right, my bad, I was conflating 1- surface albedo and surface emissivity … can’t do dat’, huge mistake. Ignore alien orders, it was late, my brain was entirely out of gear.

    My bible for many things climatic, including for the albedo and emissivity of common substances, is Geiger’s The Climate Near The Ground. He gives the following figures for IR emissivity at 9 to 12 microns:

    Water 0.96
    Fresh snow 0.99
    Dry sand 0.95
    Wet sand 0.96
    Forest, deciduous 0.95
    Forest, conifer 0.97
    Leaves (corn, beans) 0.94

    and so on down to things like:

    Mouse fur 0.94
    Glass 0.94

    So yes, a reasonable global figure to use for the emissivity of IR (greenhouse) radiation would be around 0.95 – 0.96 or so. Which is why most people just ignore it.

    w.

  297. Jim Masterson (12:35:41) :
    Bill Illis (11:45:03) :
    The best estimate for Albedo is 0.298
    The error bars are around plus or minus 0.03 and may be (considerably) larger.

    Albedo is by no means a constant. Its variability should be worked into all climate models, since it is a major driver of climate. The IPCC had no idea how albedo worked, so they just ignored it.
    [That last sentence may sound like a harsh statement, but I have had it confirmed independently by 3 scientists associated with or supportive of the IPCC. I can add that they clearly found it very convenient to ignore it.]

    PALLÉ ET AL.: EARTHSHINE TRENDS 1999-2007
    http://solar.njit.edu/preprints/palle1376.pdf
    Abstract.
    The overall reflectance of sunlight from Earth is a fundamental parameter for climate studies. Recently, measurements of earthshine were used to find large decadal variability in Earth’s reflectance of sunlight. However, the results did not seem consistent with contemporaneous independent albedo measurements from the low Earth orbit satellite, CERES, which showed a weak, opposing trend. Now, more data for both are available, all sets have been either re-analyzed (earthshine) or re-calibrated (CERES), and present consistent results. Albedo data are also available from the recently released ISCCP FD product. Earthshine and FD analyses show contemporaneous and climatologically significant increases in the Earth’s reflectance from the outset of our earthshine measurements beginning in late 1998 roughly until mid-2000. After that and to-date, all three show a roughly constant terrestrial albedo, except for the FD data in the most recent years.
    Using satellite cloud data and Earth reflectance models, we also show that the decadal scale changes in Earth’s reflectance measured by earthshine are reliable, and caused by changes in the properties of clouds rather than any spurious signal, such as changes in the Sun-Earth-Moon geometry.

  298. Willis Eschenbach,

    I think the confusion might not be between absorptivity and emissivity at the surface (due to Kirchhoff’s law) but between the albedo of the surface and the effective albedo of the earth system.

  299. >> Mike Jonas (13:42:56) :

    PALLE ET AL.: EARTHSHINE TRENDS 1999-2007 <<

    It’s a great paper Mike. Thanks for the link. I’ve browsed it and will digest it later.

    Jim

  300. >> Willis Eschenbach (13:36:00) :

    So yes, a reasonable global figure to use for the emissivity of IR (greenhouse) radiation would be around 0.95 – 0.96 or so. Which is why most people just ignore it. <<

    I was hoping for 0.85, but I see I’m back to a minority of one. Thanks for the reference.

    Jim

  301. Willis, from the combination of observational record, proxy data, GCM’s, and correlational analysis, there can be no other major cause of the current warming trend. Now, if you have a suggestion that has not been considered, well, I am all ears, but your claims are inaccurate. I do appreciate you not engaging in name calling and lashing out as many posters do here often.

  302. Also see here for history of climate science and how AGW was discovered and validated:

    ~snip~ “Denial…” “…denial…” “…denial…” Enough! ~dbstealey, mod.

  303. Trenberth’s latest Earth Radiation Budget paper has a discussion box surrounding emissivity and its context in surface temps.

    I get 0.983 to 0.991

    http://www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/BAMSmarTrenberth.pdf

    It used to be ignored and set to 1.0 unless one was discussing seasonal changes and seasonal lags.

    Effectively, the new numbers mean the Greenhouse Effect is 156 watts/metre^2 versus the 150 estimated before. There is no change in the temp numbers.
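One way to see where the 150-versus-156 watts/metre^2 figures could come from is the simple identity: greenhouse effect = surface blackbody emission minus outgoing longwave radiation. A minimal Python sketch follows; the 239.7 W/m^2 OLR and the 288 K / 289 K surface temperatures are illustrative assumptions, not Trenberth's exact inputs.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def greenhouse_effect(t_surface, olr=239.7):
    """Greenhouse effect (W/m^2): surface blackbody emission minus outgoing
    longwave radiation at the top of the atmosphere."""
    return SIGMA * t_surface ** 4 - olr

print(round(greenhouse_effect(288.0)))  # -> 150 W/m^2
print(round(greenhouse_effect(289.0)))  # -> 156 W/m^2
```

A one-kelvin difference in the assumed surface temperature is enough to move the number from ~150 to ~156 W/m^2, so the revision Bill mentions is plausibly just a bookkeeping change, consistent with his remark that the temperature numbers don't move.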

  304. Jacob Mack (16:46:17), thanks for sticking with it. You say:

    Willis, from the combination of observational record, proxy data, GCM’s, and correlational analysis, there can be no other major cause of the current warming trend. Now, if you have a suggestion that has not been considered, well, I am all ears, but your claims are inaccurate. I do appreciate you not engaging in name calling and lashing out as many posters do here often.

    Jacob, before you start coming up the “major cause” for why the current trend is unusual in a historical sense, you first need to show that the current trend in fact is unusual in a historical sense.

    Where is there anything that requires that we posit a new cause? The arctic has been warmer and colder than at present, hurricanes have been more and less frequent than at present, droughts have been more and less prevalent than at present, temperatures have been higher and lower than at present … what exactly is it that you are trying to explain by saying “It’s humans”? What is it that humans are the “major cause” of? Please give us the specifics of what you are trying to explain. Not “the temperature” or “hurricanes”, that’s way, way too vague. What, exactly, are you invoking human action to explain?

    Perhaps you could start with that, because until you can show that the current climate is statistically different from the historical variations in climate, I’m going to go with “natural variability” as the major cause. Occam’s Razor, don’cha know …

  305. Jacob Mack (16:46:17) : “..from the combination of observational record, proxy data, GCM’s, and correlational analysis, there can be no other major cause of the current warming trend.

    The observational record and proxy data are the basic data. The GCMs are not constructed from fundamentals but have been constructed to correlate with the data. To see this, look for “constrained by observation” in the IPCC Report. A nice simple example is in 10.5.4.5 “..relationships between variables that can be directly constrained by observations, such as global surface temperature..“. What this means is that parameters in the GCMs have been set so that they match the global surface temperature record. That doesn’t sound like a crime, but it is when the models are claimed to be sound because their output matches observation.

    What it all adds up to is that the GCMs can have no credibility until they can predict climate beyond the reference period – which they have proved remarkably incapable of doing.

    If you look at the Palle paper that I posted recently, you will see that the Earth’s albedo decreased significantly from the 1980s to around 2000. That means that increasing amounts of sunlight were getting through to warm the oceans – enough to deliver a very large proportion of the observed ocean warming. You will then see that albedo increased for the next 2-3 years – enough to slow or stop the warming process; and observations by Willis, Cazenave, Leuliette and maybe others all show that ocean warming did indeed slow down and eventually stop around 2003 (±~1 yr). After about 2003, albedo has either stayed at the higher level or has increased a bit more. Again, this is in line with the observed slight cooling of the oceans in the last few years.

    I would therefore put forward albedo as a major climate driver and a serious challenge to your statement “there can be no other major cause of the current warming trend“. It most certainly fits observations (particularly those after the last IPCC Report) far better than CO2.

  306. For anyone still following this thread, I think I’ve found a way to integrate the greenhouse effect forcings into the Stefan-Boltzmann equations.

    Effectively, the assumption made by the pro-AGW set is that CO2/GHGs are responsible for the entire greenhouse effect (or very nearly all). And that is 150 watts/metre^2 and 33C.

    It turns out if you fit ln(CO2) to those general parameters (with CO2 being a proxy for all the non-water GHGs) and add that into the solar forcing provided by Stefan-Boltzmann …

    … out pops 3.2C per doubling of CO2.

    So the equation is: Surface Temperature = [(Solar Forcing + ln(CO2) Greenhouse Forcing) / Stefan-Boltzmann Constant]^0.25

    CO2 388 = [([1366*(1-0.298)/4] + [26.1417*ln(388)]) / 5.67E-08]^0.25 = 289K

    CO2 280 = [([1366*(1-0.298)/4] + [26.1417*ln(280)]) / 5.67E-08]^0.25 = 287.5K

    CO2 560 = [([1366*(1-0.298)/4] + [26.1417*ln(560)]) / 5.67E-08]^0.25 = 290.7K

    CO2 1120 = [([1366*(1-0.298)/4] + [26.1417*ln(1120)]) / 5.67E-08]^0.25 = 293.9K

    (This matches what the IPCC/Hansen projection originally was for a CO2 doubling: 3.2C. It is a small matter to change it to 3.0C – just reduce the 26 constant to 22.84, and one needs a few extra non-CO2 watts.)

    And the 26.1417 constant for ln(CO2) is really the difference between Hansen’s phony 0.75C per watt/metre^2 temperature impact and the 0.18C that Stefan-Boltzmann says it should be, times the 5.35*ln(CO2now/CO2orig) formula – with the assumption that 5.35 would be higher if CO2 were a proxy for all the greenhouse gases. Circle completed.

    And now we know where the 5.35 ln came from (I don’t think anyone knew before).

    Adjust the equation to 1.5C per doubling (CO2 only responsible for half the greenhouse effect) and one gets a much better match to the temperature record.

    CO2 388 = [([1366*(1-0.298)/4] + [13.07*ln(388)+75]) / 5.67E-08]^0.25 = 288.5K

    CO2 280 = [([1366*(1-0.298)/4] + [13.07*ln(280)+75]) / 5.67E-08]^0.25 = 287.7K

    CO2 560 = [([1366*(1-0.298)/4] + [13.07*ln(560)+75]) / 5.67E-08]^0.25 = 289.3K

    CO2 1120 = [([1366*(1-0.298)/4] + [13.07*ln(1120)+75]) / 5.67E-08]^0.25 = 290.8K

    So, this could be one way to simulate temps without a climate model and using the proven Stefan Boltzmann equations.

  307. And I should add the Glacial Maximum of 18,000 years ago to that (with albedo changing from 0.298 to 0.35).

    CO2 180 = [([1366*(1-0.35)/4] + [13.07*ln(180)+75]) / 5.67E-08]^0.25 = 283.2K = -5.3C from today
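Bill's fitted formula is easy to check numerically. Here is a minimal Python sketch, with the Stefan-Boltzmann constant written as 5.67E-08 (the value that actually reproduces the temperatures he quotes); the a = 13.07, b = 75 coefficients are his 1.5C-per-doubling fit, and the 3.2C version just swaps in a = 26.1417, b = 0.

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def surface_temp(co2_ppm, albedo=0.298, a=13.07, b=75.0, solar=1366.0):
    """Toy fit from the comment: T = [(solar forcing + GHG forcing)/sigma]^0.25."""
    solar_forcing = solar * (1.0 - albedo) / 4.0  # absorbed sunlight, W/m^2
    ghg_forcing = a * math.log(co2_ppm) + b       # fitted ln(CO2) term, W/m^2
    return ((solar_forcing + ghg_forcing) / SIGMA) ** 0.25

print(surface_temp(280))                # ~287.7 K, pre-industrial
print(surface_temp(560))                # ~289.3 K, i.e. ~1.7C per doubling
print(surface_temp(180, albedo=0.35))   # ~283.2 K, glacial maximum case
```

Note that the doubling response of this fit comes out nearer 1.7C than 1.5C when computed exactly, which is within the rounding of the hand-worked numbers above.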

  308. >> Bill Illis (15:27:03) :

    And I should add the Glacial Maximum of 18,000 years ago to that (with albedo changing from 0.298 to 0.35).

    CO2 180 = [([1366*(1-0.35)/4] + [13.07*ln(180)+75]) / 5.67E-08]^0.25 = 283.2K = -5.3C from today <<

    If I use Trenberth’s 1997 model (his albedo value is 0.31 or more specifically 0.3129), an increase of albedo from 0.31 to 0.35 gives a surface temperature change of -4.5K to -5.5K (depending on how you handle latent and sensible heat flux). If I start the model from the 0.298 albedo value and raise it to 0.35, then I get a temperature change of -6.6K to -7.6K. But that’s Trenberth’s model. Using Trenberth’s model, I can explain all the current surface temperature increases with albedo changes alone. I don’t need GHGs, and I don’t require an atmospheric hot spot (which doesn’t appear for albedo changes).
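For comparison, a bare Stefan-Boltzmann version of the same albedo experiment can be run in a few lines of Python. This sketch has no atmosphere and no latent/sensible heat partitioning, so it necessarily gives smaller changes than the Trenberth-model ranges Jim quotes.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)
SOLAR = 1366.0   # solar constant, W/m^2

def eq_temp(albedo):
    """Blackbody equilibrium temperature (K) for a given planetary albedo."""
    return (SOLAR * (1.0 - albedo) / 4.0 / SIGMA) ** 0.25

# Raising albedo from 0.31 to 0.35 cools the bare blackbody by ~3.8 K;
# starting from 0.298, the drop is ~4.9 K.  Both are smaller than the
# -4.5 to -7.6 K ranges quoted above from Trenberth's full model.
print(round(eq_temp(0.35) - eq_temp(0.31), 1))   # -> -3.8
print(round(eq_temp(0.35) - eq_temp(0.298), 1))  # -> -4.9
```

The gap between the bare-blackbody numbers and the model numbers is a rough measure of how much the flux-partitioning assumptions matter in this kind of calculation.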

    >> Willis Eschenbach (13:36:00) :

    My bible for many things climatic, including for the albedo and emissivity of common substances, is Geiger’s The Climate Near The Ground. He gives the following figures for IR emissivity at 9 to 12 microns: <<

    This is just a side note. The emissivity range is for 9 to 12 microns. The primary absorption band of concern for CO2 is around 15 microns. That’s outside of the specified range. I’m not sure it would change the values by much.

    Jim

  309. Jim Masterson (11:29:41), thanks for the side note. You say:

    This is just a side note. The emissivity range is for 9 to 12 microns. The primary absorption band of concern for CO2 is around 15 microns. That’s outside of the specified range. I’m not sure it would change the values by much.

    You are correct. It is worth noting that at 15 microns much of the absorption is done by H2O. However, I think you are also right that it won’t change the values much.

    One often overlooked issue is that these are all clear-sky values. Clouds are essentially black-body to IR at all frequencies, including the so-called “atmospheric window”. Since global cloud coverage is about 70%, this is a huge problem for the simple IR calculations.

  310. >> Willis Eschenbach (13:55:08) :

    It is worth noting that at 15 microns much of the absorption is done by H2O. <<

    You can spend considerable time arguing with supporters of AGW over how important that 15 micron CO2 band is as it peeks out from behind the H2O band. If you can get them to agree that the H2O band dominates in that frequency range, then they bring up drier regions, such as deserts and the poles. It is interesting that during the hot year of 1998, Death Valley was relatively cool. This point was brought out by John Daly before he died (I hope that link works).

    >> One often overlooked issue is that these are all clear-sky values. Clouds are essentially black-body to IR at all frequencies, including the so-called “atmospheric window”. Since global cloud coverage is about 70%, this is a huge problem for the simple IR calculations. <<

    Again, the Kiehl and Trenberth 1997 paper uses 62% for cloud cover. They divide the clouds into three layers and assign coverage values for each layer: 49% for the low cloud layer, 6% for the middle cloud layer, and 20% for the high cloud layer. Apparently they then use the inclusion-exclusion principle to calculate the total cloud coverage. (I prefer the easier method of taking the complement of the product of the complements.) So we have 1 – ( 1 – 49% )*( 1 – 6% )*( 1 – 20% ) = 61.6%. Cloud cover then becomes ambiguous for the rest of the paper. We don’t know when they say cloudy, if they mean 100%, 62%, or something else. Supposedly their famous figure 7 uses the 62% figure, but it is never explicitly stated anywhere in the paper or on the figure. We must assume. They also miscalculate the atmospheric window flux value, but that’s another story.

    Jim
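The layered cloud-cover arithmetic in Jim's last comment is easy to verify. Here is a minimal Python sketch of the complement-of-complements method; the 49/6/20 percent layer values are the Kiehl and Trenberth (1997) figures quoted above, and the layers are assumed statistically independent.

```python
# Fractional coverage of the low, middle, and high cloud layers
# from Kiehl & Trenberth (1997), assumed statistically independent.
layers = [0.49, 0.06, 0.20]

# Complement-of-complements: total cover = 1 - P(clear sky in every layer).
clear_sky = 1.0
for cover in layers:
    clear_sky *= (1.0 - cover)

total_cover = 1.0 - clear_sky
print(round(total_cover, 3))  # -> 0.616, i.e. the ~62% figure in the paper
```

This is numerically identical to the inclusion-exclusion expansion; the complement form is just fewer terms to keep track of.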

Comments are closed.