How Sensitive is the Earth’s Climate?

Guest Post By Steve Fitzpatrick


Introduction

Projections of climate warming from general circulation models (GCM’s) are based on a high sensitivity of the Earth’s climate to radiative forcing from well mixed greenhouse gases (WMGG’s).  This high sensitivity depends mainly on three assumptions:

1. Slow heat accumulation in the world’s oceans delays the appearance of the full effect of greenhouse forcing by many (e.g., >20) years.

2. Aerosols (mostly from combustion of carbon based fuels) increase the Earth’s total albedo, and have partially hidden the ‘true’ warming effect of WMGG increases.  Presumably, aerosols will not increase in the future in proportion to increases in WMGG’s, so the net increase in radiative forcing will be larger for future emissions than for past emissions.

3. Radiative forcing from WMGG’s is amplified by strong positive feedbacks due to increases in atmospheric water vapor and high cirrus clouds; in the GCM’s, these positive feedbacks approximately double the expected sensitivity to radiative forcing.

However, there is doubt about each of the above three assumptions.

1.  Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001.  This indicates that a) ocean heat content does not lag many years behind the surface temperature, b) global average temperature and heat accumulation in the top 700 meters of ocean are closely tied, and c) the Hansen et al. (4) projection in 2005 of substantial future warming ‘already in the pipeline’ is not supported by recent ocean and surface temperature measurements.  While there is no doubt a very slow accumulation of heat in the deep ocean below 700 meters, this represents only a small fraction of the accumulation expected for the top 700 meters, and should have little or no immediate (century or less) effect on surface temperatures.  Short ocean heat lags are consistent with relatively low climate sensitivity, and preclude very high sensitivity.

2.  Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings.  There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future.  Considering the large uncertainties in aerosol effects, it is not even clear if the net effect, including black carbon, which reduces rather than increases albedo, is significantly different from zero.

3.  Amplification of radiative forcing by clouds and atmospheric humidity remains poorly defined.  Climate models do not explicitly include the behavior of clouds, which are orders of magnitude smaller than the scale of the models, but instead handle clouds using ‘parameters’ that are adjusted to approximate the expected behavior of clouds.  Adjustable parameters can of course also be tuned to make a model predict whatever warming is expected or desired.  Measured tropospheric warming in the tropics (the infamous ‘hot spot’), which the models attribute to increases in atmospheric water content, falls far short of the warming projected for this part of the atmosphere by most GCM’s.  This casts doubt on the amplification the GCM’s assume from increased water vapor.

Many people, including this author, do not believe the large temperature increases (up to 5+ C for a doubling of CO2) projected by GCM’s are credible.  A new paper by Lindzen and Choi (described at WUWT on August 23, 2009) reports that the total outgoing radiation (visible plus infrared) above the tropical ocean increases when the ocean surface warms, which suggests the climate feedback (at least in these tropical ocean areas) is negative, rather than positive as the GCM’s all assume.

In spite of the many problems and doubts with GCM’s:

1) It is reasonable to expect that positive forcing, from whatever source, will increase the average temperature of the Earth’s surface.

2) Basic physics shows that increasing infrared absorbing gases in the atmosphere like CO2, methane, N2O, ozone, and chloro-fluorocarbons inhibits the escape of infrared radiation to space, and so does provide a positive forcing.

3) There has in fact been significant global warming since the start of the industrial revolution (beginning a little before 1800), concurrent with significant increases in WMGG emissions from human activities.

There really should be an increase in average surface temperature due to forcing from increases in infrared absorbing gases.  This is not to say that there are no other plausible explanations for some or even most of the increase in global temperatures over the past 100+ years.  For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the little ice age and the late 20th century, which could account for a significant fraction of the observed warming.  But regardless of other possible contributions, it is difficult to refute that increases in greenhouse gases should lead to increased global average temperatures.  What matters is not whether the Earth will warm from increases in WMGG’s, but how much it will warm and over what period.  The uncertainties and dubious assumptions in the GCM’s make them not terribly helpful for projecting potential warming, even under the worst-case assumption that WMGG’s are the principal cause of the warming.

Climate Sensitivity

If we knew the true climate sensitivity of the Earth (expressed as degrees increase per watt/square meter of forcing) and we knew the true radiative forcing due to WMGG’s, then we could directly calculate the expected temperature rise for any assumed increases in WMGG’s.  Fortunately, the radiative forcing effects of WMGG’s are pretty accurately known, and these can be used in evaluating climate sensitivity.  An approximate value for climate sensitivity in the absence of any feedbacks, positive or negative, can be estimated from the change in blackbody emission temperature required to balance a 1 watt per square meter increase in heat input, using the Stefan-Boltzmann law.  Assuming solar intensity is 1366 watts/M^2, and assuming the Earth’s average albedo is ~0.3, the net solar intensity is ~239 watts/M^2, requiring a blackbody temperature of 254.802 K to balance incoming heat.  With 1 watt/M^2 more input, the required blackbody emission temperature increases to 255.069 K, so the expected climate sensitivity is (255.069 – 254.802) = 0.267 degree increase for one watt per square meter of added heat.
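For readers who want to check this arithmetic, here is a minimal Python sketch using the same assumed constants (1366 watts/M^2 solar intensity, 0.3 albedo):

# Zero-feedback (blackbody) climate sensitivity from the Stefan-Boltzmann law.
SIGMA = 5.67e-8             # Stefan-Boltzmann constant, W/(m^2 K^4)
SOLAR = 1366.0              # solar intensity at the top of the atmosphere, W/m^2
ALBEDO = 0.30               # assumed average planetary albedo

def emission_temp(absorbed):
    """Blackbody temperature (K) needed to radiate `absorbed` W/m^2."""
    return (absorbed / SIGMA) ** 0.25

net = (SOLAR / 4.0) * (1.0 - ALBEDO)     # ~239 W/m^2 averaged over the Earth
t0 = emission_temp(net)                  # ~254.8 K
t1 = emission_temp(net + 1.0)            # ~255.07 K
print(t0, t1, t1 - t0)                   # sensitivity ~0.267 K per W/m^2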

But solar intensity and the blackbody emission temperature of the earth both change with latitude, yielding higher emission temperature and much greater heat loss near the equator than near the poles.  The infrared heat loss to space goes as the fourth power of the emission temperature, so the net climate sensitivity will depend on the T^4 weighted contributions from all areas of the Earth.  Feedbacks within the climate system, both positive and negative, including different amounts and types of clouds, water vapor, changes in albedo, and potentially many others, add much uncertainty.

Measuring Earth’s Sensitivity

The only way to accurately determine the Earth’s climate sensitivity is with data.

Bill Illis produced an outstanding guest post on WUWT November 25, 2008, where he presented the results of a simple curve-fit model of the Earth’s average surface temperature based on only three parameters:  1) the Atlantic multi-decadal oscillation index (AMO), 2) values of the Nino 3.4 ENSO index, and 3) the log of the ratio of atmospheric CO2 concentration to the starting CO2 concentration.  Bill showed that the best estimate linear fit of these parameters to the global mean temperature data could account for a large majority of the observed temperature variation from 1871 to 2008.  He also showed that the AMO index and the Nino 3.4 index contributed little to the overall increase in temperature during that period, but did account for much of the variation around the overall temperature trend.  The overall trend correlated well with the log of the CO2 ratio.  In other words, the AMO and Nino 3.4 indexes could hindcast much of the observed variation around the overall trend, and that overall trend could be accurately hindcast by the log of the CO2 ratio.
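The structure of such a fit is simple enough to show in a few lines of Python.  This is only a sketch of the approach, not Bill’s actual code, and the input series below are synthetic placeholders rather than the real AMO, Nino 3.4, and CO2 records:

import numpy as np

rng = np.random.default_rng(0)
n = 138                                      # one value per year, 1871-2008
amo  = rng.normal(0.0, 0.2, n)               # stand-in AMO index
nino = rng.normal(0.0, 1.0, n)               # stand-in Nino 3.4 index
co2  = np.linspace(290.0, 385.0, n)          # stand-in CO2 path, ppm
temp = 0.1*amo + 0.05*nino + 3.0*np.log(co2/co2[0]) + rng.normal(0, 0.05, n)

# least-squares fit: temp ~ a*AMO + b*Nino3.4 + c*log(CO2/CO2_0) + d
X = np.column_stack([amo, nino, np.log(co2/co2[0]), np.ones(n)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
fitted = X @ coef
r2 = 1.0 - np.sum((temp - fitted)**2) / np.sum((temp - temp.mean())**2)
print("coefficients:", coef.round(3), " R^2 =", round(r2, 3))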

There are a few implicit assumptions in Bill’s model.  First, the model assumes that all historical warming can be attributed to radiative forcing.  This is a worst case scenario, since other potential causes for warming are not even considered (long term solar effects, long term natural climate variability, etc.).  The climate sensitivity calculated by the model would be lowered if other causes account for some of the measured warming.

Second, the model assumes the global average temperature changes linearly with radiative forcing.  While this is almost certainly not correct for Earth’s climate, it is probably not a bad approximation over a relatively small range of temperatures and total forcings.  That is, a change of a few watts per square meter is small compared to the average solar flux reaching the Earth, and a change of a few degrees in average temperature is small compared to Earth’s average emissive (blackbody) temperature.  So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.

Third, the model assumes that the combined WMGG forcings can be accurately represented by a constant multiplied by the log of the ratio of CO2 to starting CO2.  While this may be a reasonable approximation for some gases, like N2O and methane (at least until ~1995), it is not a good approximation for others, like chloro-fluorocarbons, which did not begin contributing significantly to radiative forcing until after 1950, and which are present in the atmosphere at such low concentration that they absorb linearly (rather than logarithmically) with concentration.  In addition, chloro-fluorocarbon concentrations will decrease in the future rather than increase, since most long lived CFC’s are no longer produced (due to the Montreal Protocol), and what is already in the atmosphere is slowly degrading.

To make Bill’s model more physically accurate, I made the following changes:

1.  Each of the major WMGG’s is separated and treated individually: CO2, N2O, methane, chloro-fluorocarbons, and tropospheric ozone.

2.  Concentrations of each of the above gases are converted to net forcings, using the IPCC’s radiation equations for CO2, methane, N2O, and CFC’s (6), plus an estimated radiative contribution from ozone increases (a sketch of these forcing equations appears after this list).

3.  The change in solar intensity with the solar cycle is included as a separate forcing, assuming that measured intensity variations for the last three solar cycles (about 1 watt per square meter variation over a base of 1365 watts per square meter) are representative of earlier solar cycles, and assuming that sunspot number can be used to estimate how solar intensity varied in the past.

4.  The grand total forcing (including the solar cycle contribution), a 2-year trailing average of the AMO index, and the Nino 3.4 index are correlated against the HadCRUT3v global average temperature data.

This yields a curve fit model which can be used to estimate future warming by setting the Nino 3.4 and AMO indexes to zero (close to their historical averages) and estimating future changes in atmospheric concentrations for each of the infrared absorbing gases.
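The forcing equations referenced in step 2 are the IPCC’s simplified expressions (from the Third Assessment Report, after Myhre et al. 1998).  Whether the model used exactly these coefficients is an assumption on my part, but these are the standard published forms.  In Python:

import numpy as np

def overlap(M, N):
    """CH4/N2O band-overlap term from the TAR; M, N in ppb."""
    return 0.47 * np.log(1 + 2.01e-5*(M*N)**0.75 + 5.31e-15*M*(M*N)**1.52)

def f_co2(C, C0):                  # C in ppm
    return 5.35 * np.log(C / C0)

def f_ch4(M, M0, N0):              # ppb
    return 0.036*(np.sqrt(M) - np.sqrt(M0)) - (overlap(M, N0) - overlap(M0, N0))

def f_n2o(N, N0, M0):              # ppb
    return 0.12*(np.sqrt(N) - np.sqrt(N0)) - (overlap(M0, N) - overlap(M0, N0))

def f_cfc11(X, X0):                # ppb; linear in concentration, not logarithmic
    return 0.25 * (X - X0)

def f_cfc12(X, X0):                # ppb
    return 0.32 * (X - X0)

# e.g., the forcing for CO2 rising from 280 to 385 ppm:
print(round(f_co2(385.0, 280.0), 2), "W/m^2")    # ~1.70 W/m^2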

Figure 1 Model results with temperature projection to 2060

To find the best estimate of lag in the climate (mainly from ocean heat accumulation), the model constants were calculated for different trailing averages of the total radiative forcing.  The best fit to the data (highest R^2) was for a two year trailing average of the total radiative forcing, which gave a net climate sensitivity of 0.270 (+/-0.021) C per watt/M^2 (+/-2 sigma).  All longer trailing average periods yielded somewhat lower R^2 values and produced somewhat higher estimates of climate sensitivity.  A 5-year trailing average yields a sensitivity of 0.277 (+/- 0.021) C per watt/M^2, a 10 year trailing average yields a sensitivity of 0.289 (+/- 0.022) C per watt/M^2, and a 20 year trailing average yields a sensitivity of 0.318 (+/- 0.025) C per watt/M^2, ~18% higher than the two year trailing average.  As discussed above, very long lags (e.g., 10-20+ years) appear inconsistent with recent trends in ocean heat content and average surface temperatures.
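The lag search itself is mechanical: replace the forcing series with an n-year trailing average, refit, and record R^2 and the fitted sensitivity.  A sketch, reusing the synthetic arrays from the earlier fit (with `forcing` standing in for the total forcing series):

import numpy as np

def trailing_mean(x, n):
    """n-point trailing average; early points use whatever history exists."""
    out = np.empty(len(x))
    for i in range(len(x)):
        out[i] = x[max(0, i - n + 1): i + 1].mean()
    return out

def fit_with_lag(forcing, amo, nino, temp, n):
    X = np.column_stack([trailing_mean(forcing, n), amo, nino, np.ones(len(temp))])
    coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
    resid = temp - X @ coef
    r2 = 1 - np.sum(resid**2) / np.sum((temp - temp.mean())**2)
    return r2, coef[0]         # coef[0] is the sensitivity, C per W/m^2

# for n in (2, 5, 10, 20):
#     print(n, fit_with_lag(forcing, amo, nino, temp, n))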

Oscillation in the radiative forcing curve (the green curve in Figure 1) is due to solar intensity variation over the sunspot cycle.  The assumed total variation in solar intensity at the top of the atmosphere is 1 watt per square meter (approximately the average variation measured over the last three solar cycles) for a change in sunspot number of 140.  Assuming a minimum solar intensity of 1365 watts per square meter and Earth’s albedo at 30%, the average solar intensity over the entire Earth surface at zero sunspots is (1365/4) * 0.7 = 238.875 watts per square meter, while at a sunspot number of 140, the average intensity increases to 239.05 watts per square meter, or an increase of 0.175 watt per square meter.  The expected change in radiative forcing (a “sunspot constant”) is therefore 0.175/140 = 0.00125 watt per square meter per sunspot.  When different values for this constant are tried in the model, the best fit to the data (maximum R^2) is for ~0.0012 watt/M^2 per sunspot, close to the above calculated value of 0.00125 watt/M^2 per sunspot.

Figure 2 Scatter plot of the model versus historical temperatures
Figure 3 Comparison of the model’s temperature projection under ‘Business as Usual’ with the IPCC projection of ~0.2C per decade, consistent with GCM projections.

Regional Sensitivities

Amplification of sensitivity is the ratio of the actual climate sensitivity to the sensitivity expected for a blackbody emitter.  The sensitivity from the model is 0.270 C per watt/M^2, while the expected blackbody sensitivity is 0.267 C per watt/M^2, so the amplification is 1.011.  An amplification very close to 1 suggests that the negative and positive feedbacks within the climate system are roughly balanced, and that the average surface temperature of the Earth increases or decreases approximately as a blackbody emitter would when subjected to small variations around the average solar intensity of ~239 watts/M^2 (that is, as a blackbody would vary in temperature around ~255 K).  This does not preclude a range of sensitivities within the climate system that average out to ~0.270 C per watt/M^2; sensitivity may vary with season, latitude, local geography, albedo/land use, weather patterns, and other factors.  Warming driven by WMGG’s may, and indeed should, differ significantly by region and season, so the importance of that warming will also differ regionally and temporally.

Credibility of Model Projections

Some may argue that any curve-fit model based on historical data is likely to fail in making accurate predictions, since the conditions that applied during the hindcast period may be significantly different from those in the future.  But if the curve-fit model includes all important variables, then it ought to make reasonable predictions, at least until/unless important new variables are encountered in the future.  Examples of important new climate variables are a major volcanic eruption or a significant change in ocean circulation.  The probability of encountering important new variables increases with the length of the forecast, of course.  So while a curve-fit climate model’s predictions will have considerable uncertainty far in the future (e.g., 100 years or more), forecasts over shorter periods are likely to be more accurate.

To demonstrate this, the model constants were calculated using temperature, WMGG forcings, AMO, and Nino 3.4 data for 1871 to 1971, but then applied to all the 1871 to 2008 data (Figure 4).  The model’s calculated temperatures represent a ‘forecast’ from 1972 through 2008, or 36 years.  Since the model constants came only from pre-1972 data, the model has no ‘knowledge’ of the temperature history after 1971, and the 1972 to 2008 forecast is a legitimate test of the model’s performance.  The model’s 1972 to 2008 forecast performance is reasonably good, with very similar deviations between the model and the historical temperature record in the hindcast and forecast periods.
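In code, this test amounts to fitting on the early rows only and then applying the constants everywhere.  Continuing the earlier sketch (X and temp as before, with a `years` array added to mark the rows):

import numpy as np

years = np.arange(1871, 2009)            # one row per year, matching X and temp
train = years <= 1971                    # fit on 1871-1971 only

coef, *_ = np.linalg.lstsq(X[train], temp[train], rcond=None)
forecast = X @ coef                      # constants applied to ALL years

resid_train = temp[train] - forecast[train]
resid_test  = temp[~train] - forecast[~train]
print("RMS error, hindcast (1871-1971):", np.sqrt(np.mean(resid_train**2)))
print("RMS error, forecast (1972-2008):", np.sqrt(np.mean(resid_test**2)))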

Figure 4 Model temperature forecast for 1972 through 2008, with model constants based on 1871 to 1971. The model has no “knowledge” of the temperature record after 1971.

The model fit to the temperature data in the forecast period is no worse than in the hindcast period.  The climate sensitivity calculated using only 1871 to 1971 data is similar to that calculated using the entire data set: 0.255 C per watt/M^2 versus 0.270 C per watt/M^2.  A model forecast starting in 2009 will not be perfect, but the 1972 to 2008 forecast performance suggests that it should be reasonably close to correct over the next 36+ years.

Emissions Scenarios

The model projections in Figure 1 (2009 to 2060) are based on the following assumptions:

a) The year-on-year increase in CO2 concentration in the atmosphere rises to 2.6 PPM per year by 2015 (or about 25% higher than recent rates of increase), and then remains at 2.6 PPM per year through 2060.  Atmospheric concentration reaches ~518 PPM by 2060.

b) N2O concentration increases in proportion to the increase in CO2.

c) CFC’s decrease by 0.25% per year.  The actual rate of decline ought to be faster than this, but large increases in releases of short-lived refrigerants like R-134a and non-regulated fluorinated compounds may offset a large portion of the decline in regulated CFC’s.

d) The concentration of methane, which has been constant for the last ~7 years at ~1,800 parts per billion, increases by 10 PPB per year, reaching ~2,370 PPB by 2060.

e) Tropospheric ozone (which forms in part from volatile organic compounds, VOC’s) increases in proportion to increases in atmospheric CO2.

The above represent pretty much a “business as usual” scenario, with fossil fuel consumption in 2060 more than 70% higher than in 2008, and with no new controls placed on other WMGG’s.  The projected temperature increase from 2008 to 2060 is ~0.68 C, or 0.131 C per decade.  This assumes of course that WMGG’s are responsible for all (or nearly all) the warming since 1871; if a significant amount of the warming since 1871 had other causes, then future warming driven by WMGG’s will be less.
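The CO2 trajectory in assumption (a) is easy to reproduce.  In the sketch below, the 2008 starting concentration (~385 PPM) and the shape of the ramp up to 2.6 PPM per year are my assumptions; the text specifies only the endpoint.  The last two lines apply the model sensitivity to the simplified CO2 forcing, giving the CO2-only part of the projected warming (the balance of the ~0.68 C total comes from the other gases in the scenario):

import numpy as np

years = np.arange(2008, 2061)
co2 = np.empty(len(years))
co2[0] = 385.0                                       # assumed 2008 value, ppm
for i, yr in enumerate(years[1:], start=1):
    step = 2.0 + 0.6 * min(1.0, (yr - 2008) / 7.0)   # ramp to 2.6 ppm/yr by 2015
    co2[i] = co2[i-1] + step
print("CO2 in 2060: ~%.0f ppm" % co2[-1])            # ~518 ppm, as in the text

dT = 0.27 * 5.35 * np.log(co2[-1] / co2[0])          # sensitivity x CO2 forcing
print("CO2-only warming, 2008-2060: ~%.2f C" % dT)   # ~0.43 C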

Separation of the different contributions to radiative forcing allows projections of future average temperatures under different scenarios for reductions in the growth of fossil fuel usage, with separate efforts to control emissions of methane, N2O, and VOC’s (leading to tropospheric ozone).

Figure 5 Reduced warming via controls on non-CO2 emissions and gradually lower CO2 emissions growth.

One such scenario can be called the “Efficient Controls” scenario.  The year-on-year increase in CO2 in the atmosphere rises to 2.6 PPM by 2014, then declines by 0.5% per year starting in 2015 (a 2.6 PPM increase in 2014, 2.587 PPM in 2015, 2.574 PPM in 2016, etc.).  Methane concentrations are held at current levels via controls installed on known sources, CFC concentration falls by 0.5% per year due to new restrictions on currently non-regulated compounds, and N2O and tropospheric ozone increases are proportional to the (somewhat lower) CO2 increases.  These are far from small changes, but they probably could be achieved without great economic cost by shifting most electric power production to nuclear (or non-fossil alternatives where economically viable), while simultaneously taxing CO2 emissions worldwide at an initially low but gradually increasing rate to promote worldwide improvements in energy efficiency.  Under these conditions, the predicted temperature anomaly in 2060 is 0.91 degree (versus 0.34 degree in 2008), a rise of 0.109 degree per decade.  Atmospheric CO2 would reach ~507 PPM by 2060, and CO2 emissions in 2060 would be about 50% above 2008 emissions.  By comparison, the “business as usual” case produces a projected increase of 0.131 C per decade through 2060, with atmospheric CO2 reaching ~519 PPM by 2060.  So at relatively low cost, warming through 2060 could be reduced by a little over 0.11 C compared to business as usual.

A “Draconian Controls” scenario, with new controls on fluorinated compounds, methane and VOC’s, and with the rate of atmospheric CO2 increase declining by 2% each year, starting in 2015, shows the expected results of a very aggressive worldwide program to control CO2 emissions.  The temperature anomaly in 2060 is projected at 0.8 C, for a rate of temperature rise through 2060 of 0.088 degree per decade, or ~0.11 C lower temperature in 2060 than for the “Efficient Controls” scenario.  Under this scenario, the concentration of CO2 in the atmosphere would reach ~480 PPM by 2060, but would rise only ~25 PPM more between 2060 and 2100.  Total CO2 emissions in 2060 would be ~15% above 2008 emissions, but would have to decline to the 2008 level by 2100.  Whether the potentially large economic costs of draconian emissions reductions are justified by a ~0.11C temperature reduction in 2060 is a political question that should be carefully weighed.
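The three CO2 paths differ only in how fast the annual increment decays after 2015 (0% for business as usual, 0.5% for “Efficient Controls”, 2% for “Draconian Controls”), so they can be compared in a few lines.  The ~399 PPM starting value assumed for 2014 is mine; small differences from the figures in the text come from that choice:

def co2_in_2060(decay, start=399.0, step=2.6):
    """CO2 in 2060 when the annual increment shrinks by `decay` per year after 2014."""
    c = start
    for _ in range(2015, 2061):
        step *= (1.0 - decay)    # shrink the annual increment
        c += step
    return c

for name, d in [("business as usual ", 0.0),
                ("efficient controls", 0.005),
                ("draconian controls", 0.02)]:
    print(name, "~%.0f ppm" % co2_in_2060(d))
# -> roughly 519, 506, and 476 ppm; close to the ~519, ~507, and ~480 PPM
#    figures in the text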

Figure 6 Draconian emissions controls may reduce average temperature in 2060 by ~0.21C compared to business as usual.

Conclusions

The model shows that the climate sensitivity to radiative forcing is approximately 0.27 degree per watt/M^2, based on the assumption that radiative forcing from WMGG’s has caused all or nearly all the measured temperature increase since ~1871.  This corresponds to a response of ~1 C for a doubling of CO2 (0.27 C per watt/M^2 times the ~3.7 watts/M^2 forcing of doubled CO2), with other WMGG’s remaining constant.  Much higher climate sensitivities (e.g., 0.5 to >1.0 C per watt/M^2, or 1.85 C to >3.71 C for a doubling of CO2) appear to be inconsistent with the historical record of temperature and measured increases in WMGG’s.

Assuming no significant changes in the growth pattern of fossil fuels, and no additional controls on other WMGG’s, the average temperature in 2060 may reach ~0.68C higher than the 2008 average.  Modest steps to control non-CO2 emissions and gradually reduce the rate of increase in the concentration of CO2 in the atmosphere could yield a reduction in WMGG driven warming between 2008 and 2060 of ~15% compared to no action.  A rapid reduction in the rate of growth of atmospheric CO2 would be required to reduce WMGG driven warming between 2008 and 2060 by ~30% compared to no action.

Comments
Richard
August 9, 2009 3:14 am

Steve Fitzpatrick: The model shows that the climate sensitivity to radiative forcing is approximately 0.27 degree per watt/M^2, BASED ON THE ASSUMPTION THAT RADIATIVE FORCING FROM WMGG’S HAS CAUSED ALL OR NEARLY ALL THE MEASURED TEMPERATURE INCREASE SINCE ~1871.
The question I would like to ask is – what would be the climate sensitivity if the radiative forcing from WMGG’s caused only 50% of the warming? As this is the approximate position of the IPCC? What would it be if it caused only 25% of the warming and if it caused only 10% of the warming?
In each of the above scenarios what would be the average temperature in 2060 compared to the 2008 average? (And I presume this assumes that the average solar intensity and the Earth’s average albedo do not change?)

Barry R.
August 9, 2009 3:18 am

A couple of things common to most models:
1) This all assumes that the concentration of greenhouse gases is the same over every part of the planet. Once you state that assumption it becomes obvious that it isn’t going to be completely true since there are both sources and sinks for the emissions, especially for CO2. How big are the variations? I don’t know. It could be that they are insignificant compared to the overall ratio. However, I suspect that you will find that concentrations of any man-made greenhouse gas will be highest in the likely source areas–Northern hemisphere over land primarily, and lower in probable sink areas like forested tropical areas and over tropical oceans. What does that do to the overall impact? It would be interesting to model that.
2) This all assumes that the measured temperatures are reasonably accurate representations of overall temperatures for the planet. There are a lot of reasons to doubt that. Temperatures tend to get measured in areas where it’s convenient for people to measure them. That often means in cities or near airports. However, you have to be careful about measurements from rural areas too. How many of the sensors are close to hog or cattle confinement operations? Both are major producers of Methane, CO2, Ammonia, and Hydrogen Sulfide. If any of those gases have an impact on temperature they would have their greatest impact near the source. Anthony’s surface station project should probably look at how close rural temperature sensors are to confinement operations.
3) This all assumes that there will be no biological response to increased CO2. It’s more likely that after a lag of a few decades there will be biological shifts that favor plants (especially microscopic ones) capable of exploiting higher CO2 levels, reducing or at least partially balancing increased emissions.

August 9, 2009 3:19 am

FWIW, use of the outdated Lean 2000 paper should be avoided, as Leif stated. Also a minor typo: “described at WUWT on August 23, 2009”

M White
August 9, 2009 3:22 am

“World temperatures are set to rise much faster than expected as a result of climate change over the next ten years, according to meteorologists.”
This is how the people in power see the earth’s climate sensitivity. Unfortunately they seem to share a lot in common with an end of the world cult; the day after the day of judgement, another future date will be picked.
http://www.telegraph.co.uk/earth/earthnews/5925523/World-temperatures-set-for-record-highs.html
“a new study by Nasa said the warming effect of greenhouse gases has been masked since 1998 because of a downward phase in the cycles of the sun that means there is less incoming sunlight and the El Nino weather pattern causing a cooling period over the Pacific.”
“The new study adds the effect of El Nino, which is entering a new warm phase and of the impact of the solar cycle.”
“Gareth Jones, a climate research scientist at the Met Office, said the effect of global warming is unlikely to be masked by shorter term weather patterns in the future. He said that 50 per cent of the 10 years after 2011 will be warmer than 1998. After that any year cooler than 1998 will be considered unusual.”

tallbloke
August 9, 2009 3:40 am

Leif Svalgaard (23:04:09) :
For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter
Using your own numbers this gives an increase of 2*(239/1366)*0.267 = 0.09 degrees, hardly a “significant fraction of the observed warming”.
Blimey, Groundhog day again.

Allan M R MacRae
August 9, 2009 3:43 am

So according to the above, the Worst Case Sensitivity for a doubling of CO2 is ~1.0 degree C.
Other analyses, and current cooling, suggest a lower figure, between 0.0 and 0.3 C; either way, so low as to be inconsequential.
I accept 0.0 to 0.3C.
I accept inconsequential.

Curiousgeorge
August 9, 2009 3:53 am

What’s really important with all this is not the absolute precision of any estimate of warming or cooling or sea levels, etc.
What is important is whether the majority of people in major countries (and therefore their political leadership) believe one position or another. Those beliefs will drive political, economic, demographic and military decisions and actions regardless of any scientific pronouncements that contradict those beliefs. Some countries may decide that their survival hinges on preparations for repelling a perceived political, economic, demographic or military threat directly related to a belief in global warming and institute policies and actions that exacerbate tensions that already exist either internally or externally. Those actions inevitably result in other countries developing countermeasures to the above to ensure their own survival and prosperity. It quickly becomes a sort of arms race, in which the actual behavior of the climate over time is irrelevant. We have seen the beginnings of this in the recent disputes over Arctic oil and gas resources, as well as other natural resources around the world.
Perception is everything, and if the future is perceived as a zero sum game then we are all in a lot more trouble than anything that could be brought about by a few degrees of climate change.

tallbloke
August 9, 2009 3:57 am

DennisA (02:59:26) :
In 2000, Dr Robert E Stevenson, (now deceased), Oceanographer and one-time Secretary General of the International Association for the Physical Sciences of the Oceans (IAPSO), wrote an assessment of Levitus et al (2000) on global heat budget.
http://www.21stcenturysciencetech.com/articles/ocean.html
It is clear that solar-related variations in mixed-layer temperatures penetrate to between 80 to 160 meters, the average depth of the main pycnocline (density discontinuity) in the global ocean. Below these depths, temperature fluctuations become uncorrelated with solar signals, deeper penetration being restrained by the stratified barrier of the pycnocline.
Consequently, anomalous heat associated with changing solar irradiance is stored in the upper 100 meters.

While I agree with Stevenson on his analysis of the non-heating of the ocean by greenhouse gases, I take issue with him on this part.
My calculations (confirmed by Leif Svalgaard) show that to account for the thermal expansion component of sea level rise between 1993 and 2003, the oceans must have retained around 14×10^23J from the sun over and above the energy they receive and retransmit. This is equivalent to a 4W/m^2 forcing and must be solar in origin, plus cloud modulation. There was less cloud in the tropics during the period in question according to ISCCP data.
The global rise in SST over the same period was around 0.3C. The falloff of temperature to the thermocline is approximately linear below the mixed surface layer and this is consistent with an average increase in the temperature of 0.15C for the top 700m of ocean.
I asked James Annan, an oceanologist, how warmth got mixed down to those depths far below the mixing in the top layer performed by waves. He replied that tidal action, and strongly subducting currents at high latitudes taking down warm water arriving from the tropics, explained the deeper mixing.
This is consistent with extra warmth at deeper levels apparently unconnected with solar forcing, but still leaves room for another possibility: that the amount of heat coming through the thin seabed from the earth’s interior may vary over time due to changes in the subcrustal currents within the earth’s mantle. These appear to be connected to variations in Length of Day.

Chris Wright
August 9, 2009 4:07 am

Steve Fitzpatrick:
“There has in fact been significant global warming since the start of the industrial revolution (beginning a little before 1800), concurrent with significant increases in WMGG emissions from human activities.”
This makes no sense at all. The consensus appears to admit that all warming up to about 1970 was natural. There simply wasn’t enough CO2 to have any effect. In fact the Hadley two-graphs ‘proof’ depends on this.
I’m sorry, but anyone who publishes a graph showing the global temperature over the next fifty years is probably deluded. No one knows what the climate will be in 2060. It may be warmer. But it may also be colder. About the only thing we can agree on is that the IPCC projections are wildly exaggerated, probably for political reasons.
Probably the best long-term climate records are provided by the ice cores. They appear to show that CO2 is an effect and not a cause. And over hundreds of millions of years there is essentially no correlation between CO2 and temperature. This strongly suggests that the effect of CO2 on climate is negligible. We were fortunate to have enjoyed a modest warming during the 20th century, but it may not last.
Although I’m sure that the author is correct when he says the IPCC projections are too high, he may have fallen into the same trap as many other modellers. It’s pretty easy to predict what has already happened. The trick is to accurately forecast the future. Due to the chaotic nature of weather and climate, it’s probably impossible to predict beyond a few weeks. The ridiculous Met Office quarterly forecasts are a good example of this.
It will be interesting to see how well that straight green line predicts the global temperature over the coming years. My guess is that it’s probably wrong. Sorry.
Also, congratulations to WUWT for publishing this essentially pro-AGW article. It shows a good sense of balance, something lacking in some other web sites we could name!
Chris

Charlie
August 9, 2009 4:29 am

Some editing comments —
Fig 4 is a duplicate of Fig 3. The other figures are all moved down one. The real Figure 6 is missing.

Bill Illis
August 9, 2009 4:35 am

This is a very good paper.
One of the most important insights is that under the Stefan-Boltzmann equations, the very equations that underpin most of climate science, the surface temperature should increase only 0.27C per watt/metre^2 increase in forcing. Stefan-Boltzmann is actually a logarithmic equation (like the ln(GHG) versus temperature is a logarithmic equation).
Global warming theory for the long-term climate sensitivity uses 0.75C per watt/metre^2, but the point we are on in Stefan-Boltzmann will only result in 0.27C per watt/metre^2. It is also interesting that the climate models use 0.27C to 0.32C per watt/metre^2 for hindcasts and short-term predictions, but over time the number is increased.
I like adding in the other GHGs (which I didn’t do); the expected lags in the climate system are not occurring, and I think the conclusions presented by Steve are quite accurate.

August 9, 2009 4:44 am

In the report I read: “With 1 watt/M^2 more input, the required blackbody emission temperature increases to 255.069, so the expected climate sensitivity is (255.069 – 254.802) = 0.267 degree increase for one watt per square meter of added heat.”
I am not sure quite how to interpret this. It seems to assume that one can solve all there is to solve about greenhouse gases by assuming that one need only consider radiation in the way energy moves through the atmosphere. This ignores the effects of conduction, convection and the latent heat of water. I sort of find this simplification, if that is what it is, to be not very believable.

pinkisbrain
August 9, 2009 4:54 am

ok, and what was the co2 concentration in 1850?
ipcc 280, others 320 to 345ppm….big differences!
nobody knows, how long human co2 emissions will last in the atmosphere. 10y 50y, 200y?
why is the CH4 curve flat for some years now?

August 9, 2009 5:07 am

M White (03:22:28) :
Gareth Jones also said:
“The amount of warming we expect from human impacts is so huge that any natural phenomenon in the future is unlikely to counteract it in the long term”.
We’ll see…

Allen63
August 9, 2009 5:24 am

A very thoughtful article. It makes a plausible case. My comments should not be taken as negative towards the author. Rather, these are thoughts I often have regarding any models I have seen.
As someone above mentioned, the forcing phenomena/mechanisms proposed include cyclic terms and one that increases monotonically, and the fit is to a temperature series that shows an increase; thus, the predicted temperature must increase. This and other models seem to say — it must be CO2 because I can’t think of anything else it could be, given our lack of understanding.
What I question with any model is:
Start point — By chance, good temperature records and good CO2 records begin at a local minimum — so only net increase is shown. Better if a model could go back a couple thousand years and work its way up to the present. In the process showing how it would account for historical heating and cooling in the absence of anthropogenic CO2.
The basic historical temperature data itself — The accuracy of historical temperature data is suspect due to many factors, and Hadley data is manipulated. Can one actually believe the Hadley (or GISS, etc.) anomalies are accurate and precise representations of the actual historical global temperature? I honestly don’t think that has been definitively shown.
And, lastly (for today at least), the global land and sea temperatures are measured at sites and in ways that may not accurately indicate the heat accumulation (or lack thereof) over the entire earth land surfaces to a depth and throughout the ocean depths. But, AGW is all about global heat accumulation — for which global temperature is only a proxy.

Tom in Florida
August 9, 2009 5:39 am

Has anyone ever tried to model a future where CO2 begins to decrease at the current rate of increase? What would happen if we took this steady decrease down to 0? If everything else remained the same, at what level of CO2 would the “tipping point” of unstoppable cooling be achieved? Would that be a reverse demonstration of whether CO2 drives climate or not? Would that be of any use?

August 9, 2009 5:46 am

“congratulations to WUWT for publishing this essentially pro-AGW article. It shows a good sense of balance, something lacking in some other web sites we could name!
Chris”
Hear, Hear!

ecarreras
August 9, 2009 6:18 am

Is the increase of CO2 due to the burning of “fossil” fuels? When I look at the CO2 data over the last 10 years, the increase in atmospheric CO2 concentrations looks almost linear. When I look at the last 10 years of energy consumption (excluding nuclear, solar, hydroelectric, wind and geothermal), the increases in consumption are anything but linear. How correlated are atmospheric CO2 increases with changes in energy consumption? Can anyone point me to any papers on this subject?

Terry
August 9, 2009 6:27 am

Please check the figures; I believe a number of them are incorrect. Fig 4 appears to be a repeat of Fig 3, and that may be what is throwing the others off.

August 9, 2009 6:36 am

Leif Svalgaard (22:57:52) :
“For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the little ice age and the late 20th century, which could account for a significant fraction of the observed warming”
She doesn’t believe that anymore, neither does anybody else [except some climatologists].

How did you do that? You answered my question before I’d asked it.
John Finn (02:46:54) :
2. Leif Svalgaard may be the best person to answer this question. Does Judith Lean still stand by her TSI reconstruction. I know there are a number of other reconstructions, including one by LS himself, which show much less variability. Has a consensus (I tried to avoid this word, but …) been established.

Jens
August 9, 2009 6:38 am

Leif Svalgaard (23:04:09) :
“For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter. Using your own numbers this gives an increase of 2*(239/1366)*0.267 = 0.09 degrees, hardly a “significant fraction of the observed warming”.”
Please forgive a comment from someone with absolutely no credentials in this field whatsoever, but haven’t you just plugged the 0.267 answer back into the calculation? I took the 0.267 to be the sensitivity of temperature in C per W per m2. For a 2W increase, the temperature would rise 2*0.267 = 0.534C . Or am I doing something fundamentally daft (wouldn’t be the first time) ?
I do disagree that a rise of 0.09C is ‘hardly a significant fraction’. If my “one minute Google” research of a 1C rise since the Little Ice Age is correct, then the sun has caused 9% of the temperature rise since then. I call that significant, from my point of view.
Thanks for your attention,
Jens.

eric
August 9, 2009 6:40 am

Steve Fitzpatrick,
One of the key problems with an analysis of climate sensitivity from temperature data, such as you have performed, is the estimation of the lag time for the ocean surfaces to heat up. The use of the solar cycle versus temperature data is problematic. It is OK if the system consists of only one heat reservoir, so that a single time constant is appropriate. The problem is that the ocean has a shallow and a deep reservoir with different time constants, and the easily observed smaller reservoir, which has a 2 year time constant, will give you too small an answer for the climate sensitivity.
A more complex model is required to get a correct answer.

Kevin Kilty
August 9, 2009 6:59 am

Barry R. (03:18:11) :
A couple of things common to most models:
1) This all assumes that the concentration of greenhouse gases is the same over every part of the planet. Once you state that assumption it becomes obvious that it isn’t going to be completely true since there are both sources and sinks for the emissions, especially for CO2. How big are the variations? I don’t know. It could be that they are insignificant compared to the overall ratio. However, I suspect that you will find that concentrations of any man-made greenhouse gas will be highest in the likely source areas–Northern hemisphere over land primarily, and lower in probable sink areas like forested tropical areas and over tropical oceans. What does that do to the overall impact? It would be interesting to model that.

NASA mid-troposphere measurements (worldwide) in July are roughly 375 ppm +/- 10 ppm, with the highest values over western North America, the western Atlantic basin adjacent to North America, and the Middle East. So exactly the regions you expected.
Chris Wright (04:07:24) :
Also, congratulations to WUWT for publishing this essentially pro-AGW article. It shows a good sense of balance, something lacking in some other web sites we could name!

Congratulations to this site for welcoming all sorts of opinions and thinking, but I don’t see this as necessarily pro AGW. The real issue it seems to me is not whether CO2 or temperature has increased over the past 50 years…we can see the measurements ourselves, but rather, what these observations mean for the future. In this case climate sensitivity is very important, and people who are quite alarmed by the situation see a sensitivity above 0.5C/(W/m^2), while this result places the value at half that. Much less alarming.
By the way, we can arrive at roughly the same value of sensitivity in three more different ways. Set W=e(sigma)T^4 (Stefan Equation), differentiate T with respect to W, and the result is the sensitivity. If one plugs in e=0.98, sigma = 5.67×10^(-8), and T as 288K, then one gets 0.25. If one takes the estimated cooling of Earth from Pinatubo and the estimated insolation change one gets about 0.5K/2.7(W/m^2)=0.19. If one takes the mean Earth temperature change over the glacial cycle (10C or so), and divides by the change in insolation (50W/m^2), then one finds 0.2 as the implied sensitivity. Of course these are simplified “models” and would be ridiculed in debate…why use a simple idea when a complicated one will do as well? (I jest).
The issue is more complex, of course, with people worrying about the effect of all the feedback influences. These feedback signals have all sorts of varying time scales, and perhaps regional influences (which makes me wonder about the usefulness of mean Earth temperature in the first place), and this is what makes the issue interesting.
Since longwave radiation depends on temperature to the fourth power, mean temperature in our radiation equations should more meaningfully be the fourth root of the mean of temperature to the fourth power. Does anyone know if someone has made such a calculation?

John G
August 9, 2009 7:04 am

I read this as an attempt at assuming the AGWarmers are right, the warming in the last century and one half is due to greenhouse gases, so make simplifying assumptions that won’t distort that hypothesis too much and see how bad it gets in fifty years. Further check the numbers under the assumptions we make some civilization bending efforts to reduce greenhouse gases, and under the assumption we make some civilization ending efforts to reduce greenhouse gases and see how we do. The answer seems to be if we do nothing it gets a little warmer (~1C), if we bust our collective butts it gets a little less warm (~.9C), and if we destroy civilization it will be less warm still (~.8C). He leaves it up to the politicians to decide which course of action makes the most sense . . . in which case we’ll likely take the civilization ending route.

August 9, 2009 7:32 am

Minor typo alert, beginning of third paragraph. “his” should be “this”:
Many people, including his author, do not believe the large temperature increases (up to 5+ C for doubling of CO2) projected by GCM’s are credible.