Guest Post By Steve Fitzpatrick
Introduction
Projections of climate warming from general circulation models (GCM’s) are based on a high sensitivity of the Earth’s climate to radiative forcing from well mixed greenhouse gases (WMGG’s). This high sensitivity depends mainly on three assumptions:
1. Slow heat accumulation in the world’s oceans delays the appearance of the full effect of greenhouse forcing by many (eg. >20) years.
2. Aerosols (mostly from combustion of carbon based fuels) increase the Earth’s total albedo, and have partially hidden the ‘true’ warming effect of WMGG increases. Presumably, aerosols will not increase in the future in proportion to increases in WMGG’s, so the net increase in radiative forcing will be larger for future emissions than for past emissions.
3. Radiative forcing from WMGG’s is amplified by strong positive feedbacks due to increases in atmospheric water vapor and high cirrus clouds; in the GCM’s, these positive feedbacks approximately double the expected sensitivity to radiative forcing.
However, there is doubt about each of the above three assumptions.
1. Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001. This indicates that a) ocean heat content does not lag many years behind the surface temperature, b) global average temperature and heat accumulation in the top 700 meters of ocean are closely tied, and c) the Hansen et al (4) projection in 2005 of substantial future warming ‘already in the pipeline’ is not supported by recent ocean and surface temperature measurements. While there is no doubt a very slow accumulation of heat in the deep ocean below 700 meters, this represents only a small fraction of the accumulation expected for the top 700 meters, and should have little or no immediate (century or less) effect on surface temperatures. The heat content in the top 700 meters of ocean and global average surface temperature appear closely linked. Short ocean heat lags are consistent with relatively low climate sensitivity, and preclude very high sensitivity.
2. Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings. There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future. Considering the large uncertainties in aerosol effects, it is not even clear if the net effect, including black carbon, which reduces rather than increases albedo, is significantly different from zero.
3. Amplification of radiative forcing by clouds and atmospheric humidity remains poorly defined. Climate models do not explicitly resolve the behavior of clouds, which occur at scales orders of magnitude smaller than the models’ grid resolution, but instead handle clouds using ‘parameters’ that are adjusted to approximate the expected behavior of clouds. Adjustable parameters can of course also be tuned to make a model predict whatever warming is expected or desired. Measured tropospheric warming in the tropics (the infamous ‘hot spot’), caused by increases in atmospheric water content, falls far short of the warming in this part of the atmosphere projected by most GCM’s. This casts doubt on the amplification assumed by the GCM’s due to increased water vapor.
Many people, including this author, do not believe the large temperature increases (up to 5+ C for a doubling of CO2) projected by GCM’s are credible. A new paper by Lindzen and Choi (described at WUWT on August 23, 2009) reports that the total outgoing radiation (visible plus infrared) above the tropical ocean increases when the ocean surface warms, which suggests the climate feedback (at least in these tropical ocean areas) is negative, rather than positive as the GCM’s all assume.
In spite of the many problems and doubts with GCM’s:
1) It is reasonable to expect that positive forcing, from whatever source, will increase the average temperature of the Earth’s surface.
2) Basic physics shows that increasing infrared absorbing gases in the atmosphere like CO2, methane, N2O, ozone, and chloro-fluorocarbons, inhibits the escape of infrared radiation to space, and so does provide a positive forcing.
3) There has in fact been significant global warming since the start of the industrial revolution (beginning a little before 1800), concurrent with significant increases in WMGG emissions from human activities.
There really should be an increase in average surface temperature due to forcing from increases in infrared absorbing gases. This is not to say that there are no other plausible explanations for some or even most of the increase in global temperatures over the past 100+ years. For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the Little Ice Age and the late 20th century, which could account for a significant fraction of the observed warming. But regardless of other possible contributions, it is difficult to dispute that greenhouse gases should lead to increased global average temperatures. What matters is not whether the Earth will warm from increases in WMGG’s, but how much it will warm and over what period. The uncertainties and dubious assumptions in the GCM’s make them not terribly helpful in making reasonable projections of potential warming, even under the worst case assumption that WMGG’s are the principal cause of warming.
Climate Sensitivity
If we knew the true climate sensitivity of the Earth (expressed as degrees increase per watt/square meter forcing) and we knew the true radiative forcing due to WMGG’s, then we could directly calculate the expected temperature rise for any assumed increases in WMGG’s. Fortunately, the radiative forcing effects for WMGG’s are pretty accurately known, and these can be used in evaluating climate sensitivity. An approximate value for climate sensitivity in the absence of any feedbacks, positive or negative, can be estimated from the change in blackbody emission temperature that is required to balance a 1 watt per square meter increase in heat input, using the Stefan-Boltzmann law. Assuming solar intensity is 1366 watts/M^2, and assuming the Earth’s average albedo is ~0.3, the net solar intensity is ~239 watts/M^2, requiring a blackbody temperature of 254.802 K to balance incoming heat. With 1 watt/M^2 more input, the required blackbody emission temperature increases to 255.069, so the expected climate sensitivity is (255.069 – 254.802) = 0.267 degree increase for one watt per square meter of added heat.
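For readers who want to check this arithmetic, here is a minimal Python sketch of the same no-feedback estimate; the only inputs are the values assumed above (a solar intensity of 1366 watts/M^2 and an albedo of 0.3).

```python
# No-feedback (blackbody) sensitivity estimate, using the assumptions in the text.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_temp(absorbed_flux):
    """Emission temperature (K) needed to radiate `absorbed_flux` W/m^2 to space."""
    return (absorbed_flux / SIGMA) ** 0.25

tsi, albedo = 1366.0, 0.3
absorbed = tsi / 4.0 * (1.0 - albedo)   # ~239 W/m^2 averaged over the sphere
t0 = blackbody_temp(absorbed)           # ~254.8 K
t1 = blackbody_temp(absorbed + 1.0)     # ~255.07 K
print(f"{t1 - t0:.3f} K per W/m^2")     # ~0.266, essentially the 0.267 quoted above
```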
But solar intensity and the blackbody emission temperature of the earth both change with latitude, yielding higher emission temperature and much greater heat loss near the equator than near the poles. The infrared heat loss to space goes as the fourth power of the emission temperature, so the net climate sensitivity will depend on the T^4 weighted contributions from all areas of the Earth. Feedbacks within the climate system, both positive and negative, including different amounts and types of clouds, water vapor, changes in albedo, and potentially many others, add much uncertainty.
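As a purely illustrative sketch of that T^4 weighting (the band temperatures and area fractions below are invented for the example, not measured values), the local no-feedback sensitivity falls as the emission temperature rises:

```python
# Local blackbody sensitivity dT/dF = 1/(4*sigma*T^3): warm regions need a smaller
# temperature rise to shed an extra W/m^2 than cold regions.  Band values are
# assumptions chosen only to illustrate the point.
SIGMA = 5.67e-8

bands = [  # (region, assumed emission temperature K, assumed area fraction)
    ("tropics",       265.0, 0.40),
    ("mid-latitudes", 255.0, 0.40),
    ("polar",         235.0, 0.20),
]

for name, temp, area in bands:
    local_sens = 1.0 / (4.0 * SIGMA * temp ** 3)
    print(f"{name:13s}: {local_sens:.3f} K per W/m^2 over {area:.0%} of the area")
```

The global figure therefore depends on how the forcing and the emission temperatures are distributed, which is part of the uncertainty noted above.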
Measuring Earth’s Sensitivity
The only way to accurately determine the Earth’s climate sensitivity is with data.
Bill Illis produced an outstanding guest post on WUWT November 25, 2008, where he presented the results of a simple curve-fit model of the Earth’s average surface temperature based on only three parameters: 1) the Atlantic multi-decadal oscillation index (AMO), 2) values of the Nino 3.4 ENSO index, and 3) the log of the ratio of atmospheric CO2 concentration to the starting CO2 concentration. Bill showed that the best estimate linear fit of these parameters to the global mean temperature data could account for a large majority of the observed temperature variation from 1871 to 2008. He also showed that the AMO index and the Nino 3.4 index contributed little to the overall increase in temperature during that period, but did account for much of the variation around the overall temperature trend. The overall trend correlated well with the log of the CO2 ratio. In other words, the AMO and Nino3.4 indexes could hind cast much of the observed variation around the overall trend, and that overall trend could be accurately hind cast by the log of the CO2 ratio.
There are a few implicit assumptions in Bill’s model. First, the model assumes that all historical warming can be attributed to radiative forcing. This is a worst case scenario, since other potential causes for warming are not even considered (long term solar effects, long term natural climate variability, etc.). The climate sensitivity calculated by the model would be lowered if other causes account for some of the measured warming.
Second, the model assumes the global average temperature changes linearly with radiative forcing. While this is almost certainly not correct for Earth’s climate, it is probably not a bad approximation over a relatively small range of temperatures and total forcings. That is, a change of a few watts per square meter is small compared to the average solar flux reaching the Earth, and a change of a few degrees in average temperature is small compared to Earth’s average emissive (blackbody) temperature. So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.
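A quick numerical check of that argument, using the blackbody relation and the ~239 watts/M^2 baseline from the sensitivity section (a sketch, not part of the model itself):

```python
# Compare the exact blackbody response to a linear extrapolation over +4 W/m^2.
SIGMA = 5.67e-8
BASE = 239.0  # W/m^2, the approximate net solar intensity used earlier

def bb_temp(flux):
    return (flux / SIGMA) ** 0.25

slope = (bb_temp(BASE + 0.01) - bb_temp(BASE)) / 0.01  # local slope, K per W/m^2
linear_4w = 4.0 * slope                                 # linear estimate for +4 W/m^2
exact_4w = bb_temp(BASE + 4.0) - bb_temp(BASE)          # exact change for +4 W/m^2
print(linear_4w, exact_4w)   # the two differ by well under 1%
```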
Third, the model assumes that the combined WMGG forcings can be accurately represented by a constant multiplied by the log of the ratio of CO2 to starting CO2. While this may be a reasonable approximation for some gases, like N2O and methane (at least until ~1995), it is not a good approximation for others, like chloro-fluorocarbons, which did not begin contributing significantly to radiative forcing until after 1950, and which are present in the atmosphere at such low concentration that they absorb linearly (rather than logarithmically) with concentration. In addition, chloro-fluorocarbon concentrations will decrease in the future rather than increase, since most long lived CFC’s are no longer produced (due to the Montreal Protocol), and what is already in the atmosphere is slowly degrading.
To make Bill’s model more physically accurate, I made the following changes:
1. Each of the major WMGG’s is separated and treated individually: CO2, N2O, methane, chloro-fluorocarbons, and tropospheric ozone.
2. Concentrations of each of the above gases are converted to net forcings, using the IPCC’s radiation equations for CO2, methane, N2O, and CFC’s (6), and an estimated radiative contribution from ozone increases (a short sketch of the CO2 expression follows this list).
3. The change in solar intensity with the solar cycle is included as a separate forcing, assuming that measured intensity variations for the last three solar cycles (about 1 watt per square meter variation over a base of 1365 watts per square meter) are representative of earlier solar cycles, and assuming that sunspot number can be used to estimate how solar intensity varied in the past.
4. The grand total forcing (including the solar cycle contribution), a 2-year trailing average of the AMO index, and the Nino 3.4 index are correlated against the HadCRUT3v global average temperature data.
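As a concrete example of step 2, the simplified CO2 expression can be coded directly. The sketch below uses the familiar 5.35·ln(C/C0) form for CO2 and, to illustrate the roughly linear behavior of the CFC’s, a generic linear term whose coefficient alpha is a placeholder rather than a value taken from reference (6); the methane and N2O expressions (square-root forms with an overlap correction) are omitted for brevity.

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    """Simplified CO2 radiative forcing (W/m^2): 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def cfc_forcing(x_ppb, x0_ppb, alpha):
    """CFC forcing is roughly linear in concentration; `alpha` (W/m^2 per ppb)
    stands in for the gas-specific coefficient and is not a value from (6)."""
    return alpha * (x_ppb - x0_ppb)

# Sanity check: a doubling of CO2 gives ~3.7 W/m^2, which at the 0.27 C per W/m^2
# sensitivity found below corresponds to roughly 1 C (see the Conclusions).
print(co2_forcing(560.0, 280.0))          # ~3.71 W/m^2
print(0.27 * co2_forcing(560.0, 280.0))   # ~1.0 C
```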
This yields a curve fit model which can be used to estimate future warming by setting the Nino 3.4 and AMO indexes to zero (close to their historical averages) and estimating future changes in atmospheric concentrations for each of the infrared absorbing gases.
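In code form, the curve fit amounts to an ordinary least squares regression. The sketch below assumes annual arrays temp_anomaly (HadCRUT3v), total_forcing (the summed forcings from the steps above), amo, and nino34 covering 1871 to 2008 have already been prepared; the array names and helper functions are mine, not part of the published spreadsheet.

```python
import numpy as np

def trailing_mean(x, window):
    """Trailing average over `window` years; the first window-1 points keep raw values."""
    out = np.array(x, dtype=float)
    for i in range(window - 1, len(out)):
        out[i] = np.mean(x[i - window + 1 : i + 1])
    return out

def fit_model(temp_anomaly, total_forcing, amo, nino34,
              forcing_window=2, amo_window=2):
    """Ordinary least squares fit: T = a*forcing + b*AMO + c*Nino3.4 + d."""
    y = np.asarray(temp_anomaly, dtype=float)
    X = np.column_stack([
        trailing_mean(total_forcing, forcing_window),
        trailing_mean(amo, amo_window),
        np.asarray(nino34, dtype=float),
        np.ones(len(y)),
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ coef
    r2 = 1.0 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
    return coef, r2   # coef[0] is the climate sensitivity in C per W/m^2
```

Setting the AMO and Nino 3.4 terms to zero then leaves the forced trend, which is what the projections below use.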

To find the best estimate of lag in the climate (mainly from ocean heat accumulation), the model constants were calculated for different trailing averages of the total radiative forcing. The best fit to the data (highest R^2) was for a two year trailing average of the total radiative forcing, which gave a net climate sensitivity of 0.270 (+/-0.021) C per watt/M^2 (+/-2 sigma). All longer trailing average periods yielded somewhat lower R^2 values and produced somewhat higher estimates of climate sensitivity. A 5-year trailing average yields a sensitivity of 0.277 (+/- 0.021) C per watt/M^2, a 10 year trailing average yields a sensitivity of 0.289 (+/- 0.022) C per watt/M^2, and a 20 year trailing average yields a sensitivity of 0.318 (+/- 0.025) C per watt/M^2, ~18% higher than a two year trailing average. As discussed above, very long lags (eg. 10-20+ years) appear inconsistent with recent trends in ocean heat content and average surface temperatures.
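In terms of the fit_model sketch above, the lag comparison described here is just a loop over the trailing-average window (again using those hypothetical 1871-2008 arrays):

```python
# Scan trailing-average windows for the total forcing and report the fit quality,
# reusing fit_model() and the series from the sketch above.
for window in (2, 5, 10, 20):
    coef, r2 = fit_model(temp_anomaly, total_forcing, amo, nino34,
                         forcing_window=window)
    print(f"{window:2d}-year window: sensitivity = {coef[0]:.3f} C per W/m^2, R^2 = {r2:.3f}")
```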
Oscillation in the radiative forcing curve (the green curve in Figure 1) is due to solar intensity variation over the sunspot cycle. The assumed total variation in solar intensity at the top of the atmosphere is 1 watt per square meter (approximately the average variation measured over the last three solar cycles) for a change in sunspot number of 140. Assuming a minimum solar intensity of 1365 watts per square meter and Earth’s albedo at 30%, the average solar intensity over the entire Earth surface at zero sunspots is (1365/4) * 0.7 = 238.875 watts per square meter, while at a sunspot number of 140, the average intensity increases to 239.05 watts per square meter, or an increase of 0.175 watt per square meter. The expected change in radiative forcing (a “sunspot constant”) is therefore 0.175/140 = 0.00125 watt per square meter per sunspot. When different values for this constant are tried in the model, the best fit to the data (maximum R^2) is for ~0.0012 watt/M^2 per sunspot, close to the above calculated value of 0.00125 watt/M^2 per sunspot.
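The same “sunspot constant” arithmetic in a few lines (the 1365 watts/M^2 baseline, 0.3 albedo, 1 watt/M^2 cycle amplitude, and 140-sunspot range are the assumptions stated above):

```python
# Convert the solar-cycle amplitude into a per-sunspot forcing constant.
albedo = 0.3
quiet = 1365.0 / 4.0 * (1.0 - albedo)        # 238.875 W/m^2 at zero sunspots
active = 1366.0 / 4.0 * (1.0 - albedo)       # 239.05 W/m^2 at a sunspot number of 140
sunspot_constant = (active - quiet) / 140.0  # ~0.00125 W/m^2 per sunspot
print(sunspot_constant)
```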


Regional Sensitivities
Amplification of sensitivity is the ratio of the actual climate sensitivity to the sensitivity expected for a blackbody emitter. The sensitivity from the model is 0.270 C per watt/M^2, while the expected blackbody sensitivity is 0.267 C per watt/M^2, so the amplification is 1.011. An amplification very close to 1 suggests that all the negative and positive feedbacks within the climate system are roughly balanced, and that the average surface temperature of the Earth increases or decreases approximately as would a blackbody emitter subjected to small variations around the average solar intensity of ~239 watts/M^2 (that is, as a blackbody would vary in temperature around ~255 K). This does not preclude a range of sensitivities within the climate system that average out to ~0.270 C per watt/M^2; sensitivity may vary based on season, latitude, local geography, albedo/land use, weather patterns, and other factors. The temperature increase due to WMGG’s may, and indeed should, vary significantly by region and season, so the practical importance of WMGG driven warming will also vary by region and season.
Credibility of Model Projections
Some may argue that any curve fit model based on historical data is likely to fail in making accurate predictions, since the conditions that applied during the hind cast period may be significantly different from those in the future. But if the curve fit model includes all important variables, then it ought to make reasonable predictions, at least until/unless important new variables are encountered in the future. Examples of important new climate variables are a major volcanic eruption or a significant change in ocean circulation. The probability of encountering important new variables increases with the length of the forecast, of course. So while a curve-fit climate model’s predictions will have considerable uncertainty far in the future (eg. 100 years or more), forecasts over shorter periods are likely to be more accurate.
To demonstrate this, the model constants were calculated using temperature, WMGG forcings, AMO, and Nino3.4 data for 1871 to 1971, but then applied to all the 1871 to 2008 data (Figure 4). The model’s calculated temperatures represent a ‘forecast’ from 1972 through 2008, or 36 years. Since the model constants came only from pre-1972 data, the model has no ‘knowledge’ of the temperature history after 1971, and the 1972 to 2008 forecast is a legitimate test of the model’s performance. The model’s 1972 to 2008 forecast performance is reasonably good, with very similar deviations between the model and the historical temperature record in the hind cast and forecast periods.

The model fit to the temperature data in the forecast period is no worse than in the hind cast period. The climate sensitivity calculated using only 1871 to 1971 data is similar to that calculated using the entire data set: 0.255 C per watt/M^2 versus 0.270 C per watt/M^2. A model forecast starting in 2009 will not be perfect, but the 1972 to 2008 forecast performance suggests that it should be reasonably close to correct over the next 36+ years.
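A sketch of that hold-out test, again reusing the fit_model and trailing_mean helpers and the hypothetical 1871-2008 arrays from the earlier sketch (year is the matching array of calendar years):

```python
import numpy as np

# Fit on 1871-1971 only, then apply the frozen coefficients to the whole record;
# the 1972-2008 portion is then a true out-of-sample forecast.
train = year <= 1971
coef, _ = fit_model(temp_anomaly[train], total_forcing[train],
                    amo[train], nino34[train])

X_all = np.column_stack([
    trailing_mean(total_forcing, 2),
    trailing_mean(amo, 2),
    np.asarray(nino34, dtype=float),
    np.ones(len(temp_anomaly)),
])
forecast = X_all @ coef
resid = np.asarray(temp_anomaly)[~train] - forecast[~train]
print(f"1972-2008 RMS forecast error: {np.sqrt(np.mean(resid ** 2)):.3f} C")
```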
Emissions Scenarios
The model projections in Figure 1 (2009 to 2060) are based on the following assumptions:
a) The year on year increase in CO2 concentration in the atmosphere rises to 2.6 PPM per year by 2015 (or about 25% higher than recent rates of increase), and then remains at 2.6 PPM per year through 2060. Atmospheric concentration reaches ~518 PPM by 2060.
b) N2O concentration increases in proportion to the increase in CO2.
c) CFC’s decrease by 0.25% per year. The actual rate of decline ought to be faster than this, but large increases in releases of short-lived refrigerants like R-134a and non-regulated fluorinated compounds may offset a large portion of the decline in regulated CFC’s.
d) The concentration of methane, which has been constant for the last ~7 years at ~1,800 parts per billion, increases by 10 PPB per year, reaching ~2,370 PPB by 2060.
e) Tropospheric ozone (which forms in part from volatile organic compounds, VOC’s) increases in proportion to increases in atmospheric CO2.
The above represent pretty much a “business as usual” scenario, with fossil fuel consumption in 2060 more than 70% higher than in 2008, and with no new controls placed on other WMGG’s. The projected temperature increase from 2008 to 2060 is 0.6834 C, or 0.131 C per decade. This assumes of course that WMGG’s are responsible for all (or nearly all) the warming since 1871; if a significant amount of the warming since 1871 had other causes, then future warming driven by WMGG’s will be less.
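For reference, the “business as usual” CO2 path is simple bookkeeping. The sketch below assumes a 2008 concentration of ~385.6 PPM and a ~2.1 PPM/year increment in 2008 (round numbers of my own, not values taken from the spreadsheet), ramps the increment to 2.6 PPM/year by 2015, and then holds it constant:

```python
def co2_path(start_year=2008, end_year=2060, start_ppm=385.6,
             start_rate=2.1, plateau_rate=2.6, plateau_year=2015):
    """CO2 concentration by year: the yearly increment ramps linearly to
    `plateau_rate` by `plateau_year` and then stays constant."""
    ppm = {start_year: start_ppm}
    for yr in range(start_year + 1, end_year + 1):
        if yr >= plateau_year:
            rate = plateau_rate
        else:  # linear ramp from the 2008 rate up to the plateau rate
            frac = (yr - start_year) / (plateau_year - start_year)
            rate = start_rate + frac * (plateau_rate - start_rate)
        ppm[yr] = ppm[yr - 1] + rate
    return ppm

print(round(co2_path()[2060]))   # ~519 PPM, close to the ~518 PPM quoted above
```

The control scenarios described below differ only in what happens to the yearly increment after 2014.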
Separation of the different contributions to radiative forcing allows projections of future average temperatures under different scenarios for reductions in the growth of fossil fuel usage, with separate efforts to control emissions of methane, N2O, and VOC’s (leading to tropospheric ozone).

One such scenario can be called the “Efficient Controls” scenario. The year on year increase in CO2 in the atmosphere rises to 2.6 PPM by 2014, and then declines starting in 2015 by 0.5% per year (that is, 2.6 PPM increase in 2014, 2.587 PPM increase in 2015, 2.574 PPM increase in 2016, etc.), methane concentrations are maintained at current levels via controls installed on known sources, CFC concentration falls by 0.5% per year due to new restrictions on currently non-regulated compounds, and N2O and tropospheric ozone increases are proportional to the (somewhat lower) CO2 increases. These are far from small changes, but probably could be achieved without great economic cost by shifting most electric power production to nuclear (or non-fossil alternatives where economically viable), and simultaneously taxing CO2 emissions worldwide at an initially low but gradually increasing rate to promote worldwide improvements in energy efficiency. Under these conditions, the predicted temperature anomaly in 2060 is 0.91 degree (versus 0.34 degree in 2008), or a rise of 0.109 degree per decade. Atmospheric CO2 would reach ~507 PPM by 2060, and CO2 emissions in 2060 would be about 50% above 2008 emissions. By comparison, the “business as usual” case produces a projected increase of 0.131 C per decade through 2060, and atmospheric CO2 reaches ~519 PPM by 2060. So at (relatively) low cost, warming through 2060 could be reduced by a little over 0.11 C compared to business as usual.
A “Draconian Controls” scenario, with new controls on fluorinated compounds, methane and VOC’s, and with the rate of atmospheric CO2 increase declining by 2% each year, starting in 2015, shows the expected results of a very aggressive worldwide program to control CO2 emissions. The temperature anomaly in 2060 is projected at 0.8 C, for a rate of temperature rise through 2060 of 0.088 degree per decade, or ~0.11 C lower temperature in 2060 than for the “Efficient Controls” scenario. Under this scenario, the concentration of CO2 in the atmosphere would reach ~480 PPM by 2060, but would rise only ~25 PPM more between 2060 and 2100. Total CO2 emissions in 2060 would be ~15% above 2008 emissions, but would have to decline to the 2008 level by 2100. Whether the potentially large economic costs of draconian emissions reductions are justified by a ~0.11C temperature reduction in 2060 is a political question that should be carefully weighed.
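The same bookkeeping for the two control scenarios, with the yearly increment reaching 2.6 PPM in 2014 and then declining by 0.5% or 2% per year; the 2008 starting value and the 2009-2013 ramp are the same assumptions carried over from the earlier sketch.

```python
def controlled_co2(decline_per_year, start_ppm=385.6,
                   ramp=(2.18, 2.26, 2.34, 2.43, 2.51)):
    """2060 CO2 concentration when the yearly increment peaks at 2.6 ppm in 2014
    and then declines by `decline_per_year` (as a fraction) from 2015 onward."""
    ppm = start_ppm + sum(ramp) + 2.6   # 2009-2013 ramp, then the 2.6 ppm rise in 2014
    rate = 2.6
    for _ in range(2015, 2061):
        rate *= (1.0 - decline_per_year)
        ppm += rate
    return ppm

print(round(controlled_co2(0.005)))  # ~506 PPM: the "Efficient Controls" ~507 PPM
print(round(controlled_co2(0.02)))   # ~477 PPM: the "Draconian Controls" ~480 PPM
```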

Conclusions
The model shows that the climate sensitivity to radiative forcing is approximately 0.27 degree per watt/M^2, based on the assumption that radiative forcing from WMGG’s has caused all or nearly all the measured temperature increase since ~1871. This corresponds to a response of ~1C for a doubling of CO2 (with other WMGG’s remaining constant). Much higher climate sensitivities (eg. 0.5 to >1.0 C per watt/M^2, or 1.85 C to >3.71 C for a doubling of CO2) appear to be inconsistent with the historical record of temperature and measured increases in WMGG’s.
Assuming no significant changes in the growth pattern of fossil fuels, and no additional controls on other WMGG’s, the average temperature in 2060 may reach ~0.68C higher than the 2008 average. Modest steps to control non-CO2 emissions and gradually reduce the rate of increase in the concentration of CO2 in the atmosphere could yield a reduction in WMGG driven warming between 2008 and 2060 of ~15% compared to no action. A rapid reduction in the rate of growth of atmospheric CO2 would be required to reduce WMGG driven warming between 2008 and 2060 by ~30% compared to no action.

The map of the globe at the head of this post is too important to ignore.
One must understand the geographical aspects of climatology if one is to come to grips with how the globe warms and cools. One can then appreciate the real place of long wave radiation, cloud cover and the Southern Oscillation in this fascinating process. Greenhouse theory can then be put in its proper context. It is almost totally irrelevant.
Referring now to the map, notice the low levels of long wave radiation from the three centers of strong convection, namely the Amazon, the Congo and the Indian Ocean between India and New Guinea. The air above these regions is characterized by de-compressive cooling associated with strong uplift. The amount of long wave radiation emanating from these regions is slight. It’s as slight as that from the coolest parts of the globe. In these locations the air cools via the same de-compressive mechanism that is utilized in a domestic refrigerator. It does not cool by emitting long wave radiation.
What goes up must come down. If the air rises in strong centers of convection it must fall somewhere else. Where the air descends it will warm via compression and in so doing it loses cloud. That descending air emits high levels of long wave radiation. Notice that high levels of radiation are associated with dry cloud free air and low levels are associated with wet cloudy air. Without water vapor, the presence of the so called greenhouse gas has little effect in trapping outgoing long wave energy. To the extent that a ‘greenhouse gas’ is present it will have the effect of reducing cloud cover in relatively cloud free zones. (No amplifier).
Notice the extent of the oceans of the southern hemisphere where outgoing long wave radiation is relatively more intense. (This is a simple function of the shortage of land mass in the Southern Hemisphere.) The atmosphere above these areas is relatively cloud free because the air is descending and warming. This ocean accordingly receives a lot of direct sunlight.
The flux in cloud cover above the southern oceans is the basis of the Southern Oscillation. Cirrus cloud forms on the margins of, and between the zones of descending air over the southern hemisphere oceans. There is a strong seasonal warming of the entire atmosphere between April and September due to radiation of solar energy by the land masses of the northern hemisphere. This causes a loss of cloud cover globally (about 3%) and a strong loss of cloud in the southern tropics. Superimposed on this seasonal oscillation there is a warming and a cooling of the stratosphere and the upper troposphere down to about 200 hPa based on a flux in ozone content associated with the relative strength of the polar vortexes. The Arctic vortex is weak, operates only in winter and fluctuates in its strength on decadal and longer time scales. The resulting flux in stratospheric ozone causes a parallel change in the extent and opacity of high altitude cirrus cloud above 200 hPa. It has long been known that a sudden stratospheric warming is associated with warming of the tropical ocean and it has become apparent in recent times that this warming is most intense in the southern hemisphere between 20° and 40° of latitude between November and March. Some three or four months following the sudden stratospheric warming, the sea at the equator reflects the intensity and timing of that stratospheric warming. By that time, the stratospheric warming responsible for the sea surface warming is well past.
On long time scales one must look to the forces that determine the concentration of ozone in the stratosphere if one wants to explain surface warming. Chief amongst these is the flux of nitrogen oxides from the mesosphere, a factor that relates directly to solar activity.
The change in the temperature of the stratosphere/upper troposphere is a fascinating area of study. Good data is available from 1948. The Southern stratosphere is the most volatile. It warmed strongly up to 1978 and has cooled since that time. That trend continues. Our globe is gaining cloud in direct relation to the diminishing temperature of the upper troposphere/lower stratosphere where cirrus cloud forms. The warming between 1948 and 1978 was abrupt. The cooling since that time has been slow but relentless. In response, the atmospheric windows that allow solar radiation to reach the surface of the ocean in the southern hemisphere are gradually shrinking in extent.
The forces that determine the character of the Southern Oscillation operate on very long time scales. The Oscillation is constantly changing. A recent study suggested that 70% of the variability in global temperature could be attributed to the Southern Oscillation. On the basis of my knowledge of the temperature of the stratosphere I would guess that this figure is an underestimate.
A mathematician who does not understand the dynamics of climate is in no position to predict anything. His tools are of no value.
Climatology, as we know it today, has nothing to say about the causes of the Southern Oscillation. Until this phenomenon is understood we will be at the mercy of snake oil salesmen, charlatans and cranks.
Do you have something scheduled to drop on August 23?
There is another assumption that needs to be adjusted to reality. The notion of “well mixed”. GHG’s are not well mixed. Water vapor is not well mixed. Aerosols are not well mixed. And salt spray is the leading contender for aerosols by lengths. Storms, jet streams, and just plain ol’ wind can knock down both GHG’s and aerosols as well as remix them in ever changing globby concentrations. The rotation away from the Sun can change ozone amounts here and there. Seasons change the globby mix. Ocean conditions change the globby mix. If we were to color each of these components of our atmosphere and then take a video of our planet from the outside looking in, we would find a swirling ball of ribboned colors. Quite pretty actually. But well mixed it ain’t.
The other thing I have sticking in my craw has to do with the dynamical nature of models. They have yet to be proven in chaotic systems with multiple variables that are not readily predicted. The proper use of dynamical models is to include a control set of statistical models. Whatever happened to the idea of comparing something new to a gold standard? Did that get nixed from proper scientific research too?
If the models work so well, why can’t they render a decent temperature forecast with any precision beyond 30-60 days? The GCMs (using the assumption that WMGHG’s are the driving mechanism for climate) cannot provide a decent forecast of changes in ENSO, the AMO, etc… Neither NASA, NOAA, nor HadCRUT has any skill in predicting even regional short-term climatic changes. Almost invariably they come off too warm.
To make matters worse, they are trapping themselves in a corner by shortening the time periods in question. Climate science deals with anomalies that cover hundreds of years. But these people are now in the business of making seasonal variation predictions, and in the process are getting burned. Climate is now defined (according to our experts) as changes from year to year. When they come out wrong (which is quite often), they put it down to just “weather”; when their predictions come out correct, it is AGW or Climate Change.
John Finn (06:36:29) :
Does Judith Lean still stand by her TSI reconstruction?
Lean is continuously updating her reconstruction [as is proper when new data or insight comes along]. Her latest view [which I share and which most other researchers are coming around to] was expressed by her at the TSI meeting in 2008 [SORCE, Santa Fe]: “no long-term variation has been detected – do they occur?”. All reconstructions have been converging to the ‘flat’ version with little or no long-term variation.
It is unfortunate that people [deliberately?] confuse the Top of Atmosphere and actual insolation at ground level. The TOA value is 1361 [or 1366 if you like that better – makes no difference]. Lean’s obsolete 2 W/m2 was for this figure, so translated to the ground it would only be 239/1361 times 2 W/m2, corresponding to 0.09 degrees. But even that 0.09 did not occur as TSI [at TOA] did not change by the 2 W/m2. So, if you want to ascribe the LIA to the Sun, you can’t invoke TSI. Cosmic ray proxies do not show any marked change over that time either.
Interesting. Speculative, but interesting.
I once created a simple EBM which could explain the effect of the eruption of Pinatubo with a very low sensitivity (about .6 K per CO2 doubling). Maybe I can dig out the Excel file…
Kum Dollison (22:13:46) : asked
What was the CO2 ppm of the atmosphere needed in 1871 to make this work?
seems to me this comment deserves an answer
also this model is a tad different from this one
http://wattsupwiththat.com/2009/05/22/a-look-at-human-co2-emissions-vs-ocean-absorption/
Steve, thanks for your contribution to the discussion. However, I don’t understand why you think you can use AMO and ENSO in a model. You say:
The problem is that the AMO Index and the Nino 3.4 index are both measurements of sea surface temperature (SST). You present it as some kind of revelation that changes in SST today will affect the global temperature in the near future … but this is trivially true for any autocorrelated dataset, and is particularly true for SST and global temperature.
Since AMO and Nino3.4 are measurements of today’s temperatures, they are absolutely useless as model “input”. It’s like saying “I can hindcast yesterday’s temperature with success way better than chance … if I can use day before yesterday’s temperature as input.” Yes, this is also trivially true, and works very well on hindcasts … but if you think that will allow you to forecast the next decade you are very mistaken.
So no, the AMO and the Nino 3.4 do not predict the drop in temperature 1940 – 1970, nor the drop 1875 – 1900. They do not forecast those temperature drops at all, they measure those drops. So claiming that they make your model more accurate, while true, is meaningless.
You present it as if it were a surprising result that if you remove measured temperature variations for the Atlantic and the Pacific oceans, the variance in global temperature is reduced … but doing that has no effect on the accuracy of any model. It also means nothing about the credibility of any model.
Absent the AMO and the Nino 3.4, you are simply asserting (without providing a scintilla of evidence) that temperature will rise with log CO2 … but that is what the debate is about, you can’t just assume that.
w.
Quote:
1) It is reasonable to expect that positive forcing, from whatever source, will increase the average temperature of the Earth’s surface.
Isn’t this the definition of positive forcing?
The article mentions the effect of aerosols and carbon particles in the atmosphere. I have never seen any discussion of the effect of the various “Clean Air Acts” introduced over the last 60+ years.
Steve F. – I have a full-time job, kids, wife, house, etc. and would like to understand more about climate models. I really don’t have time to learn everything necessary to start from scratch. I was wondering if you could share your model. It might help those of us who are not professionals, but have a background in science that would allow us to (eventually) come to a better understanding of climate models.
Steve – does your model take into account the dark side of the Earth that is exposed to near zero radiational temperature?
Thanks for the great post. It’s posts like this that reinforce my rejection of positive feedback and Hansen’s ‘x3 CO2 forcing factor’, a factor that was decided before most of the science was even started. No feedback has a better fit with past temperature. So a 1-1.2C rise for a doubling of CO2 is the default case.
CO2 can only warm the oceans if it warms the atmosphere first. The atmosphere has no thermal inertia so warming from CO2 must happen every time the sun comes up – something we would see by now. How can warming be hiding in the oceans if the warming of the atmosphere is the very mechanism that is supposed to warm the oceans?
Whole world cloud feedback is near to zero – strongly negative in the tropics, less so at middle latitudes and positive at the poles. Roy Spencer has intimated that negative feedback lessens away from the tropics and we know that clouds warm the poles so it’s only a matter of degree in between.
The whole positive feedback case is based upon an increase of CO2 causing a small increase in temperature which causes an increase of absolute humidity. This in turn leads to a second round of warming. But the 1998 El Nino raised temperatures quite a bit. If positive feedback was true then the temperature should have stayed permanently higher. It didn’t so how can positive feedback possibly be right? The reason it didn’t is because the atmosphere isn’t saturated with humidity – the extra humidity gets taken out of the atmosphere.
Climate isn’t complicated. It’s actually very simple. The problem we are up against is, it doesn’t matter what is said, only who says it.
My grandchild (she is three years old) also plays computer games for fun.
Willis, I disagree. Using statistical models you can get in the ballpark with predictions. What you can’t predict as well is whether or not the conditions will happen as they have happened in the past. So you run several statistical models. One will win out by pulling ahead of the others. Dynamical models don’t work so well because we don’t know how the chaotic system works. Statistical models don’t care. They just spit out what happened in the past given the current set of conditions. And usually a number of things happened more than others under the same setting in the past so you set your confidence level and hit the button to get the most likely temp outcome.
Just to add to Steve’s point about the 0.27C per Watt/metre^2.
I built a few charts of how temperatures are related to the forcings and the change in the forcing using the Stefan-Boltzmann equations.
Here is how Earth’s surface temperature changes with changes in solar radiation (at the top of the atmosphere). The point of this is to show that the curve flattens (temperature goes as the fourth root of the flux), so as one goes up in Watts, each successive temperature increase is smaller. [I used solar since so many people are interested in that.]
http://img34.imageshack.us/img34/3668/sbsolarforcing.png
Now here is how the surface temperature will change for each 1 Watt/metre^2 of forcing at the surface over the range of Watts we are concerned about. (This is now the surface so it is solar/4 plus the greenhouse effect.)
http://img33.imageshack.us/img33/2608/sbtempcperwatt.png
This really means global warming is only about one-third of that estimated, give or take other changes such as albedo which could happen with global warming. It might also explain why temperatures have not kept up with the predictions of the models: whatever forcing is being absorbed in the oceans is actually taking away Watts, so there is less forcing at the surface than predicted, not less Temp C response per Watt than expected. The limit is still 0.27C (which declines to 0.26C in a few more Watts of increase).
I think many people have been using averages for these calculations and one really needs to get down to the “each successive Watt equals Y temp increase”. The Sun is giving us 240 watts at the surface which translates into 255K or 1.06C per Watt. But actually, the first Watt was 64C, the next one was 34C and so on. It is only 0.27C now.
[Steve actually tuned me onto this in another venue which is going to help me finish another project I’m working on. Thanks.]
Pamela, I guess my writing is not clear. The problem is not that it is a statistical model. It is the idea that you can remove variation by using AMO and Nino3.4.
In general, you can’t use observations to remove variance from observations. In particular, you can’t use sea temperature observations to remove variance from air temperature observations.
Or to be more accurate, you can do that, but you add absolutely nothing to the predictive ability of a model by doing so.
Sea surface temperature (SST) is very, very closely related to air temperature. The Hadley sea surface temperature (HadSST) enjoys a correlation of 0.93 with the Hadley global surface air temperature (HadCRUT). Steve is using a subset of the SST (Nino3.4 and AMO) to remove variations from the air temperature … I hope you can see the huge problems with that approach.
If Steve’s approach worked, we could use the SST to remove 93% of the variance from the air temperature, leaving a straight line … but how on earth would that improve the model?
His model is nothing but “Temperature is proportional to CO2” dressed up in fancy mathematical clothes. Open your eyes, folks … it doesn’t help to do that, it makes no difference, at the end of the day you’re just left with “temp ~ CO2”.
w.
If atmospheric CO2 falls to 220 ppm, plants get sick. They die at 160 ppm. And if plants die, your beautiful little lives will end too, global warmies!
Steve Fitzpatrick,
You Wrote:
“So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.”
There is a real puzzle here. Throughout the course of the year each hemisphere cycles between hot summers and cold winters. The non-linearity, if present, might show in the climatology. Basically the individual local climatologies (e.g. as in HadCRU’s abstem3) ought to show a marked asymmetry between (hottest month – annual average) and (annual average – coldest month). Given that the range of temperatures in some continental areas is extreme (+/- 30C), the asymmetry ought to be marked. But the asymmetry is not apparent in the climatologies.
Now I do not know why this should be, so it puzzles me. It could look horribly like negative feedback. Over the oceans and in maritime areas one can rightly argue that the oceans are exporting their low annual range to the land and somehow this removes the asymmetry, but I cannot see how this applies to areas like Eastern Siberia that have the largest annual range, which implies that they are largely cut off from the oceans.
As best as I can calculate from the S-B equation, the asymmetry there should be around +9% of the annual range, but it is actually very small (~1%) and in the opposite sense.
Now this may all be just rubbish, so if anyone knows better please tell.
*******
On another tack, one area that I feel is not much commented on is the ability of the GCMs to reproduce Earth-like climatologies. That is, getting the mean temperature, annual range, and annual phase lags correct for each location. As I recall they do not do very well. There really ought to be a climatology test. Such as: if I were to wake up in a modelled world, could I tell within the course of a year whether the climatology was right or wrong? Now I am not talking about weather, which nobody can predict, but the climate (strictly speaking the climatology).
On yet another tack, a guy from the Met Office was interviewed on the box a week or so back over the seasonal forecast for the UK and its failure. He said that it was easier to predict the climate in the distant future (I cannot remember exactly, but I think ~30 years out) than it is to predict the coming seasons. Could that be because it takes a lot longer before you can be judged?
Alexander Harvey
What happens to the scenario presented by Steve Fitzpatrick if one factors out the overestimation of warming that is likely to be part of the CRUT3V temperature anomaly. The CRUT3V anomaly does not account fully for socioeconomic contamination of the temperature record, as demonstrated by McKitrick and others. The real post-Little Ice Age global temperature anomaly trend is probably lower than that used in the post under discussion here.
I wonder what effect man’s deforestation and desertification has had on climate in the past century ? Australia alone has lost 70% of its natural vegetation.
The paper is interesting, but if the draconian restrictions on emissions aren’t enough to stop AGW like that paper says the models suggest, does that mean the only way would be an unprecedented reduction of global population by more than 90 percent and maybe even 99 percent? Would we have to completely and rapidly de-populate Africa, China, and India and make that whole continent and the whole of those countries massive plant and animal preserves to make the temps. stay even?
One should bear in mind, when playing with the simple Stefan’s law balance sensitivity, that the average surface radiation is around 390 W/m^2 while the average power radiated to space is 240 W/m^2. Take the difference and you find that about 150 W/m^2 is lost going through the atmosphere and not made up for by radiation from higher altitudes. If you divide 240 by 390 you get 0.61 as the fraction of surface power that escapes. For a small change, the increase in surface emission (via Stefan’s law) required to achieve balance after a 1 W/m^2 increase in absorption is therefore more like 1.67 W/m^2. The result is more like 0.31 K per W/m^2 for a radiative-only sensitivity (in clear sky). Making the further stretch that this is valid for all W/m^2 of forcing, one sees that 150*0.31 = 46 Kelvins above that of a blackbody, which is far greater than the observed 33 K rise caused by all GHGs. The sensitivity implied is the 33 K rise divided by the 150 W/m^2 GHG absorption forcing, which amounts to 0.22 K per W/m^2. That places us in the position that real world forcings are subject to net negative feedback relative to simple radiative transfer. The only thing this doesn’t provide for is the additional GHG forcing change caused by a change in temperature, i.e. the water vapor feedback. Note that the 0.22 K per W/m^2 is the actual Earth-system average sensitivity to a CO2-only (or any other forcing-only) rise of 1 W/m^2. Of course, if sensitivity per W/m^2 varies as GHG absorption is increased, then the current sensitivity must be lower than this average if earlier forcings had higher sensitivity levels. For current levels to be at higher sensitivities, the earlier forcings would have had to have a lower effect, which makes little sense as the power absorption attenuation itself is a log function of decreasing effect.
What this cannot show directly is a feedback that changes the total W/m^2 forcing from another gas, such as how many W/m^2 of increase in H2O vapor forcing happen when T rises from an original forcing, such as a 1 W/m^2 increase in CO2 forcing. A CO2 doubling of 3.6 W/m^2 should then result in a 0.22×3.6 = 0.8 Kelvin increase in T. If a 0.8 Kelvin rise in T creates an increase in H2O vapor forcing in some sort of positive feedback mechanism, its effect must definitely be less than 0.8 Kelvin. Otherwise any small variation upward in T would result in the complete runaway of H2O vapor driving itself upward. Also, any net positive feedback of H2O vapor forcing with T leads to wild swings and variations. Net negative feedback still results in variations, but they are both stable and reduce the total swing from what it would be otherwise.
Note that a modest increase in T can only result in the possibility of h2o ‘feedback’ where there is liquid h2o available to be brought into the atmosphere. Some areas do not have this ready reservoir. Also, bringing in more h2o vapor into the atmosphere increases convective power transfer and can bring in that real unknown of additional cloud cover which can reduce added h2o vapor forcing from possibly being positive to being seriously negative – and this is the area that is poorly known and practically ignored in modeling.
It would unfortunately take more time to answer all the comments/questions posted until now than it took to prepare the post, so I can’t possibly address all. I will try to address at least some:
With regard to this being a ‘pro-AGW’ post, I want to point out that I am very skeptical of the large temperature increases projected by GCM’s and the IPCC, and that the net climate sensitivity the model suggests (~0.27 degree per watt) is in the range of 1/3 of the sensitivities used in making those much larger projections. On the other hand, adding any infrared absorbing gas to the atmosphere makes it more difficult for infrared radiation to escape from the Earth’s surface. The net effect of these gases has been pretty well studied, and their “radiative forcings” are reasonably well known, so the addition of infrared absorbing gases to the atmosphere ought to increase the surface temperature. The key issue is the magnitude of surface warming that might reasonably be expected from this forcing. Is the feedback that operates on this radiative forcing negative, positive, or near zero? My post was an effort to define the warming that might take place due to greenhouse gases IF they were the only cause for warming since the mid 1800’s. The several comments suggesting that part/most/all the warming was due to other causes seem to miss the point I was trying to make: this is pretty much a worst case scenario.
With regard to the model itself, I was not aware of when Anthony would place the post on WUWT; I had hoped to have the spreadsheet (sorry, it is not R or something else that some might prefer) available at the time of the post. I will ask Anthony to make the spreadsheet available.
Leif: I was not aware that Lean had changed her mind about the 2 watts change since the Little Ice Age. It certainly was not my intent to misrepresent her current views. The calculations I did were based on recently measured changes in intensity over the solar cycle (peak to valley) of ~1 watt per square meter at the top of the atmosphere, and the model assumed this variation was the same since 1871. This works out to ~0.7 * 0.25 = 0.175 watt per square meter, and an expected solar signal from the solar cycle of 0.047C (peak to valley) for a sensitivity of 0.27 degree per watt per square meter. What I found interesting was that the best model fit to the temperature data corresponded to ~0.168 watt per square meter, remarkably (at least to me) close to the 0.175 watt per square meter that would be expected based on the measured variation over the last few cycles. So for what it is worth: the model is consistent with no substantial change in cyclical variation over the past 130 years.
Willis Eschenbach (09:13:53) : “The problem is that the AMO Index and the Nino 3.4 index are both measurements of sea surface temperature (SST). You present it as some kind of revelation that changes in SST today will affect the global temperature in the near future … but this is trivially true for any autocorrelated dataset, and is particularly true for SST and global temperature.”
If you look at the AMO and Nino 3.4 historical data you will see that in spite of overall warming since 1871, these indexes have shown essentially no net trend, and so appear to have contributed virtually nothing to the observed total warming. Graphs showing the historical trends of these two indexes are included in the spreadsheet. AMO and Nino 3.4 most certainly are related to “climate/weather noise”, and that is the point of including them in the model: these indexes account for most (not all) of the variation around the long term trend. AMO and Nino 3.4 can be measured at any time you want, and their contributions subtracted from the currently measured global average temperature to reveal the “true” temperature trend (or at least a much “truer” trend). Indexes like the AMO and Nino3.4 are well known to capture shorter term climate variation, and I was not suggesting that including them in a model was any kind of “revelation”; they were included in Bill Illis’s model (based on only CO2, AMO, and Nino 3.4) back in 2008.
eric (06:40:53) :
“Steve Fitzgerald,
One of the key problems with an analysis of climate sensitivity from temperature data, such as you have performed, is the estimation of lag time for the ocean surfaces to heat up. The use of the solar cycle versus temperature data is problematic. It is ok if the system consists of only one heat reservoir, so that a single time constant is appropriate. The problem is that the ocean has a shallow and a deep reservoir with different time constants, and the easily observed smaller reservoir, which has a 2 year time constant, will give you too small an answer for the climate sensitivity.
A more complex model is required to get a correct answer.”
I do not know who this Steve Fitzgerald person is, but he is not me.
If you assume that the “short” time constant is <2 years, and this shallow reservoir represents only ~40%, and you also assume that there is a larger (~60%) reservoir with a time constant of ~30-40 years, then the model will calculate (with a lower R^2) a sensitivity of about 0.37 degree per watt per square meter, or ~1.37C for a doubling of CO2. To reach a sensitivity in the range of the IPCC projections, you need BOTH much longer ocean lags (which do not appear consistent with recent ARGO data) AND to assume that man-made aerosols have “canceled” much of the radiative forcing (once again, not consistent with the ‘global brightening’ observed since the early 1990’s). The model will also suggest that the solar cycle forcing has to be substantially bigger than has been observed by satellites.
The key point is: what will happen in the next 50 years? A relatively straightforward (and simple) curve fit analysis suggests that warming may continue, but at much less than the IPCC projected rate. Please look at the accuracy of the 1972 to 2008 model projection, and then compare with the accuracy of GCM projections since at least 2000 (Lucia has many relevant posts on this subject); the GCM's consistently predict more warming than actually happened. Do you honestly think that the prediction accuracy of the model I showed will change from good to poor starting in 2009, and the GCM's will suddenly become more accurate? If so, what change(s) in the sun/oceans/atmosphere do you think is(are) happening right now that will make the curve fit model less accurate than it was for 1972 to 2008, and make the GCM's more accurate?