Guest Post By Steve Fitzpatrick
Introduction
Projections of climate warming from global circulation models (GCM's) are based on a high sensitivity of the Earth's climate to radiative forcing from well mixed greenhouse gases (WMGG's). This high sensitivity depends mainly on three assumptions:
1. Slow heat accumulation in the world's oceans delays the appearance of the full effect of greenhouse forcing by many (e.g. >20) years.
2. Aerosols (mostly from combustion of carbon based fuels) increase the Earth’s total albedo, and have partially hidden the ‘true’ warming effect of WMGG increases. Presumably, aerosols will not increase in the future in proportion to increases in WMGG’s, so the net increase in radiative forcing will be larger for future emissions than for past emissions.
3. Radiative forcing from WMGG’s is amplified by strong positive feedbacks due to increases in atmospheric water vapor and high cirrus clouds; in the GCM’s, these positive feedbacks approximately double the expected sensitivity to radiative forcing.
However, there is doubt about each of the above three assumptions.
1. Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990's, to roughly flat after ~2001. This indicates that a) ocean heat content does not lag many years behind surface temperature, b) global average temperature and heat accumulation in the top 700 meters of ocean are closely tied, and c) the Hansen et al. (4) projection in 2005 of substantial future warming 'already in the pipeline' is not supported by recent ocean and surface temperature measurements. While there is no doubt a very slow accumulation of heat in the deep ocean below 700 meters, this represents only a small fraction of the accumulation expected for the top 700 meters, and should have little or no immediate (century-scale or less) effect on surface temperatures. Short ocean heat lags are consistent with relatively low climate sensitivity, and preclude very high sensitivity.
2. Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings. There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future. Considering the large uncertainties in aerosol effects, it is not even clear if the net effect, including black carbon, which reduces rather than increases albedo, is significantly different from zero.
3. Amplification of radiative forcing by clouds and atmospheric humidity remains poorly defined. Climate models do not explicitly include the behavior of clouds, which occur at scales orders of magnitude smaller than the model grid, but instead handle clouds using 'parameters' that are adjusted to approximate the expected behavior of clouds. Adjustable parameters can of course also be tuned to make a model predict whatever warming is expected or desired. Measured tropospheric warming in the tropics (the infamous 'hot spot'), which should be driven by increases in atmospheric water content, falls far short of the warming projected for this part of the atmosphere by most GCM's. This casts doubt on the amplification the GCM's assume from increased water vapor.
Many people, including this author, do not believe the large temperature increases (up to 5+ C for a doubling of CO2) projected by GCM's are credible. A new paper by Lindzen and Choi (described at WUWT on August 23, 2009) reports that the total outgoing radiation (visible plus infrared) above the tropical ocean increases when the ocean surface warms, which suggests the climate feedback (at least in these tropical ocean areas) is negative, rather than positive as the GCM's all assume.
In spite of the many problems and doubts with GCM’s:
1) It is reasonable to expect that positive forcing, from whatever source, will increase the average temperature of the Earth’s surface.
2) Basic physics shows that increasing infrared absorbing gases in the atmosphere like CO2, methane, N2O, ozone, and chloro-fluorocarbons, inhibits the escape of infrared radiation to space, and so does provide a positive forcing.
3) There has in fact been significant global warming since the start of the industrial revolution (beginning a little before 1800), concurrent with significant increases in WMGG emissions from human activities.
There really should be an increase in average surface temperature due to forcing from increases in infrared absorbing gases. This is not to say that there are no other plausible explanations for some or even most of the increase in global temperatures over the past 100+ years. For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth's atmosphere) between the Little Ice Age and the late 20th century, which could account for a significant fraction of the observed warming. But regardless of other possible contributions, it is difficult to refute that greenhouse gases should lead to increased global average temperatures. What matters is not whether the Earth will warm from increases in WMGG's, but how much it will warm and over what period. The uncertainties and dubious assumptions in the GCM's make them not terribly helpful for reasonable projections of potential warming, even under the worst-case assumption that WMGG's are the principal cause of the warming.
Climate Sensitivity
If we knew the true climate sensitivity of the Earth (expressed as degrees increase per watt/square meter of forcing) and we knew the true radiative forcing due to WMGG's, then we could directly calculate the expected temperature rise for any assumed increase in WMGG's. Fortunately, the radiative forcing effects of WMGG's are fairly accurately known, and these can be used in evaluating climate sensitivity. An approximate value for climate sensitivity in the absence of any feedbacks, positive or negative, can be estimated from the change in blackbody emission temperature that is required to balance a 1 watt per square meter increase in heat input, using the Stefan-Boltzmann Law. Assuming solar intensity is 1366 watts/M^2, and assuming the Earth's average albedo is ~0.3, the net solar intensity is ~239 watts/M^2, requiring a blackbody temperature of 254.802 K to balance incoming heat. With 1 watt/M^2 more input, the required blackbody emission temperature increases to 255.069 K, so the expected climate sensitivity is (255.069 – 254.802) = 0.267 degree increase per watt per square meter of added heat.
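The arithmetic above is easy to reproduce with a short script. This is only a check of the quoted figures, not anyone's published code; the exact temperatures differ slightly from those in the text depending on the constants used, but the ~0.267 C per watt/M^2 result is robust:

```python
# Reproducing the no-feedback sensitivity estimate via the
# Stefan-Boltzmann law, S = sigma * T^4, using the inputs quoted above.
SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W/m^2/K^4

def blackbody_temp(flux_w_m2):
    """Emission temperature (K) needed to radiate the given flux."""
    return (flux_w_m2 / SIGMA) ** 0.25

solar = 1366.0                  # top-of-atmosphere intensity, W/m^2
albedo = 0.30
net = solar / 4.0 * (1.0 - albedo)      # ~239 W/m^2, sphere-averaged

t0 = blackbody_temp(net)                # ~254.8 K
t1 = blackbody_temp(net + 1.0)          # ~255.1 K
no_feedback_sensitivity = t1 - t0       # ~0.267 C per W/m^2
```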
But solar intensity and the blackbody emission temperature of the earth both change with latitude, yielding higher emission temperature and much greater heat loss near the equator than near the poles. The infrared heat loss to space goes as the fourth power of the emission temperature, so the net climate sensitivity will depend on the T^4 weighted contributions from all areas of the Earth. Feedbacks within the climate system, both positive and negative, including different amounts and types of clouds, water vapor, changes in albedo, and potentially many others, add much uncertainty.
Measuring Earth’s Sensitivity
The only way to accurately determine the Earth’s climate sensitivity is with data.
Bill Illis produced an outstanding guest post on WUWT November 25, 2008, where he presented the results of a simple curve-fit model of the Earth's average surface temperature based on only three parameters: 1) the Atlantic multi-decadal oscillation index (AMO), 2) values of the Nino 3.4 ENSO index, and 3) the log of the ratio of atmospheric CO2 concentration to the starting CO2 concentration. Bill showed that the best-estimate linear fit of these parameters to the global mean temperature data could account for a large majority of the observed temperature variation from 1871 to 2008. He also showed that the AMO index and the Nino 3.4 index contributed little to the overall increase in temperature during that period, but did account for much of the variation around the overall temperature trend. The overall trend correlated well with the log of the CO2 ratio. In other words, the AMO and Nino 3.4 indexes could hindcast much of the observed variation around the overall trend, and that overall trend could be accurately hindcast by the log of the CO2 ratio.
There are a few implicit assumptions in Bill’s model. First, the model assumes that all historical warming can be attributed to radiative forcing. This is a worst case scenario, since other potential causes for warming are not even considered (long term solar effects, long term natural climate variability, etc.). The climate sensitivity calculated by the model would be lowered if other causes account for some of the measured warming.
Second, the model assumes the global average temperature changes linearly with radiative forcing. While this is almost certainly not correct for Earth’s climate, it is probably not a bad approximation over a relatively small range of temperatures and total forcings. That is, a change of a few watts per square meter is small compared to the average solar flux reaching the Earth, and a change of a few degrees in average temperature is small compared to Earth’s average emissive (blackbody) temperature. So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.
Third, the model assumes that the combined WMGG forcings can be accurately represented by a constant multiplied by the log of the ratio of CO2 to starting CO2. While this may be a reasonable approximation for some gases, like N2O and methane (at least until ~1995), it is not a good approximation for others, like chloro-fluorocarbons, which did not begin contributing significantly to radiative forcing until after 1950, and which are present in the atmosphere at such low concentration that they absorb linearly (rather than logarithmically) with concentration. In addition, chloro-fluorocarbon concentrations will decrease in the future rather than increase, since most long lived CFC’s are no longer produced (due to the Montreal Protocol), and what is already in the atmosphere is slowly degrading.
To make Bill’s model more physically accurate, I made the following changes:
1. Each of the major WMGG’s is separated and treated individually: CO2, N2O, methane, chloro-fluorocarbons, and tropospheric ozone.
2. Concentrations of each of the above gases are converted to net forcings, using the IPCC's radiation equations for CO2, methane, N2O, and CFC's (6), and an estimated radiative contribution from ozone increases.
3. The change in solar intensity with the solar cycle is included as a separate forcing, assuming that measured intensity variations for the last three solar cycles (about 1 watt per square meter variation over a base of 1365 watts per square meter) are representative of earlier solar cycles, and assuming that sunspot number can be used to estimate how solar intensity varied in the past.
4. The grand total forcing (including the solar cycle contribution), a 2-year trailing average of the AMO index, and the Nino 3.4 index are correlated against the HadCRUT3v global average temperature data.
This yields a curve fit model which can be used to estimate future warming by setting the Nino 3.4 and AMO indexes to zero (close to their historical averages) and estimating future changes in atmospheric concentrations for each of the infrared absorbing gases.

To find the best estimate of lag in the climate (mainly from ocean heat accumulation), the model constants were calculated for different trailing averages of the total radiative forcing. The best fit to the data (highest R^2) was for a two year trailing average of the total radiative forcing, which gave a net climate sensitivity of 0.270 (+/-0.021) C per watt/M^2 (+/-2 sigma). All longer trailing average periods yielded somewhat lower R^2 values and produced somewhat higher estimates of climate sensitivity. A 5-year trailing average yields a sensitivity of 0.277 (+/- 0.021) C per watt/M^2, a 10 year trailing average yields a sensitivity of 0.289 (+/- 0.022) C per watt/M^2, and a 20 year trailing average yields a sensitivity of 0.318 (+/- 0.025) C per watt/M^2, ~18% higher than a two year trailing average. As discussed above, very long lags (e.g. 10-20+ years) appear inconsistent with recent trends in ocean heat content and average surface temperatures.
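As an illustration of this fitting procedure (this is not the author's actual code, and the series below are synthetic placeholders rather than the real forcing, AMO, Nino 3.4, and HadCRUT data), a trailing-average regression with a window scan might be sketched as:

```python
import numpy as np

# Sketch of the curve-fit procedure: regress temperature on a trailing
# average of total forcing plus the AMO and Nino 3.4 indexes, scanning
# trailing-average windows for the best R^2. All series are synthetic
# placeholders; a "true" sensitivity of 0.27 C per W/m^2 is built in.
rng = np.random.default_rng(0)
n = 138                                    # one row per year, 1871-2008
forcing = np.linspace(0.0, 2.5, n)         # total forcing, W/m^2
amo = 0.2 * rng.standard_normal(n)         # stand-in AMO index
nino34 = 0.8 * rng.standard_normal(n)      # stand-in Nino 3.4 index
temp = (0.27 * forcing + 0.1 * amo + 0.05 * nino34
        + 0.05 * rng.standard_normal(n))   # stand-in temperature anomaly

def trailing_mean(x, window):
    """Backward-looking moving average (ragged at the start)."""
    return np.array([x[max(0, i - window + 1):i + 1].mean()
                     for i in range(len(x))])

def fit(window):
    """OLS fit; returns (R^2, sensitivity in C per W/m^2)."""
    X = np.column_stack([trailing_mean(forcing, window),
                         trailing_mean(amo, 2), nino34, np.ones(n)])
    coefs, ss_res, *_ = np.linalg.lstsq(X, temp, rcond=None)
    return 1.0 - ss_res[0] / (n * temp.var()), coefs[0]

best_window = max(range(1, 21), key=lambda w: fit(w)[0])
r2, sensitivity = fit(best_window)         # recovers ~0.27 here
```

With the real series, the best fit is reported at a two-year trailing window with a sensitivity of 0.270 (+/-0.021) C per watt/M^2.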
Oscillation in the radiative forcing curve (the green curve in Figure 1) is due to solar intensity variation over the sunspot cycle. The assumed total variation in solar intensity at the top of the atmosphere is 1 watt per square meter (approximately the average variation measured over the last three solar cycles) for a change in sunspot number of 140. Assuming a minimum solar intensity of 1365 watts per square meter and Earth’s albedo at 30%, the average solar intensity over the entire Earth surface at zero sunspots is (1365/4) * 0.7 = 238.875 watts per square meter, while at a sunspot number of 140, the average intensity increases to 239.05 watts per square meter, or an increase of 0.175 watt per square meter. The expected change in radiative forcing (a “sunspot constant”) is therefore 0.175/140 = 0.00125 watt per square meter per sunspot. When different values for this constant are tried in the model, the best fit to the data (maximum R^2) is for ~0.0012 watt/M^2 per sunspot, close to the above calculated value of 0.00125 watt/M^2 per sunspot.
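The "sunspot constant" arithmetic in the paragraph above can be verified directly:

```python
# Checking the sunspot-constant arithmetic from the text.
albedo = 0.30
s_quiet = 1365.0                 # W/m^2 at zero sunspots
s_active = 1366.0                # +1 W/m^2 at sunspot number 140

avg_quiet = s_quiet / 4.0 * (1.0 - albedo)    # 238.875 W/m^2
avg_active = s_active / 4.0 * (1.0 - albedo)  # 239.05 W/m^2
delta = avg_active - avg_quiet                # 0.175 W/m^2
sunspot_constant = delta / 140.0              # 0.00125 W/m^2 per sunspot
```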


Regional Sensitivities
Amplification of sensitivity is the ratio of the actual climate sensitivity to the sensitivity expected for a blackbody emitter. The sensitivity from the model is 0.270 C per watt/M^2, while the expected blackbody sensitivity is 0.267 C per watt/M^2, so the amplification is 1.011. An amplification very close to 1 suggests that all the negative and positive feedbacks within the climate system are roughly balanced, and that the average surface temperature of the Earth increases or decreases approximately as would a blackbody emitter subjected to small variations around the average solar intensity of ~239 watts/M^2 (that is, as a blackbody would vary in temperature around ~255 K). This does not preclude a range of sensitivities within the climate system that average out to ~0.270 C per watt/M^2; sensitivity may vary based on season, latitude, local geography, albedo/land use, weather patterns, and other factors. The temperature increase due to WMGG's may have, and indeed, should have, significant regional and temporal differences, so the importance of warming driven by WMGG's should also have significant regional and temporal differences.
Credibility of Model Projections
Some may argue that any curve-fit model based on historical data is likely to fail in making accurate predictions, since the conditions that applied during the hindcast period may be significantly different from those in the future. But if the curve-fit model includes all important variables, then it ought to make reasonable predictions, at least until/unless important new variables are encountered in the future. Examples of important new climate variables are a major volcanic eruption or a significant change in ocean circulation. The probability of encountering important new variables increases with the length of the forecast, of course. So while a curve-fit climate model's predictions will have considerable uncertainty far in the future (e.g. 100 years or more), forecasts over shorter periods are likely to be more accurate.
To demonstrate this, the model constants were calculated using temperature, WMGG forcings, AMO, and Nino 3.4 data for 1871 to 1971, but then applied to all the 1871 to 2008 data (Figure 4). The model's calculated temperatures represent a 'forecast' from 1972 through 2008, or 36 years. Since the model constants came only from pre-1972 data, the model has no 'knowledge' of the temperature history after 1971, and the 1972 to 2008 forecast is a legitimate test of the model's performance. The model's 1972 to 2008 forecast performance is reasonably good, with very similar deviations between the model and the historical temperature record in the hindcast and forecast periods.

The model fit to the temperature data in the forecast period is no worse than in the hindcast period. The climate sensitivity calculated using only 1871 to 1971 data is similar to that calculated using the entire data set: 0.255 C per watt/M^2 versus 0.270 C per watt/M^2. A model forecast starting in 2009 will not be perfect, but the 1972 to 2008 forecast performance suggests that it should be reasonably close to correct over the next 36+ years.
Emissions Scenarios
The model projections in Figure 1 (2009 to 2060) are based on the following assumptions:
a) The year on year increase in CO2 concentration in the atmosphere rises to 2.6 PPM per year by 2015 (or about 25% higher than recent rates of increase), and then remains at 2.6 PPM per year through 2060. Atmospheric concentration reaches ~518 PPM by 2060.
b) N2O concentration increases in proportion to the increase in CO2.
c) CFC’s decrease by 0.25% per year. The actual rate of decline ought to be faster than this, but large increases in releases of short-lived refrigerants like R-134a and non-regulated fluorinated compounds may offset a large portion of the decline in regulated CFC’s.
d) The concentration of methane, which has been constant for the last ~7 years at ~1,800 parts per billion, increases by 10 PPB per year, reaching ~2,370 PPB by 2060.
e) Tropospheric ozone (which forms in part from volatile organic compounds, VOC’s) increases in proportion to increases in atmospheric CO2.
The above represent pretty much a “business as usual” scenario, with fossil fuel consumption in 2060 more than 70% higher than in 2008, and with no new controls placed on other WMGG’s. The projected temperature increase from 2008 to 2060 is 0.6834 C, or 0.131 C per decade. This assumes of course that WMGG’s are responsible for all (or nearly all) the warming since 1871; if a significant amount of the warming since 1871 had other causes, then future warming driven by WMGG’s will be less.
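The "business as usual" CO2 trajectory is simple to sketch. The text specifies only the 2.6 PPM/yr plateau; the 2008 starting concentration (~385 PPM) and the shape of the 2009-2015 ramp below are my assumptions:

```python
# Sketch of the "business as usual" CO2 path: the annual increment ramps
# to 2.6 PPM/yr by 2015, then holds there through 2060. Starting value
# (~385 PPM in 2008) and recent rate (~2.1 PPM/yr) are assumptions.
co2 = 385.0                       # PPM, assumed 2008 concentration
base_rate = 2.1                   # PPM/yr, assumed recent rate of increase
for year in range(2009, 2061):
    if year <= 2015:
        rate = base_rate + (2.6 - base_rate) * (year - 2008) / 7.0
    else:
        rate = 2.6
    co2 += rate                   # ends near the ~518 PPM quoted above
```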
Separation of the different contributions to radiative forcing allows projections of future average temperatures under different scenarios for reductions in the growth of fossil fuel usage, with separate efforts to control emissions of methane, N2O, and VOC’s (leading to tropospheric ozone).

One such scenario can be called the “Efficient Controls” scenario. The year on year increase in CO2 in the atmosphere rises to 2.6 PPM by 2014, and then declines starting in 2015 by 0.5% per year (that is, 2.6 PPM increase in 2014, 2.587 PPM increase in 2015, 2.574 PPM increase in 2016, etc.), methane concentrations are maintained at current levels via controls installed on known sources, CFC concentration falls by 0.5% per year due to new restrictions on currently non-regulated compounds, and N2O and tropospheric ozone increases are proportional to the (somewhat lower) CO2 increases. These are far from small changes, but probably could be achieved without great economic cost by shifting most electric power production to nuclear (or non-fossil alternatives where economically viable), and simultaneously taxing CO2 emissions worldwide at an initially low but gradually increasing rate to promote worldwide improvements in energy efficiency. Under these conditions, the predicted temperature anomaly in 2060 is 0.91 degree (versus 0.34 degree in 2008), or a rise of 0.109 degree per decade. Atmospheric CO2 would reach ~507 PPM by 2060, and CO2 emissions in 2060 would be about 50% above 2008 emissions. By comparison, the “business as usual” case produces a projected increase of 0.131 C per decade through 2060, and atmospheric CO2 reaches ~519 PPM by 2060. So at (relatively) low cost, warming through 2060 could be reduced by a little over 0.11 C compared to business as usual.
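The "Efficient Controls" arithmetic can be sketched the same way; again the 2008 starting concentration (~385 PPM) and the ramp shape are my assumptions, and the result lands within a couple of PPM of the ~507 quoted:

```python
# Sketch of the "Efficient Controls" CO2 path: the annual increment
# ramps to 2.6 PPM/yr by 2014, then shrinks 0.5% per year from 2015 on.
co2 = 385.0                       # PPM, assumed 2008 concentration
rate = 2.1                        # PPM/yr, assumed recent rate of increase
for year in range(2009, 2061):
    if year <= 2014:
        rate = 2.1 + (2.6 - 2.1) * (year - 2008) / 6.0
    else:
        rate *= 0.995             # 0.5%/yr decline in the annual increment
    co2 += rate                   # ends near the ~507 PPM quoted above
```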
A “Draconian Controls” scenario, with new controls on fluorinated compounds, methane and VOC’s, and with the rate of atmospheric CO2 increase declining by 2% each year, starting in 2015, shows the expected results of a very aggressive worldwide program to control CO2 emissions. The temperature anomaly in 2060 is projected at 0.8 C, for a rate of temperature rise through 2060 of 0.088 degree per decade, or ~0.11 C lower temperature in 2060 than for the “Efficient Controls” scenario. Under this scenario, the concentration of CO2 in the atmosphere would reach ~480 PPM by 2060, but would rise only ~25 PPM more between 2060 and 2100. Total CO2 emissions in 2060 would be ~15% above 2008 emissions, but would have to decline to the 2008 level by 2100. Whether the potentially large economic costs of draconian emissions reductions are justified by a ~0.11C temperature reduction in 2060 is a political question that should be carefully weighed.

Conclusions
The model shows that the climate sensitivity to radiative forcing is approximately 0.27 degree per watt/M^2, based on the assumption that radiative forcing from WMGG's has caused all or nearly all the measured temperature increase since ~1871. This corresponds to a response of ~1 C for a doubling of CO2 (with other WMGG's remaining constant). Much higher climate sensitivities (e.g. 0.5 to >1.0 C per watt/M^2, or 1.85 C to >3.71 C for a doubling of CO2) appear to be inconsistent with the historical record of temperature and measured increases in WMGG's.
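The conversion from sensitivity per watt to warming per doubling follows from the IPCC simplified forcing expression, dF = 5.35 * ln(C/C0) W/m^2:

```python
import math

# Converting 0.27 C per W/m^2 into warming per CO2 doubling using the
# IPCC simplified forcing expression dF = 5.35 * ln(C/C0) W/m^2.
forcing_per_doubling = 5.35 * math.log(2.0)          # ~3.71 W/m^2
warming_per_doubling = 0.27 * forcing_per_doubling   # ~1.0 C
high_sensitivity_case = 0.5 * forcing_per_doubling   # ~1.85 C
```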
Assuming no significant changes in the growth pattern of fossil fuels, and no additional controls on other WMGG’s, the average temperature in 2060 may reach ~0.68C higher than the 2008 average. Modest steps to control non-CO2 emissions and gradually reduce the rate of increase in the concentration of CO2 in the atmosphere could yield a reduction in WMGG driven warming between 2008 and 2060 of ~15% compared to no action. A rapid reduction in the rate of growth of atmospheric CO2 would be required to reduce WMGG driven warming between 2008 and 2060 by ~30% compared to no action.

Michael Jennings (09:45:02) :
TENUC: You write all of this crap about unreliable ARGO data but are perfectly happy to accept ground based temperature data with a continually changing number of stations, location changes, changes in UHI, thermometer quality changes, different methods of recording, etc.? Unbelievable!
Leif’s law: “if data support your pet idea, they are good, otherwise bad or uncertain or doctored or …”
Re: comment up thread – I stand corrected re: Argo.
I had it in mind it only covered the top 300m but following on from comments above just checked the Argo site where it shows top 2000m are profiled.
Leif Svalgaard (10:48:46) :
Leif’s law: “if data support your pet idea, they are good, otherwise bad or uncertain or doctored or …”
——————————–
And who’s Leif?
[Reply: I suggest you do a search for “Leif Svalgaard”. You might be surprised. ~dbstealey, mod.]
Willis Eschenbach (08:21:54) :
During the night, you are correct that the surface continues to lose energy through evaporation and radiation. While it still is getting DLR, the lack of sunshine makes for a net heat loss, so it cools overall. As soon as it becomes cooler than the underlying layers, however, the top layer starts sinking, and cooler water is brought to the surface by thermal convection.
Hope this clarifies my often vague writing …
Willis, thank you. I am with you right up to the last line:
“and cooler water is brought to the surface by thermal convection.”
Surely thermal convection will bring warmer water to the surface, which then cools by convection to the air and radiation of heat, and then sinks to be replaced by warmer waters again ?
tallbloke (13:38:46), again my apologies for lack of clarity. You are correct when you say:
At the end of the day, the nearer to the surface, the warmer the water. The top layer cools and sinks. When I said that “cooler water is brought to the surface”, I meant cooler than the top layer before it started cooling.
As this process continues, the water reaching to the surface is cooler and cooler as the night progresses, which is what I was trying to say.
w.
” As soon as it becomes cooler than the underlying layers, however, the top layer starts sinking, and cooler water is brought to the surface by thermal convection.”
Not sure that follows. In gliding one gets a layer of hot air close to the ground which then needs a trigger to form a thermal. So a layer of cool surface water, especially if only a degree or two cooler and 10x wave amplitude thick, would probably need a trigger to form reverse thermals. Whether this would clear the temp. inversion before the next day's solar heating seems unclear.
Willis Eschenbach (08:58:20) :
Michael Jennings (09:45:02) :
Steve Fitzpatrick (10:14:25) :
You may be surprised to hear that I am a skeptic regarding AGW, and if asked to vote on the balance of evidence and data I have seen to date, would have to come down on the side of natural rather than man-made climate change.
However, I think the biggest problem facing climate research is lack of good quality data at sufficient granularity to reach meaningful conclusions. I also feel that many climatologists try to break down bits of our dynamic chaotic system and treat them as individual linear systems instead of taking the necessary holistic approach.
Regarding land based temperature, I agree that the data is even worse than for the sea. I'm not even sure that trying to use average global surface temperature has any real meaning, and some other more useful KPI's should be developed to see how climate is progressing.
Pamela, much SWR is absorbed by land regions… some remains, yes… LWR is trapped by GHG's, and less SWR is reflected back as ice cover melts… to keep it out of contention, say the Arctic, as opposed to the Antarctic, where there is no controversy over the quickly melting ice and increases in methane loss there.
As SWR and LWR accumulate in conjunction, the Earth holds in more transfer energy, which is defined as heat, and the global mean temperatures go up.
So, Steve,
if it is not melting ice or thermal expansion causing sea level rise, what is it? Clearly you are not seeing the connection between sea level rise and heat energy transfer. Also check out the other author I pointed out and the recent corrections made by NASA showing ocean heat content still going up.
tallbloke (13:38:46) :
Perhaps Willis should have said, “formally cooler” to avoid confusion
DaveE
Oops. Formerly!
DaveE.
Sandy (14:21:26), thanks for your comment, you raise an interesting issue. I had said:
You replied:
Well, I guess my advantage in this arena is that I’ve done a whole lot of day diving and night diving in the tropical ocean, plus I fly ultralights … so I can testify to both phenomena from personal experience.
I suspect that the difference in part is the method of heating. In the atmosphere, at dawn the earth’s surface warms up. This in turn must heat up the overlying fluid (air). Only when that happens does the daily overturning start to occur.
In the ocean, on the other hand, at sundown the fluid itself is cooling. It is cooled directly by radiation and evaporation from the top surface, rather than indirectly as is the case with the atmosphere. This means that, unlike in the atmosphere where heating starts only after dawn, the ocean surface cooling can actually start before sunset. And as soon as an appreciable amount of the surface cools, it starts to sink.
In part because of the much greater viscosity of water compared to air, this rapidly organizes into smaller areas of descending cooler water, in the middle of larger areas of more slowly ascending warmer water. In the atmosphere, this process of self-organized circulation takes longer to form.
There is often a clear sensible temperature difference involved between the rising and sinking water. The temperature difference, as well as the fact that the warmer water is rising and the cooler sinking, makes them clearly perceptible when diving at night. It is most pronounced in the morning before dawn, as you might expect.
My best to you,
w.
Jacob Mack (15:05:53), you say:
You have put your finger on one of the unsolved mysteries of climate science. The various measurements of sea level rise do not agree with the total of the known sources of sea level rise. Why? Well, not to put too fine a point on it … we don’t know.
Or as Willis et al. put it:
Jacob Mack (15:05:53) :
“So, Steve,
if it is not melting ice or thermal expansion causing sea level rise, what is it? ”
I never suggested that there had not been sea level rise since 2003; the satellite altimetry data is pretty clear that there has been.
The rate of rise in sea level has become less, but the sea level continues to rise. The best evidence (from ocean mass determination via satellite) is that melting glacial ice (not floating ice, which would have little effect on sea level) has contributed nearly all of the observed sea level rise in recent years (Cazenave et al.). The best available evidence is that the oceans have gained mass, but not accumulated any heat since about 2003, and may have lost a bit. I am astounded that you continue to doubt this in the face of all the published studies to the contrary.
Steve Fitzpatrick
“I am astounded that you continue to doubt this in the face of all the published studies to the contrary.”
I am equally astounded you are not aware of all the peer-reviewed literature showing that warming is causing the glacial ice to melt in the first place. If the oceans slightly cool due to more glacial ice melt, this would not show that AGW is somehow in decline, or that there is a cooling either.
Steve Fitzpatrick (18:47:36) :
The best available evidence is that the oceans have gained mass, but not accumulated any heat since about 2003
No wonder, as Nasif claims that heat cannot be stored, hence not accumulated.
We are more-or-less at equilibrium. Maybe a 0.1 watt here and a 0.1 watt there is still being absorbed into the heat sinks of the ocean and the ice-sheets, but no increasing ocean heat content means we are at equilibrium (right now).
What does that mean? It means the CO2 doubling sensitivity is less than 3.0C. It is probably only about 1.0C to 1.5C per doubling.
We can actually prove this mathematically. Something the climate scientists have never done so far. There is no Stefan-Boltzmann-like equation for the greenhouse effect. It is still just a guess.
So let’s do some simple math for them.
Let’s say the 33C greenhouse effect corresponds to 280 ppm CO2 (or 32.5C if you think at 280 ppm in 1750, it was 0.5C cooler).
11 halvings of 280 ppm results in a number that is effectively zero. 11 times 3C per doubling/halving and the Earth's temperature is back to 255K or -18C and the greenhouse effect is gone.
Mathematically, the doubling/halving number cannot be more than 3C. 11 halvings and the greenhouse effect is Zero. In fact, the 3.0C per doubling builds in the assumption that water vapour is 100% controlled by GHGs as well. No GHGs, no water vapour. All/100% of the greenhouse effect is controlled by CO2/GHGs.
Now let’s take all the CO2/GHGs out of the atmosphere, will there still be some water vapour left? Most definitely. Even at the South Pole at -50C and 3 kms high, there is still some water vapour.
Will there still be a latent heat effect provided by the non-GHG atmosphere constituents of nitrogen and oxygen? Most definitely. It might be small, but the energy from the Earth’s surface is not just going to fly directly off to space in a nano-second without any GHGs at all.
Will the Earth’s surface temperature be higher than 255K? How much higher? Well, it certainly would be since the small amount of water vapour remaining and the latent heat of the non-GHG atmosphere would provide a greater relative impact at that temperature level and the greenhouse effect remaining would still be 5C to 15C (or let’s say 15% to 40% of the greenhouse effect would still remain).
Therefore, mathematically, the CO2/GHG doubling is less than 3.0C per doubling (and certainly not more than 3.0C per doubling) and it is probably 1.5C or so per doubling.
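The halving arithmetic above can be put in a few lines of code (a sketch of the argument exactly as stated; the 33 C and 280 ppm figures are the assumptions from the comment, and the function name is mine):

```python
# Bill Illis's halving argument, as plain arithmetic (illustrative only).
# Assumption from the comment: the full 33 C greenhouse effect corresponds
# to 280 ppm CO2, with a fixed sensitivity per doubling (or halving).
GREENHOUSE_EFFECT_C = 33.0
BASELINE_PPM = 280.0

def ppm_after_halvings(s_per_doubling):
    """CO2 level left after removing the whole 33 C effect at S per halving."""
    halvings = GREENHOUSE_EFFECT_C / s_per_doubling
    return BASELINE_PPM / 2 ** halvings

# At 3.0 C per doubling, 11 halvings erase the effect, leaving ~0.137 ppm:
print(ppm_after_halvings(3.0))
# At 1.5 C per doubling, 22 halvings are needed, leaving an absurdly
# small ~0.00007 ppm:
print(ppm_after_halvings(1.5))
```

The point of the check is only that 33 C divided by the assumed per-doubling sensitivity fixes how many halvings of CO2 would erase the whole effect.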
Math is the only pure science. It does not lie.
Bill Illis (21:42:17), your math is excellent … but there is a flaw.
An example I have used before to illustrate the flaw is this: suppose we take a 75 kg block of, say, copper, and stick one end into hot water. After a while, the heat is transferred to the other end of the copper block. I propose a theory that if you stick one end of something into hot water, the other end will heat up at a certain rate. Simple math, no problem. I try a block of wood. I notice that it works just the same, except it heats up more slowly. I try a steel block: same thing, it heats up, just takes a different time.
Finally, having proven my theory, I decide to test it on the 75 kg of myself. I put my feet into the hot water, and I wait for my head to heat up … and wait … and wait.
The moral is, complex systems don’t obey simple math.
In addition, there is no reason to believe that the effect of, say, 1 W/m2 of additional forcing will be the same at different temperatures. Suppose we start with a world just like ours, but somehow cooled to just above freezing. As it begins to warm up, initially the added energy causes a large temperature increase.
But as temperatures warm, the radiation goes up as T^4 and the evaporation goes up as T^2. So each additional unit of energy causes less and less temperature increase.
Nor is this all. As the temperature warms, at some point cumulus clouds begin to form in the tropics. As the temperature increases, the cumulus clouds increase, reflecting more and more sunlight away from the surface.
Finally, at a higher temperature, cumulonimbus (thunderstorm) clouds form. These cool the surface through a variety of mechanisms.
The result of all of these (increasing radiation, increasing evaporation, increasing cumulus, increasing cumulonimbus) is to put a limit on how warm the earth gets.
The result of this is that the climate sensitivity (the change in temperature from a given change in forcing) is not constant with temperature. It decreases with increasing temperature until it reaches a point where another watt per square metre is totally counterbalanced by increasing radiation, evaporation, cumulus, and thunderstorms.
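The radiation part of this argument can be illustrated directly from the Stefan-Boltzmann law: since F = σT⁴, the temperature change per extra W/m² is dT/dF = 1/(4σT³), which shrinks as T rises. This is a sketch of the blackbody term only; the evaporation and cloud effects described above are not modeled here.

```python
# Blackbody-only illustration of declining sensitivity with temperature.
# Evaporation, cumulus, and cumulonimbus effects are NOT included.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def kelvin_per_watt(t_kelvin):
    """dT/dF from F = sigma * T^4, i.e. 1 / (4 * sigma * T^3)."""
    return 1.0 / (4.0 * SIGMA * t_kelvin ** 3)

for t in (255.0, 288.0, 310.0):
    print(t, round(kelvin_per_watt(t), 3))
# The same extra watt buys less warming at higher temperature:
# ~0.27 K/(W/m^2) at 255 K versus ~0.18 K/(W/m^2) at 288 K.
```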
Sorry to mess up your math like that, but the nature of the system is that for the reasons above, the system does not respond linearly to forcing increases.
w.
Willis Eschenbach (14:08:57) :
At the end of the day, the nearer to the surface, the warmer the water. The top layer cools and sinks. When I said that “cooler water is brought to the surface”, I meant cooler than the top layer before it started cooling.
As this process continues, the water reaching to the surface is cooler and cooler as the night progresses, which is what I was trying to say.
w.
Thanks again Willis, that all makes sense to me now.
Willis,
I agree with you. I posted charts above exactly along those lines. I’ve done the math along those lines as well (give or take the cloud and evaporation impacts).
http://img524.imageshack.us/img524/6840/sbearthsurfacetemp.png
http://img43.imageshack.us/img43/2608/sbtempcperwatt.png
But the science of the greenhouse effect, what little there is, is not really based on the Stefan-Boltzmann world. It is based on what Steve McIntyre calls “the higher the colder” argument.
Basically, the Earth’s surface is 33C higher than it should be from the impact of the Sun (288K – 255K). It is colder as one goes higher up in the atmosphere, and there is a point where the atmosphere is radiating back to space at the solar equilibrium level of 255K (there are actually four different levels where that is happening, but they seem to ignore that and just use the one at the tropopause).
Below that level, we are not operating in a Stefan-Boltzmann world anymore; we are working in an atmospheric-physics radiative-transfer world, with GHGs (and, more accurately, GHGs with 100% impact on water vapour) controlling the energy/radiation flows between the tropopause and the surface. We are operating in a world of cumulonimbus clouds and different layers of the atmosphere having different temperatures and climates.
Various arm-wavings later and they believe only a climate model can simulate the atmospheric energy/radiation flows properly and a guess is made that 3.0C per doubling of CO2/GHG controls how the 33C greenhouse effect occurs.
It seems strange to me that we have such successful formulae as the Stefan-Boltzmann law, but we have to switch to a climate model when the atmosphere is warmer than 255K.
Let’s say the Stefan-Boltzmann world still governs what is going on. Then your statements about the declining temperature impact are correct, and so are my charts. There will be very little warming beyond where we are now: 13 extra watts of GHGs are required for the next 3.0C increase, which is an impossible amount. But part of your comments say evaporation and cumulonimbus clouds are then in control, and we are back to a world of climate models again.
Jacob Mack (19:27:34) :
“I am equally astounded you are not aware of all the peer review literature showing that warming is causing the glacial ice to melt in the first place. If the oceans slightly cool due to more glacial ice melt, this would not show that AGW is somehow in decline, or that there is a cooling either.”
I am quite aware of the literature. The average global temperature for certain increased in the period between the late 1970’s and about 2001. I do not dispute this. The average global temperature also increased between about 1915 and 1945, and declined slightly between 1945 and the late 1970’s.
Between 2003 and 2008, there was clearly a net increase in ocean mass, due mainly to net melting of glaciers, as shown by the measured sea level increase and the measured increase in ocean mass during that period, consistent with a measured lack of heat accumulation in the oceans. The measured increase in ocean mass is also consistent with measured declines in glacial volumes. The cooling associated with melting of glaciers is extremely small, and does not explain the lack of ocean heat accumulation between 2003 and 2008. You can do the calculation yourself: how much change in average temperature takes place when you melt ~10 mm of ice in the top 700 meters of ocean? Answer: about 0.0011C, which does not significantly change the Argo results for 2003 to 2008.
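The ~0.0011C figure can be reproduced with standard physical constants (the 10 mm and 700 m figures are the ones stated above; the seawater constants are textbook approximations):

```python
# Cooling of the top 700 m of ocean from melting ~10 mm (sea-level
# equivalent) of glacial ice, per square metre of ocean surface.
LATENT_HEAT_FUSION = 3.34e5       # J/kg, ice -> water
SPECIFIC_HEAT_SEAWATER = 4.0e3    # J/(kg K), approximate
DENSITY_SEAWATER = 1025.0         # kg/m^3, approximate

melt_mass = 0.010 * 1000.0        # kg/m^2: 10 mm water-equivalent ice
energy_to_melt = melt_mass * LATENT_HEAT_FUSION   # J/m^2 drawn from the ocean

column_mass = 700.0 * DENSITY_SEAWATER            # kg/m^2 in the top 700 m
delta_t = energy_to_melt / (column_mass * SPECIFIC_HEAT_SEAWATER)
print(round(delta_t, 5))  # ~0.00116 C, negligible against Argo uncertainty
```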
That the average global temperature produces a net melting of glaciers has nothing at all to do with the question of whether or not the oceans are accumulating heat. If the oceans did not accumulate heat from 2003 to 2008, then the average global temperature could not have been increasing during that period, since any increase in average global temperature requires that the ocean be accumulating heat.
I think this is a pretty well known and broadly accepted concept on all sides of the AGW issue. Even James Hansen (along with many others) has said the same in published papers. I remain very puzzled by your position on this.
Leif Svalgaard (21:03:32) :
“Steve Fitzpatrick (18:47:36) :
The best available evidence is that the oceans have gained mass, but not accumulated any heat since about 2003
No wonder, as Nasif claims that heat cannot be stored, hence not accumulated.”
I do hope Nasif does not become involved. Semantics matter not at all.
BTW, did you see the questions I asked earlier?
**************************************************
Steve Fitzpatrick (07:41:12) :
Leif Svalgaard (18:58:51) :
“The peak to valley change depends on the size of the solar cycle and varies by a factor or 3 or more. A good median value is 0.1% of TSI or ~1.4 W/m2, for some cycles larger, for some smaller.”
Does the mean TSI over a whole cycle remain more or less constant from cycle to cycle, or does the mean TSI for a whole cycle depend on the level of solar activity? For example, if the peak of one cycle has a sunspot number of 75, and the peak of the next 150, would the average TSI over each cycle be the same, or would the cycle with lower peak activity have a lower average TSI?
“A simpler calculation is that the solar signal would then be 1/4 of 0.1% of the effective temperature or 0.025% of 288K = 0.07K [or C].”
Does the climate sensitivity not enter into the expected temperature change? If the climate sensitivity were 0.75 degree per W/m^2, then a top-of-atmosphere variation of 1.4 W/m^2 would give about 0.25 * 0.7 * 1.4 * 0.75 = 0.184K change, not close to the 0.07K you note above. It seems to me that the above calculation implicitly assumes a sensitivity of about 0.21K per W/m^2. Am I missing something?
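The two estimates being compared can be laid out as arithmetic (the 1.4 W/m^2 swing is Leif's figure from earlier in the thread; the 0.75 K per W/m^2 sensitivity is the hypothetical value in the question, not an established number):

```python
# The two solar-cycle temperature estimates from the exchange above.
TSI_SWING = 1.4      # W/m^2, peak-to-valley solar-cycle change (per Leif)
GEOMETRY = 0.25      # sphere-to-disk averaging factor
CO_ALBEDO = 0.7      # fraction absorbed (1 - albedo of ~0.3)
SENSITIVITY = 0.75   # K per W/m^2, the hypothetical value in the question

# Leif's radiative-balance estimate: 1/4 of 0.1% of 288 K
leif_dT = 0.25 * 0.001 * 288.0
print(round(leif_dT, 3))   # 0.072 K

# Steve's estimate with an explicit sensitivity term
steve_dT = GEOMETRY * CO_ALBEDO * TSI_SWING * SENSITIVITY
print(round(steve_dT, 3))  # 0.184 K

# Sensitivity implied by the ~0.07 K figure (with no albedo term):
print(round(leif_dT / (GEOMETRY * TSI_SWING), 2))  # ~0.21 K/(W/m^2)
```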
Steve Fitzpatrick (09:59:01) :
“Does the mean TSI over a whole cycle remain more or less constant from cycle to cycle, or does the mean TSI for a whole cycle depend on the level of solar activity?”
To first order, TSI W/m2= 1365 + SSN/100. The constant 1365 is the nominal value, the real value might be about 1360.5, but the difference matters not.
“A simpler calculation is that the solar signal would then be 1/4 of 0.1% of the effective temperature or 0.025% of 288K = 0.07K [or C].”
Does the climate sensitivity not enter into the expected temperature change?
My assumption is that what goes in must come out [sooner or later], so the radiation balance does not involve climate sensitivity. If you want to introduce such an animal, then the question is whether the sensitivity is constant in time [I think not – it might depend on land-sea distribution, for example] or varies. To a certain extent I have taken some sensitivity into account by using 288K as the temperature rather than 255K.
tallbloke (09:14:32) :
If you’re right, it’s amazing that such a small increase in TSI could cause such an acceleration in the thermal expansion of the oceans,
ARGO [and other recent data] finds that the 90 W/m2 annual TOA TSI variation corresponds to ~7 mm of steric sea-level change [that due to temperature and density variation]. This means a sensitivity of 7/90 = 0.08 mm/[W/m2]. So even if TSI increased 1 W/m2 over a century [which I don’t think, but let’s assume that for the argument], then the total sea-level rise over a century would be 0.08 mm, right?
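Leif's scaling argument is just a ratio; here it is written out (the 90 W/m2 annual swing and 7 mm response are the figures stated in the comment):

```python
# Leif's steric sea-level scaling, as stated above (illustrative only).
ANNUAL_TSI_SWING = 90.0   # W/m^2 at TOA, from Earth's orbital eccentricity
STERIC_RESPONSE = 7.0     # mm, annual steric sea-level change (ARGO-era)

sensitivity = STERIC_RESPONSE / ANNUAL_TSI_SWING   # mm per (W/m^2)
print(round(sensitivity, 3))  # ~0.078 mm/(W/m^2)

# A hypothetical 1 W/m^2 secular TSI increase would then imply:
print(round(sensitivity * 1.0, 2), "mm of steric rise")  # ~0.08 mm
```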
Jacob, you said,
“Pamela, much SWR is absorbed by land regions… some remains, yes… LWR is trapped by GHG’s, and less SWR is reflected back as ice cover melts… to keep it out of contention, say the Artic (sic), as opposed to the Antartic, where there is no controversy over the quickly meltng (sic) ice and increases in Methane loss there. As SWR and LWR accumulate in conjunction the Earth holds in more transfer energy which is defined as heat and the global mean temperatures go up.”
70 to 72% of the Earth is covered with oceans. SWR penetrates water much better than it does land, and land emits LWR much faster than water, which is one of the reasons why land cools faster. Therefore the majority of SWR that gets through the atmosphere without being reflected back into space sinks into the oceans.
LWR is not necessarily trapped by GHG’s. If it were, we would not be able to measure it at the outer edge of the atmosphere.
Given the degree of ice melt, just how much do you think SWR reflection has been reduced? It can be calculated. Determine insolation (the amount of SWR that gets through the atmosphere) for Summer over the Arctic. Obtain the calculation for snow reflection of SWR. Calculate the change in amount reflected by the greatest Summer ice cover and the least Summer ice cover. What do you think the % decrease will be and is that significant in terms of climate?
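The estimate suggested here can be sketched with rough numbers. None of the figures below come from the thread; they are assumed placeholders chosen only to show the form of the calculation, and the real inputs would need to be measured values:

```python
# Rough form of the Arctic albedo-change estimate suggested above.
# ALL numbers are assumed placeholders, not measured values.
summer_insolation = 200.0   # W/m^2, assumed mean Arctic summer insolation
albedo_ice = 0.6            # assumed reflectivity of sea ice/snow
albedo_ocean = 0.1          # assumed reflectivity of open ocean
ice_area_lost = 2.0e12      # m^2, assumed max-minus-min summer ice extent
earth_area = 5.1e14         # m^2, total surface area of the Earth

# Extra shortwave absorbed where ice gave way to open water:
extra_absorbed = summer_insolation * (albedo_ice - albedo_ocean) * ice_area_lost

# Spread over the whole globe, and over the roughly half-year of
# Arctic daylight:
global_forcing = extra_absorbed / earth_area * 0.5
print(round(global_forcing, 2), "W/m^2 global-mean equivalent")
```

With these placeholder inputs the answer lands around a few tenths of a W/m^2; whether that is significant is exactly the question Pamela is posing.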
Methane monitoring is not picking up an increase in Arctic or even subarctic methane. Where did you get your data from?
Based on incoming SWR and outgoing LWR, there has been no measured accumulation of either type inside our GHG blanket. Where did you get your data from?