How Sensitive is the Earth’s Climate?

Guest Post By Steve Fitzpatrick


Introduction

Projections of climate warming from general circulation models (GCM’s) are based on a high sensitivity of the Earth’s climate to radiative forcing from well mixed greenhouse gases (WMGG’s).  This high sensitivity depends mainly on three assumptions:

1. Slow heat accumulation in the world’s oceans delays the appearance of the full effect of greenhouse forcing by many (e.g., >20) years.

2. Aerosols (mostly from combustion of carbon based fuels) increase the Earth’s total albedo, and have partially hidden the ‘true’ warming effect of WMGG increases.  Presumably, aerosols will not increase in the future in proportion to increases in WMGG’s, so the net increase in radiative forcing will be larger for future emissions than for past emissions.

3. Radiative forcing from WMGG’s is amplified by strong positive feedbacks due to increases in atmospheric water vapor and high cirrus clouds; in the GCM’s, these positive feedbacks approximately double the expected sensitivity to radiative forcing.

However, there is doubt about each of the above three assumptions.

1.  Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001.  This indicates that a) ocean heat content does not lag many years behind the surface temperature, b) global average temperature and heat accumulation in the top 700 meters of ocean are closely tied, and c) the Hansen et al (4) projection in 2005 of substantial future warming ‘already in the pipeline’ is not supported by recent ocean and surface temperature measurements.  While there is no doubt a very slow accumulation of heat in the deep ocean below 700 meters, this represents only a small fraction of the accumulation expected for the top 700 meters, and should have little or no immediate (century or less) effect on surface temperatures. The heat content in the top 700 meters of ocean and global average surface temperature appear closely linked.  Short ocean heat lags are consistent with relatively low climate sensitivity, and preclude very high sensitivity.

2.  Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings.  There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future.  Considering the large uncertainties in aerosol effects, it is not even clear if the net effect, including black carbon, which reduces rather than increases albedo, is significantly different from zero.

3.  Amplification of radiative forcing by clouds and atmospheric humidity remains poorly defined.  Climate models do not explicitly include the behavior of clouds, which are orders of magnitude smaller than the scale of the models, but instead handle clouds using ‘parameters’ that are adjusted to approximate the expected behavior of clouds.  Adjustable parameters can of course also be tuned to make a model predict whatever warming is expected or desired.  Measured tropospheric warming in the tropics (the infamous ‘hot spot’) caused by increases in atmospheric water content falls far short of the warming in this part of the atmosphere projected by most GCM’s.  This casts doubt on the amplification assumed by the GCM’s due to increased water vapor.

Many people, including this author, do not believe the large temperature increases (up to 5+ C for a doubling of CO2) projected by GCM’s are credible.  A new paper by Lindzen and Choi (described at WUWT on August 23, 2009) reports that the total outgoing radiation (visible plus infrared) above the tropical ocean increases when the ocean surface warms, which suggests the climate feedback (at least in these tropical ocean areas) is negative, rather than positive as the GCM’s all assume.

In spite of the many problems and doubts with GCM’s:

1) It is reasonable to expect that positive forcing, from whatever source, will increase the average temperature of the Earth’s surface.

2) Basic physics shows that increasing infrared absorbing gases in the atmosphere like CO2, methane, N2O, ozone, and chloro-fluorocarbons inhibits the escape of infrared radiation to space, and so does provide a positive forcing.

3) There has in fact been significant global warming since the start of the industrial revolution (beginning a little before 1800), concurrent with significant increases in WMGG emissions from human activities.

There really should be an increase in average surface temperature due to forcing from increases in infrared absorbing gases.  This is not to say that there are no other plausible explanations for some or even most of the increases in global temperatures over the past 100+ years.  For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the Little Ice Age and the late 20th century, which could account for a significant fraction of the observed warming.  But regardless of other possible contributions, it is impossible to refute that greenhouse gases should lead to increased global average temperatures.  What matters is not whether the Earth will warm from increases in WMGG’s, but how much it will warm and over what period.  The uncertainties and dubious assumptions in the GCM’s make them not terribly helpful in making reasonable projections of potential warming, even if you assume the worst case that WMGG’s are the principal cause of warming.

Climate Sensitivity

If we knew the true climate sensitivity of the Earth (expressed as degrees increase per watt/square meter forcing) and we knew the true radiative forcing due to WMGG’s, then we could directly calculate the expected temperature rise for any assumed increases in WMGG’s.  Fortunately, the radiative forcing effects for WMGG’s are pretty accurately known, and these can be used in evaluating climate sensitivity.   An approximate value for climate sensitivity in the absence of any feedbacks, positive or negative, can be estimated from the change in blackbody emission temperature that is required to balance a 1 watt per square meter increase in heat input, using the Stefan-Boltzmann Law.  Assuming solar intensity is 1366 watts/M^2, and assuming the Earth’s average albedo is ~0.3, the net solar intensity is ~239 watts/M^2, requiring a blackbody temperature of 254.802 K to balance incoming heat.  With 1 watt/M^2 more input, the required blackbody emission temperature increases to 255.069 K, so the expected climate sensitivity is (255.069 – 254.802) = 0.267 degree increase for one watt per square meter of added heat.
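The blackbody arithmetic above is easy to verify directly. A minimal sketch (Python), using the same solar intensity and albedo assumed in the text:

```python
# Sensitivity of a blackbody emitter to a 1 watt/M^2 forcing increase,
# reproducing the numbers in the text.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def blackbody_temp(flux):
    """Temperature (K) required to radiate `flux` W/m^2 to space."""
    return (flux / SIGMA) ** 0.25

solar = 1366.0                     # solar intensity, W/m^2
albedo = 0.30
net = solar / 4 * (1 - albedo)     # ~239 W/m^2, averaged over the sphere

t0 = blackbody_temp(net)           # ~254.8 K
t1 = blackbody_temp(net + 1.0)     # ~255.07 K
sensitivity = t1 - t0              # ~0.267 K per watt/M^2
```

The division by 4 spreads the intercepted solar beam over the whole spherical surface.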

But solar intensity and the blackbody emission temperature of the earth both change with latitude, yielding higher emission temperature and much greater heat loss near the equator than near the poles.  The infrared heat loss to space goes as the fourth power of the emission temperature, so the net climate sensitivity will depend on the T^4 weighted contributions from all areas of the Earth.  Feedbacks within the climate system, both positive and negative, including different amounts and types of clouds, water vapor, changes in albedo, and potentially many others, add much uncertainty.

Measuring Earth’s Sensitivity

The only way to accurately determine the Earth’s climate sensitivity is with data.

Bill Illis produced an outstanding guest post on WUWT November 25, 2008, where he presented the results of a simple curve-fit model of the Earth’s average surface temperature based on only three parameters:  1) the Atlantic multi-decadal oscillation index (AMO), 2) values of the Nino 3.4 ENSO index, and 3) the log of the ratio of atmospheric CO2 concentration to the starting CO2 concentration.  Bill showed that the best estimate linear fit of these parameters to the global mean temperature data could account for a large majority of the observed temperature variation from 1871 to 2008.  He also showed that the AMO index and the Nino 3.4 index contributed little to the overall increase in temperature during that period, but did account for much of the variation around the overall temperature trend.  The overall trend correlated well with the log of the CO2 ratio.  In other words, the AMO and Nino3.4 indexes could hindcast much of the observed variation around the overall trend, and that overall trend could be accurately hindcast by the log of the CO2 ratio.
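The structure of Bill’s fit can be illustrated with a short regression sketch. The data below are synthetic stand-ins (the real model used the actual AMO, Nino 3.4, CO2, and temperature records), but the mechanics are the same: ordinary least squares separates the trend contribution (log CO2 ratio) from the oscillation contributions (AMO, Nino 3.4):

```python
import numpy as np

# Synthetic stand-in data for the three-parameter fit (illustrative only).
rng = np.random.default_rng(0)
n = 138                                        # years 1871-2008
amo = rng.normal(0, 0.2, n)                    # stand-in AMO index
nino = rng.normal(0, 1.0, n)                   # stand-in Nino 3.4 index
co2 = 290.0 * np.exp(np.linspace(0, 0.3, n))   # smooth CO2 rise from 290 PPM
log_ratio = np.log(co2 / co2[0])

# "True" temperature: a trend from log(CO2 ratio) plus oscillation terms.
temp = 2.0 * log_ratio + 0.5 * amo + 0.1 * nino + rng.normal(0, 0.05, n)

# Linear least-squares fit of temperature against the three parameters.
X = np.column_stack([amo, nino, log_ratio, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
# coef recovers roughly [0.5, 0.1, 2.0, 0.0]: each contribution separated
```

With real data, the fitted coefficient on the log CO2 ratio is what carries the sensitivity estimate discussed below.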

There are a few implicit assumptions in Bill’s model.  First, the model assumes that all historical warming can be attributed to radiative forcing.  This is a worst case scenario, since other potential causes for warming are not even considered (long term solar effects, long term natural climate variability, etc.).  The climate sensitivity calculated by the model would be lowered if other causes account for some of the measured warming.

Second, the model assumes the global average temperature changes linearly with radiative forcing.  While this is almost certainly not correct for Earth’s climate, it is probably not a bad approximation over a relatively small range of temperatures and total forcings.  That is, a change of a few watts per square meter is small compared to the average solar flux reaching the Earth, and a change of a few degrees in average temperature is small compared to Earth’s average emissive (blackbody) temperature.  So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.

Third, the model assumes that the combined WMGG forcings can be accurately represented by a constant multiplied by the log of the ratio of CO2 to starting CO2.  While this may be a reasonable approximation for some gases, like N2O and methane (at least until ~1995), it is not a good approximation for others, like chloro-fluorocarbons, which did not begin contributing significantly to radiative forcing until after 1950, and which are present in the atmosphere at such low concentration that they absorb linearly (rather than logarithmically) with concentration.  In addition, chloro-fluorocarbon concentrations will decrease in the future rather than increase, since most long lived CFC’s are no longer produced (due to the Montreal Protocol), and what is already in the atmosphere is slowly degrading.

To make Bill’s model more physically accurate, I made the following changes:

1.  Each of the major WMGG’s is separated and treated individually: CO2, N2O, methane, chloro-fluorocarbons, and tropospheric ozone.

2.  Concentrations of each of the above gases are converted to net forcings, using the IPCC’s radiation equations for CO2, methane, N2O, and CFC’s (6), and an estimated radiative contribution from ozone increases.

3.  The change in solar intensity with the solar cycle is included as a separate forcing, assuming that measured intensity variations for the last three solar cycles (about 1 watt per square meter variation over a base of 1365 watts per square meter) are representative of earlier solar cycles, and assuming that sunspot number can be used to estimate how solar intensity varied in the past.

4.  The grand total forcing (including the solar cycle contribution), a 2-year trailing average of the AMO index, and the Nino 3.4 index are correlated against the Hadley Crut3V global average temperature data.

This yields a curve fit model which can be used to estimate future warming by setting the Nino 3.4 and AMO indexes to zero (close to their historical averages) and estimating future changes in atmospheric concentrations for each of the infrared absorbing gases.
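The conversion of concentrations to forcings in step 2 can be sketched with the IPCC’s simplified expressions. The coefficients below (5.35 W/M^2 in the logarithmic CO2 form, ~0.25 W/M^2 per ppb for CFC-11’s linear form) are the commonly quoted TAR values, used here as illustrative assumptions:

```python
import math

def forcing_co2(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) from CO2: logarithmic in concentration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def forcing_cfc(c_ppb, efficiency=0.25):
    """Linear forcing for a low-concentration trace gas,
    e.g. ~0.25 W/m^2 per ppb for CFC-11."""
    return efficiency * c_ppb

# Doubling CO2 gives ~3.71 W/m^2, while a CFC at ppb-level concentration
# absorbs in proportion to its amount rather than its logarithm.
f2x = forcing_co2(560.0)
```

This is why treating each gas separately matters: lumping CFC’s into a single log-of-CO2 term misrepresents how their forcing grows (and will shrink) with concentration.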

Figure 1 Model results with temperature projection to 2060

To find the best estimate of lag in the climate (mainly from ocean heat accumulation), the model constants were calculated for different trailing averages of the total radiative forcing.  The best fit to the data (highest R^2) was for a two year trailing average of the total radiative forcing, which gave a net climate sensitivity of 0.270 (+/-0.021) C per watt/M^2 (+/-2 sigma).  All longer trailing average periods yielded somewhat lower R^2 values and produced somewhat higher estimates of climate sensitivity.  A 5-year trailing average yields a sensitivity of 0.277 (+/- 0.021) C per watt/M^2, a 10 year trailing average yields a sensitivity of 0.289 (+/- 0.022) C per watt/M^2, and a 20 year trailing average yields a sensitivity of 0.318 (+/- 0.025) C per watt/M^2, ~18% higher than a two year trailing average.  As discussed above, very long lags (e.g., 10-20+ years) appear inconsistent with recent trends in ocean heat content and average surface temperatures.

Oscillation in the radiative forcing curve (the green curve in Figure 1) is due to solar intensity variation over the sunspot cycle.  The assumed total variation in solar intensity at the top of the atmosphere is 1 watt per square meter (approximately the average variation measured over the last three solar cycles) for a change in sunspot number of 140.  Assuming a minimum solar intensity of 1365 watts per square meter and Earth’s albedo at 30%, the average solar intensity over the entire Earth surface at zero sunspots is (1365/4) * 0.7 = 238.875 watts per square meter, while at a sunspot number of 140, the average intensity increases to 239.05 watts per square meter, or an increase of 0.175 watt per square meter.  The expected change in radiative forcing (a “sunspot constant”) is therefore 0.175/140 = 0.00125 watt per square meter per sunspot.  When different values for this constant are tried in the model, the best fit to the data (maximum R^2) is for ~0.0012 watt/M^2 per sunspot, close to the above calculated value of 0.00125 watt/M^2 per sunspot.
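The “sunspot constant” arithmetic in the paragraph above, reproduced as a check:

```python
# Change in globally averaged forcing per unit of sunspot number,
# using the values assumed in the text.
albedo = 0.30
tsi_min = 1365.0                  # solar intensity at zero sunspots, W/m^2
tsi_max = tsi_min + 1.0           # ~1 W/m^2 swing over a solar cycle

avg_min = tsi_min / 4 * (1 - albedo)     # 238.875 W/m^2
avg_max = tsi_max / 4 * (1 - albedo)     # 239.05 W/m^2

# 0.175 W/m^2 of average forcing spread over a 140-sunspot swing:
per_sunspot = (avg_max - avg_min) / 140  # 0.00125 W/m^2 per sunspot
```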

Figure 2 Scatter plot of the model versus historical temperatures
Figure 3 Comparison of the model’s temperature projection under ‘Business as Usual’ with the IPCC projection of ~0.2C per decade, consistent with GCM projections.

Regional Sensitivities

Amplification of sensitivity is the ratio of the actual climate sensitivity to the sensitivity expected for a blackbody emitter.  The sensitivity from the model is 0.270 C per watt/M^2, while the expected blackbody sensitivity is 0.267 C per watt/M^2, so the amplification is 1.011.  An amplification very close to 1 suggests that all the negative and positive feedbacks within the climate system are roughly balanced, and that the average surface temperature of the Earth increases or decreases approximately as would a blackbody emitter subjected to small variations around the average solar intensity of ~239 watts/M^2 (that is, as a blackbody would vary in temperature around ~255 K).  This does not preclude a range of sensitivities within the climate system that average out to ~0.270 C per watt/M^2; sensitivity may vary based on season, latitude, local geography, albedo/land use, weather patterns, and other factors.  The temperature increase due to WMGG’s may have, and indeed should have, significant regional and temporal differences, so the importance of warming driven by WMGG’s should also have significant regional and temporal differences.

Credibility of Model Projections

Some may argue that any curve fit model based on historical data is likely to fail in making accurate predictions, since the conditions that applied during the hindcast period may be significantly different from those in the future.  But if the curve fit model includes all important variables, then it ought to make reasonable predictions, at least until/unless important new variables are encountered in the future.  Examples of important new climate variables are a major volcanic eruption or a significant change in ocean circulation.  The probability of encountering important new variables increases with the length of the forecast, of course.  So while a curve-fit climate model’s predictions will have considerable uncertainty far in the future (e.g., 100 years or more), forecasts over shorter periods are likely to be more accurate.

To demonstrate this, the model constants were calculated using temperature, WMGG forcings, AMO, and Nino3.4 data for 1871 to 1971, but then applied to all the 1871 to 2008 data (Figure 4).  The model’s calculated temperatures represent a ‘forecast’ from 1972 through 2008, or 36 years.  Since the model constants came only from pre-1972 data, the model has no ‘knowledge’ of the temperature history after 1971, and the 1972 to 2008 forecast is a legitimate test of the model’s performance.  The model’s 1972 to 2008 forecast performance is reasonably good, with very similar deviations between the model and the historical temperature record in the hindcast and forecast periods.

Figure 4 Model temperature forecast for 1972 through 2008, with model constants based on 1871 to 1971. The model has no “knowledge” of the temperature record after 1971.

The model fit to the temperature data in the forecast period is no worse than in the hindcast period.   The climate sensitivity calculated using only 1871 to 1971 data is similar to that calculated using the entire data set: 0.255 C per watt/M^2 versus 0.270 C per watt/M^2.  A model forecast starting in 2009 will not be perfect, but the 1972 to 2008 forecast performance suggests that it should be reasonably close to correct over the next 36+ years.
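The fit-then-forecast test described above can be sketched as a simple train-and-extrapolate exercise. The data here are synthetic stand-ins (a linear forcing ramp plus noise), not the actual records, but the procedure is the same: fit the constants on the early portion only, then check whether the forecast residuals stay near the fit residuals:

```python
import numpy as np

# Out-of-sample test with synthetic stand-in data (illustrative only).
rng = np.random.default_rng(1)
n, n_train = 138, 100                             # "1871-2008", train to "1971"
forcing = np.linspace(0.0, 2.5, n)                # stand-in total forcing, W/m^2
temp = 0.27 * forcing + rng.normal(0, 0.08, n)    # "observed" temperature

# Fit the constants on the training years only.
X = np.column_stack([forcing, np.ones(n)])
coef, *_ = np.linalg.lstsq(X[:n_train], temp[:n_train], rcond=None)

# Forecast the held-out years and measure the error.
forecast = X[n_train:] @ coef
rms_error = np.sqrt(np.mean((forecast - temp[n_train:]) ** 2))
# rms_error stays near the noise level (~0.08), i.e. the fit generalizes
```

If an important variable were missing, the forecast residuals would grow systematically instead of staying at the noise level.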

Emissions Scenarios

The model projections in Figure 1 (2009 to 2060) are based on the following assumptions:

a) The year on year increase in CO2 concentration in the atmosphere rises to 2.6 PPM per year by 2015 (or about 25% higher than recent rates of increase), and then remains at 2.6 PPM per year through 2060.  Atmospheric concentration reaches ~518 PPM by 2060.

b) N2O concentration increases in proportion to the increase in CO2.

c) CFC’s decrease by 0.25% per year.  The actual rate of decline ought to be faster than this, but large increases in releases of short-lived refrigerants like R-134a and non-regulated fluorinated compounds may offset a large portion of the decline in regulated CFC’s.

d) The concentration of methane, which has been constant for the last ~7 years at ~1,800 parts per billion, increases by 10 PPB per year, reaching ~2,370 PPB by 2060.

e) Tropospheric ozone (which forms in part from volatile organic compounds, VOC’s) increases in proportion to increases in atmospheric CO2.

The above represent pretty much a “business as usual” scenario, with fossil fuel consumption in 2060 more than 70% higher than in 2008, and with no new controls placed on other WMGG’s.  The projected temperature increase from 2008 to 2060 is 0.6834 C, or 0.131 C per decade.  This assumes of course that WMGG’s are responsible for all (or nearly all) the warming since 1871; if a significant amount of the warming since 1871 had other causes, then future warming driven by WMGG’s will be less.
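The “business as usual” CO2 path works out in a few lines. The 2008 starting concentration (~385.6 PPM) and the shape of the ramp up to 2.6 PPM per year are assumptions chosen here to match the stated endpoints:

```python
# "Business as usual": annual CO2 increment ramps from a recent ~2.0 ppm/yr
# to 2.6 ppm/yr by 2015, then holds there through 2060.
# The 2008 starting value (~385.6 ppm) is an assumption.
conc = 385.6
for year in range(2009, 2061):
    if year < 2015:
        inc = 2.0 + (2.6 - 2.0) * (year - 2008) / 7.0  # linear ramp to 2.6
    else:
        inc = 2.6
    conc += inc
# conc ends near the ~518-519 PPM stated in the scenario
```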

Separation of the different contributions to radiative forcing allows projections of future average temperatures under different scenarios for reductions in the growth of fossil fuel usage, with separate efforts to control emissions of methane, N2O, and VOC’s (leading to tropospheric ozone).

Figure 5 Reduced warming via controls on non-CO2 emissions and gradually lower CO2 emissions growth.

One such scenario can be called the “Efficient Controls” scenario.  The year on year increase in CO2 in the atmosphere rises to 2.6 PPM by 2014, and then declines starting in 2015 by 0.5% per year (that is, a 2.6 PPM increase in 2014, a 2.587 PPM increase in 2015, a 2.574 PPM increase in 2016, etc.).  Methane concentrations are maintained at current levels via controls installed on known sources, CFC concentration falls by 0.5% per year due to new restrictions on currently non-regulated compounds, and N2O and tropospheric ozone increases are proportional to the (somewhat lower) CO2 increases.  These are far from small changes, but they probably could be achieved without great economic cost by shifting most electric power production to nuclear (or non-fossil alternatives where economically viable), and simultaneously taxing CO2 emissions worldwide at an initially low but gradually increasing rate to promote worldwide improvements in energy efficiency.   Under these conditions, the predicted temperature anomaly in 2060 is 0.91 degree (versus 0.34 degree in 2008), or a rise of 0.109 degree per decade.  Atmospheric CO2 would reach ~507 PPM by 2060, and CO2 emissions in 2060 would be about 50% above 2008 emissions.  By comparison, the “business as usual” case produces a projected increase of 0.131 C per decade through 2060, and atmospheric CO2 reaches ~519 PPM by 2060.  So at (relatively) low cost, warming through 2060 could be reduced by a little over 0.11 C compared to business as usual.
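The “Efficient Controls” CO2 trajectory can be computed the same way; again, the ~385.6 PPM starting value for 2008 and the ramp shape up to 2014 are assumptions chosen to match the stated numbers:

```python
# "Efficient Controls": annual CO2 increment reaches 2.6 ppm by 2014,
# then declines by 0.5% per year from 2015 onward.
# The 2008 starting value (~385.6 ppm) is an assumption.
conc = 385.6
inc = 2.0
for year in range(2009, 2061):
    if year <= 2014:
        inc = 2.0 + (2.6 - 2.0) * (year - 2008) / 6.0  # ramp to 2.6 by 2014
    else:
        inc *= 0.995        # 2.587 ppm in 2015, 2.574 ppm in 2016, ...
    conc += inc
# conc ends near the ~507 PPM stated in the scenario
```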

A “Draconian Controls” scenario, with new controls on fluorinated compounds, methane and VOC’s, and with the rate of atmospheric CO2 increase declining by 2% each year, starting in 2015, shows the expected results of a very aggressive worldwide program to control CO2 emissions.  The temperature anomaly in 2060 is projected at 0.8 C, for a rate of temperature rise through 2060 of 0.088 degree per decade, or ~0.11 C lower temperature in 2060 than for the “Efficient Controls” scenario.  Under this scenario, the concentration of CO2 in the atmosphere would reach ~480 PPM by 2060, but would rise only ~25 PPM more between 2060 and 2100.  Total CO2 emissions in 2060 would be ~15% above 2008 emissions, but would have to decline to the 2008 level by 2100.  Whether the potentially large economic costs of draconian emissions reductions are justified by a ~0.11C temperature reduction in 2060 is a political question that should be carefully weighed.

Figure 6 Draconian emissions controls may reduce average temperature in 2060 by ~0.21C compared to business as usual.

Conclusions

The model shows that the climate sensitivity to radiative forcing is approximately 0.27 degree per watt/M^2, based on the assumption that radiative forcing from WMGG’s has caused all or nearly all the measured temperature increase since ~1871.  This corresponds to a response of ~1 C for a doubling of CO2 (with other WMGG’s remaining constant).  Much higher climate sensitivities (e.g., 0.5 to >1.0 C per watt/M^2, or 1.85 C to >3.71 C for a doubling of CO2) appear to be inconsistent with the historical record of temperature and measured increases in WMGG’s.
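The conversion from per-watt sensitivity to warming per CO2 doubling is a one-line calculation, using the IPCC simplified forcing expression for CO2:

```python
import math

# Linking the fitted sensitivity to the familiar "per doubling" figure.
sensitivity = 0.27               # C per watt/M^2, from the fit above
f_2x = 5.35 * math.log(2.0)      # ~3.71 W/m^2 forcing for doubled CO2
warming_2x = sensitivity * f_2x  # ~1.0 C per doubling

# The same conversion shows why 0.5 and 1.0 C per watt/M^2 correspond to
# 1.85 C and 3.71 C per doubling, as quoted in the conclusion.
```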

Assuming no significant changes in the growth pattern of fossil fuels, and no additional controls on other WMGG’s, the average temperature in 2060 may reach ~0.68C higher than the 2008 average.  Modest steps to control non-CO2 emissions and gradually reduce the rate of increase in the concentration of CO2 in the atmosphere could yield a reduction in WMGG driven warming between 2008 and 2060 of ~15% compared to no action.  A rapid reduction in the rate of growth of atmospheric CO2 would be required to reduce WMGG driven warming between 2008 and 2060 by ~30% compared to no action.

334 Comments
crosspatch
August 10, 2009 7:59 pm

“and yet the Anartic is still warming”
No. It isn’t.
The articles you link to are full of the usual buzz words such as:
“He says the changes could be responsible for up to 20% of the observed global sea-level rise.”
Which is pure blither. The oceans have been rising at a fairly steady rate for about the last thousand years, long before we started using fossil fuels. Oceans reached their maximum height about 7,000 years ago, when they were about 2 meters higher than now. In fact, the trend in sea level rise since 2006 is flat, with the rise that had been going along at a fairly steady rate suddenly stopping.
The oceans aren’t warming either. Data from the ARGO project show flat to slightly cooling ocean temperatures since the project started.
The articles you provide are basically not true but have become “conventional wisdom”. The seas are not rising, Antarctica is accumulating ice, the oceans are not warming, the atmosphere is not warming … and you will see 2009 end up with more ice than 2008 if the current trend holds. It looks like the melt is coming to an end early this year … before the Northwest passage has had a chance to open.
If you dig into those stories that you read, you will learn that they are not backed up by the data. They pretty much all simply repeat each other’s data, and since that is what people are taught is “fact” these days, you are primed to receive it as such. Basically you are being hoodwinked in order to get you to agree to give up a good portion of your cash in order to “save the planet”.

Jacob Mack
August 10, 2009 8:05 pm

I will continue my responses tomorrow to Steve and Pamela.

Pamela Gray
August 10, 2009 8:36 pm

Jacob, are you referring to floating ice calved from glaciers or floating ice from freezing ocean saltwater seas? Floating ice calved from glaciers is indeed freshwater sourced. Floating ice from freezing seas is salt water to begin with and when melted returns to salt water. Unless this kind of ice is capable of creation, it cannot have a greater volume as a liquid than it began with before it froze and then reintegrated back into the sea from whence it came. If there is a common misconception, it is this: that the Arctic ice cap is frozen water from river sources, accumulated snow, and ice from rain.

Jacob Mack
August 10, 2009 9:20 pm

Pamela,
I noticed your latest response in my email, so I decided to reply now. The floating ice calved from glaciers has some salinity, albeit lower; however, the changes in density due to the salt water dissolving the mostly fresh water ice sourced from the glaciers will increase the volume of the water in addition to thermal expansion. So let us say thermal expansion is currently flat; well, the salinity of the water would still change the density, which would change the volume of the water. Since I am posting, I will also ask you to respond to a past post where you asked me about heat transfer of oceans and terrestrial land masses in a recent thread, since you never got back to me there either.
Now, regarding incoming SW radiation and LW radiation, there is a lot of literature which exists showing good approximations and results of LW radiation trapping, and I will be more than happy to post links up tomorrow.
Steve, in the interest of being more efficient in my posts: the IPCC reports are an analysis of past and recent literature, data, GCM’s and so forth, and of course low, median and high end estimates are made regarding such things as aerosol effects, incoming/outgoing radiation and so forth; however, there is recent and well validated research highlighting aerosol effects and water vapor rise, which I will post tomorrow. You may find plenty of your own at NOAA, NASA GISS, NASA.GOV, AOS Princeton and so forth, but I will paste the most relevant links tomorrow.
I will say now, however, that ocean cooling does not invalidate water vapor feedbacks whatsoever, as this is a natural process through conduction, for example. As sea surfaces cool and heat is transferred vertically, more heat is added to the atmosphere where increased levels of H2O/CO2 can trap it. Keep in mind that evaporation has a 50% cooling effect on seas whereas conduction has only about a 10% total effect, so the escaping heat is more easily trapped by higher GHG atmospheric levels, and coupled with the high residence time of CO2 provides a forcing upon the water vapor, which is itself a positive feedback, and in turn more heat returns to the planet surface. Since land is more greatly affected by heat transfer due to temperature differences, the land fluctuates greatly, while even as the sea absorbs more heat, its temp will not change due to the high heat capacity and specific heat. Stratospheric warming has been shown in several studies, along with increases in water vapor in the middle to upper troposphere, even as ARGO floats show some ephemeral cooling.
Recently, ARGO floats had to be recalibrated, and when they were, it was revealed the level of cooling was not as significant as it was previously being recorded. Also, ocean cooling has been predicted by the models for the past 25 years and has been well expected and predicted in several papers since the 1980’s. It was first hypothesized in the late 1970’s. Heat over time can also travel to deeper depths of the sea as well and mix, as can CO2, which in combination with various natural processes halts warming of water and/or creates a cooling period, though there is no cooling trend of the bodies of water on this planet either.
The doubling of CO2 will not necessarily lead to equilibrium immediately, and in fact several papers cite that it may be some time after the doubling of CO2 from pre-industrial levels that equilibrium may be reached and the full climate sensitivity may be realized/reached through a global mean temperature increase. The range is between 2 degrees C and 4.6 (some ranges say 1.5 as a starting point, and 5-6 degrees as not ruled out), but the median clustering is about 3 degrees C, which has been demonstrated to be an accurate approximation in far more than 4 peer reviewed papers. So, if we forget about the IPCC for a minute and Hansen’s estimates, we see a strong agreement with high confidence that the temp increase will be greater than 2 degrees C and the clustering is at about 3 degrees C. Now, averaging in the IPCC report and looking at Hansen’s median (or even low projections) as well as GCM’s with more conservative predictions based upon the physics and the central theorem, we are looking at about a 3.5 degree C increase in global mean temp, or so. It is impossible to get from the IPCC report, the peer reviewed literature (99.5% of it), and the most recent updated data a prediction of a global mean temp increase of less than 2 degrees C. The statistics show 3 degree C clustering from many sources which have been repeated many times. You are ignoring the impact of short term sea lag and transfer of heat due to temp differences, though admittedly it is very complex in such a chaotic system (as Pamela is quick and correct to point out), but it seems to me you have not considered the system enough in your analysis.
I am not predicting a 5 degree C increase or immediate catastrophe at the doubling of CO2 over pre-industrial levels, but once equilibration has occurred we are looking at several catastrophic events, and prior to this many citizens of third and second world countries will die and become deathly ill as a result of global warming due to anthropogenic means and the natural variability response, as well as dimming/cooling as seen in the brown cloud.
The 1 degree C warming you predict is over halfway there now, as evidenced by global mean temperature analysis which has been repeated at least a hundred times. Hence a total 1 degree increase without considering equilibrium is impossible. AGW is >99% certain (really 100%, but all measurements contain uncertainty), a future of increased droughts and floods due to AGW is >90% certain (around 95%), and predictions for greatly detrimental weather patterns and climate disruptions are >66% (AR4), but in light of recent literature and empirical observations are >70%. I, for one, do not want to wager on those odds. We are approaching 1 degree global mean temperature warming now (or in the next 10 years), so I am confused as to how we can only have a 1 degree increase at equilibrium. The reaction will shift to the right, and even with chaotic weather, ENSO, and the like, the warming effect of GHG will continue to be part of the trend.
The GCMs are amazingly accurate and precise at this point, and in conjunction with real-world data input/updated into them, plus satellite/proxy data, it is clear that 1 degree of warming is a gross underestimate. I will paste the links tomorrow and, if I have time, some of my own calculations as well; if not, then within a few days I will show where I see you miscalculating and present my proofs.

Jacob Mack
August 10, 2009 9:27 pm

Oh, and interestingly enough, the planet has still been warming since 1998 even with 1998 being a warmer year…go figure…

Jacob Mack
August 10, 2009 9:31 pm

XBTs showed exaggerated warming, and Argo showed cooling due to bad sensors and to underestimates of how long it took the floats to reach depth; once the corrections were made, the cooling was shown not to exist and the warming was shown to be on the incline. Okay, now I will digress for the night.

August 10, 2009 9:48 pm

Jacob Mack:

I also want to add that precipitation will slow down Anartic (sic) warming and some evidence suggests that El Nino will also temporarily suppress warming magnitude, and yet the Anartic (sic) is still warming… I also have preparations to make for Steve, soon as he answers my initial questions.

How about you answering the question I’ve been asking you through a number of threads.
OK, here’s the question: You previously have stated, explicitly, that you had a B.S. in Chemistry. But then another poster provided educational links, none of which stated that Jacob Mack had been awarded a degree in Chemistry. Given those facts, my question is which school did you graduate from with a Chemistry degree, and what year was that?
Simple questions, and your two-part answer can be posted here in under a minute.
What are you waiting for?

Jacob Mack
August 10, 2009 9:55 pm

Smokey, one, you have been misquoting me the entire time, and two, the links you gave me used faulty methods, did not use proper references, and have not been repeated and validated. Come to think of it, you have not answered any of my questions, and you have not falsified AGW whatsoever.

August 10, 2009 10:20 pm

No, no, I’m not misquoting you at all. I’m just asking two simple questions.
Seems a guy would be proud of the school he graduated from. With his degree in… Chemistry.
So, what school and what year?

tallbloke
August 11, 2009 12:40 am

Leif Svalgaard (19:04:55) :
tallbloke (18:50:01) :
I pointed out the uncertainty in TSI values
The uncertainty is smaller than the solar min to max variation so is hardly relevant. Even a 1 W/m2 uncertainty translates into a 0.05K temperature signal which is negligible in the current context.

Hi Leif, we’ve rehearsed this argument elsewhere and we disagree on this. As far as I can see, your application of the Stefan-Boltzmann law doesn’t fit the context of a planet with a dynamic atmospheric system. Earth is not a snowball or a lump of coal, and doesn’t behave like either of them.
The calculations I did on ocean heat content changes due to insolation which you confirmed show that sunlight plus terrestrial atmospheric factors can lead to a 4W/m^2 ‘forcing’ on the decadal scale. And that wasn’t ‘peak to trough’ either.
On century-long timescales, I think we need to take into account the various issues with projecting the PMOD model beyond the data, the ACRIM data’s higher peak-trough amplitude, and the bigger than expected fall in TSI from the peak of cycle 23 to now. I think there is a higher climate sensitivity to changes in TSI than a simplistic analysis of the temperature data would indicate. This is due to the curve-flattening effect of the ocean’s ability to store solar energy (hiding it temporarily from the surface record at the top of the solar cycle), and El Niño’s tendency to occur at solar minimum (raising SSTs at the bottom of the solar cycle and thus further flattening the signal).
Outgoing longwave radiation from the surface jumped 4W/m^2 after 2000, and has stayed at that higher level since. The oceans are shedding some of the heat which my calcs show they have been gaining since the end of the little ice age some 300 years ago, barring some minor downtrends along the way.
This has kept things warmer than they would otherwise be for some 8 years now, which shows what a vast reservoir of heating energy the oceans contain. However, the ocean heat content is consequently diminishing, despite what Josh Willis’ refudging of the Argo data says, and even he has since admitted (and then recanted) that there has been a “slight fall” since 2003.
Since there is no big tropical tropospheric hotspot developing as a result of this increased OLR, the puny effect of the change in concentration of the trace gases the IPCC worry about is, well, puny, and nothing to worry about.
The bottom line is that in my opinion, your estimate of change in ‘effective temperature’ for a black body earth for a 1W/m^2 change in TSI may be correct, but your estimate of the climate sensitivity to that change is well off the mark.
This is why I say that the climate is very sensitive. But not to CO2. The evidence is to be found not in the atmosphere, but in the changes to ocean heat content (which are hidden from the surface record). Because of the ocean’s vast heat capacity (the top two metres can store as much heat as the entire atmosphere above it, as Bob Stevenson pointed out), and the ocean’s ability to move heat from the tropics towards the poles, the earth is a well moderated place to live. This gives the false impression that the climate is insensitive to changes in insolation levels.
Nothing could be further from the truth.

August 11, 2009 2:06 am

Jacob said
“The 1 degree C warming you predict is over halfway there, now as evidenced by global mean temperature analysis which is repeated at least a hundred times.”
I note with interest the precision with which you believe we can analyse modern temperatures and compare them to older ones.
So are you talking about modern warming whose temperatures are based on the Hadley network’s 20 global stations in the year 1850 (which reflects the little ice age), or James Hansen’s innovative 1880 figures based on a novel grid system, also referencing a small number of stations, which became the basis for the first IPCC report?
Both of these databases, of course, bear no relation to today’s stations in terms of subsequent changes in location, numbers, UHI effect, or poor siting.
Perhaps you are referring to modern cooler temperatures in relation to previous warmer eras, as evidenced by the MWP, the Roman optimum, or the Holocene optimum?
If you can confirm which, we can then compare like for like, rather than citing figures that assume greater accuracy than the methodology employed supports, or that ignore past warming episodes.
Tonyb

Leif Svalgaard
August 11, 2009 6:10 am

tallbloke (00:40:03) :
On century long timescales, I think we need to take into account the various issues with projecting the PMOD model beyond the data, and the ACRIM data’s higher peak-trough amplitude, and the bigger than expected fall in TSI from the peak of cycle 23 to now.
Whatever details you may ponder, the uncertainty is still less than the solar cycle variation. And I don’t know what PMOD ‘issues’ you are talking about going back in time [and I know PMOD quite well]. The ACRIM data seems to have a smaller peak-trough amplitude because the trough in 1996 was less deep. The very earliest data before 1980 I’d not make much of. The larger than expected PMOD drop is due to calibration errors, and ACRIM does not have a deeper minimum now than in 1986. But all of this doesn’t matter: There are good reasons to believe that the magnetic field is responsible for TSI variation and since the magnetic field now is just what it was 108 years ago, there is no reason to believe [and no evidence for it] that TSI was any different back then than now.

tallbloke
August 11, 2009 7:32 am

Leif Svalgaard (06:10:11) :
since the magnetic field now is just what it was 108 years ago, there is no reason to believe [and no evidence for it] that TSI was any different back then than now.

Yebbut, the sun’s just gone into a once-in-200-year funk, hasn’t it? Before then, the magnetic field (I assume you are talking about the solar dipole field?) shows a more or less steadily rising trend throughout the C20th.

Steve Fitzpatrick
August 11, 2009 7:41 am

Leif Svalgaard (18:58:51) :
“The peak to valley change depends on the size of the solar cycle and varies by a factor of 3 or more. A good median value is 0.1% of TSI or ~1.4 W/m2, for some cycles larger, for some smaller.”
Does the mean TSI over a whole cycle remain more or less constant from cycle to cycle, or does the mean TSI for a whole cycle depend on the level of solar activity. For example, if the peak of one cycle has a sunspot number of 75, and the peak of the next 150, would the average TSI over each cycle be the same, or would the cycle with lower peak activity have a lower average TSI?
“A simpler calculation is that the solar signal would then be 1/4 of 0.1% of the effective temperature or 0.025% of 288K = 0.07K [or C].”
Does the climate sensitivity not enter into the expected temperature change? If the climate sensitivity were 0.75 degrees per W/m^2, then a top-of-atmosphere variation of 1.4 W/m^2 would give about 0.25 * 0.7 * 1.4 * 0.75 = 0.184 K change, not close to the 0.07 K you note above. It seems to me that the above calculation implicitly assumes a sensitivity of about 0.21 K per W/m^2. Am I missing something?
“So for what it is worth: the model is consistent with no substantial change in cyclical variation over the past 130 years.
And I don’t understand this statement. Stefan-Boltzmann’s law hasn’t changed. So what is this ‘cyclical variation’?”
What I meant was that there was no obvious evidence for a big overall trend in TSI, past cycles were not very different from recent.
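The two back-of-envelope estimates in this exchange can be checked in a few lines. This is only a sketch: the 1.4 W/m^2 peak-to-valley swing, the 1/4 geometric factor, the 0.7 absorbed fraction, the 288 K surface temperature, and the 0.75 K per W/m^2 sensitivity are all figures quoted in the comments above, not independently sourced values.

```python
# Sketch of the two back-of-envelope solar-signal estimates discussed above.
# All inputs are figures quoted in the comments, not measured values.

TSI_SWING = 1.4    # W/m^2, peak-to-valley solar cycle swing (~0.1% of TSI)
GEOMETRY = 0.25    # sphere-to-disc averaging factor
ABSORBED = 0.7     # fraction absorbed (albedo ~0.3)
T_EFF = 288.0      # K, mean surface temperature used in the comments

# Leif's 'simpler calculation': 1/4 of the 0.1% swing, applied to T itself
leif_dT = GEOMETRY * 0.001 * T_EFF          # roughly 0.07 K

# Steve's version, with an assumed sensitivity of 0.75 K per W/m^2
SENSITIVITY = 0.75
steve_dT = GEOMETRY * ABSORBED * TSI_SWING * SENSITIVITY   # roughly 0.18 K

print(round(leif_dT, 3), round(steve_dT, 3))
```

Running this reproduces both numbers as stated in the thread; the disagreement is entirely about which sensitivity factor belongs in the last step.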

Leif Svalgaard
August 11, 2009 7:45 am

tallbloke (07:32:58) :
Yebbut, the sun’s just gone into a once-in-200-year funk, hasn’t it?
No, not at all, a 100 year funk, not 200. Cycle 23 was much like cycle 13, and cycle 24 is forecast to be like cycle 14.
Before then the magnetic field (I assume you are talking about the solar dipole field?) shows a more or less steadily rising trend throughout the C20th.
No, not at all. The heliomagnetic field shows the same ~100 year variation as solar activity. Here http://www.leif.org/research/Heliospheric-Magnetic-Field-Since-1835.png
In http://www.leif.org/research/Reply%20to%20Lockwood%20IDV%20Comment.pdf we debunk the idea of steady increase. Lockwood et al have come around to our view and now agree with our reconstruction. On http://www.leif.org/research/Heliospheric-Magnetic-Field-Since-1900.png we show their reconstruction as the green curve compared with ours [blue curve] and observations by spacecraft [red curve].

tallbloke
August 11, 2009 7:46 am

I should clarify that the magnetic field at minimum shows a more or less steadily rising trend through the C20th. And it should be noted that the shorter, more vigorous cycles of the latter half of the C20th, with their steep up- and down-ramps, meant a lot less downtime for the sun, and a lot more TSI overall, right up to 2003 or so.
In fairness to Steve, I don’t think we should swamp his thread with a continuation of our debate on all this stuff here, though he may want to consider these two points if he has previously simply accepted the facile argument that the last three solar cycles had lower maximum amplitudes than the highest one ever recorded, and that the sun is therefore out of step with the temperature record.

Leif Svalgaard
August 11, 2009 8:08 am

tallbloke (07:46:31) :
I should clarify that the magnetic field at minimum shows a more or less steadily rising trend through the C20th.
I thought that this
http://www.leif.org/research/Reply%20to%20Lockwood%20IDV%20Comment.pdf made it clear that there is no such rise.
The minimum in 2008 is on par with that in 1901. The minimum in 1996 with that in 1933. The minimum in 1965 on par with that in 1933 as well. The minimum in 1986 on par with 1945. The minima in the 19th century very much like the ones in the 20th: http://www.leif.org/research/Heliospheric-Magnetic-Field-Since-1835.png and the maxima too, BTW.
In other words, the variations of TSI minima have been much less than 1 W/m2.

tallbloke
August 11, 2009 8:10 am

Steve Fitzpatrick (07:41:12) :
What I meant was that there was no obvious evidence for a big overall trend in TSI, past cycles were not very different from recent.

See my points above. To illustrate them and their implications, some facts:
1) Sunspot numbers correlate well with TSI.
2) The average sunspot number from 1875 to 1935 was 42
3) The average sunspot number from 1945 to 2005 was 73
4) The average sunspot number from 1975 to 2005 was 68, so it didn’t drop much after the highest solar cycle ever recorded.
5) Above around 40, the oceans start to gain heat content.
6) Ocean heat content is the main driver of the sea surface and therefore air temperature on all timescales over a couple of months.
7) Ocean heat content is driven by the sun, not CO2, because longwave radiation doesn’t penetrate the ocean; it just causes more evaporation at the surface.
I’ll leave you to join the dots.

tallbloke
August 11, 2009 8:24 am

Leif Svalgaard (07:45:07) :
tallbloke (07:32:58) :
Yebbut, the sun’s just gone into a once-in-200-year funk, hasn’t it?
No, not at all, a 100 year funk, not 200. Cycle 23 was much like cycle 13, and cycle 24 is forecast to be like cycle 14.

We’ll see soon enough. 🙂
Before then the magnetic field (I assume you are talking about the solar dipole field?) shows a more or less steadily rising trend throughout the C20th.
No, not at all. The heliomagnetic field shows the same ~100 year variation as solar activity.
Well, your theory of a 100 year cycle is an interesting one, but I don’t think the data supports it all that well. As you said before, your re-evaluation of C19th solar activity is still in the works. Until it’s done, I’ll carry on using the sunspot numbers, which were generally lower in the C18th and C19th than they have been in the C20th since 1935. By the way, did you see my offer of help with the solar magnetic data digitisation on the “NASA admits possibility of Dalton minimum” thread?

Leif Svalgaard
August 11, 2009 8:30 am

tallbloke (08:10:12) :
I’ll leave you to join the dots.
Some more dots to connect:
Average TSI for
1830-1875 1365.98
1875-1930 1365.78
1930-1975 1365.94
1975-2009 1365.94

tallbloke
August 11, 2009 8:32 am

Leif Svalgaard (08:08:50) :
The minimum in 2008 is on par with that in 1901. The minimum in 1996 with that in 1933. The minimum in 1965 on par with that in 1933 as well. The minimum in 1986 on par with 1945. The minima in the 19th century very much like the ones in the 20th: http://www.leif.org/research/Heliospheric-Magnetic-Field-Since-1835.png and the maxima too, BTW.
In other words, the variations of TSI minima have been much less than 1 W/m2.

Well, maybe. Then again, maybe not. It depends whose data you use, and how you interpret it.
And regardless of all that, my observations of the overall high levels of TSI in the latter C20th due to short minima, swift up and downramps etc still stand. You never refute them, but you always obfuscate them with an avalanche of links and other matters.
For these simple and indisputable reasons, TSI in the second half of the C20th was much higher than in the first. End of.

Leif Svalgaard
August 11, 2009 8:36 am

tallbloke (08:24:08) :
Well your theory of a 100 year cycle is an interesting one, but I don’t think the data supports it all that well.
This is not a theory, but derived from the data. The good news is that HMF B is very well determined for the past 170+ years. Even our harshest critics now agree with us. So your statement that the data doesn’t ‘support it all that well’ is unfounded.
I’ll carry on using the sunspot numbers
Leif’s law: if data agree with your view they are good.
By the way, did you see my offer of help with the solar magnetic data digitisation on the NASA admits possibility of Dalton minimum thread?
I estimate the work to be of the order of 10 man-years. How many will you contribute? Anything helps.

Leif Svalgaard
August 11, 2009 8:39 am

tallbloke (08:32:10) :
For these simple and indisputable reasons, TSI in the second half of the C20th was much higher than in the first. End of.
1st half average 1365.85
2nd half average 1365.98
Much higher?
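For scale, that gap can be checked directly (a sketch only; the averages are the ones quoted just above, and the century halves are assumed to be 1900-1950 and 1950-2000):

```python
# Scale check on the two half-century TSI averages quoted above.
# The period boundaries (1900-1950, 1950-2000) are an assumption.
first_half = 1365.85   # W/m^2, quoted average for the first half of the C20th
second_half = 1365.98  # W/m^2, quoted average for the second half

diff = second_half - first_half     # about 0.13 W/m^2
pct = 100.0 * diff / first_half     # the gap as a percentage of TSI

print(round(diff, 2), round(pct, 4))
```

The difference works out to about 0.13 W/m^2, roughly one hundredth of one percent of TSI, which is the scale the two sides are disputing the significance of.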

tallbloke
August 11, 2009 8:41 am

Leif Svalgaard (08:30:46) :
tallbloke (08:10:12) :
I’ll leave you to join the dots.
Some more dots to connect:
Average TSI for
1830-1875 1365.98
1875-1930 1365.78
1930-1975 1365.94
1975-2009 1365.94

I thought you said there was still much work to do on the C19th, which is why I offered to help with the digitisation of the records. Why not put the data and methodology up for discussion in a post so we can all discuss it properly on a separate thread?

tallbloke
August 11, 2009 8:49 am

Leif Svalgaard (08:36:21) :
tallbloke (08:24:08) :
I’ll carry on using the sunspot numbers
Leif’s law: if data agree with your view they are good.

There are a couple of different ways that comment can be understood.
Your way, and my way. 😉
