Guest Post by Willis Eschenbach
OK, a quick pop quiz. The average temperature of the planet is about 14°C (57°F). If the earth had no atmosphere, and if it were a blackbody at the same distance from the sun, how much cooler would it be than at present?
a) 33°C (59°F) cooler
b) 20°C (36°F) cooler
c) 8°C (15°F) cooler
The answer may come as a surprise. If the earth were a blackbody at its present distance from the sun, it would be only 8°C cooler than it is now. That is to say, the net warming from our entire system, including clouds, surface albedo, aerosols, evaporation losses, and all the rest, is only 8°C above blackbody no-atmosphere conditions.
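The blackbody figure is easy to check from the Stefan-Boltzmann law. A minimal sketch (the solar constant of 1361 W/m2 is a standard value, not taken from the post):

```python
# Zero-albedo blackbody estimate for a sphere at Earth's orbit.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S0 = 1361.0        # solar constant at Earth's orbit, W/m^2 (standard value)

# Energy balance for a rotating blackbody sphere:
# absorbed S0/4 = emitted sigma*T^4  ->  T = (S0 / (4*sigma))**0.25
T_bb = (S0 / (4 * SIGMA)) ** 0.25
print(f"Blackbody temperature: {T_bb - 273.15:.1f} C")        # about 5 C
print(f"Cooler than 14 C by:   {14 - (T_bb - 273.15):.1f} C")  # roughly 8-9 C
```

With the older 1367 W/m2 solar constant the answer barely moves, so the "8°C cooler" claim is in the right ballpark either way.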
Why is the temperature rise so small? Here’s a diagram of what is happening.
Figure 1. Global energy budget, adapted and expanded from Kiehl/Trenberth. Values are in Watts per square metre (W/m2). Note the top of atmosphere (TOA) emission of 147 W/m2. The tropopause is the altitude at which temperature stops decreasing with height.
As you can see, the temperature doesn’t rise much because there are a variety of losses in the complete system. Some of the incoming solar radiation is absorbed by the atmosphere. Some is radiated into space through the “atmospheric window”. Some is lost through latent heat (evaporation/transpiration), and some is lost as sensible heat (conduction/convection). Finally, some of this loss is due to the surface albedo.
The surface reflects about 29 W/m2 back into space. This means that the surface albedo is about 0.15 (15% of the solar radiation hitting the ground is reflected by the surface back to space). So let’s take that into account. If the earth had no atmosphere and had an average albedo like the present earth of 0.15, it would be about 20°C cooler than it is at present.
This means that the warming due to the complete atmospheric system (greenhouse gases, clouds, aerosols, latent and sensible heat losses, and all the rest) is about 20°C over no-atmosphere earth albedo conditions.
Why is this important? Because it allows us to determine the overall net climate sensitivity of the entire system. Climate sensitivity is defined by the UN IPCC as “the climate system response to sustained radiative forcing.” It is measured as the change in temperature from a given change in TOA atmospheric forcing.
As is shown in the diagram above, the TOA radiation is about 150 W/m2. This 150 W/m2 TOA radiation is responsible for the 20°C warming. So the net climate sensitivity is 20°C / 150 W/m2, or a temperature rise of about 0.13°C per W/m2. If we assume the UN IPCC canonical value of 3.7 W/m2 for a doubling of CO2, this would mean that a doubling of CO2 would lead to a temperature rise of about half a degree.
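The arithmetic of that paragraph can be laid out explicitly; this sketch simply restates the numbers above:

```python
# Net climate sensitivity from the post's figures.
toa_forcing = 150.0    # W/m^2, TOA radiation figure used in the post
warming = 20.0         # C, warming over a no-atmosphere, same-albedo earth

sensitivity = warming / toa_forcing    # C per W/m^2
co2_doubling = 3.7                     # W/m^2, IPCC canonical value
dT_2xCO2 = sensitivity * co2_doubling

print(f"{sensitivity:.2f} C per W/m^2")       # 0.13
print(f"{dT_2xCO2:.2f} C per CO2 doubling")   # about 0.49, i.e. half a degree
```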
The UN IPCC Fourth Assessment Report gives a much higher value for climate sensitivity. They say it is from 2°C to 4.5°C for a CO2 doubling, or from four to nine times higher than what we see in the real climate system. Why is their number so much higher? Inter alia, the reasons are:
1. The climate models assume that there is a large positive feedback as the earth warms. This feedback has never been demonstrated, only assumed.
2. The climate models underestimate the increase in evaporation with temperature.
3. The climate models do not include the effect of thunderstorms, which act to cool the earth in a host of ways.
4. The climate models overestimate the effect of CO2. This is because they are tuned to a historical temperature record which contains a large UHI (urban heat island) component. Since the historical temperature rise is overestimated, the effect of CO2 is overestimated as well.
5. The sensitivity of the climate models depends on the assumed value of the aerosol forcing, which is not measured but assumed. As in point 4 above, the assumed size depends on the historical record, which is contaminated by UHI. See Kiehl for a full discussion.
6. Wind increases with differential temperature. Increasing wind increases evaporation, ocean albedo, conductive/convective loss, ocean surface area, total evaporative area, and airborne dust and aerosols, all of which cool the system. But thunderstorm winds are not included in any of the models, and many models ignore one or more of the effects of wind.
Note that the climate sensitivity figure of half a degree per CO2 doubling is an average. It is not the equilibrium sensitivity. The equilibrium sensitivity has to be lower, since losses increase faster than TOA radiation. This is because both parasitic losses and albedo are temperature dependent, and rise faster than linearly with temperature:
a) Evaporation increases roughly exponentially with temperature, and linearly with wind speed.
b) Tropical cumulus clouds increase rapidly with increasing temperature, cutting down the incoming radiation.
c) Tropical thunderstorms also increase rapidly with increasing temperature, cooling the earth.
d) Sensible heat losses increase with the surface temperature.
e) Radiation losses increase in proportion to the fourth power of absolute temperature. This means that each additional degree of warming requires more and more input energy to achieve. To warm the earth from 13°C to 14°C requires about 23% more energy than to warm it from minus 6°C (the current temperature less 20°C) to minus 5°C.
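Point (e) can be checked directly from the T^4 law; this sketch computes the ratio of the two warming costs (conversion to Kelvin is the only assumption):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def extra_flux(t1_c, t2_c):
    """Extra radiated flux (W/m^2) needed to warm from t1 to t2 (Celsius)."""
    k = 273.15
    return SIGMA * ((t2_c + k) ** 4 - (t1_c + k) ** 4)

warm = extra_flux(13, 14)   # warming the present earth by 1 C
cold = extra_flux(-6, -5)   # warming the 20-C-colder earth by 1 C
print(f"ratio: {warm / cold:.2f}")   # about 1.23
```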
This means that as the temperature rises, each additional W/m2 added to the system will result in a smaller and smaller temperature increase. As a result, the equilibrium value of the climate sensitivity (as defined by the IPCC) is certain to be smaller, and likely to be much smaller, than the half a degree per CO2 doubling as calculated above.

K&T 97 does a fair job of trying to estimate a balance based upon simple theory compared with some measurements, like satellite. The update (2007?) seems to really show a bit of bias IMHO. Not all of the numbers seem to be correct in the ’97 paper, but it seems a fairly reasonable attempt on average.
as for how much is real physics versus unfounded claims, well, it is climate science.
the problems overall – not including the confusion between politics, the politics and subversion of science, religion, and climatology – are basically that albedo/clouds etc. are not known or understood well enough to predict much of anything, and the sensitivity is way off, way too high.
cba:
Bravo. The size and nature of the discrepancies, the size and nature of the unmodeled effects, and the degree of uncertainty in almost every known significant parameter make predictions from the GCMs highly suspect at best. It seems to me that one could manipulate the parameters and assumptions within arguable bounds and achieve wildly different results. In this context, predicting the effect of a 100 ppm or 200 ppm change in CO2 seems a stretch, almost laughable. Not that we shouldn’t try, but it seems to me we need to know a heck of a lot more before we put lots of faith in these predictions.
Another point, positive feedback as is being discussed is extremely sensitive to errors in the feedback gain (call it “g”). A term like 1/(1-g) is the result of a simple feedback term with gain g, and the sensitivity of this overall gain to changes, or errors in g is 1/(1-g)^2. Thus a term which produces a gain of 3, has a sensitivity of 9 to changes in g; quite substantial.
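Mike’s sensitivity claim is easy to verify numerically; a sketch (the 1/(1−g) form and the overall gain of 3 are from his comment):

```python
# Overall gain of a simple feedback loop: G(g) = 1 / (1 - g).
# Mike's claim: dG/dg = 1/(1-g)^2 = G^2, so a gain of 3 has sensitivity 9.
def gain(g):
    return 1.0 / (1.0 - g)

g = 2.0 / 3.0          # feedback gain that yields an overall gain of 3
G = gain(g)

# Check the sensitivity with a central-difference numerical derivative.
eps = 1e-6
dG_dg = (gain(g + eps) - gain(g - eps)) / (2 * eps)

print(G)       # 3.0
print(dG_dg)   # about 9.0, i.e. G**2
```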
Perhaps I am not used to way Physicists or climate scientists discuss things, but the term “feedback” is plural in most control system terminology. Perhaps this is just a convention you guys all use?
Mike
mike,
I think what we have is a natural temperature PID setpoint or closed loop control system with internal oscillations, an ability to be shorted out by high surface albedo due to fresh snow and ice, and a sensitivity to all sorts of serious external effects like cosmic rays and volcanism. The control mechanism is the cloud cover, at least during warm periods of low or no glaciation.
Consequently, an ice age lasts quite a long time, as so much of the surface has an albedo similar to that of clouds, which short-circuits the control mechanism. That, combined with the naturally lower H2O vapor availability and content, leads to long glaciation periods. Ultimately though, clear ice has very low albedo, and as the snow and ice age and become dirty with soot from fires, volcanic dust, roughness from sublimation and the like, along with the inability to replace the ice from precipitation, and perhaps also an external (to climate) event, one starts to lose the glaciation and the PID control mechanism takes over again.
Overall though, it looks like the setpoint system does a fairly good job at maintaining the temperature. Andddd, kudos to big Momma Nature who always seems to be able to figure out the best way to redistribute thermal energy and maintain that heat flow, even when climatologists can’t. BTW, this stuff reminds me of the old time evolution dogma that required a nice unchanging Earth to ‘cook’ those molecules up to the point that catastrophic events were not even allowed to be considered.
cba (04:55:46) : I think what we have is a natural temperature PID setpoint or closed loop control system with internal oscillations…
This is a useful analogy. In a PID controller the proportional term (P) determines the reaction to the current deviation from equilibrium, the integral term (I) determines the reaction based on the sum of recent deviations from equilibrium, and the derivative term (D) determines the reaction based on the rate at which the current deviation from equilibrium has been changing.
Sometimes a PID or just PI controller can be unstable, leading to oscillations. A nice example is a drunken man adjusting the temperature in the shower – his delayed response may lead to oscillations between warm and cold water…
🙂
Maybe our climate is a little out of focus or has a delayed response to the deviation from equilibrium – then the result will be temperature oscillations…
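The delayed-shower idea can be sketched as a toy simulation: pure proportional control acting on a stale error reading. The gain and delay values here are illustrative only, not physical:

```python
# Toy "drunken man in the shower": the correction at each step is based
# on the deviation as it was `delay` steps ago, not as it is now.
setpoint = 0.0
delay = 10            # steps of lag between sensing and acting
k = 0.25              # proportional gain (illustrative)

T = 1.0               # initial deviation from the setpoint
history = [T] * (delay + 1)
for step in range(200):
    stale_error = history[-(delay + 1)] - setpoint  # out-of-date reading
    T = T - k * stale_error                          # delayed correction
    history.append(T)

# The delayed response overshoots past the setpoint in both directions.
print(min(history), max(history))
```

With a large enough gain-times-delay product the correction always arrives too late, so the deviation swings back and forth about the setpoint instead of settling.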
Interesting analogy, but I am not sure I would refer to it as a “controller”. It seems clear that there are hundreds of feedback loops with many minor loops inside bigger ones in any reasonable model of the climate. With positive feedback in lots of these loops, the system could indeed be unstable, and in fact will be under certain circumstances. The evidence we have, from anyone’s data, shows that so far the system has exhibited a reasonable local stability under some pretty heavy internal and external disturbances for hundreds of millions of years (at least). We certainly haven’t ramped off into a Venusian state at any time recently, despite many huge perturbations from inside and out. So the question to me is, do we know enough about this system to even get to Willis’ basic sensitivity analysis? When the operating point shifts, do we know enough about, say, albedo or cloud cover shifts from pole to pole that we can really estimate the feedback terms properly? Really?
Mike
it’s fairly easy to do the sort of thing Willis has been doing for this thread. We do know what we have in the way of conditions now – at least time independent and averaged. Where Willis and I disagree on approach is that he is starting with the Earth as it is expected to be without an atmosphere or clouds, while I’m afraid there is way too much being included in there that is going to be changing by more than just a small amount. He is right in the aspect that our current conditions are a result of Earth’s response, including cloud formation and the change in albedo from around 0.15–0.19 up to around 0.30. However, I doubt that either improves the accuracy or clarifies the nature of a very small response, like a few doublings of CO2. He is going with a 15°C rise (?) rather than a 33°C rise, making the sensitivity even less.
We do have several definite factors that can give us some basic idea. Average temperature is right at 288 K now, and if we assume constant albedo over the range of ghg influence, there’s a 33°C rise due to ghgs (if not, then one has Willis’ number of 15°C). With 0.3 albedo we have 239 W/m^2 average incoming power absorbed, and for a general balance we need to average 239 W/m^2 outgoing power radiating into space. In the IR, where Earth radiates, we’re going to be very close to an emissivity of 1.0, so at a temperature of 288 K we’re going to emit from the surface around 390 W/m^2, and of that amount only the 239 W/m^2 can be allowed to escape on average. That means in the real atmosphere there must be around 150 W/m^2 absorbed from the outgoing, some by clouds, some by ghgs. 150/390 amounts to about 38% being blocked.
Above we have some rough numbers for the real Earth where we can calculate some average values. Given a 3.7 W/m^2 forcing increase for a CO2 doubling, one can figure the surface T rise needed to compensate – assuming other major factors are basically unchanged. Since 38% of any additional outbound radiated power is going to be absorbed as well, we must have a BB temperature that radiates more than an additional 3.7; that’s actually a needed 6 W/m^2 from the surface. Working Stefan’s law backwards to find a new T gives a rise of 1.1°C, and it amounts to a 1.1/3.7 = 0.3°C rise for an additional 1 W/m^2 of power being radiated from the Earth system.
However, if we take the average T rise for all ghgs and cloud cover etc., we have our 33°C / 150 W/m^2 = 0.22°C rise per W/m^2 – over the entire range for the addition of ghgs. Note the most obvious thing is that the average sensitivity over all ghg forcing is actually less than that calculated for the CO2 doubling in the real atmosphere using Stefan’s law. That means there is a strong net negative feedback overall, which means there can be no net positive feedback. 3.7 * 0.22 = around 0.8°C rise for a doubling of CO2. Note that Willis’ numbers will be even lower in sensitivity, but there’s so much more lumped in there – like how we got the initial cloud cover – which may or may not be applicable now, and I think it requires some additional accounting for (modeling), and I’m not sure that is possible to do.
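cba’s chain of arithmetic can be restated as a short sketch (all input numbers are from his comments; the Stefan-law inversion uses the linearized form dF = 4σT³ dT):

```python
# cba's back-of-envelope climate sensitivity estimate.
SIGMA = 5.670e-8
T_surface = 288.0                       # K, average surface temperature
surface_flux = SIGMA * T_surface ** 4   # about 390 W/m^2 emitted
escaping = 239.0                        # W/m^2 that must reach space

blocked_fraction = 1 - escaping / surface_flux   # about 0.38 blocked

# A 3.7 W/m^2 forcing must be made up at the surface, where ~38% of any
# extra emission is absorbed again on the way out:
needed_surface = 3.7 / (1 - blocked_fraction)    # about 6 W/m^2
# Invert Stefan's law (linearized): dT = dF / (4*sigma*T^3)
dT = needed_surface / (4 * SIGMA * T_surface ** 3)
print(f"blocked fraction: {blocked_fraction:.2f}")   # about 0.38
print(f"surface warming:  {dT:.1f} C per doubling")  # about 1.1 C

# The whole-range average sensitivity: 33 C over 150 W/m^2
avg_sens = 33.0 / 150.0                              # 0.22 C per W/m^2
print(f"average-sensitivity doubling: {3.7 * avg_sens:.1f} C")  # about 0.8 C
```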
Invariant, Mike
offhand I’m not sure I can claim a non-zero D term, but I leave it in for completeness and to acknowledge the fact that maybe there really is one.
that there are a tremendous number of feedback loops is not in question. Just how well it behaves as a setpoint controller is also questionable. I don’t think it should be obvious that it does, because there are just too many things that are varying significantly.
Perhaps the better questions to ask are whether the system is totally chaotic or partially chaotic, and whether there really can be a setpoint control system. It appears probable to me that one cannot do a predictive time-iterative model. I do think one can do these sorts of time-independent concept models that tell us how things tend to behave – like whether adding ghgs will increase or decrease temperatures.
I also think that these sorts of concept models essentially trash the notion of high sensitivity and show that these GCMs which predict high sensitivity are no better than video games when it comes to describing the climate or determining the sensitivity.
cba
Thanks for re-stating all that, it is hard to put that all together unless you read the entire thread – good summary I think.
You make a good case, along with Willis and others, that general energy balance and average temperatures should be predictable to first order. I didn’t mean to imply that at that level they aren’t, just that some of the faith put in the accuracy – that so many ppm of this, that, or the other thing added will produce a certain outcome – seems a bit beyond the pale. But your first order approximations seem reasonable.
I do wonder about long term time constants and the systems that give rise to them, like ocean currents and heat reservoirs and land mass, and their ultimate feedback on key parameters affecting things like albedo and water vapor. In other words, some of the parametrics you guys use to calculate the sensitivities can change over years to provide a negative feedback which reduces the long term sensitivity. I know you guys already know or suspect these things, I am just trying to catch up.
Regarding chaotic effects, I personally believe they are of major import. A simple second order system exhibits chaotic behavior if a simple quantization model is added into it. I always found this fascinating. There are many critical phenomena in our climatic system which have chaotic behavior: cloud formation and shape, wind patterns and turbulence, and lightning, to name but a few.
The other point people keep mentioning, which I cannot help but believe must be of great significance, is the variation in the Sun, orbit, precession, etc., which seem to dwarf GHGs in magnitude, all of which have been present all along.
Thanks again for the summary, I appreciate it.
Mike
Mike Workman (15:39:42) : I do wonder about long term time constants and systems that give rise to them,
I wonder about this too! Certainly the static picture that Trenberth is painting in his energy budget is useful, but to me the dynamics have always been more interesting – I cannot see how it is possible to explain climate variations without a clear picture of natural variations, including the time constants and the systems that give rise to them.
you might want to look at / for an aging physical meteorology text. So much of the stuff has been worked out back in the 60s and it isn’t contaminated by an over reliance on computer modeling or gw politics. It’s a fascinating application of thermo. However, the basic ones assume adiabatic when there’s radiative activity going on.
Dear Willis Eschenbach
From my point of view there is a fundamental error at the beginning.
Instead of: “As is shown in the diagram above, the TOA radiation is about 150W/m2. This 150 W/m2 TOA radiation is responsible for the 20°C warming. So the net climate sensitivity is 20°C/150W-m2, or a temperature rise 0.13°C per W/m2. If we assume the UN IPCC canonical value of 3.7 W/m2 for a doubling of CO2, this would mean that a doubling …..” it must be corrected to
My corrected version:
“As shown in the diagram above, the surface lw radiation is 392 W/m². TOA radiation is 237 W/m². The difference 392 – 237 = 155 W/m² is absorbed by the system earth + atmosphere, and this is the cause for the warming of the earth by the atmosphere.”
By accident, the following calculation brings nearly the same result: 20/155 = 0.129°C per W/m². But with a different atmosphere, like that of Venus, the given calculation would bring a totally wrong result. Earth’s atmosphere absorbs 155/392 = 0.395 of surface lw radiation, but Venus’ atmosphere absorbs 16250/16900 = 0.96 of surface radiation. That means Venus’ atmosphere is only radiating in the lw range at TOA roughly 650 W/m² (albedo 0.75).
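Wolfgang’s fractions can be checked quickly; a sketch using his figures (the ~737 K Venus surface temperature is a standard value assumed here to reproduce his ~16900 W/m² surface emission):

```python
# Fraction of surface longwave radiation absorbed by the atmosphere,
# per Wolfgang's definition: (surface emission - TOA emission) / surface.
SIGMA = 5.670e-8

# Earth: 392 W/m^2 from the surface, 237 W/m^2 out at TOA
earth_absorbed = (392.0 - 237.0) / 392.0
print(f"Earth atmosphere absorbs {earth_absorbed:.3f} of surface LW")  # 0.395

# Venus: sigma*T^4 at ~737 K gives roughly his 16900 W/m^2 surface figure;
# 650 W/m^2 out at TOA is his number.
venus_surface = SIGMA * 737.0 ** 4
venus_absorbed = (venus_surface - 650.0) / venus_surface
print(f"Venus surface emission: {venus_surface:.0f} W/m^2")
print(f"Venus atmosphere absorbs {venus_absorbed:.2f}")  # about 0.96
```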
Wolfgang Jünger
Wolfgang Jünger (05:47:53):
Wolfgang, many thanks. However, the definition you are using of the TOA radiation is not that of the IPCC. I’m trying to compare their results to mine, so I have to use their definition.
Your definition of TOA gives a radiation of 237 W/m2 … but that’s the total incoming radiation that makes it past the albedo. As such, it is relatively constant. The earth has to radiate to space all of the energy that it absorbs, and that’s 237 W/m2. So obviously, that cannot be the TOA radiation we are talking about. The one we are talking about can change due to a change in CO2, while yours cannot. The IPCC TOA is generally taken to be the tropopause, not the outermost fringes of the atmosphere.
So while your analysis is interesting, and I thank you for it, I fear it is not germane to the present discussion.
Finally, the comparison of Earth with Venus is not just comparing apples and oranges, but apples and hummingbirds. At the surface of Venus, CO2 is above the critical point, so it is neither a liquid nor a gas but a supercritical fluid. See Volz for details.
Joel Shore (17:23:19):
No, your claim makes no sense. They say that the Fs* was obtained by doubling the CO2 in a model and letting it run for ten years.
Now, at what point in that run will there be “zero temperature change”?
Well, immediately after we add the CO2 forcing of 4.52 W/m2, there’s no temperature change. But forcing at that time is Fi, instantaneous forcing, so that can’t be the time of “zero temperature change” they are discussing regarding Fs*
The only other time in the model run when the temperature change is zero is after the modelled system has fully equilibrated. At that point, the temperature is no longer changing. So that has to be the time they are discussing. And that is after the action of all of the feedbacks.
I note that you left out the rest of the quote immediately after the part you quoted. The full quote says (emphasis mine):
I don’t find that confusing or unclear at all. This is with all of the fast feedbacks.
Next, in the footnote to the table they say:
That’s quite clear. 10 year model run, when the surface temperature (Ts) stops changing measure the flux. How do you interpret “as the change in surface temperature approaches zero” (∆Ts->0) part of that statement?
Later in the paper they say:
That is to say, fast feedbacks but not slow ice and vegetation changes.
They also say that Fs and Fs* include the “fast feedbacks”, which they say are:
So as I have been saying all along, the values given by Hansen et al. as the Fs* values (and the corresponding climate sensitivity values which Hansen defines as ∆Fs*/∆Ts) match up with the IPCC numbers, and they include “all fast feedbacks in the climate system, such as changes of sea ice, clouds and water vapor”.
So to review the bidding, I’ve given an explanation of why “zero temperature change” does not mean “with no water feedback”. It just means they don’t measure the TOA flux until the temperature stops changing. If you dispute that, tell me when the temperature is not changing in the model run, other than before and after the feedbacks and forcings are allowed to operate. And it can’t be at the start, that’s Fi, not Fs*.
I’ve also given quotes where they explicitly say Fs* contains all tropospheric, atmospheric, and surface feedbacks.
So it is not “slightly unclear”. It is clearly stated. Fs* contains, and I quote, the “stratospheric, tropospheric, and land surface feedback mechanisms”. How is that unclear in the slightest?
Finally, you say:
So far, you have given no evidence at all that Fs* does not contain tropospheric, stratospheric, and surface feedbacks. So you’ll have to do more than just say that the evidence is all around me … where is it?
Willis Eschenbach says:
I personally would call that long-time limit a case where dT/dt is going to zero, not a case where ∆T is going to 0. However, on the face of it, from just reading the Hansen et al. paper, I suppose your interpretation is not impossible if they have been a little careless with terminology.
However, today I got hold of the Gregory et al. paper and, in fact, I think you will find if you read it that they are indeed talking about regressing to the time when ∆T = 0, i.e., when the climate has not yet changed from the initial state. I suggest that you get hold of that paper and have a look.
I am talking about the fact that your whole interpretation is convoluted and nonsensical. You seem to somehow want to believe that the radiative effects of, say, the water vapor feedback and the ice albedo feedback are insignificant and so they do not much change the radiative balance from what it would be if they were absent. This then has you believing that climate scientists are somehow imagining that the clear predictions of the Stefan-Boltzmann Equation get altered by what exactly? Exactly how is the planet, which only communicates with space (in an energetic sense) to any significant degree via radiation, going to heat up by 3 degrees if the net effect of the radiative changes is such that they should only cause it to heat up by a little over 1 degree? (One could, I suppose, seek refuge in something involving the relative heating of the surface vs the mid- and upper-troposphere, but alas that is a feedback operating in the opposite direction.) It is usually a sign that you are arguing against a “strawman” when you have come to the conclusion that your opponents believe silly things, as you have here (particularly, when those opponents are the strong majority of the relevant scientific community).
At this point, I also have to admit that I am confused by your whole argument about looking to the radiative balance at the TOA, because what actually happens in the climate system is this: A forcing is introduced due to increased greenhouse gases that alters the radiative balance by increasing the downward relative to the upward radiation at the TOA. The earth then responds by heating up and reduces this radiative imbalance. (Gregory et al. actually have a nice graph of this as predicted by the HadSM3 climate model.) However, because of positive feedbacks, it has to heat up more than would be predicted simply by applying the Stefan-Boltzmann Equation to the original radiative imbalance, because the process of heating up causes further radiative changes that work to oppose this reduction in the radiative imbalance. [Note, however, that the radiative imbalance does still reduce with time because we are not in a state where the feedbacks are so strong that we get an instability… i.e., in net, as temperatures rise the radiative imbalance is reduced. It is just not reduced as rapidly (as a function of temperature) as it would be if the positive feedbacks were not operating, and hence it takes a larger temperature change to restore radiative balance.]
All the W/m2, line-by-line figures for gases and radiation flows within the atmosphere used and bandied about in this thread by so many – are they all just “model” answers / guesstimates rather than ACTUAL MEASUREMENTS? If so, then the whole basis of the thread and discussion is the models’ answers / guesstimates (and how good they are), NOT observations.
It would appear that where a W/m2 figure is plotted on the Y axis of the various radiation plots, the figures used here are taken from those.
The actual line-by-line calculations (HITRAN, LOWTRAN, etc.) are by an algorithm, presumably based upon, or derived from, the Schwarzschild equation.
These model spectra indicate high emissivity values for gases (0.97),
yet gases do not behave like black or grey bodies.
Gases should have very low emissivity values (0.09).
I would suggest that the whole basis of the thread / discussions is from the above mentioned (falsely modelled) basis,
but there again, I think the whole of present climate science is from a false basis.
This is just one of them.
It does look suspiciously like the old “IPCC trick” of moving the decimal point one place though.
This trick works very well, I have to admit, in conjunction with the “all radiation is positive” rubbish as well.
(This could explain the “differences” between radiation / energy flows, and what we observe with heat flows ACTUALLY being relative)
You could create the impression (falsely) of a greenhouse effect with them combined,
AND TALK W/m2 RUBBISH UNTIL THE COWS COME HOME…..
Joel Shore (18:39:20):
Since they repeated the claim several times, it seems doubtful that they were just careless.
The forcing when CO2 has doubled but “the climate has not yet changed from the initial state” is clearly defined in all of the literature as Fi, the instantaneous forcing. That’s why it is called “instantaneous”, it is in the instant before the climate starts to change from the initial state.
You are saying that Fs*, the forcing after the system has equilibrated with all feedbacks, is really Fi, the instantaneous forcing. I confess to being totally mystified by that claim. Perhaps you could say some more about that, I don’t get it.
Please, please, please quote my words that you are objecting to. Objecting to what I “seem to somehow want to believe” goes nowhere. I don’t believe that water vapor feedback is insignificant, where did I ever say anything like that? You are attacking a straw man.
So far, so good … but then you say:
It “has to heat up more”? Has to? Why does it “have to heat up more”? What happened to the negative feedbacks? Did they magically disappear? You assume that the net feedback is positive … where is the evidence for that assumption? The assumption of net positive feedbacks is doubtful on many theoretical grounds. Evaporative, transpirational, and convection parasitic losses are known to increase with increasing temperature, and quite rapidly. Where is the effect of these known negative feedbacks in your account?
In addition, the ERBE data shows that clouds have a net negative feedback … but the climate models assume a net positive feedback from clouds. Where is the evidence for that positive feedback?
In this connection, it is helpful to visualize what would happen to a watery planet if the sun’s power started at zero, and then began to rise. Initially the planet would be frozen, there would be no liquid water, and very little water vapor in the atmosphere. At some point the ice would melt, and water vapor would rise. At some later point, clouds would form, and the amount of sunlight reaching the surface would begin to decline, cooling the surface. But increased IR absorption by increased water vapor would tend to heat the surface. On the other hand, the various parasitic water vapor related losses (evaporation, transpiration, vertical transport in thunderstorms, hydrometeors, etc.) would also rise, cooling the surface. It is important to distinguish between these two processes, the cloud feedback, thunderstorms, and parasitic losses on the one hand cooling the earth, and the increased water vapor IR warming the earth.
At any given stage in the sun’s warming, the earth would have an equilibrium temperature. This temperature would be less than the theoretical temperature, and would be determined by the relative balance of the increasing parasitic losses and increasing cloud reflection cooling the earth, and the increasing greenhouse effect of the water vapor warming the earth. For any given sun strength, the earth will not warm beyond a certain point, the point at which the parasitic losses (which are driven by delta T) and the cloud albedo reflections (driven by evaporation) match the greenhouse gains. In other words, the heat engine of the climate is always running as hot as it can. If the earth heats a bit, parasitic and albedo losses increase and reduce the temperature. If the earth cools a bit, losses decrease, and the earth warms up.
The fact that the earth’s temperature is limited, not by the strength of the sun, but by the temperature dependent losses, is a crucial point which is often not accounted for.
Now in the midst of all of this, what will be the overall effect of a slight increase in CO2 forcing? About the only thing we can say for sure is that it will be less than it would be if there were no parasitic losses.
Joel, I appreciate your ideas, they always make me think, but more evidence, please, more quotes, please. Show us why Fs* is really Fi, don’t just make the claim. Explain to us why the net feedback is positive, don’t just say it “has to be” positive.
Willis Eschenbach says:
In which case we are left with the conclusion that your interpretation is likely incorrect, because what they said they were regressing to was ∆T = 0, not dT/dt = 0, which is what you are talking about.
No…What I am saying is that these forcings are subtly different but not radically different in the way that you seem to propose. In particular, Fs* is determined by doing a regression to zero temperature change rather than by looking at the instantaneous forcing. A point of their paper is that this way of doing things seems to get a value for the forcing that is more in line with that found after stratospheric adjustment than that found before. However, they are not claiming that it includes effects of the water vapor feedback that would manifest themselves as the temperature rises. I think you should get the Gregory paper and read it carefully. I just gave it a cursory reading, enough to confirm that my interpretation of what Hansen et al. quoted from that paper is the correct interpretation of what they are actually saying.
Willis, the point of my discussion is to describe how things happen under the hypothetical situation that positive feedbacks dominate. Your claim was that you can somehow determine the average climate sensitivity from the top-of-the-atmosphere radiation balance (by some method that I still don’t understand). I am explaining how your understanding of the relation between the top-of-the-atmosphere radiation balance and the radiative forcing (your equating of the two, basically) is incorrect, and that this is leading you to make unjustified conclusions. If you refuse to entertain the possibility that the feedbacks are positive even long enough to understand how scientists understand things to work in that case, then we can’t get anywhere. If it makes you feel better, take what you quoted from me and replace it with the following:
Changing it to such a hypothetical in no way undermines my basic point, which is to get you to understand how these positive feedbacks are understood to operate, and that they increase the temperature change precisely because they make additional changes to the radiative imbalance.
Willis,
I realize that a picture would be worth a thousand words in explaining what I think Gregory et al. are doing. I have drawn a little Excel graph to illustrate it, so if there is a way that I could send it to you, that would help.
Willis: Here is the picture that illustrates, as I understand it, the difference between the instantaneous forcing F_i and the forcing F_s* as defined by Gregory et al.:
http://www.frontiernet.net/~jshore/Illustration_of_forcing_definitions
Sorry…Let’s try that link again: http://www.frontiernet.net/~jshore and then click on “Illustration_of_forcing_definitions”.
Joel Shore (10:06:55), many thanks, your link totally cleared it up.
Excellent. A picture is worth a thousand words, a graph and a spreadsheet is worth a million.
I finally get it. They are using a regression line to estimate Fs, taking the “b” term (the intercept) in the equation
y = m*x + b
OK, now we’re past that, your interpretation was correct, mine was wrong.
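For readers following along, the regression idea agreed on above can be sketched numerically. This is a toy illustration with made-up numbers, not real model output: the TOA imbalance N is regressed against temperature change ΔT, and the intercept “b” of N = m*ΔT + b (i.e. N extrapolated back to ΔT = 0) recovers the forcing Fs, while −m estimates the feedback parameter.

```python
# Toy sketch of the Gregory et al. regression method discussed above.
# The forcing and feedback values below are hypothetical, chosen only
# to show that the intercept of the fit recovers the forcing.
true_forcing = 3.7   # hypothetical forcing, W/m2
feedback = -1.2      # hypothetical feedback parameter, W/m2 per K
dT = [0.2 * k for k in range(1, 16)]           # warming steps, K
N = [true_forcing + feedback * t for t in dT]  # resulting TOA imbalance, W/m2

# ordinary least-squares fit of N = m*dT + b, standard library only
n = len(dT)
mx = sum(dT) / n
my = sum(N) / n
m = sum((x - mx) * (y - my) for x, y in zip(dT, N)) / sum((x - mx) ** 2 for x in dT)
b = my - m * mx
print(round(b, 2), round(m, 2))  # 3.7 -1.2: the intercept recovers the forcing
```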
As I recall, the underlying issue was whether the feedbacks were included in the estimation of Fs. Since the climate model used to estimate the data for the regression line contains all of the feedbacks (water vapor, ice, cloud, etc), it seems clear that they are included. Are we in agreement on that?
Joel Shore (19:53:08) :
Joel, let me recap so I can try to clarify the issues. I am looking at the difference between the theoretical temperature of the earth with its current surface albedo but no atmosphere, and the current temperature of the earth. The current temperature obviously includes all of the feedbacks.
The difference in the temperatures is about 20°C. The difference in the TOA radiative forcing is about 150 W/m2. This implies that the overall (not equilibrium) climate sensitivity is on the order of 0.13°C per W/m2. In other words, by adding 150 W/m2 of forcing, we get a 20°C temperature rise. (Equilibrium sensitivity will be less.)
I’m not clear why the issue of positive and negative feedbacks enters into the question. The current temperature of the Earth includes both. Positive feedbacks increase the sensitivity, negative feedbacks decrease it. The 20°C difference in temperatures includes both.
You say I am equating top-of-the-atmosphere radiation balance and the radiative forcing. I don’t recall saying anything about TOA radiation balance. What is your definition of each term, and where is my error in my calculation that the addition of 150W/m2 in TOA forcing gives 20°C of temperature rise?
Thanks for your perseverance in continuing the discussion.
w.
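The back-of-envelope sensitivity in the recap above is just the ratio of the two quoted differences. A minimal sketch, using the comment’s own figures (not independently derived):

```python
# Ratio calculation from the recap above; the 20 C and 150 W/m2
# are the figures quoted in the comment, not independently derived.
delta_T = 20.0   # K: current Earth vs. no-atmosphere Earth at the same albedo
delta_F = 150.0  # W/m2: corresponding difference in TOA forcing
sensitivity = delta_T / delta_F
print(round(sensitivity, 3))  # 0.133 K per W/m2
```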
Willis says:
Yes…The climate model will include all of these things. So, for example, when the temperature changes, the water vapor feedback will come into play. However, the ***radiative forcing*** itself is defined by this regression to zero temperature change, which means that it includes only that part of the water vapor feedback that acts independently of any (tropospheric) temperature change. As far as I know, that is not any of the water vapor feedback, at least as I understand it. (Given Hansen’s cryptic language, perhaps there is some small adjustment of water vapor that occurs with the addition of CO2 even in the absence of any temperature change, although it is beyond my imagination to know what that would be. Actually, I suppose it could be the part due to any adjustment in the stratospheric temperature.) Similarly for the ice–albedo feedback.
It’s late now, so I will answer your second post tomorrow!
Take care,
Joel
Willis says:
But, that is the problem. The definition of what is a “forcing” and what is a “feedback” depends on context. By calling that 150 W/m^2 the forcing, you are making a decision that is not compatible with determining the climate sensitivity that we are interested in from the radiative forcing due to increased CO2.
I suggest that you read the excellent piece here by Chris Colose, especially the last part where he talks about what would happen if we removed CO2 from the atmosphere: http://chriscolose.wordpress.com/2010/02/18/greenhouse-effect-revisited/ Note that Chris claims that the result would be a dramatic reduction in water vapor too and that the end result would be a much colder earth, nearly removing the entire greenhouse effect.
Now, you may not believe in this positive feedback due to water vapor (+ clouds) but that still doesn’t mean that you are entitled to do your calculation for climate sensitivity under the ***assumption*** that it doesn’t happen and then claim that such a calculation shows that it (i.e., a high climate sensitivity due to positive feedbacks) doesn’t happen. And, that is exactly what you have done because, if things work as he describes, the reduction in radiative forcing due simply to removing the CO2 from the atmosphere would cause a dramatic temperature reduction (basically, the 20 C temperature change that you are talking about). However, this reduction in radiative forcing is not the 150 W/m^2 that you calculate because that value also includes the radiative effect of the change that occurs in water vapor, which is clearly a feedback and not a forcing in the scenario that Chris describes (and in most scenarios that one can think of).
Willis,
Now that I understand where your estimate of 150 W/m^2 is coming from, I think I see another problem with your argument, a problem that explains the mystery of why your climate sensitivity calculation doesn’t lead to something close to the sensitivity in the absence of feedbacks (like I thought it should).
It seems to me that you have gotten the 150 W/m^2 by taking the emission at a temperature of 288 K (the surface temperature of the earth) and subtracting the emission at the effective radiative temperature, 255 K. However, you then take your temperature difference to be the difference between 288 K and the ~268 K temperature that you calculate the earth would be at if we remove both the greenhouse effect and the albedo effect of clouds.
I think that this is inconsistent, i.e., if you use the 150 W/m^2 then the correct temperature change to use is 33 K. If you use the 20 K temperature change, then the correct radiative forcing to talk about is ~100 W/m^2 (the difference between radiation at 288 K and 268 K). With this correction, you will then get a value for the climate sensitivity (~0.2 – 0.22 K per W/m^2) that is closer to the no-feedback value. (It is still a little low of it, but I think that is accounted for by the fact that this sensitivity depends on the radiating temperature and is usually computed at the 255 K effective radiating temperature whereas you are computing the average sensitivity over a temperature range that is higher than this.)
So, at this point I think I now understand pretty much everything about your calculation, i.e., what it computes and why it is wrong (or, more precisely, why, if done correctly it would just be a complicated way of computing approximately the no-feedback value of the climate sensitivity).
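The consistency check argued above can be reproduced directly from the Stefan-Boltzmann law. A minimal sketch, pairing each temperature difference with its corresponding emission difference (the temperatures 288 K, 255 K, and 268 K are the ones quoted in the comment):

```python
# Pairing each temperature difference with its Stefan-Boltzmann
# emission difference, as argued in the comment above.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m2 K^4)

def emission(T):
    """Blackbody emission (W/m2) at temperature T (K)."""
    return SIGMA * T ** 4

dF_33K = emission(288.0) - emission(255.0)  # goes with the 33 K difference
dF_20K = emission(288.0) - emission(268.0)  # goes with the ~20 K difference
print(round(dF_33K))            # ~150 W/m2
print(round(dF_20K))            # ~98 W/m2, roughly the ~100 quoted above
print(round(33.0 / dF_33K, 2))  # ~0.22 K per W/m2
print(round(20.0 / dF_20K, 2))  # ~0.2 K per W/m2
```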
Joel Shore (09:04:58) :
I’m sorry if my lack of clarity led you to that conclusion. The ~150 W/m2 is from the Kiehl-Trenberth global energy budget, or from my reworking of it in the head post. Basically, K/T say that of the ~235 W/m2 emitted by the earth to keep it in radiation balance, 40 W/m2 is directly emitted by the surface (through the “atmospheric window”), and 30 W/m2 is from the clouds. This leaves 165 W/m2 from the TOA. My calculations show a lower number, 147 W/m2 of up/downwelling longwave radiation from the TOA. Pick either one, it makes little difference to the end result.
Anyhow, that’s where I got the number. I am comparing it to the temperature of a “thought-experiment” Earth with no atmosphere, but with the same surface albedo. The surface albedo is on the order of 15%, which, via the Stefan-Boltzmann law, gives a temperature about 20°C cooler than the present temperature. Since the 150 W/m2 is accompanied by a 20°C warming, I divide one by the other to get the sensitivity. As I pointed out, this temperature change of 20°C includes all of the possible feedbacks.
I am still very interested in any objections you might have to that reasoning, thanks for your ideas. What am I missing here?
w.
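The no-atmosphere thought experiment in the comment above can be sketched from the Stefan-Boltzmann law. Note the solar constant of 1361 W/m2 is an assumed input not stated in the thread; the 0.15 albedo is the figure quoted there:

```python
# No-atmosphere Earth at the quoted 0.15 surface albedo.
# S = 1361 W/m2 is an assumed modern solar-constant value.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m2 K^4)
S = 1361.0              # solar constant, W/m2 (assumption)
albedo = 0.15
absorbed = S * (1.0 - albedo) / 4.0   # mean absorbed flux over the sphere
T_bare = (absorbed / SIGMA) ** 0.25   # equilibrium blackbody temperature, K
print(round(T_bare, 1))               # ~267.2 K
print(round(288.0 - T_bare, 1))       # ~20.8 K cooler than today's ~288 K
```

The result lands close to the ~20°C difference quoted in the thread.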