'Correcting' Trenberth et al.

(See the note below before taking this post seriously – Anthony)

Guest essay by Stephen Wilde

[Figure: the Kiehl & Trenberth global energy budget diagram]

Here we see the classic energy budget analysis supporting the hypothesis that the surface of the Earth is warmer than the Stefan-Boltzmann (S-B) equation would predict due to 324 Wm2 of ‘Back Radiation’ from the atmosphere to the surface.

It is proposed that Back Radiation lifts the surface temperature from the 255K predicted by S-B to the 288K actually observed, because the 324 Wm2 of Back Radiation exceeds the 222 Wm2 of surface radiation to the air (390 Wm2 less 168 Wm2) by 102 Wm2. It is suggested that there is therefore a net radiative flow from atmosphere to surface of 102 Wm2.
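
For anyone who wants to check that arithmetic, here it is as a short Python sketch (the figures are from the diagram; the variable names and the ‘surface radiation to the air’ construction follow the paragraph above):

# Figures from the diagram, all in Wm2
surface_radiation = 390       # total upward longwave from the surface
solar_absorbed_surface = 168  # shortwave absorbed directly by the surface
back_radiation = 324          # 'Back Radiation' from atmosphere to surface

surface_to_air = surface_radiation - solar_absorbed_surface  # 222
net_down = back_radiation - surface_to_air                   # 102

print(surface_to_air, net_down)  # -> 222 102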

I now discuss an alternative possibility.

The portions I wish to focus on are:

i) 390 Wm2 Surface Radiation to atmosphere

ii) 78 Wm2 Evapo-transpiration surface to atmosphere

iii) 24 Wm2 Thermals surface to atmosphere

iv) 324 Back Radiation atmosphere to surface

The budget needs to be amended as follows:

The 78 Wm2 needs to be corrected to zero because the moist adiabatic lapse rate during ascent is less than the dry lapse rate during adiabatic descent, which ensures that, after the first convective cycle, there is as much energy back at the surface as there was before Evapo-transpiration began.
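
By way of illustration only, here is that parcel arithmetic with round textbook values (a 5 km circuit, a 6 K/km moist rate on ascent and the standard 9.8 K/km dry rate on descent; none of these numbers comes from the diagram):

dry_lapse = 9.8    # K per km, dry adiabatic rate (textbook value)
moist_lapse = 6.0  # K per km, a typical moist (saturated) rate
height = 5.0       # km of ascent and descent, illustrative

T_surface = 288.0                         # K, parcel leaves the surface
T_top = T_surface - moist_lapse * height  # cools at the moist rate on ascent
T_return = T_top + dry_lapse * height     # warms at the dry rate on descent

print(T_return - T_surface)  # -> 19.0 K warmer on return (latent heat released aloft)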

The 24 Wm2 for thermals needs to be corrected to zero because dry air that rises in thermals warms back up to its original temperature on descent.

Therefore neither ii) nor iii) should be included in the radiative budget at all. They involve purely non-radiative means of energy transfer and have no place in the radiative budget since, being net zero, they do not cool the surface. AGW theory and the Trenberth diagram incorrectly include them as a net surface cooling influence.

Furthermore, they cannot reduce Earth’s surface temperature below 255K because both conduction and convection are slower methods of energy transmission than radiation. To reduce the surface temperature below 255K they would have to work faster than radiation, which is obviously not the case.

They can only raise the surface temperature above the S-B expectation; for Earth that rise is 33K.

Once the first convective overturning cycle has been completed, neither Thermals nor Evapo-transpiration can have any additional warming effect at the surface, provided mass, gravity and insolation remain constant.

As regards iv), the correct figure for the radiative flux from atmosphere to surface should be 222 Wm2, because items ii) and iii) should not have been included.

That also leaves the surface-to-atmosphere radiative flux at 222 Wm2, which, taken with the 168 Wm2 absorbed directly by the surface, comes to the 390 Wm2 required for radiation from the surface.
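
The amended accounting can be checked the same way (a sketch of the correction proposed here, not of the published diagram):

# The proposed corrections, all in Wm2
back_radiation_corrected = 324 - 78 - 24  # strip out ii) and iii) -> 222
surface_to_air = 390 - 168                # -> 222

assert back_radiation_corrected == surface_to_air == 222
assert surface_to_air + 168 == 390        # recovers the 390 surface total
print("balanced at", surface_to_air, "Wm2")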

The rest of the energy budget diagram appears to be correct.

So, how to decide whether my interpretation is accurate?

I think it is generally accepted that the lapse rate slope marks the points in the atmosphere at which molecules that are at the correct height for their temperature are in energy balance.

Since the lapse rate slope intersects the surface, it follows that DWIR (downwelling infrared) equals UWIR (upwelling infrared) for a zero net radiative balance if a molecule at the surface is at the correct temperature for its height. If it is not at the correct surface temperature, it will simply move towards the correct height by virtue of density variations in the horizontal plane (convection).

Thus, 222 Wm2 of UWIR at the surface should equal 222 Wm2 of DWIR at the surface, AND 222 plus 168 should add up to 390, which of course it does.

AGW theory erroneously assumes that Thermals and Evapo-transpiration have a net cooling effect on the surface, so it has to inflate the radiative exchange at the surface from 222 Wm2 to 324 Wm2, and it additionally assumes that the extra 102 Wm2 is attributable to a net radiative flux towards the surface from the atmosphere.

The truth is that there is no net flow of radiation in any direction at the surface once the air at the surface is at its correct temperature for its height, which is 288K and not 255K. The lapse rate intersecting at the surface tells us that there can be no net radiative flux at the surface when surface temperature is at 288K.

A rise in surface temperature above the S-B prediction is inevitable for an atmosphere capable of conduction and convection, because those two processes introduce a delay in the transmission of radiative energy through the system. Conduction and convection are a function of mass held within a gravity field.

Energy being used to hold up the weight of an atmosphere via conduction and convection is no longer available for radiation to space since energy cannot be in two places at once.

The greenhouse effect is therefore a product of atmospheric mass rather than radiative characteristics of constituent molecules as is clearly seen when the Trenberth diagram is corrected and the lapse rate considered.

Since one can never have more than 390 Wm2 at the surface without increasing conduction and convection via changes in mass, gravity or insolation, a change in the quantity of GHGs cannot make any difference. All they can do is redistribute energy within the atmosphere.

There is a climate effect from the air circulation changes but, given the tiny proportion of Earth’s atmospheric mass made up of GHGs, it is too small to measure against natural variability.

What Happens When Radiative Gases Increase Or Decrease?

Applying the above correction to the Trenberth figures, we can now see that 222 Wm2 of radiation from the surface to the atmosphere is simply balanced by 222 Wm2 of radiation from the atmosphere to the surface. That is the energy constantly expended by the surface via conduction and convection to keep the weight of the atmosphere off the surface. We must ignore it for the purpose of energy transmission to space, since the same energy cannot be in two places at once.

We then have 168 Wm2 left over at the surface, which represents energy absorbed by the surface after 30 Wm2 has been reflected from the surface, 77 Wm2 has been reflected by the atmosphere and 67 Wm2 has been absorbed by the atmosphere before it reaches the surface.

That 168 Wm2 is then transferred to the atmosphere by conduction and convection leaving a total of 235 Wm2 in the atmosphere (168 plus 67).

It is that 235 Wm2 that must escape to space if radiative balance is to be maintained.
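
In code form, the accounting of the last few paragraphs looks like this (figures from the diagram; the step that hands the surface’s 168 Wm2 to the atmosphere is the reading set out above):

# Disposition of the incoming solar beam, Wm2, per the diagram
reflected_by_surface = 30
reflected_by_atmosphere = 77
absorbed_by_atmosphere = 67
absorbed_by_surface = 168
incoming = (reflected_by_surface + reflected_by_atmosphere
            + absorbed_by_atmosphere + absorbed_by_surface)  # 342

# On the reading above, the 168 absorbed at the surface passes to the
# atmosphere by conduction and convection, so for balance the system
# must shed the total absorbed energy to space
must_escape = absorbed_by_surface + absorbed_by_atmosphere   # 235
print(incoming, must_escape)  # -> 342 235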

Now, remember that the lapse rate slope represents the positions in the atmosphere where molecules are at their correct temperature for their height.

At any given moment convection arranges that half the mass of the atmosphere is too warm for its height and half the mass is too cold for its height.

The reason for that is that the convective process runs out of energy to lift the atmosphere any higher against gravity when the two halves equalise.

It must follow that at any given time half of the GHGs must be too warm for their height and the other half too cold for their height.

That results in density differentials that cause the warm molecules to rise and the cold molecules to fall.

If a GHG molecule is too warm for its height then DWIR back to the surface dominates but the molecule rises away from the surface and cools until DWIR again equals UWIR.

If a GHG molecule is too cold for its height then UWIR to space dominates but the molecule then falls until DWIR again equals UWIR.

The net effect is that any potential for GHGs to warm or cool the surface is negated by the height changes relative to the slope of the adiabatic lapse rate.

Let’s now look at how that outgoing 235 Wm2 is dealt with if radiative gas concentrations change.

It is recognised that radiative gases tend to reduce the size of the Atmospheric Window (40 Wm2), so we will assume a reduction from 40 Wm2 to 35 Wm2 by way of example.

If that happens then DWIR from molecules that are too warm for their height will increase, but the subsequent gain in height will carry each molecule above its correct position along the lapse rate slope, with UWIR to space increasing at the expense of DWIR back to the surface, and the rising will only stop when DWIR again equals UWIR.

Since UWIR to space increases to compensate for the shrinking of the Atmospheric Window (from 40 Wm2 to 35 Wm2), the figure for radiative emission from the atmosphere will increase from 165 Wm2 to 170 Wm2, which keeps the system in balance with 235 Wm2 still outgoing.

If the atmosphere had no radiative capability at all then radiative emission from the atmosphere would be zero but the Atmospheric Window would release 235 Wm2 from the surface.

If the atmosphere were 100% radiative then the Atmospheric Window from the surface would be zero and the atmosphere would radiate the entire 235 Wm2.
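
That reapportioning can be written as a single function of the window width (a sketch of the bookkeeping described above, with clouds folded into the atmospheric emission; the two limiting cases fall out directly):

TOTAL_OUT = 235  # Wm2 that must escape to space for radiative balance

def split_emissions(window):
    # whatever the Atmospheric Window does not pass directly from the
    # surface, the atmosphere (clouds included) must emit instead
    return window, TOTAL_OUT - window

print(split_emissions(40))   # (40, 195): the diagram's 165 + 30
print(split_emissions(35))   # (35, 200): the 165 -> 170 step worked above
print(split_emissions(235))  # no radiative capability: all 235 via the window
print(split_emissions(0))    # 100% radiative: the atmosphere emits all 235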

==============================================================

Note: I’m glad to see a number of people pointing out how flawed the argument is. Every once in a while we need to take a look at the ‘Slayer’ mentality of thinking about radiative balance, just to keep sharp on the topic. At first I thought this should go straight into the hopper, and then I thought it might make some good target practice, so I published it without any caveat.

Readers did not disappoint.

Now you can watch the fun as they react over at PSI.  – Anthony

P.S. Readers might also enjoy my experiment on debunking the PSI light bulb experiment, and note the reactions in comments, entirely opposite to this one. New WUWT-TV segment: Slaying the ‘slayers’ with Watts

Update: Let me add that the author assuredly should have included a link to the underlying document, Earth’s Global Energy Budget by Kiehl and Trenberth …

Frank
April 14, 2014 2:14 pm

Trick wrote: “What general situation does the Planck distribution not apply to?”
Upon further reflection, I’m willing to admit that the Planck function may be fundamental, because it can be derived from more basic principles of physics. Schwarzschild’s eqn cannot. HOWEVER, the first step in the derivation of Planck’s Law is the ASSUMPTION that radiation is in EQUILIBRIUM with the medium it is passing through. ANYTIME EQUILIBRIUM DOESN’T EXIST, Planck’s Law can’t be used. Then one needs to consult the Schwarzschild eqn to determine how quickly (with distance traveled through the medium) the radiation will approach the equilibrium situation specified by Planck’s Law. The rate of equilibration depends on the density of absorbing/emitting molecules and their absorption cross-section (parameters that aren’t found in Planck’s law). In our atmosphere, the assumption that equilibrium exists is incorrect at many wavelengths. Those who create optically-thick, isothermal “slab” models of the atmosphere (which should emit blackbody radiation) are using models that have little to do with our real atmosphere.
For solids and liquids of decent size, the assumption that their internal radiation is in equilibrium with the medium is reasonable; and they emit something close to blackbody radiation. Even here, the Schwarzschild eqn with scattering terms has something important to contribute – an explanation for emissivity and absorptivity. Are there modified derivations of Planck’s Law that explain the existence of emissivity less than one? If not, Planck’s Law is “wrong” for them.
Furthermore, if the solid or liquid of interest is a “thin film” too small to permit equilibrium between the molecules of the film and radiation, the predictions of Planck’s Law are also wrong. Low-e coatings are placed on glass so that the composite can violate Planck’s Law!
Let me turn your question around: When DOES Planck’s Law apply? Answer: Only for blackbodies, objects that rarely exist outside the laboratory. Planck’s Law does come close to predicting the intensity and frequency of the radiation many solids, liquids and stars emit.
When does the Schwarzschild eqn not apply? Answer: I’m not aware of any situations where it doesn’t apply.
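
To make the distinction concrete, here is a single-wavelength toy version of the Schwarzschild eqn without scattering, in Python; the temperature and absorption coefficient are arbitrary illustrative values, not atmospheric data:

import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(wavelength, T):
    # Planck spectral radiance B(lambda, T)
    return (2 * h * c**2 / wavelength**5
            / (math.exp(h * c / (wavelength * k * T)) - 1))

# Schwarzschild eqn at one wavelength: dI/ds = n*sigma*(B - I).
# The beam relaxes toward the local Planck value at a rate set by the
# absorber density n and cross-section sigma.
wavelength = 15e-6  # m, mid-infrared (illustrative)
T = 255.0           # K, temperature of the medium (illustrative)
n_sigma = 1e-4      # 1/m, illustrative absorption coefficient
I, ds = 0.0, 100.0  # beam enters with zero intensity; 100 m steps

for _ in range(2000):  # total optical depth n_sigma * ds * 2000 = 20
    I += n_sigma * (planck(wavelength, T) - I) * ds

print(I / planck(wavelength, T))  # -> ~1.0: equilibrium with the medium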

joeldshore
April 14, 2014 6:00 pm

Bart says:

No. The reason the spectral plot of emissions is missing a chunk of red is that the oceans are blue. And, they are blue whether you are looking at the blue marble from space, or from the edge of a dock.

(1) This spectrum is primarily in the mid and far-infrared, so your comments are not even correct.
(2) These photos are taken all over the place, including over deserts and the spectrum can be almost perfectly matched by calculations of radiative transfer in the atmosphere. You are now denying not only a lot of science but a lot of technology (i.e., entire parts of the remote sensing technology).

Trick
April 14, 2014 6:01 pm

Frank 2:14pm: The basics do all refer to radiation in equilibrium with matter. I was unable to find a quick ref. on where any breakdown into non-equilibrium begins; perhaps you have one.
“…if the solid or liquid of interest is a “thin film”…”
A big caveat is macro sized bodies relative to wavelength, right there on page two of Planck’s ‘Theory of Heat Radiation’: “Throughout the following discussion it will be assumed that the linear dimensions of all parts of space considered, as well as the radii of curvature of all surfaces under consideration, are large compared with the wave lengths of the rays considered.”
”..blackbodies, objects that rarely exist outside the laboratory…”
BBs don’t exist in labs either; all real objects reflect, esp. glancing rays. Here is how to get around the issue of no blackbodies existing, although a black radiation bath does commonly exist.
Suppose we have an instrument that can measure radiant power over some range of frequencies anywhere in the electromagnetic spectrum. If we were to point the instrument in a particular direction at a source of radiation, a measurably emitting real body, the instrument would dutifully measure a radiant power. Now we can ask: what temperature must a theoretical, non-existent blackbody have in order for the instrument reading to be the same?
This temperature is called the brightness temperature of the source, not to be confused with the ordinary (or thermodynamic) temperature of that source. Even if the radiation measured is mostly or entirely emitted (as opposed to reflected) by a real body, its brightness temperature is not the same as its temperature unless we happen to choose a frequency range over which the emissivity of the body is almost 1. A brightness temperature always exists.
Whatever the instrument reads, we can always find one and only one brightness temperature – keep in mind that this temperature depends on the frequency interval and possibly the direction (unless the source is isotropic). And if the instrument is equipped with a polarizing filter and we were to rotate it, the brightness temperature might change (unless the source is un-polarized).
For Earth’s L&O (land and ocean) surface, with measured emissivity/absorptivity of ~0.95-0.98 and thus reflecting 2% to 5% of incident radiation, the brightness temperature is very near the real thermodynamic temperature, and many authors just forget the preceding. Especially many posters.
“..Planck’s Law can’t be used. Then one needs to consult the Schwarzschild eqn. (SE)”
If Planck’s distribution can’t be used then neither can the SE, because the SE contains the Planck distribution as a component. So if you haven’t found trouble applying the SE, then the Planck distribution must be equally useful. I have not in memory run across a problem applying the Planck distribution in steady state, because the brightness temperature is always useful. Perhaps you have one. I’m not really interested enough to spend time pursuing that line of inquiry.
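
Here is that brightness temperature recipe in code form. A sketch only; the 11 micron channel and the 0.95 emissivity are illustrative numbers consistent with the L&O figures above:

import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(wavelength, T):
    # blackbody spectral radiance B(lambda, T)
    return (2 * h * c**2 / wavelength**5
            / (math.exp(h * c / (wavelength * k * T)) - 1))

def brightness_temperature(wavelength, radiance):
    # invert Planck: the one and only blackbody temperature that would
    # give the instrument the same reading at this wavelength
    return (h * c / (wavelength * k)
            / math.log(1 + 2 * h * c**2 / (wavelength**5 * radiance)))

wl = 11e-6                         # m, an infrared window channel
measured = 0.95 * planck(wl, 288)  # grey body, emissivity 0.95, T = 288K
print(brightness_temperature(wl, measured))  # a few K below 288K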

joeldshore
April 14, 2014 6:09 pm

Stephen Wilde says:

As long as convection maintains the lapse rate slope set by mass and gravity (by netting out all the changes from internal forcing elements such as GHGs) the surface temperature does not need to change

More nonsense. The reason why more GHGs increase the surface temperature is not by changing the lapse rate but by changing the effective radiating level.
And, the reason why convection cannot cancel out the changing of forcing elements such as GHGs is exactly that the atmosphere is unstable to convection only so long as the lapse rate is greater than the adiabatic lapse rate, so convection cannot drive the temperature profile with altitude to be more uniform than the adiabatic lapse rate. Nikolov & Zeller showed this in their silly exercise where they put convection into the simplest greenhouse model (e.g., Willis’s Steel Greenhouse) in such a way that they did drive the atmosphere to an isothermal state and then made a big deal about the fact that the radiative greenhouse effect disappeared (apparently because they were completely ignorant of what they had done and of the literature that would have told them what to expect before they ever tried it).
Stephen, You keep talking nonsense and we will keep talking correct atmospheric physics.

joeldshore
April 14, 2014 6:14 pm

phi says:

All this is very strange since this phenomenon is perfectly logical, it is of the order of magnitude of the initial effect and we can say it was already identified by Manabe in the 1960s.

Why don’t you give us an exact specific reference to that paper and the part you are talking about, so we can evaluate what you are claiming in that context?
Bart says:

Yes. The problem there is that it is a static quantity, which does not change in response to greater heating. But, that is not a correct viewpoint. Convection increases with heat. This provides a negative feedback which can effectively cancel any radiative heating from the additional GHG.

Another person who has failed to learn from the elementary errors of Nikolov and Zeller. Please read my post to Stephen above about why you are not correct (about the convection being able to “effectively cancel” the radiative heating).

joeldshore
April 14, 2014 6:41 pm

bart says:

Yes, it does. The system
dT/dt = -a*T^4 + b*CO2
dCO2/dt = k*(T – To)
is unstable for b and k both greater than zero.

Now that I’ve worked through this, I’ll say, “Fair enough”, but your 2nd equation assumes that if you increase the temperature above some equilibrium temperature, the CO2 concentration will just increase forever. That doesn’t seem very realistic, and it is the source of the problem because, of course, if CO2 concentration increases forever then, sure, the temperature has to keep increasing too.
A more realistic set of equations would lead to a higher CO2 concentration giving a higher equilibrium temperature…where then both the CO2 and the temperature are in equilibrium.
I.e., you have essentially designed your model to get the desired result of a divergence because your model only has one temperature and one CO2 concentration at which CO2 and temperature can both be in equilibrium!
Off the top of my head, I imagine that adding a term like -c*CO2 to the 2nd equation might give something more realistic.
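
To make the stability point concrete, here is a toy integration of the pair of equations. The coefficients are purely illustrative, and I have written the damping as -c*(CO2 - Co) so that c = 0 recovers your system while c > 0 keeps the same equilibrium point:

# dT/dt = -a*T^4 + b*C
# dC/dt =  k*(T - To) - c*(C - Co)
def run(c, T=1.01, C=1.0, a=1.0, b=1.0, k=1.0, To=1.0, Co=1.0,
        dt=0.001, steps=50000):
    for _ in range(steps):
        dT = -a * T**4 + b * C
        dC = k * (T - To) - c * (C - Co)
        T, C = T + dT * dt, C + dC * dt
    return T, C

print(run(c=0.0))  # drifts away from (1, 1): the instability Bart describes
print(run(c=1.0))  # relaxes back toward (1, 1): the damped version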

joeldshore
April 14, 2014 6:55 pm

Okay…Going back, I understand where your 2nd equation comes from, which is essentially based on believing that the long term (multidecadal) as well as short term trends of CO2 concentration can be ascribed to temperature changes. Since nobody but you (not even the Greening Earth Society) believes that the CO2 increases over these multidecadal periods are attributable to temperature changes rather than human emissions, you’ve basically concluded that the climate system must be unstable if these positive feedbacks exist and if something else…which nobody else believes but you…is also true. [Even then, I am not sure you couldn’t fit the data just as well with a c value small enough that it would prevent instability…but that’s kind of irrelevant when you are doing a fit to the data that includes a component that everybody else but you knows to be due to something you’ve left out of your model.]

dbstealey
April 14, 2014 7:26 pm

joelshore says:
Okay…Going back, I understand where your 2nd equation comes from, which is essentially based on believing that the long term (multidecadal) as well as short term trends of CO2 concentration can be ascribed to temperature changes. Since nobody but you (not even the Greening Earth Society) believes that the CO2 increases over these multidecadal periods are attributable to temperature changes rather than human emissions…
In fact, ∆CO2 is caused by ∆T. In other words, changes in temperature are the cause of changes in atmospheric CO2.
That causation has been shown on all time scales, from months to hundreds of millennia. And that fact destroys the false presumption that changes in CO2 will cause changes in global temperature, since that has never been observed.
The alarmist clique started out with a flat wrong premise — that ∆CO2 is the cause of ∆T. It isn’t, in any measurable way. Any CO2-induced warming is so insignificant that it can be completely disregarded. It is just too small to measure, at current and projected concentrations. Almost all the warming happened in the first 20 – 40 ppmv. Now we are at ≈400 ppmv, and the effect is simply too small to measure.
So the alarmists’ wrong premise has led directly to their wrong conclusion: that a rise in CO2 will cause runaway global warming and climate catastrophe. It won’t. Thus, the “carbon” scare is debunked.
The real world disagrees with the alarmist clique, and for that reason their numbers are dwindling fast. Joelshore’s last sentence above shows him to be fatally ignorant about the most basic cause-and-effect between temperature and CO2. He has it backwards, so no wonder his conclusions are nonsensical. And no wonder that crowd has never gotten any prediction right.
There has never been a ‘fingerprint of AGW’ identified anywhere. It does not exist, in any scientifically measurable and testable manner. It is a complete figment of the swivel-eyed lunatics’ imaginations, where they still dream that their runaway global warming is right around the corner. Right. As if.
Hm-m-mm. So who to believe? joelshore? Or Planet Earth?
Because they cannot both be right.

Bart
April 14, 2014 8:21 pm

joeldshore says:
April 14, 2014 at 6:00 pm
Thank you for this input. I was obviously hasty, and need to think this through some more.
joeldshore says:
April 14, 2014 at 6:14 pm
You appear to be speaking in terms of absolute quantities, rather than perturbations. I, at least, am not saying by any means that the GHE does not exist. However, that does not mean that the effect is monotonic. It is a nonlinear system, and the sensitivity is not necessarily positive for all conditions. I do not have to invert the entire temperature profile with altitude to negate an incremental increase. I only have to oppose the increment.
In the present climate state, it appears that the sensitivity is effectively nil. That is an empirical fact. The global temperature metric is essentially a trend and a ~60 year quasi-cycle, and these components have been active since well before rising CO2 could have produced them.
At some point, the climate community is going to have to face reality. The baseline GHE theory is not working.
joeldshore says:
April 14, 2014 at 6:41 pm
“…but your 2nd equation assumes that if you increase the temperature above some equilibrium temperature, the CO2 concentration will just increase forever.”
This is the system which has been observed for the past 56 years. There may be some other limiting feedback terms which are not, in fact, yet observable. However, within this time period, if the sensitivity of surface temperature to CO2 concentration were significantly positive, enough to have been driving the increase in temperature over that interval, then there should have been a distinct exponentially increasing character to both CO2 and temperature.
To the contrary, temperatures have leveled out, and the rate of change of CO2 has leveled out.
Here is an interesting consideration, relevant to the previous discussion: If b is less than zero, then the system is stable.
“A more realistic set of equations would lead to a higher CO2 concentration giving a higher equilibrium temperature…where then both the CO2 and the temperature are in equilibrium.”
How would that be “more realistic”, when it is not what is observed?
“I.e., you have essentially designed your model to get the desired result of a divergence because your model only has one temperature and one CO2 concentration at which CO2 and temperature can both be in equilibrium!”
I did not design it that way. It is an empirical observation. See link above.
“Off the top of my head, I imagine that adding something a term like -c*CO2 to the 2nd equation might give something more realistic.”
Obviously, not more realistic, because this is the reality. You’ve got to match the observations. Otherwise, you are just making things up as you go along.

Bart
April 14, 2014 8:27 pm

joeldshore says:
April 14, 2014 at 6:55 pm
“Since nobody but you (not even the Greening Earth Society) believes that the CO2 increases over these multidecadal periods are attributable to temperature changes rather than human emissions, you’ve basically concluded that the climate system must be unstable if these positive feedbacks exist and if something else…which nobody else believes but you…is also true.”
It isn’t a matter of belief. This is the reality.
“Even then, I am not sure you couldn’t fit the data just as well with a c value small enough that it would prevent instability…”
It does not matter. If it is small enough to be unobservable in the last 56 years, then it is small enough not to affect the results over the last 56 years. And, the dynamics should have played out very differently than they have.

Bart
April 14, 2014 8:34 pm

joeldshore says:
April 14, 2014 at 6:55 pm
Maybe you are thinking that some other set of dynamics could fit the data, and not lead to the same conclusion?
Sorry, no. The uniqueness of the solutions of differential equations means that, anything else you might come up with has to be essentially equivalent.

Frank
April 14, 2014 11:22 pm

Trick wrote: “If Planck’s distribution can’t be used then neither can [the Schwarzschild eqn] because SE contains Planck distribution as a component.”
The Schwarzschild eqn gives the “correct” answer because the equilibrium predicted by the Planck function B(lambda,T) is modified by other terms which describe scattering (the source of emissivity and absorptivity) and the rate at which equilibrium is approached (which is proportional to n*σ, the absorber density times the absorption cross-section).
If one imagines a weight hanging from a spring in a gravitational field, the height (z) of the weight could be calculated from Hooke’s Law: mg = -k*(z - z_0). In doing so, I assumed the system was at equilibrium. If I set up the problem correctly: F = m*z''(t) = -k*(z(t) - z_0) - mg - friction damping (proportional to z'(t)?). The differential eqn has a family of time-dependent solutions that depend on initial conditions. Likewise the Schwarzschild equation covers a range of distance-dependent solutions that depend on initial conditions and the medium.
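
The analogy can be made explicit with a few lines of Python (the spring constant, damping and the 20-second run are illustrative choices):

# The static Hooke's-law height is the equilibrium toward which the
# time-dependent solutions relax, just as the Planck value is the
# equilibrium toward which Schwarzschild solutions relax with distance.
m, k_spring, g, damping = 1.0, 10.0, 9.8, 2.0
z_eq = -m * g / k_spring    # equilibrium from mg = -k*(z - z_0), z_0 = 0

z, v, dt = 0.0, 0.0, 0.001  # released at z_0 with zero velocity
for _ in range(20000):      # 20 seconds of motion
    accel = (-k_spring * z - m * g - damping * v) / m
    z, v = z + v * dt, v + accel * dt

print(z, z_eq)  # z has settled to ~ -0.98, the equilibrium value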

joeldshore
April 15, 2014 7:32 am

Bart says:

“A more realistic set of equations would lead to a higher CO2 concentration giving a higher equilibrium temperature…where then both the CO2 and the temperature are in equilibrium.”
How would that be “more realistic”, when it is not what is observed?

Well, it is the sort of behavior that has been observed over longer time scales. Given the significant temperature changes that have occurred in the past, your model would have led to runaways of the CO2 levels.

It isn’t a matter of belief. This is the reality.

I think you are too enamored with an empirical fit you have done that lacks any realistic physical mechanism. It is not such a surprise that you can fit the data reasonably well:
(1) You are fitting the shorter-time behavior reasonably well because over short times, it is realistic that a rise in temperature leads to a release of CO2 into the atmosphere.
(2) The fit of the multidecadal trend is not too difficult. The SLOPE of the CO2 concentration vs time curve has been increasing approximately linearly because that’s how emissions have been increasing. The temperature has also been increasing roughly linearly. So, it is not surprising that your model can approximately fit that.
The only coincidence that makes it possible is that it seems that the coefficients for the short time and long time fit are close enough that you can get a reasonable fit using one coefficient.
dbstealey says:

In fact, ∆CO2 is caused by ∆T. In other words, changes in temperature are the cause of changes in atmospheric CO2.

So, you believe Bart’s claim that the current rise in CO2 is not due mainly to our emissions but rather is just a result of the temperature increase?

Bart
April 15, 2014 7:53 am

Bart says:
April 14, 2014 at 8:21 pm
joeldshore says:
April 14, 2014 at 6:00 pm
Still, the question is not answered: How much of that gap is due to transmission losses, and how much is due to the initial surface distribution? How is the initial surface spectrum measured for confirmation?

Bart
April 15, 2014 8:09 am

joeldshore says:
April 15, 2014 at 7:32 am
“Well, it is the sort of behavior that has been observed over longer time scales.”
Actually, it isn’t. Historically, the temperature change always leads the CO2. Dr. Murry Salby has demonstrated how that “history” has been corrupted.
“Given the significant temperature changes that have occurred in the past, your model would have led to runaways of the CO2 levels.”
All I can tell you for certain is what the data say for the past 56 years. It is mathematically local, not global. It says nothing about long term stability.
And, again, it is stable if surface temperature sensitivity is actually negative, and this is consistent with the “pause” we are currently observing.
“I think you are too enamored with an empirical fit you have done that lacks any realistic physical mechanism.”
Data is primary, theory is secondary. As the Feynman quote goes, if your theory does not match experiment, you are wrong.
“You are fitting the shorter-time behavior reasonably well because over short times, it is realistic that a rise in temperature leads to a release of CO2 into the atmosphere.”
It’s a positive feedback regardless. And, there is no countervailing negative feedback over the time period in question which can oppose it, if the temperature sensitivity is net positive.
“The temperature has also been increasing roughly linearly. So, it is not surprising that your model can approximately fit that. “
This is bass-ackwards. The SLOPE of the CO2 concentration vs time curve has been increasing approximately linearly because that’s how temperatures have been increasing. Emissions have also been increasing roughly linearly. So, it is not surprising that your model can approximately fit that.
“The only coincidence that makes it possible is that it seems that the coefficients for the short time and long time fit are close enough that you can get a reasonable fit using one coefficient.”
It is moot. Over this timeline, the model fits. If the temperature sensitivity over this timeline is significantly positive, then we should see dynamics which appear to increase exponentially over that timeline. We don’t. Ergo, temperature sensitivity is at best essentially nil. QED.
“So, you believe Bart’s claim that the current rise in CO2 is not due mainly to our emissions but rather is just a result of the temperature increase?”
This is a misinterpretation. The CO2 rate of change is not merely a result of the temperature increase. It is the result of some process which is temperature dependent. Something like, perhaps, an elevated level of CO2 in the waters currently upwelling. I discuss that potential mechanism here.
Human inputs are not temperature dependent. Hence, they are ruled out as the main driver.

Trick
April 15, 2014 9:06 am

Frank 11:22pm: “The Schwarzschild eqn gives the “correct” answer..” ~Yes, for Earth once its constants are measured & input (or obtained from HITRAN & input to MODTRAN). Planck distribution will work fine e.g. on Mars straight away b/c it uses fundamental constants of nature; SE won’t work on Mars until its constants are calculated, measured, looked up…

dbstealey
April 15, 2014 9:49 am

joelshore says:
CO2… …is the sort of behavior that has been observed over longer time scales… So, you believe Bart’s claim that the current rise in CO2 is not due mainly to our emissions but rather is just a result of the temperature increase?
Annual CO2 emissions are on the order of about 3% of the total, so annual fluctuations are mainly caused by ocean absorption and outgassing, which is caused by ∆T.
But the cumulative effect of human-emitted CO2 is large. That is entirely a good thing, because CO2 is harmless, and beneficial to the biosphere. We are starved of CO2, therefore more CO2 is better, at both current and projected concentrations. That is what the real world clearly tells us. The false alarm created by the ‘carbon’ scare is all politics, with no credible science supporting it. It is simply a scam. It appears that the public agrees with that assessment.
Next, js says:
I think you are too enamored with an empirical fit you have done that lacks any realistic physical mechanism.
This is what happens when one’s understanding becomes clouded by Belief. The ‘realistic physical principle’ is the fact that the oceans take in and emit CO2. Most people here understand that basic principle, which has been observed not only on yearly/decadal time frames, but out to hundreds of millennia. CO2 always follows temperature. It is beyond me how anyone could disagree with empirical observations. But religious belief will do that.

Frank
April 15, 2014 11:19 am

Frank wrote: ““The Schwarzschild eqn gives the “correct” answer..”
Trick replied: “Yes, for Earth once its constants are measured & input (or obtained from HITRAN & input to MODTRAN). Planck distribution will work fine e.g. on Mars straight away b/c it uses fundamental constants of nature; SE won’t work on Mars until its constants are calculated, measured, looked up…”
Frank responds: The absorption cross-sections for GHGs, of course, don’t change from planet to planet. The density of GHGs and the temperature in different places on the planet do change. One clearly can’t apply the Schwarzschild eqn without real or hypothetical GHG and temperature data. However, what can you learn about Mars from Planck/S-B? With knowledge of the planetary albedo, you can calculate a blackbody equivalent temperature (usually assuming emissivity is 1). The earth’s blackbody equivalent temperature is 255 degK. That’s the temperature about 5 km above the surface, and emissivity isn’t 1 for the gases up there since they don’t emit at many thermal wavelengths and they aren’t in equilibrium with the radiation passing through them. The blackbody equivalent temperature on Venus is 184 degK (according to a NASA website), but that is the temperature about 70 km above the surface. With a very thin atmosphere, the blackbody equivalent temperature for Mars, 210 degK, may be close to the surface temperature. In all three cases, the blackbody equivalent temperature is an “average” of roughly the fourth power of the temperature for all of the molecules emitting photons to space, weighted by the number of photons that do reach space.
One can make useful first approximations about some things using Planck and S-B. One can make serious mistakes interpreting the results and applying them to systems that aren’t in equilibrium.
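
For reference, those blackbody equivalent temperatures follow from Planck/S-B in a couple of lines. The solar constants are standard values; the Venus albedo used here (~0.90) is simply the value that reproduces the 184 degK NASA figure quoted above, and is higher than the Bond albedo on current fact sheets:

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t_equiv(solar_constant, albedo):
    # blackbody equivalent temperature, emissivity assumed to be 1
    return (solar_constant * (1 - albedo) / (4 * SIGMA)) ** 0.25

print(t_equiv(1361, 0.30))  # Earth: ~255 degK
print(t_equiv(2601, 0.90))  # Venus: ~184 degK (albedo chosen to match)
print(t_equiv(586, 0.25))   # Mars:  ~210 degK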

joeldshore
April 15, 2014 2:34 pm

Bart says:

Still, the question is not answered: How much of that gap is due to transmission losses, and how much is due to the initial surface distribution? How is the initial surface spectrum measured for confirmation?

You can read about the details in many places. The measurements have been done over a variety of different surfaces and detailed fits have been done comparing the radiative transfer theory to what is observed.
You show amazing amounts of skepticism when the data and analysis show you something you don’t want to believe and amazingly little when your modeling of the data tells you what you would like to believe!

joeldshore
April 15, 2014 2:39 pm

…I’ll just add that, with few exceptions, I think the initial surface distributions (i.e., the surface emissivities) for most terrestrial surfaces (certainly the oceans) are pretty boring in that part of the IR.

Bart
April 15, 2014 6:11 pm

joeldshore says:
April 15, 2014 at 2:34 pm
“You show amazing amounts of skepticism when the data and analysis show you something you don’t want to believe and amazingly little when your modeling of the data tells you what you would like to believe!”
Because the modeling of data is largely under my control, and I can see directly what multiple consistent sources are telling me, without being filtered through the perspective of those with unknown motivation or skill. ClimateGate showed us that there are many scientists more dedicated to “the cause” than they are to science, and that they are able and willing to bully others into hewing to their line.
There actually is a reasonable resolution of this issue, and of your other criticisms, which I alluded to above. Basically, what we are talking about is local (in a mathematical sense) behavior. Overall, the GHE acts to heat the surface above what it would otherwise be. But total forcing is a nonlinear relationship and, for some given range of climate variables, a maximum is reached, where the sensitivity levels off. Beyond that point, you might even get net cooling as a result of increasing GHG.
It would be like the difference between a secant line and a tangent line. The overall forcing could be a globally (in a mathematical sense) positive function, but not necessarily locally increscent for every climate state.
So, you would need to show more than just a gap in the surface to TOA transmission to establish that the GHE is producing greater incremental forcing with added CO2. You would need to demonstrate, via a time elapse succession of such plots, that increasing CO2 correlates with an increasing gap for the current climate state.
I do not question the basic radiative GHE. Only someone who does not understand radiative physics would. But, I can see with my own eyes that it isn’t working out according to that basic formulation. If you are honest with yourself, you should at least have some level of concern or doubt by this time. The pause simply cannot be reconciled with any significant CO2 to surface temperature sensitivity when CO2 concentration has risen 30% more above pre-industrial levels during that time.

Trick
April 15, 2014 6:54 pm

Frank 11:19am: “However, what can you learn about Mars from Planck/S-B?”
The global climate. The global Tmean. The EEH. Gives a reality check on basic theory. Helps understand the T fields and ranges to be encountered on landers. Venus surface temperature est. was determined using basic Planck/S-B close enough to allow the very 1st atm. entry to have a temperature instrument constructed to range up to ~700K.
“That’s the temperature about 5 km above the surface and emissivity isn’t 1 for the gases up there since they don’t emit at many thermal wavelengths and they aren’t in equilibrium with the radiation passing through them.”
The atm. gas emissivity looking up from the surface is what counts for calculating the surface control volume Tmean in basic theory. Atm. gas emissivity is measured at most about 0.95 in the humid tropics, down to about 0.7 in dry polar regions; the global mean is ~0.8.
I would argue that equilibrium in the sense of long-term steady state is appropriate for judicious application of the Planck distribution, S-B and the SE on a planetary scale, based on their successful predictions. Sure, conditions in a tornado are not in steady state long enough to allow their judicious use. Birds & planes avoid ‘em too. Trailer parks not so much.
For exoplanets, the Planck distribution and S-B will be decent planet-wide tools also. Their constants are fundamental in nature.

Stephen Wilde
April 15, 2014 11:31 pm

To summarise:
That 102 Wm2 contained in thermals and evapo-transpiration moves by conduction from the mass of the surface to the mass of the atmosphere.
Once in the atmosphere it is in the form of gravitational potential energy, which is not heat and which does not radiate.
Accordingly it cannot return to the surface by downward radiation, and so K & T were wrong to add the 102 Wm2 to DWIR.
Instead, it returns to the surface by adiabatic warming of descending air.
That means that the surface temperature enhancement is a result of conduction to atmospheric mass and not radiation from GHGs to the surface.
At all times the radiative exchange between surface and air is in balance, the adiabatic exchange between surface and air is in balance, and the energy received by the surface and atmosphere from space is in balance with the energy radiated by the surface and atmosphere out to space.
The simplest scenario is this:
i) The radiative exchange between surface and atmosphere is in balance at 222 Wm2.
ii) The adiabatic exchange between surface and atmosphere is in balance at 102 Wm2.
iii) Energy absorbed by surface and atmosphere from space (67 + 168) is in balance with energy emitted by surface and atmosphere to space (165 + 30 + 40), which is 235 in each case.
The effect of radiative capability is therefore only to redistribute energy so that 168 absorbed by the surface becomes 40 emitted by the surface and 67 absorbed by the atmosphere becomes 195 emitted by the atmosphere (165 + 30).
Transparency to incoming shortwave and opacity to outgoing longwave simply re-apportions the share of the same amount of energy emitted to space between emissions from the surface and the atmosphere.
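
The three balances can be checked together (the figures are the corrected allocation set out above):

# Figures from the corrected allocation, all in Wm2
radiative = (222, 222)             # i) surface up, atmosphere down
adiabatic = (78 + 24, 102)         # ii) up in convection, down adiabatically
space = (67 + 168, 165 + 30 + 40)  # iii) absorbed from space, emitted to space

for inflow, outflow in (radiative, adiabatic, space):
    assert inflow == outflow
print(radiative[0], adiabatic[0], space[0])  # -> 222 102 235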

gbaikie
April 16, 2014 1:32 am

So if you accept Trenberth et al as corrected, does this energy budget provide any clues as to why Earth has gone through periods of warming and cooling?
Would it indicate that, other than an increase in solar energy, the only factor which could cause Earth to warm would be increased absorption of solar energy by the Earth’s oceans?

Stephen Wilde
April 16, 2014 2:32 am

gbaikie,
I agree that the baseline equilibrium temperature (assuming constant gravity and atmospheric mass) could only be affected by a change in TSI from the sun or the proportion of that TSI getting past atmospheric mass.
In the absence of changes in TSI, albedo becomes the critical factor, since that affects the proportion of available TSI able to enter the oceans in order to drive the hydrological cycle (effectively Earth’s climate).
It appears that solar variability does affect global cloudiness in ways that I have described separately.
So:
i) On long time scales the various periods of warming and cooling would be driven by the Milankovitch cycles which affect TSI.
ii) On shorter time scales there would be periods of less intense warming and cooling caused by solar changes altering Earth’s cloudiness and albedo and those solar changes would be modulated by ocean cycles which would sometimes supplement and sometimes offset solar variations.
I see no need for any other explanation.