Earth's baseline black-body model – "a damn hard problem"

The Earth only has an absorbing area equal to a two dimensional disk, rather than the surface of a sphere.

By Robert G. Brown, Duke University (elevated from a WUWT comment)

I spent what little of last night that I semi-slept in a learning-dream state chewing over Caballero’s book and radiative transfer, and came to two insights. First, the baseline black-body model (that leads to T_b = 255K) is physically terrible, as a baseline. It treats the planet in question as a nonrotating superconductor of heat with no heat capacity. The reason it is terrible is that it is absolutely incorrect to ascribe 33K as even an estimate for the “greenhouse warming” relative to this baseline, as it is a completely nonphysical baseline; the 33K relative to it is both meaningless and mixes both heating and cooling effects that have absolutely nothing to do with the greenhouse effect. More on that later.

I also understand the greenhouse effect itself much better. I may write this up in my own words, since I don’t like some of Caballero’s notation and think that the presentation can be simplified and made more illustrative. I’m also thinking of using it to make a “build-a-model” kit, sort of like the “build-a-bear” stores in the malls.

Start with a nonrotating superconducting sphere, zero albedo, unit emissivity, perfect blackbody radiation from each point on the sphere. What’s the mean temperature?
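As a sketch of the arithmetic for this first model (the solar constant S = 1361 W/m^2 and the function name are my own illustrative choices): balancing absorbed power S(1−α)πR² against emitted power 4πR²σT⁴ gives one uniform temperature.

```python
# Equilibrium temperature of a nonrotating, heat-superconducting blackbody
# sphere: absorbed power S*(1-albedo)*pi*R^2 equals emitted power
# 4*pi*R^2*sigma*T^4, so T = (S*(1-albedo)/(4*sigma))**0.25.
# S = 1361 W/m^2 is an assumed illustrative value for the solar constant.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def superconducting_sphere_T(S=1361.0, albedo=0.0):
    """Uniform temperature of a sphere that conducts heat perfectly."""
    return (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(superconducting_sphere_T())            # ~278 K for zero albedo
print(superconducting_sphere_T(albedo=0.3))  # ~255 K, the familiar T_b
```

With albedo 0.3 this is exactly the baseline T_b = 255 K the post opens with.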

Now make the non-rotating sphere perfectly non-conducting, so that every part of the surface has to be in radiative balance. What’s the average temperature now? This is a better model for the moon than the former, surely, although still not good enough. Let’s improve it.

Now make the surface have some thermalized heat capacity — make it heat-superconducting, but only in the vertical direction, and presume a mass shell of some thickness that has some reasonable specific heat. This changes nothing from the previous result, until we make the sphere rotate. Oooo, yet another average (surface) temperature, this time the spherical average of a distribution that depends on latitude, with the highest temperatures dayside near the equator sometime after “noon” (lagged because now it takes time to raise the temperature of each block as the insolation exceeds blackbody loss, and time for it to cool as the blackbody loss exceeds insolation), and the surface is never at a constant temperature anywhere but at the poles (no axial tilt, of course). This is probably a very decent model for the moon, once one adds back in an albedo (effectively scaling down the fraction of the incoming power that has to be thermally balanced).

One can for each of these changes actually compute the exact parametric temperature distribution as a function of spherical angle and radius, and (by integrating) compute the change in e.g. the average temperature from the superconducting perfect black body assumption. Going from superconducting planet to local detailed balance but otherwise perfectly insulating planet (nonrotating) simply drops the nightside temperature for exactly 1/2 the sphere to your choice of 3K or (easier to idealize) 0K after a very long time. This is bounded from below, independent of solar irradiance or albedo (or for that matter, emissivity). The dayside temperature, on the other hand, has a polar distribution with a pole facing the sun, and varies nonlinearly with irradiance, albedo, and (if you choose to vary it) emissivity.

That pesky T^4 makes everything complicated! I hesitate to even try to assign the sign of the change in average temperature going from the first model to the second! Every time I think that I have a good heuristic argument for saying that it should be lower, a little voice tells me — T^4 — better do the damn integral, because the temperature at the separator has to go smoothly to zero from the dayside and there’s a lot of low-irradiance (and hence low temperature) area out there where the sun is at five o’clock, even for zero albedo and unit emissivity! The only easy part is that to obtain the spherical average we can just take the dayside average and divide by two…
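As it happens, for the second (nonrotating, nonconducting, zero-albedo) model the damn integral can be done in closed form, and the sign comes out negative; a sketch with illustrative numbers (S = 1361 W/m^2):

```python
# Area-weighted mean surface temperature of a nonrotating, nonconducting
# blackbody sphere (zero albedo): each dayside point satisfies
# sigma*T^4 = S*cos(theta); nightside T -> 0.  With u = cos of the solar
# zenith angle, the spherical average is
#   (1/2) * integral_0^1 (S*u/sigma)**0.25 du = 0.4 * T_subsolar,
# since integral_0^1 u**0.25 du = 4/5.  Numbers are illustrative.

SIGMA = 5.670e-8
S = 1361.0

T_subsolar = (S / SIGMA) ** 0.25          # ~394 K at the subsolar point

T_mean = 0.5 * (4.0 / 5.0) * T_subsolar   # dayside mean / 2 (nightside ~0 K)

print(T_subsolar, T_mean)  # ~394 K and ~157 K
```

So the mean drops far below the ~278 K of the superconducting zero-albedo sphere: local detailed balance plus that T^4 buys you a much colder average, with no greenhouse anywhere in sight.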

I’m not even happy with the sign for the rotating sphere, as this depends on the interplay between the time required to heat the thermal ballast given the difference between insolation and outgoing radiation and the rate of rotation. Rotate at infinite speed and you are back at the superconducting sphere. Rotate at zero speed and you’re at the static nonconducting sphere. Rotate in between and — damn — now by varying only the magnitude of the thermal ballast (which determines the thermalization time) you can arrange for even a rapidly rotating sphere to behave like the static nonconducting sphere and a slowly rotating sphere to behave like a superconducting sphere (zero heat capacity and very large heat capacity, respectively). Worse, you’ve changed the geometry of the axial poles (presumed to lie untilted w.r.t. the ecliptic still). Where before the entire day-night terminator was smoothly approaching T = 0 from the day side, now this is true only at the poles! The integral of the polar area (for a given polar angle d\theta) is much smaller than the integral of the equatorial angle, and on top of that one now has a smeared out set of steady state temperatures that are all functions of azimuthal angle \phi and polar angle \theta, one that changes nonlinearly as you crank any of: Insolation, albedo, emissivity, \omega (angular velocity of rotation) and heat capacity of the surface.
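One can watch that interplay in a toy integration of a single equatorial patch; everything here (the two heat capacities, time step, starting temperature) is an illustrative guess, not tuned to any real body:

```python
# One equatorial surface patch on a rotating, laterally nonconducting sphere:
#   C * dT/dt = S * max(cos(omega*t), 0) - sigma * T**4.
# Sweeping the areal heat capacity C shows the two limits described above:
# small C behaves like the static nonconducting sphere (big day/night swing),
# large C behaves like the superconducting sphere (temperature pinned).
# All parameter values are illustrative, not tuned to any real body.

import math

SIGMA = 5.670e-8
S = 1361.0
DAY = 86400.0                 # rotation period, s
OMEGA = 2 * math.pi / DAY

def diurnal_swing(C, days=20, dt=60.0):
    """Forward-Euler integration; return (T_min, T_max) over the last day."""
    T = 250.0
    lo, hi = float("inf"), 0.0
    steps = int(days * DAY / dt)
    for i in range(steps):
        flux_in = S * max(math.cos(OMEGA * i * dt), 0.0)
        T += dt * (flux_in - SIGMA * T ** 4) / C
        if i >= steps - int(DAY / dt):   # sample only the final rotation
            lo, hi = min(lo, T), max(hi, T)
    return lo, hi

small = diurnal_swing(C=2e4)   # thin rocky skin: huge swing
big = diurnal_swing(C=2e7)     # deep thermal ballast: nearly constant T
print(small, big)
```

(The large-C case equilibrates slowly, so its mean is still drifting after twenty model days; the point is only the relative size of the swings.)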

And we haven’t even got an atmosphere yet. Or water. But at least up to this point, one can solve for the temperature distribution T(\theta,\phi,\alpha,S,\epsilon,c) exactly, I think.

Furthermore, one can actually model something like water pretty well in this way. In fact, if we imagine covering the planet not with air but with a layer of water with a blackbody on the bottom and a thin layer of perfectly transparent saran wrap on top to prevent pesky old evaporation, the water becomes a contribution to the thermal ballast. It takes a lot longer to raise or lower the temperature of a layer of water a meter deep (given an imbalance between incoming and outgoing radiation) than it does to raise or lower the temperature of maybe the top centimeter or two of rock or dirt or sand. A lot longer.
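A back-of-the-envelope version of "a lot longer", using round textbook material properties (the 100 W/m^2 imbalance and both depths are illustrative choices):

```python
# Rough timescale: time for a net flux imbalance dF (W/m^2) to change the
# temperature of a well-mixed layer by dT is  t = rho * c * depth * dT / dF.
# Material properties are round textbook numbers; depths follow the text.

def response_time(rho, c, depth, dT=1.0, dF=100.0):
    """Seconds to warm a layer of given density/specific heat/depth by dT."""
    return rho * c * depth * dT / dF

t_water = response_time(rho=1000.0, c=4186.0, depth=1.0)   # 1 m of water
t_rock = response_time(rho=2700.0, c=800.0, depth=0.02)    # 2 cm of rock

print(t_water / 3600.0, t_rock / 3600.0)  # hours: ~11.6 vs ~0.12
```

Roughly a factor of a hundred in thermal response time, which is the whole point of the saran-wrap ocean.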

Once one has a good feel for this, one could decorate the model with oceans and land bodies (but still prohibit lateral energy transfer and assume immediate vertical equilibration). One could let the water have the right albedo and freeze when it hits the right temperature. Then things get tough.

You have to add an atmosphere. Damn. You also have to let the ocean itself convect, and have density, and variable depth. And all of this on a rotating sphere where things (air masses) moving up deflect antispinward (relative to the surface), things moving down deflect spinward, things moving north deflect spinward (they’re going too fast) in the northern hemisphere, things moving south deflect antispinward, as a function of angle and speed and rotational velocity. Friggin’ coriolis force, deflects naval artillery and so on. And now we’re going to differentially heat the damn thing so that turbulence occurs everywhere on all available length scales, where we don’t even have some simple symmetry to the differential heating any more because we might as well have let a five year old throw paint at the sphere to mark out where the land masses are versus the oceans, or, better yet, given him some Tonka trucks and let him play in the spherical sandbox until he had a nice irregular surface, and then filled the surface with water until it was 70% submerged or something.

Ow, my aching head. And note well — we still haven’t turned on a Greenhouse Effect! And I now have nothing like a heuristic for radiant emission cooling even in the ideal case, because it is quite literally distilled, fractionated by temperature and height even without CO_2 per se present at all. Clouds. Air with a nontrivial short wavelength scattering cross-section. Energy transfer galore.

And then, before we mess with CO_2, we have to take quantum mechanics and the incident spectrum into account, and start to look at the hitherto ignored details of the ground, air, and water. The air needs a lapse rate, which will vary with humidity and albedo and ground temperature and… The molecules in the air recoil when they scatter incoming photons, and if a collision with another air molecule occurs in the right time interval they will mutually absorb some or all of the energy instead of elastically scattering it, heating the air. A molecule can also absorb one wavelength and emit a cascade of photons at a different wavelength (depending on its spectrum).

Finally, one has to add in the GHGs, notably CO_2 (water is already there). They have the effect of intercepting some of the outgoing radiance from the (higher temperature) surface in certain bands and transferring it to CO_2, where it is trapped until it diffuses to the top of the CO_2 column, where it is emitted at a cooler temperature. The total power going out is thus split up, with that pesky blackbody spectrum modulated so that different frequencies have different effective temperatures, in a way that is locally modulated by — nearly everything. The lapse rate. Moisture content. Clouds. Bulk transport of heat up or down via convection. Bulk transport of heat up or down via caged radiation in parts of the spectrum. And don’t forget sideways! Everything is now circulating, wind and surface evaporation are coupled, the equilibration time for the ocean has stretched from “commensurate with the rotational period” for shallow seas to a thousand years or more so that the ocean is never at equilibrium, it is always tugging surface temperatures one way or the other with substantial thermal ballast, heat deposited not today but over the last week, month, year, decade, century, millennium.

Yessir, a damn hard problem. Anybody who calls this settled science is out of their ever-loving mind. Note well that I still haven’t included solar magnetism or any serious modulation of solar irradiance, or even the axial tilt of the earth, which once again completely changes everything, because now the timescales at the poles become annual, and the north pole and south pole are not at all alike! Consider the enormous difference in their thermal ballast and oceanic heat transport and atmospheric heat transport!

A hard problem. But perhaps I’ll try to tackle it, if I have time, at least through the first few steps outlined above. At the very least I’d like to have a better idea of the direction of some of the first few build-a-bear steps on the average temperature (while the term “average temperature” has some meaning, that is before making the system chaotic).

rgb

George E. Smith;
January 16, 2012 3:32 pm

“”””” Myrrh says:
January 14, 2012 at 5:59 pm
George E. Smith; says:
January 14, 2012 at 3:02 pm
So the Planck formula, and the S-B result are useful starting points to investigate emission from real objects. No real object obeys either the Planck radiation law, or the S-B law. In particular, the earth’s moon is not even approximately close to being able to absorb ALL EM radiation that falls on it, so the moon doesn’t obey either Planck or S-B; but despite Myrrh’s declaration, it doesn’t “junk” S-B, nor does NASA.
🙂 Well, whatever else it did, my choice of wording spurred you into an excellent, but what do I know?, explanation… “””””
You know Myrrh; you just done gone and earned my respect.
It takes a real man to admit that maybe he has it wrong; and that IS the learning process.
I don’t know it all, not even a tiny bit of it; and I have put my foot in it so many times; just ask Phil about some of my bobby dazzlers.
But I am willing to make the effort, to try and reduce parts of this stuff; the small parts I understand, to a level where ANYBODY can understand it.
And I think you have grasped it. The Planck formula and the S-B constant (it’s a fundamental physical constant) ONLY WORK for a theoretical gizmo, one that doesn’t really exist.
Fortunately we can and do make some very close approximations to a black body at some specific Temperature. You can actually buy a “Freezing Copper” black body cavity, that is set up to heat copper until it melts, and then let it refreeze as it slowly loses energy (radiation (NOT HEAT)), and then when it reaches the freezing point of copper, the Temperature fall stops, while the copper gives up its “latent heat” of melting, and while it holds at the copper freezing Temperature, the cavity aperture will emit a nearly perfect black body radiation spectrum centered on that Temperature’s peak wavelength. When the copper has all frozen, then the Temperature will continue to drop. Hopefully you got your measurements made during the freeze.
Filthy rich people can have a platinum freeze black body source. I dunno how filthy they are but they have to be rich.
So although the Planck and S-B relations don’t apply to anything real, they are a very good starting point, as MANY real objects follow them fairly well OVER USEFUL WAVELENGTH RANGES.
And that point is important.
The black body radiation (Planck) curve has 25% of its energy at wavelengths shorter than the spectrum peak, and 75% at longer wavelengths. It has only 1% at wavelengths shorter than 1/2 of the peak wavelength, and only 1% at wavelengths longer than 8 times the peak. So 98% of the energy is between 0.5 and 8.0 times the peak wavelength, and for most purposes that’s plenty good enough.
So for the solar spectrum at 6,000 K which peaks at about 0.5 microns, the 98% range is from 250 nm to 4.0 microns. For the earth surface emitted long wavelength infra-red for say 300 Kelvin, the peak is at 10.0 microns, so the useful range is 5.0 to 80.0 microns, which covers the important 15 micron CO2 band, and the 9.6 micron Ozone band.
You see it really doesn’t matter if a 300 K body goes out of whack (invites junking) at wavelengths less than 5 microns, or longer than 80 microns, because we aren’t expecting to have any interest out there for the 300 K earth surface, so if it is fairly close within that 98% spectrum range, we feel justified in using Planck and S-B to describe it; at least until we learn it is further out of whack.
So I have hopes for you Myrrh, congratulations.
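Those spectrum fractions can be verified numerically. A quick sketch in the dimensionless variable x = hc/(λkT), where the fractions are temperature-independent (the integration details and tolerances here are my own choices):

```python
# Numeric check of the stated Planck-curve fractions.  In the dimensionless
# variable x = h*c/(lambda*k*T), the peak of B_lambda sits at x_p ~ 4.965
# (Wien), the total emitted power is proportional to
# integral_0^inf x^3/(e^x - 1) dx = pi^4/15, and the band from 0.5 to 8
# times the peak *wavelength* maps to x in [x_p/8, 2*x_p].

import math

XP = 4.9651  # Wien peak of B_lambda in units of x = h*c/(lambda*k*T)

def planck_fraction(x_lo, x_hi, n=100000):
    """Fraction of total blackbody power with x in [x_lo, x_hi] (trapezoid)."""
    total = math.pi ** 4 / 15.0
    h = (x_hi - x_lo) / n
    f = lambda x: x ** 3 / math.expm1(x)
    s = 0.5 * (f(x_lo) + f(x_hi)) + sum(f(x_lo + i * h) for i in range(1, n))
    return s * h / total

# 0.5*lambda_peak .. 8*lambda_peak  <->  x in [XP/8, 2*XP]
print(planck_fraction(XP / 8, 2 * XP))   # ~0.98
# shortward of the peak wavelength    <->  x > XP (60 stands in for infinity)
print(planck_fraction(XP, 60.0))         # ~0.25
```

Both the 98% and the 25% figures come out essentially as George states them.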

January 16, 2012 5:07 pm

What am I missing here?
The definition of equilibrium.
You keep talking about things falling. Nothing is falling. The air column is static. If it is truly adiabatic (isolated) and you wait a long time, it will reach equilibrium. The definition of equilibrium is uniform temperature. I’m tempted to say “end of story” (because it is) but I will relent and give you a very, very simple example.
Suppose you have two containers of oxygen. The left hand container has a pressure of P. The right hand container has a pressure of 2P. We’ll assume that we’re far away from any sort of critical point so that the oxygen is an “ideal gas” or near enough, although that doesn’t matter. Make it a van der Waals gas, who cares?
Put the two in “thermal contact” and otherwise wrap them in insulation. Wait. When you come back you find a) The first container has half the temperature of the second; or b) The two have the same temperature; or c) something else (your choice) independent of the initial temperature of the gasses in the containers?
When you understand the correct answer to this question, you will understand that a pressure gradient per se has nothing to do with being in thermal equilibrium.
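A sketch of the bookkeeping behind that question, assuming the same ideal gas in both rigid containers (the numbers are illustrative): energy conservation forces answer (b), a single common temperature, while the pressures stay unequal.

```python
# Two rigid containers of the same ideal gas placed in thermal contact and
# insulated from the outside: total internal energy U = sum of n*Cv*T is
# conserved, and equilibrium means one shared temperature.  Since Cv is the
# same gas-to-gas it cancels, leaving Tf = (n1*T1 + n2*T2)/(n1 + n2).
# The pressure difference persists; it has nothing to do with equilibrium.

R = 8.314  # J/(mol K)

def final_state(n1, T1, n2, T2, V1, V2):
    """Final (T, P1, P2) after thermal contact between rigid containers."""
    Tf = (n1 * T1 + n2 * T2) / (n1 + n2)
    return Tf, n1 * R * Tf / V1, n2 * R * Tf / V2

# Right-hand container at twice the pressure (twice the moles, same T and V):
Tf, P1, P2 = final_state(n1=1.0, T1=300.0, n2=2.0, T2=300.0, V1=1.0, V2=1.0)
print(Tf, P2 / P1)  # 300.0 and 2.0: one temperature, two pressures
```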
You can also consider Mr. Ocean. Below the thermocline, the Ocean has a temperature that is constant to within a hair (within a degree C). Of course the pressure increases by an atmosphere every ten meters or so.
Note well that temperature has nothing to do with energy density in the example above. It has to do with the energy distributed per degree of freedom in the system. The number of degrees of freedom for oxygen at normal roomish temperatures is 5 — 3 translation and two rotation — per molecule. Heat it up enough and it goes up first to six, then to seven (or more) as one excites additional modes (there is one more rotational mode and vibrational modes but there is a quantum barrier that prevents them from participating in the sharing of energy close to room temperature — not enough molecules that can provide a full quantum of energy). Read:
http://en.wikipedia.org/wiki/Heat_capacity
especially remarks on monoatomic and diatomic gas, as well as:
http://en.wikipedia.org/wiki/Equipartition_theorem
see especially figure 4.
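The equipartition arithmetic behind those degree-of-freedom counts, as a quick sketch (molar Cv = (f/2)R for f active quadratic degrees of freedom):

```python
# Equipartition sketch: molar heat capacity at constant volume is
# Cv = (f/2) * R for f active quadratic degrees of freedom.  Diatomic O2
# near room temperature has f = 5 (3 translational + 2 rotational);
# vibration stays frozen out until much higher temperatures.

R = 8.314  # J/(mol K)

def cv_molar(f):
    """Molar Cv for f active quadratic degrees of freedom."""
    return 0.5 * f * R

print(cv_molar(3))  # monatomic gas (He, Ar): ~12.5 J/(mol K)
print(cv_molar(5))  # diatomic at room T (N2, O2): ~20.8 J/(mol K)
print(cv_molar(7))  # diatomic with vibration active: ~29.1 J/(mol K)
```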
Again — if gravity created a spontaneous, permanent temperature gradient, our energy problems would be over. We would just generate electricity by putting a heat engine between the hot side and the cold side and wait for gravity to re-separate out the molecules like a little “Maxwell’s Demon”. But sadly (well, really it is rather fortunate, as violating energy and entropy rules would be “bad” as far as the overall consistency of the Universe is concerned) reversible laws of nature like gravity do not.
This isn’t terribly easy to understand, I agree. Introductory thermodynamics is the undergraduate class that is a strong competitor to quantum mechanics in terms of level of difficulty. Statistical mechanics (the sound theoretical basis for thermodynamics) is more difficult than quantum theory. It is easy even for physicists to state things that are wrong, that make no real sense, that violate the laws of thermodynamics. This particular argument seems to be at the heart of one big, long, extended mistake: the idea that gravitational compression creates a static, stable, temperature gradient in a fluid. It does not. Period. To get a temperature gradient in anything, you have to have heat flow (or work being done) and gravity does no net work on a gravitationally confined fluid in static equilibrium.
rgb

January 16, 2012 5:11 pm

But if you do so, you remove energy from the tall container, so it is not a perpetual motion machine. You still have to add energy to the system to get continuous work out of it.
Not at all. I use the energy to drive a fan — in the gas. All the heat I turn into work remains, as energy, in the gas. The fan just runs forever.
rgb

January 16, 2012 9:34 pm

Anyway, whether by conduction or convection or intelligent design (!), if the column of air has arrived at a stable state with hotter stuff at the bottom and cooler stuff at the top, it will only stay in that condition if continually heated from the bottom, thus satisfying Robert’s need for a perpetual energy flow in order to maintain the gradient. If the column didn’t have that energy input, it would gradually lose energy and settle to the ground until limited by increasing density. I don’t quarrel about the system not being in TD equilibrium, it won’t be, but it will be ‘stable’ as long as the energy keeps coming.
Agreed. The reason the atmosphere takes on the thermal profile that it does is, to put it mildly (having just read through Caballero’s chapters on fluid thermodynamics again) “complicated”, but Caballero actually has as an exercise showing that thermodynamic equilibrium in a static fluid is uniform temperature both horizontally and vertically.
rgb

January 16, 2012 9:58 pm

Robert Brown repeated a common misunderstanding at 1008, 13Jan2012 with: Anything with a temperature radiates.
Well, there were a few assumptions in there. The matter has to be charged matter, for example. The emissivity may be very low for quantum mechanical reasons. The matter has to be present in sufficient density and with sufficient interaction coupling so that the concept of “temperature” is itself relevant.
But beyond that, yes, anything with a temperature radiates. I didn’t say that it follows a blackbody radiation curve, mind you, only that as charged particles accelerate, they radiate, and as charged matter not in its ground state interacts (even quantum mechanically) there have to be very special, unusual circumstances for there to be no electromagnetic coupling at all connecting the excited states to the ground state, no channels at all for the system to “cool” by emitting photons.
As was correctly pointed out, for transparent gases (low absorptivity and emissivity) the thermally generated radiation may be very weak, but it is not zero.
OTOH, “dark matter” and “dark energy” may not radiate. No charge (apparently) and/or no electromagnetic coupling — hence it is “dark”. But I await definitive confirmation, some way of seeing dark matter before I completely believe in it or worry about it further, and it is irrelevant to the current discussion.
rgb

January 16, 2012 10:21 pm

Joe says:
January 15, 2012 at 10:50 pm
This has been a very instructive interchange but it appears to be petering out -> finally! I can rarely spend much time reading blogs and comments except late at night and I could not spend the time on this one to get more involved…

Joe, you would find reading, and working through, Caballero to be very instructive, because many of your remarks indicate a misunderstanding of how radiative balance is managed. It isn’t local immediate equilibrium, because the temperature of everything is constantly changing — either warming or cooling, and doing so differently at different heights. All that matters is whether total energy absorbed on average over years equals total energy radiated away on average over years. If not decades. It doesn’t matter at all where the energy is incident on the Earth — usually “somewhere sunlight is illuminating” since the Sun is the big source of external energy — or where it is emitted as radiation — from the surface of the ground, from the top surface of a cloud, reflected from the surface of the ocean, from water vapor at some height, from CO_2 at a different height, from O_3 at still a different height, or whether the heat was absorbed in the tropics but radiated away near the north pole. All that matters is that on average ins equal outs to maintain (on average) a constant temperature. To the extent that they never precisely balance minute to minute, hour to hour, day to day, the temperature everywhere is in flux, warming here, cooling there, and the Earth’s outgoing radiation balance is similarly fluctuating with respect to time of day, albedo, latitude, time of year, emissivity, what the major oscillations and weather are doing to the jet stream, sea surface temperatures.
So it is important to understand that radiating some of the long wavelength outgoing radiation from the top of the troposphere instead of directly from the ground does lead to higher average temperatures on the ground. However, it’s not at all clear that adding more CO_2 will substantially alter the effective radiation height of the upper atmosphere — the atmosphere is optically thick already as far as the CO_2 band is concerned and the radiation is more or less emitted (within an optical path of) the top of the troposphere already. There isn’t any room for it to move higher — it runs into the stratosphere! In fact, if CO_2 concentrations did indeed push it much higher, one might actually see the CO_2 band emission temperature increase, and faster cooling. Unless the addition of CO_2 is going to somehow increase the actual thickness of the troposphere while maintaining the same lapse rate, I suspect that we are well into a regime where additional CO_2 has almost no effect on the outgoing radiation profile, which may be why it has been very difficult to directly observe in e.g. NASA IR spectroscopy of the surface. At the very least the signal is buried in noise that is orders of magnitude greater.
rgb

Editor
January 17, 2012 1:57 am

Robert Brown says:
January 16, 2012 at 5:07 pm

What am I missing here?

The definition of equilibrium.
You keep talking about things falling. Nothing is falling. The air column is static. If it is truly adiabatic (isolated) and you wait a long time, it will reach equilibrium. The definition of equilibrium is uniform temperature. I’m tempted to say “end of story” (because it is) but I will relent and give you a very, very simple example.

Thank you for the explanation, Robert. I fear it didn’t respond to my question:

Consider a gas in a kilometre-tall sealed container. You say it will have no lapse rate, so suppose (per your assumption) that it starts out at an even temperature top to bottom.
Now, consider a collision between two of the gas molecules that knocks one molecule straight upwards, and the other straight downwards. The molecule going downwards will accelerate due to gravity, while the one going upwards will slow due to gravity. So the upper one will have less kinetic energy, and the lower one will have more kinetic energy.
After a million such collisions, are you really claiming that the average kinetic energy of the molecules at the top and the bottom of the tall container are going to be the same?

I don’t see anything in there about “things falling”. I asked about whether after a million collisions where one atom goes up and decelerates and one goes down and accelerates the KE is the same top and bottom. I don’t see why that should be. And unfortunately, although your explanation was clear, it didn’t answer my question.
What is wrong with what I describe here? I’m starting to think you may be right, but I don’t understand why my description above is inaccurate. I thought that the relation between potential and kinetic energy was the standard explanation for the reason that there is a DALR. It equalizes the energy in the gas, and it is the reason for the “g” term in the equation for the DALR, g / Cp.
So why would that not work in a sealed container? What is different in a sealed container than out here on the surface of the planet, that out here we have a DALR, and inside the sealed container there is none?
Many thanks for your answers to my questions,
w.
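For reference, the g/Cp figure under discussion works out numerically as follows (a sketch with standard dry-air values; note this is the rate at which a parcel cools when *displaced* adiabatically, which is distinct from the equilibrium profile of a static, isolated column — the distinction at issue in the thread):

```python
# The dry adiabatic lapse rate quoted in the exchange: Gamma = g / Cp.
# g = 9.81 m/s^2 and Cp = 1004 J/(kg K) are standard dry-air values.

g = 9.81     # m/s^2
Cp = 1004.0  # J/(kg K), dry air at constant pressure

dalr = g / Cp          # K per metre
print(dalr * 1000.0)   # ~9.8 K per kilometre
```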

January 17, 2012 2:04 am

With regard to gas at equilibrium in a gravitational field, it appears that the Velasco et al. paper discussed here http://tallbloke.wordpress.com/2012/01/04/the-loschmidt-gravito-thermal-effect-old-controversy-new-relevance/#comment-13608 says that there is indeed a lapse rate but that it’s negligible for gas-molecule ensembles the size of our atmosphere.

wayne
January 17, 2012 3:28 am

Dr. Brown, there is a paper http://arxiv.org/PS_cache/arxiv/pdf/1003/1003.1508v2.pdf by Gerlich and Tscheuschner, and they end up with exactly the opposite of your stance on what you and Willis were discussing above: whether a lapse rate would naturally form under gravity in an idealized column of air. Have you read it, and have any thoughts? I’m leery (and a bit confused after this last week).
They summarize:

Results
By combining hydrodynamics, thermodynamics, and imposing the above listed assumptions
for planetary atmospheres one can compute the temperature profiles of idealized atmospheres.
In case of the adiabatic atmosphere the decrease of the temperature with height is described
by a linear function with slope −g/Cp, where Cp depends weakly on the molecular mass. …

Not trying to put you on the hot seat, but I cannot seem to find the error in their derivation and equations; then again, I am rather new to thermodynamics. Two weeks ago I firmly fell as you do, isothermal; then I wavered. The courses I just went through seemed not to delve to such depths. It may just be that they end up on the last page combining in a manner that ends up breaking the zeroth law… that is, according to your explanation.
This seems to be one of the most misunderstood areas that should be well known but brother, such diverse views!

Bryan
January 17, 2012 4:13 am

wayne
G&T say
” In case of the adiabatic atmosphere the decrease of the temperature with height is described
by a linear function with slope −g/Cp, where Cp depends weakly on the molecular mass. …”
However, earlier they say that in a mixture of gases such as air the radiative contributions are included in the bulk thermodynamic quantities such as Cp.
Certainly looking up the tables and comparing say N2 and CO2 over the atmospheric temperature range 250K to 350K we find that
N2 changes by 0.04% i.e. almost constant
CO2 changes by 13% !!!!!!!
Why does CO2 change so much?
In the case of CO2 extra degrees of freedom over and above translational become available corresponding for instance to wavelengths 15um and 4um.
G&T warn that there is an obvious danger here: if an attempt is made to do a separate, purely radiative calculation, then some energy could be counted twice.

dr.bill
January 17, 2012 5:16 am

re wayne, January 17, 2012 at 3:28 am :
Hi Wayne,
Pardon me for responding to a question directed to someone else, but I think what’s missing in the interpretation of “eventual equilibrium” is the final physical location of the gas molecules. If you keep heating at the bottom, you can make the top of the gas column stay where it is, or rise, or fall by some amount, depending on the energy input, and you can get a stable lapse rate if you persist with the same amount of heating for sufficient time.
If, however, the column receives NO heating, it will eventually end up as a “dense as possible” thin layer on the ground, and THAT layer will eventually attain the same temperature as the surface it’s sitting on (assumed to have a never-varying temperature), and the gas will have a uniform temperature throughout its now “very short” height.
/dr.bill

January 17, 2012 5:56 am

Willis:
Pardon me for butting into your colloquy with Dr. Brown, particularly because this is a non-rigorous answer. For the time being, I believe the rigorous answer is given by the Velasco et al. paper I mentioned above.
But a non-rigorous yet more intuitively appealing answer may be that the smaller number of higher-altitude molecules are knocked upward by the larger number of lower-altitude molecules, and the difference goes into maintaining the greater potential energy.
Not rigorous, but maybe it helps?

Joe Postma
January 17, 2012 8:09 am

===================================
Willis Eschenbach says:
January 16, 2012 at 11:38 am
Consider a gas in a kilometre-tall sealed container. You say it will have no lapse rate, so suppose (per your assumption) that it starts out at an even temperature top to bottom.
Now, consider a collision between two of the gas molecules that knocks one molecule straight upwards, and the other straight downwards. The molecule going downwards will accelerate due to gravity, while the one going upwards will slow due to gravity. So the upper one will have less kinetic energy, and the lower one will have more kinetic energy.
After a million such collisions, are you really claiming that the average kinetic energy of the molecules at the top and the bottom of the tall container are going to be the same?
I say no. I say after a million collisions the molecules will sort themselves so that the TOTAL energy at the top and bottom of the container will be the same. In other words, it is the action of gravity on the molecules themselves that creates the lapse rate.
==================================
Willis this is a very nice description of the physics. But consider not a million collisions, but ~10^30 particles undergoing ~10^9 collisions per second.
I would like to point out that the atmosphere should not be referred to (as some people have) as a static fluid — it is not; it is a compressible gas. Fluids don’t compress, gases do.
Also, the equation for balance of energy is very simple and is discussed at great length in these two papers, which I will re-link to again:
http://www.tech-know.eu/uploads/Understanding_the_Atmosphere_Effect.pdf
http://principia-scientific.org/publications/The_Model_Atmosphere.pdf
For a gas in a “general” state, or near-state, of thermal equilibrium (the total energy in a 1 m^2 gas column is about ~10^7 Joules, and the daily variation about this value is some very small percentage – this will be shown in my upcoming paper on the differential heat-flow equation characterizing this air column), there is a balance between kinetic and potential energy. Very simply, as Willis beautifully described it, that leads to more kinetic energy being found at the bottom of the air column and less at the top, and hence the gas temperature tracks that. Combine that with the definition of what an average means, and you must conclude that the bottom of the atmosphere HAS to be warmer than the average of the entire thermodynamic ensemble. Again, please refer to the linked papers for an in-depth discussion of that. The definition of an average temperature means that ~half of the particles WILL be hotter than the average. That half being hotter doesn’t mean conservation of energy has been violated! Such is the meaning of “average”, but when you apply the meaningless conservation of power in place of conservation of total energy, it really confuses the thermodynamics.
Lastly, here is a video simulation of an ideal gas column in a gravitational field:

This simulation can be scaled up to a very large number of particles, at the expense of CPU time, but you actually don’t need to. The nice thing about ideal gases is that they scale – i.e., the behavior of a 3-particle ensemble might be quite unique, but as soon as you get up to ~a few dozen particles in a confined space the general behavior of the ensemble will be essentially identical to any other larger set of particles in the same space. This is the basis of statistical thermodynamics, of course, though I may not have described it very well.
In any case, the simulation lets you SEE this balance between kinetic and potential energy taking place, and also density. There is a higher density of particles near the ground and they are also bouncing around faster (the data can be extracted from the sim to show this), thus, more dense and more hot near the surface. And then cooler and less dense at altitude.
So, the thermal equilibrium profile of a compressible gas column in a gravitational field IS one of the temperature distribution described. That IS its thermal equilibrium state. Start with U = CpT + gh : that simple. Thermal equilibrium as equating to a constant, uniform temperature throughout is only one possible state for a given type of ensemble to which that end-state would apply – a metal bar heated on both ends, perhaps. But given that pressure changes non-linearly with altitude when in thermal equilibrium, and there’s no problem with that, there’s no reason to assume temperature can’t also have a distribution. There IS energy coming in to the system (the Sun) and energy leaving (LW radiation), and it is roughly equal, and that keeps the column “afloat”.
Also, given the fact that U = CpT + gh leads to the exactly-measured DALR in reality, I would say it’s a successful application of basic physics. G&T are correct in stating that the radiative effects are already included in the Cp, and that counting them over again is a double-counting. This was described and explained very nicely by Timothy Casey here:
The Shattered Greenhouse: http://greenhouse.geologist-1011.net/
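As a quick numerical check of the dT/dh = -g/Cp claim, here is a minimal sketch using standard textbook values for dry air (g = 9.81 m/s², Cp = 1005 J/(kg·K) – my choices of constants, not figures from the thread):

```python
# Dry adiabatic lapse rate from dT/dh = -g/Cp.
# Standard textbook values for dry air (assumed, not from the thread):
g = 9.81       # gravitational acceleration, m/s^2
cp = 1005.0    # specific heat at constant pressure, J/(kg K)

dalr = g / cp              # lapse-rate magnitude, K/m
print(dalr * 1000)         # about 9.76 K per km
```

That comes out near the ~9.8 K/km figure quoted for the DALR elsewhere in the thread.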

January 17, 2012 8:58 am

The Postma animation above is not inconsistent with my (non-physicist’s) interpretation of the above-mentioned Velasco et al. paper, which does conclude that there’s a lapse rate at equilibrium. But (again, if my calculations are correct) it says that the lapse rate is negligible for as many molecules as are in our atmosphere.
I’m just a layman, but I have elsewhere implored others to show me where this (non-intuitive) conclusion is wrong, and I have so far received no convincing response.

January 17, 2012 9:05 am

I don’t see anything in there about “things falling”. I asked about whether after a million collisions where one atom goes up and decelerates and one goes down and accelerates the KE is the same top and bottom. I don’t see why that should be. And unfortunately, although your explanation was clear, it didn’t answer my question.
Imagine a plane surface in the gas. In a thin slice of the gas right above the surface, the molecules have some temperature. Right below it, they have some other temperature. Let’s imagine the gas to be monoatomic (no loss of generality) and ideal (ditto). In each layer, the gravitational potential energy is constant. Bear in mind that only changes in potential energy are associated with changes in kinetic energy (work energy theorem), and that temperature only describes the average internal kinetic energy in the gas.
Here’s the tricky part. In equilibrium, the density of the upper and lower layers, while not equal, cannot vary. Right? Which means that however many molecules move from the lower slice to the upper slice, exactly the same number of molecules must move from the upper slice to the lower slice. They have to have exactly the same velocity distribution moving in either direction. If the molecules below had a higher temperature, they’d have a different MB distribution, with more molecules moving faster. Some of those faster moving molecules would have the right trajectory to rise to the interface (slowing, sure) and carry energy from the lower slice to the upper. The upper slice (lower temperature) has fewer molecules moving faster — the entire MB distribution is shifted to the left a bit. There are therefore fewer molecules that move the other way at the speeds that the molecules from the lower slice deliver (allowing for gravity). This increases the number of fast moving molecules in the upper slice and decreases it in the lower slice until the MB distributions are the same in the two slices and one accomplishes detailed balance across the interface. On average, just as many molecules move up, with exactly the same velocity/kinetic energy profile, as move down, with zero energy transport, zero mass transport, and zero alteration of the MB profiles above and below, only when the two slices have the same temperature. Otherwise heat will flow from the hotter (right-shifted MB distribution) to the colder (left-shifted MB distribution) slice until the temperatures are equal.
The only way for this not to be true is to have a “Maxwell’s Demon” at the interface. Gravity is reversible and cannot act as a Maxwell’s Demon. Basically what I am saying is that a state where the two slices are at different temperatures is unlikely compared to a state where they are the same — there are many, many more ways to arrange all of the total energy of the system where the internal kinetic energy per molecule is on average the same in the two slices than there are ways to arrange it so that they are separated, as you can understand from the simple description of the process describing transfer of heat above. The entropy of the system is greater when the temperatures are the same. The two-color (otherwise identical) salt grains are mixed, not separated, as you bounce them around, even in gravity.
This argument works for any partitioned gas, and is how one proves (with a bit more work) that thermal equilibrium is equivalent to “the same temperature throughout” very nearly independent of anything. I mean that this statement is true for a staggering array of cases where it can be shown explicitly or measured, so much so that it is the Zeroth Law of thermodynamics at this point, not really an option. Thermal equilibrium is isothermal as long as there is any pathway, reversible or not, between two reservoirs and quite independent of whether or not the reservoirs are stacked vertically or horizontally or have the same or different pressures or have the same or different gases in them in the same or different initial states — after all of the mixing and sharing of energy is done, the equilibrium system is isothermal.
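Robert’s isothermal conclusion is easy to visualize without running dynamics at all: sample the canonical (Boltzmann) distribution for an ideal-gas column directly and compare slices at different heights. Note that this sketch assumes the canonical distribution factorizes into a barometric height factor times a height-independent Maxwellian — which is exactly the textbook result being defended here, so it illustrates the claim rather than proving it independently. Units and height cutoffs are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
m, g, kT = 1.0, 1.0, 1.0                      # reduced units

# Canonical ensemble for an ideal gas in gravity: height and velocity
# are independent, P(h) ~ exp(-m*g*h/kT), P(v) Maxwellian at every h.
h = rng.exponential(kT / (m * g), N)          # barometric height distribution
v = rng.normal(0.0, np.sqrt(kT / m), (N, 3))  # same Maxwellian at all heights

ke = 0.5 * m * (v**2).sum(axis=1)             # kinetic energy per particle

low, high = h < 0.5, h > 2.0                  # near the ground vs aloft
print(low.sum(), high.sum())                  # far fewer particles aloft
print(ke[low].mean(), ke[high].mean())        # both near 1.5 kT: isothermal
```

Density falls off with altitude while the mean kinetic energy per particle does not, which is precisely the distinction between a pressure gradient and a temperature gradient.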
rgb

Joe Postma
January 17, 2012 9:16 am

Sorry for double-posting this video but I want to make another point with it:

I don’t know if everyone here has an intuitive mind for physics, but when I look at that simulation (and having written it) I can see in my head exactly what would happen if you introduced additional parameters.
Imagine you take this simple gas, and add a molecule which has various internal degrees of freedom that can be excited by collision. When these internal degrees of freedom become excited, they DO so because they have absorbed kinetic energy from the collision. Kinetic energy is therefore taken out of the aggregate ensemble, to be converted into individual internal energy states of the additional molecule(s). A new equilibrium state of the aggregate ensemble will thus be established, and that state MUST have lower kinetic energy, because a small parcel of energy was “lost” to an internal state of the molecule(s).
Now, as long as that internal energy isn’t lost, this new equilibrium state will be constant, though cooler than before. However, now consider that the internal energy of the molecule can exit the aggregate system by being radiated away. That parcel of energy then leaves the ensemble completely. Soon after, the internal state will again be re-activated by another collision, and thus another parcel of energy is taken out of the system. Now you keep on running that sequence, and that is how radiative emission causes kinetic cooling. Even for a simple gas, some small bit of radiation is lost just in the collisions themselves, though others have pointed out it is a very small rate. Molecular emission greatly amplifies the radiative loss, of course, and thus causes cooling.
So this is another project I want to finish: to write that ideal-gas simulation including molecules with internal states. It seems obvious to conclude initially that having some (non-radiating) molecules with internal states (initially inactive) will cause a “relaxation” of the kinetic thermal profile, due to the energy being taken out of the ensemble into the internal states activated by collision, compared to what said profile would be otherwise.
If you then turned on radiative emission for those molecules you would obviously have eventual complete collapse of the whole gas column due to cooling.
So therefore you’d have to, and could, provide additional input kinetic energy to the gas simply via its collisions with the ground surface, which would be akin to the sunlight-heated surface transferring heat to the gas. I’ve approximated some of that already and seen the results, just by making the collisions damped with inelasticity.
Anyway, it’s pretty clear that having molecules with internal states causes damping of the kinetic, i.e. thermal, collisions, and that should relax the entire thermal profile.
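Joe’s energy bookkeeping can be made concrete with classical equipartition: at fixed total energy, spreading the same energy over 3 + f degrees of freedom per molecule drops the translational temperature by a factor of 3/(3 + f). A back-of-envelope sketch (the function and numbers are mine, assuming fully classical internal modes and no radiative loss):

```python
# Fixed total energy E shared by N molecules, each with 3 translational
# plus f classical internal degrees of freedom; equipartition gives
# E = N * (3 + f) * k * T / 2.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def translational_T(E, N, f):
    """Translational temperature after E equipartitions over 3 + f dofs."""
    return 2.0 * E / (N * (3 + f) * K_B)

N = 1e23
E = 1.5 * N * K_B * 300.0        # energy of a 300 K gas with no internal modes

print(translational_T(E, N, 0))  # 300 K: all energy stays translational
print(translational_T(E, N, 2))  # 180 K: two internal modes soak up energy
```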

Joe Postma
January 17, 2012 9:41 am

That’s a very good description Robert, just above in your last post. Painted a very nice mental picture of what’s going on, which I really like!
So, agreed that the same number of particles moving up, as moving down, between two slices.
The upward moving particles, from below, are initially warmer, say. The downward moving particles from above are initially cooler. Is this a stable equilibrium?
Or, lets change that and assume they are at the SAME temperature, above and below.
The upward moving particles have to lose kinetic energy, however, to potential energy. At some point, if they’re unimpeded, they actually pass through a complete zero of kinetic energy (vertically), and that has to be a “cooler” aggregate state than the maximum of kinetic energy they have at the surface.
So, the up-moving particles lose energy, the down-moving ones gain energy. The average of the whole ensemble is somewhere in the middle.
I am not sure there is a demon. Yes the number of particles passing through an infinitesimal slice is equal; however, they enter into a volume with less kinetic energy than they just previously had…so they have to be cooler.
So perhaps, on average, the particles would have equal kinetic energy when passing through each particular infinitesimal slice. But, the number of particles passing through each slice decreases with altitude (re: pressure, and not all of them bounce that high). And because the up-moving ones must be losing kinetic energy, and the kinetic energy becomes zero at the highest altitude, there must also be a decrease in the kinetic energy of the particles with altitude, i.e. temperature.
So the number of particles passing through each infinitesimal slice decreases with altitude, and they have to have a lower kinetic energy at each slice, with altitude, but each slice has on average an equal number of particles passing through it up and down.

Joe Postma
January 17, 2012 9:50 am

===========================================
Joe Postma says:
January 17, 2012 at 9:41 am
So the number of particles passing through each infinitesimal slice decreases with altitude, and they have to have a lower kinetic energy at each slice, with altitude, but each slice has on average an equal number of particles passing through it up and down.
===========================================
Actually, maybe the simpler thing to think about would be this: it is not quite true that each infinitesimal slice has a perfectly equal number of particles passing THROUGH it up and down: some of the upward moving particles, following a parabolic path due to gravity, just perfectly climax directly on the slice, and never actually pass completely through it and move above it. Hence, pressure gradient, fewer particles with altitude, and a thermal gradient. There’s a demon called gravity pushing some of the particles back down below the slice, never letting some of them through it.

January 17, 2012 10:02 am

I am not sure there is a demon. Yes the number of particles passing through an infinitesimal slice is equal; however, they enter into a volume with less kinetic energy than they just previously had…so they have to be cooler.
No, they don’t, because the mean free path is smaller than the thickness of my slices. Otherwise one cannot assign the slices a “temperature”.
Nothing is changed if you put a thin perfectly conducting barrier in between the slices and insist on the number of collisions above and below being equal. Again, the thermal distribution of speeds within the mean free path of the surface has to be identical.
You’re on the right track trying to build an applet. I’m looking for one that is already built, myself — I’m lazy. A simple MD simulation that sums and averages molecular speeds as a function of height whenever the mean free path is small compared to secular coarse grained distances will make the point. Wait a long time and you’ll observe equipartition of energy, precisely as you would if you inserted a whole series of aluminum foil “pistons” in between layers of gas.
The point is that you will violate the zeroth law and detailed balance of energy transport unless each degree of freedom has 1/2 kT energy, quite independent of whether or not gravity acts. This is independent of the pressure, the density, the number of molecules, the kinds of molecules. It is true for a mixture of gases, gases at different pressures and densities of different kinds in different containers. It is true for solids in contact with gases, liquids in contact with solids, everything in or not in gravitational fields. Equilibrium is isothermal.
How many times do I need to assert this? Do you really think that every thermodynamics textbook in the Universe is wrong and we’re just now discovering it?
rgb

January 17, 2012 11:36 am

Robert Brown: “Do you really think that every thermodynamics textbook in the Universe is wrong and we’re just now discovering it?”
It does seem arrogant, doesn’t it? Still, I don’t think that fairly describes what some of us think. What I for one think is that the Velasco et al. physics-teachers-journal comment I mentioned above says something different from what you do: it says gravity does indeed impose a lapse rate at equilibrium. If so, might not other physics folks say something similar?
Velasco et al. may be an outlier. More probably, I just don’t understand the (to this layman, daunting) math it contains. But as citizens we can’t simply throw up our hands and accept the word of whoever claims to be authoritative. My experience at least is that episodes in which that has been done tend to end badly.

Joe Postma
January 17, 2012 12:37 pm

Well there’s no question about equipartition of energy. But the energy available to equipart changes with altitude…it decreases. The available energy is not only specified by the aggregate sum divided equally to each slab, it is also specified by the altitude of the slab. U = CpT + gh. At each slice the energy is equiparted, surely, but when a particle falls it gains kinetic energy, when it rises it loses kinetic energy.
The relative difference between the slice thickness and MFP changes with altitude due to the change in pressure. I think it’s better to consider a true slice, an infinitesimally thin slice, because that’s simpler. Then, there’s a slab above and below the slice. The pressure of these slabs is certainly different. The total number of particles in each is different. And there’s some small number of particles that they’re constantly exchanging. Go to the next slice above the slab, and there’s fewer particles being exchanged, etc.
How do you explain, then, that the DALR is perfectly described and derived by U = CpT + gh ? The total energy of a slab of air is given by its thermal capacity, its temperature, and its potential energy, i.e., by its kinetic and potential energy. If the total energy of the slab is not changing because it is in equilibrium with the inputs and outputs, then dT/dh = -g/Cp. And that’s what is observed…we observe a temperature decrease with altitude, of exactly this value.
Maybe, then, we need not to be so bounded by the restriction of thinking of the gas column as being in thermal equilibrium. There’s obviously a great thermal load emplaced in the day time at the bottom of the column.
It may be correct that in an ideal-gas thermal equilibrium in a gravity field, the T should be uniform. But if that’s true then I don’t know why U = CpT + gh still gives a perfect description of the atmosphere, seemingly quite accidentally. I can run that sim and extract the speed distribution from slabs at various heights…something tickles my brain that I may have done that once already and found the result you insist on…not sure. But I can also run it with damping included to see if that changes things, because damping certainly is occurring and the gas is not perfectly ideal.
We observe a temperature decrease with altitude and it is described nicely by the equation U = CpT + gh : the sum of kinetic and potential energy. If it’s a radiative effect that’s causing this and it is outside the bounds of the standard physics of kinetic & potential energy, then the DALR is still incorporated via the Cp parameter to make that equation still work. It means interpreting that equation as we intended to set it up, as a balance of kinetic and potential energy, is in error, and that what the equation represents, because of the Cp parameter, is the resulting distribution due to the radiative effects included in the Cp parameter. But then you would think that CO2 should have a gigantic Cp so that it can dominate the DALR, since the rest of the atmosphere is said to have little radiative significance. But we all know how the average Cp of the atmosphere is calculated, and CO2 actually has a negligible contribution to it!
UPDATE: I went away and ran my sim to collect some data.
I have run my ideal-gas simulation and extracted the speed distribution for two different sets of particles: at a snapshot occurring every 200 particle collisions I collect the data, and keep re-collecting it over a long period, letting it average out, building up the two speed-distributions. One set of particles is all of those between the arbitrary height of 0.5 & 1.0 in the sim, and the other set of particles is those corresponding to a height of 4.0 or above. The average height is about 2.5. MFP is close to that as found at STP, ~10 molecular diameters.
After a ridiculous number of collisions/data collections, two speed-distributions start to come up. As far as I can tell by looking at the graph, they peak at the same place, which means they have the same temperature. This is what Robert is saying should happen, and thus is why it is confusing that U = CpT + gh -> dT/dh = -g/Cp works.
So all this is very strange. -g/Cp works but Cp is dominated by radiatively inert gases. We observe -g/Cp in reality and it seems U = CpT + gh is a perfectly good way of characterizing the energy, and we would have thought this in the first place, before we even looked, that there should be a thermal gradient in the atmosphere when energy input = energy output, due to this very simple physics.
Maybe we just can’t think of the situation in terms of thermal equilibrium with the restrictions as described by Robert. There is equilibrium in regards to the basic energy coming in = going out, but it is not thermal equilibrium as strictly defined, with an isothermal end-state – by simple observation, of course. There’s “new” energy always coming in (daytime) pumping up the system, and there’s “old” energy always being lost due to radiation.
The analogy would be to a metal bar heated on one end, with a cold-sink on the other, with either end held fixed in temperature by the source/sink. There’d be a thermal energy flow down the bar and that coming in would be equal to that leaving, but there would certainly be a temperature distribution down the length of the bar and it would eventually become static – unchanging but not uniform. You might call this “equilibrium” but it isn’t isothermal thermodynamic equilibrium as described by Robert.
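That bar analogy can be checked in a few lines: relax the 1-D steady heat equation with the ends pinned, and the interior converges to a linear, static, non-isothermal profile. A minimal sketch (grid size and end temperatures are arbitrary assumptions of mine):

```python
import numpy as np

# Steady state of a bar with ends held at 400 K and 300 K:
# Jacobi relaxation of d2T/dx2 = 0 on a uniform grid.
n = 51
T = np.full(n, 300.0)
T[0], T[-1] = 400.0, 300.0       # fixed source and sink temperatures

for _ in range(20_000):          # iterate until the profile stops changing
    T[1:-1] = 0.5 * (T[:-2] + T[2:])

expected = np.linspace(400.0, 300.0, n)   # exact steady state is linear
print(np.abs(T - expected).max())         # essentially zero
```

The result is static (nothing changes in time) yet has a temperature distribution, which is exactly the steady-state-versus-equilibrium distinction being drawn above.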

Joe Postma
January 17, 2012 1:28 pm

Perhaps this is relevant:
http://en.wikipedia.org/wiki/Non-equilibrium_thermodynamics
“Temperature for bodies in a steady state but not in thermodynamic equilibrium:
While for bodies in their own thermodynamic equilibrium states, the notion of temperature safely requires that all empirical thermometers must agree as to which of two bodies is the hotter or that they are at the same temperature, this requirement is not safe for bodies that are in steady states though not in thermodynamic equilibrium. It can then well be that different empirical thermometers disagree about which is the hotter, and if this is so, then at least one of the bodies does not have a well defined absolute thermodynamic temperature.”
So, steady state but not thermodynamic equilibrium, sounds like the atmosphere.
“Temperature for bodies not in a steady state:
When a body is not in a steady state, then the notion of temperature becomes even less safe than for a body in a steady state not in thermodynamic equilibrium. This is also a matter for study in non-equilibrium thermodynamics.”
That sounds even more like the atmosphere, because it isn’t actually perfectly steady-state.
So the atmosphere is basically the worst-case study for applied thermodynamics: it is close to being steady-state, but still not in equilibrium. So the requirement of iso-temperature is in fact not there in the theory.
But we do still have the extensive/intensive properties of parcels of gas, i.e. thermal kinetic and potential energy, and somehow the equation for this results in the observed thermal profile of the atmosphere. I guess the “somehow” is due to the requirement of iso-temperature not applying to the thermodynamic state of this system, since the system doesn’t fit the required parameters for such a state to be achieved in any case.

Joel Shore
January 17, 2012 2:28 pm

The lapse rate in the Earth’s atmosphere is well-understood: If convection were not possible, an atmosphere could have any lapse rate it wanted depending on where energy is absorbed and emitted. Once one considers the possibility of convection, it is found that lapse rates larger than the appropriate adiabatic lapse rate are unstable to convection and hence convection occurs and lowers the lapse rate back down to the appropriate adiabatic lapse rate.
This is why the troposphere, strongly heated from below and cooled from above, is approximately at the adiabatic lapse rate: it would be at an even higher lapse rate if convection could not occur, but since convection is triggered, the lapse rate is lowered back down to the adiabatic lapse rate.
And, it also explains why other parts of the atmosphere, like the stratosphere are not at the adiabatic lapse rate. For these regions, the heating is such that the lapse rate is less steep than the adiabatic lapse rate (or even has temperature INCREASING with height) and hence it is stable and convection is suppressed.
The important thing to recognize is that the lapse rate alone does not determine the surface temperature. To determine the surface temperature, you must also know the temperature at some height. The constraint actually ends up being that the temperature at the effective radiating level has to be the Earth’s blackbody temperature of 255 K. Where the effective radiating level is depends on the opacity of the atmosphere to terrestrial radiation, i.e., on the greenhouse effect. For the Earth with its present constituents, the effective radiating level is about 5 km and the environmental lapse rate is about 6.5 K per km, yielding a surface temperature of about 255 K + (6.5 K/km)*(5 km) = 287.5 K. As the levels of greenhouse gases in the atmosphere increase, the effective radiating level goes up and (to first order) the lapse rate doesn’t change. As a result, the surface temperature increases. For example, if the effective radiating level moved up to 6 km, then the surface temperature would rise to 255 K + (6.5 K/km)*(6 km) = 294 K.
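Joel’s arithmetic, reproduced directly (all figures are the ones given in his comment):

```python
# Surface temperature = effective radiating temperature + lapse * height.
T_eff = 255.0   # Earth's effective blackbody temperature, K
lapse = 6.5     # environmental lapse rate, K/km

for h_km in (5, 6):
    print(h_km, "km ->", T_eff + lapse * h_km, "K")  # 287.5 K, then 294.0 K
```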

Bryan
Reply to  Joel Shore
January 17, 2012 3:13 pm

Joel Shore says
.
“The lapse rate in the Earth’s atmosphere is well-understood: If convection were not possible, an atmosphere could have any lapse rate it wanted depending on where energy is absorbed and emitted.”
This is an odd way to express an idea as well as being wrong.
What is meant by “atmosphere could have any lapse rate it wanted “?
It seems to allow the atmosphere a consciousness to want something.
It’s a bit like another statement often heard that is still nonsense:
“nature abhors a vacuum”
The atmosphere at times has a still-air condition (no convection) known as the neutral atmosphere.
It then rigidly follows the DALR of -g/Cp = -9.8 K/km in the Earth’s troposphere

Editor
January 17, 2012 3:56 pm

Dr. Brown, thank you so much. After following your suggestion and after much beating of my head against Caballero, I finally got it.
At equilibrium, as you stated, the temperature is indeed uniform. I was totally wrong to state it followed the dry adiabatic lapse rate.
I had asked the following question:

Now, consider a collision between two of the gas molecules that knocks one molecule straight upwards, and the other straight downwards. The molecule going downwards will accelerate due to gravity, while the one going upwards will slow due to gravity. So the upper one will have less kinetic energy, and the lower one will have more kinetic energy.
After a million such collisions, are you really claiming that the average kinetic energy of the molecules at the top and the bottom of the tall container are going to be the same?

What I failed to consider is that there are fewer molecules at altitude because the pressure is lower. When the temperature is uniform from top to bottom, the individual molecules at the top have more total energy (KE + PE) than those at the bottom. I said that led to an uneven distribution in the total energy.
But by exactly the same measure, there are fewer molecules at the top than at the bottom. As a result, the isothermal situation does in fact have the energy evenly distributed. More total energy per molecule times fewer molecules at the top exactly equals less energy per molecule times more molecules at the bottom. Very neat.
Many thanks,
w.
Cross-posted to the “Matter of Some Gravity” thread.

Myrrh
January 17, 2012 6:45 pm

George E. Smith; says:
January 16, 2012 at 3:32 pm
……………
Thank you, George. Much appreciated.
Pierre R Latour says:
January 13, 2012 at 7:18 am
GHG Theory 33C Effect Whatchamacallit
Further to my comment on this to you here and to an exchange I had about it here: http://wattsupwiththat.com/2012/01/15/sense-and-sensitivity-ii-the-sequel/#comment-866656
I have expanded on it here: http://wattsupwiththat.com/2012/01/13/a-matter-of-some-gravity/#comment-867720
Just to keep you in the loop because I’ve quoted you, and thank you; your post helped concentrate my attention on an aspect that’s been increasingly bothering me.
