
By Robert G. Brown, Duke University (elevated from a WUWT comment)
I spent what little of last night that I semi-slept in a learning-dream state chewing over Caballero’s book and radiative transfer, and came to two insights. First, the baseline black-body model (the one that leads to T_b = 255K) is physically terrible as a baseline. It treats the planet in question as a nonrotating superconductor of heat with no heat capacity. The reason it is terrible is that it is absolutely incorrect to ascribe 33K as even an estimate for the “greenhouse warming” relative to this baseline, because the baseline itself is completely nonphysical; the 33K relative to it is meaningless, and it mixes heating and cooling effects that have absolutely nothing to do with the greenhouse effect. More on that later.
I also understand the greenhouse effect itself much better. I may write this up in my own words, since I don’t like some of Caballero’s notation and think that the presentation can be simplified and made more illustrative. I’m also thinking of using it to make a “build-a-model” kit, sort of like the “build-a-bear” stores in the malls.
Start with a nonrotating superconducting sphere, zero albedo, unit emissivity, perfect blackbody radiation from each point on the sphere. What’s the mean temperature?
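A minimal numerical sketch of this first step, assuming a solar constant of about 1361 W/m^2 and the idealizations as stated (the numbers are illustrative, not taken from the post):

```python
# Sketch: effective temperature of a nonrotating, heat-superconducting blackbody sphere.
# Assumed values (illustrative): S = 1361 W/m^2, albedo = 0, emissivity = 1.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S     = 1361.0           # solar constant at the planet, W m^-2
albedo, emissivity = 0.0, 1.0

# Absorbed power = pi*R^2*(1 - albedo)*S; emitted power = 4*pi*R^2*emissivity*sigma*T^4.
# The R^2 cancels, leaving the familiar quartic balance.
T_eff = ((1.0 - albedo) * S / (4.0 * emissivity * SIGMA)) ** 0.25
print(f"Superconducting sphere: T = {T_eff:.1f} K")   # ~278 K; ~255 K with albedo = 0.3
```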
Now make the non-rotating sphere perfectly non-conducting, so that every part of the surface has to be in radiative balance. What’s the average temperature now? This is a better model for the moon than the former, surely, although still not good enough. Let’s improve it.
Now make the surface have some thermalized heat capacity — make it heat superconducting, but only in the vertical direction and presume a mass shell of some thickness that has some reasonable specific heat. This changes nothing from the previous result, until we make the sphere rotate. Oooo, yet another average (surface) temperature, this time the spherical average of a distribution that depends on latitude, with the highest temperatures dayside near the equator sometime after “noon” (lagged because now it takes time to raise the temperature of each block as the insolation exceeds blackbody loss, and time for it to cool as the blackbody loss exceeds radiation, and the surface is never at a constant temperature anywhere but at the poles (no axial tilt, of course). This is probably a very decent model for the moon, once one adds back in an albedo (effectively scaling down the fraction of the incoming power that has to be thermally balanced).
One can for each of these changes actually compute the exact parametric temperature distribution as a function of spherical angle and radius, and (by integrating) compute the change in e.g. the average temperature from the superconducting perfect black body assumption. Going from superconducting planet to local detailed balance but otherwise perfectly insulating planet (nonrotating) simply drops the nightside temperature for exactly 1/2 the sphere to your choice of 3K or (easier to idealize) 0K after a very long time. This is bounded from below, independent of solar irradiance or albedo (or for that matter, emissivity). The dayside temperature, on the other hand, has a polar distribution with a pole facing the sun, and varies nonlinearly with irradiance, albedo, and (if you choose to vary it) emissivity.
That pesky T^4 makes everything complicated! I hesitate to even try to assign the sign of the change in average temperature going from the first model to the second! Every time I think that I have a good heuristic argument for saying that it should be lower, a little voice tells me — T^4 — better do the damn integral because the temperature at the separator has to go smoothly to zero from the dayside and there’s a lot of low-irradiance (and hence low temperature) area out there where the sun is at five o’clock, even for zero albedo and unit emissivity! The only easy part is that to obtain the spherical average we can just take the dayside average and divide by two…
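For what it is worth, that integral is easy to do numerically under the stated idealizations (zero albedo, unit emissivity, nightside pinned at 0 K). The sketch below, with the same assumed solar constant as above, checks the “divide by two” shortcut:

```python
# Sketch: spherical-average temperature of the nonrotating, locally balanced sphere.
# Each dayside patch satisfies sigma*T^4 = S*cos(theta), theta measured from the subsolar point;
# the nightside is taken to be at 0 K, so the spherical average is the dayside average / 2.
import numpy as np

SIGMA, S = 5.670374419e-8, 1361.0
u = np.linspace(0.0, 1.0, 200001)         # u = cos(theta); hemisphere area weighting is uniform in u
T_day = (S * u / SIGMA) ** 0.25           # local radiative balance temperature

T_day_avg = T_day.mean()                  # dayside area-weighted average, = (4/5)*(S/sigma)^(1/4)
print(f"dayside average  : {T_day_avg:.1f} K")       # ~315 K
print(f"spherical average: {T_day_avg / 2:.1f} K")   # ~157 K, with the nightside held at 0 K
```

Under these idealizations the spherical average lands well below the superconducting 278 K, which settles the sign of the change for this particular step.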
I’m not even happy with the sign for the rotating sphere, as this depends on the interplay between the time required to heat the thermal ballast given the difference between insolation and outgoing radiation and the rate of rotation. Rotate at infinite speed and you are back at the superconducting sphere. Rotate at zero speed and you’re at the static nonconducting sphere. Rotate in between and — damn — now by varying only the magnitude of the thermal ballast (which determines the thermalization time) you can arrange for even a rapidly rotating sphere to behave like the static nonconducting sphere and a slowly rotating sphere to behave like a superconducting sphere (zero heat capacity and very large heat capacity, respectively). Worse, you’ve changed the geometry of the axial poles (presumed to lie untilted w.r.t. the ecliptic still). Where before the entire day-night terminator was smoothly approaching T = 0 from the day side, now this is true only at the poles! The area of a polar band (for a given polar angle d\theta) is much smaller than the area of an equatorial band, and on top of that one now has a smeared out set of steady state temperatures that are all functions of azimuthal angle \phi and polar angle \theta, one that changes nonlinearly as you crank any of: Insolation, albedo, emissivity, \omega (angular velocity of rotation) and heat capacity of the surface.
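Here is a rough numerical sketch of that interplay for a single equatorial surface block; the heat capacity and rotation period are made-up illustrative values. Crank the heat capacity up or shorten the period and the day-night swing collapses toward the superconducting limit; do the opposite and it approaches the static nonconducting limit.

```python
# Sketch: one equatorial block on a rotating, laterally insulating sphere.
# C*dT/dt = S*max(0, cos(phi(t))) - sigma*T^4, with phi the local solar hour angle.
# Heat capacity C and the rotation period are assumed, illustrative values.
import math

SIGMA, S = 5.670374419e-8, 1361.0
C        = 2.0e5                      # assumed areal heat capacity, J m^-2 K^-1
period   = 86400.0                    # assumed rotation period, s
omega    = 2.0 * math.pi / period

dt, T = 10.0, 280.0                   # time step (s) and initial temperature guess (K)
temps = []
for step in range(int(20 * period / dt)):                     # spin up for 20 "days"
    insolation = S * max(0.0, math.cos(omega * step * dt))    # zero on the nightside
    T += dt * (insolation - SIGMA * T**4) / C                 # explicit Euler step
    temps.append(T)

last_day = temps[-int(period / dt):]
print(f"equatorial block: min {min(last_day):.0f} K, max {max(last_day):.0f} K, "
      f"mean {sum(last_day) / len(last_day):.0f} K")
```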
And we haven’t even got an atmosphere yet. Or water. But at least up to this point, one can solve for the temperature distribution T(\theta,\phi,\alpha,S,\epsilon,c) exactly, I think.
Furthermore, one can actually model something like water pretty well in this way. In fact, if we imagine covering the planet not with air but with a layer of water with a blackbody on the bottom and a thin layer of perfectly transparent saran wrap on top to prevent pesky old evaporation, the water becomes a contribution to the thermal ballast. It takes a lot longer to raise or lower the temperature of a layer of water a meter deep (given an imbalance between incoming and outgoing radiation) than it does to raise or lower the temperature of maybe the top centimeter or two of rock or dirt or sand. A lot longer.
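A quick order-of-magnitude check on “a lot longer”, using rough textbook values for density and specific heat (assumed, for illustration only):

```python
# Sketch: areal heat capacity (J per m^2 per K) of the two thermal ballasts compared in the text.
# Material properties are rough textbook values, assumed for illustration.
water = 1000.0 * 4186.0 * 1.0      # density * specific heat * depth: 1 m of water
rock  = 2700.0 *  800.0 * 0.02     # 2 cm of rock
print(f"water / rock heat capacity ratio ~ {water / rock:.0f}")   # on the order of 100
```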
Once one has a good feel for this, one could decorate the model with oceans and land bodies (but still prohibit lateral energy transfer and assume immediate vertical equilibration). One could let the water have the right albedo and freeze when it hits the right temperature. Then things get tough.
You have to add an atmosphere. Damn. You also have to let the ocean itself convect, and have density, and variable depth. And all of this on a rotating sphere where things (air masses) moving up deflect antispinward (relative to the surface), things moving down deflect spinward, things moving north deflect spinward (they’re going too fast) in the northern hemisphere, things moving south deflect antispinward, as a function of angle and speed and rotational velocity. Friggin’ coriolis force, deflects naval artillery and so on. And now we’re going to differentially heat the damn thing so that turbulence occurs everywhere on all available length scales, where we don’t even have some simple symmetry to the differential heating any more because we might as well have let a five year old throw paint at the sphere to mark out where the land masses are versus the oceans, or, better yet, given him some Tonka trucks and let him play in the spherical sandbox until he had a nice irregular surface and then filled the surface with water until it was 70% submerged or something.
Ow, my aching head. And note well — we still haven’t turned on a Greenhouse Effect! And I now have nothing like a heuristic for radiant emission cooling even in the ideal case, because it is quite literally distilled, fractionated by temperature and height even without CO_2 per se present at all. Clouds. Air with a nontrivial short wavelength scattering cross-section. Energy transfer galore.
And then, before we mess with CO_2, we have to take quantum mechanics and the incident spectrum into account, and start to look at the hitherto ignored details of the ground, air, and water. The air needs a lapse rate, which will vary with humidity and albedo and ground temperature and… The molecules in the air recoil when they scatter incoming photons, and if a collision with another air molecule occurs in the right time interval they will mutually absorb some or all of the energy instead of elastically scattering it, heating the air. A molecule can also absorb one wavelength and emit a cascade of photons at different wavelengths (depending on its spectrum).
Finally, one has to add in the GHGs, notably CO_2 (water is already there). They have the effect of intercepting the outgoing radiance from the (higher temperature) surface in some bands and transferring some of it to CO_2, where it is trapped until it diffuses to the top of the CO_2 column, where it is emitted at a cooler temperature. The total power going out is thus split up, with that pesky blackbody spectrum modulated so that different frequencies have different effective temperatures, in a way that is locally modulated by — nearly everything. The lapse rate. Moisture content. Clouds. Bulk transport of heat up or down via convection. Bulk transport of heat up or down via caged radiation in parts of the spectrum. And don’t forget sideways! Everything is now circulating, wind and surface evaporation are coupled, the equilibration time for the ocean has stretched from “commensurate with the rotational period” for shallow seas to a thousand years or more so that the ocean is never at equilibrium, it is always tugging surface temperatures one way or the other with substantial thermal ballast, heat deposited not today but over the last week, month, year, decade, century, millennium.
Yessir, a damn hard problem. Anybody who calls this settled science is out of their ever-loving mind. Note well that I still haven’t included solar magnetism or any serious modulation of solar irradiance, or even the axial tilt of the earth, which once again completely changes everything, because now the timescales at the poles become annual, and the north pole and south pole are not at all alike! Consider the enormous difference in their thermal ballast and oceanic heat transport and atmospheric heat transport!
A hard problem. But perhaps I’ll try to tackle it, if I have time, at least through the first few steps outlined above. At the very least I’d like to have a better idea of the direction of some of the first few build-a-bear steps on the average temperature (while the term “average temperature” has some meaning, that is before making the system chaotic).
rgb
Robert Brown says:
January 17, 2012 at 10:02 am
This seems to me to be a dicey premise to be using as input to your argument for isothermal equilibrium. The premise implies that, over a tiny altitude extent (something greater than the mean free path), the temperature does not change. In essence, you are assuming dT/dz = 0 over your slice. This assumption might be what your reasoning is extrapolating over all z, resulting in a lapse rate of 0 everywhere.
Robert Brown:
What willb said.
Not that I think there’d be a detectable temperature lapse rate at equilibrium. But I do find persuasive the argument, advanced in the Velasco et al. paper I mentioned above, that it would be non-zero (but negligible) in theory. Yes, it’s something of a how-many-angels-can-dance-on-the-head-of-a-pin type of distinction, but I would have thought a physics professor would find that important pedagogically.
Joe Postma says:
January 17, 2012 at 12:37 pm
Thanks. I think that exactly explains the problem. The DALR is just that – adiabatic. It explains the change in temperature with a change in altitude when no additional energy is added to the system.
However, as long as the maximum surface temperature is larger than the temperature of the air above it, additional energy will enter the atmosphere. The same argument applies as long as any layer of air is warmer than the layer above it – energy will move from warmer to colder and not follow the adiabatic (energy not changing) equations.
That does not describe the measured lapse rate. It does describe the temperature change of a parcel of air that rises because it is less dense than the surrounding air.
Willis Eschenbach says:
January 17, 2012 at 3:56 pm
And from this it follows that greenhouse gases cool the atmosphere. To be clear, without greenhouse gases the troposphere would not get cooler with increasing height.
From that, it immediately follows that increasing the amount of any greenhouse gas in the atmosphere will make the atmosphere cooler, not warmer as some people claim, as described in my 2009 paper – How Greenhouse Gases Work.
It does seem arrogant, doesn’t it? Still, I don’t think that fairly describes what some of us think. What I for one think is that the Velasco et al. physics-teachers-journal comment I mentioned above says something different from what you do: it says gravity does indeed impose a lapse rate at equilibrium. If so, might not other physics folks say something similar?
I don’t care what other physics folk say, not in this case. The number of people who have proposed violations of the laws of thermodynamics is legion, but the laws are never observed to be violated, are they?
However, I do teach this, so permit me to once again try to explain it in such a way that everybody understands it. This is a part of an answer I wrote to this same issue on Tallbloke’s blog:
Suppose you have a thermometer. A thermometer, recall, measures temperature. It does so on the basis of the zeroth law. You put the thermometer into thermal contact with (say) a fluid at some temperature and heat flows in or out of that fluid (presumed to have a lot more heat content than the variation so the thermometer itself isn’t changing the temperature much) until the two are in thermal equilibrium. If it is a mercury thermometer, the mercury expands or contracts thermally out of a reservoir and you can read the temperature off of a little scale on the side that basically transforms the volume of the mercury into a temperature. Understand? Suppose your thermometer reads 20C.
Now you move the thermometer somewhere else. You put it into a glass of water, or take it with you on a road trip and drive a few hours. You wait for it to come to equilibrium with its environment and check it, and again you see it read 20C. What does this mean?
Specifically, it means that the two systems (with well-defined local equilibrium temperatures) have the same temperature. If you put the thermometer into system A and it reads T_A, and put it into system B and it reads T_B, if T_A = T_B they have the same temperature. But what, exactly, does “have the same temperature” mean?
It means “if A and B are placed in thermal contact, they will be in mutual thermal equilibrium, specifically no net heat will flow from A to B or B to A.”
That’s the zeroth law. It defines the thermal equilibrium of two systems as the condition where no heat flows in between them, and establishes the transitivity of equilibrium:
Zeroth Law: If system A is in thermal equilibrium with system C, and system B is in thermal equilibrium with system C, then system A is in thermal equilibrium with system B.
This law is the basis of the thermometer, system C in my example. If we know the thermal properties of system C so that we can transform its equilibrium state into a linear scale of temperature (I will skip the historical process that led to not only managing this but establishing the absolute or kelvin temperature scale) we can make it into a thermometer, and use it to predict whether system A and B, brought into thermal contact, would exchange heat.
The first and second laws are also important in this process. In order for the systems in question to be in equilibrium, we have to manage energy flow into and out of them. The first law states that we can’t have heat flowing in or out (adiabatic) or work coming in or out, although it allows for a quasi-static progression through a set of states with the same temperature that takes in heat and transforms it directly into work (isothermal expansion). This kind of process can occur, but cannot be made into a cyclic process that “just” makes heat into work. The second law establishes (among other things) the direction of heat flow. It always flows from the hotter to the colder system when they are placed in thermal contact so that they can exchange heat. Heat can never flow from the colder to the hotter system, nor can a single system evolve in time into a thermal equilibrium with different temperatures at different places as long as heat exchange is possible between those places.
So now imagine a supposed column of air that has spontaneously separated into a low temperature at the top and a higher temperature at the bottom because gravity has compressed the fluid until it is in static mechanical equilibrium. From the zeroth law, this means that if you put your thermometer in the top it will read T_t, and if you put your thermometer in at the bottom it will read T_b, where T_b > T_t.
Now imagine taking a thermally insulated silver wire that is exposed at the top and the bottom so that it is in thermal contact with the fluid there and only there. Place it into the fluid vertically, so that the top of the wire is in contact with gas at T_t, the bottom of the wire is in contact with the fluid at T_b. What will happen? Well, now you’ve got a piece of silver (an excellent conductor of heat) with a temperature gradient across its ends. Obviously, heat will flow out of the fluid at the bottom and up to the fluid at the top, cooling the bottom, warming the top. You must now use your common sense and experience of the world to predict which of the following will be true:
a) Heat will flow forever — as fast as it is delivered to the top, gravity will somehow re-sort the energy so that it flows back to the bottom and keeps it warmer than the top, to be picked up by the silver and conducted back up to the top, to fall to the bottom, to be conducted to the top, to fall to the bottom…
b) Heat will flow until the top and bottom are, in fact, in thermal equilibrium. They were not in equilibrium before. Only when the temperature of the top and the temperature of the bottom are the same will heat stop flowing in the wire, and once that is established the system is truly in equilibrium.
In the latter case, of course, you don’t need the wire. The gas itself conducts heat. In general, you never need “a wire” within a system. If you imagine using your thermometer to measure the temperature of any two neighboring coarse grained chunks of fluid (big enough to internally be in “thermal equilibrium”, small enough to be considered differential chunks as far as calculus and secular changes in gravitational potential and so on are concerned), the only way that heat will not flow between the two chunks (that are in thermal contact) is if the thermometer reads the same thing when it is in contact with each chunk separately. Otherwise if you connect them with an imaginary silver wire (that really only represents the process of heat conduction), heat would flow.
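A small numerical sketch of that relaxation, in arbitrary illustrative units: two coarse-grained chunks coupled by a conductive link end up at the same temperature, with the heat flow dying away as the difference closes.

```python
# Sketch: two coarse-grained chunks exchanging heat through a conductive link ("the wire").
# dQ/dt = k*(T_bottom - T_top); each chunk has heat capacity C. Units are arbitrary.
C, k, dt = 1.0, 0.1, 0.01
T_bottom, T_top = 300.0, 250.0        # initial temperatures, K

for step in range(5000):
    q = k * (T_bottom - T_top) * dt   # heat conducted bottom -> top this step
    T_bottom -= q / C                 # bottom cools
    T_top    += q / C                 # top warms

print(f"T_bottom = {T_bottom:.2f} K, T_top = {T_top:.2f} K")   # both converge to 275 K
```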
That’s the importance of the zeroth law in this discussion. Thermal equilibrium is isothermal, period. Otherwise it literally contradicts the simplest and most ubiquitous of our experiences of heat — that it flows from hot to cold, that it only flows if things aren’t at the same temperature, that thermal equilibrium is transitive (so we can build devices that measure equilibrium and quantify it as a temperature), and — by direct implication — if any isolated system ever spontaneously separates into hot and cold sides (spontaneous implying that it is a stable process with a well-defined direction to the thermal gradient, e.g. bottom to top) one can build a perpetual motion machine of the second kind, one that converts thermal energy directly to work in a cyclic engine.
Yes, physicists are known to make mistakes in their thermodynamic reasoning that unwittingly lead them to conclusions that violate the laws of thermodynamics, especially physicists that don’t do thermo or stat mech in their research and physicists for whom their one real thermo course and prelims were long ago and far away who work in something else entirely. It’s easily done and I am not innocent of error in this sort of thing myself. Physics is not the easiest subject in the world to master and we are all mere mortals with ageing, beer-damaged brains.
However, some mistakes are more egregious than others, and any discussion that contains “thermal equilibrium” and “thermal gradient” in the same sentence describing a single system is not only wrong, it is so wrong that the utterer of the statement really does need to go back to basics and work on passing prelims again. Or worse, go back, pick up an introductory textbook on physics (such as Tipler and Mosca, or Halliday, Resnick and Walker) and work through thermo again.
Thermal equilibrium is isothermal. Thermal equilibrium is both defined and consistently observed to be isothermal. There is a law of thermodynamics that pretty much says “being in thermal equilibrium means having the same temperature throughout” in any system permitting internal thermal transport. It is easy to show that other laws of thermodynamics, in particular the law forbidding energy-conservative magic (perpetual motion machines of the second kind), would be broken if thermal equilibrium were not isothermal. There is an entire heuristic argument (Maxwell’s Demon) and conceptual algebraic argument (detailed balance) and detailed derivation and proof (statistical mechanics and the increase of entropy/second law) that all physics majors should have gone through explaining why spontaneous thermal separation does not and can not occur (fundamental answer: it can, it’s just almost infinitely improbable, as likely as all of the molecules of air in the room you are sitting in suddenly deciding to bounce just the right way to end up as a blob of liquid air off in a corner and leave you gasping in a vacuum).
Now, can we please, please stop asserting egregious violations of laws of physics as explanations for the warming of the globe? That isn’t being skeptical of CAGW, that is just being stupid, especially after the error is pointed out and carefully, rigorously explained.
Note well that this argument says nothing at all about non-equilibrium thermal separation. In fact, thermal separation is by definition only possible in non-equilibrium thermodynamics, when accompanied by heat flow or external work. The Earth is an open system, so sure, it can and does maintain a steady state (average) thermal gradient in its atmosphere but not because gravity is providing steady state work or energy. To explain a bottom to top thermal gradient that is maintained over time, one must be able to describe the sources and transport of energy that maintains it.
I continue to await a reasonable description of energy transport — absorption of actual heat input from somewhere (where “somewhere” must ultimately be “the Sun”) and the processes that lead to it being distributed in such a way as to produce net “warming” in N&Z. Simply invoking density and PV = NkT as an “explanation” does not do it, not at all.
rgb
And from this it follows that greenhouse gases cool the atmosphere. To be clear, without greenhouse gases the troposphere would not get cooler with increasing height.
From that, it immediately follows that increasing the amount of any greenhouse gas in the atmosphere will make the atmosphere cooler, not warmer as some people claim, as described in my 2009 paper – How Greenhouse Gases Work.
This is far from clear. First of all, cooler than what? What is your initial state from which you measure relative cooling vs warming? Second, how do you compute the average temperature? By volume? By volume weighted by mass density? The upper atmosphere ends up much cooler than it would be if the atmosphere were perfectly transparent, to be sure, but the bulk of the atmosphere is actually underneath all of that cold, and the bottom of the atmosphere — where we live, and snow falls, and summer rages — is in moderately good thermal contact with the surface and follows surface temperatures.
What greenhouse gases do is alter the pattern of the outgoing radiation that ultimately has to balance incoming radiation from following an approximate blackbody curve at the temperature of the surface to following a blackbody curve at the temperature of the surface in only part of the spectrum — the “water window” — while being modulated down to emission from the colder atmospheric greenhouse gases at a different, lower, temperature. In order to maintain overall thermal balance, the surface temperature has to be higher than it would otherwise be, because the cooler upper atmosphere emits less energy per unit time per unit “area” (above a given surface area of the Earth) than would have been emitted otherwise.
That’s all. Nothing complicated. True completely independent of any particular mechanism for heat transport from the surface to the upper atmosphere. It depends only on the atmosphere being optically thick in some bands, so that emission in those bands that actually escapes from the Earth occurs only from the cooler upper troposphere. No mention of “upwelling” or “downwelling” IR, no mention of convective or conductive heat transfer or mechanisms that establish the thermal gradient in the atmosphere in the first place.
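A minimal two-band sketch of that bookkeeping: split the outgoing power between surface-temperature emission in the window and colder emission in the opaque bands, then solve for the surface temperature that restores balance. The window fraction and the effective emission temperature below are assumed, illustrative numbers, not measured quantities.

```python
# Sketch: surface temperature required for radiative balance when part of the spectrum
# escapes from the surface (the "window") and the rest from a colder emitting layer.
# f_window and T_emit are assumed, illustrative values.
SIGMA    = 5.670374419e-8
absorbed = 240.0        # globally averaged absorbed solar, W m^-2 (roughly S/4 with albedo ~0.3)
f_window = 0.4          # assumed fraction of OLR escaping directly from the surface
T_emit   = 225.0        # assumed effective temperature of the opaque-band emission, K

# Balance: absorbed = f_window*sigma*T_s^4 + (1 - f_window)*sigma*T_emit^4
T_s = ((absorbed - (1.0 - f_window) * SIGMA * T_emit**4) / (f_window * SIGMA)) ** 0.25
print(f"surface temperature for balance: {T_s:.0f} K "
      f"(vs {(absorbed / SIGMA) ** 0.25:.0f} K if the whole spectrum escaped from the surface)")
```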
The surface, therefore, will always be warmer, and the air near the surface (in good thermal contact with the surface) is also likely to be warmer. How much air is warmer vs the amount that is cooler, how much heat is involved in the heat capacity of the atmosphere vs the heat that is involved in the heat capacity of the relevant part of the ground or oceans? I don’t know, but I do know that computing it or even estimating it from measurements is a nontrivial process, not something one is likely to be able to pronounce upon without a lot of quantitative work based on actual global data. It is a solution to a non-equilibrium heat flow problem with lots of complexity, and the answer might well depend in some detail on the heat capacity of the various components involved. If the Earth is modelled as a thin layer of blackened aluminum foil (very low heat capacity) covered with a thick atmosphere, you’ll very likely get a different answer than you would get if the Earth is modelled as a layer of water sitting on top of the blackened aluminum foil and covered with a thin atmosphere, where by thick and thin I’m referring to their relative heat capacity, not their optical thickness.
That’s why I think it is very important to state your premises when you talk about heating or cooling. Heating or cooling relative to what state of which model system? The ideal superconducting blackbody? A rotating Earth with no lateral heat transport or atmosphere (and with what surface heat capacity?) A rotating tipped Earth where one has to average over many annual cycles to get an “average temperature” of some sort to use as the basis? With what heat capacity? With an atmosphere or not? With lateral transport?
These are all of the reasons I’m a skeptic. The GHE is real, and contributes to the net warming of the Earth’s surface, but when one talks about responses to changes in greenhouse driving, one has to understand all of the other things that lead to heating or cooling, and define the reference model from which heating or cooling are measured. One also has to worry about whether one cares if the overall heat content of the system is lower if the temperature of the surface is higher, or vice versa!
rgb
This seems to me to be a dicey premise to be using as input to your argument for isothermal equilibrium. The premise implies that, over a tiny altitude extent (something greater than the mean free path), the temperature does not change. In essence, you are assuming dT/dz = 0 over your slice. This assumption might be what your reasoning is extrapolating over all z, resulting in a lapse rate of 0 everywhere.
Well then, feel free to build the perpetual motion machine of the second kind that is enabled by a steady state temperature differential that is not maintained by any external source of heat energy or work. Or, learn the laws of thermodynamics that make this not only impossible, but absurd.
rgb
Many thanks,
w.
You are quite welcome. Now, if only a few other people would take the time to read Caballero and work through it, or the time to go (back) to their intro physics textbooks and look at the thermodynamics section to understand what temperature is (Zeroth Law).
This whole discussion has inspired me to go ahead and set up my very first (very preliminary) draft of part III of my online physics textbook, with the first part devoted to thermodynamics. It still isn’t sufficient to help people answer all of these questions, but the arguments I’m putting into this and the other threads are — they are classic arguments used in all of the intro texts.
Big hint — any time your answer permits you to build a perpetual motion machine, it is wrong, and you can build a heat engine that runs between any temperature differential. If any process spontaneously creates a temperature differential (without external work or energy being exchanged), it permits a perpetual motion machine of the second kind to be built, and incidentally violates the direction of entropy (second law): the hot reservoir gains \Delta Q/T_H of entropy, which is strictly less than the \Delta Q/T_C that the cold reservoir loses, so that the entropy of the (isolated) system decreases.
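A one-line check of that bookkeeping, with assumed illustrative reservoir temperatures:

```python
# Sketch: entropy bookkeeping for a spontaneous hot/cold separation.
# A heat parcel dQ leaves the cold region (T_C) and arrives at the hot region (T_H).
dQ, T_H, T_C = 100.0, 300.0, 250.0            # J and K, assumed illustrative values
dS = dQ / T_H - dQ / T_C                      # gain of the hot side minus loss of the cold side
print(f"total entropy change = {dS:.3f} J/K")  # negative here: exactly what the second law forbids
```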
Hmmm, no it doesn’t. Never happens.
rgb
I am not sure there is a demon. Yes, the number of particles passing through an infinitesimal slice is equal in both directions; however, they enter into a volume with less kinetic energy than they just previously had… so they have to be cooler.
Dear Joe,
I know you took thermo if you are an astrophysicist. I know that you know the laws of thermodynamics. I know that you can understand that one can build a perpetual motion machine of the second kind if your assertion is true. We both know that — no we can’t. We can both compute the fact that the entropy of your hypothesized system decreases as thermal separation occurs.
Can we stop now? Go to your shelf, pick up a textbook with thermodynamics in it, and read about the zeroth law. Thermal equilibrium is the very definition of isothermal. Or do the actual textbook exercise and compute the detailed balance to convince yourself that there can be no thermal gradient at equilibrium.
rgb
=====================================
Robert Brown says:
January 19, 2012 at 5:22 am
Dear Joe,
I know you took thermo if you are an astrophysicist. I know that you know the laws of thermodynamics. I know that you can understand that one can build a perpetual motion machine of the second kind if your assertion is true. We both know that — no we can’t. We can both compute the fact that the entropy of your hypothesized system decreases as thermal separation occurs.
Can we stop now? Go to your shelf, pick up a textbook with thermodynamics in it, and read about the zeroth law. Thermal equilibrium is the very definition of isothermal. Or do the actual textbook exercise and compute the detailed balance to convince yourself that there can be no thermal gradient at equilibrium.
rgb
=======================================
Robert, this is beside the point. I tried explaining above that we’re not talking about a system in isothermal equilibrium. The atmosphere IS OBSERVED to have a lapse rate. If YOU think that the OBSERVED atmosphere means we can violate the 2nd Law, then go ahead and try it. I never said that, you did; so stop putting words in my mouth. We also have the fact that -g/Cp describes the observed temperature profile for dry air; that’s a fact, and we know the various reasons for why the exact quantitative value of -g/Cp is not always realized.
The atmosphere IS OBSERVED to change temperature with altitude. That is all I have been talking about, and connecting it to -g/Cp. I don’t know where this stuff about the observed atmosphere violating the laws of thermo is coming in from. It is also mostly static in its distribution – only the lower atmosphere changes temperature, but between 3 and 20 km the gradient is static.
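For reference, -g/Cp evaluates to roughly 9.8 K per kilometre with standard textbook values for dry air:

```python
# Sketch: magnitude of the dry adiabatic lapse rate, Gamma = g / c_p, for dry air.
g   = 9.81       # m s^-2
c_p = 1004.0     # specific heat of dry air at constant pressure, J kg^-1 K^-1
gamma = g / c_p                                                # K per metre
print(f"dry adiabatic lapse rate ~ {gamma * 1000:.1f} K/km")   # ~9.8 K/km
```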
So, let’s stop introducing some ideal system which has nothing to do with what we’re talking about and prevents us from actually understanding the first thing about it. If the ideal definitions of thermal equilibrium means that the atmosphere must be isothermal, then those definitions are not the physics we must refer to since they do not match the observed system.
It sounds as if you are insisting that if there’s non-isothermicity in the atmosphere with altitude we could build a perpetual motion machine. Well, the atmosphere IS non-isothermal with altitude, so you’re going to have to figure that one out since it was you, not I, who came up with the idea you could build such a machine in this case. Besides, maybe one could work as you’re trying to build it, who knows, but if it did I am sure the energy would be coming from somewhere for it. It is over-unity efficiency that would be a problem, not that it could run forever given a system which has a continuous supply of new energy coming in from the Sun. Aren’t there rumours that Tesla figured out a “free-energy” device of some sort using static charge in the atmosphere? Anyway, this is all not relevant.
I’d like to point out yet another oddity of standard GHE theory. There’s supposed to be back-radiation from IR-emitting molecules, predominantly CO2, that cause 33C of additional heating. That’s the basic GHE theory.
So tell me then: on a spectrometer plot taken from the surface of the Earth and pointing upwards, or even from one taken from above the Earth and looking down – where is the emission line?
If CO2 is radiating all this energy and by definition this emission has to be spectral, then where is the huge & incredibly bright emission line introducing an additional 150 W/m2 into the surface, and which should also be exiting the TOA? The ENTIRE output spectrum of the Earth only comes out to 240 W/m2, so this additional 150 W/m2 from spectral emission must be huge! Where is it?
It doesn’t exist.
Instead, where it could exist, is a huge hole in the spectrum, a LACK of emitted power. It HAS TO EMIT to be said to cause heating by radiative emission in the first place. Yet where it could emit, it doesn’t exist. And at the bottom of the notch where CO2 should be emitting to add all this extra power is a smooth blackbody curve corresponding to something like -80C. There is a very small emission peak right at 15um, but it’s barely worth noting.
So fine, let’s pretend to go with the standard theory of GHG back-emission. And so I’ll ask: where is the emission?
Pointing an IR sensor at the sky and it telling you the temperature, converted to some power units, is simply taking a temperature reading! It has NO relevance to the huge emission line we should see from CO2 & GHG’s causing all this heating. That’s a fraudulent interpretation of the measurement! The IR sensor is measuring a rough black-body that has LACK of emission flux at GHG wavelengths.
It might just be that simple. Fine: radiation causes heating, we all know that. So show me the radiation from CO2 then. Oh it doesn’t exist? Well then what the heck…non-existent, non-observable spectral emission from GHG’s causes heating. Wonderful.
Now someone might try to back-track and say that the downward IR gets all absorbed by the time it reaches the ground, and because it is all absorbed this is why it is causing heating. But wait, it can’t very well be said to be heating the surface then, can it, if it never GETS to the surface. And additionally, you can’t hide at the bottom of the atmosphere anyway – the TOA is free to space and there’s NO reason we shouldn’t see the 150 W/m2 of GHG spectral emission there. But it’s not there either, is it; except for a tiny little pip at 15um with maybe a couple of Watts in it.
Someone might also try to back-track and say that because all the 15um radiation is absorbed and you don’t see it, that’s why it causes heating. But that’s still inconsistent with the spectral reading at the TOA – it should still be seen at the TOA – and it also implies that LACK of emitted radiation is what causes heating. But that’s what non-GHG’s implicitly do in the first place – not radiate and therefore trap heat – and the whole GHG theory is based on the idea that GHG’s spectrally radiate.
This whole theory is shot; full of holes. With this OP and the comments here in it, and the other one by Robert Brown and the comments in that one, it is clear to anyone reading that GHG Theory is dead. I still think my treatises give a good explanation of why it is dead.
http://www.tech-know.eu/uploads/Understanding_the_Atmosphere_Effect.pdf
http://principia-scientific.org/publications/The_Model_Atmosphere.pdf
http://principia-scientific.org/publications/Copernicus_Meets_the_Greenhouse_Effect.pdf
Read only the last of those links if you want – it is a very short paper with a succinct summary of the paradigm shift.
Of course, I give thanks to the book that started it all.
The isothermal/adiabatic distribution for an isolated ideal gas in a gravitational field has long been debated.
For the isothermal distribution we have Maxwell, Boltzmann and Clausius.
For the adiabatic distribution we have Loschmidt, Laplace and Lagrange.
The smart money must be with the isothermal advocates, but I would not regard this as a debate which was settled and of historical interest only.
Clausius’ clincher argument, that a perpetual motion machine would be possible given the adiabatic distribution, turns out to be very hard to prove with real components, given the 9.8 K/km scale.
There has been no experiment to settle the matter!
Here for instance is a member of the physics department of the University of California making a very up to date case for the adiabatic distribution.
http://arxiv.org/PS_cache/arxiv/pdf/0812/0812.4990v3.pdf
=======================
This whole theory is shot; full of holes. With this OP and the comments here in it, and the other one by Robert Brown and the comments in that one, it is clear to anyone reading that GHG Theory is dead. I still think my treatises give a good explanation of why it is dead.
========================
I meant this OP by Robert and the other one by Willis Eschenbach, and the comments therein.
====================================
Robert Brown says:
January 19, 2012 at 4:19 am
The Earth is an open system, so sure, it can and does maintain a steady state (average) thermal gradient in its atmosphere but not because gravity is providing steady state work or energy. To explain a bottom to top thermal gradient that is maintained over time, one must be able to describe the sources and transport of energy that maintains it.
rgb
====================================
Alright yes, I see. My point is that the thermal gradient exists and is described for the most part by -g/Cp. That equation does not actually require thermal equilibrium to work; it merely requires that the net change of energy in the system is zero – and that definition implies nothing about the thermal distribution. So that’s an important distinction. Just like the example of a metal bar heated and cooled at its two ends: the net energy coming into the bar is equal to that going out, so there is “equilibrium” in THAT sense, but that is not actually thermal equilibrium in the strict sense of isothermicity. There will be a temperature distribution down the length of the bar. Static, no change in total internal energy, but not in thermal equilibrium within itself.
Now, the next part:
==================================
Robert Brown says:
January 19, 2012 at 4:19 am
I continue to await a reasonable description of energy transport — absorption of actual heat input from somewhere (where “somewhere” must ultimately be “the Sun”) and the processes that lead to it being distributed in such a way as to produce net “warming” in N&Z. Simply invoking density and PV = NkT as an “explanation” does not do it, not at all.
==================================
Yes I do agree with that. However, I would throw away the assumption everyone makes that there NEEDS to be “net additional warming”. As I explained in my papers I’ve linked to several times, the real-time power flux into the Earth-system actually has a temperature value of +121C at maximum, or +30C on average given a hemisphere & albedo. N&Z use an averaging approach that leads to an input temperature even smaller than the mistaken P/4 value of -18C.
I do agree that it doesn’t make sense to have gravity acting as a free-energy source. I understand why you’re bringing that up now and I struggled solving that myself early on. You (Robert) have my obedience in that regard.
And so, what you request:
“I continue to await a reasonable description of energy transport — absorption of actual heat input from somewhere and the processes that lead to it being distributed”
is exactly what I am attempting with the heat-flow differential equation that I am developing, which models the inputs and outputs in real time, with real-time – non-diluted by averaging – values. As I said, you don’t need to model the intricate workings of a capacitor to be able to model the voltage in an R-C circuit – you just need the basic gross values for the parameters describing the capacitor and the resistor. This is analogous to temperature (voltage) with mass (resistance) and thermal capacity (capacitance). The equation is analogous but it doesn’t “depend” on the analogy, as it were.
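A minimal sketch of that kind of lumped-parameter analogy, with assumed stand-in values (nothing below is taken from the linked papers): a capacitor voltage relaxes toward its source through RC exactly as a temperature relaxes toward a forcing with some thermal time constant.

```python
# Sketch: the R-C analogy described above, with assumed illustrative values.
# Voltage relaxes toward the source with time constant R*C exactly as temperature
# relaxes toward a forcing temperature with time constant tau.
def relax(x, target, tau, dt):
    """One explicit Euler step of dx/dt = (target - x) / tau."""
    return x + dt * (target - x) / tau

R, C_elec = 1.0e3, 1.0e-3         # ohms, farads -> RC = 1 s
tau_heat  = 4.0e4                 # assumed thermal time constant, s

V, T = 0.0, 250.0
for _ in range(10000):
    V = relax(V, 5.0, R * C_elec, 0.001)     # capacitor charging toward 5 V
    T = relax(T, 290.0, tau_heat, 40.0)      # surface relaxing toward a 290 K forcing

print(f"V -> {V:.2f} V,  T -> {T:.1f} K")    # both approach their targets exponentially
```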
And what I have found with such an equation is that we have to appreciate the average ground temperature is stable and at around +5C all over the planet. You have to include that. You can’t just say that the ground temperature is zero Kelvin (or 2.3K) and as such has no effect on the surface. Changing the level of the plateau upon which the solar insolation varies has a HUGE effect on the resulting temperature balances at the surface.
Joe Postma says:
January 19, 2012 at 7:42 am
I’d like to point out yet another oddity of standard GHE theory.
Mr. Postma, here is another oddity that I find. The standard radiative heat transfer equation says that if T1=T2 then Q/A = ZERO. But somehow in GHE theory if T1>T2 then T2 will transfer energy back to T1.
===========================================
Bryan says:
January 19, 2012 at 8:00 am
The isothermal/adiabatic distribution for an isolated ideal gas in a gravitational field has long been debated.
For the isothermal distribution we have Maxwell, Boltzmann and Clausius.
For the adiabatic distribution we have Loschmidt, Laplace and Lagrange.
The smart money must be with the isothermal advocates, but I would not regard this as a debate which was settled and of historical interest only.
Clausius’ clincher argument, that a perpetual motion machine would be possible given the adiabatic distribution, turns out to be very hard to prove with real components, given the 9.8 K/km scale.
There has been no experiment to settle the matter!
Here for instance is a member of the physics department of the University of California making a very up to date case for the adiabatic distribution.
http://arxiv.org/PS_cache/arxiv/pdf/0812/0812.4990v3.pdf
=============================================
Yes, well, the problem we’ve just identified, with Robert and me going back and forth, is realizing that the usual statement leading into the derivation of -g/Cp of
“At thermal equilibrium the change in energy of the system is zero”
is incorrect. Functionally what we are seeking, and defining in accordance with reality, is that the net change of energy in the system is zero. There’s nothing wrong with that part of it. What is wrong in the leading statement is that this corresponds to thermal equilibrium.
The condition of dU = 0 does not actually have to correspond to thermal equilibrium as defined in the strict thermodynamic sense of isothermicity. Robert is quite correct in having pointed out what he has been. The condition of dU = 0 simply states, at most, that you might expect a static temperature distribution in the system. In that regard, the equation (resulting in -g/Cp) does work quite nicely, doesn’t it.
So, this is actually a MAJOR development, in my opinion. We have all just participated in correcting and improving a standard definition and starting point of a common analysis. We now have a bridge that the isothermal people and the adiabatic people can meet upon. And to be fair, for the purpose of understanding, not for blame, we understand that the initial starting point of the adiabatic “group” was incorrect: it was incorrect to say that dU = 0 corresponds to thermal equilibrium. That condition actually only implies “stasis” within the system, but the system is free to have a thermal distribution since energy is continuously moving through it. This latter part is what the isothermal “group” didn’t catch on to.
It does not matter if the non-GHG atmosphere is isothermal in the hypothetical example you give, because the actual data for the Moon, which has no atmosphere, is 100 K below what S-B predicts it should be. The atmospheric pressure must raise the near-surface temperature by 100 K before we even consider the greenhouse effect. It is not therefore radiating more energy to space than it receives from the Sun. The surface can then radiate all the shortwave radiation back into space as longwave radiation.
@mkelly
Indeed!
mkelly says:
You have had this explained to you many, many times in many, many threads and yet you persist in spreading such falsehoods! The heat flow, that is the net flow of energy, is from the warmer planet (T1) to the colder atmosphere (T2). However, the amount of heat flow that occurs depends on T2 as well as T1. The way this comes about is because radiative energy is transferred in both directions. However, the radiative transfer from the planet to the atmosphere is larger than the transfer from the atmosphere to the planet…and hence, the net radiative energy flow, which we call the heat flow, is from the planet to the atmosphere.
This field is the only field where it is at all controversial that the rate of heat flow between objects at a temperature T1 and T2 with T1 > T2 depends on T2 as well as T1. And, the reason it is controversial is that some people like you would apparently prefer to believe pseudoscientific nonsense over real science.
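A small sketch of that two-way bookkeeping, with assumed illustrative temperatures: both directional fluxes are computed, the net flow is from the warmer to the colder, it vanishes when T1 = T2, and its size depends on both temperatures.

```python
# Sketch: two-way radiative exchange between blackbody surfaces at T1 and T2.
SIGMA = 5.670374419e-8

def net_flux(T1, T2):
    """Net radiative flux from surface 1 to surface 2 (W m^-2), blackbody idealization."""
    return SIGMA * T1**4 - SIGMA * T2**4

print(net_flux(288.0, 255.0))   # positive: net heat flows from the warmer surface
print(net_flux(288.0, 288.0))   # zero when the temperatures are equal
print(net_flux(288.0, 210.0))   # larger: the net flow depends on T2 as well as T1
```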
@Joe Kirklin Postma ( http://wattsupwiththat.com/2012/01/12/earths-baseline-black-body-model-a-damn-hard-problem/#comment-869426 ):
Dewitt Payne has provided you with the answer where you have asked the same thing in another thread: http://wattsupwiththat.com/2012/01/13/a-matter-of-some-gravity/#comment-869453
It could be that the two sources of heat, the surface temperature and the heating caused by atmospheric pressure, are in equilibrium at the S-B predicted temperature, and therefore no temperature change takes place between the surface and the atmosphere.
Yes I saw that Joel. I saw that several of you agreed that there is no spectral radiation from GHG’s. Thanks for the back-link.
The isothermal/adiabatic distribution for an isolated ideal gas (no heat enters or leaves the gas) in a gravitational field has long been debated.
The outcome either way, though interesting, has no relevance to the greenhouse theory as far as I can see.
Joe Postma says:
That is not what was agreed.
Robert;
That 2009 paper has given me, ever since about early 2010, a solid long-term awareness of the potent interactions that keep the tropopause paused. The mechanism of “handing off” between CO2 and H2O at the stratosphere boundary is a powerful insight. As is the heat-pipe-like functioning of the water cycle, using latent heat energy transfers to get said energy up to the cloud tops, whence it is “dumped” upwards. Thank you again.
I’d recommended it many times to others, though its impact hasn’t been all it should have been. Perhaps with this backing, its time has come!