Earth's baseline black-body model – "a damn hard problem"

The Earth only has an absorbing area equal to a two-dimensional disk, rather than the surface of a sphere.

By Robert G. Brown, Duke University (elevated from a WUWT comment)

I spent what little of last night that I semi-slept in a learning-dream state chewing over Caballero’s book and radiative transfer, and came to two insights. First, the baseline black-body model (that leads to T_b = 255K) is physically terrible as a baseline. It treats the planet in question as a nonrotating superconductor of heat with no heat capacity. The reason it is terrible is that it is absolutely incorrect to ascribe 33K as even an estimate for the “greenhouse warming” relative to this baseline: it is a completely nonphysical baseline, so the 33K relative to it is meaningless, and it mixes heating and cooling effects that have absolutely nothing to do with the greenhouse effect. More on that later.

I also understand the greenhouse effect itself much better. I may write this up in my own words, since I don’t like some of Caballero’s notation and think that the presentation can be simplified and made more illustrative. I’m also thinking of using it to make a “build-a-model” kit, sort of like the “build-a-bear” stores in the malls.

Start with a nonrotating superconducting sphere, zero albedo, unit emissivity, perfect blackbody radiation from each point on the sphere. What’s the mean temperature?
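
For concreteness, here is a minimal sketch (Python) of that first answer; the solar constant and albedo values below are assumed round numbers, not calibrated inputs:

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S = 1361.0         # solar constant at 1 AU, W/m^2 (assumed)

def t_superconducting(albedo=0.0):
    # Absorbed power pi*R^2*(1-albedo)*S is spread instantly and uniformly
    # over the full 4*pi*R^2 surface, which then radiates as a blackbody.
    return ((1.0 - albedo) * S / (4.0 * SIGMA)) ** 0.25

print(t_superconducting(0.0))   # ~278 K for zero albedo
print(t_superconducting(0.3))   # ~255 K, the familiar T_b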

Now make the non-rotating sphere perfectly non-conducting, so that every part of the surface has to be in radiative balance. What’s the average temperature now? This is a better model for the moon than the former, surely, although still not good enough. Let’s improve it.

Now make the surface have some thermalized heat capacity — make it heat superconducting, but only in the vertical direction, and presume a mass shell of some thickness that has some reasonable specific heat. This changes nothing from the previous result, until we make the sphere rotate. Oooo, yet another average (surface) temperature, this time the spherical average of a distribution that depends on latitude, with the highest temperatures dayside near the equator sometime after “noon” (lagged, because now it takes time to raise the temperature of each block as the insolation exceeds the blackbody loss, and time for it to cool as the blackbody loss exceeds the insolation), and the surface is never at a constant temperature anywhere but at the poles (no axial tilt, of course). This is probably a very decent model for the moon, once one adds back in an albedo (effectively scaling down the fraction of the incoming power that has to be thermally balanced).
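
A minimal sketch of this third step for a single equatorial surface element, stepping C dT/dt = (1 - albedo)*S*max(0, cos(omega*t)) - sigma*T^4 through many rotations; the areal heat capacity C and the 24-hour day below are placeholder assumptions, there to be varied:

import math

SIGMA, S, ALBEDO = 5.670e-8, 1361.0, 0.0
C = 1.0e5                        # areal heat capacity of the thermal ballast, J/m^2/K (assumed)
OMEGA = 2.0 * math.pi / 86400.0  # rotation rate, rad/s (24 h day, no axial tilt)

def daily_cycle(T0=255.0, days=50, dt=60.0):
    """Forward-Euler integration of one equatorial element; returns the last simulated day."""
    T, history = T0, []
    for step in range(int(days * 86400 / dt)):
        insolation = (1.0 - ALBEDO) * S * max(0.0, math.cos(OMEGA * step * dt))
        T += dt * (insolation - SIGMA * T**4) / C
        history.append(T)
    return history[-int(86400 / dt):]

cycle = daily_cycle()
print(min(cycle), max(cycle), sum(cycle) / len(cycle))  # day-night swing and its mean; now vary C and OMEGA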

One can for each of these changes actually compute the exact parametric temperature distribution as a function of spherical angle and radius, and (by integrating) compute the change in e.g. the average temperature from the superconducting perfect black body assumption. Going from superconducting planet to local detailed balance but otherwise perfectly insulating planet (nonrotating) simply drops the nightside temperature for exactly 1/2 the sphere to your choice of 3K or (easier to idealize) 0K after a very long time. This is bounded from below, independent of solar irradiance or albedo (or for that matter, emissivity). The dayside temperature, on the other hand, has a polar distribution with a pole facing the sun, and varies nonlinearly with irradiance, albedo, and (if you choose to vary it) emissivity.

That pesky T^4 makes everything complicated! I hesitate to even try to assign the sign of the change in average temperature going from the first model to the second! Every time I think that I have a good heuristic argument for saying that it should be lower, a little voice tells me — T^4 — better do the damn integral, because the temperature at the terminator has to go smoothly to zero from the dayside and there’s a lot of low-irradiance (and hence low-temperature) area out there where the sun is at five o’clock, even for zero albedo and unit emissivity! The only easy part is that to obtain the spherical average we can just take the dayside average and divide by two…
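
For what it is worth, the damn integral for the nonrotating, nonconducting case can be sketched directly (zero albedo, unit emissivity, nightside idealized to 0 K are assumed below), and it does settle the sign:

import math

SIGMA, S = 5.670e-8, 1361.0
T_ss = (S / SIGMA) ** 0.25          # subsolar-point temperature, ~394 K

# Local radiative balance on the dayside gives T(theta) = T_ss * cos(theta)**0.25,
# with theta the angle from the subsolar point; the nightside sits at ~0 K.
N = 100000
dth = (math.pi / 2) / N
day_avg = sum(T_ss * math.cos((i + 0.5) * dth) ** 0.25 * math.sin((i + 0.5) * dth) * dth
              for i in range(N))     # area-weighted dayside mean (the weight sin(theta)d(theta) integrates to 1)
print(day_avg)                       # ~315 K; the closed form is (4/5)*T_ss
print(day_avg / 2.0)                 # whole-sphere mean, ~157 K, versus ~278 K for the superconducting sphere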

I’m not even happy with the sign for the rotating sphere, as this depends on the interplay between the time required to heat the thermal ballast (given the difference between insolation and outgoing radiation) and the rate of rotation. Rotate at infinite speed and you are back at the superconducting sphere. Rotate at zero speed and you’re at the static nonconducting sphere. Rotate in between and — damn — now by varying only the magnitude of the thermal ballast (which determines the thermalization time) you can arrange for even a rapidly rotating sphere to behave like the static nonconducting sphere and a slowly rotating sphere to behave like a superconducting sphere (zero heat capacity and very large heat capacity, respectively). Worse, you’ve changed the geometry of the axial poles (presumed to lie untilted w.r.t. the ecliptic still). Where before the entire day-night terminator was smoothly approaching T = 0 from the day side, now this is true only at the poles! The area element near a pole (for a given polar angle d\theta) is much smaller than the corresponding band near the equator, and on top of that one now has a smeared-out set of steady-state temperatures that are all functions of azimuthal angle \phi and polar angle \theta, and that change nonlinearly as you crank any of: insolation, albedo, emissivity, \omega (angular velocity of rotation), and heat capacity of the surface.

And we haven’t even got an atmosphere yet. Or water. But at least up to this point, one can solve for the temperature distribution T(\theta,\phi,\alpha,S,\epsilon,c) exactly, I think.

Furthermore, one can actually model something like water pretty well in this way. In fact, if we imagine covering the planet not with air but with a layer of water with a blackbody on the bottom and a thin layer of perfectly transparent saran wrap on top to prevent pesky old evaporation, the water becomes a contribution to the thermal ballast. It takes a lot longer to raise or lower the temperature of a layer of water a meter deep (given an imbalance between incoming and outgoing radiation) than it does to raise or lower the temperature of maybe the top centimeter or two of rock or dirt or sand. A lot longer.
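
Rough numbers for “a lot longer” (the layer depths, heat capacities, and the nominal imbalance below are assumed round values):

water = 1.00 * 4.2e6   # areal heat capacity of a 1 m water column, J/m^2/K
rock  = 0.02 * 2.0e6   # ~2 cm of rock/dirt/sand, J/m^2/K
imbalance = 100.0      # nominal radiative imbalance, W/m^2
for name, c in (("1 m of water", water), ("2 cm of rock", rock)):
    print(name, round(c / imbalance / 3600.0, 2), "hours per kelvin of warming")
# ~11.7 h/K for the water layer versus ~0.1 h/K for the rock: roughly a factor of a hundred in ballast.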

Once one has a good feel for this, one could decorate the model with oceans and land bodies (but still prohibit lateral energy transfer and assume immediate vertical equilibration). One could let the water have the right albedo and freeze when it hits the right temperature. Then things get tough.

You have to add an atmosphere. Damn. You also have to let the ocean itself convect, and have density, and variable depth. And all of this on a rotating sphere where things (air masses) moving up deflect antispinward (relative to the surface), things moving down deflect spinward, things moving north deflect spinward (they’re going too fast) in the northern hemisphere, things moving south deflect antispinward, as a function of angle and speed and rotational velocity. Friggin’ Coriolis force, deflects naval artillery and so on. And now we’re going to differentially heat the damn thing so that turbulence occurs everywhere on all available length scales, where we don’t even have some simple symmetry to the differential heating any more because we might as well have let a five-year-old throw paint at the sphere to mark out where the land masses are versus the oceans, or better yet given him some Tonka trucks and let him play in the spherical sandbox until he had a nice irregular surface, and then filled the surface with water until it was 70% submerged or something.

Ow, my aching head. And note well — we still haven’t turned on a Greenhouse Effect! And I now have nothing like a heuristic for radiant emission cooling even in the ideal case, because the outgoing radiation is quite literally distilled, fractionated by temperature and height, even without CO_2 per se present at all. Clouds. Air with a nontrivial short-wavelength scattering cross-section. Energy transfer galore.

And then, before we mess with CO_2, we have to take quantum mechanics and the incident spectrum into account, and start to look at the hitherto ignored details of the ground, air, and water. The air needs a lapse rate, which will vary with humidity and albedo and ground temperature and… The molecules in the air recoil when they scatter incoming photons, and if a collision with another air molecule occurs in the right time interval they will mutually absorb some or all of the energy instead of elastically scattering it, heating the air. A molecule can also absorb one wavelength and emit a cascade of photons at different wavelengths (depending on its spectrum).

Finally, one has to add in the GHGs, notably CO_2 (water is already there). They have the effect of intercepting part of the outgoing radiance from the (higher temperature) surface in some bands, transferring it to the CO_2 where it is trapped until it diffuses to the top of the CO_2 column, where it is emitted at a cooler temperature. The total power going out is thus split up, with that pesky blackbody spectrum modulated so that different frequencies have different effective temperatures, in a way that is locally modulated by — nearly everything. The lapse rate. Moisture content. Clouds. Bulk transport of heat up or down via convection. Bulk transport of heat up or down via caged radiation in parts of the spectrum. And don’t forget sideways! Everything is now circulating, wind and surface evaporation are coupled, and the equilibration time for the ocean has stretched from “commensurate with the rotational period” for shallow seas to a thousand years or more, so that the ocean is never at equilibrium; it is always tugging surface temperatures one way or the other with substantial thermal ballast, heat deposited not today but over the last week, month, year, decade, century, millennium.
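
A crude two-temperature cartoon of that split, with a fraction f of the outgoing spectrum emitted from the cold top of the CO_2 column at T_top instead of from the surface at T_s (the numbers below, a 288 K surface, a 220 K emission level, and a 40% opaque fraction, are purely illustrative):

SIGMA = 5.670e-8

def outgoing(T_s, T_top, f):
    # Opaque bands radiate to space from the colder emission level; the
    # transparent "window" radiates directly from the surface.
    return (1.0 - f) * SIGMA * T_s**4 + f * SIGMA * T_top**4

print(outgoing(288.0, 220.0, 0.0))  # ~390 W/m^2 with no opaque bands
print(outgoing(288.0, 220.0, 0.4))  # ~287 W/m^2 with 40% of the spectrum emitted from the cold level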

Yessir, a damn hard problem. Anybody who calls this settled science is out of their ever-loving mind. Note well that I still haven’t included solar magnetism or any serious modulation of solar irradiance, or even the axial tilt of the earth, which once again completely changes everything, because now the timescales at the poles become annual, and the north pole and south pole are not at all alike! Consider the enormous difference in their thermal ballast and oceanic heat transport and atmospheric heat transport!

A hard problem. But perhaps I’ll try to tackle it, if I have time, at least through the first few steps outlined above. At the very least I’d like to have a better idea of the direction of some of the first few build-a-bear steps on the average temperature (while the term “average temperature” still has some meaning, that is, before making the system chaotic).

rgb

446 Comments
KevinK
January 12, 2012 6:02 pm

Dr. Brown;
VERY WELL PUT, thank you.
In the engineering fields we consider these “intractable” analytic problems, meaning that we cannot ever hope to solve them analytically. So, to make forward progress we turn to alternative approaches.
As a simple example;
You have a complex Earth orbiting satellite with mechanical, optical, electrical and other components that are held together with an assortment of nuts/bolts/glue, etc. (sorry, no rubber bands allowed).
You wish to launch it into space on top of a big roaring rocket that is going to shake Holy H—l out of it before it reaches the vibration free nearly zero-G altitude where it will operate.
How do you ascertain that it will survive the launch ?
Well here is how;
You write down everything you and others have learned from launching other things into space.
You do not repeat their mistakes, and copy what worked for them (if applicable)
Then, (get this all you true believers in the value of computer models); you make a test copy (aka a Qualification Model) and you SHAKE IT EVEN HARDER THAN the rocket ever could.
Yes indeed, that’s how it’s done: imagine a big honking table that you bolt your satellite to, and the table shakes Holy H—l out of it and then some.
If it survives (about 95% of the time) and if you make a Flight Model using exactly the same extremely well documented assembly processes it is “very likely” that your satellite will make it into space and operate as advertised. But some still fail (lately about 5%, historically less than 1%).
Nobody attempts to analytically model the strength of all those mechanical interfaces when subjected to vibration, it would take forever and nobody would believe the “heritage” of your computer models (i.e. has it worked in the past ?) anyway.
Ok, as another example, how can you determine how long your satellite will operate once in orbit ?
Satellites operate in a very rough environment where radiation (those nasty cosmic rays) strike semiconductor junctions and render them useless, atomic oxygen rips material away from optical coatings and the occasional micrometeorite slices through a critical power supply wire.
Well, you could just make lots of satellites with different designs and wait a few years to see how long they last while on orbit. But given the cost of satellites (50 million and up to lots and lots of millions of dollars) this would be wasteful. Also, since modern satellites often last 5-10 plus years (Chandra was launched in 1999 and is still functioning) this would take decades to get useful information.
So we use (drumroll please) COMPUTER MODELS, these models are fairly simple and use empirically derived data (we put a satellite up with the Space Shuttle to do nothing but expose different materials to the space environment for years, then retrieved it to find out which lasted the longest) to predict how long a satellite designed with certain materials will last. We also predict how often a micrometeorite can be reasonably expected to take out an important electrical cable.
So we come up with an analytical prediction (i.e. a computer model) to predict the lifetime of the satellite. These predictions have been aligning pretty well with the performance of actual space vehicles, but we still get a mission failure every once in a while.
Sorry this has become quite a bit of a post, but the point is, you need to carefully assess when a computer model helps and when it just fools you.
The climate models which have been programmed to explicitly behave in correspondence with a hypothesis (“The Greenhouse Effect”) are just a silly and expensive attempt to beat an intractable analytical problem into submission with more computer power. I believe they are useless.
Cheers, Kevin.

LazyTeenager
January 12, 2012 6:03 pm

So Robert reckons he can work out the whole of atmospheric physics in one night and then discovers it is complicated.
People smarter than Robert have been working on this for a century or so, so I figure they will be able to tell him it is even more complicated than what he has discovered so far.
But I get the uncomfortable feeling this argument is going to devolve into a bit of logic that looks like this:
1. We do not understand everything about how blackbody radiation determines the baseline temperature.
2. Therefore we know nothing about how blackbody radiation determines the baseline temperature.
3. Therefore we know with absolute certainty that the blackbody radiation determines the baseline temperature to be 25C.
4. Therefore the greenhouse effect does not exist.
Please oh please prove me wrong.

Bill Illis
January 12, 2012 6:03 pm

Robert’s post mentions it takes “time” to raise the temperature of the components of the Earth system.
I thought I would point out some of those lags.
– daytime temperatures peak 3.5 hours after the peak of the solar radiation.
– water evaporates from the surface (taking some heat with it) and it eventually falls back to Earth as rain or snow (after dumping off its heat in the atmosphere) anywhere from 2 hours to an average 9 days later (side question: what is the temperature of the layer where rain or snow forms versus the temperature of the surface it evaporates from?).
– land temperatures lag behind the solar cycle by an average 34 days (the warmest day of the year is July 25th and solar radiation peaks about June 21) (the accumulation rate is about 0.5 W/m2 per day from the coldest day of the year, January 25th, to the warmest day of the year, July 25th).
– lakes and rivers peak about 45 days behind the seasonal cycle.
– ocean temperatures lag behind solar radiation by about 45 to 70 days.
– it takes 1500 years to warm up the entire ocean versus the Earth Surface temperature.
– it takes about 10,000 years to completely melt out a continental ice age.
– Venus is so much hotter than it really should be.
In other words, the components of the Earth system slowly accumulate and slowly release energy received from the Sun.
The Stefan-Boltzmann equation includes a mathematical expression called “emissivity”, but this is mostly ignored, and it is mostly not understood that it contains a “time” element that must be taken into account. It is entirely possible that emissivity varies over time and over energy levels, and it is entirely possible that an object can go on forever having an imbalance between the energy it is absorbing and the energy it is releasing (it can certainly last up to 10,000 years).
Emissivity needs to be further defined. It has a non-zero effect on temperatures but it is treated as if it is zero. For example, the lag in temperatures of the Earth system’s components means that they slowly accumulate very small amounts of energy per second. This could go on forever or it may reach an equilibrium level at some point.
The Greenhouse Effect may simply be a function of Emissivity.

Bill H
January 12, 2012 6:08 pm

Brian H says:
January 12, 2012 at 5:50 pm
“Net-net, CO2 thermalizes incoming radiation, and occasionally radiates acquired thermal radiation. Since O2 and N2 don’t do the latter, CO2 constitutes a “hole in the bucket” at higher altitudes, radiating (leaking) thermal energy to space that would otherwise hang around longer (raising local/average temperatures). ”
———————————————————————————
This is the key as to why CO2 is actually a cooling device rather than a heat-storing one… as the levels increase, the black-body effect increases… much more than the intake it keeps under solar input… it’s that darkside net loss that is killing the AGW theory dead… but they refuse to look at it…

Surfer Dave
January 12, 2012 6:10 pm

@Tim_Focketts
I looked up the geothermal heat flux at the surface and found that there is a paucity of real measurements globally. The USA has been studied reasonably well, and while the average on continental USA is estimated at about 250 mW/m^2, there are hot spots up to 15 W/m^2. That was from the Southern Methodist University website, very instructive. No-one really knows what the global average is, especially given that 70% of the planet is underwater. It is assumed that the flux on continental USA is representative of the flux on the entire planet. Some measurements in Australia, South Africa and Russia seem to confirm this, but even so there are hot spots with fluxes well above 1 W/m^2.
Given that seems like quite a low rate, how come the temperature below about 10m is 22C (295K) and increasing with depth? Is that 22C from heat that has gone down from the surface, or is it the balance between the steady outward flux from the geothermal sources (which include radioactive decay, heat from the earth’s EM currents, gravitational friction internally and the residual heat from planetary formation) and the inward flowing heat from the insolation? Would that internal temperature at 10m deep change in the absence of an atmosphere?
Also, on the original topic, one point to remember is that most of the physics of heat conductance in materials assumes that the heat flux is perpendicular to the surface of a flat plane. This is probably an acceptable simplification because in a uniform infinite plane the generally spherical flow of heat (either convective or radiative) from a point is balanced out by all the surrounding points, so that the net flow can be assumed to be perpendicular. My point is this: on a sphere each point has its own perpendicular, and they all converge on the centre of the sphere. The assumption of flux parallel to a flat surface is not valid for a sphere. I suspect that in some sense a sphere acts like a lens for the heat flow; the density of the flux, and thus the temperature of the material of the sphere, increases towards the centre. I know the earth seems “big” so that the flat plane analogy sort of makes sense, but that’s probably not correct.

Brian H
January 12, 2012 6:10 pm

Edit: “occasionally acquired thermal radiation energy.”

Brian H
January 12, 2012 6:12 pm

ARRrr, he said like a pirate but a week early:
Edit edit “and occasionally radiates acquired thermal radiation energy.”

LazyTeenager
January 12, 2012 6:16 pm

Michael Hart says
But where can us lesser mortals examine the algorithms and computer code, not to mention the assumptions about which bits can be safely ignored?
———–
There is some GCM source code downloadable, but these codes, even the old obsolete ones, are rather large and beyond mere mortal understanding.
In fact they are probably so large they are beyond the in-depth understanding of people who use them and challenging to understand for people who wrote them.
The assumptions, math and experimental work behind each individual component of a GCM program will be found in the scientific literature and are also likely beyond your understanding. Each such thing represents 5 years of work for 1-5 technical specialists.
So, in short, you can download the code but the exercise will not be useful to you.
Just learning how to get the data and create the input files is going to take several weeks of study and work.

eyesonu
January 12, 2012 6:26 pm

Robert Brown
I sense that you have a class of ‘students’ here at WUWT greater than you can imagine! Some are commenters and many more are ‘lurkers’. I am of the ‘lurker’ class and will not miss a lesson and strive to learn every bit of knowledge that you offer. I will not get an ‘A’ as no written exam is required. The ‘test’ will come when I refute the bullshit of the many uninformed.
I sincerely thank you. You still get no apple!

Tim Folkerts
January 12, 2012 6:26 pm

Bill, I like what you say, but I don't see how emissivity is going to save the day.
>Emissivity needs to be further defined.
emissivity = (Actual thermal radiation) / (ideal thermal radiation)
The definition is pretty straightforward. This definition can be applied at each wavelength or (more commonly) as a weighted average over all wavelengths. I don't see what more definition is needed.
>but it is treated as if it is Zero …
If anything, it is treated as if it is exactly 1.00. Measurements show that it is indeed pretty close to 1 for many parts of the earth (water, many soils, many leaves …). I understand that the emissivity is included in many calculations, but this will only be a correction of a few percent — far too little to account for major effects.
>the lag in temperatures of the Earth system’s components means
>that they slowly accumulate very small amounts of energy per second.
This is most certainly true. That is why people worry about small changes in atmospheric IR radiation having long-term effects on climate.
But it is also most certainly true that a few percent error in emissivity cannot account for the difference between outgoing IR seen from space (~ 240 W/m^2 on average, with clear effects of CO2 and H2O) and outgoing IR as seen from just above earth’s surface (~ 390 W/m^2 on average, with a nearly blackbody spectrum).
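
To put rough numbers on that (a surface temperature of 288 K is assumed below):

SIGMA = 5.670e-8
for eps in (1.00, 0.98, 0.95):
    print(eps, round(eps * SIGMA * 288.0**4))  # 390, 382, 371 W/m^2: a few percent,
                                               # nowhere near the 240 vs 390 W/m^2 gap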

LazyTeenager
January 12, 2012 6:32 pm

Thermodynamic professor, power plant manager on January 12, 2012 at 9:41 am said:
Backradiation is bullshit from cooler gas to warmer surface, physically impossible!
——–
You don’t need billions of dollars to prove this statement to be false.
Just go down to your local electronics shop and use $100 to buy an IR thermometer.
Point it at the sky. Measure the temperature. You have just measured the longwave downwelling infrared radiation.
Case closed.
But if that is not enough consider unglazed thermal collectors used for swimming pool heaters. The LWIR from the sky figures in the efficiency calculations for those.
Score: checkout chick 1, professor 0.
So the professor needs to go back to school and learn the laws of thermodynamics properly this time.

eyesonu
January 12, 2012 6:35 pm

I should add that I am also a student of the several ‘associate’ teachers here at WUWT University.

Tim Folkerts
January 12, 2012 6:38 pm

Surfer Dave says:
>Given that seems like quite a low rate, how come the temperature
>below about 10m is 22C (295K) and increasing with depth?
>Is that 22C from heat that has gone down from the surface,
Certainly not.
>or is it the balance between the steady outward flux from the geothermal sources
Yes, the early earth was much hotter and has been cooling for a long time. But there are literally miles of insulation, so the heat flow will be rather small.
>Would that internal temperature at 10m deep change in the absence of an atmosphere?
Certainly it would be a bit cooler. I strongly suspect the temperature gradient would stay about the same, so you could pretty much subtract ~ 33 K (give or take a bit depending on exactly how you want to model the earth with no atmosphere) from the temperature at any depth.
(Of course, over the course of billions of years, this could have a significant effect all by itself. An earth that NEVER had an atmosphere would presumably be cooler than an earth that lost its atmosphere recently. )

KevinK
January 12, 2012 6:43 pm

Dr. Brown;
My last post got a bit verbose, so I split it into two parts for the comfort of any that wish to read further.
While you have captured many of the concerns with respect to our ability to predict the temperature of the Earth with/without “GHGs”, I believe you may have missed a few important “wrinkles” in the problem. I will suggest just two for everybody’s consideration.
First, the Albedo of a surface is HIGHLY dependent on the angle of the arriving optical radiation. This is well known and has been adequately characterized by the Bidirectional Reflectance Distribution Function, aka the BRDF. This is well known among those folks that design Earth Imaging Satellites. This surely seems to call into question the simplifying assumption that the spherical Earth can be replaced by a flat disk, IF only it were this simple.
Secondly, I see no discussion amongst the climate science community of the speed at which heat travels through the materials comprising the atmosphere of the Earth. It is well understood among engineers that heat propagates through different materials at different speeds. For example, most computers use aluminum as a “heatsink” to remove heat from the microprocessor. Like most engineering decisions this is a tradeoff between cost and performance. Aluminum is less costly and provides a reasonable level of performance for most applications. However, when higher performance is desired without concern for cost Copper is used instead of Aluminum. Why is that some may ask? well… it’s because the thermal diffusivity of copper is higher than that of aluminum. Thermal diffusivity is (from a system perspective) an effective measure of the “speed of heat” through a material. Heat flows through copper at a higher speed, thus allowing higher performance. In some applications synthetic diamond is applied to give even higher “speeds of heat” when the cost is justified.
My hypothesis is that increases in “GHGs” are displaced by decreases in “non-GHGs” (after all, there are only 1,000,000 ppmv of gases in the atmosphere, by definition). Heat travels through “non-GHGs” at the “speed of heat”. Heat (or its equivalent IR radiation) travels through “GHGs” at close to the “speed of light” (albeit there are some slight delays introduced from a few (10-20 max) short side trips back towards the surface as “backradiation”). Hence the addition of “GHGs” to the atmosphere only makes the gases in the atmosphere warm up faster, or cool down faster, after any change in the energy input to the system (i.e. sunrise/sunset/accumulation of clouds/dissipation of clouds). This clearly indicates that no “higher equilibrium” temperature can be caused by “GHGs”.
Cheers, Kevin.

Tim Folkerts
January 12, 2012 6:43 pm

One more thing, Surfer Dave,
Geothermal energy fluxes might well be a few W/m^2 rather than a few 0.1 W/m^2 — I am certainly not an expert on the topic. This would be interesting and it would have some bearing on energy balances. But even the highest estimates you gave are still less than 10% of the power flows attributed to GHGs, so geothermal energy [transmission] is still a perturbation, not a replacement for GHGs and atmospheric effects.

Bill Illis
January 12, 2012 6:53 pm

Tim Folkerts says:
January 12, 2012 at 6:26 pm
But it is also most certainly true that a few percent error in emissivity cannot account for the difference between outgoing IR seen from space (~ 240 W/m^2 on average, with clear effects of CO2 and H2O) and outgoing IR as seen from just above earths surface (~ 390 W/m^2 on average, with a nearly blackbody spectrum).
———
Crunch the numbers on a per second basis since the energy coming in during the daytime is actually an average 956 joules/second/m2. How much does the surface energy level change? At night, basically zero is coming in. How much does the surface energy level change (per second).

Surfer Dave
January 12, 2012 7:11 pm

@Tim_Folkerts
Thanks Tim, useful
Given that Hansen’s recent “Imbalances” came up with an imbalance of 500 mW/m^2, I would think that a geothermal heat flux of almost the same magnitude as the “imbalance” should be brought into the complete story?

January 12, 2012 7:14 pm

Dr Brown is doing his homework. I’ve been doing some homework as well to understand this all a bit better. For all those claiming that CO2 would only have a small warming effect — or even a cooling effect — here is your homework. Explain the shape of the IR spectrum as observed from space looking down and from the surface looking up. (There are good examples here: http://wattsupwiththat.com/2011/03/10/visualizing-the-greenhouse-effect-emission-spectra/)
* Explain the shape of the dotted lines labeled with different temperatures.
* Explain the cause of the various “bites” in the spectrum.
* Estimate the order of magnitude of the GHG effects at a given place and time (i.e. when a specific satellite image was taken).
This is what you need to be able to do to start discussing the GHE intelligently. Otherwise, you are free to keep idly speculating, but don’t expect much support except from other idle speculators.

eyesonu
January 12, 2012 7:21 pm

While I’m on the topic of ‘teachers’, let me suggest this as follows. Every teacher/professor of physics, thermodynamics, ‘climate science’, real science, meteorology, or other related discipline should direct his/her students to study the past several days of WUWT as related to this discussion/topic.
The topic is very important and the forum is a model for exchange of knowledge.

AusieDan
January 12, 2012 7:39 pm

Hi Willis
I think that you may be missing the possibility that the extra heat at the surface is just borrowed, by the work due to gravity, from the higher-up regions of the atmosphere.
If you work through the data for the various planets and moons, you will see how it works very simply, regardless of the percentage of GHGs in the atmosphere of the planet or moon under study.

johndo9
January 12, 2012 8:33 pm

For Dr Brown or anyone wanting to run the integral for the surface temperature.
Just back on the atmosphere-less world (moon average temperature) problem.
The very small amount of heat coming from radioactive decay (I thought about 0.5 watt/sq metre, while someone above suggested 1 watt/sq metre) limits the minimum temperature of the never-sun-exposed parts.
The lowest temperature then would be around 50 to 60 K. It may add 20 to 30 K to the average for the moon.
On earth where the lowest surface temperature may be around 180 K the contribution from the interior of 0.5 to 1 watt/sq metre is quite insignificant.

January 12, 2012 8:48 pm

I second the request of Joules Verne 1/12 09:21 for greater information on the uncertainty in the albedo. Every time I see a value, it is 30%. Ok, is that 3E-01 or is that 3.00000E-01? How many significant digits is that “30%” good to?
But there is another more basic question I have about the albedo. For this I have to return to “square 3.” Why do we account for albedo only on the direct-from-sun energy, while everyone seems to ignore it on the surface-to-sky path?
Please humor me for a moment…. I haven’t seen this concept explained.
Let’s set up a thought experiment where we normalize energy from the Sun as 1.
That energy hits some atmospheric phenomena and 30% of it (call it A) is reflected as albedo into space. That leaves (1 – A)=B passing through the phenomena to strike the ground.
All of that B that hits the ground must reradiate (over some time) or else the ground would continuously heat up. So we also have an upward amount of energy B [so far so good..]… that strikes the underside of those albedo phenomena…. And then what happens? Almost everything I read has B just passing through into space as if the phenomena didn’t exist. Really? That’s quite a trick.
Let us suppose, in a general formulation, that upward B radiation encounters the phenomena and a fraction “a” is reflected back downward (and 1-a continues into space). (a does not necessarily equal A.) Now we have some extra energy (B*a) traveling back down on a second leg…. By superposition, downward B*a hits the surface and must be returned upwards, only to meet with the partial reflector, and we have a third downward leg B*a^2, to be repeated in an infinite series.
By this “reverberation” within a partially trapped wave guide, we have energy striking the surface in the series: B + B*a + B*a^2 + … + B*a^n. Rewrite as B*(1 + a + a^2 + … + a^n), which infinite series (if a<1) can be replaced with B*(1/(1-a)), or B/(1-a).
Finally, let’s remember that B = 1-A. So that means the energy hitting the ground can be calculated as (1-A)/(1-a). What if the albedo phenomena reflect upward and downward equally well? A=a. Then the ground energy is not (1-A) but it is 1.0000. The actual value of the albedo doesn’t matter if A=a.
I did this in the general case because it is quite possible that, because of changes in spectra of the direct energy and the ground-reflected energy, “a” may very well not be equal to “A”. In the steady-state, “divide insolation by 4”, no-day-night world, I can be talked into “a” not equal to “A”.
But I am having an increasingly hard time accepting a=0, which seems to be the value for “a” used by default in many examples. Where have I gone off the rails?
Finally, I cannot quit without returning to the real world where insolation is really divided by 2, we have day and night, and albedo is a function of time and season. You can warm the morning in a cloudless Pacific, then bring on the afternoon clouds as Nature’s thermostat (Willis: WUWT 6/7/11) and bask in the warmth of (a = A, approximately).
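
That reverberation sum is easy to check numerically (the values of A and a below are arbitrary, chosen only to illustrate the A = a case):

A, a = 0.30, 0.30     # downward (top-of-atmosphere) albedo and assumed upward-path reflectivity
B = 1.0 - A
series = sum(B * a**n for n in range(200))  # B + B*a + B*a^2 + ...
print(series, B / (1.0 - a))                # both ~1.0 when a == A, as argued above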

January 12, 2012 8:51 pm

formatting correction to above: “I can be talked into “a” not equal to “A” .

January 12, 2012 9:01 pm

Embellishment on the trapped waveguide….
Have I short changed the earth’s total albedo? Nope.
The escaped energy off the first leg is B*(1-a). What leaks through the second bounce is B*a*(1-a). What leaks on the third leg is B*a^2*(1-a). Which again becomes an infinite series of B*(1-a)*(1+a+a^2+..) which rewrites to B*(1-a)*(1/(1-a)) which reduces to B, the same result you get if you treat the albedo as transparent when going up.

Jim D
January 12, 2012 9:32 pm

It is very simple. Net incoming solar radiation for a spherical earth with albedo 0.3 is 240 W/m2.
Black-body temperature required to radiate 240 W/m2 is 255 K. QED. Any questions?
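
(Spelled out, taking the solar constant as roughly 1366 W/m^2: 0.7 × 1366 / 4 ≈ 239 W/m^2, and (240 W/m^2 / 5.67×10^-8 W/m^2K^4)^(1/4) ≈ 255 K.)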
