Guest post by David Archibald
The greenhouse gases keep the Earth 30° C warmer than it would otherwise be, so instead of the average surface temperature being -15° C, it is 15° C. Carbon dioxide contributes 10% of the effect, so that is 3° C. The pre-industrial level of carbon dioxide in the atmosphere was 280 ppm. So roughly, if the heating effect were a linear relationship, each 100 ppm would contribute 1° C. With the atmospheric concentration rising by 2 ppm annually, it would go up by 100 ppm every 50 years and we would all fry as per the IPCC predictions.
But the relationship isn’t linear, it is logarithmic. In 2006, Willis Eschenbach posted this graph on Climate Audit showing the logarithmic heating effect of carbon dioxide relative to atmospheric concentration:
And this graphic of his shows carbon dioxide’s contribution to the whole greenhouse effect:
I recast Willis’ first graph as a bar chart to make the concept easier to understand to the layman:
Lo and behold, the first 20 ppm accounts for over half of the heating effect to the pre-industrial level of 280 ppm, by which time carbon dioxide is tuckered out as a greenhouse gas. One thing to bear in mind is that the atmospheric concentration of CO2 got down to 180 ppm during the glacial periods of the ice age the Earth is currently in (the Holocene is an interglacial in the ice age that started three million years ago).
Plant growth shuts down at 150 ppm, so the Earth was within 30 ppm of disaster. Terrestrial life came close to being wiped out by a lack of CO2 in the atmosphere. If plants were doing climate science instead of us humans, they would have a different opinion about what is a dangerous carbon dioxide level.
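The diminishing per-increment contribution described above can be sketched in a few lines. A minimal Python sketch, assuming the commonly cited simplified logarithmic forcing expression ΔF = 5.35·ln(C/C0) W/m² (an assumption of mine for illustration; the post's graphs are Eschenbach's and are not derived from this formula), prints the forcing added by each successive 20 ppm slice:

```python
import math

def forcing(c, c0):
    """Simplified logarithmic CO2 forcing in W/m^2 (assumed 5.35*ln form)."""
    return 5.35 * math.log(c / c0)

# Contribution of each successive 20 ppm slice, starting from 20 ppm.
# (We start at 20 ppm rather than 0 because a pure log law diverges at zero,
# so the very first slice is not captured by this simplified expression.)
edges = list(range(20, 301, 20))
slices = [forcing(hi, lo) for lo, hi in zip(edges, edges[1:])]
for lo, sl in zip(edges, slices):
    print(f"{lo:3d}-{lo + 20:3d} ppm: {sl:5.2f} W/m^2")
# Each slice is smaller than the one before: the diminishing-returns shape
# of the bar chart discussed above.
```

Under this assumed formula the 20-40 ppm slice alone equals a full doubling's forcing, and every later slice is smaller, which is the qualitative point of the bar chart.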
Some of the IPCC climate models predict that temperature will rise up to 6° C as a consequence of the doubling of the pre-industrial level of 280 ppm. So let’s add that to the graph above and see what it looks like:
The IPCC models water vapour-driven positive feedback as starting from the pre-industrial level. Somehow the carbon dioxide below the pre-industrial level does not cause this water vapour-driven positive feedback. If their water vapour feedback were a linear function of carbon dioxide, then we should have seen over 2° C of warming by now. We are told that the Earth warmed by 0.7° C over the 20th Century. The place where I live – Perth, Western Australia – missed out on a lot of that warming.
Nothing happened up to the Great Pacific Climate Shift of 1976, which gave us 0.4° C of warming, and it has been flat for the last four decades.
Let’s see what the IPCC model warming looks like when it is plotted as a cumulative bar graph:
The natural heating effect of carbon dioxide is the blue bars and the IPCC projected anthropogenic effect is the red bars. Each 20 ppm increment above 280 ppm provides about 0.03° C of naturally occurring warming and 0.43° C of anthropogenic warming. That is a multiplier effect of over thirteen times. This is the leap of faith required to believe in global warming.
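The arithmetic behind that multiplier is easy to check. A quick Python sketch using only the numbers quoted in this paragraph (the 6° C model sensitivity spread over the 280 ppm rise, against the 0.03° C natural increment read off the blue bars):

```python
# 6 deg C of IPCC-projected warming spread over the increase from 280 to
# 560 ppm, counted in 20 ppm increments as in the bar graph above
increments = (560 - 280) // 20                 # 14 increments
anthropogenic_per_20ppm = 6.0 / increments     # ~0.43 deg C per increment
natural_per_20ppm = 0.03                       # deg C, from the blue bars
multiplier = anthropogenic_per_20ppm / natural_per_20ppm
print(f"{anthropogenic_per_20ppm:.2f} deg C per 20 ppm, "
      f"multiplier ~{multiplier:.1f}x")
```

This reproduces the roughly 0.43° C per increment and the "over thirteen times" multiplier stated above.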
The whole AGW belief system is based upon positive water vapour feedback starting from the pre-industrial level of 280 ppm and not before. To paraphrase George Orwell, anthropogenic carbon dioxide molecules are more equal than the naturally occurring ones. Much, much more equal.
“”” dr.bill (05:00:25) :
(Reed, cba, George, …) A peripheral note about “Reif”:
It appears that most of us have suffered through this book during our education. I’ve never had a problem with Reif’s actual content, but his way of packaging the explanations leaves a lot to be desired. My own TD prof, way back then, bitched about it constantly, but for some reason continued to require the book for his courses. We came to the conclusion that he liked to have something that “bugged him” so that he would be inspired to teach us the “right” way. “””
Dr Bill, I have to plead ignorance of Reif’s textbook, since my early education was half a world away; “far flung” is the term we use. So I can’t comment intelligently on it, but if he says Kirchhoff doesn’t require thermal equilibrium, he is plain wrong on that factoid.
A whole lot of science is plagued by incorrect application of principles in situations where they simply don’t apply. They may still be valuable principles; but we can’t use them willy nilly as if they are universally true. I think a whole lot of second law utterances, suffer from that same problem.
As in LWIR photons from earth are not allowed to land on the sun, because it is hotter than the earth. Electromagnetic radiation (photons) and “heat” are two different species, that have no means of communicating with each other; hell one of them isn’t even a noun.
My students were all pre-med, so becoming fluent in even elementary Optics and Atomic Physics, was not out front and center in their career objectives.
By the way, this thread has generated a lot of interesting discussion. Sometimes they peter out into the background noise.
If this serves to probe into earth’s energy budget processes more along the lines of Gaia’s approach (as in real time); it will have been useful.
“”” Glenn Tamblyn (19:25:01) :
My, My, My. What a confused mess this post is. Either David Archibald doesn’t have the slightest grasp of logical consistency, or he does and his post is deliberately deceptive. You decide.
He begins with the basic point that without any greenhouse gases in the atmosphere the temperature would be 30 DegC colder. This is not contentious, it’s basic Thermodynamics. And the known logarithmic radiation behaviour of CO2 is also well known. “””
Well Glenn, you apparently are the possessor of the Rosetta stone.
I’ve been looking for this “”” And the known logarithmic radiation behaviour of CO2 is also well known “”” well known information, for some time, so far with no luck whatsoever.
I would appreciate a citation; preferably from some established scientific journal, or either a Physical model of the process that leads to such a logarithmic radiation behavior, or else a good measured or credible proxy data set showing such a relationship experimentally. I have lots of both measured and proxy reconstructed data sets over a variety of time scales out to so far 600 million years, and CO2 “doublings” of five times; well actually doublings^-1, and so far, none of those data sets I have found demonstrates a logarithmic CO2 radiation behavior; nor have I found any physics textbook or journal papers on any physical process leading to such a result.
So you could do us all a favor if you have the answer.
George E. Smith (09:54:47) :
Phil, of course I know THAT it is traditional to divide the real TSI numbers by four on the theory that a circle has area pi.r^2, while a sphere has area 4.pi.r^2. I just don’t know why they would do that, since Gaia doesn’t do that.
Presumably because they need to equate incoming with outgoing?
I’m quite surprised by your 62% average cloud cover figure; 50% is the number I have seen most, but I admit I haven’t done a lot of digging; so I’m ambivalent about it. If it is indeed 62%, then that would give me an even more jaundiced view of the albedo importance of surface ice and snow.
It’s referenced as follows:
Rossow, W. B., and Y.-C. Zhang, 1995: Calculation of surface
and top of atmosphere radiative fluxes from physical quantities
based on ISCCP data sets 2. Validation and first results.
J. Geophys. Res., 100, 1167–1197.
See: http://isccp.giss.nasa.gov/products/onlineData.html
If the cloud tops are hotter than the cloud bottoms (don’t dispute that; I do, it’s the other way round!), then they also will radiate more upwards than the bottoms do downwards. I still say the exit path is favored over the earthly direction.

James F. Evans (11:08:25) :

Henry Pool (10:00:16) wrote: “In addition, I should tell you that they recently discovered that there is also absorption of CO2 in the UV region… I think this makes it even more complicated….so it also acts a bit like ozone….”
Henry, if I understand your comment correctly, it suggests that possibly CO2 high in the atmosphere absorbs UV radiation (energy) before it gets into the lower atmosphere, thus, reducing energy that could serve to warm the atmosphere and/or surface.
Now, I may misunderstand the implications you report or the results of the mechanism the report suggests, but this possible mechanism certainly hasn’t been incorporated into the theoretical models used to predict AGW as a result of Man-caused CO2 concentrations in the atmosphere.
If so, then there is another reason to reject calls for limiting CO2, let alone severe CO2 reduction.
There is so much Science doesn’t know about CO2 concentrations in the atmosphere.
That’s a state of knowledge that hardly justifies government intervention and regulation likely to depress economic activity right at a time when the U.S. economy and most of the world’s economy is already depressed.
George E. Smith (10:29:26) :
“”” Glenn Tamblyn (19:25:01) :
And the known logarithmic radiation behaviour of CO2 is also well known. “””
Well Glenn, you apparently are the possessor of the Rosetta stone.
I’ve been looking for this “”” And the known logarithmic radiation behaviour of CO2 is also well known “”” well known information, for some time, so far with no luck whatsoever.
I would appreciate a citation; preferably from some established scientific journal, or either a Physical model of the process that leads to such a logarithmic radiation behavior, or else a good measured or credible proxy data set showing such a relationship experimentally. I have lots of both measured and proxy reconstructed data sets over a variety of time scales out to so far 600 million years, and CO2 “doublings” of five times; well actually doublings^-1, and so far, none of those data sets I have found demonstrates a logarithmic CO2 radiation behavior; nor have I found any physics textbook or journal papers on any physical process leading to such a result.
So you could do us all a favor if you have the answer.
For the general behavior you could try here; it’s a long-used approach of astronomers. Basically weak absorbers are ~linear (Beer’s Law), moderate absorbers ~log and strong absorbers ~square root (scan down to “Equivalent Width Versus Line Strength”):
http://web.njit.edu/~gary/321/Lecture6.html
You can find a detailed derivation using Voigt profiles in some advanced optics texts.
The GHGs come from all three regimes: CFCs; CO2; CH4 & N2O
See http://www.ipcc.ch/ipccreports/tar/wg1/222.htm
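The weak-versus-strong regimes Phil lists can be seen in a toy curve-of-growth calculation. The Python sketch below uses a hypothetical Gaussian line shape (not real CO2 spectroscopy; all names and numbers are illustrative): it integrates the equivalent width W = ∫(1 − exp(−N·σ(ν))) dν and shows growth that is linear in absorber amount N while the line is optically thin, but far slower once the line core saturates:

```python
import math

def equivalent_width(n_column, half_width=10.0, steps=2000):
    """Equivalent width of a toy Gaussian line, sigma(v) = exp(-v^2),
    integrated by the midpoint rule over [-half_width, half_width]."""
    dv = 2.0 * half_width / steps
    total = 0.0
    for i in range(steps):
        v = -half_width + (i + 0.5) * dv
        total += (1.0 - math.exp(-n_column * math.exp(-v * v))) * dv
    return total

# Optically thin: a 10x increase in absorber gives ~10x the equivalent width.
weak = (equivalent_width(1e-3), equivalent_width(1e-2))
# Saturated core: a 10x increase in absorber gives only a modest increase.
strong = (equivalent_width(1e3), equivalent_width(1e4))
print(weak[1] / weak[0])      # close to 10: linear regime
print(strong[1] / strong[0])  # well under 2: saturated regime
```

A real greenhouse-gas band is a forest of overlapping Voigt lines rather than one Gaussian, but the qualitative transition from linear to much-slower growth is the point of the linked lecture.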
Derek (03:06:38)
I say again. The samples are taken up at something like 13,000 feet elevation. At night, the island is colder than the ocean. Cold air sinks, and blows down the volcano on all sides. This air has travelled thousands of miles across the Pacific, so it is free from recent additions of CO2 and is well mixed.
CO2 from the volcano, on the other hand, is not well mixed. It is generally more concentrated near the ground. And it changes quickly with time, as an errant wind blows it about.
As a result, there is very little difference between two consecutive days’ good samples. And there is virtually no difference between good tower and ground level samples. And there is virtually no difference between the multiple good samples taken over the course of each night.
This makes it very easy to identify bad samples: they stick out clearly against a background of good samples.
Regarding undersea volcanoes, the wind is not coming off the sea. It is coming downwards from the upper troposphere. That’s why the measurements are taken there.
As I said, I’m as skeptical as a guy gets, and I’ve looked hard at this question. Temperature measurements are a huge problem … but in my opinion, background CO2 measurements are not.
Frank (08:39:47)
Frank, the 2.94 was the calculation by MODTRAN for a specific situation of clouds and latitude. You are correct that the IPCC uses 3.7 W/m2 as a global average, as do I. However, I have never seen a scientific explanation for the derivation of that number (3.7 W/m2), I use it simply because I have nothing better.
“”” Reed Coray (19:22:26) :
George Smith. If you care to continue writing, I’d be happy to read what you have to say. However, I think I’ve exhausted my “understanding” of “grey body” absorption and “grey body” emission. So, unless a specific point comes up that I’m almost positive I know is incorrect, I’m going to let this subject go. I still believe it is incorrect to use a T^4 law for emission when the emissivity is a function of frequency. Furthermore, although it’s a matter of semantics, I believe a “grey body” is a surface whose absorptivity and emissivity are equal and independent of frequency. The absorptivity/emissivity of a grey body surface may change with time, but at any instant in time, the emissivity and absorptivity are equal. I may be wrong, but as the saying goes: “that’s my story and I’m sticking to it.”
Again, thanks for the stimulating discussion. “””
Well Reed, I think it might be helpful, if you adjusted your understanding of what “Grey Body” really means.
First off it is apparent to me, that you do understand what a “Black Body” is.
Of course it is a quite fictional edifice, as are ALL scientific models; but it happens to be one which developed completely from first principles, along with the new introduction of Planck’s constant (h), and it is one of the crown jewels of modern physics. Arguably (h) is not even a new introduction, since it is also enshrined in Einstein’s E = h.nu; which won him a Nobel prize in Physics, whereas E = m.c^2 did not. As it turns out, with proper selection of materials and cavity construction, very close laboratory approximations to a Black Body are possible, so they are very useful devices, and along with other methods, they have enabled the verification of Planck’s law to very high precision, and with the help of astronomy, over extremely wide spectral ranges. So few would venture to dismiss black body radiation. The Planck law sets an upper boundary to the radiation AT ANY WAVELENGTH that can be emitted by any physical body; solely as a result of the Temperature of that body.
Many real bodies either absorb or emit “almost as much” EM radiation as a BB over a broad enough spectrum to almost qualify as black bodies, falling short either at the spectrum edges, or in the completeness of absorption or emission as the case may be; the result is that their absorption or emission spectrum at a particular temperature “looks like” that of a black body, but with less emission or absorption than a real black body.
A real body that absorbs or emits some fraction of the energy of a true black body, that is reasonably constant over a wide enough spectral region is referred to as a “Grey Body”. Wide enough spectral region practically means, from about 0.5 of the spectral peak wavelength, to 8 times the peak wavelength.
So for a solar spectrum absorber, that would be from about 250 nm to 4.0 microns wavelength. That contains 98% of the solar spectrum, with one percent lost beyond each end.
The “Grey” part of the description really has two facets. It’s not black, so the absorptance (or emittance) is less than 1.0. BUT it is also NOT colored, if the absorptance or emittance is reasonably constant over that wavelength range.
So a real body could be a grey emitter or a grey absorber; but it is less important that it be both.
A spectrally selective absorber, such as would be designed for the solar thermal collector I mentioned, that absorbs all or most of the solar spectrum (0.25 to 4.0 microns) but has a very low emissivity in the spectral range from 4.0 microns to say 40 microns (so it doesn’t emit a lot of LWIR), is certainly not a black body, and it isn’t a real grey body either; although for the incoming solar spectrum it is “grey enough”.
We might call such a body a “blue” body. Conversely, a body that did the reverse spectrally we would call a red body; or maybe pink if it didn’t absorb or emit very strongly. This is somewhat analogous to the terms “blue noise” and “pink noise” that analog circuit designers talk about, to distinguish them from spectrally flat “white” (Gaussian) noise.
Pink noise is often also “1/f” noise, where the noise peak amplitude, can occur without limit; but with ever diminishing frequency of occurrence. It can be shown that for true 1/f noise, the energy content is the same in any octave of the spectrum.
So a noise glitch that occurs only once every 24 hours on average may be very large, but there’s not much energy associated with the average of such events. I often say that the “Big Bang” was simply the bottom end of the 1/f noise spectrum.
But back at the “grey body”, the ocean is a pretty good example of a grey body. Water has a refractive index of about 1.333 over quite a large spectral range, certainly most of the solar spectrum. There are some anomalous index regions, often around 1.0 microns, due to a molecular resonance absorption, but 1.33 works over a wide range. The average Fresnel surface reflection coefficient is given by ((n-1)/(n+1))^2
So we have ((4/3-1)/(4/3+1))^2 = ((1/3)/(7/3))^2 = (1/7)^2 = 1/49, about 2%.
This is the normal incidence reflection coefficient of water; which means that 98% of the normal incidence solar spectrum range of wavelengths propagates into the water, where ultimately it is absorbed by something unless the water is shallow.
At angles other than normal incidence, you have to take polarization into account, and the reflection coefficient for one polarization goes to zero at the Brewster angle of incidence . B =arctan(n), which is 53 deg for water.
At 53 deg off axis, the reflected light is plane polarized; but now the reflection coefficient for the reflected polarization has increased somewhat over the 2%. The net result is that the total reflection coefficient remains almost constant at the 2% level out to the Brewster angle (53 deg for water), and then both polarizations experience increased reflection up to 100% at 90 deg incidence (grazing).
So overall, allowing for diffuse illumination, water absorbs about 97% of incident sunlight, and reflects about 3%.
So reasonably at least for the solar spectrum, we can regard (deep ocean)water as a grey body with an absorptance of 97%.
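The Fresnel arithmetic above is easy to verify. A short Python sketch of the normal-incidence reflection coefficient and the Brewster angle for n = 4/3 (just a check of the two formulas quoted, not a full polarization-resolved treatment):

```python
import math

n = 4.0 / 3.0  # refractive index of water over most of the solar spectrum

# Normal-incidence Fresnel reflection coefficient, R = ((n-1)/(n+1))^2
r_normal = ((n - 1.0) / (n + 1.0)) ** 2   # = (1/7)^2 = 1/49, about 2%

# Brewster angle, where the p-polarized reflection goes to zero: B = arctan(n)
brewster_deg = math.degrees(math.atan(n))  # ~53 degrees for water

print(f"R(normal) = {r_normal:.4f}, Brewster angle = {brewster_deg:.1f} deg")
```

This reproduces the 1/49 (~2%) normal-incidence reflection and the 53-degree Brewster angle used in the argument above.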
But we have to be careful, because extreme depth is the source of most of that absorption; so we should not apply that to shallower waters, which are quite transparent over much of the solar range.
So just what the total spectral emissivity curve for water looks like, I am not sure I know; nor how it depends on water depth. I would still expect the surface to emit rather greyly in the 300K region LWIR spectrum.
I have not done any extensive analysis, as to what happens to 4th power laws when you have objects that are not approximately grey over the spectral range of interest; but I get wary, when they diverge significantly from greyness.
I suspect that Phil knows what happens.
James F. Evans (11:08:25) :
Henry Pool (10:00:16) wrote: “In addition, I should tell you that they recently discovered that there is also absorption of CO2 in the UV region… I think this makes it even more complicated….so it also acts a bit like ozone….”
Henry, if I understand your comment correctly, it suggests that possibly CO2 high in the atmosphere absorbs UV radiation (energy) before it gets into the lower atmosphere, thus, reducing energy that could serve to warm the atmosphere and/or surface.
Well I’m rather more sceptical than you, I want to know who ‘they’ are and where they published?
Nick Stokes (14:24:38) :
I believe a lot of people here have never seen or made by themselves real-time measurements of CO2 in air. And I believe there is confusion about what “background CO2 level” means. The background CO2 level was introduced by C. Keeling and you will find it in the higher troposphere (4-8 km or up) or over the sea surface (marine boundary layer, MBL).
Let’s check an Ameriflux station, Harvard Forest, far from human influence, only influenced by soil, vegetation and wind. You can easily measure there changes of 100-200 ppm per week, up to 500 ppm and more. But that’s not essential. The annual average near ground is within 1% of the background CO2 level measured by aeroplane over this station at 1-8 km altitude. Do you need more evidence?
Therefore they use it as a background station in the WDCGG list.
This fact I have used to calculate the upper tropospheric background level at the historical stations. And that’s why the Giessen data by Kreutz are valid and supply a precise base to do calculations.
best regards
Ernst Beck
“”” Phil. (10:58:36) :
George E. Smith (09:54:47) :
…….
Presumably because they need to equate incoming with outgoing?
…….What about equating the thermal effects with reality ?
I’m quite surprised by your 62% average cloud cover figure;
…….
It’s referenced as follows:
Rossow, W. B., and Y.-C. Zhang, 1995: Calculation of surface
and top of atmosphere radiative fluxes from physical quantities
based on ISCCP data sets 2. Validation and first results.
J. Geophys. Res., 100, 1167–1197.
I’m quite happy to accept your assertion; but thanks for the reference.
See: http://isccp.giss.nasa.gov/products/onlineData.html
If the cloud tops are hotter than the cloud bottoms (don’t dispute that; I do, it’s the other way round!), …..
I was thinking of a thin (but dense) cloud layer, where the top could be solar heated, while the bottom was somewhat shielded from the sun above. I agree on the tall cloud case.
Phil. (11:15:25) :
George E. Smith (10:29:26) :
“”” Glenn Tamblyn (19:25:01) :
And the known logarithmic radiation behaviour of CO2 is also well known. “””
Well Glenn, you apparently are the possessor of the Rosetta stone.
I’ve been looking for this “”” And the known logarithmic radiation behaviour of CO2 is also well known “”” …..
For the general behavior you could try here; it’s a long-used approach of astronomers. Basically weak absorbers are ~linear (Beer’s Law), moderate absorbers ~log and strong absorbers ~square root (scan down to “Equivalent Width Versus Line Strength”):
http://web.njit.edu/~gary/321/Lecture6.html
You can find a detailed derivation using Voigt profiles in some advanced optics texts.
The GHGs come from all three regimes: CFCs; CO2; CH4 & N2O
See http://www.ipcc.ch/ipccreports/tar/wg1/222.htm “””
Well I’ve always understood Beer’s law to be the simple non-fluorescing absorption case where t = exp(-alpha.x), and the absorption is somewhat large; which ought to be the case for CO2 inside its absorption band.
And there’s no question that when the absorbing species is sparse, as often in the astronomy case, the absorptance is roughly linear with distance, because the absorptance is low; but then exp(x) = 1 + x + …
But all the texts I’ve read say “climate sensitivity” is the permanent increase in mean global surface Temperature as a result of a doubling of CO2. I’m not questioning whether the CO2 (or other GHG) absorption of surface emitted LWIR radiation is logarithmic with CO2 abundance IN A PARTICULAR LOCATION; but the available LWIR to absorb varies by more than an order of magnitude depending on where you are on earth. The resulting atmospheric temperature increase is non-linear with respect to that order-of-magnitude-variable LWIR CO2 absorption (in a way that overestimates the Temperature increase), and none of it simply relates to the change in mean global surface temperature, since the effect of that warmer atmosphere on the surface depends on the nature of the surface and the local thermal processes, so it too is not linear. Etc, etc.
So there’s a big difference between Beer’s law maybe governing the CO2 absorption of LWIR in the 15 micron band, in some exponential/logarithmic relationship, and the surface temperature following along; with total disregard for anything else going on, such as cloud formation stepping in to interfere with what CO2 is trying to do.
The assumptions are mind boggling. Surface emitted LWIR is uniform and constant all over the earth, and CO2 distribution is the same all over the earth, and nothing else has any say in what the Temperature of the atmosphere is, and somehow the Temperature of the atmosphere sets the surface temperature of the earth. Meanwhile the most common molecule on the planet, just sits by and lets all that happen.
Well I don’t think so.
George E. Smith (10:09:17) :
George: No, Reif doesn’t mangle Kirchhoff. The essential problem in much of this, which I believe to be primarily a misunderstanding (although not entirely so), is that the equality of absorptivity and emissivity applies to an object in thermal equilibrium at a location. People don’t always adhere to this when doing calculations and drawing conclusions. Fundamentally, I believe that long-term progress in climate analysis will only stem from treating extensive variables like energy. Unfortunately, climate science is really now just in its infancy, and has been hijacked for other purposes before having had a chance to mature.
Regarding the “heat is not a noun” issue, you have no idea how many times I have tried to beat that notion into the heads of my students! HEATING is a process, as you rightly imply.
/dr.bill
Willis Eschenbach (11:43:02) :
“However, I have never seen a scientific explanation for the derivation of that number (3.7 W/m2) ….”
Gee. And here I was done thinkin’ that all them good ole “official science type” boys up at the Un’s IPCC scientific types would of gone up and done did perfectfully and completely esplain’ that little bitty minor point.
Guess they done did fergit doin’ that ‘midst all them other emails ’bout cursing people who really check things ……
Darn. Shucks. Drat. Heck even. 8<)
Phil. (12:13:12) :
Phil., good point taken.
The discussion here does nothing to change my opinion that the understanding of the quantitative physics on both sides of the AGW debate is pathetic when compared to, e.g., undergraduate electrodynamics. However, this blog may be the only public place where some of these essentials are being worked through. Clearly Reed Coray and I (and I’ll mention Marty Hertzberg), and Phil, are within a fat epsilon of the same understanding. I again suggest my planetary temperature page as a better introduction to the most basic quantitative physics than any I have found, though I would welcome links proving me wrong.

Other than terawatts of geothermal and manmade heat, and some particulate energy from the solar wind and such like, the mean temperature of the earth is determined by its radiant exchange with the celestial sphere around it, a tiny fraction of which is at close to 6000 K. I find these essentially 1-dimensional energy budgets such as Eschenbach apparently adapts from Trenberth rather useless and misleading. They can’t even model changes in spectrum with latitude or day and night.

The Stefan-Boltzmann relationship, Power = sb * T^4, modified by Kirchhoff’s observation 151 years ago that at any frequency absorptivity = emissivity, must hold between our sphere and the celestial sphere. Any conduction and convection within that shell is constantly driven in the direction of satisfying that balance. No “feedback” or “runaways” can affect that balance. Phil’s equations are on the right path; it is the correlation of the spectrum of an object with the spectra of its sources and sinks that counts.

The only definition of gray which is analytically useful is having a flat spectrum. I think the use of the term albedo ought to be limited to this case. In that case, a = e = ae is a constant across the spectrum, the correlation with any source or sink is the same, and the gray-value albedo drops out of the equation, leaving the pure ratio of correlations. Thus, as has been commented, the ubiquitous notion that changing the albedo of a uniform gray body will change its temperature is wrong. (I believe this was Kirchhoff’s actual insight.) And, particularly given the width of Planck distributions, nothing with an absorptivity less than 1 can have an effective emissivity equal to 1.

Given any particular object spectrum, and source and sink spectra, actual “greenhouse effects” can be calculated. I could extend my array-language implementation to do this in a few lines of code, but I spend more time on all this than I can afford as it is. I’d rather see someone perhaps translate the code into a more common language like MatLab and calculate the equilibrium temperatures for various substances, e.g., CO2 and H2O, and scenarios, e.g., the planet’s surface spectrum, its lumped spectrum as seen from space, etc. These could then be confirmed in lab experiments, and after several decades of lysenkoism, we could see the beginnings of a “Climate Science” deserving of the name.

Incidentally, the massive effect of “greenhouse gases” on the variance, as opposed to the mean, of our temperature seems almost to be a concept which is alien to the CS community. The GHGs do facilitate the transfer of heat to and from the air every day and night, but I have never seen any discussion of that life-sustaining effect.
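The “albedo drops out” point for a uniform gray body can be demonstrated in a few lines. This Python sketch is not the commenter’s array-language code, just a stand-in under stated assumptions (rapidly rotating sphere, absorptivity = emissivity = g, solar constant taken as ~1361 W/m²): absorbed sunlight g·S·πR² is balanced against emitted g·σT⁴·4πR², and the gray factor g cancels:

```python
S = 1361.0        # solar constant in W/m^2 (assumed value)
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def gray_equilibrium_temp(g):
    """Equilibrium temperature of a rapidly rotating gray sphere with
    absorptivity = emissivity = g.
    Absorbed: g * S * pi * R^2; emitted: g * sigma * T^4 * 4 * pi * R^2.
    The gray factor g cancels from both sides of the balance."""
    return (g * S / (4.0 * SIGMA * g)) ** 0.25

temps = [gray_equilibrium_temp(g) for g in (0.1, 0.5, 1.0)]
print(temps)  # the same ~278 K for every gray value g
```

Only a spectrally selective body (one whose absorptivity for the ~6000 K solar spectrum differs from its emissivity for its own ~300 K spectrum) can equilibrate away from that temperature, which is the correlation-of-spectra point made above.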
Frank (08:39:47)
Frank, I agree. You can’t just give a single figure. I just went to MODTRAN, which gives the following figures for standard conditions (the ones that MODTRAN opens with, except holding relative humidity steady):
Cooling if we remove all GHGs: ~25.2°C
Cooling if we remove CO2: ~12°C
Cooling if we remove H2O: ~15.3°C
As you point out, the bands overlap, so removing both gives less than the sum of the individual coolings.
Finally, as you point out this does not include any losses in the system from evaporation or convection.
In other words, on a theoretical Earth, if there were no CO2 in the atmosphere, we’d be about 12°C [Edited to remove typo of “7°C”, thanks, Phil] colder. Well, that is, we would be that much colder if there were no thermostatic mechanism regulating the earth’s temperature.
Reed Coray (12:07:06) :
Phil. If the emissivity is a function of frequency, then the total power radiated by a planar unit of surface area at a temperature T is NOT the product of (a) the Stefan-Boltzmann constant, (b) the “frequency averaged emissivity”, (c) the size of the surface area, and (d) T^4. It may be a reasonable approximation, but that remains to be seen. I therefore request that you provide me with a model of emissivity as a function of frequency. For example, you might say “the emissivity is a step function: 1 for frequencies from 0 to X Hz, and 0.4 for frequencies from X Hz to infinity.” Once we have quantified the behavior of the emissivity as a function of frequency, we can compute (at least numerically) (a) the average emissivity using your definition of “average emissivity”:
∫ε(ν)I_earth(ν)dν / ∫I_earth(ν)dν
where I_earth is the emission spectrum of the Earth; and (b), using Planck’s law modified to include a frequency-dependent emissivity, the total power radiated by a square meter of surface area at a temperature T, which we’ll assume is either 255 K or 288 K (your choice). We can then compute the value of the product of (a) the Stefan-Boltzmann constant, (b) one square meter, (c) the average emissivity, and (d) T^4. It may turn out the two powers will be approximately equal, in which case the use of the T^4 law is a reasonable approximation. If they are significantly different, then the T^4 rule is a poor approximation and should not be used.
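Reed’s proposed check can be sketched numerically. The following is an illustrative calculation, assuming his hypothetical step-function emissivity (1.0 below a cutoff, 0.4 above; the 2×10^13 Hz cutoff is an arbitrary choice): compute the Planck-weighted average emissivity at 255 K, then compare σ·ε̄·T^4 at 288 K against the direct spectral integral.

```python
import math

H = 6.626e-34      # Planck constant, J s
K = 1.381e-23      # Boltzmann constant, J/K
C = 2.998e8        # speed of light, m/s
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def planck(nu, T):
    """Planck spectral radiance B(nu, T) in W m^-2 sr^-1 Hz^-1."""
    return 2.0 * H * nu**3 / C**2 / math.expm1(H * nu / (K * T))

def emissivity(nu):
    """Hypothetical step emissivity; the 2e13 Hz cutoff is arbitrary."""
    return 1.0 if nu < 2.0e13 else 0.4

def integrate(f, lo=1e10, hi=2e14, n=20000):
    """Midpoint-rule integral over the thermal IR frequency range."""
    dnu = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * dnu) for i in range(n)) * dnu

def avg_emissivity(T):
    """Planck-weighted average: integral(eps*B) / integral(B) at temperature T."""
    return (integrate(lambda nu: emissivity(nu) * planck(nu, T))
            / integrate(lambda nu: planck(nu, T)))

def exact_flux(T):
    """pi * integral of eps(nu)*B(nu, T) dnu: the directly integrated flux."""
    return math.pi * integrate(lambda nu: emissivity(nu) * planck(nu, T))

eps_255 = avg_emissivity(255.0)
# The approximation Reed questions: a fixed average emissivity times sigma*T^4
approx_288 = SIGMA * eps_255 * 288.0**4
print(eps_255, exact_flux(288.0), approx_288)
```

With this step emissivity, the average computed at 255 K overestimates the directly integrated flux at 288 K by several percent, because the Planck weighting spectrum shifts with temperature; at the temperature used for the averaging, the two agree by construction.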
In the IR, the emission of a solid or liquid surface departs little from the Planck equation, so compared with a gas spectrum, using a single number (essentially almost 1.0) together with Stefan’s law is going to do reasonably well, at least to the accuracy of the information involved.
Here is a link to some emissivity curves:
http://www.icess.ucsb.edu/modis/EMIS/html/em.html
George E. Smith (11:55:44), you raise interesting issues.
My bible, Geiger’s “The Climate Near The Ground” (Amazon), gives the spectral emissivity of water in the 9 to 12 micron region as 0.96. I don’t believe that it depends on the water depth, as IR is totally absorbed by 1 mm of water. Finally, 9 microns is roughly the wavelength at which a blackbody at 300 K radiates most strongly.
Geiger also says “The surface will be treated as a blackbody [for longwave emission/absorption] throughout the remainder of the book since, for the range of natural surface emissivities, the departure of Tr [blackbody radiation temperature] from Ts [true surface temperature] is small.”
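Geiger’s claim is easy to quantify: since εσTs^4 = σTr^4 implies Tr = ε^(1/4)·Ts, the departure of the radiation temperature from the true surface temperature is small for near-unity emissivities. A quick sketch (288 K chosen here as a representative surface temperature):

```python
# Geiger's point: eps*sigma*Ts**4 = sigma*Tr**4 gives Tr = eps**0.25 * Ts,
# so the blackbody radiation temperature Tr barely departs from the true
# surface temperature Ts when the emissivity is close to 1.
ts = 288.0                       # K; a representative surface temperature
for eps in (1.0, 0.98, 0.96):    # emissivities in the natural-surface range
    tr = eps ** 0.25 * ts
    print(f"eps={eps:.2f}  Tr={tr:.1f} K  departure={ts - tr:.1f} K")
```

At ε = 0.96 the departure is about 3 K, and at 0.98 about 1.5 K, which is why Geiger treats natural surfaces as blackbodies for longwave purposes.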
Well Willis, it is not to be argumentative; just inquisitive. The case for the deep oceans being a pretty good grey body with alpha about 0.97 simply rests on the belief that only about 35 is reflected in any spectral range of interest (to us); and if it is not reflected, then it at least proceeds into the water, and if the thin surface layer doesn’t absorb it, as you point out happens for the 300 K region LWIR, then it proceeds deeper until something else does absorb it, so it is in effect an anechoic chamber, or a cavity absorber. So I think the case for near-BB grey-body absorption is pretty “robust”, to use the adjective du jour.
But when it comes to the emission from the oceans, I’m not so sure; in the sense that “I don’t know.”
If we are prepared to say this is a case where Kirchhoff’s law does apply, at least somewhat (for reasons as yet unknown, to me), then yes, we would assume that the deep oceans are a pretty good grey-body emitter.
But WHY?! If we can’t ascribe the total absorptance to the surface (in the solar-spectrum visual range), then what of the emissivity in the same range? Which is why the shallow-water issue arises.
Now if we assume that the top few microns to mm of surface layer suffices to absorb the likely 300 K LWIR spectrum, then the shallow-water issue doesn’t arise, and maybe we are safe in expecting a high LWIR emissivity.
I can attest that very deep (thousands of feet) clean sea water (Sea of Cortez), when shaded from the blue sky and seen through good wide-band grey polarized glasses to remove the surface reflection (yes, of course I look at it at the Brewster angle), looks very spookily black to me.
I’m sure planet Earth looks very black from outer space, if you filter out the atmospheric scattered blue light; well, talking oceans here, not brown ground.
So it is in the shallow-water visible spectrum range where I don’t know what to expect.
But that very thin surface LWIR trap is why I blanch at the idea of regarding the downward back radiation from the atmosphere (OK Phil; clouds too) as simply another forcing in W/m^2 to store in the deep oceans along with the solar energy.
The solar insolation, on the other hand, should only result in weak evaporation, because of the transparency, and hence deep penetration, of most of the high-energy (irradiance) portion of the spectrum, which blows right on by the surface layer.
I have to plead guilty to being a wavelength-oriented spectroscopist, rather than frequency; even though I know that the answer to the coffee-pot question “What’s Nu?” is E/h.
So I’m uncomfortable when thinking about the Planck law in its frequency-oriented version; just force of habit. And that is a bit weird, since when I worked at Tektronix in the early 1960s, “frequency” was a swear word; a mere figment of the imagination; real things happened in the time domain.
So maybe I still regard frequency as an upside-down view of reality; but I know that chemists and molecular spectroscopists think in cm^-1; maybe if they used terahertz instead, we could be friends.
But regardless of that, or irregardless as the case may be, I don’t think any of that infringes on the Stefan-Boltzmann law. The total energy still goes as T^4.
Perhaps it is that I am comfortable with the Planck function being a function of the single variable T·λ.
George E. Smith (14:20:12) :
Inquisitive is good.
What is the “35 is reflected”? Thirty-five what?
I don’t “expect” a high emissivity for water. It has been measured many times. From the UCSB MODIS Emissivity Library:
The plot of water emissivity is here.
On the absorption question, there is a compendium of data here, and a graph here.
Well Willis, if my fingers actually did what my brain tells them to do, you would see that 35 is texting code for 3%; it saves wasting a finger on the shift key.
So you don’t “expect” a high emissivity for water; so 97% isn’t high enough for you?
I do appreciate those little (here) thingies you scatter around, especially that one for water emissivity; just how beautiful is that absorption edge at 6 microns! I’ll have to see if I can dig up any refractive-index curve corresponding to that wavelength. And too damn bad that your plot stops at 3 microns, because that clearly is the start of another absorption edge, and I just happen to know that water has its very highest (IR) absorptance right at 3 microns, where the alpha is about 10,000 cm^-1; and my handbook source also confirms that edge at 6 microns, where the alpha is maybe 3,000 cm^-1.
My IR handbook says “Tilt” below around 160 nm, so it doesn’t show that 10^6 UV peak at your last (here).
Thanx.
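Those absorption coefficients translate directly into e-folding penetration depths via Beer-Lambert attenuation, I(z) = I0·e^(−αz), so the depth at which intensity falls to 1/e is simply 1/α. A quick sketch using the two handbook values quoted above (treat them as rough figures):

```python
# Beer-Lambert attenuation I(z) = I0 * exp(-alpha * z): the e-folding
# penetration depth is 1/alpha.  Values are the handbook figures quoted
# above for liquid water.
alphas_cm = {"3 um band": 10_000.0, "6 um band": 3_000.0}  # cm^-1
for band, alpha in alphas_cm.items():
    depth_um = 1.0 / alpha * 1e4          # 1/alpha in cm -> micrometres
    print(f"{band}: e-folding depth = {depth_um:.1f} um")
```

That is 1 micron at 3 microns wavelength and about 3.3 microns at 6 microns, consistent with the “top few microns” surface-trap picture discussed earlier in the thread.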
For visible light, oceans tend to absorb around 96% of the incoming light when it arrives at a high angle of incidence w.r.t. the horizon. As I recall, it gets even better further into the IR. This is subject to the state of the surface as well as to the angle of incidence, but low angles of incidence (w.r.t. the horizon) are not where there’s much incoming power per unit area. Considering that there is slightly more solar power in the IR (mostly near) than in the visible region (with the balance mostly being UV), it should be no surprise that the surface reflects little. We do have the clouds and atmospheric scattering that make for a rather bright blue planet despite the lack of all that much reflection from the surface.
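The angular behaviour can be sketched with the Fresnel equations. This is illustrative only: it assumes a flat surface (no waves or foam) and uses water’s visible-light refractive index, n ≈ 1.33; the IR value differs somewhat.

```python
import math

def fresnel_unpolarized(theta_deg, n1=1.0, n2=1.33):
    """Unpolarized Fresnel reflectance of a flat air-water interface.
    n2 = 1.33 is water's visible-light index; treat results as illustrative."""
    ti = math.radians(theta_deg)
    tt = math.asin(n1 / n2 * math.sin(ti))          # Snell's law
    rs = ((n1 * math.cos(ti) - n2 * math.cos(tt))
          / (n1 * math.cos(ti) + n2 * math.cos(tt))) ** 2
    rp = ((n1 * math.cos(tt) - n2 * math.cos(ti))
          / (n1 * math.cos(tt) + n2 * math.cos(ti))) ** 2
    return 0.5 * (rs + rp)

print(fresnel_unpolarized(0.0))       # normal incidence: only ~2% reflected
print(fresnel_unpolarized(85.0))      # grazing incidence: reflectance climbs
print(math.degrees(math.atan(1.33)))  # Brewster angle, ~53 deg: rp vanishes there
```

At normal incidence only about 2% is reflected, consistent with the ~96-97% absorption figures above; reflectance climbs steeply only near grazing incidence, where little solar power arrives anyway. And the p-polarized component vanishes at the Brewster angle, arctan(1.33) ≈ 53°, which is why polarized glasses kill the surface glare.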
The net result, as Willis stated, is a rather robust ability to use Stefan’s law and the grey-body approximation. One can go all the way to using line-by-line spectral data, molecule by molecule, for a gas, and it provides close to the same results as lesser approaches, but that is a whole lot more time-consuming and difficult to accomplish, considering very little is gained for most of the conceptual discussions associated with this thread.
What’s more, whatever benefit in accuracy occurs is substantially negated by the presence of clouds as well as by the H2O vapour cycle and convection.
This is not to say that AGW or CAGW is correct. One can even use the simpler Stefan’s-law approach to provide ‘robust’ ‘proof’ that the Earth is quite insensitive to variations in forcing compared to the CAGW claims.