Guest Post by Willis Eschenbach
Did you know that one watt per square metre is equal to one kilogram per cubic second?
I sure didn’t know that, and at first I didn’t believe it, but it’s true.
(Yeah, yeah, I know it’s a second cubed and not a cubic second, but a metre cubed is a cubic metre, so I had to find out just what a cubic second might look like when it stepped out of the shadows … but I digress …)
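If you want to check that claim yourself, it takes nothing more than exponent bookkeeping on the SI base units. Here's a little sketch of my own (not from the paper) that does exactly that:

```python
# A unit is just a set of exponents on the SI base units (kg, m, s).
WATT = {"kg": 1, "m": 2, "s": -3}           # W = J/s = kg*m^2/s^3
PER_SQ_METRE = {"kg": 0, "m": -2, "s": 0}   # dividing by area subtracts 2 from the metre exponent

flux = {base: WATT[base] + PER_SQ_METRE[base] for base in WATT}
print(flux)  # {'kg': 1, 'm': 0, 's': -3} ... one kilogram per second cubed
```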
The thing I like best about climate science is that I am constantly learning new things. For example, I came across that fascinating fact because against my better judgement I decided to take a look at the recent paper, charmingly yclept “Emergent Model for Predicting the Average Surface Temperature of Rocky Planets with Diverse Atmospheres”, by Den Volokin and Lark ReLlez, paywalled here. It has been gathering attention on some skeptical websites, so I thought I’d take a look even though it is just another in the long string of fitted models purporting to reveal hidden truths. As it turns out, it is a fascinating but fatally flawed paper, full of both interesting and wrong ideas.
The Abstract and Highlights say:
Highlights
• Dimensional Analysis is used to model the average temperature of planetary bodies.
• The new model is derived via regression analysis of measured data from 6 bodies.
• Planetary bodies used for the model are Venus, Earth, Moon, Mars, Titan and Triton.
• Two forcing variables are found to accurately predict mean planetary temperatures.
• The predictor variables are solar irradiance and surface atmospheric pressure.
Abstract
The Global Mean Annual near-surface Temperature (GMAT) of a planetary body is an expression of the available kinetic energy in the climate system and a critical parameter determining planet’s habitability. Previous studies have relied on theory-based mechanistic models to estimate GMATs of distant bodies such as extrasolar planets.
This ‘bottom-up’ approach oftentimes utilizes case-specific parameterizations of key physical processes (such as vertical convection and cloud formation) requiring detailed measurements in order to successfully simulate surface thermal conditions across diverse atmospheric and radiative environments. Here, we present a different ‘top-down’ statistical approach towards the development of a universal GMAT model that does not require planet-specific empirical adjustments.
Our method is based on Dimensional Analysis (DA) of observed data from the Solar System. DA provides an objective technique for constructing relevant state and forcing variables while ensuring dimensional homogeneity of the final model. Although widely utilized in other areas of physical science to derive models from empirical data, DA is a rarely employed analytic tool in astronomy and planetary science.
We apply the DA methodology to a well-constrained data set of six celestial bodies representing highly diverse physical environments in the Solar System, i.e. Venus, Earth, the Moon, Mars, Titan (a Moon of Saturn), and Triton (a Moon of Neptune). Twelve prospective relationships (models) suggested by DA are investigated via non-linear regression analyses involving dimensionless products comprised of solar irradiance, greenhouse-gas partial pressure/density and total atmospheric pressure/density as forcing variables, and two temperature ratios as dependent (state) variables. One non-linear regression model is found to statistically outperform the rest by a wide margin.
Our analysis revealed that GMATs of rocky planets can accurately be predicted over a broad range of atmospheric conditions and radiative regimes only using two forcing variables: top-of-the-atmosphere solar irradiance and total surface atmospheric pressure. The new model displays characteristics of an emergent macro-level thermodynamic relationship heretofore unbeknown to science that deserves further investigation and possibly a theoretical interpretation.
Well, that all sounded quite fascinating … except for the part where I didn’t have a clue what dimensional analysis might be. So I went to school on that question. Here’s what I found out.
As we generally know but rarely stop to consider, the various special units that we use in science, like say watts per square metre, can all be expressed in the fundamental SI “base units” of mass (kilograms or kg), length (metres or m), time (seconds or s), temperature (kelvins or K), and the like.
Dimensional analysis is a method of combining the variables of interest to make new dimensionless variables. Let’s say we have N variables of interest, we’ll call them x(1), x(2), x(3), x(4) … x(N). Dimensional analysis combines them in such a clever way that the fundamental dimensions cancel out, and thus what remains are dimensionless variables. This ensures that whatever we do with the variables the units will be correct … because they are dimensionless. Nifty.
Next, I found out that there is a mathematical theorem with the lovely English-sounding name, “The Buckingham Pi Theorem”, which sounds like it should calculate the appropriate dessert amounts when you have tea with the Queen. Anyhow, it states that if you have a system defined by a function involving N dimensioned variables, f(x(1), x(2), x(3), x(4) … x(N)), you can reduce the number of variables. The theorem states that by using dimensional analysis to combine the N dimensioned variables into dimensionless variables, you end up with N – m variables, where “m” is the number of SI base units involved (e.g. kg, m, etc).
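To make the theorem concrete, take the classic pendulum example (my own illustration, nothing to do with the paper): four variables (period, length, mass, gravity) minus three base dimensions (mass, length, time) leaves exactly one dimensionless group, and you can verify the group by adding up exponents:

```python
# Each variable's dimensions as (mass, length, time) exponents.
DIMS = {
    "t": (0, 0, 1),    # pendulum period, s
    "l": (0, 1, 0),    # pendulum length, m
    "m": (1, 0, 0),    # bob mass, kg
    "g": (0, 1, -2),   # gravity, m/s^2
}

def dims_of_product(powers):
    """Dimension exponents of the product of var**power terms."""
    return tuple(
        sum(DIMS[v][i] * p for v, p in powers.items()) for i in range(3)
    )

# Buckingham Pi: 4 variables - 3 dimensions = 1 dimensionless group.
# That group is pi = t^2 * g / l (note the mass drops out entirely).
pi_group = {"t": 2, "l": -1, "m": 0, "g": 1}
print(dims_of_product(pi_group))  # (0, 0, 0) -- dimensionless
```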
So that sounded like a most promising theoretical method, worth knowing. It would seem that almost any model could be simplified by that method. However, at that point, they take their dimensionless sports car out on the autobahn to see how it performs at speed … and that’s where the wheels come off.
They applied dimensional analysis to the modeling of planetary surface temperatures. They decided that the following variables were of interest (sorry for the “MANUSCRIPT” across the page, it’s a samizdat copy):
Since there are six variables and four fundamental units, the Buckingham Pi Theorem says that they can be reduced to two dimensionless variables. A neat trick indeed. Then they used twelve different combinations of those dimensioned variables converted into dimensionless units, and tried fitting them to the data from six rocky celestial bodies using a variety of formulas, including a formula of the form:
y = a exp(b x) + c exp(d x)
Out of all of the possible combinations of variables, they looked at 12 different possibilities. After trying various functions including the dual exponential function above, they picked the best function (the dual exponential) and the best combination of variables, and they produced the following graph:
Note that they started out with six celestial bodies, but at the end they couldn’t even fit all six with their model, so they “excluded” Titan from the regression. This is because if they left it in, the fit for Venus would really suck … in scientific circles this is known as “data snooping”, and is a Very Bad Thing™. In this case the data snooping took the form of selecting their data on the basis of how well it fit their theory. Bad scientists, no cookies.
Once they’ve done that, hoorah, their whiz-bang new model predicts the “thermal enhancement” of six celestial bodies with amazing accuracy … well, it does as long as you ignore the celestial body it doesn’t work so well for.
In any case, “thermal enhancement” is defined by them as the actual planetary surface temperature Ts divided by the temperature Tna that the planet would have if it were an airless sphere. So “thermal enhancement” is how much warmer the planet is than that reference temperature. And here is the magic equation used to derive the results:
In the formula, P is the atmospheric pressure. Pr is the pressure at the triple point of water, 611.73 pascals. Pr is not important, it is a matter of convention. All that changing Pr does is change the parameters, the answer will be the same. As such, it seems odd that they include it at all. Why not make Pr equal to 1 pascal, and cancel it out of the equation? I have no answer to that question. I suspect they use 611.73 pascals rather than one pascal because it seems more sciencey. But that may just be my paranoia at work, they may have never considered canceling it out.
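It’s easy to verify that claim numerically with a toy pressure term of the same general shape (my own sketch; the constants a and b are invented for illustration, not the paper’s fitted values):

```python
import math

pressures = [600.0, 101325.0, 9.2e6]  # rough Mars, Earth, Venus surface pressures, Pa
a, b = 1.0, 2.0e-6                    # invented parameters, purely illustrative
Pr = 611.73                           # triple point of water, Pa

with_ref = [a * math.exp(b * P / Pr) for P in pressures]
no_ref = [a * math.exp((b / Pr) * P) for P in pressures]  # Pr folded into the parameter

print(all(math.isclose(u, v) for u, v in zip(with_ref, no_ref)))  # True
```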
So there you have their model … what’s not to like about their analysis?
Well, as it turns out … just about everything.
Objection the First—If the formulas don’t fit, you must acquit
Let me start at the most fundamental level. The problem lies in their assumption that the surface temperature of a planet with an atmosphere can actually be modeled by a simple function of the form:
Surface Temperature = f(x(1), x(2), x(3), x(4) … x(N))
I find the idea that the climate is that simple to be laughable. As an example of why, consider another much less complex system, a meandering river in the lowlands:
Notice the old river tracks and cutoff oxbows from previous locations of the river. Now, we have variables like gravity, and the slope of the land, and the density of the soil, and the like. But I would challenge anyone to successfully combine those variables in a function like
Average position of river mile 6 = f(x(1), x(2), x(3), x(4) … x(N))
and make the formula work in anything but special situations.
This is because a) the location of the river is always changing, and more importantly, b) the location of the river today is in very large measure a function of the location of the river yesterday.
In other words, the only hope of modeling this system is with an “iterative” model. An iterative model is a model that calculates the river’s position one day at a time, and uses one day’s results as input to the model in order to calculate the next day’s values. Thus, an iterative model MAY be able to calculate the ongoing state of the system. And this is exactly why climate models are iterative models of just that type—because you can’t model such constantly evolving systems with simplistic equations of a form like
Surface Temperature = f(x(1), x(2), x(3), x(4) … x(N))
So that is my first objection. The formula that is at the root of all of this, a simple dual-exponential, is extremely unlikely to be adequate to the task. The surface temperature of the earth is the result of a host of interactions, limitations, physical constraints, inter- and intra-subsystem feedbacks, resonances, thermal thresholds, biological processes, physical laws, changes of state of water, emergent phenomena, rotational speed … the list is long. And while you might get lucky and fit some simple form to some small part of that complexity, that is nothing but brute-force curve fitting.
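To make the “iterative” point concrete, here’s about the simplest toy of that kind: a bare-bones energy balance stepped forward one day at a time, where today’s temperature is computed from yesterday’s (my own illustration, with an assumed heat capacity; it bears no resemblance to a real climate model):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
C = 1.0e7         # assumed heat capacity of the surface layer, J/m^2/K
ABSORBED = 240.0  # assumed absorbed solar flux, W/m^2
STEP = 86400.0    # one day, in seconds

T = 288.0  # arbitrary starting temperature, K
for _ in range(5000):
    # today's temperature depends on yesterday's temperature
    T += STEP * (ABSORBED - SIGMA * T**4) / C

print(round(T, 2))  # settles at (ABSORBED / SIGMA)**0.25, about 255.07 K
```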
Objection the Second – Von Neumann’s Elephant
John Von Neumann famously said, “With four parameters I can model an elephant, and with five I can make him wiggle his trunk”.
As near as I can determine there is one parameter used in the calculation of Tna, the hypothetical and unknowable “no atmosphere temperature”, and another four parameters in Equation 10a, for a total of five parameters.
It gets worse … when a parameter has either a very small or a very large value, it indicates a very finely balanced model. When I see a model parameter like 0.000183, as occurs in Equation 10a, it rings alarm bells. It tells me that the model is applying very different formulas to small and large numbers, and that’s a huge danger sign.
Next, they had a full choice of formulas for their model. There was nothing limiting them to a double exponential; they could have used any formula they pleased.
Next, they tried no less than twelve different combinations of dimensioned variables before finding this particular fit.
Finally, there are only five data points to be fit. I can guarantee you that when the number of your model’s tuned parameters equals or exceeds the number of the data points you are using for your fit, you’ve lost the plot and you desperately need to trade up to a new model.
So my second objection is to Von Neumann’s elephant, with five parameters fitting the formula to the pathetically small number of only five data points, augmented by twelve variable combinations, and a free choice of formulas. That kind of fitting is not a model. It’s a tailor shop designed to make a form-fitting suit.
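In case anyone doubts how easy that kind of tailoring is, here’s a five-parameter model — a plain degree-four polynomial rather than their dual exponential — fitted to five points of pure random noise (my own demonstration):

```python
import random

random.seed(42)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [random.gauss(0, 1) for _ in xs]  # five data points of pure noise

def fitted(x):
    """Degree-4 (five-parameter) Lagrange polynomial through all five points."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

worst = max(abs(fitted(x) - y) for x, y in zip(xs, ys))
print(worst < 1e-9)  # True: a "perfect" fit ... to meaningless noise
```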
Objection the Third—Variable Count
The authors make much of the claim that they can calculate the temperature of five planets using only two variables. From their conclusion:
Our analysis revealed that the mean annual air surface temperature of rocky planets can reliably be estimated across a broad spectrum of atmospheric conditions and radiative regimes only using two forcing variables: TOA stellar irradiance and average surface atmospheric pressure.
But then we look at the calculations for Tna, which is a part of their magic equation 10a, and we find three other variables. Tna is defined by them as “the area-weighted average temperature of a thermally heterogeneous airless sphere”. Here is their equation 4a, which calculates Tna for the various celestial bodies.
So we have as additional variables the albedo, the ground heat storage coefficient, and the longwave emissivity. (Volokin et al. ignore the cosmic microwave background radiation (CMBR), as well as the geothermal flux.)
In other words, when they say they only use two variables, “TOA stellar irradiance and average surface atmospheric pressure”, that is simply not true. The complete list of variables is:
TOA stellar irradiance
Surface atmospheric pressure
Albedo
Heat storage coefficient
Longwave emissivity
So my third objection is that they are claiming that the model only uses two variables, when in fact it uses five.
Objection the Fourth: Data Snooping
They say in the Abstract:
We apply the DA methodology to a well-constrained data set of six celestial bodies representing highly diverse physical environments in the Solar System, i.e. Venus, Earth, the Moon, Mars, Titan (a Moon of Saturn), and Triton (a Moon of Neptune).
But then they have to throw out Titan, because it doesn’t fit, which is blatant data snooping … and despite that, they claim that their model works wonderfully. And of course, the “six planets” from the Abstract is the number quoted around the blogosphere, including by WUWT commenters.
Objection the Fifth: Special Martian Pleading
While they use standard reference temperature values for five of the six celestial bodies, they have done their own computations for the temperature of Mars. One can only presume that is to give Mars a better fit to their results—if it fit perfectly using the canonical values, there would be no need for them to calculate it differently. Again, data snooping, again, bad scientists, no cookies.
Objection the Sixth: The Oddity of Tna
Immediately above, we see the complete equation 4a for Tna, the area-weighted average temperature of an airless sphere. It depends on three variables: albedo, how much heat the ground soaks up during the day (heat storage fraction), and the emissivity. The authors actually use a simplified version of that formula. After showing the entire formula, they note that they will reasonably ignore the geothermal flux and the cosmic background radiation, because they are quite small for the bodies in question. OK, fair enough, that’s common practice to ignore very minor variables. But then they say:
Since regolith-covered celestial bodies with tenuous atmosphere are expected to have similar optical and thermo-physical properties of their surfaces (Volokin and ReLlez 2014), one can further simplify Equation [4a, see above] by combining the albedo, the heat storage fraction, and the emissivity using applicable values for the Moon to obtain:
Tna = 32.44 S^0.25 (4c)
Equation (4c) was employed to calculate the ‘no-atmosphere’ reference temperatures of all planetary bodies in our study.
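To give them their due, equation 4c does reproduce the lunar number. Plugging in the Moon’s top-of-atmosphere solar irradiance (about 1361 W/m2, same as Earth’s — an assumed round value on my part) gives almost exactly the ~197 K lunar temperature the paper uses:

```python
S = 1361.0              # assumed TOA solar irradiance at the Moon, W/m^2
Tna = 32.44 * S**0.25   # the paper's simplified equation 4c
print(round(Tna, 1))    # about 197.0 K
```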
I find that to be an unwarranted and incorrect simplification. I say this because it is clear that the reason the temperature of the moon is so low is because it rotates so slowly. It has two weeks of day, then two weeks of night. This increases the day-night swing of the temperature, because it lets the moon’s night-time temperature drop to a rather brisk -180°C or so.
And for a given solar input, whatever increases the surface temperature swings decreases the average temperature. With a day-night temperature swing of 270°C, the average lunar temperature is much, much colder than the S-B blackbody temperature.
But those huge temperature swings are NOT characteristic of the Earth, or Mars. Even without an atmosphere, the surface temperatures of those planets wouldn’t swing anywhere near as much as the moon because they all rotate much faster than the moon. With faster rotation, the days can’t get as hot, and the nights can’t get as cold. This means that their average temperature would not be depressed anywhere near as much as the moon, because the swings are smaller. As a result, while Equation 4c is accurate for the moon, it says that an airless earth rotating once a day would have the same temperature as the moon, and that’s simply not true. And for Venus, the opposite is true. With a rotation period of 116 days, its average surface temperature would be correspondingly lower, again leading to an incorrect result.
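You can put numbers on that effect with a crude two-patch sphere — one hemisphere hot, one cold — constrained to radiate away exactly what it absorbs (my own toy calculation, assuming unit emissivity):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def mean_temp(absorbed, swing):
    """Mean temperature (K) of a two-patch sphere with patches at
    T0 +/- swing/2, with T0 chosen so the mean emitted power equals
    `absorbed` W/m^2.  Solved by simple bisection."""
    def net(T0):
        hot, cold = T0 + swing / 2, T0 - swing / 2
        return SIGMA * (hot**4 + cold**4) / 2 - absorbed
    lo, hi = swing / 2, 1000.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if net(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(round(mean_temp(296.0, 0.0), 1))    # uniform sphere: about 268.8 K
print(round(mean_temp(296.0, 270.0), 1))  # 270-degree swing: about 184.6 K
```

Same absorbed sunlight, wildly different mean temperatures — which is exactly why one lunar-derived constant can’t serve for every rotation rate.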
CONCLUSIONS:
Well, my conclusion is that this model fails a number of crucial tests. The equations are not physically grounded, and their apparent simplicity is deceptive. It is a Von Neumann trunk-wiggling monstrosity with a free choice of formulas, five tunable parameters, and twelve combinations of variables. They have done their fit to a ridiculously small dataset of only six planets, and failed even at that, fitting only five. As a result, they removed one of the six from their fit, which is blatant data snooping. They claim only two variables when there are actually five. They have calculated their own temperature for Mars. And finally, they erroneously calculate the reference temperature Tna as if the Earth, Venus, and Mars rotate once every 28 days, as the moon does. This last one is critical to their actual result. Their model results report the surface temperature Ts divided by Tna … and since Tna is badly wrong for at least three of their five data points, well, it’s just another in the long list of reasons why their results do not hold water.
You’d think we’d be done there. But nooo … in a final burst of amazing hubris, they use their model results as a basis to claim that they “appear” to have discovered a new unknown thermodynamic property of the atmosphere, viz:
Based on statistical criteria including numerical accuracy, robustness, dimensional homogeneity and a broad environmental scope of validity, the final model (Equation 10) appears to describe an emergent macro-level thermodynamic property of planetary atmospheres heretofore unknown to science.
I’m sorry, but what the authors describe is merely a simple dual-exponential multi-parameter curve fitting exercise that after trying an unknown number of formulas, no less than twelve different variable combinations, and five tunable parameters, finally got it right an amazing five out of five times … by using the wrong values for Tna, re-calculating the temperature of Mars, and throwing out the one data point that didn’t fit. Which is impressive in its own bizarre manner, but not for the reasons they think.
However, who would have guessed that such a curve-fit had such a strong scientific capability that it could reveal a new “emergent macro-level thermodynamic property” that is “heretofore unbeknown to science”?
Dang … that’s some industrial-strength trunk-wiggling there.
However, at least the part about dimensional analysis was fascinating, I need to look into it more, and it revealed unknown dimensions to me … a watt per square metre is a kilogram per cubic second? Who knew?
My regards to everyone,
w.
As Always: Let me request that if you disagree with someone, please have the courtesy to quote the exact words you object to. That way, we can all understand the precise nature of your objection.
Willis, your thoughts and comments have always been informative and entertaining to me. Thank you!
In your comment above regarding Mars’ global average temperature, you cite a paper by Michael Klein from 1971. I found the full text of that article here:
http://deepblue.lib.umich.edu/bitstream/handle/2027.42/33679/0000191.pdf?sequence=1
After a quick scan through it I noticed that Klein provides two estimates of Mars global brightness temperature derived from two microwave-band measurements, i.e. 182 K and 200 K. Taking a simple average of these produces (182 + 200)/2 = 191 K for Mars temperature. Also, on p. 212, M. Klein says:
“The two brightness temperatures measured in 1967 are in good agreement with previously published data, which indicate that the Martian brightness temperature at wavelengths between 1 and 21 cm is approximately 190 K (Hobbs, McCullough, and Waak, 1968). The weighted average of the 1.85- and 3.75-cm temperatures reported in this paper is 193 K ± 10 K. ”
So, microwave measurements of Mars conducted in the 1960s and 1970s revealed a mean surface temperature in the range 190–193 K. The authors of the paper discussed on this blog arrived at 190.56 ± 0.7 K (see their Appendix B). Hence, their estimate of Mars global temperature seems to agree pretty well with these earlier calculations quoted by Klein … Volokin and ReLlez may have got it right after all … Your thoughts?
Willis,
What is the formula for planetary rotation period and average temperature?
It’s stated by you as obviously important … yet it’s not ever calculated or accounted for in Earth’s planetary heat equation.
Instead, an average insolation value is simply assumed as a split to cover half light and half dark. But, if rotational rate changes the average temperature, how does one correctly account for it?
My first thought was that it makes no difference … but you say it is a no-brainer and needs to be considered.
Thanks,
Kirk
Kirk,
There is no general equation. This is because in the absence of an atmosphere, the night-time heat loss is dependent on how much heat is left unradiated at the end of the day. And this in turn is dependent on the specific heat of whatever the sunlight is warming, along with a variety of other factors.
And this is in a situation with no atmosphere. With an atmosphere plus water vapor, the energy is constantly being redistributed by sensible and latent heat transfers. As a result, both the water vapor and the atmosphere act to reduce the day-night swings in temperature.
Finally, every temperature difference causes a drop in average temperature. The problem is that there are several temperature differences at play—day/night, summer/winter, and pole/equator. As a result, it’s difficult to disentangle all of those to give us a total.
However, it’s not important in the usual range of climate questions, as whatever it is, it would be relatively constant. Since we are concerned mostly with changes in temperature, such a constant temperature depression would drop out of any relevant equation.
All the best,
w.
The authors discuss the rotation question extensively in their previous paper:
http://www.springerplus.com/content/3/1/723
See the section starting above Eq. 24.
Takeaway summary: Rotation rates within the range found in the solar system make negligible difference to surface T.
For a body with a surface regolith in vacuum – I should add.
Thanks, tallbloke, good to hear from you. You say:
I took a look at their derivation, and I came away totally unconvinced. Here’s the statement that seems incorrect to me:
First, the Law of Energy Conservation doesn’t apply to temperature, because temperature isn’t conserved. Energy is conserved, but temperature is NOT conserved. So they are incorrect in that part of their claim.
Next, suppose the moon were rotating very rapidly. The day/night temperature differences would be small. However, because it rotates slowly, only once every 28 days, the night side has time to cool way down and the day side has time to heat way up. As a result, the lunar diurnal temperature swing is quite large, about 270°C or thereabouts. You can see this in the graph in the head post.
Now, Dr. Brown points out the following lovely derivation:
But since energy radiated by the lunar sphere is a constant equal to the amount of radiation it is receiving, what this means is a non-uniform distribution of temperature on the lunar surface reduces the mean lunar temperature. The mean temperature has to drop in order to keep the radiated power constant.
And that is why the moon is so cold. Straight Stefan-Boltzmann calculations combined with a lunar albedo of 0.13 and an emissivity of 0.97 give us an (incorrect) calculated lunar temperature of
Input W/m2 = 340 W/m2 (same as Earth) × 0.87 (co-albedo) = 296 W/m2
Temperature = (296 W/m2 / (0.97 × 5.67E-8))^(1/4) = 271 K ≈ −2°C
However, the moon is much, much colder than that. Its temperature (per the Volokin paper) is 197.25K ≈ -75°C. This is because of the large diurnal swings, which lower the mean temperature.
From the diagram of the moon’s temperature in the head post, we can see that the lunar temperature swing is about ± 135°C. Using Dr. Brown’s formula above with 296 W/m2 of absorbed solar energy gives us a calculated lunar temperature of 190K … which compares very well with the figure from the paper of 197K, given that I’m working off of just one set of lunar temperature measurements and not lunar averages.
So I’m sorry, but I don’t believe Volokin and ReLlez’s other paper is any more credible than this one. They are wrong when they say that
for a simple reason. As Dr. Brown shows, ANY change in diurnal temperature amplitude affects the mean temperature, and the moon is a prime example.
My best to you, and yes, despite the fact that you’ve banned me from commenting on your blog, I do read your blog regularly. Curiously, after all this time I’ve realized that I’m kinda glad to be banned there, because it frees up the time I’d spend in mostly fruitless protest if I could comment there, and I can put that energy into writing posts like this one.
Life works out strangely, and my wish is that it work out well for you.
w.
Oh, yeah, I forgot to add the math. I use a function which I derived from Dr. Brown’s formula in my comment above. The function takes the incoming power p1 (W/m2), the temperature swing dT (°C), and the emissivity epsilon, and calculates the expected surface temperature (K).
newt = function(p1, dt, epsilon = 1, sigma = 5.67e-8) {
  ((p1 + 18 * dt^4 * epsilon * sigma
    - 6 * sqrt(dt^4 * epsilon * p1 * sigma + 9 * dt^8 * epsilon^2 * sigma^2))
   / (epsilon * sigma))^(1/4)
}
Sigma is the Stefan-Boltzmann constant, 5.67E-8.
For the moon, this function gives us:
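(For anyone without R handy, here’s a line-for-line Python port of the function from my comment above. With the lunar numbers — 296 W/m2 absorbed, a ±135°C swing, and 0.97 emissivity — it comes out right around the 190 K mark:)

```python
def newt(p1, dt, epsilon=1.0, sigma=5.67e-8):
    """Python port of the R function above: expected surface temperature (K)
    for absorbed power p1 (W/m^2), temperature swing dt (degrees C, used
    here as the +/- amplitude), and emissivity epsilon."""
    inner = (dt**4 * epsilon * p1 * sigma
             + 9 * dt**8 * epsilon**2 * sigma**2) ** 0.5
    return ((p1 + 18 * dt**4 * epsilon * sigma - 6 * inner)
            / (epsilon * sigma)) ** 0.25

print(round(newt(296.0, 135.0, 0.97)))  # roughly 190 K
```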
w.
Willis: Dimensional analysis is usually extremely useful, but not the way it was carried out in your top equation. In W/m2, the m2 comes from the product of two distances measured on orthogonal axes – area. When Joules refers to kinetic energy and has units of kg-m2/s2, the meters are measured in the direction of motion – along one axis.
However, radiation isn’t a form of kinetic energy and its direction of motion – if it can be equated to kinetic energy – is actually perpendicular to the area. If we attach directional labels to the distances (m_x, m_y and m_z), we get W/(m_x*m_y) for flux and kg-m_z^2/s^2 for kinetic energy.
Thanks for the comment, Frank. Actually, the top equation (as I thought was clear from my explanation but may not have been) is not mine. It is taken directly from the paper in question.
In any case, I was unaware that there were different base SI length units for different directions (m_x, m_y, and m_z in your terminology). Do you have a citation for that?
w.
Willis: I should have referred to the top equation in this post, not “your top equation”. Sorry.
In the Wikipedia article on dimensional analysis, there is a discussion of the Huntley extension to directed dimensions, but I certainly hadn’t read this article before writing my comment.
https://en.wikipedia.org/wiki/Dimensional_analysis#Huntley.27s_extension:_directed_dimensions
SI base units are very useful, but there are plenty of other units that are useful in a less formal version of dimensional analysis often called the “factor label method”. A simple example: For two similar triangles with sides 2, 3, and 4 meters, and 7, x, and y meters:
2 m short side/7 m short side = 3 m medium side/x m medium side = 4 m long side/y m long side
If one uses units of meters for all lengths, all ratios will be dimensionless, but students often don’t get the right ratios. In chemistry:
y mL * z g/mL / (x g/mole) = y*z/x moles        (g/mL = density; g/mole = MW)
For the reaction 2A —> B, we might have:
[y mL_A * z g_A/mL_A / (x g_A/mole_A)] * (1 mole_B / 2 moles_A) * w g_B/mole_B = y*z*w/(2*x) g_B
So when I looked at the dubious top equation, I recognized that the meters involved in measuring area and the meters^2 in kinetic energy involve different directions. Years of creating my own labels caused me to create m_x, m_y and m_z. In elementary mechanics, we decompose a force F into F_x and F_y. Then we get more sophisticated and use vector quantities, dot products and cross products to liberate ourselves from any particular coordinate system: work = F dot s. However, every vector quantity has a direction and units associated with it, like the displacement (s) in F dot s.
Some radiative physics. Given a radiative imbalance of y W/m2 and a heat capacity of z J/m3/K:
y W/m2 / z J/m3-K = y/z K-m/s
We get units of K/s (a warming rate) multiplied by meters. m2 is area and m3 is volume, so obviously the remaining meters must be perpendicular to the surface area – for example the depth (d) of the ocean mixed layer being heated by the radiative imbalance.
y W/m2 / [z J/m3-K * d m] = y/(z*d) K/s
In aeronautics, a dimensionless coefficient of drag c_d is used to calculate the force of drag F_d produced by a wing of area A, moving at a velocity v through a fluid of density p:
c_d = 2*F_d / (p*v^2*A)
You might say that the meters used to measure area and the meters used to measure velocity involve different directions and you would be correct. If you make a wing longer and skinnier, but keep the area constant, the coefficient of drag changes. Since the force of drag and the velocity are measured in the same direction, meters in the direction of motion cancel, but not meters perpendicular to the direction of motion.
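(Rearranged for the force, that relation is F_d = c_d * p * v^2 * A / 2. With some made-up wing numbers, purely for illustration:)

```python
c_d = 0.047  # assumed drag coefficient (dimensionless)
p = 1.225    # air density at sea level, kg/m^3
v = 70.0     # airspeed, m/s
A = 16.2     # wing area, m^2

F_d = 0.5 * c_d * p * v**2 * A  # drag force, newtons
print(round(F_d, 1))
```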
m_x, m_y and m_z
============
Isn’t space-time analysis done in dimensions involving momentum? Something like: x, y, z, mx, my, mz?
Frank, thanks kindly for the link to the “Huntley extension”. As is often the case with Wikipedia, I came away both educated and confused, but your examples clarified the concept nicely.
I’m curious what difference this would make to their analysis … but probably not curious enough to actually do it, because the whole “pick a formula, fit a curve, discover a new thermodynamic property” nonsense is so distasteful to me. Yes, curve fitting has its uses … but planetary average temperature isn’t one of them.
Most useful, and most appreciated,
w.
I made it this far this time: “The formula that is at the root of all of this, a simple dual-exponential, is extremely unlikely to be adequate to the task.” Not bad, if I do say so myself. My better angel must have been with me.
Just so I don’t have to read any more, could someone tell me if Willis confused the dimensions in dimensional analysis with the dimensions in a fractal sense, or with the derivatives in a Taylor series expansion? I’d be forever indebted if someone could just sum it up for me. Thanks.
Oh, by the way Willis…. some months ago I mentioned the Lewis Number. It would do you a lot of good to figure out the physical significance of Le = 1.
Dino, since you are a man who boasts of not finishing my posts, and then asks others to do your reading for you, I’m amazed that you are under the mistaken impression that I pay the slightest attention to your opinion …
w.
Oh, I wasn’t asking you. I assume you read your own posts and you are sometimes at the limits of your understanding and most times past it. It makes no sense to ask you to explain your posts.
Don’t get too stuck on the Le = 1 thing. It’s a graduate-level question. Let me bring it down…… So have you figured out the relation between the Taylor series expansion, auto-correlation and dimensionality?
Dinostratus September 4, 2015 at 10:03 am
Oh, I didn’t say you were asking me, nor did I think you were. It’s obvious from your comment that you were asking others. Reading comprehension is your friend.
And yet I have a peer-reviewed piece in Nature (albeit small) and peer-reviewed studies in other scientific journals as well. They obviously think my scientific understanding is perfectly adequate. Call me crazy, but I’ll take their opinion of my abilities and scientific understanding over that of an anonymous internet popup … particularly one like yourself, who seems to specialize in unpleasant personal attacks.
Get “stuck on the Le=1 thing”??? You still don’t seem to get it. I pay absolutely no mind to whatever you might babble about science. It might be wrong, it might be right … but I couldn’t care less.
I make no attempt to ever follow your “scientific” suggestions, Dino. Since you obviously dislike me intensely for unknown reasons, it would be extremely foolish of me to pay the slightest attention to your “scientific” claims and suggestions. Whatever they are, I’m sure they are designed to cause me grief.
Sorry, amigo, but you’ve succeeded in canceling your own vote with me. That’s not easy to do, usually I will pay at least some attention to the non-ad-hominem parts of a comment, because there might be some scientific cheese at the end of the maze.
But with you, it’s always just an unpleasant dead end.
Sadly,
w.
You’re getting angry. Good, I can feel your anger. I am defenseless. Take a math book. Strike me down with all of your hatred and your journey towards the dark side will be complete!
[Snip. David, please use your real name. ~mod.]
Dinostratus September 5, 2015 at 3:21 pm
Angry? Actually, you make me laugh. Dino, if I were to get angry with you, you’d know it. But I only get angry about important things, and on my planet, that doesn’t include you.
We now return you to your fantasy that you are Darth Vader, the all-important Dark Lord … fitting, I suppose.
w.
No, that’s a quote of the Emperor. Darth Vader was the dude in all black who had an asthma problem.
Dinostratus September 6, 2015 at 5:30 pm
I do well in Trivial Pursuits except when it comes to movies. I don’t go often. And in any case, all those extra-terrestrials start to look alike after a while. Well, except for the guy with the red horns, whatever his name was, and the big fat dude, Jabba. I saw the movie with Jabba. I didn’t see the one with the red-horn guy.
In any case, let me rephrase my statement to correct the previous error:
Regards,
w.
Maybe that’s your problem. Maybe it’s the simple details that you can’t easily assimilate and keep. That is, your mind wanders before all the puzzle pieces are in place. You then post your mental wanderings as if they were some sort of personal triumph and profound truth. They’re not. They’re a mark of laziness and disrespect for the reader. You make very little progress, and in a circuitous manner.
That’s it! It’s your disrespect for the reader that gets on my nerves. Your posts mock those who’ve forced themselves to sit down and learn all the pedantry and formalism of the Buckingham Pi theorem, asymptotic expansions, etc. etc. It wasn’t fun, Willis. It was hard. Hour after hour of sitting and forcing every distraction out of the mind, trying to understand why MLT is so GD important.
Perhaps you should be taking some sort of medication. Something for your ADHD, and save us the insult.
It seems intuitively correct to me to say that solar irradiance and atmospheric pressure alone will determine long term average surface temperature. After all, what else could determine it? The proviso here, though, is that the “average surface temperature” has to be averaged not only in time but also in space; and by that I mean the temperature needs to be averaged throughout the entire depth of the atmosphere. This will correct for lapse rates, phase changes, and atmospheric opacity (i.e. greenhouse effect). Once all that is taken into consideration, the composition of the atmosphere should no longer matter.
Of course, you could say that this approach solves the problem only by deeming it not a problem. However, I think the difficulty lies not with my method, but with our inaccurate notion of “surface.” The atmosphere of a planet simply has to be considered an extension of its surface; or rather, the surface of a planet is a region which includes its atmosphere.
I should also think that this relationship, while straightforward, may well be too chaotic to be fit by any curve.
Willis
In your response (Willis Eschenbach, September 2, 2015 at 10:24 pm) to Kirkc (September 2, 2015 at 5:22 pm), you play down the importance of planetary rotation (not because it does not impact upon temperature, but because, whatever it is, it is a constant and thereby not important “in the usual range of climate questions”).
On the contrary, whilst planetary rotation will not impact upon whether tomorrow’s climate/weather will be the same as today’s, the impact of planetary rotation is directly relevant to whether there is any GHE on planet Earth, and if so, its magnitude.
First, one has to know the “no atmosphere temperature” of a planet. This will, as you state, be influenced by a plethora of factors, amongst these being geothermal heat from the core, the latent heat capacity of the surface, and how the surface responds to incoming solar (which cannot penetrate rock to any great extent but can penetrate oceans by up to say 100 metres, albeit that most incoming solar is absorbed within about 70 cm to 3 metres), and of course it will be influenced by planetary rotation. Put simply, has the planet got enough time at night to dissipate the ‘excess’ energy that was built up during the day? Thus if, say, CO2 impedes photons from the surface on their way out to space, is there enough time during the 12 hours of night for the daytime ‘excess’ energy to be dissipated? Compare Earth with Venus, where there are about 243 Earth days for the night to dissipate the energy built up during the Venusian day.
In your article, you state: “As near as I can determine there is one parameter used in the calculation of Tna, the hypothetical and UNKNOWABLE ‘no atmosphere temperature’” (my emphasis). So we are not off to a good start. The first variable, you state, cannot be assessed.
Second, one needs to know what temperature the planet would have with an atmosphere of known density and volume, irrespective of its precise composition (i.e., whether or not it has radiative gases, and irrespective of their precise concentrations). Since you consider the no atmosphere temperature unascertainable, I would hazard a guess that you consider the temperature of the planet with an atmosphere (irrespective of its composition) is also unknowable. So this too causes a problem.
Third, one would need to know the actual temperature of the planet with its actual atmosphere, and to know the precise composition of this atmosphere. Even for planet Earth, we do not know what its actual temperature is, which is why the temperature is never presented; instead one sees data based upon anomalies from a small number of station measurements unevenly spatially distributed, and ARGO does not even measure the warmest oceans/seas, possibly because these tend to be of modest depth (circa 1,000 m or less). I recall reading a post some time back by someone who linked about 6 or so different NASA papers putting the temperature of this planet at between about 9 °C and 18 °C!
We have no idea, within a few degrees, what the temperature of this planet is, so we do not know whether the temperature is something other than it theoretically ought to be.
Given the lack of knowledge of these 3 variables, it is impossible to assess whether there is any GHE effect (at least that caused by gases other than water vapour), still less to put a figure on it.
On a water world, such as planet Earth, where water exists in all 3 forms, the natural water cycle will always have a significant impact upon the temperature of the planet (due to heat transport and latent energy in phase changes), but that is not the GHE as is being talked about by the warmists.
Incidentally, there is a lot we do not understand, since even on the moon the coldest temperatures recorded are about 35 degK (Southern Hemisphere) and 26 degK (southern Hemisphere). These are in the shadow of craters that have not seen the light of day (i.e., have received no solar) for eons, and yet they are not at the 3 degK which we are told is the background temperature of space. Why not, if they have been radiating into and receiving this ‘cold’ radiation for billions of years, and if the moon itself is cold (i.e., whilst it still possesses a very small liquid core, the thermal capacity of the rock surrounding the core is such that the core does not heat the surface)? It will be interesting to get some update on Pluto following the recent flyby.
Further to my comment above: “and 26 degK (southern Hemisphere)” should have been the Northern Hemisphere.
My main disquiet with this paper is the over-extrapolation of sketchy data. We do not know the temperatures of these bodies with sufficient certainty to make the comparisons. Heck, we do not even know the temperature of planet Earth, and all sorts of assumptions are being made as to the temperatures of the other bodies in the solar system.
Mars, the Moon and Triton have so little in the way of atmosphere that I question whether they can realistically be compared to rocky bodies/planets with an atmosphere.
As others have noted, Titan will always be problematic because of the methane cycle, but taking that difficulty into consideration, and the inevitably wide errors in accurately assessing temperature and atmospheric composition, Titan did not lie that far off the plot in the paper’s Fig. 4. I would be concerned if it were lying at, say, around 3.2 Ts/Tna or 1.2 Ts/Tna, but it is not. Fig. 4 ought to contain error bounds; the plotted line ought to have been a thick band, not a thin line, and had it been a thick band (reflecting realistic error margins) I suspect that Titan would lie within the band.
Willis, I have some problems with the figure of the Moon’s surface temperature.
The SB average Earth no-atmosphere surface temperature is not −18 °C; it should be the same as the average lunar temperature, −2.5 °C. The local temperature at the moon’s surface depends on the thickness of the surface layer, because it is influenced by the heat capacity and the thermal conductivity of the surface. So you should add information on how the temperatures are measured on the moon. I think it is radiation thermometry.
Without borehole data, we have no real knowledge about how much geothermal heating there is. I recall seeing that the temperature at about 1 metre depth is about 238 K, but whether that is a function of conduction of warmth caused by solar irradiance impacting the surface, or of geothermal heat from the core, I do not know.
But my understanding, based upon what I have read in NASA papers, is that whilst they consider that the moon has a small liquid core, variably cited at about 800 °C to 1700 K, it is so small and so far from the surface that effectively little heat reaches the surface. I do not know what that assumption is based on, since I do not know to what depth we have drilled the moon, and over what depth we have real measured temperature data. Since the minimum temperature observed on the moon is circa 30 K (in the shadow of craters that have not seen solar irradiance for billions of years), I presume that geothermal heat of at least that much must be making its way to the surface. However, this is all conjecture in the absence of proper borehole data.
I would not expect the Earth’s no atmosphere temperature to be −2.5 °C, due to differences in albedo, rotation, latent heat capacity of the surface (especially that of the oceans, which also do not absorb 100% of the solar at the surface but rather a not insignificant quantity at depth), etc.
I would suggest that there are too many differences between the Moon and the Earth to make that comparison.
“Without borehole data, we have no real knowledge about how much geothermal heating there is, I recall seeing that the temperature at about 1 metre depth is about 238K.”
Richard, thanks for your hint. I found a paper by M.G. Langseth et al. (1973): “Revised lunar heat flow values”. They present some data from the borehole experiment. The temperature at a depth of 90 cm is −22 °C and is nearly time-independent. At 49 cm the temperature oscillates between −23 °C and −25 °C. The mission was Apollo 15, with landing-point coordinates 26N, 4E. For these coordinates and an albedo of 0.12, I expect an SB average surface temperature of +5 °C.
“I would not expect the Erath’s no atmosphere temperature to be -2.5 degC due to differences in albedo, rotation, latent heat capacity of the surface (especially that of the oceans which also do not absorb 100% of the solar at the surface but rather a not insignificant quantity at depth) etc.”
You can find this value in many textbooks, for instance W. Roedel, Physik in unserer Umwelt: Die Atmosphäre, Springer, 3. Auflage. No atmosphere means that there are no oceans. Heat capacity and rotation play a minor role in calculating the average, as long as the heat capacity is independent of temperature.
Addendum
“I found a paper by M.G. Langseth et al. (1973): ‘Revised lunar heat flow values’. They present some data from the borehole experiment. The temperature at a depth of 90 cm is −22 °C and is nearly time-independent. At 49 cm the temperature oscillates between −23 °C and −25 °C. The mission was Apollo 15, with landing-point coordinates 26N, 4E. For these coordinates and an albedo of 0.12, I expect an SB average surface temperature of +5 °C.”
I wondered about the large difference between my transient EBM calculation (+5 °C) and the measurements at a depth of 49 cm (−24 °C ± 1 °C). The cause is that I used for the effective heat capacity 2.8e7 W/(m2*K), which is characteristic of the “wet” earth. For the “dry” moon the effective heat capacity should be smaller. With 1e6 W/(m2*K) I found −25 °C for the landing point, which is in better agreement with the experiments.
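The direction of this effect can be illustrated with a toy transient energy-balance integration. This is only a sketch under simplifying assumptions (sun in the body's equatorial plane, fixed albedo of 0.12, a latitude of 26°, explicit Euler stepping, and the two heat-capacity values quoted above), not the commenter's actual EBM:

```python
import math

SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/(m^2 K^4)
S0 = 1361.0               # solar constant at 1 AU, W/m^2
ALBEDO = 0.12
LAT = math.radians(26.0)  # roughly the Apollo 15 site latitude
DAY = 29.53 * 86400.0     # lunar synodic day, seconds

def mean_surface_temp(heat_capacity, cycles=10, dt=200.0):
    """Average surface temperature over the last diurnal cycle of a
    rotating airless body, for a given effective heat capacity in
    J/(m^2 K), via explicit Euler integration of the energy balance."""
    T = 250.0
    steps = int(DAY / dt)
    mean_T = T
    for _ in range(cycles):
        total = 0.0
        for i in range(steps):
            hour_angle = 2.0 * math.pi * i / steps
            absorbed = S0 * (1.0 - ALBEDO) * max(0.0, math.cos(LAT) * math.cos(hour_angle))
            T += dt * (absorbed - SIGMA * T**4) / heat_capacity
            total += T
        mean_T = total / steps
    return mean_T

wet = mean_surface_temp(2.8e7)  # "wet Earth" effective heat capacity
dry = mean_surface_temp(1.0e6)  # much smaller "dry Moon" value
print(f"large C: {wet:.0f} K, small C: {dry:.0f} K")  # the smaller C gives a colder mean
```

Because σT⁴ is convex, the larger the day-night temperature swing, the lower the time-average temperature for the same absorbed flux, which is the direction of the discrepancy described above.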
The free publication of Volokin et al. (2014), “On the average temperature of airless spherical bodies and the magnitude of Earth’s atmospheric thermal effect”, was very helpful for my investigations. A planet without an atmosphere might seem very different from a planet with an atmosphere, but this is not so. The heat transport from a planet to space is always a sum of radiative heat transport and material heat transport (convection, diffusion). An airless planet has a sharp border, the surface, while in an atmosphere the transition from material heat transport to radiative heat transport is gradual.
Correction
The units of the effective specific heat used in my EBM model are J/(m2*K)
What is missing from this discussion is consideration of the effects of convection. We now know that real greenhouses warm by limiting convection. Yet 50 years ago schools taught that real greenhouses warmed due to blocking outgoing IR.
This same fallacy is now being repeated by scientists who were taught this nonsense in school. We are told that CO2 warms via the “Greenhouse Effect”, by blocking outgoing IR radiation.
The warming of the surface is due to convection. This leads to a lapse rate, dictated by gravity, of 9.8 °C/km in dry air; otherwise the atmosphere would be isothermal. The phase change of atmospheric gases (water) moderates the 9.8 °C/km to about 6.5 °C/km.
This lapse rate warms the lower region of convection above the temperature dictated by an isothermal atmosphere, while cooling the upper region. The centre of mass of the convecting region is at about 5 km in height, giving a surface warming of 5 km × 6.5 °C/km ≈ 33 °C compared to an isothermal atmosphere.
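The arithmetic in that claim is easy to check. A minimal sketch (the g and c_p values are standard; the 5 km height and 6.5 °C/km environmental rate are the figures quoted above, not mine):

```python
g = 9.81      # m/s^2, surface gravity
c_p = 1004.0  # J/(kg·K), specific heat of dry air at constant pressure

# Dry adiabatic lapse rate: Gamma = g / c_p
dry_lapse = g / c_p * 1000.0   # convert K/m to K/km
print(f"dry adiabatic lapse rate = {dry_lapse:.1f} K/km")  # ~9.8 K/km

# Surface enhancement relative to an isothermal atmosphere,
# using the quoted height and environmental lapse-rate figures:
enhancement = 5.0 * 6.5
print(f"surface enhancement = {enhancement:.1f} K")  # 32.5 K, i.e. ~33 K
```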
ferdberple
What’s missing is water/water vapor. A greenhouse w/o water/water vapor is an oven. It’s the water that makes all of the difference. LWIR/SWIR, convection are side shows compared to the water.
ferd,.
I’ve been saying that for ages but to no avail here.
Keep trying though 🙂
The fact is that the surface is warmer than S-B simply because descending warming air in high-pressure cells, constituting half the atmosphere, inhibits convection beneath the descending column, so that the surface below is then able to rise above S-B.
Just as in a real greenhouse and nothing to do with GHGs.
It was suggested earlier in this thread that dimensional analysis ought to be a part of high school physics.
Here in the UK, back in the 1980’s, it was.
I still have my copy of Nelkon and Parker, Physics for A level.
And in a book of approx. 950 pages, dimensional analysis is first applied to a problem relating to a pendulum on page 33.
i.e. it was introduced and applied to problems at the very beginning of this course.
Dimensional analysis was useful for confirming that an answer was within the bounds of plausibility.
It was always comforting to have completed a test with 20 minutes to spare, in which case you could run back through your answers and perform a quick check on the correspondence of the dimensions of the question and your proposed answer.
Of course, the magnitude could still be way out of the ball park.
The downside of having been thoroughly familiarized with D.A. is that for the rest of your life you will be repeatedly irked by the jaw-dropping errors made by sci/tech journalists.
The most common of these being references to kilowatts/hour, megawatts/year or similar, as though these were representative of the total energy output of something.
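The distinction being mocked there is between a rate and an amount. A tiny illustration, with made-up numbers:

```python
# Power is a rate (watts = joules per second); energy is an amount.
power_kw = 2.0                      # a hypothetical 2 kW heater
hours = 3.0
energy_kwh = power_kw * hours       # kW × h = kWh: a unit of ENERGY
energy_joules = energy_kwh * 3.6e6  # 1 kWh = 3.6 MJ exactly

print(f"{energy_kwh:.0f} kWh = {energy_joules:.2e} J")  # 6 kWh = 2.16e+07 J
# "kilowatts per hour" (kW/h) would be a rate of change of power,
# which is almost never what the journalist means.
```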
To be honest, I would not encourage a young person to learn physics. Certainly not if they are then expected to live a life subjected to the moronic schemes and delusional thinking of scientifically illiterate government agencies and journalists.
Dumbing down is probably a blessing for most people.
We now need a generation of dumb people who can function happily in the new age of state sponsored mass hysteria.
There is a pdf available; the link is below. When you look at the properties of the pdf, you find that the author is a K Zeller. The authors of the paper are Den Volokin and Lark ReLlez. Note that the second author’s name is Kral Zeller backwards. Can anyone determine who Volokin, ReLlez and Zeller really are? One has only a gmail address (Volokin), and the other has a street address in Salt Lake City with no email address (ReLlez). This is pretty strange. I have no idea what it means. There is a Karl Zeller who is a meteorologist at the USDA Forest Service. But it’s late and I’m not going to do any more sleuthing tonight.
I placed the link down here because it was so unwieldy. It points to a files directory at tallbloke.files.wordpress.com.
[NOTE: Link converted to an actual link. -w.]
Note too – if you haven’t already – that Den Volokin can be reversed in like fashion to Ned Nikolov. A search engine yields a number of interesting links:
https://tallbloke.wordpress.com/2011/12/28/unified-theory-of-climate-nikolov-and-zeller/
http://wattsupwiththat.com/2012/01/22/unified-theory-of-climate-reply-to-comments/
http://www.drroyspencer.com/2011/12/why-atmospheric-pressure-cannot-explain-the-elevated-surface-temperature-of-the-earth/
http://classicalvalues.com/2014/07/greenhouse-effect-pvnrt/
Good catch.
Bill, that is absolutely hilarious. Lark ReLlez is Karl Zeller. I was wondering about the bizarre name, but the capital “L” in the name is a lovely bit of misdirection.
I’m more than interested in finding out the reason for this disguise. Why on earth would someone use a fake name on a scientific (or at least “scientific”) paper? All speculation gladly accepted, any actual information muchly desired.
w.
Steve P September 4, 2015 at 10:49 am
Oh, dear heavens, we’re dealing with Nikolov and Zeller once again? This is beyond bizarre, truly. For those who are unaware of the backstory, in 2011 N&Z (as they are known) posted a very similar pile of their misunderstandings here on WUWT.
It got a serious thrashing, first in the comments on that post, then via Ira Glickstein’s post and its comments, and then by my post, “A Matter of Some Gravity”.
So I can understand why Nikolov and Zeller are not publishing this under their own names … after the bollocking they got here, it makes perfect sense to me.
I do note that their current study is exactly the same kind of garbage as they discussed before—a multi-parameter model fitted to a tiny number of data points. In other words, their “new” paper is just a re-hash of the old paper disguised as new research by other authors.
Man, this is sinking low, low, low. In order to puff up their previously falsified theory, N&Z pretend to be a couple of entirely different researchers who, amazingly, come to the same conclusions as in the N&Z paper …
w.
People invariably misunderstand the word ‘fundamental’ in dimensional analysis.
It merely means forming the unitized base of a particular system of measurement.
It does not mean ‘fundamental’ in the sense of having especial physical profundity.
Thus, it is perfectly possible, and indeed very convenient in considering the theory of electromagnetism, to have the quantity of electricity and the quantity of magnetism, measured in ‘Maxwells’ and ‘Webers,’ as fundamental units.
Read ‘dimensional’ for ‘dimensionless.’
numberer September 4, 2015 at 6:49 am
Fixed.
w.
Sorry. More correction needed to avoid gibberish.
Quantity of electricity is measured in ‘coulombs’
Quantity of magnetism is measured in ‘maxwells.’
The dimension of ‘action’ is then immediately ‘coulomb maxwells’.
And there are Heaviside’s units, which avoid the pesky 4*pi which turns up everywhere.
Sadly, Willis has completely missed the point of the paper that he so roundly criticises.
The paper shows why the Greenhouse Effect, whether it is 33K or some other figure is mass induced and not GHG induced.
The most basic, critical point is that the mass of gases above a solid surface acquires energy by conduction and convection from that surface and in doing so creates atmospheric opacity to outgoing IR.
The opacity that matters is the opacity caused by the presence of that mass and the more densely the mass is compressed the more energy it will acquire from whatever insolation reaching the surface is available.
Compressed gases create opacity (resistance) to outgoing IR without needing to have ANY radiative capability.
That is why the degree of surface heating above S-B is proportionate to surface pressure which is itself purely a consequence of mass and gravity.
The AGW theory treats the atmosphere as being transparent to outgoing IR unless there is radiative capability. Willis seems to agree with that.
That is the fundamental error.
Energy permanently engaged in maintaining constant up-and-down convection within an atmosphere is energy originally drawn from outgoing IR, and to the extent that such energy was removed from the radiative exchange with space, that removal represents the atmosphere’s opacity to outgoing IR.
It is that opacity to outward IR that provides the upward pressure-gradient force that opposes the downward force of gravity to keep an atmosphere in hydrostatic balance.
It is not relevant that energy driving movement upwards matches energy driving movement downwards because it is the total of the two blocks of energy that matters and not the fact that they work in opposing directions. As long as there is constant movement whether it be up or down then energy is required and it is taken from kinetic energy at the surface which then needs to be warmer than S-B to fuel the constant movement tied up in convective overturning.
No GHGs necessary.
For so long as there is both radiative balance with space AND energy in convective ascent equals energy in convective descent, the atmosphere will be retained in hydrostatic balance for as long as insolation continues.
Any permanent imbalance in either the radiative or convective (adiabatic) exchanges will cause the atmosphere to be lost.
The simple presence of an atmosphere forever suspended off the surface is proof that radiative exchanges neutralise convective imbalances and convective exchanges neutralise radiative imbalances.
Surface temperatures above S-B are a product of ONLY mass, gravity and insolation.
Every weather or climate phenomenon is simply the stabilising process in action.
Stephen Wilde September 5, 2015 at 8:25 am
Say what? The atmosphere doesn’t magically become opaque to outgoing IR because it has acquired “energy by conduction and convection”. That’s hand-waving pseudo-scientific nonsense. To the extent that the atmosphere is “opaque” to IR, it is because of the presence of greenhouse gases.
You get to create your own opinions, Stephen, but you don’t get to create your own facts …
w.
“Here we use a simple, physically based model to demonstrate that, at atmospheric pressures lower than 0.1 bar, transparency to thermal radiation allows short-wave heating to dominate, creating a stratosphere. At higher pressures, atmospheres become opaque to thermal radiation, causing temperatures to increase with depth and convection to ensue. A common dependence of infrared opacity on pressure, arising from the shared physics of molecular absorption, sets the 0.1 bar tropopause”
http://faculty.washington.edu/dcatling/Robinson2014_0.1bar_Tropopause.pdf
Note the words “opaque to thermal radiation”, which means IR; and the paper makes it clear that it is the mass and not the composition of the atmosphere that is opaque to IR.
Furthermore, the more mass there is (greater pressure) the greater the opacity to IR regardless of atmospheric composition.
R.I.P. — GLOBAL WARMING.
R.I.P. — CO2.
All charges have been dropped.
Thanks, Stephen. That paper does NOT say what you said, which was:
Instead it says that IF, note the IF, there are “greenhouse” gases in the atmosphere, the opacity of the atmosphere increases with density … which we all knew, or at least most of us knew. The novel and interesting part of that paper is relating the change in density to the location of the tropopause.
But nowhere does it say that the atmosphere becomes more opaque to IR because it “acquires energy”, whether by conduction or any other means.
w.
Please quote the words in the paper that you are referring to since I saw no indication that the observed phenomenon was dependent on radiative capability within the atmosphere.
They say:
“pressure-broadening or collision-induced absorption applies generally to thick atmospheres,”
Collision induced absorption is conduction (which then provokes convection).
Conduction occurs even without GHGs.
Stephen Wilde September 6, 2015 at 10:12 am
Glad to. You could start with:
Or this one:
In fact the paper talks about radiative transfer throughout, viz:
If they are talking about the greenhouse effect and radiative transfer, they are talking about greenhouse gases and radiative capacity in the atmosphere.
w.
Thus conduction and convection dominate until pressure falls to 0.1 bar, whereupon a radiatively induced stratosphere becomes possible. Furthermore, the dominance of conduction and convection increases with pressure, and that pressure (involving atmospheric mass) creates opacity to IR from the surface.
Stephen Wilde September 6, 2015 at 10:25 am
In a word … no. Collision-induced absorption is NOT conduction. It is an entirely different process, by which molecules that normally don’t absorb IR can absorb it. This occurs because collisions can induce a dipole moment, making the molecule able to absorb IR.
It has nothing to do with conduction or convection as you claim. Here’s a good description of the process:
I know a drowning man will grasp at a straw, but you don’t even have a straw; you’re making your claims up out of whole cloth.
w.
“In all of these bodies, the tropopause separates a stratosphere with a temperature profile that is controlled by the absorption of short-wave solar radiation, from a region below characterized by convection, weather and clouds”
Collision-induced absorption can also occur with non-radiative molecules that acquired energy by conduction from the ground. It is not exclusive to energy acquired by radiative absorption.
The above extract refers to convection dominating in the troposphere and convection is driven by conduction at the surface.
So, we have a region dominated by conduction and convection which shows an increase in IR opacity with increasing pressure.
Why are GHGs required for that ?
Stephen Wilde says (5 September 2015 at 8.25am): Compressed gases create opacity (resistance) to outgoing IR without needing to have ANY radiative capability.
Stephen,
I profoundly wish that your assertion were true. But can you point me to any, ANY, physics textbook that supports such a revolutionary hypothesis?
David
Willis,
I repeat:
“Here we use a simple, physically based model to demonstrate that, at atmospheric pressures lower than 0.1 bar, transparency to thermal radiation allows short-wave heating to dominate, creating a stratosphere. At higher pressures, atmospheres become opaque to thermal radiation, causing temperatures to increase with depth and convection to ensue. A common dependence of infrared opacity on pressure, arising from the shared physics of molecular absorption, sets the 0.1 bar tropopause”
http://faculty.washington.edu/dcatling/Robinson2014_0.1bar_Tropopause.pdf
Note the words “opaque to thermal radiation”, which means IR; and the paper makes it clear that it is the mass and not the composition of the atmosphere that is opaque to IR.
Furthermore, the more mass there is (greater pressure) the greater the opacity to IR regardless of atmospheric composition.
Stephen,
I agree 100% with Willis that you have misinterpreted the Robinson and Catling paper.
They are referring to the fact that a lower (more dense) layer of the atmosphere (which certainly DOES contain GHGs) will, as a consequence, be more opaque to LW radiation (i.e it will be more absorptive) than a higher layer.
A lower denser part of the atmosphere will be more absorptive than a higher less dense level because it receives more conducted and convected energy from the surface even with no GHGs at all.
If Robinson and Catling did not say that then they should have done.
ANY absorption increases opacity. Nearly all absorption is by conduction and convection and that which is caused by GHGs via radiation only alters convection to a negligible degree compared to naturally induced solar and oceanic variability.
Willis said:
“The atmosphere doesn’t magically become opaque to outgoing IR because it has acquired “energy by conduction and convection”. That’s hand-waving pseudo-scientific nonsense”
Is it seriously proposed that kinetic energy at the surface, absorbed via conduction by atmospheric mass, can nonetheless simultaneously be radiated out to space?
When the atmosphere first rose off the ground then (assuming current mass, gravity and insolation) the Earth’s temperature viewed from space would have appeared to be 222K but only during the progress of the first convective overturning cycle.
During that first cycle kinetic energy at the surface was diverted to convective overturning via conduction and was therefore not capable of being radiated to space at the same time.
Once the first convective overturning cycle completed then the Earth’s temperature as viewed from space returned to 255K as per S-B because the kinetic energy being taken up was then simultaneously being returned to the surface.
Forever afterwards that same 33K of kinetic energy has been locked into the convective overturning cycle and will remain there for so long as the atmosphere remains suspended off the surface against the constant force of gravity.
Even an 8 year old should be able to grasp such simple concepts.
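For reference, the 255K figure cited above is the standard Stefan-Boltzmann effective emission temperature for Earth. A minimal sketch of that calculation, assuming textbook values for the solar constant and Earth's Bond albedo (these numbers are not from the thread itself):

```python
# Stefan-Boltzmann effective (emission) temperature of a planet.
# Assumed standard values, not taken from the discussion above:
S = 1361.0              # solar constant at Earth, W/m^2
albedo = 0.306          # Earth's Bond albedo
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

# Absorbed flux averaged over the whole sphere is S*(1 - albedo)/4;
# equating it to sigma*T^4 and solving gives the effective temperature.
T_eff = (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(round(T_eff))  # ~254 K, conventionally quoted as "255 K"
```

This only reproduces the 255K emission temperature; it says nothing about the 222K "first convective cycle" figure, which is the claim under dispute.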
Stephen Wilde September 6, 2015 at 4:08 am
I haven’t a clue either what that has to do with the statement of mine you quoted, or what it actually means.
You double down on your bad bet when you say:
Er … um … well … ah … I hate to ask, but were you watching this mythical creation of the atmosphere “[rising] off the ground” some billions of years ago with a globally averaging thermometer in your pocket to determine that the Earth’s surface temperature was 222K, and not 220K (but only during the “first convective overturning cycle”)?
w.
If the surface temperature enhancement is 33K and is attributable to atmospheric mass conducting and convecting, then during a first convective cycle that much energy would have been deducted from radiation to space until the convective loop closed. Hence 222K and not 220K.
We all know that there was no actual bodily rising of the atmosphere off an original surface, but that is a useful image for understanding the principle (unless you just don’t want to).
It is possible to apply simple logic without having to have been present.
All forms of absorption increase opacity.
Why should radiative absorption increase opacity, but not conductive/convective absorption?
The paper refers to the phenomenon described as applying to a wide range of planets with atmospheres of widely differing compositions with no mention of any effect from different GHG levels.
The paper refers to pressure as the determining factor, yet pressure is derived from mass plus gravity. How could radiative capability have any effect?
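The relationship between surface pressure, atmospheric mass, and gravity asserted above is itself straightforward to check: surface pressure is the weight of the overlying atmosphere per unit area, P = Mg / (4πR²). A minimal sketch, assuming standard reference values for Earth (the figures are textbook numbers, not drawn from the thread):

```python
import math

# Surface pressure from total atmospheric mass and gravity: P = M*g / (4*pi*R^2).
# Assumed standard values for Earth, not taken from the discussion above:
M = 5.15e18   # total mass of Earth's atmosphere, kg
g = 9.81      # surface gravitational acceleration, m/s^2
R = 6.371e6   # mean Earth radius, m

P = M * g / (4 * math.pi * R**2)  # weight per unit surface area, Pa
print(round(P))  # ~99,000 Pa, close to the observed mean of ~101,325 Pa (1 atm)
```

That the result lands near 1 bar confirms only the hydrostatic bookkeeping (pressure follows from mass and gravity); it says nothing either way about what causes IR opacity.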
Stephen, I have often been curious about the percentage of different sources (conduction/convection vs. radiation) of energy in disparate atmospheres, and how this changes as the percentage of GHG changes.
In general it makes sense to me that in a non-GHG atmosphere some, or perhaps most if not all, of the 33 degree increase would be made up by additional conduction from the surface until the flow between the atmosphere and the surface equalized. So, in such a non-GHG world, as the surface heated, the residence time of energy in the planet’s system would increase in proportion to the amount of atmosphere; the larger the atmosphere, the greater the increase in energy in the system. In a zero-atmosphere world the energy would simply radiate away.
Add ANY atmosphere and, as the residence time of insolation energy increases, the energy content must rise above that of a world where the energy simply radiates away. Increase the atmosphere, and you increase the residence time and energy accumulation even further. Add in just a few GHG molecules and they will most likely radiate (some away to space, and some towards the surface) conducted atmospheric energy and NOT surface-radiated energy. This is net atmospheric cooling, accelerating the loss of energy from the atmosphere. As you increase the number of GHG molecules, the chance of some GHG molecules intercepting radiation from the surface increases. This energy, directed inward, would not induce cooling. As GHG molecules increase, the ratio between how many receive conducted atmospheric energy (cooling) vs. surface-radiated energy (not cooling) would also change.
David A,
You seem to have got the point about the significance of conduction and convection within the mass of an atmosphere creating IR opacity by resisting the transmission of IR through the atmosphere to space.
AGW theory (and Willis) hold that opacity to IR arises only from the intervention of radiative capability within the atmosphere, which is manifestly wrong.
The Robinson and Catling paper shows that for pressures greater than 0.1 bar there seems to be a universal rule, related to the common absorption properties (via conduction and convection) of all types of gaseous matter, whereby IR opacity due to conduction and convection takes control.
Once one attributes IR opacity to non-radiative gaseous materials via conduction and convection, it becomes clear that any opacity caused by radiative capability counts for nothing relative to the opacity caused by the presence of gaseous matter in an atmosphere that is substantially non-radiative.
You then go on to have a stab at the effect of radiative opacity within an atmosphere the opacity of which is primarily due to conduction and convection but I’m not sure that I follow your exact reasoning.
My view is that such radiative opacity, within an atmosphere dominated by opacity from conduction and convection, simply interferes with the lapse rate slopes above and below the point of hydrostatic balance and within columns of ascending and descending air.
The sign of the interference is reversed above and below the point of hydrostatic balance and within ascending columns as compared to descending columns so that the thermal effect nets out to zero as I explained here:
http://hockeyschtick.blogspot.co.uk/2015/07/erasing-agw-how-convection-responds-to.html
The key to it all is that conduction and convection above an irradiated surface cause IR opacity in the atmosphere above without any recourse to radiative capability and, as per Robinson and Catling, that form of opacity dominates in atmospheres with a pressure of more than 0.1 bar.
The concept of IR opacity arising from conduction and convection and, moreover, being dominant, has completely passed over the heads of warmists and lukewarmers alike. Many sceptics are aware of the issue but thus far have not been able to articulate it clearly.
The observed surface temperature enhancement is the net result after all confounding factors (including friction and any radiative imbalances) have been dealt with internally by convection to produce hydrostatic equilibrium.
You could look at the ‘raising of the atmosphere’ event as a slow molecule-by-molecule process, or you could envisage the atmosphere as simply being left-over material from the initial agglomeration of planetary mass.
Either way it makes no difference to the general principle that less energy left for space than would otherwise have done, due to its retention within the process of convective overturning.
Note that convective overturning involves constant movement in both ascending and descending columns. Those two pools of energy cannot just be magicked from nowhere.
Stephen, David, Willis, et al.: where’s water/water vapor, and what’s it up to during all of this?
Nicholas:
http://hockeyschtick.blogspot.co.uk/2015/07/erasing-agw-how-convection-responds-to.html
See diagram 4