Guest Post by Willis Eschenbach
Did you know that one watt per square metre is equal to one kilogram per cubic second?
I sure didn’t know that, and at first I didn’t believe it, but it’s true.
(Yeah, yeah, I know it’s a second cubed and not a cubic second, but a metre cubed is a cubic metre, so I had to find out just what a cubic second might look like when it stepped out of the shadows … but I digress …)
The thing I like best about climate science is that I am constantly learning new things. For example, I came across that fascinating fact because against my better judgement I decided to take a look at the recent paper, charmingly yclept “Emergent Model for Predicting the Average Surface Temperature of Rocky Planets with Diverse Atmospheres”, by Den Volokin and Lark ReLlez, paywalled here. It has been gathering attention on some skeptical websites, so I thought I’d take a look even though it is just another in the long string of fitted models purporting to reveal hidden truths. As it turns out, it is a fascinating but fatally flawed paper, full of both interesting and wrong ideas.
The Abstract and Highlights say:
• Dimensional Analysis is used to model the average temperature of planetary bodies.
• The new model is derived via regression analysis of measured data from 6 bodies.
• Planetary bodies used for the model are Venus, Earth, Moon, Mars, Titan and Triton.
• Two forcing variables are found to accurately predict mean planetary temperatures.
• The predictor variables are solar irradiance and surface atmospheric pressure.
The Global Mean Annual near-surface Temperature (GMAT) of a planetary body is an expression of the available kinetic energy in the climate system and a critical parameter determining planet’s habitability. Previous studies have relied on theory-based mechanistic models to estimate GMATs of distant bodies such as extrasolar planets.
This ‘bottom-up’ approach oftentimes utilizes case-specific parameterizations of key physical processes (such as vertical convection and cloud formation) requiring detailed measurements in order to successfully simulate surface thermal conditions across diverse atmospheric and radiative environments. Here, we present a different ‘top-down’ statistical approach towards the development of a universal GMAT model that does not require planet-specific empirical adjustments.
Our method is based on Dimensional Analysis (DA) of observed data from the Solar System. DA provides an objective technique for constructing relevant state and forcing variables while ensuring dimensional homogeneity of the final model. Although widely utilized in other areas of physical science to derive models from empirical data, DA is a rarely employed analytic tool in astronomy and planetary science.
We apply the DA methodology to a well-constrained data set of six celestial bodies representing highly diverse physical environments in the Solar System, i.e. Venus, Earth, the Moon, Mars, Titan (a Moon of Saturn), and Triton (a Moon of Neptune). Twelve prospective relationships (models) suggested by DA are investigated via non-linear regression analyses involving dimensionless products comprised of solar irradiance, greenhouse-gas partial pressure/density and total atmospheric pressure/density as forcing variables, and two temperature ratios as dependent (state) variables. One non-linear regression model is found to statistically outperform the rest by a wide margin.
Our analysis revealed that GMATs of rocky planets can accurately be predicted over a broad range of atmospheric conditions and radiative regimes only using two forcing variables: top-of-the-atmosphere solar irradiance and total surface atmospheric pressure. The new model displays characteristics of an emergent macro-level thermodynamic relationship heretofore unbeknown to science that deserves further investigation and possibly a theoretical interpretation.
Well, that all sounded quite fascinating … except for the part where I didn’t have a clue what dimensional analysis might be. So I went to school on that question. Here’s what I found out.
As we generally know but rarely stop to consider, the various special units that we use in science, like say watts per square metre, can all be expressed in the fundamental SI “base units” of mass (kilograms or kg), length (metres or m), time (seconds or s), temperature (kelvins or K), and the like.
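That opening fact can be checked mechanically. Here is a minimal sketch (my own illustration, not anything from the paper) that represents a unit as a dict of base-unit exponents and confirms that a watt per square metre reduces to a kilogram per second cubed:

```python
# Represent a unit as a dict of SI base-unit exponents; multiplying units
# just adds the exponents, so a Counter does the bookkeeping for us.
from collections import Counter

def combine(*units):
    """Multiply units together by adding their base-unit exponents."""
    total = Counter()
    for u in units:
        total.update(u)          # Counter.update ADDS counts, even negative ones
    return {k: v for k, v in total.items() if v != 0}

def inverse(u):
    """Divide by a unit: negate its exponents."""
    return {k: -v for k, v in u.items()}

kg, m, s = {"kg": 1}, {"m": 1}, {"s": 1}

# W = J/s = kg*m^2/s^3, so dividing by m^2 should leave kg/s^3
watt = combine(kg, m, m, inverse(s), inverse(s), inverse(s))
watt_per_m2 = combine(watt, inverse(m), inverse(m))

print(watt_per_m2)   # {'kg': 1, 's': -3}, i.e. one kilogram per second cubed
```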
Dimensional analysis is a method of combining the variables of interest to make new dimensionless variables. Let’s say we have N variables of interest, we’ll call them x(1), x(2), x(3), x(4) … x(N). Dimensional analysis combines them in such a clever way that the fundamental dimensions cancel out, and thus what remains are dimensionless variables. This ensures that whatever we do with the variables the units will be correct … because they are dimensionless. Nifty.
Next, I found out that there is a mathematical theorem with the lovely English-sounding name, “The Buckingham Pi Theorem”, which sounds like it should calculate the appropriate dessert amounts when you have tea with the Queen. Anyhow, it states that if you have a system defined by a function involving N dimensioned variables, f(x(1), x(2), x(3), x(4) … x(N)), you can reduce the number of variables. The theorem states that by using dimensional analysis to combine the N dimensioned variables into dimensionless variables, you end up with N – m variables, where “m” is the number of SI base units involved (e.g. kg, m, etc).
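The counting in the Buckingham Pi Theorem can be sketched in a few lines. This is a toy illustration with my own made-up variable set, not the paper’s actual table: I use rotation rate as a sixth variable purely to get four independent dimensions, and “m” is, strictly speaking, the rank of the dimension matrix, which usually equals the number of base units involved.

```python
# Count the dimensionless groups the Buckingham Pi Theorem promises:
# N variables minus the rank of the (base units x variables) exponent matrix.
from fractions import Fraction

def rank(matrix):
    """Rank by Gaussian elimination over exact fractions (no float fuzz)."""
    m = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Rows: kg, m, s, K. Columns (illustrative, NOT the paper's list):
# T_s (K), T_na (K), irradiance S (kg s^-3), pressure P (kg m^-1 s^-2),
# partial pressure P_gh (kg m^-1 s^-2), rotation rate Omega (s^-1).
dims = [
    [0, 0,  1,  1,  1,  0],   # kg
    [0, 0,  0, -1, -1,  0],   # m
    [0, 0, -3, -2, -2, -1],   # s
    [1, 1,  0,  0,  0,  0],   # K
]
n_vars = 6
print(n_vars - rank(dims))    # 6 variables - rank 4 = 2 dimensionless groups
```

With six variables and four independent dimensions, the theorem delivers exactly the two dimensionless groups the paper works with.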
So that sounded like a most promising theoretical method, worth knowing. It would seem that almost any model could be simplified by that method. However, at that point, they take their dimensionless sports car out on the autobahn to see how it performs at speed … and that’s where the wheels come off.
They applied dimensional analysis to the modeling of planetary surface temperatures. They decided that the following variables were of interest (sorry for the “MANUSCRIPT” across the page, it’s a samizdat copy):
Since there are six variables and four fundamental units, the Buckingham Pi Theorem says that they can be reduced to two dimensionless variables. A neat trick indeed. Then they used twelve different combinations of those dimensioned variables converted into dimensionless units, and tried fitting them to the data from six rocky celestial bodies using a variety of formulas, including a formula of the form:
y = a exp(b x) +c exp(d x)
Out of all of the possible combinations of variables, they looked at 12 different possibilities. After trying various functions including the dual exponential function above, they picked the best function (the dual exponential) and the best combination of variables, and they produced the following graph:
Note that they started out with six celestial bodies, but at the end they couldn’t even fit all six with their model, so they “excluded” Titan from the regression. This is because if they left it in, the fit for Venus would really suck … in scientific circles this is known as “data snooping”, and is a Very Bad Thing™. In this case the data snooping took the form of selecting their data on the basis of how well it fit their theory. Bad scientists, no cookies.
Once they’ve done that, hoorah, their whiz-bang new model predicts the “thermal enhancement” of six celestial bodies with amazing accuracy … well, it does as long as you ignore the celestial body it doesn’t work so well for.
In any case, “thermal enhancement” is defined by them as the actual planetary surface temperature Ts divided by the temperature Tna that the planet would have if it were an airless sphere. So “thermal enhancement” is how much warmer the planet is than that reference temperature. And here is the magic equation used to derive the results:
In the formula, P is the atmospheric pressure. Pr is the pressure at the triple point of water, 611.73 pascals. Pr is not important; it is a matter of convention. All that changing Pr does is change the fitted parameters; the answer will be the same. As such, it seems odd that they include it at all. Why not make Pr equal to 1 pascal and cancel it out of the equation? I have no answer to that question. I suspect they use 611.73 pascals rather than one pascal because it seems more sciencey. But that may just be my paranoia at work; they may never have considered canceling it out.
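The point that Pr is pure convention is easy to demonstrate. This sketch does not reproduce the paper’s Equation 10a; I assume a generic fitted term of the form a·exp(b·(P/Pr)^c), with made-up coefficients, just to show that the reference pressure gets absorbed into the fitted coefficient:

```python
import math

def enhancement(P, a, b, c, Pr):
    """Illustrative fitted term: a * exp(b * (P/Pr)^c). Not the paper's exact form."""
    return a * math.exp(b * (P / Pr) ** c)

a, b, c = 1.0, 0.25, 0.5            # made-up coefficients
Pr_triple = 611.73                  # triple point of water, Pa
b_rescaled = b / Pr_triple ** c     # absorb Pr into the coefficient

P = 101325.0                        # Earth surface pressure, Pa
print(enhancement(P, a, b, c, Pr_triple))
print(enhancement(P, a, b_rescaled, c, 1.0))   # same answer with Pr = 1 Pa
```

Swap Pr for any positive value you like; refit (here, rescale) the coefficient and the predictions are unchanged.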
So there you have their model … what’s not to like about their analysis?
Well, as it turns out … just about everything.
Objection the First—If the formulas don’t fit, you must acquit
Let me start at the most fundamental level. The problem lies in their assumption that the surface temperature of a planet with an atmosphere can actually be modeled by a simple function of the form:
Surface Temperature = f(x(1), x(2), x(3), x(4) … x(N))
I find the idea that the climate is that simple to be laughable. As an example of why, consider another much less complex system, a meandering river in the lowlands:
Notice the old river tracks and cutoff oxbows from previous locations of the river. Now, we have variables like gravity, and the slope of the land, and the density of the soil, and the like. But I would challenge anyone to successfully combine those variables in a function like
Average position of river mile 6 = f(x(1), x(2), x(3), x(4) … x(N))
and make the formula work in anything but special situations.
This is because a) the location of the river is always changing, and more importantly, b) the location of the river today is in very large measure a function of the location of the river yesterday.
In other words, the only hope of modeling this system is with an “iterative” model. An iterative model is a model that calculates the river’s position one day at a time, and uses one day’s results as input to the model in order to calculate the next day’s values. Thus, an iterative model MAY be able to calculate the ongoing state of the system. And this is exactly why climate models are iterative models of just that type—because you can’t model such constantly evolving systems with simplistic equations of a form like
Surface Temperature = f(x(1), x(2), x(3), x(4) … x(N))
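The iterative idea can be made concrete with a toy (my own sketch, nothing like a real climate model): tomorrow’s state depends on today’s state plus the day’s forcing, so the only way to get day N is to step through every day before it. There is no closed-form f(x1 … xN) that skips the path.

```python
def step(state, forcing, memory=0.9):
    """One day: the system keeps most of yesterday and nudges toward the forcing."""
    return memory * state + (1 - memory) * forcing

state = 15.0                                 # arbitrary starting value, deg C
for day in range(365):
    forcing = 15.0 + 10.0 * (day / 365.0)    # made-up slowly rising forcing
    state = step(state, forcing)

print(round(state, 2))   # the end state reflects the whole path taken
```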
So that is my first objection. The formula that is at the root of all of this, a simple dual-exponential, is extremely unlikely to be adequate to the task. The surface temperature of the earth is a result of a host of interactions, limitations, physical constraints, inter- and intra-subsystem feedbacks, resonances, thermal thresholds, biological processes, physical laws, changes of state of water, emergent phenomena, rotational speed, the list is long. And while you might get lucky and fit some simple form to some small part of that complexity, that is nothing but brute-force curve fitting.
Objection the Second – Von Neumann’s Elephant
John Von Neumann famously said, “With four parameters I can model an elephant, and with five I can make him wiggle his trunk”.
As near as I can determine there is one parameter used in the calculation of Tna, the hypothetical and unknowable “no atmosphere temperature”, and another four parameters in Equation 10a, for a total of five parameters.
It gets worse … when a parameter has either a very small or a very large value, it indicates a very finely balanced model. When I see a model parameter like 0.000183, as occurs in Equation 10a, it rings alarm bells. It tells me that the model is applying very different formulas to small and large numbers, and that’s a huge danger sign.
Next, they had a free choice of formulas for their model. There was nothing limiting them to a double exponential; they could have used any formula they pleased.
Next, they tried no less than twelve different combinations of dimensioned variables before finding this particular fit.
Finally, there are only five data points to be fit. I can guarantee you that when the number of your model’s tuned parameters equals or exceeds the number of the data points you are using for your fit, you’ve lost the plot and you desperately need to trade up to a new model.
So my second objection is to Von Neumann’s elephant, with five parameters fitting the formula to the pathetically small number of only five data points, augmented by twelve variable combinations, and a free choice of formulas. That kind of fitting is not a model. It’s a tailor shop designed to make a form-fitting suit.
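To see why five parameters against five points proves nothing, here is my own illustration: any five data points, whatever they are, can be matched exactly by a five-parameter curve. A degree-4 polynomial (five coefficients) through five arbitrary made-up “planet” points, via Lagrange interpolation:

```python
def lagrange(xs, ys):
    """Return the unique degree-(n-1) polynomial through the n points (xs, ys)."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

xs = [0.006, 1.0, 1.5, 92.0, 0.0000003]   # made-up pressure-like inputs
ys = [1.03, 1.15, 1.64, 2.97, 1.00]       # made-up enhancement-like outputs

fit = lagrange(xs, ys)
for x, y in zip(xs, ys):
    assert abs(fit(x) - y) < 1e-6          # a "perfect" fit at every point
print("perfect fit to all five points")
```

The fit is flawless, and it tells us precisely nothing about the physics that generated the numbers.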
Objection the Third—Variable Count
The authors make much of the claim that they can calculate the temperature of five planets using only two variables. From their conclusion:
Our analysis revealed that the mean annual air surface temperature of rocky planets can reliably be estimated across a broad spectrum of atmospheric conditions and radiative regimes only using two forcing variables: TOA stellar irradiance and average surface atmospheric pressure.
But then we look at the calculations for Tna, which is a part of their magic equation 10a, and we find three other variables. Tna is defined by them as “the area-weighted average temperature of a thermally heterogeneous airless sphere”. Here is their equation 4a, which calculates Tna for the various celestial bodies.
So we have as additional variables the albedo, the ground heat storage coefficient, and the longwave emissivity. (Volokin et al ignore the cosmic microwave background radiation CMBR, as well as the geothermal flux.)
In other words, when they say they only use two variables, “TOA stellar irradiance and average surface atmospheric pressure”, that is simply not true. The complete list of variables is:
TOA stellar irradiance
Surface atmospheric pressure
Albedo
Heat storage coefficient
Longwave emissivity
So my third objection is that they are claiming that the model only uses two variables, when in fact it uses five.
Objection the Fourth: Data Snooping
They say in the Abstract:
We apply the DA methodology to a well-constrained data set of six celestial bodies representing highly diverse physical environments in the Solar System, i.e. Venus, Earth, the Moon, Mars, Titan (a Moon of Saturn), and Triton (a Moon of Neptune).
But then they have to throw out Titan, because it doesn’t fit, which is blatant data snooping … and despite that, they claim that their model works wonderfully. And of course, the “six planets” from the Abstract is the number quoted around the blogosphere, including by WUWT commenters.
Objection the Fifth: Special Martian Pleading
While they use standard reference temperature values for five of the six celestial bodies, they have done their own computations for the temperature of Mars. One can only presume that is to give Mars a better fit to their results—if it fit perfectly using the canonical values, there would be no need for them to calculate it differently. Again, data snooping, again, bad scientists, no cookies.
Objection the Sixth: The Oddity of Tna
Immediately above, we see the complete equation 4a for Tna, the area-weighted average temperature of an airless sphere. It depends on three variables: albedo, how much heat the ground soaks up during the day (heat storage fraction), and the emissivity. The authors actually use a simplified version of that formula. After showing the entire formula, they note that they will reasonably ignore the geothermal flux and the cosmic background radiation, because they are quite small for the bodies in question. OK, fair enough, that’s common practice to ignore very minor variables. But then they say:
Since regolith-covered celestial bodies with tenuous atmosphere are expected to have similar optical and thermo-physical properties of their surfaces (Volokin and ReLlez 2014), one can further simplify Equation [4a, see above] by combining the albedo, the heat storage fraction, and the emissivity using applicable values for the Moon to obtain:
Tna = 32.44 S^0.25 (4c)
Equation (4c) was employed to calculate the ‘no-atmosphere’ reference temperatures of all planetary bodies in our study.
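As a quick numerical check of Equation 4c (my own arithmetic, assuming S is the top-of-atmosphere irradiance in W/m²), note that the formula returns one and the same Tna for any body at a given irradiance, regardless of how fast it rotates:

```python
def tna(S):
    """'No-atmosphere' reference temperature per Equation 4c, in kelvins."""
    return 32.44 * S ** 0.25

print(round(tna(1361.0), 1))   # Earth/Moon irradiance -> about 197 K
print(round(tna(586.2), 1))    # Mars irradiance       -> about 160 K
```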
I find that to be an unwarranted and incorrect simplification. I say this because it is clear that the reason the temperature of the moon is so low is that it rotates so slowly. It has two weeks of day, then two weeks of night. This increases the day-night swing of the temperature, because it lets the moon’s night-time temperature drop to a rather brisk -180°C or so.
And for a given solar input, whatever increases the surface temperature swings decreases the average temperature. With a day-night temperature swing of 270°C, the average lunar temperature is much, much colder than the S-B blackbody temperature.
But those huge temperature swings are NOT characteristic of the Earth, or Mars. Even without an atmosphere, the surface temperatures of those planets wouldn’t swing anywhere near as much as the moon because they all rotate much faster than the moon. With faster rotation, the days can’t get as hot, and the nights can’t get as cold. This means that their average temperature would not be depressed anywhere near as much as the moon, because the swings are smaller. As a result, while Equation 4c is accurate for the moon, it says that an airless earth rotating once a day would have the same temperature as the moon, and that’s simply not true. And for Venus, the opposite is true. With a rotation period of 116 days, its average surface temperature would be correspondingly lower, again leading to an incorrect result.
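Why do big swings depress the average? Because emission goes as T⁴: for a fixed total emitted flux, the more lopsided the temperatures, the lower their arithmetic mean. A sketch of my own (a crude two-box caricature, not a real rotation model):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def mean_temp(flux_day, flux_night):
    """Average of the two blackbody temperatures emitting the given fluxes."""
    t = lambda f: (f / SIGMA) ** 0.25
    return (t(flux_day) + t(flux_night)) / 2.0

total = 480.0                                       # arbitrary total flux, W/m^2
uniform = mean_temp(total / 2, total / 2)           # fast rotator: even emission
extreme = mean_temp(total * 0.999, total * 0.001)   # slow rotator: lopsided

print(round(uniform, 1), round(extreme, 1))   # the mean drops as the swing grows
```

Same total flux out, much lower mean temperature for the slow rotator, which is exactly why a lunar-calibrated Tna is wrong for faster-spinning bodies.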
Well, my conclusion is that this model fails a number of crucial tests. The equations are not physically grounded, and their simplicity is unwarranted. It is a Von Neumann trunk-wiggling monstrosity with a free choice of formulas, five tunable parameters, and 12 combinations of variables. They have done their fit to a ridiculously small dataset of only six planets, and failed even at that, fitting only five. As a result, they removed one of the six from their fit, which is blatant data snooping. They claim only two variables when there are actually five. They have calculated their own temperature for Mars. And finally, they erroneously calculate the reference temperature Tna as if the Earth, Venus, and Mars rotate once every 28 days. This last one is critical to their actual result. Their model results report the surface temperature Ts divided by Tna … and since Tna is badly wrong for at least three of their five data points, well, it’s just another in the long list of reasons why their results do not hold water.
You’d think we’d be done there. But nooo … in a final burst of amazing hubris, they use their model results as a basis to claim that they “appear” to have discovered a new unknown thermodynamic property of the atmosphere, viz:
Based on statistical criteria including numerical accuracy, robustness, dimensional homogeneity and a broad environmental scope of validity, the final model (Equation 10) appears to describe an emergent macro-level thermodynamic property of planetary atmospheres heretofore unknown to science.
I’m sorry, but what the authors describe is merely a simple dual-exponential multi-parameter curve fitting exercise that after trying an unknown number of formulas, no less than twelve different variable combinations, and five tunable parameters, finally got it right an amazing five out of five times … by using the wrong values for Tna, re-calculating the temperature of Mars, and throwing out the one data point that didn’t fit. Which is impressive in its own bizarre manner, but not for the reasons they think.
However, who would have guessed that such a curve-fit had such a strong scientific capability that it could reveal a new “emergent macro-level thermodynamic property” that is “heretofore unbeknown to science”?
Dang … that’s some industrial-strength trunk-wiggling there.
However, at least the part about dimensional analysis was fascinating, I need to look into it more, and it revealed unknown dimensions to me … a watt per square metre is a kilogram per cubic second? Who knew?
My regards to everyone,
As Always: Let me request that if you disagree with someone, please have the courtesy to quote the exact words you object to. That way, we can all understand the precise nature of your objection.