We publish this here, not to confirm that it is correct, but to stimulate the debate needed to determine whether it is correct or simply an exercise in curve fitting. ~ctm
George White, August 2017
Climate science is the most controversial science of the modern era. One reason the controversy has been so persistent is that those who accept the IPCC as the arbiter of climate science fail to recognize that a controversy even exists. Their rationalization is that the IPCC’s conclusions are presented as the result of a scientific consensus; therefore, the threshold for overturning them is so high it can’t be met, especially by anyone whose peer-reviewed work isn’t published in a mainstream climate science journal. Their universal reaction when presented with contraindicative evidence is that there’s no way it can be true, therefore it deserves no consideration and whoever brought it up can be ignored, while the catch-22 makes it almost impossible to get contraindicative evidence into any mainstream journal.
This prejudice is not limited to those with a limited understanding of the science, but is widespread among those who think they understand it and is even quite prevalent among notable scientists in the field. Anyone who has ever engaged in communications with an individual who has accepted the consensus conclusions has likely observed this bias, often accompanied by demeaning language delivered with extreme self-righteous indignation that you would dare question the ‘settled science’ of the consensus.
The Fix
Correcting broken science that’s been settled by a consensus is made more difficult by its support from recursive logic where the errors justify themselves by defining what the consensus believes. The best way forward is to establish a new consensus. This means not just falsifying beliefs that support the status quo, but more importantly, replacing those beliefs with something more definitively settled.
Since politics has taken sides, climate science has become driven by the rules of politics rather than the rules of science. Taking a page from how a political consensus arises, the two sides must first understand and acknowledge what they have in common before they can address where they differ.
Alarmists and deniers alike believe that CO2 is a greenhouse gas, that greenhouse gases contribute to making the surface warmer than it would be otherwise, that man is putting CO2 into the atmosphere and that the climate changes. The denier label used by alarmists applies to anyone who doesn’t accept everything the consensus believes, with the implication being that truths supported by real science are also being denied. Surely, if one believed that CO2 isn’t a greenhouse gas, that man isn’t putting CO2 into the atmosphere, that GHG’s don’t contribute to surface warmth, that the climate isn’t changing or that the laws of physics don’t apply, they would be in denial, but few skeptics are that uninformed.
Most skeptics would agree that if there was significant anthropogenic warming, we should take steps to prepare for any consequences. This means applying rational risk management, where all influences of increased CO2 and a warming climate must be considered. Increased atmospheric CO2 means more raw materials for photosynthesis, which at the base of the food chain is the sustaining foundation for nearly all life on Earth. Greenhouse operators routinely increase CO2 concentrations to be much higher than ambient because it’s good for the plants and does no harm to people. Warmer temperatures also have benefits. If you ask anyone who’s not a winter sports enthusiast what their favorite season is, it will probably not be winter. If you have sufficient food and water, you can survive indefinitely in the warmest outdoor temperatures found on the planet. This isn’t true in the coldest places where at a minimum you also need clothes, fire, fuel and shelter.
While the differences between the sides seem irreconcilable, there’s only one factor they disagree about, and it is the basis for all other differences. While this disagreement is still insurmountable, narrowing the scope makes it easier to address. The controversy is about the size of the incremental effect atmospheric CO2 has on the surface temperature, which is a function of the size of the incremental effect solar energy has. This parameter is referred to as the climate sensitivity factor. What makes it so controversial is that the consensus accepts a sensitivity presumed by the IPCC, while the possible range theorized, calculated and measured by skeptics has little to no overlap with the range accepted by the consensus. The differences are so large that only one side can be right and the other must be irreconcilably wrong, which makes compromise impossible, perpetuating the controversy.
The IPCC’s sensitivity has never been validated by first principles physics or direct measurements. Its most widely touted support comes from models, but it seems that as they add degrees of freedom to curve fit the past, the predictions of the future get alarmingly worse. Its support from measurements comes from extrapolating trends arising from manipulated data, where the adjustments are poorly documented and the fudge factors always push results in one direction. This introduces even more uncertain unknowns: how much of the trend is a component of natural variability, how much is due to adjustments and how much is due to CO2. This seems counterproductive, since the climate sensitivity should be relatively easy to predict using the settled laws of physics and even easier to measure with satellite observations, so what’s the point in the obfuscation introduced by unnecessary levels of indirection, additional unknowns and imaginary complexity?
Quantifying the Relationships
To quantify the sensitivity, we must start from a baseline that everyone can agree upon. This would be the analysis for a body like the Moon which has no atmosphere and that can be trivially modeled as an ideal black body. While not rocket science, an analysis similar to this was done prior to exploring the Moon in order to establish the required operational limits for lunar hardware. The Moon is a good place to start since it receives the same amount of solar energy as Earth and its inorganic composition is the same. Unless the Moon’s degenerate climate system can be accurately modeled, there’s no chance that a more complex system like the Earth can ever be understood.
To derive the sensitivity of the Moon, construct a behavioral model by formalizing the requirements of Conservation Of Energy as equation 1).
1) Pi(t) = Po(t) + ∂E(t)/∂t
Consider the virtual surface of matter in equilibrium with the Sun, which for the Moon is the same as its solid surface. Pi(t) is the instantaneous solar power absorbed by this surface, Po(t) is the instantaneous power emitted by it and E(t) is the solar energy stored by it. If Po(t) is instantaneously greater than Pi(t), ∂E(t)/∂t is negative and E(t) decreases until Po(t) becomes equal to Pi(t). If Po(t) is less than Pi(t), ∂E(t)/∂t is positive and E(t) increases until again Po(t) is equal to Pi(t). This equation quantifies more than just an ideal black body. COE dictates that it must be satisfied by the macroscopic behavior of any thermodynamic system that lacks an internal source of power, since changes in E(t) affect Po(t) enough to offset ∂E(t)/∂t. What differs between modeled systems is the nature of the matter in equilibrium with its energy source, the complexity of E(t) and the specific relationship between E(t) and Po(t). An astute observer will recognize that if an amount of time, τ, is defined such that all of E is emitted at the rate Po, the result becomes Pi = E/τ + ∂E/∂t. This has the same form as the differential equation describing the charging and discharging of a capacitor, another COE-derived model of a physical system whose solutions are very well known, where τ is the RC time constant.
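The capacitor analogy can be checked numerically. The sketch below uses purely illustrative values for τ and Pi (neither comes from measured data); it integrates Pi = E/τ + ∂E/∂t with a simple Euler step and compares the result against the well-known analytic solution of the RC charging equation.

```python
# Minimal sketch of the COE balance Pi = E/tau + dE/dt, the same form as an
# RC charging circuit. All values are illustrative, not measured.
import math

tau = 10.0          # illustrative "time constant": time to emit all of E at the rate Po
Pi = 100.0          # constant absorbed power (W/m^2), illustrative
dt = 0.001          # Euler integration step

E = 0.0             # stored energy starts at zero
t = 0.0
while t < 5 * tau:  # integrate through several time constants
    Po = E / tau                # emission rate defined so that E = Po * tau
    E += (Pi - Po) * dt         # dE/dt = Pi - Po
    t += dt

# Analytic solution of the same first-order linear ODE, for comparison
E_exact = tau * Pi * (1.0 - math.exp(-t / tau))
print(E, E_exact)   # both approach the steady state E = Pi * tau
```

As with the capacitor, E asymptotically approaches the steady state Pi·τ, at which point Po equals Pi and ∂E/∂t averages to zero.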
For an ideal black body like the Moon, E(t) is the net solar energy stored by the top layer of its surface. From this, we can establish the precise relationship between E(t) and Po(t) by first establishing the relationship between the temperature, T(t) and E(t) as shown by equation 2).
2) T(t) = κE(t)
The temperature of matter and the energy stored by it are linearly dependent on each other through a proportionality constant, κ, which is a function of the heat capacity and equivalent mass of the matter in direct equilibrium with the Sun. Next, equation 3) quantifies the relationship between T(t) and Po(t).
3) Po(t) = εσT(t)^4
This is just the Stefan-Boltzmann Law, where σ is the Stefan-Boltzmann constant, equal to about 5.67E-8 W/m2 per K^4, and for the Moon, the emissivity of the surface, ε, is approximately equal to 1.
Pi(t) can be expressed as a function of Solar energy, Psun(t), and the albedo, α, as shown in equation 4).
4) Pi(t) = Psun(t)(1 – α)
Going forward, all of the variables will be considered implicit functions of time. The model now has 4 equations and 8 variables: Psun, Pi, Po, T, E, α, κ and ε. Psun is known for all points in time and space across the Moon’s surface. The albedo α and heat capacity κ are mostly constant across the surface and ε is almost exactly 1. To the extent that Psun, α, κ and ε are known, we can reduce the problem to 4 equations and 4 unknowns, Pi, T, Po and E, whose time varying values can be calculated for any point on the surface by solving a simple differential equation applied to an equal area gridded representation whose accuracy is limited only by the accuracy of α, κ and ε per cell. Any model that conforms to equations 1) through 4) will be referred to as a Physical Model.
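A single cell of such a gridded model can be sketched in a few lines. The value of κ below is a stand-in chosen only to make the integration visible over two rotations (it is not a measured lunar heat capacity), and the cell is given a 24-hour day, as the text later suggests for normalizing to Earth; everything else follows equations 1) through 4) directly.

```python
# Sketch of one equatorial grid cell of the Physical Model (equations 1-4),
# integrated over two rotations. kappa is an illustrative stand-in, NOT measured.
import math

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/m^2 per K^4
S0    = 1361.0       # solar constant at 1 AU, W/m^2
alpha = 0.12         # lunar albedo, as quoted in the text
eps   = 1.0          # surface emissivity ~1
kappa = 1e-6         # illustrative K per J/m^2 linking T and E (equation 2)
day   = 86400.0      # normalized 24-hour day
dt    = 60.0         # integration step, s

E = 250.0 / kappa    # start the cell near 250 K
for step in range(int(2 * day / dt)):
    t = step * dt
    # Psun for an equatorial cell: cosine of the hour angle, zero at night
    Psun = S0 * max(0.0, math.cos(2 * math.pi * t / day))
    Pi = Psun * (1.0 - alpha)        # equation 4
    T = kappa * E                    # equation 2
    Po = eps * SIGMA * T**4          # equation 3
    E += (Pi - Po) * dt              # equation 1

T_final = kappa * E
print(T_final)                       # final cell temperature, K
```

A full model simply repeats this per cell over an equal-area grid, with α, κ and ε supplied per cell.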
Quantifying the Sensitivity
Starting from a Physical Model, the Moon’s sensitivity can be easily calculated. The ∂E/∂t term is what the IPCC calls ‘forcing’, which is the instantaneous difference between Pi and Po at TOA and/or TOT. For the Moon, TOT and TOA are coincident with the solid surface defining the virtual surface in direct equilibrium with the Sun. The IPCC defines forcing like this so that an increase in Pi owing to a decrease in albedo or an increase in solar output can be made equivalent to a decrease in Po from a decrease in power passing through the transparent spectrum of the atmosphere that would arise from increased GHG concentrations. This definition is ambiguous since Pi is independent of E, while Po is highly dependent on E. A change in Pi is not equivalent to the same change in Po, since both change E, while only Po changes in response to changes in E, which initiates further changes in E and Po. The only proper characterization of forcing is a change in Pi, and this is what will be used here.
While ∂E/∂t is the instantaneous difference between Pi and Po and conforms to the IPCC definition of forcing, the IPCC representation of the sensitivity assumes that ∂T/∂t is linearly proportional to ∂E/∂t, or at least approximately so. This is incorrect because of the T4 relationship between T and Po. The approximately linear assumption is valid over a small temperature range around average, but is definitely not valid over the range of all possible temperatures.
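The limited validity of the linear assumption is easy to demonstrate: linearize the Stefan-Boltzmann relation around 255K and compare it to the exact inversion for increasingly large forcings.

```python
# The T^4 relationship means dT/dPo is not constant: compare the exact
# Stefan-Boltzmann inversion with a linear approximation around 255 K.
SIGMA = 5.67e-8

def T_of_Po(Po, eps=1.0):
    # invert Po = eps * sigma * T^4
    return (Po / (eps * SIGMA)) ** 0.25

T0 = 255.0
Po0 = SIGMA * T0**4                 # ~240 W/m^2
slope = 1.0 / (4 * SIGMA * T0**3)   # linearized sensitivity at 255 K, ~0.27 K per W/m^2

for dP in (1.0, 10.0, 100.0):
    exact = T_of_Po(Po0 + dP) - T0
    linear = slope * dP
    print(dP, exact, linear)        # linear tracks exact for small dP, diverges for large
```

For a 1 W/m2 change the two agree to three decimal places; for a 100 W/m2 change the linear estimate overshoots the exact answer by several degrees.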
To calculate the Long Term Equilibrium sensitivity, we must consider that in the steady state, the temporal average of Pi is equal to the temporal average of Po, thus the integral over time of ∂E/∂t will be zero. Given that in LTE, Pi is equal to Po, and the Moon certainly is in an LTE steady state, we can write the LTE balance equation as,
5) Pi = Po = εσT^4
To calculate the LTE sensitivity, simply differentiate and invert the above equation which gives us,
6) ∂T/∂Pi = ∂T/∂Po = 1/(4εσT^3)
This derivation does make an assumption, which is that ∂T/∂Pi = ∂T/∂Po, since we’re really calculating ∂T/∂Po. For the Moon this is true, but for a planet with a semi-transparent atmosphere between the energy source and the surface in equilibrium with it, they aren’t equal, for the same reason that the IPCC’s metric of forcing is ambiguous. Nonetheless, what makes them different can be quantified and the quantification can be tested. But for the Moon, which will serve as the baseline, it doesn’t matter.
Define the average temperature of the Moon as the equivalent temperature of a black body where each square meter of surface is emitting the same amount of power, such that when summed across all square meters, it adds up to the actual emissions. Normalizing to an average rate per m2 is a meaningful metric since all Joules are equivalent and the average of incoming and outgoing rates of Joules is meaningful for quantifying the effects one has on the other; moreover, a rate of energy per m2 can be trivially interchanged with an equivalent temperature. This same kind of average is widely applied to the Earth’s surface when calculating its average temperature from satellite data, where the resulting surface emissions are converted to an equivalent temperature using the Stefan-Boltzmann Law.
If the average temperature of the Moon were 255K, equation 6) tells us that ∂T/∂Pi is about 0.3C per W/m2. If it were 288K like the Earth, the sensitivity would be about 0.18C per W/m2. Notice that owing to the 1/T^3 dependence of the sensitivity on temperature, as the temperature increases, the sensitivity falls off as the cube of the temperature. The average albedo of the Moon is about 0.12, leading to an average Pi and Po of about 300 W/m2, corresponding to an equivalent average temperature of about 270K and an average sensitivity of about 0.22C per W/m2.
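These figures follow directly from equation 6) and can be reproduced in a few lines:

```python
# Reproducing the sensitivities quoted above from equation 6): dT/dPi = 1/(4*eps*sigma*T^3)
SIGMA = 5.67e-8

def sensitivity(T, eps=1.0):
    # equation 6), K per W/m^2
    return 1.0 / (4 * eps * SIGMA * T**3)

Pi_avg = (1361.0 / 4) * (1 - 0.12)   # ~300 W/m^2 average with lunar albedo 0.12
T_avg = (Pi_avg / SIGMA) ** 0.25     # equivalent average temperature, ~270 K

print(sensitivity(255.0), sensitivity(288.0), sensitivity(T_avg), T_avg)
```

The result at 255K computes to roughly 0.27 (the text rounds this to 0.3), 0.18 at 288K, and about 0.22 at the Moon's 270K equivalent temperature.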
As far as the Moon is concerned, this analysis is based on nothing but first principles physics, and the undeniable, deterministic average sensitivity that results is about 0.22C per W/m2. This is based on indisputable science; moreover, the predictions of Lunar temperatures using models like this have been well validated by measurements.
The 270K average temperature of the Moon would be the Earth’s average temperature if there were no GHG’s, since this also means no liquid water, ice or clouds, resulting in an Earth albedo of 0.12 just like the Moon. This contradicts the often repeated claim that GHG’s increase the temperature of Earth from 255K to 288K, or about 33C, where 255K is the equivalent temperature of the 240 W/m2 average power arriving at the planet after reflection. This is only half the story, and it’s equally important to understand that water also cools the planet by about 15K owing to the albedo of clouds and ice, which can’t be separated from the warming effect of water vapor, making the net warming of the Earth from all effects about 18C and not 33C. Water vapor accounts for about 2/3 of the 33 degrees of warming, leaving about 11C arising from all other GHG’s and clouds. The other GHG’s have no corresponding cooling effect, thus the net warming due to water is about 7C (33*2/3 – 15), while the net warming from all other sources combined is about 11C, where only a fraction of this arises from CO2 alone.
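The bookkeeping in this paragraph is just arithmetic on the claims as stated, which can be laid out explicitly:

```python
# The paragraph's warming/cooling bookkeeping, using the values as stated in the text.
total_ghg_warming = 33.0        # claimed warming from 255 K to 288 K
albedo_cooling    = 15.0        # claimed cooling from cloud and ice albedo
water_vapor_share = 2.0 / 3.0   # claimed fraction of warming due to water vapor

water_warming = total_ghg_warming * water_vapor_share   # ~22 C from water vapor
net_water     = water_warming - albedo_cooling          # ~7 C net from water
other_ghg     = total_ghg_warming - water_warming       # ~11 C from all other GHG's
net_total     = total_ghg_warming - albedo_cooling      # ~18 C net from all effects
print(net_water, other_ghg, net_total)
```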
Making It More Complex
Differences arise as the system gets more complex. At a level of complexity representative of the Earth’s climate system, the consensus asserts that the sensitivity increases all the way up to 0.8C per W/m2, which is nearly 4 times the sensitivity of a comparable system without GHG’s. Skeptics maintain that the sensitivity isn’t changing by anywhere near that much and remains close to where it started from without GHG’s and if anything, net negative feedback might make it even smaller.
Let’s consider the complexity in an incremental manner, starting with the length of the day. For longer period rotations, the same point on the surface is exposed to the heat of the Sun and the cold of deep space for much longer periods of time. As the rotational speed increases, the difference between the minimum and maximum temperature decreases, but given the same amount of total incident power, the average emissions and equivalent average temperature will remain exactly the same. At very slow rotation rates, the dark side can emit all of the energy it ever absorbed from the Sun and the surface emissions will approach those corresponding to its internal temperature, which does affect the result.
The sensitivity we care about is relevant to how the LTE averages change. The average emissions and corresponding average temperature are locked to an invariant amount of incident solar energy, while the rotation rate has only a small effect on the average sensitivity related to the 1/T^3 dependence of the sensitivity on temperature. Longer days and nights mean that local sensitivities will span a wider range owing to a wider temperature range. Since higher temperatures require a larger portion of the total energy budget, as the rotation rate slows, the average sensitivity decreases. To normalize this to Earth, consider a Moon with a 24 hour day where this effect is relatively small.
The next complication is to add an atmosphere. Start with an Earth like atmosphere of N2, O2, and Ar except without water or other GHG’s. On the Moon, gravity is less, so it will take more atmosphere to achieve Earth like atmospheric pressures. To normalize this, consider a Moon the size of the Earth and with Earth like gravity.
The net effect of an atmosphere devoid of GHG’s and clouds will also reduce the difference between high and low extremes, but not by much since dry air can’t hold and transfer much heat, nor will there be much of a difference between ∂T/∂Pi and ∂T/∂Po. Since O2, N2 and Ar are mostly transparent to both incoming visible light and outgoing LWIR radiation, this atmosphere has little impact on the temperature, the energy balance or the sensitivity of the surface temperature to forcing.
At this point, we have a Physical Model representative of an Earth like planet with an Earth like atmosphere, except that it contains no GHG’s, clouds, liquid or solid water; the average temperature is 270K and the average sensitivity is 0.22C per W/m2. It’s safe to say that up until this point in the analysis, the Physical Model is based on nothing but well settled physics. There’s still an ocean and a small percentage of the atmosphere to account for, comprising mostly water and trace gases like CO2, CH4 and O3.
The Fun Starts Here
The consensus contends that the Earth’s climate system is far too complex to be represented with something as deterministic as a Physical Model, even as this model works perfectly well for an Earth like planet missing only water and a few trace gases. They wave their arms at complexities like GHG’s, clouds, coupling between the land, oceans and atmosphere, model predictions, latent heat, thermals, non linearities, chaos, feedback and interactions between these factors, as contributing to making the climate too complex to model in such a trivial way; moreover, what about Venus? Each of these issues will be examined by itself to see what effects it might have on the surface temperature, planet emissions and the sensitivity as quantified by the Physical Model, including how this model explains Venus.
Greenhouse Gases
When GHG’s other than water vapor are added to the Physical Model, the effect on the surface temperature can be readily quantified. If some fraction of the energy emitted by the surface is captured by GHG molecules, some fraction of what was absorbed by those molecules is ultimately returned to the surface making it warmer while the remaining fraction is ultimately emitted into space manifesting the energy balance. This is relatively easy to add to the model equations as a decrease in the effective emissivity of a surface at some temperature relative to the emissions of a planet. If Ps is the surface emissions corresponding to T, Fa is the fraction of Ps that’s captured by GHG’s and Fr is the fraction of the captured power returned to the surface, we can express this in equations 7) and 8).
7) Ps = εxσT^4
8) Po = (1 – Fa)Ps + FaPs(1 – Fr)
The first term in equation 8) is the power passing through the atmosphere that’s not intercepted by GHG’s and the second term is the fraction of what was captured and ultimately emitted into space. Solving equation 8) for Po/Ps, we get equation 9),
9) Po/Ps = 1 – FaFr
Now, we can combine equation 9) with equation 7) to rewrite equation 3) as equation 3a).
3a) Po = (1 – FaFr)εxσT^4
Here, εx is the emissivity of the surface itself, which like the surface of the Moon without GHG’s is also approximately 1, where (1 – FaFr) is the effective emissivity contributed by the semi-transparent atmosphere. This can be double checked by calculating Psi, which is the power incident to the surface and by recognizing that Psi – Ps is equal to ∂E/∂t and Pi – Po.
10) Psi = Pi + PsFaFr
11) Psi – Ps = Pi – Po
Solving 11) for Psi and substituting into 10), we get equation 12), solving for Po results in 13) which after substituting 7) for Ps is yet another way to arrive at equation 3a).
12) Ps – Po = PsFaFr
13) Po = (1 – FaFr)Ps
The result is that adding GHG’s modifies the effective emissivity of the planet from 1 for an ideal black body surface to a smaller value, as the atmosphere absorbs some fraction of surface emissions, making the planet’s emissions, relative to its surface temperature, appear gray from space. The effective emissivity of this gray body emitter, ε’, is given exactly by equation 3a) as ε’ = (1 – FaFr)εx.
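The algebra of equations 7) through 13) can be verified numerically. The values of Fa and Fr below are borrowed from later in the article (0.58 and about 0.51); the cross-check confirms that the two bookkeeping routes to Po give the same answer.

```python
# Sketch of equations 7)-13): GHG capture expressed as an effective emissivity.
# Fa and Fr use the values quoted later in the article.
SIGMA = 5.67e-8

def effective_emissivity(Fa, Fr, eps_x=1.0):
    # equation 3a): ratio of planet emissions to surface emissions
    return (1.0 - Fa * Fr) * eps_x

Fa, Fr = 0.58, 0.51
T = 288.0
Ps = SIGMA * T**4                        # surface emissions, equation 7) with eps_x = 1
Po = effective_emissivity(Fa, Fr) * Ps   # planet emissions via equation 3a)

# Cross-check via the term-by-term energy bookkeeping of equation 8)
Po_check = (1 - Fa) * Ps + Fa * Ps * (1 - Fr)
print(Po, Po_check)                      # identical by construction
```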
Clouds
Clouds are the most enigmatic of the complications, but nonetheless can easily fit within the Physical Model. The way to model clouds is to characterize them by the fraction of the surface covered by them, apply the Physical Model with values of α, κ and ε specific to average clear and average cloudy skies, and then weight the results based on the specific proportions of each.
Consider the Pi term, where if ρ is the fraction of the surface covered by clouds, αc is the average albedo of cloudy skies and αs is the average albedo of clear skies, α can be calculated as equation 14).
14) α = ραc + (1 – ρ)αs
Now, consider the Po term, which can be similarly calculated as equation 15) where Ps and Pc are the emissions of the surface and clouds at their average temperatures, εs is the equivalent emissivity characterizing the clear atmosphere and εc is the equivalent emissivity characterizing clouds.
15) Po = ρεsεcPc + ρ(1 – εc)εsPs + (1 – ρ)εsPs
The first term is the power emitted by clouds, the second term is the surface power passing through clouds and the last term is the power emitted by the surface and passing through the clear sky. GHG’s can be accounted for by identifying the value of εs corresponding to the average absorption characteristics between the surface and space and between clouds and space. By considering Pc as some fraction of Ps and calling this Fx, equation 15) can be rearranged to calculate Po/Ps which is the same as the ε’ derived from equation 3a). The result is equation 16).
16) ε’ = Po/Ps = ρεsεcFx + ρεs(1 – εc) + (1 – ρ)εs
The variables εc, Fx and ρ can all be extracted from the ISCCP cloud data, as can αc and αs; moreover, the data supports a very linear relationship between Pc and Ps. The average value of ρ is 0.66, the average value of αc is 0.37 and αs is 0.16, resulting in a value for α of about 0.30, which is exactly equal to the accepted value. The average value of εc is about 0.72 and Fx is measured to be about 0.68. Considering εs to be 1, the effective ε’ is calculated to be about 0.85.
From line by line simulations of a standard atmosphere, the fraction of surface and cloud emissions absorbed by GHG’s, Fa, is about 0.58; the value of Fr as constrained by geometry is 0.5 and is measured to be about 0.51. From equation 13), the equivalent εs becomes about 0.70. The new ε’ becomes 0.85 * 0.70 = 0.60, which is well within the margin of error for the expected value of Po/Ps of 240/395 = 0.61 and even closer to the measured value from the ISCCP data of 238/396 = 0.60. When the same analysis is performed one hemisphere at a time, or even on individual slices of latitude, the predicted ratios of Po/Ps match the measurements once the net transfer of energy from the equator to the poles and between hemispheres is properly accounted for.
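The chain of averages above, from the ISCCP-derived coefficients to the final ε’, can be reproduced directly from equations 14) and 16):

```python
# Reproducing the cloud-weighted averages from equations 14) and 16) using the
# coefficients quoted in the text.
rho, alpha_c, alpha_s = 0.66, 0.37, 0.16   # cloud fraction and albedos
eps_c, Fx = 0.72, 0.68                     # cloud emissivity, Pc/Ps ratio
Fa, Fr = 0.58, 0.51                        # GHG absorbed and returned fractions

alpha = rho * alpha_c + (1 - rho) * alpha_s                     # equation 14), ~0.30
eps_cloudy = rho * eps_c * Fx + rho * (1 - eps_c) + (1 - rho)   # equation 16) with eps_s = 1, ~0.85
eps_s = 1 - Fa * Fr                                             # GHG transmission, ~0.70
eps_prime = eps_cloudy * eps_s                                  # combined effective emissivity, ~0.60

print(alpha, eps_cloudy, eps_prime)
```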
At this point, we have a Physical Model that accounts for GHG’s and clouds and accurately predicts the ratio between the average emissions of the planet and the black body surface emissions at its average temperature, spanning the entire range of temperatures found on the surface.
The applicability of the Physical Model to the Earth’s climate system is a hypothesis derived from first principles, which still must be tested. The first test, predicting the ratio of the planet’s emissions to surface emissions, got the right answer, but this is a simple test, and while questioning the method is to deny physical laws, surely some will question the coefficients that led to this result. While the coefficients aren’t constant, they do vary around a mean, and it’s the mean value that’s relevant to the LTE sensitivity. A more powerful testable prediction is that of the planet’s emissions as a function of surface temperature. The LTE relationship predicted by equation 3) is that if Po are the emissions of the planet and T is the surface temperature, the relationship between them is that of a gray body whose temperature is T and whose emissivity is ε’, which is calculated to be about 0.61. The results of this test will be presented a little later, along with justification for the coefficients used for the first test.
Complex Coupling
In the context of equation 1), complex couplings are modeled as individual storage pools of E that exchange energy among themselves. We’re only concerned about the LTE sensitivity, so by definition, the net exchange of energy among all pools contributing to the temperature must be zero. Otherwise, parts of the system will either heat up or cool down without bound. LTE is defined when the average ∂E/∂t is zero, thus the rate of change for the sum of its components must also be zero.
Not all pools of E necessarily contribute to the surface temperature. For example, some amount of E is consumed by photosynthesis and more is consumed to perform the work of weather. If we quantify E as two pools, one storing the energy that contributes to the surface temperature Es, and the energy stored in all other pools as Eo, we can rewrite equations 1) and 2) as,
1) Pi = Po + ∂Es/∂t + ∂Eo/∂t
1a) ∂E/∂t = ∂Es/∂t + ∂Eo/∂t
2a) T = κ(Es – Eo)
If Eo is a small percentage of Es, an equivalent κ’ can be calculated such that κ’E = κ(Es – Eo) and the Physical Model is still representative of the system as a whole and the value of κ’ will not deviate much from its theoretical value. Measurements from the ISCCP data suggest an average of about 1.8 +/- 0.5 W/m2 of the 240 W/m2 of the average incident solar energy is not contributing to heating the planet nor must it be emitted for the planet to be in a thermodynamic steady state.
Thus far, GHG’s, clouds and the coupling between the surface, oceans and atmosphere can all be accommodated with the Physical Model, by simply adjusting α, κ and ε. There can be no question that the Physical Model is capable of modeling the Earth’s climate and that per equation 6), the upper bound on the sensitivity is less than the 0.4C per W/m2 lower bound suggested by the IPCC. The rest of this discussion will address why the issues with this model are invalid, demonstrate tests whose results support predictions of the Physical Model and show other tests that falsify a high sensitivity.
Models
The results of climate models are frequently cited as supporting an ‘emergent’ high sensitivity; however, these models tend to include errors and assumptions that favor a high sensitivity. Many even dial in a presumed sensitivity indirectly. The underlying issue is that the GCM’s used for climate modeling have a very large number of coefficients whose values are unknown, so they are set based on ‘educated’ guesses, and it’s this that leads to bias as objectivity is replaced with subjectivity.
In order to match the past, simulated-annealing-like algorithms are applied to vary these coefficients around their expected means until the past is best matched. If there are any errors in the presumed mean values, or any fundamental algorithmic flaws, the effects of these errors accumulate, making predictions of both the future and the further past worse. This modeling failure is clearly demonstrated by the physics defying predictions so commonly made by these models.
Consider a sine wave with a gradually increasing period. If the model used to represent it is a fixed period sine wave and the period of the model is matched to the average period of a few observed cycles, the model will deviate from what’s being modeled both before and after the range over which the model was calibrated. If the measurements span less than a full period, both a long period sine wave and a linear trend can fit the data, but when looking for a linear trend, the long period sine wave becomes invisible. Consider seasonal variability, which is nearly perfectly sinusoidal. If you measure the average linear trend from June to July and extrapolate, the model will definitely fail in the past and the future and the further out in time you go, the worse it will get. Notice that only sinusoidal and exponential functions of E work as solutions for equation 1), since only sinusoids and exponentials have a derivative whose form is the same as itself, given that Po is a function of E. Note that the theoretical and actual variability in Pi can be expressed as the sum of sinusoids and exponentials and that this leads to the linear property of superposition when behavior is modeled in the energy in, energy out domain, rather than in the energy in, temperature out domain preferred by the IPCC.
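The seasonal example above is easy to demonstrate: fit a linear trend to one month of an annual sinusoid and extrapolate a year ahead. The fit is excellent inside the window and nonsense outside it.

```python
# A linear fit to a short span of a sinusoid extrapolates badly, as the
# paragraph describes. The window and period here are illustrative.
import math

period = 365.0
# Roughly a one-month window on the descending part of the annual cycle
samples = [(t, math.sin(2 * math.pi * t / period)) for t in range(150, 181)]

# Ordinary least-squares linear fit to the window
n = len(samples)
sx = sum(t for t, _ in samples)
sy = sum(y for _, y in samples)
sxx = sum(t * t for t, _ in samples)
sxy = sum(t * y for t, y in samples)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def trend(t):
    return intercept + slope * t

# Error inside the fitted window versus one year past it
err_in = abs(trend(165) - math.sin(2 * math.pi * 165 / period))
err_out = abs(trend(165 + 365) - math.sin(2 * math.pi * (165 + 365) / period))
print(err_in, err_out)   # tiny inside the window, huge a year out
```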
The way to make GCM’s more accurate is to ensure that the macroscopic behavior of the system being modeled conforms to the constraints of the Physical Model. Clearly this is not being done, otherwise the modeled sensitivity would be closer to 0.22C per W/m2 and nowhere near the 0.8C per W/m2 presumed by the consensus and supported by the erroneous models.
Non Radiant Energy
Adding non radiant energy transports to the mix adds yet another level of obfuscation. This arises from Trenberth’s energy balance which includes latent heat and thermals transporting energy into the atmosphere along with the 390 W/m2 of radiant energy arising from an ideal black body surface at 288K. Trenberth returns the non radiant energy to the surface as part of the ‘back radiation’ term, but its inclusion gets in the way of understanding how the energy balance relates to the sensitivity, especially since most of the return of this energy is not in the form of radiation, but in the form of air and water returning that energy back to the surface.
The reason is that neither latent heat, thermals nor any other energy transported by matter into the atmosphere has any effect on the surface temperature, input flux or emissions of the planet beyond the effect they are already having on these variables, and whatever effects they have are bundled into the equivalent values of α, κ and ε. The controversy is about the sensitivity, which is the relationship between changes in Pi and changes in T. The Physical Model, ascribed with equivalent values of α, κ and ε, dictates exactly what the sensitivity must be. Since Pi, Po and T are all measurable values, validating that the net results of these non radiative transports are already accounted for by the relative relationships of measurable variables, and that these relationships conform to the Physical Model, is very testable and its results are very repeatable.
Chaos and Non Linearities
Chaos and non linearities are common complications used to dismiss the requirement that the macroscopic climate system behavior must obey the macroscopic laws of physics. Chaos is primarily an attribute of the path the climate system takes from one equilibrium state to another, also called weather, which of course is not the climate. Relative to the LTE response of the system and its corresponding LTE sensitivity, chaos averages out, since the new equilibrium state itself is invariant and driven by the incident energy and its conservation. Even quasi-stable states like those associated with ENSO cycles and other natural variability average out relative to the LTE state.
Chaos may result in over shooting the desired equilibrium, in which case it will eventually migrate back to where it wants to be, but what’s more likely, is that the system never reaches its new steady state equilibrium because some factor will change what that new steady state will be. Consider seasonal variability, where the days start getting shorter or longer before the surface reaches the maximum or minimum temperature it could achieve if the day length was consistently long or short.
Non linearities are another of these red herrings and the most significant non linearity in the system as modeled by the IPCC is the relationship between emissions and temperature. By keeping the analysis in the energy domain and converting to equivalent temperatures at the end, the non linearities all but disappear.
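The value of staying in the energy domain can be shown with a short calculation. Because emissions scale as T⁴, the temperature equivalent to an average flux is not the average of the temperatures; the two example temperatures below are purely illustrative assumptions:

```python
# Illustrates why the analysis stays in the energy domain: the
# Stefan-Boltzmann relation P = sigma*T^4 is non linear, so the
# temperature equivalent to the average flux differs from the
# average of the temperatures.  The two region temperatures are
# hypothetical, chosen only to demonstrate the effect.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m2 per K^4

def flux(T):
    """Ideal black body emissions at temperature T (Kelvin)."""
    return SIGMA * T**4

def temp(P):
    """Equivalent black body temperature for flux P (W/m2)."""
    return (P / SIGMA) ** 0.25

cold, hot = 270.0, 306.0            # two hypothetical regions
t_avg = (cold + hot) / 2            # 288K, the simple average
p_avg = (flux(cold) + flux(hot)) / 2  # average of the two fluxes
print(t_avg, temp(p_avg))           # temp(p_avg) exceeds t_avg
```

Averaging fluxes first and converting to an equivalent temperature only at the end, as the Physical Model does, keeps the math linear until the final step.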
Feedback
Large positive feedback is used to justify how 1 W/m2 of forcing can be amplified into the 4.3 W/m2 of surface emissions required in order to sustain a surface temperature 0.8C higher than the current average of 288K. This is ridiculous considering that the 240 W/m2 of accumulated forcing (Pi) currently results in 390 W/m2 of radiant emissions from the surface (Ps) and that each W/m2 of input results in only 1.6 W/m2 of surface emissions. This means that the last W/m2 of forcing from the Sun resulted in about 1.6 W/m2 of surface emissions; the idea that the next one would result in 4.3 W/m2 is so absurd it defies all logic. This represents such an obviously fatal flaw in consensus climate science that either the claimed sensitivity was never subject to peer review or the veracity of climate science peer review is nil, either of which deprecates the entire body of climate science publishing.
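The ratios in this paragraph are easy to verify. A short Python sketch using only the figures cited above (240 W/m2 of accumulated forcing, a 288K surface, a claimed 0.8C per W/m2):

```python
# Checks the average 'gain' between solar forcing and surface
# emissions, and the incremental emissions increase implied by a
# sensitivity of 0.8C per W/m2, using the figures in the text.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m2 per K^4

Pi = 240.0                  # accumulated solar forcing, W/m2
Ps = SIGMA * 288.0**4       # surface emissions at 288K, ~390 W/m2
avg_gain = Ps / Pi          # ~1.6 W/m2 out per W/m2 in

# Emissions increase required to sustain a surface 0.8C warmer:
dPs = SIGMA * 288.8**4 - SIGMA * 288.0**4
print(avg_gain, dPs)        # ~1.6 and ~4.35
```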
The feedback related errors were first made by Hansen, reinforced by Schlesinger and have been cast in stone since AR1; more recently they’ve been echoed by Roe. Bode developed an analysis technique for linear feedback amplifiers and this analysis was improperly applied to quantify climate system feedback. Bode’s model has two non negotiable preconditions that were not met by the application of his analysis to the climate. These are specified in the first couple of paragraphs of the book referenced by both Hansen and Schlesinger as the theoretical foundation for climate feedback. First is the assumption of strict linearity. This means that if the input changes by 1 and the output changes by 2, then, if the input changes by 2, the output must change by 4. By using a delta Pi as the input to the model and a delta T as the output, this linearity constraint was violated since power and temperature are not linearly related; power is proportional to T⁴. Second is the requirement for an implicit source of Joules to power the gain. This can’t be the Sun, as solar energy is already accounted for as the forcing input to the model and you can’t count it twice.
To grasp the implications of nonlinearity, consider an audio amplifier with a gain of 100. If 1V goes in and 100V comes out before the amplifier starts to clip, increasing the input to 2V will not change the output value, and the gain, which was 100 for inputs from 0V to 1V, is reduced to 50 at 2V of input. Bode’s analysis requires the gain, which climate science calls the sensitivity, to be constant and independent of the input forcing. Once an amplifier goes non linear and starts to clip, Bode’s analysis no longer applies.
Bode defines forcing as the stimulus and defines sensitivity as the change in the dimensionless gain consequential to a change in some other parameter, which is also a dimensionless ratio. What climate science calls forcing is an over generalization of the concept and what they call sensitivity is actually the incremental gain, moreover; they’ve voided the ability to use Bode’s analysis by choosing a non linear metric of gain. For the linear systems modeled by Bode, the incremental gain is always equal to the absolute gain as this is the basic requirement that defines linearity. The consensus makes the false claim that the incremental gain can be many times larger than the absolute gain, which is a non sequitur relative to the analysis used. Furthermore, given the T⁻³ dependence of the sensitivity on the temperature, the sensitivity quantified as a temperature change per W/m2 of forcing must decrease as T increases, while the consensus quantification of the sensitivity requires the exact opposite.
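The T⁻³ dependence follows directly from differentiating the Stefan-Boltzmann Law: with P = εσT⁴, dT/dP = 1/(4εσT³). A sketch of this, using the article’s effective emissivity of 0.61 (the 300K comparison point is an arbitrary illustration):

```python
# Sensitivity implied by the Stefan-Boltzmann Law: differentiating
# P = eps*sigma*T^4 gives dT/dP = 1/(4*eps*sigma*T^3), which
# shrinks as T grows -- the T^-3 dependence cited in the text.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m2 per K^4

def sensitivity(T, eps=1.0):
    """Incremental temperature change per W/m2 of emissions change."""
    return 1.0 / (4.0 * eps * SIGMA * T**3)

# At 288K a black body surface (eps = 1) yields ~0.18C per W/m2;
# with the effective emissivity of 0.61 it is ~0.30C per W/m2.
print(sensitivity(288.0), sensitivity(288.0, eps=0.61))

# A warmer surface is *less* sensitive, not more:
print(sensitivity(300.0) < sensitivity(288.0))  # True
```

Both values sit far below the 0.8C per W/m2 claimed by the consensus, which is the point of the paragraph above.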
At the measured value of 1.6 W/m2 of surface emissions per W/m2 of accumulated solar forcing, the extra 0.6 W/m2 above and beyond the initial W/m2 of forcing is all that can be attributed to what climate science refers to as feedback. The hypothesis of a high sensitivity requires 3.3 W/m2 of feedback to arise from only 1 W/m2 of forcing. This is 330% of the forcing, and any system whose positive feedback exceeds 100% of the input will be unconditionally unstable, yet the climate system is certainly stable and always recovers after catastrophic natural events that can do far more damage to the Earth and its ecosystems than man could ever do in millions of years of trying. Even the lower limit claimed by the IPCC of 0.4C per W/m2 requires more than 100% positive feedback, falsifying the entire range they assert.
An irony is that consensus climate science relies on an oversimplified feedback model that makes explicit assumptions that don’t apply to the climate system in order to support the hypothesis of a high sensitivity arising from large positive feedback, yet their biggest complaint about the applicability of the Physical Model is that the climate is too complicated to be represented with such a simple and undeniably deterministic model.
Venus
Venus is something else that climate alarmists like to bring up. However; if you consider Venus in the context of the Physical Model, the proper surface in direct equilibrium with the Sun is not the solid surface of the planet, but a virtual surface high up in its clouds. Unlike Earth, where the lapse rate is negative from the surface in equilibrium with the Sun up into the atmosphere, the Venusian lapse rate is positive from its surface in equilibrium with the Sun down to the solid surface below. Even if the Venusian atmosphere were 90 atm of N2, the surface would still be about as hot as it is now.
Venus is a case of runaway clouds and not runaway GHG’s as often claimed. The thermodynamics of Earth’s clouds are tightly coupled to that of its surface through evaporation and precipitation, thus cloud temperatures are a direct function of the surface temperature below and not the Sun. While the water in clouds does absorb some solar energy, owing to the tight coupling between clouds and the oceans, the LTE effect is the same as if the oceans had absorbed that energy directly. This isn’t the case for Venus, where the thermodynamics of its clouds are independent from that of its surface enabling clouds to arrive at a steady state with incoming energy by themselves.
Even for Earth, the surface in direct equilibrium with the Sun is not the solid surface, as it is for the Moon, but is a virtual surface comprised of the top of the oceans and the bits of land that poke through. Most of the solid surface is beneath the oceans and its nearly 0C temperature is a function of the temperature/density profile of the ocean above. The dense CO2 atmosphere of Venus, whose mass is comparable to the mass of Earth’s oceans, acts more like Earth’s oceans than it does Earth’s atmosphere; thus Venusian cloud tops above a CO2 ocean are a good analogy for the surface of Earth and will be at about the same average temperature and atmospheric pressure.
Testing Predictions
The Physical Model makes predictions about how Pi, Po and the surface temperature will behave relative to each other. The first test was a prediction of the ratio between surface emissions and planet emissions based on measurable physical parameters and this calculation was nearly exact. The values of αc, αs, ρ, and εc in equations 14) and 16) were extracted as the average values reported or derived from the ISCCP cloud data set provided by GISS while εs arose from line by line simulations.
Figures 1, 2, 3 and 4 illustrate the origins of αc, αs, ρ, and εc, where the dotted line in each plot represents the measured LTE average value for that parameter. Those values were rounded to 2 significant digits for the purpose of checking the predictions of equations 14) and 16). Clicking on a figure should bring up a full resolution version.
The absolute accuracy of ISCCP surface temperatures suffers from a 2001 change to a new generation of polar orbiters combined with discontinuous polar orbiter coverage, which the algorithms depended on for consistent cross satellite calibration. This can be seen more dramatically in Figure 5, which is a plot of the global monthly average surface temperature derived from the gridded temperatures reported in the ISCCP. While this makes the data useless for establishing trends, it doesn’t materially affect the use of this data for establishing the average coefficients related to the sensitivity.
Figure 5 demonstrates something even more interesting, which is that the two hemispheres don’t exactly cancel and the peak to peak variability in the global monthly average is about 5C. The Northern hemisphere has significantly more seasonal p-p temperature variability than the Southern hemisphere owing to a larger fraction of land, resulting in a global sum whose minimum and maximum are 180 degrees out of phase with what you would expect from the seasonal position of perihelion. To the extent that the consensus assumes the effects of perihelion average out across the planet, the 5C p-p seasonal variability in the planet’s average temperature represents the minimum amount of natural variability to expect given the same amount of incident energy. In about 10,000 years, when perihelion is aligned with the Northern hemisphere summer, the p-p differences between hemispheres will become much larger, which is a likely trigger for the next ice age. The asymmetric response of the hemispheres is something that consensus climate science has not wrapped its collective head around, largely because the anomaly analysis it depends on smooths out seasonal variability, obfuscating the importance of understanding how and why this variability arises, how quickly the planet responds to seasonal forcing and how the asymmetry contributes to the ebb and flow of ice ages.
While Pi is trivially calculated as reflectance applied to solar energy, both of which are relatively accurately known, Po is trickier to arrive at. Satellites only measure LWIR emissions in 1 or 2 narrow bands in the transparent regions of the emission spectrum and in an even narrower band whose magnitude indicates how much water vapor absorption is taking place. These narrow band emissions are converted to a surface temperature by applying a radiative model to a varying temperature until the emissions leaving the radiative model in the bands measured by the satellite are matched, and then the results are aligned to surface measurements. Equation 15) was used to calculate Po, based on reported surface temperatures, cloud temperatures and cloud emissivity applied to a reverse engineered radiative model, to determine how much power leaves the top of the atmosphere across all bands. This is done for both cloudy and clear skies across each equal area grid cell and the total emissions are a sum weighted by the fraction of clouds modified by the cloud emissivity. To cross check this calculation, ∂E(t)/∂t can be calculated as the difference between Pi and the derived Po. If the long term average of this is close to zero, then COE is not violated by the calculated Po. Figure 6 shows this and indeed, the average ∂E(t)/∂t is approximately zero within the accuracy of the data. The 1.8 W/m2 difference could be a small data error, but seems to be the solar power that’s not actually heating the surface, instead powering photosynthesis and driving the weather, which need not be emitted for balance to arise. Note that ∂E/∂t per hemisphere is about 200 W/m2 p-p and that the ratio between the global ∂E/∂t and the global ∂T/∂t implies a transient sensitivity of only about 0.12C per W/m2.
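The COE cross check described here is straightforward to express in code. The sketch below uses synthetic monthly series (a seasonal cycle plus noise) as stand-ins for the ISCCP-derived Pi and Po, since the point is only the form of the test: the long term average of Pi − Po should be near zero.

```python
# Sketch of the COE cross check: dE/dt = Pi - Po should average to
# ~zero over the long term.  The series below are synthetic
# stand-ins (a seasonal cycle plus noise), NOT actual ISCCP data.
import math
import random

random.seed(0)
months = range(360)  # three decades of monthly averages
Pi = [239.0 + 10.0 * math.sin(2 * math.pi * m / 12) for m in months]
Po = [pi + random.gauss(0.0, 2.0) for pi in Pi]  # balanced + noise

dEdt = [pi - po for pi, po in zip(Pi, Po)]
mean_dEdt = sum(dEdt) / len(dEdt)
print(mean_dEdt)  # close to zero => COE is not violated
```

With real data, a persistent nonzero mean like the 1.8 W/m2 mentioned above would indicate either a data error or power consumed without being re-emitted.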
Figure 7 shows another way to validate the predictions as a scatter plot of the relative relationship between monthly averages of Pi and Po for constant latitude. Each little dot is the average for 1 month of data and the larger dots are the per slice averages across 3 decades of measurements. The magenta line represents Pi == Po. Where the two curves intersect defines the steady state which at 239 W/m2 is well within the margin of error of the accepted value. Note that the tilt in the measured relationships represents the net transfer of energy from tropical latitudes on the right to polar latitudes on the left.
The next test is of the prediction that the relationship between the average temperature of the surface and the planet’s emissions should correspond to a gray body emitter whose equivalent emissivity is about 0.61, which was the predicted and measured ratio between the planet’s emissions and those of the surface.
Figure 8 shows the relationship between the surface temperature and both Pi and Po, again for constant latitude slices of the planet. Constant latitude slices provide visibility to the sensitivity as the most significant difference between adjacent slices is Pi, where a change in Pi is forcing per the IPCC definition. The change in the surface temperature of adjacent slices divided by the change in Pi quantifies the sensitivity of that slice per the IPCC definition. The slope of the measured relationship around the steady state is the short line shown in green. The larger green line is a curve of the Stefan-Boltzmann Law predicting the complete relationship between the temperature and emissions based on the measured and calculated equivalent emissivity of 0.61. The monthly average relationship between Po and the surface temperature is measured to be almost exactly what was predicted by the Physical Model. The magenta line is the prediction of the relationship between Pi and the surface temperature based on the requirement that the surface is approximately an ideal black body emitter and again, the prediction is matched by the data almost exactly.
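The gray body prediction underlying the green curve can be checked with one line of arithmetic: with the measured equivalent emissivity of 0.61 and a 288K surface, the predicted planet emissions land within about 1 W/m2 of the accepted 239 W/m2.

```python
# Gray body check: planet emissions Po = eps*sigma*T^4 with the
# measured effective emissivity eps ~= 0.61 and T = 288K should
# reproduce the accepted ~239 W/m2 of emissions at TOA.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m2 per K^4

eps = 0.61
Ps = SIGMA * 288.0**4   # surface (black body) emissions, ~390 W/m2
Po = eps * Ps           # predicted planet emissions, ~238 W/m2
print(Ps, Po)
```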
For reference, Figure 9 shows how little the effective emissivity, ε’ varies on a monthly basis with a max deviation from nominal of only about +/- 3%. Figure 10 shows how the fraction of the power absorbed by the atmosphere and returned to the surface also varies in a relatively small range around 0.51. In fact, the monthly averages for all of the coefficients used to calculate the sensitivity with equation 16) vary over relatively narrow ranges.


The hypothesized high sensitivity also makes predictions. The stated nominal sensitivity is 0.8C per W/m2 of forcing, and if the surface temperature increases by 0.8C from 288K to 288.8K, 390.1 W/m2 of surface emissions increases to 394.4 W/m2, a 4.3 W/m2 increase that must arise from only 1 W/m2 of forcing. Since the data shows that 1 W/m2 of forcing from the Sun increases the surface emissions by only 1 W/m2, the extra 3.3 W/m2 required by the consensus has no identifiable origin, which falsifies the possibility of a sensitivity as high as claimed. The only possible origin is the presumed internal power supply that Hansen and Schlesinger incorrectly introduced to the quantification of climate feedback.
Joules are Joules and are interchangeable with each other. If the next W/m2 of forcing will increase the surface emissions by 4.3 W/m2, each of the accumulated 239 W/m2 of solar forcing must be increasing the surface emissions by the same amount. If the claimed sensitivity were true, the surface would be emitting 1028 W/m2, which corresponds to an average surface temperature of 367K. This is about 94C and close to the boiling point of water. Clearly it’s not, once again falsifying a high sensitivity.
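The arithmetic in this paragraph is easily reproduced:

```python
# If every one of the ~239 W/m2 of accumulated forcing produced
# 4.3 W/m2 of surface emissions (as a 0.8C per W/m2 sensitivity
# implies), the surface would have to emit ~1028 W/m2.  Converting
# that back to a temperature via the Stefan-Boltzmann Law:
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m2 per K^4

Ps = 4.3 * 239.0              # implied surface emissions, W/m2
T = (Ps / SIGMA) ** 0.25      # equivalent black body temperature
print(Ps, T, T - 273.15)      # ~1028 W/m2, ~367K, ~94C
```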
Conclusion
Each of the many complexities cited to defuse a simple analysis based on the immutable laws of physics has been shown to be equivalent to variability in the α, κ and ε coefficients quantifying the Physical Model. Another complaint is that the many complexities interact with each other. To the extent they do, and each by itself is equivalent to changes in α, κ and ε, any interactions can be similarly represented as equivalent changes to α, κ and ε. It’s equally important to remember that unlike GCM’s, this model has no degrees of freedom to tweak its behavior other than the values of α, κ and ε, all of which can be measured, and that no possible combination of coefficients within a factor of 2 of the measured values will result in a sensitivity anywhere close to what’s claimed by the consensus. The only possible way for any Physical Model to support the high sensitivity claimed by the IPCC is to violate Conservation Of Energy and/or the Stefan-Boltzmann Law, which is clearly impossible.
Predictions made by the Physical Model have been confirmed with repeatable measurements while the predictions arising from a high sensitivity consistently fail. In any other field of science, this is unambiguous proof that the model whose predictions are consistently confirmed is far closer to reality than a model whose predictions consistently fail, yet the ‘consensus’ only accepts the failing model. This is because the IPCC, which has become the arbiter of what is and what is not climate science, needs the broken model to supply its moral grounds for a massive redistribution of wealth under the guise of climate reparations. It’s an insult to all of science that the scientific method has been superseded by a demonstrably false narrative used to support an otherwise unsupportable agenda and this must not be allowed to continue.
Here’s a challenge to those who still accept the flawed science supporting the IPCC’s transparently repressive agenda. First, make a good faith effort to understand how the Physical Model is relevant, rather than just dismiss it out of hand. If you need more convincing after that, try to derive the sensitivity claimed by the IPCC using nothing but the laws of physics. Alternatively, try to falsify any prediction made by the Physical Model, again, relying only on the settled laws of physics. Another thing to try is to come up with a better explanation for the data, especially the measured relationships between Pi, Po and the surface temperature, all of which are repeatably deterministic and conform to the Physical Model. If you have access to a GCM, see if its outputs conform to the Physical Model and once you understand why they don’t, you will no doubt have uncovered serious errors in the GCM.
If the high sensitivity claimed by the IPCC can be falsified, it must be rejected. If the broadly testable Physical Model produces the measured results and can’t be falsified, it must be accepted. Falsifying a high sensitivity is definitive and unless and until something like the Physical Model is accepted by a new consensus, climate science will remain controversial since no amount of alarmist rhetoric can change the laws of physics or supplant the scientific method.
References
1) IPCC reports, definition of forcing, AR5, figure 8.1
AR5 Glossary, ‘climate sensitivity parameter’
2) Kevin E. Trenberth, John T. Fasullo, and Jeffrey Kiehl, 2009: Earth’s Global Energy Budget. Bull. Amer. Meteor. Soc., 90, 311–323. Trenberth
3) Bode H., Network Analysis and Feedback Amplifier Design
assumption of external power supply and active gain, 31 section 3.2
gain equation, 32 equation 3-3
real definition of sensitivity, 52-57 (sensitivity of gain to component drift)
3a) effects of consuming input power, 56, section 4.10
impedance assumptions, 66-71, section 5.2 – 5.6
a passive circuit is always stable, 108
definition of input (forcing) 31
4) Jouzel, J., et al. 2007: EPICA Dome C Ice Core 800KYr Deuterium Data and Temperature Estimates.
5) ISCCP Cloud Data Products: Rossow, W.B., and Schiffer, R.A., 1999: Advances in Understanding Clouds from ISCCP. Bull. Amer. Meteor. Soc., 80, 2261-2288.
6) Hansen, J., A. Lacis, D. Rind, G. Russell, P. Stone, I. Fung, R. Ruedy, and J. Lerner, 1984: Climate sensitivity: Analysis of feedback mechanisms. In Climate Processes and Climate Sensitivity, AGU Geophysical Monograph 29, Maurice Ewing Vol. 5. J.E. Hansen, and T. Takahashi, Eds. American Geophysical Union, 130-163.
7) M. E. Schlesinger (ed.), Physically-Based Modeling and Simulations of Climate and Climatic Change – Part II, 653-735
8) Michael E. Schlesinger. Physically-based Modelling and Simulation of Climate and Climatic Change (NATO Advanced Study Institute on Physical-Based Modelling ed.). Springer. p. 627. ISBN 90-277-2789-9
9) Gerard Roe. Feedbacks Timescales and Seeing Red, Annual Review of Earth Planet Science 2009, 37:93-115
10) Stefan, J. (1879), “Über die Beziehung zwischen der Wärmestrahlung und der Temperatur” [On the relationship between heat radiation and temperature], 79: 391–428
11) Boltzmann, L. (1884), “Ableitung des Stefan’schen Gesetzes, betreffend die Abhängigkeit der Wärmestrahlung von der Temperatur aus der electromagnetischen Lichttheorie” [Derivation of Stefan’s law, concerning the dependence of heat radiation on temperature, from the electromagnetic theory of light], Annalen der Physik, 258 (6): 291–294
Isn’t the Oregon Petition an attempt to establish consensus? If not, why quote the number of signatories, ~30k?
I have used spectral calculations for calculating the GH phenomenon in the atmosphere. For me it is good enough evidence that these calculations, based on what happens when a photon hits a GH gas molecule, can be used to calculate the radiation fluxes of the Earth. As I said, I can check or validate my calculations by comparing my results to real observations like the outgoing LW radiation at the top of the atmosphere (238 W/m2), the downward LW radiation at the surface (310 W/m2), and the total absorption in the atmosphere (395-238 = 157 W/m2). This last figure is a concrete result: it is why the 395 W/m2 of radiation emitted by the surface is only 238 W/m2 when it finally escapes from the Earth into space. By making variations in the composition of the atmosphere, I can calculate the warming effects of the individual GH gas concentrations. I just say, if the total absorption matches the observed quantity, then the portions of the GH gases are very probably correct.
Here is a figure showing the absorption effects of GH gases:
Your absorption is consistent with what I get, although I do see a little more from ozone. For my calculations, I consider most GHG’s to be well mixed vertically, except H2O and O3 which have altitude dependent concentrations. Gridded column concentrations of both are in the ISCCP data, although the vertical profiles had to be ‘guessed’ at.
From your plot, 100% of the surface emissions between 5u and 7u as well as those between 13u and 18u are absorbed by GHG’s, moreover; given how saturated these bands are, nearly every photon that exists in those bands will be absorbed by a GHG molecule, all the way up to and above TOT. Above TOT there are no clouds and only air, yet this must be the origin of all this power in the saturated absorption bands.
In the clear sky, the emitted spectrum observed at TOA has a color temperature corresponding to the surface temperature below and has significant energy in the absorption bands, where the emissions attenuation is only about 3db (1/2 the power) from the ideal Planck spectrum of a BB and only a slight bit more where there’s a lot of overlap between CO2 and H2O. In order for this energy to leave TOA, it must be in the form of photons containing half the energy of the photons in those bands that were emitted by the surface.
I claim that the only possible origin for that much energy in those bands at that high an altitude is re-emissions from GHG molecules as they return to the ground state, but the consensus claims that all energy absorbed by GHG’s is quickly ‘thermalized’. If this was the case, the narrow band emissions would be converted into broad band Planck spectrums and we should expect to see almost no energy in the absorption bands and additional energy in the transparent regions, but that is not what we observe.
Do you know of a better explanation?
Are you aware that CO2 is very poorly mixed at low altitudes, and varies greatly locally?
This is why the IPCC rejects the Beck reanalysis of the 19th and 20th century chemical analyses, which showed CO2 at more than 400 ppm in the 1930s and 1940s.
At low altitude, one can see CO2 varying from around 280 ppm to over 700 ppm.
CO2 is only a well mixed gas (ie., varying by around +/- 10 ppm) at high altitude. Indeed, this is also one reason why Mauna Loa was chosen.
Richard,
“Are you aware that CO2 is very poorly mixed at low altitudes, and varies greatly locally.”
Yes, especially near power plants and jungles. But over most of the planet (i.e. oceans), it’s less variable.
Nonetheless, CO2 is only about 1/4 of the GHG effect and the variability is limited to relatively low altitudes, so second order effects like this have little or no impact on the sensitivity, nor does it matter at all for the sensitivity derivation I’ve presented.
The article shows how a simplified model of the atmosphere as a gray body, coupled with a simplified model of the surface as a black body, combine to predict the behavior at the boundaries of the atmosphere (Pi, Po and T) to a level of accuracy and certainty that no GCM is capable of. Knowing how these measurable metrics behave across all T, Pi and Po found on the planet, the sensitivity, dT/dPi, can also be predicted based on the predicted behavior. Moreover; the predicted behavior is not ‘curve fit’ to an arbitrary mathematical abstraction, but is the required consequence of the SB Law and COE. The measurements correspond very well, even over time scales less than a month, and for averages spanning time scales of years (the larger dots in figure 8), they match to within a couple of percent.
Here is something I should have included in the article; it explains figure 8 in more detail.
http://www.palisad.com/co2/layers/
I agree that there is far less variability of CO2 over oceans, but then again this is where one sees the most water vapour and this can in turn be somewhat variable since evaporation does not simply depend upon temperature but also winds, and winds are very variable.
Whilst I would expect you to defend the model you postulate, it does concern me how inflexible and blinkered you appear to be.
Richard,
“it does concern me how inflexible and blinkered you appear to be.”
Sorry, but I’m a fervent believer in the scientific method and as hard as I’ve tried, I haven’t been able to identify any test that falsifies my hypothesis as every test I’ve tried (more than just the two I talked about in the article) confirms it.
Meanwhile, every test I’ve applied to the predictions of a sensitivity as high as claimed by the IPCC fails.
What you perceive as inflexibility is certainty.
Here is a snapshot of ocean temperatures which will give some insight into evaporation patterns and hence water vapour, but there will be much more local variability dependent upon prevailing local wind conditions.
http://geosci.sfsu.edu/courses/geol102/graphics/kareng/world.temp_07895.gif
Richard,
The ocean snapshot is exactly what I would expect.
What I’ve been trying to get someone to do is come up with a test that can falsify my hypothesis. Thus far, all anybody has been able to do is arm wave about complexity and claim that the system is too complex to model as simply as I say, yet nobody has been able to quantify how the referenced complexity affects my analysis or falsifies my hypothesis, which I will restate in more general terms:
The macroscopic thermodynamic behavior of all planets and moons must obey the same laws of physics. For the Moon, the only two laws that apply, relative to its sensitivity, are COE and the SB Law and that the Earth must obey these laws as well, moreover; these are the ONLY laws of physics that matter for establishing, with high precision, the sensitivity of the average temperature of Earth (T) to changes in forcing (Pi).
My tests confirm that no other laws of physics are required to quantify the LTE behavior of Pi, Po and T for either the Moon or the Earth to a relatively high degree of certainty.
To falsify my hypothesis, someone must craft a test that shows how SB and COE alone are insufficient to determine the LTE averages of Pi, Po and T and that other laws are required, moreover; whatever additional laws are required need to be identified.
Did you look at this yet?
http://www.palisad.com/co2/layers
co2isnotevil
Thanks for your further comments.
I am very pleased that you have posted this article and I applaud you for doing that. I consider that CTM made the right decision to publish. It is very interesting.
I am pleased that you are a firm believer in the scientific method, but the problem here is that no amount of modelling will answer the question whether CO2 is a GHG (at levels above 200 ppm) or not in a convective atmosphere on a water world such as planet Earth. This can only be established by empirical observational data, and to date we have been unable to tease out the temperature signal of CO2 over and above the natural variability of temperature. We just do not know whether it is a GHG or not (for sure it is a radiative gas whose laboratory properties are known, but Earth’s atmosphere is about as far removed from laboratory conditions as one can possibly get).
We have all but no understanding how this planet works. And that is a major handicap when seeking to model something. That is why so many assumptions are made and one simply ends up modelling the assumption, and nothing more than that.
Take a very small vertical slice of ocean and atmosphere, say the top 10 microns of the ocean and the first 50 metres of the atmosphere above. What are the energy flows in that cross section? In the top 10 microns of the ocean there is no incoming solar but there is all of the DWLWIR because of its omnidirectional nature, plus, of course, such upwelling IR from the depths of the ocean (from solar irradiance that was absorbed at depth). Even though there is no incoming solar irradiance in the top 10 microns of the ocean, there is a huge amount of energy (if DWLWIR is absorbed and capable of performing sensible work in the medium in which it finds itself). There is enough energy to result in about 16 metres of rainfall annually, which of course is not happening.
So how does this energy get sequestered to depth, and hence dissipated and diluted by volume, at a rate fast enough to prevent the extreme evaporation of the oceans? It cannot be by conduction since the energy flow is upwards, and unless we misunderstand energy flows, the energy cannot swim against the tide. It cannot be by the action of the wind, waves and swell since these are slow mechanical processes and all but non existent at wind conditions of less than BF3, certainly at less than BF2. Ocean overturning is again a slow mechanical process and in any event it is a diurnal event, so it does not operate for 12 hours of the day. So why are the oceans not being burnt off from the top down?
Consider a caldera lake, these are particularly sheltered from wind (due to the windbreak of the crater), and any overturning is a seasonal thing. Why are caldera lakes not boiled off from the top down if they are absorbing all the DWLWIR in the top few micron of the waters?
Consider how dew can linger on a calm winter’s day. Dew on the sunny side of a hollow can be burnt off within an hour or so of sun rise (even though the solar irradiance is weak in winter) but the dew droplets can linger all day on the shady side of the hollow notwithstanding the absorption of 12 hours worth of DWLWIR. Why is all this DWLWIR not evaporating and burning away the dew?
Returning to the first 50 metres of the atmosphere, CO2 is variable because of the variable nature of the biosphere, not because there happens to be the odd power station or cement works. CO2 at low altitude can be as much as 1.5 doublings of pre-industrial levels. So how is the GHE working in the first 50 metres, 100 metres, 500 metres, etc.? It is not until one reaches several thousand metres that CO2 becomes well mixed.
Over the oceans, of course, not only are there variable CO2 pockets because of plankton blooms and the like, but water vapour is equally variable at low altitude. So how is the GHE working here over the first 50 metres, 100 metres, 500 metres, 1000 metres, etc.?
What does the photonic capture/exchange/re-radiation, thermalisation look like between the top 10 microns of the ocean and the first 50 metres of the atmosphere?
Until we understand how this water world of ours works, and the precise interaction at the ocean/atmosphere interface, there is no point modelling anything. It is our lack of understanding of this interface, and of the formation and effect of clouds, that prevents any worthwhile model (i.e., one that will probably still be wrong, but one that is useful).
Finally, I would note that it is far from certain that there has been any temperature rise since the late 1930s/1940s, notwithstanding that man has emitted some 95% of all manmade emissions during this period, and CO2 has risen from ~300 to ~400 ppm. Our data has become so horribly corrupted that we can have no confidence in it, and it may be no coincidence that the area of land that has been most sampled and has the best historic record (i.e., the contiguous US) shows that the temperature today is no warmer than it was in the late 1930s/early 1940s. Whilst I would not claim that the US is a proxy for the Northern Hemisphere, there is no geographical or topographical reason why it should be an outlier. It covers many climatic zones, has mountains, valleys, plains, deserts and coastal regions, and is not unduly influenced by a particular oceanic current.
We have no worthwhile data on the SH; it is simply too sparsely sampled, with few uninterrupted historic records, such that Phil Jones correctly noted that SH temps are largely made up. He is right in that there is simply no worthwhile data fit for purpose. This means that we have no global data worth a pinch of salt and are left with just the NH to consider and study.
Whether CO2 is a GHG above 200 ppm, we will have to agree to disagree about. The effect of incremental CO2 from 200 ppm to current levels, as established by the change in Fa that would result in the Physical Model, is quite small to begin with and hard to measure, but nonetheless, it’s still finite.
“That is why so many assumptions are made and one simply ends up modelling the assumption, and nothing more than that. ”
I’ve only made one assumption, which I expressed in the form of a hypothesis, which is how science turns an assumption or even a wild guess into theory. The ‘assumption’ was that the planet’s macroscopic behavior must conform to the laws of physics.
As far as the ocean is concerned, it would be valid to consider just 3 regions: the thermocline, the cold region below it and the warm region above it. A 10 micron slice of the warm region isn’t going to be particularly representative of the whole.
There is also not as much downward radiation from the atmosphere as the consensus believes. All that’s required is to offset the difference between the incoming solar energy and the emissions of the surface: (390-240) = 150 W/m^2. Anything more than this is an illusion, and 150 W/m^2 isn’t enough to evaporate dew …
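As a sanity check, the 390 and 150 W/m^2 figures follow directly from the Stefan-Boltzmann law, assuming the standard 288 K mean surface temperature and 240 W/m^2 of average post-albedo solar input (both conventional values, not measurements of my own):

```python
# Sanity check of the 150 W/m^2 figure using the Stefan-Boltzmann law.
# Assumes a 288 K mean surface and 240 W/m^2 of absorbed solar input.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

surface_emission = SIGMA * 288**4      # surface emission at 288 K
solar_input = 240.0                    # W/m^2 absorbed by the planet
deficit = surface_emission - solar_input

print(round(surface_emission))  # 390
print(round(deficit))           # 150
```

The 150 W/m^2 is simply the gap between what the surface emits and what the planet absorbs, which is the most any atmospheric offset needs to supply in steady state.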
“We have no worthwhile data on the SH, it is simply too sparsely sampled ”
This is incorrect. We have about 3 decades of nearly complete satellite coverage of it. I have no confidence in surface measurements to do anything but calibrate satellite calculations and only by daily readings, not averages, and in regions of the world that are homogeneous over the surface being examined (pixel size, generally 10-30km on a side) and have good records. All you need to do is calibrate this in a few places across the globe and the rest of the temperatures track quite well because the same sensors are making all of the measurements.
richard, you need to
read the work of
daniel feldman
You have made more than one assumption: for example, that the Earth can be compared to the Moon; that if the Earth had an average temperature of 270K there would be no clouds and no water vapour; and that the speed of rotation makes no difference. In my opinion (and I might have been the first to question that assumption) that is a fundamental error. The Moon is essentially inert, and it may be possible to get a reasonable ballpark figure for what is going on with the Moon from a series of averages. The Earth is very different, and that is one problem with the K&T energy budget cartoon. If the Earth was like that, it would not have any Arctic sea ice, and there would be no seasonal variability in Arctic sea ice. The Antarctic would still probably have some ice because of the altitude and because the interior is well insulated from the oceans.
The reason why so many scientists incorrectly predict an ice free Arctic is that they look at the K&T energy budget cartoon and incorrectly extrapolate short lived trends on a linear basis, and fail to appreciate the geometry of the planet (the axial tilt). The axial tilt does not show up in the K&T energy budget cartoon and therein lies another problem with the over simplification of planet Earth (I have already pointed out that the planet is not immersed in a bath with equal energy being received uniformly over its entire surface 24/7 all year long).
In one of your earlier comments I seem to recall that you asserted that the atmosphere of this planet has no (or little) energy. That is wrong. The atmosphere has huge amounts of energy, though it is fairly insignificant compared with the energy stored in the oceans. The reason why our planet is warm is that the atmosphere has a huge thermal mass with significant thermal inertia. There are plenty of CO2 molecules on Mars interrupting the flow of upwelling LWIR from the surface of Mars, and there is plenty of DWLWIR from these CO2 molecules bathing the surface of Mars, but Mars is cold and without any observed radiant GHE, because it does not have a dense atmosphere with large thermal mass and large thermal inertia. Although Mars receives a lot less solar irradiance at TOA, because it does not have much in the way of clouds to reflect incoming solar, it receives at its surface about 60% of the amount of solar irradiance as does planet Earth. The problem with Mars is not that it lacks CO2 molecules, but rather that it lacks a dense atmosphere with thermal mass and thermal inertia.
Having a dense atmosphere with thermal mass and thermal inertia is particularly important, as can be seen from the Earth and Venus, and from the lack of it on Mars and the Moon.
It is fair enough to try to simplify matters, but over-simplification can become a significant problem. Until any model can reproduce the Holocene, it has little value.
You say your model cannot reproduce the Holocene; it is therefore not solid and reliable. Whether it has any worth can only be seen when the results of predictions for, say, the next 30 years of temperature trends are compared with actual observations over the next 30 years. Please post your model’s predicted year-on-year temperature trends for the next 30 years on the basis of various CO2 emission scenarios.
You are right that we have 30 years of data on the SH from satellites, but that is way too short to be useful. That data suggests that there is no correlation whatsoever between rising levels of CO2 and temperature; apart from season-to-season annual variability, temperature is essentially simply a matter of ENSO and volcanic events. That record suggests that GHGs have no material impact. You will see that if you plot CO2 against the satellite SH record.
Anyway, let’s see the predictions so we can start comparing them with the temperature as it unfolds. Please post the year-on-year predictions for the next 30 years.
Richard,
What you are calling assumptions is what I call predictions of a hypothesis, which I proceeded to test where the tests confirm those predictions and thus the hypothesis. If you want to disqualify what you consider an assumption, you need to identify a test at least as rigorous as the tests that confirm it.
It looks like you reversed the causal relationship between an Earth with no clouds or GHG’s and the 270K temperature. The 270K temperature is the result, not the cause.
Regarding the length of the day, I’ve explained many times why this makes no difference, but it doesn’t seem to be getting through, so I’ll have to yell.
I CALCULATE THE AVERAGE TEMPERATURE AS THE EQUIVALENT TEMPERATURE OF AVERAGE PLANET EMISSIONS. IN LTE, AVERAGE PLANET EMISSIONS == AVERAGE SOLAR INPUT AND WITHOUT CLOUDS OR GHG’S, THE AVERAGE SURFACE EMISSIONS == AVERAGE PLANET EMISSIONS. THE LENGTH OF THE DAY HAS NO INFLUENCE ON THE AVERAGE SOLAR INPUT, ONLY HOW SOLAR ENERGY IS DISTRIBUTED ACROSS THE SURFACE. THEREFORE, IT HAS NO INFLUENCE ON THE AVERAGE EMISSIONS OR THE AVERAGE TEMPERATURE CALCULATED FROM IT.
Regarding Mars, it’s 141M miles from the Sun vs. 93M miles for Earth. Energy drops off as 1/r^2, thus Mars receives only 43.5% as much energy as Earth, or about 149 W/m^2 averaged across the planet, which when reduced by an albedo of 0.16 becomes only 125 W/m^2. Even at the 1.6 gain of Earth, 125 is only increased to 200 W/m^2, which corresponds to an average temperature of only 244K, or about -29C. This is warmer than the average temperatures reported, which seem to be a linear average of temperature and not the equivalent temperature of average emissions; given the wide range between min and max, the equivalent temperature of average emissions is significantly warmer than the average of temperatures. Consider a min of 150K and a max of 290K. The linear average of min/max is (150+290)/2 = 220K, or about -53C, which is still warmer than the reported average of about -60C. The equivalent temperature of the average of T^4 is nearly 30C warmer, at about 248K or -25C, which would actually seem to imply about the same gain as Earth of about 1.6. This shows how misleading a linear average of temperature is, relative to the equivalent temperature of average emissions, which is more representative of the actual average. In any event, the gain is certainly greater than 1, suggesting that there definitely is a significant radiant GHG effect at work.
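The Mars arithmetic above can be reproduced in a few lines (a sketch assuming a 1366 W/m^2 solar constant at Earth and the distances, albedo and gain quoted in the text; the 150K/290K min/max pair is illustrative):

```python
# Sketch of the Mars numbers: scale Earth's solar input by 1/r^2,
# apply albedo and the assumed 1.6 gain, then compare a linear average
# of temperature against the equivalent temperature of average emissions.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4
S_EARTH = 1366.0         # solar constant at Earth, W/m^2 (assumed value)

scale = (93.0 / 141.0) ** 2            # inverse-square distance factor, ~0.435
mars_avg = S_EARTH / 4 * scale          # ~149 W/m^2 averaged over the sphere
absorbed = mars_avg * (1 - 0.16)        # ~125 W/m^2 after albedo
surface = absorbed * 1.6                # ~200 W/m^2 with an Earth-like gain
t_equiv = (surface / SIGMA) ** 0.25     # ~244 K

t_min, t_max = 150.0, 290.0
t_linear = (t_min + t_max) / 2                      # 220 K linear average
t_emission = ((t_min**4 + t_max**4) / 2) ** 0.25    # ~248 K from average T^4

print(round(mars_avg), round(absorbed), round(surface))   # 149 125 200
print(round(t_equiv), round(t_linear), round(t_emission)) # 244 220 248
```

The last line makes the key point numerically: the equivalent temperature of average emissions exceeds the linear average of temperatures whenever the min/max spread is wide.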
I don’t recall saying that the atmosphere contains no energy, but the energy it does contain is small, relative to the whole, and insignificant relative to the sensitivity which is strictly a function of the behavior of Pi at the top boundary with space and T at the bottom boundary with the surface. The only point I remember making is that once the atmosphere has stored all the energy it can (the LTE condition), the flux entering the atmosphere is equal to the flux exiting the atmosphere. In other words, the atmosphere is not an infinite source or sink of energy.
No model will be able to predict the Holocene until we have a lot more certain information about what the Sun and aerosols were doing at the time. But what I did say is that when my model is applied independently to each hemisphere, it does predict the kind of changes we see as a result of the precession of perihelion even as the total yearly solar input remains constant. Consensus climate science basically assumes the precession of perihelion has little to no effect since the total forcing arriving from the Sun is not changing and only the distribution across seasons is changing.
The 3 decades of weather satellite data is not sufficient to determine trends, but it is more than adequate to determine the climate systems response to change, or the transfer function between incident energy and the surface temperature. The transfer function doesn’t vary all that much, even between winter and summer which is a larger difference than we see between ice ages and interglacials.
What will happen over the next 30 years depends mostly on what the Sun does. If sunspot activity remains low, it will cool; otherwise, it will warm slightly or see no change at all. And BTW, there’s at least a degree or so of natural variability around the mean, and over the next 30 years a few tenths of a degree either way will be buried in that natural variability, so predicting it to the level of accuracy that would be relevant is an exercise in futility. I can also say with absolute confidence that any effects from increasing CO2 concentrations will be small and similarly buried in the noise.
As I pointed out earlier, climate science has been horribly poisoned by 3 decades of ‘science’ from both sides that doesn’t conform to the requirements of the scientific method. As a result, it has become difficult for many to tell the difference between science that does conform and science that doesn’t.
Incidentally, and you probably know this, NASA/GISS in their 1971 paper (Rasool and Schneider, Science, Volume 173), when they assessed climate sensitivity to CO2 on the basis of a model produced by Hansen, concluded that an 8-fold increase in CO2 would cause less than 2degC of warming. See their figure 1, which is illuminating.
An 8-fold increase is 3 doublings (i.e., about 300 ppm to about 2,400 ppm), so NASA/GISS were assessing climate sensitivity to CO2 at around 0.6degC per doubling.
I attach a reference to the NASA/GISS paper: http://vademecum.brandenberger.eu/pdf/klima/rasool_schneider_1971.pdf
” … assessed that an 8 fold increase in CO2 would cause less than 2degC of warming.”
This is also consistent with the calculated absorption and how that would change by doubling CO2. To get to 2C after 3 doublings, the first results in about 0.8C, the second about 0.7C and the third about 0.6C, for a total of 2.1C. A sensitivity of 0.8C for the first doubling corresponds, at 3.7 W/m^2 of equivalent forcing, to about 0.22C per W/m^2, almost in the middle of the range I predict for the sensitivity factor of about 0.25 +/- 0.05 C per W/m^2, which gives a range of about 0.9 +/- 0.2 C for the first doubling of CO2.
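The per-doubling arithmetic can be checked directly (a sketch assuming the commonly cited 3.7 W/m^2 of equivalent forcing per CO2 doubling and the 0.25 +/- 0.05 C per W/m^2 sensitivity factor claimed in the text):

```python
# Check of the per-doubling arithmetic: total warming over 3 doublings,
# the sensitivity factor implied by 0.8C for the first doubling, and the
# range implied by a factor of 0.25 +/- 0.05 C per W/m^2.
FORCING_PER_DOUBLING = 3.7   # W/m^2, commonly cited value

per_doubling = [0.8, 0.7, 0.6]            # C for doublings 1..3
total = sum(per_doubling)                 # ~2.1 C after 3 doublings

implied_factor = per_doubling[0] / FORCING_PER_DOUBLING   # C per W/m^2
lo = (0.25 - 0.05) * FORCING_PER_DOUBLING                 # low end, C
hi = (0.25 + 0.05) * FORCING_PER_DOUBLING                 # high end, C

print(round(total, 1))               # 2.1
print(round(implied_factor, 2))      # 0.22
print(round(lo, 2), round(hi, 2))    # 0.74 1.11
```

The 0.74 to 1.11 C range is the "about 0.9 +/- 0.2 C" quoted for the first doubling.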
It was the insanity about feedback, again coming from GISS (Hansen et al.), that caused them to increase the sensitivity factor to 0.8 +/- 0.4 C per W/m^2, since absorption physics alone could not support a sensitivity as high as they needed to justify the formation of the IPCC.
co2isnotevil said
>> We have about 3 decades of nearly complete satellite coverage of it. I have no confidence in surface measurements to do anything but calibrate satellite calculations and only by daily readings <<
how do you feel about satellite
data models having
to calibrate over 12 satellites since 1979?
or is it now up to 13? or their much larger
adjustments compared to
surface models?
“how do you feel about satellite data models having to calibrate over 12 satellites”
In the past, this wasn’t as good as it can be now, but the technology has matured. Just look at how Google merges multiple views of satellite imagery with aerial imagery into a seamless zoom. If you have good info on the specific sensors, and this is available, it’s not all that difficult.
ISCCP relied on continuous coverage by at least one polar orbiter as other satellites changed. The only exception to this resulted in the relatively large temperature discontinuity around 2001 (figure 5 in the article). I would have done it differently, and instead relied on continuous coverage by any satellite, which includes the geosynchronous orbiters.
not evil: we aren’t talking about
google merging images, or new
technology, we’re
talking about the fact that satellite groups
must stretch their calibrations over
about a dozen satellites in their models.
that’s a lot of uncertainty. it’s one reason, but
not the only reason, why
the anomaly changes were so big when UAH
went from v5.6 to v6. their adjustments
are much larger than the surface datasets, which
have internal consistency checks the satellite
models do not.
notice now the relatively large differences between
uah v6 and rss v4. and neither is upfront about
their measurement errors.
crackers345,
No doubt that there’s room for errors and fudging when it comes to cross satellite calibration and the ISCCP data is a case in point. In principle, it’s relatively easy since the satellites return pixel data as images and image processing software is widely available and tools are available which allow you to roll your own. Computers have gotten so much faster that this kind of processing can even be done in real time with the right hardware. The problem of merging imagery from a dozen overlapping satellites is quite a bit easier than what is done for maps imagery.
The weather satellites overlap each other quite a lot: each polar orbiter overlaps the entire field seen by all the geosynchronous satellites, which themselves overlap at the edges. There’s enough overlap to know that a pixel count of X in one satellite corresponds to Y in another. The pixel voltage is mostly linear in received power, and after the first pass of calibration, all the satellites are matched to the average linearity across the satellites. A second pass sets the absolute values and corrects for what little non-linearity is still present, by comparing calculated temperatures to hourly surface measurements for a number of high-quality surface stations in rural areas across the world.
Satellites generally change one at a time, so a new satellite can be calibrated against the remaining satellites, potentially even including the one it’s replacing. The cross-calibration is checked periodically to identify sensor drift, which can subsequently be corrected.
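The first pass of the scheme described above amounts to fitting a linear map between overlapping pixel counts. A toy sketch (this is not any agency's actual pipeline; the sensor counts below are invented, and real calibration also handles non-linearity and the second absolute pass):

```python
# Toy sketch of cross-satellite calibration: fit a linear map from one
# sensor's pixel counts to a reference sensor's counts, using pixels
# from a scene both satellites observe. Counts are invented.

def linear_fit(x, y):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Overlap region: the same scene viewed by the reference and new sensors.
ref_counts = [100.0, 150.0, 200.0, 250.0, 300.0]
new_counts = [ 55.0,  80.0, 105.0, 130.0, 155.0]   # offset/scaled response

a, b = linear_fit(new_counts, ref_counts)
calibrated = [a * c + b for c in new_counts]

print(round(a, 2), round(b, 2))   # 2.0 -10.0
print(calibrated[0])              # 100.0
```

Because the pixel voltage is mostly linear in received power, a slope and offset recovered from the overlap are enough to put two sensors on a common scale; the absolute level is then pinned in a second pass against surface stations.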
co2isnotevil – you don’t seem aware of how
temperatures are modeled by satellites. it doesn’t involve images
it involved microwave
sensors, so between satellites correlations are
not done pixel-by-pixel, image-by-image. it’s about
how one microwave sensor compares to another, not much
different than how one thermometer compares to another
on the surface. Extrapolating
over 12 or so satellites
introduces significant error, and that’s
hardly the only errors involved. this is one reason
why the head of rss said he considers surface measurements to
be more reliable than satellite measurements.
Microwave sensors are a secondary sensor, and are not available on most of the satellites used over the last 3 decades. The primary sensors shared by all weather satellites are an infrared image sensor sensitive to 1 or 2 bands, an optical image sensor, and a narrow-band infrared channel for measuring water vapor. Knowing the transfer function between the surface and space (or clouds and space), the clear-sky and partly cloudy temperatures of the surface and the cloud tops can be readily determined from emissions at TOA. All three of these images are returned as pixel maps and are basically what you see on the nightly weather report. I have this data myself and can view any of the sensors from any satellite, or the aggregation of all satellites, as a movie covering 3 decades of weather, where each frame is a snapshot of weather satellite data taken every 3 hours. The IR data is particularly cool to look at, as there is little difference between night and day, and you can watch the whole life cycle of storm systems and hurricanes.
co2, you’re still way off base.
atmospheric temperatures are not measured
by ir sensors, but by sensors that measure
microwaves emitted by oxygen molecules, then
converted into temperatures via a model.
and, no, they’re not available on all satellites. extensive
correlation calculations are involved to get a long time series.
about a dozen
with nothing
to do with any images
or pixels.
why should i believe
a correlation done over
a dozen satellites gives useful
results?
ever wonder why uah never attaches
error bars to their
monthly numbers?
Pyrgeometers can’t measure “backradiation”. They consist of a thermopile measuring a gradient across itself, and extend that into the atmosphere, 25 meters out.
They can only measure heat transfer from the device. The manual even says that it gives a negative value, but shows a calculation converting it to positive. It works with the Stefan-Boltzmann law, which means the only transfer measured is from higher to lower temperature. The atmosphere is almost always colder; inversions are the exceptions.
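The conversion being referred to can be sketched as follows. Pyrgeometer manuals typically compute a reported downwelling flux as the (usually negative) net thermopile signal plus the instrument body's own Stefan-Boltzmann emission; the sensitivity and temperatures below are illustrative assumptions, not values from any specific instrument:

```python
# Sketch of the pyrgeometer calculation: the thermopile reads the NET
# exchange (negative when the sky is colder than the instrument), and
# the manual's formula adds back the body's own emission to report a
# positive "downwelling" value. All numbers are illustrative.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4
SENSITIVITY = 10e-6      # thermopile sensitivity, V per W/m^2 (assumed)

t_body = 288.0           # instrument body temperature, K (assumed)
l_down_true = 340.0      # downward longwave actually present, W/m^2 (assumed)

net = l_down_true - SIGMA * t_body**4     # net flux at the sensor, W/m^2
voltage = net * SENSITIVITY               # raw thermopile output, V

# The manual's conversion back to a reported downwelling flux:
l_down_reported = voltage / SENSITIVITY + SIGMA * t_body**4

print(round(net))                 # -50  (negative: net flow is upward)
print(round(l_down_reported))     # 340
```

The instrument's raw signal is indeed negative here, which is the point being made; whether the reconstructed positive number should be interpreted as a physically independent flux is exactly what is in dispute.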
By the way, how do you fit 1360.8 W/m^2 with surface temperature? Try 1360.8-395. The solution is transferred heat through one square meter from solar heating; add the necessary inverse square law and what do you get?
ALL radiation is measured
as a heat transfer.
either photons or waves
Wow. That was really not a smart comment.
“ALL radiation is measured
as a heat transfer.
either photons or waves”
Please explain the difference between photons and waves, and how it is possible to tell one from the other.
Why?
What is it you don’t understand?
Another trial with the figure:
where was this figure
published?
it looks unlikely to have
passed peer review
no
answer?
Yesterday’s eclipse gives a big insight into the effectiveness of so called GHGs.
GHGs impede the passage of photons of LWIR emitted from the surface finding their way to TOA and thence being radiated to the void of space. They do not prevent that journey. GHGs are not a brick wall creating a barrier that cannot be crossed.
Thus the issue is a simple one. Does the planet during the hours of darkness have sufficient time to shed all the energy that it received during the day? If there is not sufficient time during the hours of darkness to dissipate the energy received during the day, then temperatures will slowly rise.
Under the eclipse, temperatures fell by up to 20 deg F, with 10 to 12 degF being typical. The planet was able to dissipate and get rid of a lot of heat in a very short period of time. After all, totality only lasted approx 2 1/2 minutes.
The experience under the eclipse suggests that GHGs such as CO2 may change the temperature profile of the day, and push back slightly the timing of the coldest period of the 24-hour cycle. It may be that if there were no CO2, the coldest period would be, say, 02:30 hrs, but with CO2 it is 03:00 hrs. Perhaps with more CO2, it will become 03:20 hrs, etc. But it would appear that there is no build-up of temperature, since the planet has sufficient time during the hours of darkness to get rid of all the heat received during the hours of sunlight.
Of course, further study of eclipses is required, since these provide a real opportunity to test the effectiveness of GHGs as operating in the real-world conditions of Earth’s atmosphere (not laboratory conditions).
“If there is not sufficient time during the hours of darkness to dissipate the energy received during the day, then temperatures will slowly rise.”
This is exactly what happens during the spring and summer, while the opposite happens during fall and winter.
the eclipse was too short
and local to say anything at
all about greenhouse gases.
sorry.
I thought this might be useful as a more detailed description of figure 8 and its ramifications.
http://www.palisad.com/co2/layers
RW,
The emphasis is that it precludes thermalization, which was brought up as a complication that supposedly made the planet incapable of looking like a gray body from space, and/or somehow invalidated my analysis that the LTE transfer function between the surface temperature and emissions into space is the Stefan-Boltzmann relationship with an emissivity of about 0.61. The conversation did get into a rat hole though …
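The 0.61 figure can be back-of-enveloped from the same 390/240 W/m^2 numbers used earlier in the thread (a sketch assuming a 288 K mean surface; with these inputs the ratio comes out at about 0.615, so the quoted ~0.61 reflects slightly different rounding):

```python
# Effective emissivity of the planet as seen from space: the ratio of
# emissions at TOA (~240 W/m^2) to surface emissions (~390 W/m^2 at 288 K).
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

surface_emission = SIGMA * 288**4      # ~390 W/m^2 at a 288 K surface
toa_emission = 240.0                   # W/m^2 emitted to space

epsilon = toa_emission / surface_emission
print(round(epsilon, 3))   # 0.615

# Inverting: the surface temperature implied by the TOA flux and epsilon
t_surface = (toa_emission / (epsilon * SIGMA)) ** 0.25
print(round(t_surface))    # 288
```

The inversion in the last two lines is the transfer function being argued about: given the TOA flux and a fixed effective emissivity, the Stefan-Boltzmann relationship returns the surface temperature.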
Well, I don’t think that atmospheric thermalization of absorbed photons was the issue or source of their objection to that.