By Christopher Monckton of Brenchley
In a previous post, I explained that many of the climate-extremists’ commonest arguments are instances of logical fallacies codified by Aristotle in his Sophistical Refutations 2300 years ago. Not the least of these is the argumentum ad populum, the consensus or head-count fallacy.
The fallacy of reliance upon consensus, particularly when combined with the argumentum ad verecundiam, the fallacy of appealing to the authority or reputation of presumed experts, is more likely than any other to mislead those who have not been Classically trained in mathematical or formal logic.
To the Classicist, an argument founded upon any of the Aristotelian logical fallacies is defective a priori. Nothing more need be said about it. However, few these days are Classicists. Accordingly, in this post I propose to explain mathematically why there can be no legitimate consensus about the answer to the central scientific question in the climate debate: how much warming will occur by 2100 as a result of our sins of emission?
There can be no consensus because all of the key parameters in the fundamental equation of climate sensitivity are unknown and unknowable. Not one can be directly measured, indirectly inferred, or determined by any theoretical method to a precision sufficient to give us a reliable answer.
The fundamental equation of climate sensitivity determines how much global warming may be expected to occur once the climate has settled back to a presumed pre-existing state of equilibrium after we have perturbed it by doubling the atmospheric concentration of CO2. The simplifying assumption that temperature feedbacks are linear introduces little error, so I shall adopt it. For clarity, I have colored the equation’s principal terms:
ΔT2x = ΔF2x × λ0 × G

Climate sensitivity at CO2 doubling, ΔT2x (blue), equals the product of the CO2 forcing ΔF2x (green), the Planck parameter λ0 (purple) and the feedback gain factor G (red).
The term in green, ΔF2x, is the “radiative forcing” that the IPCC expects to occur in response to a doubling of the concentration of CO2 in the air. Measurement and modeling have established that the relation between a change in CO2 concentration and a corresponding change in the net down-minus-up flux of radiation at the top of the climatically-active region of the atmosphere (the tropopause) is approximately logarithmic. In other words, each additional molecule of CO2 exerts less influence on the net radiative flux, and hence on global temperature, than its predecessors. The returns diminish.
To determine the radiative forcing in response to a CO2 doubling, one multiplies the natural logarithm of 2 by an unknown coefficient. The IPCC’s first and second Assessment Reports set it at 6.3, but the third and fourth reduced it by a hefty 15% to 5.35. The CO2 forcing is now thought to be 5.35 ln 2 = 3.708 Watts per square meter. This value was obtained by inter-comparison between three models: but models cannot reliably determine it. Both of the IPCC’s values for the vital coefficient are guesses.
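The arithmetic of this step is easy to verify. A minimal sketch in Python (the variable names are mine, not the IPCC's) computes the doubling forcing under both of the IPCC's successive coefficients:

```python
import math

def co2_forcing(coefficient, ratio=2.0):
    """Radiative forcing in W/m^2 for a given CO2 concentration ratio:
    coefficient times the natural log of the ratio (logarithmic relation)."""
    return coefficient * math.log(ratio)

f_early = co2_forcing(6.3)   # coefficient used in the first two Assessment Reports
f_later = co2_forcing(5.35)  # coefficient used in the third and fourth

print(round(f_early, 3))  # ~4.367 W/m^2
print(round(f_later, 3))  # ~3.708 W/m^2, the value quoted above
```

Because the relation is logarithmic, each successive doubling (560 to 1120 ppmv, say) adds the same forcing as the one before it, which is the "diminishing returns" point made above.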
The term in purple, the Planck or zero-feedback climate-sensitivity parameter, is denominated in Kelvin per Watt per square meter of direct forcing. This is one of the most important quantities in the equation, because both the direct pre-feedback warming and, separately, the feedback gain factor depend upon it. Yet the literature on it is thin. Recent observations have indicated that the IPCC’s value is a large exaggeration.
The Planck parameter is – in theory – the first differential of the fundamental equation of radiative transfer about 3-5 miles above us, where incoming and outgoing fluxes of radiation are equal by definition. The measured radiative flux is 238 Watts per square meter. The radiative-transfer equation then gives us the theoretical mean atmospheric temperature of 255 Kelvin at that altitude, and its first differential is 255 / (4 x 238), or 0.267 Kelvin per Watt per square meter. This value is increased by a sixth to 0.313 because global temperatures are not uniformly distributed. However, it is also guesswork, and the current Lunar Diviner mission suggests it is a considerable overestimate.
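The derivation just described can be reproduced in a few lines. This is a sketch only, using the flux and temperature values quoted above; note that the "increased by a sixth" adjustment gives exactly 0.3125, rounded in the text to 0.313:

```python
# Planck (zero-feedback) parameter: dT/dF = T / (4F), the first
# differential of the Stefan-Boltzmann relation F = sigma * T^4.
flux = 238.0         # W/m^2, measured net flux at the characteristic-emission altitude
temperature = 255.0  # K, theoretical mean atmospheric temperature at that altitude

planck_uniform = temperature / (4.0 * flux)   # ~0.267 K per W/m^2
planck_adjusted = planck_uniform * 7.0 / 6.0  # increased by a sixth for non-uniform
                                              # temperature distribution: 0.3125, ~0.313
```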
Theory predicts that the Moon’s mean surface temperature should be around 270 Kelvin. However, Diviner has now found the mean lunar equatorial temperature to be 206 K, implying that mean lunar surface temperature is little more than 192 K. If so, the theoretical value of 270 K, and thus the lunar Planck parameter, is a 40% exaggeration.
If the terrestrial Planck parameter were similarly exaggerated, even if all other parameters were held constant the climate sensitivity would – on this ground alone – have to be reduced by more than half, from 3.3 K to just 1.5 K per CO2 doubling. There is evidence that the overestimate may be no more than 20%, in which event climate sensitivity would be at least 2.1 K: still below two-thirds of the IPCC’s current central estimate.
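The non-linear effect of deflating the Planck parameter can be checked directly, since the parameter enters both the direct warming and the feedback gain factor. A sketch under the stated assumptions (the IPCC's 3.708 W/m² forcing and its feedback sum of 2.06 W/m² per Kelvin, discussed later in this post; the function name is mine):

```python
def equilibrium_sensitivity(planck, forcing=3.708, feedback_sum=2.06):
    """Equilibrium warming (K) at CO2 doubling:
    forcing * planck / (1 - planck * feedback_sum)."""
    return forcing * planck / (1.0 - planck * feedback_sum)

ipcc       = equilibrium_sensitivity(0.313)        # ~3.3 K, the IPCC central estimate
forty_pct  = equilibrium_sensitivity(0.313 / 1.4)  # Planck a 40% exaggeration: ~1.5 K
twenty_pct = equilibrium_sensitivity(0.313 / 1.2)  # Planck a 20% exaggeration: ~2.1 K
```

Because the Planck parameter appears in both the numerator and the feedback denominator, a 40% cut in it more than halves the final sensitivity, which is the point made above.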
If there were no temperature feedbacks acting to amplify or attenuate the direct warming caused by a CO2 doubling, then the warming would simply be the product of the CO2 radiative forcing and the Planck parameter: thus, using the IPCC’s values, 3.708 x 0.313 = 1.2 K.
But that is not enough to generate the climate crisis the IPCC’s founding document orders it to demonstrate: so the IPCC assumes the existence of several temperature feedbacks – additional forcings fn, denominated in Watts per square meter per Kelvin of the direct warming that triggered them. The IPCC also imagines that these feedbacks are so strongly net-positive that they very nearly triple the direct warming we cause by adding CO2 to the atmosphere.
The term in red in the climate-sensitivity equation is the overall feedback gain factor, which is unitless. It is the reciprocal of (1 minus the product of the Planck parameter and the sum of all temperature feedbacks), and on the IPCC’s values it multiplies the direct warming from CO2 by more than 2.8.
Remarkably, the IPCC relies upon a single paper, Soden & Held (2006), to establish its central estimates of the values of the principal temperature feedbacks. It did not publish all of these feedback values until its fourth and most recent Assessment Report in 2007.
The values it gives are: water-vapor feedback fH2O = 1.80 ± 0.18; lapse-rate feedback flap = –0.84 ± 0.26; surface-albedo feedback falb = 0.26 ± 0.08; cloud feedback fcld = 0.69 ± 0.38, all in Watts per square meter per Kelvin. There is also an implicit allowance of 0.15 Watts per square meter per Kelvin for the CO2 feedback and other small feedbacks, giving a net feedback sum of approximately 2.06 Watts per square meter of additional forcing per Kelvin of direct warming.
Note how small the error bars are. Yet even the sign of most of these feedbacks is disputed in the literature, and not one of them can be established definitively either by measurement or by theory, nor even distinguished by any observational method from the direct forcings that triggered them. Accordingly, there is no scientific basis for the assumption that any of these feedbacks is anywhere close to the stated values, still less for the notion that in aggregate they have so drastic an effect as almost to triple the forcing that triggered them.
Multiplying the feedback sum by the Planck parameter gives an implicit central estimate of 0.64 for the closed-loop gain in the climate system as imagined by the IPCC. And that, as any process engineer will tell you, is impossible. In electronic circuits intended to remain stable and not to oscillate, the loop gain is designed not to exceed 0.1. Global temperatures have very probably not departed by more than 3% from the long-run mean over the past 64 million years, and perhaps over the past 750 million years. A climate system with a loop gain as high as two-thirds of the value at which violent oscillation sets in cannot, therefore, be right, for no such oscillation has been observed or inferred.
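The loop-gain arithmetic is a one-liner to verify, using the feedback values tabulated above (the dictionary keys are labels of my own choosing):

```python
feedbacks = {
    "water_vapour":  1.80,
    "lapse_rate":   -0.84,
    "surface_albedo": 0.26,
    "cloud":         0.69,
    "co2_and_minor": 0.15,  # implicit allowance for CO2 and other small feedbacks
}  # all in W/m^2 per Kelvin of direct warming
planck = 0.313  # K per W/m^2

feedback_sum = sum(feedbacks.values())  # ~2.06 W/m^2 per K
loop_gain = planck * feedback_sum       # ~0.64, the closed-loop gain
gain_factor = 1.0 / (1.0 - loop_gain)   # ~2.8, multiplies the direct warming
```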
Multiplying the 1.2 K direct warming from CO2 by the unrealistically overstated overall feedback gain factor of 2.8 gives the IPCC’s implicit central estimate of 3.3 K for the term in blue, which is the quantity we are looking for: the equilibrium warming, in Kelvin, in response to a doubling of CO2 concentration.
To sum up: the precise values of the CO2 radiative forcing, the Planck parameter, and all five relevant temperature feedbacks are unmeasured and unmeasurable, unknown and unknowable. The feedbacks are particularly uncertain, and may well be somewhat net-negative rather than strongly net-positive: yet the IPCC’s error-bars suggest, quite falsely, that they are known to an extraordinary precision.
It is the imagined influence of feedbacks on climate sensitivity that is the chief bone of contention between the skeptics and the climate extremists. For instance, Paltridge et al. (2009) find that the water-vapor feedback may not be anything like as strongly positive as the IPCC thinks; Lindzen and Choi (2009, 2011) report that satellite measurements of changes in outgoing radiation in response to changes in sea-surface temperature indicate that the feedback sum is net-negative, implying a climate sensitivity of 0.7 K, or less than a quarter of the IPCC’s central estimate; Spencer and Braswell (2010, 2011) agree with this estimate, on the basis that the cloud feedback is as strongly negative as the IPCC imagines it to be positive; etc., etc.
Since all seven of the key parameters in the climate sensitivity equation are unknown and unknowable, the IPCC and its acolytes are manifestly incorrect in stating or implying that there is – or can possibly be – a consensus about how much global warming a doubling of CO2 concentration will cause.
The difficulties are even greater than this. For the equilibrium climate sensitivity to a CO2 doubling is not the only quantity we need to determine. One must also establish four additional quantities, all of them unmeasured and unmeasurable: the negative forcing from anthropogenic non-greenhouse sources (notably particulate aerosols); the warming that will occur this century as a result of our previous enrichment of the atmosphere with greenhouse gases (the IPCC says 0.6 K); the transient-sensitivity parameter for the 21st century (the IPCC implies 0.4 K per Watt per square meter); and the fraction of total anthropogenic forcings represented by non-CO2 greenhouse gases (the IPCC implies 70%).
Accordingly, the IPCC’s implicit estimate of the warming we shall cause by 2100 as a result of the CO2 we add to the atmosphere this century is just 1.5 K. Even if we were to emit no CO2 at all from 2000-2100, the world would be just 1.5 K cooler by 2100 than it would otherwise be. And that is on the assumption that the IPCC has not greatly exaggerated the sensitivity of global temperature to CO2.
There is a final, insuperable difficulty. The climate is a coupled, non-linear, mathematically-chaotic object, so that even the IPCC admits that the long-term prediction of future climate states is not possible. It attempts to overcome this Lorenz constraint by presenting climate sensitivity as a probability distribution. However, in view of the uncertainty as to the values of any of the relevant parameters, a probability distribution is no less likely to fail than a central estimate flanked by error-bars.
If by this time your head hurts from too much math, consider how much easier it is if one is a Classicist. The Classicist knows that the central argument of the climate extremists – that there is a (carefully-unspecified) consensus among the experts – is an unholy conflation of the argumentum ad populum and the argumentum ad verecundiam. That is enough on its own to demonstrate to him that the climate-extremist argument is unmeritorious. However, you now know the math. The fact that not one of the necessary key parameters can be or has been determined by any method amply confirms that there is no scientific basis for any assumption that climate sensitivity is or will ever be high enough to be dangerous in the least.