By Christopher Monckton of Brenchley
Mainstream climate scientists have been busy in the last two years, publishing updated climatological data in time for IPCC’s forthcoming Sixth Assessment Report. The availability of those recent mainstream data provides an opportunity to derive from them the midrange equilibrium climate sensitivity to doubled CO2 (ECS) that IPCC ought to be predicting on the basis of those data.
As Monckton of Brenchley et al. (2015) pointed out in a paper for the Chinese Academy of Sciences in 2015, one does not need a complex, multi-billion-dollar computer model that gobbles up a small town’s-worth of electricity and topples a dozen polar bears every time it is turned on if all one wants to know is ECS. ECS is a useful standard yardstick because the doubled-CO2 forcing is roughly equal to the total anthropogenic forcing we might expect to see this century on a business-as-usual scenario. That paper, incidentally, has been downloaded from the Chinese Academy journal’s website more often than any other in its 60-year history, by an order of magnitude.
Here is a handy, do-it-yourself ECS calculator based on the latest data.
IPCC (1990) had predicted midrange medium-term anthropogenic warming equivalent to 0.34 K decade⁻¹. In the real world, however, the least-squares trend over the 30 years 1991-2020 on the mean anomalies in two surface (GISS and HadCRUT) and two lower-troposphere (RSS and UAH) monthly temperature datasets is 0.2 K decade⁻¹, of which 70% (Wu et al. 2019), or 0.14 K decade⁻¹, was down to us.
Therefore, IPCC’s original midrange medium-term lower-atmosphere warming has proven overstated 2.4-fold. John Christy (2021), in a fascinating online talk, has recently shown (Fig. 1) that the CMIP6 models have likewise overstated midrange mid-troposphere warming 2.4-fold.

One can gain a first ballpark estimate of midrange ECS by taking the CMIP6 mean 3.7 K ECS prediction (Meehl et al. 2020) and dividing it by 2.4. Answer: 1.5 K: not enough to worry about.
To derive ECS ΔE2 more precisely by developing the ideas in the Chinese Academy paper, just seven readily-obtainable and respectably-constrained mainstream midrange quantities are needed.
1: The Planck sensitivity parameter P is the first derivative of the Stefan-Boltzmann equation: i.e., the ratio of surface temperature to 4 times the albedo-adjusted incoming top-of-atmosphere radiative flux density (Schlesinger 1988). Thus, P = 288 / (4 × 241), or 0.3 K W⁻¹ m². That uncontroversial value varies with surface temperature: but, from 1850 to doubled CO2 compared with today’s temperature, it is close enough to 0.3 to make little difference.
2: Doubled-CO2 radiative forcing ΔQ2 was given as 3.45 W m⁻², the mean of 15 CMIP5 models, in Andrews (2012). For CMIP6, Zelinka et al. (2020) give 3.52 W m⁻². Since we are using the latest mainstream data, we shall go with the latter value.
3: The exponential-growth factor H of unit feedback response with reference sensitivity is here taken, for caution, as equal to the 1.07 K⁻¹ given as the Clausius-Clapeyron increase in specific humidity with warming in Wentz (2007). This quantity, too, varies with temperature, but can safely be taken as constant over the narrow temperature interval of relevance here. In reality, the exponential growth in specific humidity is offset by the logarithmic temperature response to that growth, and IPCC (2013) estimates that at midrange all other feedbacks self-cancel. In reality, there is probably little or no growth in unit feedback response under today’s conditions. However, even if one were to assume H = 1.2, well above reality, ECS would barely change.
4: Anthropogenic forcing ΔQ1 from 1850-2020 was 2.9 W m⁻², the sum of the 3.2 W m⁻² accumulated-greenhouse-gas forcing and the 0.4 W m⁻² ozone, –0.8 W m⁻² aerosol and 0.1 W m⁻² black-carbon forcings (NOAA AGGI; Gaudel et al. 2020; Dittus et al. 2020; IPCC 2001, p. 351).
5: The anthropogenic fraction M of warming and radiative imbalance from 1850-2020 was 0.7 (Wu et al., 2019; Scafetta 2021). The Wu paper has Gerald Meehl as a co-author.
6: Transient warming ΔT1 from 1850-2020 was 1.07 K (HadCRUT5: Morice et al. 2020). Based on Wu et al., only 70% of this, or 0.75 K, was anthropogenic.
7: The Earth’s energy imbalance ΔN1 from 1850-2020 takes account of the delay in onset of warming after a forcing. Von Schuckmann et al. (2020) give the current mainstream midrange estimate, 0.87 W m⁻².

With these seven quantities (Fig. 2), all midrange, all up to date, all from mainstream climatological sources, one may not only derive a reliable midrange estimate of observational ECS directly without resorting to over-complex, insufficiently-falsifiable and error-prone computer models but also falsify the tenability of the currently-projected ECS interval 3.7 [2.0, 5.7] K (midrange Meehl et al., 2020; bounds Sherwood et al., 2020). Calculations are in Fig. 3. That simple table spells doom for the profiteers of doom.

How it works: We have now influenced climate for 170 years since 1850. Before then, our influence was negligible. From the seven quantities in Fig. 2, a vital quantity is derivable – the unit feedback response, the additional warming from feedback per degree of reference sensitivity. With that, the unit feedback response for the period from now until doubled CO2 can be found with the help of the exponential-growth factor H, whereupon ECS ΔE2 may be derived.
1850-2020: The period unit feedback response U1 is 1 less than the ratio of equilibrium sensitivity ΔE1 to reference sensitivity ΔR1: i.e., 1 less than the ratio of period warming including feedback response to period warming excluding feedback response.
Period reference sensitivity ΔR1, the direct warming before adding any feedback response, is 0.865 K, the product of the 0.3 K W⁻¹ m² Planck parameter P and the 2.9 W m⁻² period anthropogenic forcing ΔQ1.
Period equilibrium sensitivity ΔE1, the eventual warming after all short-timescale feedbacks have acted and the climate has resettled to equilibrium, is a little more complicated. It is the product of two expressions: the anthropogenic fraction M ΔT1 of observed period transient warming ΔT1 and the energy-imbalance ratio.
The energy-imbalance ratio is the period anthropogenic forcing ΔQ1 divided by the difference between ΔQ1 and the anthropogenic fraction M ΔN1 of the period Earth energy imbalance ΔN1. At equilibrium there would be no energy imbalance: the divisor and dividend would both be equal to ΔQ1. In that event, ΔE1 would be equal to M ΔT1. However, where (as at present) an energy imbalance subsists, further warming will occur even without further radiative forcing after 2020, so that ΔE1 is the product of M ΔT1 and the energy-imbalance ratio: i.e., 0.975 K.
The unit feedback response U1, the feedback response per degree of period reference sensitivity, is 1 less than the system-gain factor ΔE1 / ΔR1. It is just 0.127. Contrast this straightforward, real-world, observationally-derived midrange value with the 3.0 implicit in the following passage from Lacis et al. (2010), which encapsulates the erroneous official position:
“Noncondensing greenhouse gases, which account for 25% of the total terrestrial greenhouse effect, … provide the stable temperature structure that sustains the current levels of atmospheric water vapor and clouds via feedback processes that account for the remaining 75% of the greenhouse effect” (Lacis et al., 2010).
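For readers who want to check the 1850-2020 arithmetic, here is a short sketch in Python. The variable names are mine, not the head post's notation, and with these rounded inputs the script lands a touch below the quoted 0.975 K and 0.127 (taking the anthropogenic warming as 0.77 K rather than 0.75 K reproduces the quoted figures exactly, so the difference is one of input precision, not of method):

```python
# Sketch of the 1850-2020 step, using the head post's midrange inputs.
# Variable names are illustrative, not the head post's own notation.

P   = 288 / (4 * 241)  # Planck parameter, ~0.3 K W^-1 m^2
dQ1 = 2.9              # period anthropogenic forcing, W m^-2
M   = 0.7              # anthropogenic fraction (Wu et al. 2019)
dT1 = 1.07             # observed transient warming, K (HadCRUT5)
dN1 = 0.87             # Earth energy imbalance, W m^-2

# Reference sensitivity: direct warming before any feedback response.
dR1 = P * dQ1
# Equilibrium sensitivity: anthropogenic warming times the energy-imbalance ratio.
dE1 = M * dT1 * dQ1 / (dQ1 - M * dN1)
# Unit feedback response: 1 less than the system-gain factor.
U1 = dE1 / dR1 - 1

print(f"dR1 = {dR1:.3f} K, dE1 = {dE1:.3f} K, U1 = {U1:.3f}")
```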
2020 to doubled CO2: As with 1850-2020, so with doubled CO2 concentration compared with the 415 ppmv in 2020, begin with –
Period reference sensitivity ΔR2, the direct warming before adding any feedback response. ΔR2 is 1.054 K, the product of the 0.3 K W⁻¹ m² Planck parameter P and the 3.52 W m⁻² period anthropogenic forcing ΔQ2.
Next, feedback response is allowed for, so as to obtain ECS ΔE2. The method is to increase the 1850-2020 unit feedback response U1 in line with the exponential-growth factor H.
The unit-feedback-response ratio X is equal to exp(P ΔQ2 ln H), i.e., exp(ΔR2 ln H) or, more simply (if offensively to math purists), H^ΔR2, which is 1.074.
The unit feedback response U2 is the product of U1 and X: i.e., 0.136.
ECS ΔE2 is the product of the reference sensitivity ΔR2 to doubled CO2 and the system-gain factor U2 + 1: i.e., 1.2 K. Not 3.7 K (CMIP6: Meehl et al. 2020). Not 3.9 K (CMIP6: Zelinka et al. 2020). Just 1.2 K midrange anthropogenic global warming in response to doubled CO2, or to all anthropogenic forcings across the entire 21st century. Not much of a “climate emergency”, then, is there?
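The doubled-CO2 step can be sketched the same way, taking the head post's U1 = 0.127 as given (again, the variable names are mine):

```python
# Sketch of the doubled-CO2 step (2020 to doubled CO2).
P   = 288 / (4 * 241)  # Planck parameter, K W^-1 m^2
dQ2 = 3.52             # doubled-CO2 forcing, W m^-2 (Zelinka et al. 2020)
H   = 1.07             # exponential-growth factor
U1  = 0.127            # 1850-2020 unit feedback response (from the head post)

dR2 = P * dQ2          # reference sensitivity, ~1.05 K
X   = H ** dR2         # unit-feedback-response ratio, ~1.074
U2  = U1 * X           # unit feedback response at doubled CO2, ~0.136
ECS = dR2 * (U2 + 1)   # equilibrium climate sensitivity, ~1.2 K

print(f"X = {X:.3f}, U2 = {U2:.3f}, ECS = {ECS:.2f} K")
```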
Falsifying ECS predictions via the response ratio X: Knowing that the observationally-derived unit feedback response U1 for 1850-2020 was 0.127, it is possible to derive the value of XP implicit in any ECS prediction ΔE2P: XP = (ΔE2P / ΔR2 – 1) / U1. For instance, the 3.7 [2.0, 5.7] K ECS predicted by Meehl et al. (2020) and Sherwood et al. (2020) implies XP of 20 [7, 35]. Even the lower-bound X = 7 would suggest, untenably, that the feedback response per degree of direct warming after 2020 was an absurd seven times the feedback response per degree before 2020. The high-end ECS of 10 K predicted in several extreme papers is still less tenable, implying X = 67.
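The falsification test is equally short. Given ΔR2 ≈ 1.054 K and the observed U1 = 0.127, the X implied by any predicted ECS can be computed directly (a sketch; names are mine):

```python
# Unit-feedback-response-ratio test: the X implicit in an ECS prediction,
# given dR2 = 1.054 K and the observed U1 = 0.127 from the head post.
dR2, U1 = 1.054, 0.127

def implied_X(ecs_prediction):
    """X that a predicted ECS would require, relative to the observed U1."""
    return (ecs_prediction / dR2 - 1) / U1

for ecs in (2.0, 3.7, 5.7, 10.0):
    print(f"predicted ECS {ecs:>4} K  ->  implied X = {implied_X(ecs):.0f}")
```

This reproduces the 20 [7, 35] quoted above, and X = 67 for a 10 K prediction.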
Uncertainties are small, since by now climatology has settled on the values of the seven key parameters that are all that is needed to find ECS. If the 40 years’ rather more rapid warming from 1980-2020 were used as the basis for calculation, rather than 1850-2020, midrange ECS would rise to just 1.4 K. Even if all of the industrial-era warming were anthropogenic, ECS would be only 2 K, but it would no longer be midrange ECS based on current mainstream data.
What they got wrong: How, then, did climate scientists ever imagine that global warming would be about three times what real-world observation, reflected in their latest midrange data, would lead a dispassionate enquirer to expect?
Climate models do not embody feedback formulism directly. However, their ECS predictions reflect the error in that they show 2.4 times as much medium-term midrange anthropogenic warming as has been observed over the past 30 years, and they are predicting 3 times the realistic midrange ECS.
In 2006, in preparation for my first article on global warming, I wrote to the late Sir John Houghton, then chairman of IPCC’s science working group, to ask why it was thought that eventual global warming would be about three times the direct warming. He replied that the natural greenhouse effect – the difference between the 255 K emission temperature without any greenhouse gases and the 287 K measured temperature in 1850 – comprised 8 K reference sensitivity to greenhouse gases and 24 K feedback response thereto.
It was this expectation of 3 K feedback response to every 1 K of direct warming, making 4 K eventual warming in all, that led the modelers to expect 3 or 4 K midrange ECS.
Climatologists had forgotten the Sun was shining (Fig. 4). What they had missed, when they borrowed feedback formulism from control theory in the mid-1980s, was that the 24 K preindustrial feedback response was not solely a response to the 8 K direct warming by greenhouse gases. A large fraction of that 24 K was a response to the 255 K emission temperature that would have obtained on Earth even without any greenhouse gases.

In reality, the preindustrial reference temperature was the sum of the 255 K emission temperature and the 8 K reference sensitivity to preindustrial greenhouse gases: i.e., somewhere in the region of 263 K. Given that the 255 K emission temperature is 32 times the 8 K preindustrial reference sensitivity to greenhouse gases, a substantial fraction of the 24 K total preindustrial feedback response was due to the former, correspondingly reducing the fraction due to the latter.
Feedback is a generally-applicable property of dynamical systems (systems that change their state over time), from electronic circuits to climate. If and only if the entire preindustrial reference temperature were 8 K, with no feedback response at all to emission temperature, would it be permissible to imagine that the unit feedback response was as great as 3. Even then, it would not follow automatically that today’s unit feedback response could be anything like as great as 3.
IPCC repeated the error in its 2013 Fifth Assessment Report and is about to do so again in its forthcoming Sixth Assessment Report. It defines “climate feedback” as responding only to perturbations (mentioned five times in the definition), but is silent on the far larger feedback response to emission temperature itself. It should replace its multi-thousand-page reports with the single monster equation (Fig. 5) that consolidates the stepwise calculations in Fig. 3:
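Fig. 5 is not reproduced here, but assembling the stepwise definitions above into a single expression (my own consolidation, which may differ cosmetically from the figure) gives:

```latex
\Delta E_2 \;=\; P\,\Delta Q_2
\left[\,1 \;+\; H^{\,P\,\Delta Q_2}
\left(\frac{M\,\Delta T_1}{P\left(\Delta Q_1 - M\,\Delta N_1\right)} \;-\; 1\right)\right]
```

Substituting the seven midrange quantities of Fig. 2 yields ΔE2 ≈ 1.2 K, as in the stepwise calculation of Fig. 3.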

Would you be willing to put your name to a report to IPCC, under its Error-Reporting Protocol, notifying it that ECS has been grossly overstated and requesting correction? If so, contact me via the first word of my surname [at] mail [dot] com and let me know. For the latest mainstream midrange data on which IPCC must perforce rely rule out the rapid, dangerous warming that it has so long, so confidently, so profitably but so misguidedly predicted.
The question is, how much “global warming” does the IPCC think they can get away with “predicting”?
As long as it is far enough into the future to avoid falsification within any of the authors’ professional lifetimes…. as much as they want.
In response to Mr Cobb, IPCC will only be consistent with the recent, mainstream, midrange data cited here if it predicts 1.2 K ECS. But that would lead to its abolition.
chorus: “as much as possible”
Unfortunately the CMIP6 models ran hotter than ever before, due to an increase in the positive cloud feedback in about half of them. Many now give an ECS over 5 degrees. The IPCC will almost certainly raise its ECS estimate, not lower it. Science has nothing to do with it. Modelers dominate the field.
Mr Wojick should not despair. One of the virtues of our approach is that, after years of work, it is devastatingly simple. A single equation informed by just seven readily-available and quite well constrained quantities allows reliable estimation of ECS: and it is plain from the unit-feedback-response ratio test that ECS above 2 K is impossible. Our best estimate is 1.2 K. I shall be requiring HM Government to pass our paper to its scientists for reply: being a Lord (albeit merely by carefully choosing my parents) does have its uses. Once they realize they can’t easily answer our simple but robust calculations, they are going to become just a little less enthusiastic for the rampant destruction of capitalism that is their present policy.
“they are going to become just a little less enthusiastic for the rampant destruction of capitalism that is their present policy.”
I applaud your work and your optimism.
Judicial review of administrative action – which we and subsequently the USA inherited from the courts of equity in the 14th century – is a powerful way to get governments to pay attention. I have used it on many occasions, and am about to use it again. If HM Government cannot or will not answer our paper – which we have deliberately shortened to 1400 words to minimize official scientists’ labors – the judges will take a look.
UAH for January out
a further drop to +0.12C from a base period adjusted +0.15C in December
They have also changed the base period, so all previous graphs etc will need to be redrawn.
Trends, of course, not affected
M of B says: “The question is how much (or, rather, how little) warming all those tiny radiators in the atmosphere will cause.”
The answer is zero. The above idea is in contradiction of thermodynamics in the area of specific heat which says the energy to raise the temperature of a mass can be in “any form”. You say that if IR is included then a higher temperature will be attained.
There is no mention of this in specific heat tables, the Shomate equation, nor the NIST data sheet for CO2.
The forcing equation fails to account for the increased mass of air when the amount of CO2 is increased. The specific heat property of CO2 precludes it from causing warming.
Mr Kelly is entitled to his opinion, but I have consulted two of the world’s leading climatologists – Professor Richard Lindzen and Professor Will Happer, on the question of whether there is a greenhouse effect. They have provided plenty of evidence that there is, though it is smaller than official climatology would have us believe. We have therefore decided, as a matter of policy, not to challenge the existence of the greenhouse effect, which is repeatedly demonstrated to exist, but instead to challenge official climatology’s implementation of feedback formulism, which a) is plainly erroneous both in theory and in observation; and b) contributes thrice as much warming to current global-warming predictions as the direct warming from CO2 itself.
mkelly: If you had even basic experience in thermodynamic problems, you would understand that specific heat affects rates of change of temperature, but not the final steady-state temperature the system is trending towards.
Specific heat determines heat content. Not just the rate of change of temperature but the actual heat content, i.e. the enthalpy. And it is the enthalpy that determines the temperature.
h = h_a + (m_v/m_a)(h_g)
Tim: For a given power input, the temperature of an object will change until the output matches the input and it settles into a steady-state condition.
The factors that govern the object’s output as a function of temperature — emissivity, etc. — are NOT related to the object’s thermal capacitance (specific heat times mass).
An object with higher thermal capacitance, even if higher specific heat for a given mass, will take more time and energy change to get to the new temperature, but the steady state temperature will be the same regardless of the specific heat.
Ed,
“The factors that govern the object’s output as a function of temperature — emissivity, etc. — are NOT related to the object’s thermal capacitance (specific heat times mass).”
But they *are* related to the enthalpy.
“But they *are* related to the enthalpy.”
Nope. Look in a heat transfer text. All heat transfer modes — conduction, convection, and radiation — are functions of temperature. NONE are functions of enthalpy.
I don’t think you are listening!
“Sandipan Chowdhury
Oregon Health and Science University
The temperature dependence of enthalpy is determined by a parameter called the specific heat capacity (at constant pressure), Cp. If Cp is > 0, then enthalpy will increase with increasing temperature, whereas if it is < 0, enthalpy will decrease with increasing temperature. This is described by Kirchoff’s law of thermodynamics. One thing to remember is that Cp itself might be temperature dependent (i.e. not necessarily a constant) in which case without knowing the functional dependence of Cp on T, it might be difficult to predict beforehand how H will change with T.” (bolding mine, tpg)
If enthalpy is dependent on temperature then temperature is also dependent on enthalpy. It’s not a one-way equivalence.
You are still missing the point completely.
Two objects of the same temperature will have the same heat transfer characteristics, even with different specific heat (and so enthalpy) values.
Actually not necessarily. The emissivities and radiative surface areas must be the same for this to hold true.
P = εσA(T1^4 – T2^4)
ε – emissivity
σ – SB constant
A – radiating surface area
Nothing about enthalpy in the equation you cite! (Which has been my point all along)
I mentioned up thread: “The factors that govern the object’s output as a function of temperature — emissivity, etc. — are NOT related to the object’s thermal capacitance (specific heat times mass).” So I have dealt with the point you make. And I obviously was talking about “otherwise equal” bodies with different specific heat values.
The general case for net radiative transfer is FAR more complex than the equation you give, which is not even correct for simple cases of non-unity emissivity and large parallel plates. You also have to get into view factors, etc. It was really a grind when taking heat transfer in the pre-personal-computer age…
Ed,
You are lost in the forest because of the trees. The original comment (in part) was: “The above idea is in contradiction of thermodynamics in the area of specific heat which says the energy to raise the temperature of a mass can be in “any form”. You say that if IR is included then a higher temperature will be attained.”
You tried to refute that and now you are trying to create a smoke screen.
Enthalpy certainly is directly related to temperature. And enthalpy is also directly related to specific heat. And *any* kind of heat energy can raise the heat content (enthalpy) of a mass and therefore the temperature.
The basic fact is that CO2 in the atmosphere can only re-radiate IR that has left the earth – and that radiation leaving the earth cools the earth. Thus the CO2 in the atmosphere can *never* raise the temperature of the earth back to where it was before the earth cooled by radiating toward space, because the CO2 doesn’t capture all the IR from the earth and doesn’t re-radiate everything back.
You can get lost in all fine details if you want but none of the details will refute the basic fact.
Tim:
I’m afraid it’s you that is lost. Let’s review the bidding, shall we?
mkelly objected to Monckton’s steady-state analysis by bringing up the utterly irrelevant point of “specific heat”. First of all, all matter has some specific heat, meaning there must be a non-zero transfer of energy to change that matter’s temperature.
He cannot grasp (and seemingly you as well) that IR radiation can be one source of that energy transfer. It’s as if he and you have never seen an IR heat lamp at a restaurant warming station.
But the bigger point is that differences in specific heat, other things being equal, can affect the rate of change of temperature toward a new steady-state temperature, but it will not affect what that steady-state temperature is. That is why I say his issue is irrelevant.
You posted a pretty standard form of the radiative transfer equations, which included the factors of temperature and emissivity, as I claimed. It did NOT include the factor of enthalpy. The equation YOU posted showed that there is NO net transfer between objects of the same temperature, even if they have different specific heats, and so enthalpies. That is why I keep saying that specific heat and enthalpy are irrelevant to this analysis.
Let’s use your equation to examine your bigger point. Let’s compare a completely transparent atmosphere to one with IR radiatively active gases. We’ll take T1 as the surface temperature, and T2 as the temperature of the matter radiating back toward the surface.
With a transparent atmosphere, radiation passes both ways through the atmosphere unobstructed. That is, all of the thermal radiation from the surface passes directly to space, and the only downward radiation comes from space.
In this case T2 is 3K (-270C), which radiates back virtually nothing. In the case of our real atmosphere with IR absorbing/emitting gases that absorb some (not all) of the upward radiation, the effective value of T2 is much higher, reducing the net transfer from the surface. For a given solar power input, this results in a higher surface temperature.
When I formally studied engineering heat transfer, we were told to use as a good approximation for the effective radiating temperature of a clear night sky in temperate zones a value of 253K (-20C). This is the T2 in your equation.
You can use a simple kitchen infrared thermometer pointed up at a clear sky to determine the concentration of these gases (it’s water vapor that is variable) as documented here:
https://journals.ametsoc.org/view/journals/bams/92/10/2011bams3215_1.xml
The more water vapor, the higher temperature the thermometer registers — and it registers it because its sensing element really does have a higher temperature.
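The difference the sky's effective radiating temperature makes can be checked against the equation quoted upthread. A sketch, assuming unit emissivity and view factor and a 288 K surface (the chosen numbers are mine, for illustration only):

```python
# Net radiative loss per unit area from a 288 K surface to skies at
# different effective radiating temperatures (Stefan-Boltzmann law;
# emissivity and view factor taken as 1 for illustration).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_loss(t_surface, t_sky):
    """Net upward radiative flux density, W m^-2."""
    return SIGMA * (t_surface**4 - t_sky**4)

print(net_loss(288, 3))    # transparent atmosphere: sky at ~3 K -> ~390 W m^-2
print(net_loss(288, 253))  # clear temperate night sky at 253 K -> ~158 W m^-2
```

The back-radiating sky cuts the surface's net loss by more than half, which is the sense in which a radiatively active atmosphere leaves a warmer surface for a given solar input.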
I appreciate the effort here assuming that “official climatology” is correct. But let me try and make a few points.
Unfortunately, many of Mr Haas’ points are incorrect, which is why we decided not to challenge official climatology on the radiative properties of greenhouse gases. That is a specialist task for physicists expert in that field.
Mr Haas’ point 1 is incorrect in that the water vapour feedback is necessarily small in the boundary layer, where CO2/H2O spectral overlap is significant. Immediately above the boundary layer, there has been no change in specific humidity in 80 years (Kalnay et al. 1996, updated), and in the mid-troposphere, where spectral overlap is not a significant phenomenon, specific humidity has declined for 80 years, probably by subsidence drying (Paltridge et al., 2009). Therefore, there is no reason to expect large enough water-vapour feedback to induce instability.
As to the dry adiabatic lapse-rate (Mr Haas’ point 2), the expected change is small enough to fall within measurement error.
Mr Haas’ point 3 is incorrect in that only 9-11 K of the 33 K greenhouse effect is attributable to direct warming forced by greenhouse gases. The rest is feedback response – but the feedbacks respond chiefly to emission temperature itself, which is about 25 times larger than the reference sensitivity to the preindustrial noncondensing greenhouse gases.
Mr Haas’ point 4 is incorrect in that the radiative forcing from doubled CO2 only drives 1 K reference sensitivity. As the head posting shows, the additional, indirect warming from feedback response is very small, so that one would not expect a large warming from CO2. Mr Haas is correct insofar as one would indeed expect much warmer weather worldwide if official climatology’s predictions were correct.
Mr Haas’ point 5 does not accord with theory. CO2 does not act either like a blanket or like a sheet of glass in a greenhouse. When a photon in one of its characteristic wavebands interacts with it, a dipole moment arises in the bending vibrational mode of the molecule, so that a quantum resonance occurs. That oscillation is by definition heat. It is as though a tiny radiator had been turned on. I have consulted the world’s ranking expert – Professor Will Happer – on this point, and I had the privilege of editing a paper by him on the subject. The matter is not in doubt.
Mr Haas’ sixth point is that compared with water vapour the warming effects of CO2 must be small. That is incorrect, because the Clausius-Clapeyron increase in specific humidity (about 7% per Kelvin: Wentz et al. 2007) does not occur above the boundary layer, and specific humidity is declining in the mid-troposphere, so that the predicted hot spot is not found. One would, therefore, only expect a modest water-vapour feedback: and that is indeed all that has occurred, because the total industrial-era feedback response to anthropogenic reference sensitivity of 0.87 K is only 0.11 K.
Now, even if I were incorrect on all these points, there is evidence for the position I have stated in the published journals. Therefore, if I were to try to rely on Mr Haas’ arguments, reviewers would dismiss my paper out of hand.
That is why we have kept the focus narrow. We have shown that feedback response in the real world has been very small and that, even after allowing for some Clausius-Clapeyron growth in unit feedback response with reference sensitivity, ECS will only be 1.2 K – not enough to do harm. We simply have no need to get into arguments with climatology on points such as those made by Mr Haas, all of which are, alas, at best debatable.
Approximately 50% of the sun’s energy arrives as SWIR or near IR. H2O has some significant absorption in this area whereas CO2 has none. I would suggest that most of the feedback assigned to CO2 via H2O increase is really H2O’s absorption of this part of the sun’s radiation.
Also, I am not certain the effect of “back radiation” can even warm the earth. I know the warmists think that the total radiation equals the “sun+back radiation” but this does not take into account that the earth also cools when it radiates the initial energy that CO2 absorbs. That leads one to the conclusion that CO2 can at best, have no role in raising the temperature. Thermodynamically, because of T^4, hot bodies are not heated by cooler ones. In fact, just the opposite, cool bodies are heated by hot ones until equilibrium is achieved and both bodies heat/cool each other equally. This also assumes equal radiating surface of both bodies which I’m not sure applies to the earth – CO2 relationship.
On a cold day, with a body temperature of 36 C, I reach for a blanket with a temperature of 10 C. I wrap it around me and – hey presto – a colder body (the blanket) has warmed a warmer body (me).
The thing I liked was the notation on the increases saying that it may take 65 years to occur.
Let’s revisit several topics I raised in July of 2019 concerning an earlier version of Lord Monckton’s paper.
Topic #1 — The role of Soden & Held’s water vapor feedback mechanism in IPCC Climate Modeling
Soden and Held’s postulated feedback mechanism serves a need to explain how an increase in surface temperature of 1C to 1.5C over some period of time can be amplified into a projected 2.5C to 3C increase, thus turning an uncomfortable outcome into a disastrous outcome for the earth and for all humanity. (Or so the climate activists say.)
It was said in late July of 2019 that the state of science is such that it is currently impossible to directly observe a temperature feedback mechanism operating in real time inside the earth’s atmosphere, in the same way we would observe a feedback mechanism operating inside an electronic circuit on a test bed in a laboratory.
It was also said that the presence and characteristics of such atmospheric feedback mechanisms, if they actually exist, must be inferred from other kinds of observations.
This raises a question concerning the latest Monckton paper. Is using a test circuit of the kind that Lord Monckton’s team developed an appropriate means of assessing the true nature and quantity of the real-world amplification mechanism? (Assuming that mechanism exists in some form.) Why or why not?
At any rate, because their postulated feedback mechanism cannot be observed directly in the real-world atmosphere, Soden and Held use output from the climate models as one source of data among several in quantifying the theoretical sensitivity of earth’s climate system to the continuous addition of CO2 and methane to the atmosphere.
That Soden and Held take this approach raises another obvious question.
If the climate models take account of their postulated feedback mechanism, either directly or indirectly — and if Soden and Held are using model outputs as inputs into their sensitivity analysis — then is circular logic being used in characterizing and quantifying their theoretical mechanism?
So I ask the question, if Soden and Held’s feedback mechanism is being incorporated into the IPCC models in some way, either directly or indirectly, then how is this being done?
Is it being accomplished directly through inclusion of feedback modeling algorithms operating within the main model’s dynamic core, or is it being done indirectly through the choice of values being assumed for the model’s physical parameterizations?
If it is being accomplished directly through inclusion of feedback modeling algorithms operating within the main model’s dynamic core, then on what basis in atmospheric physics is the algorithm being formulated?
On the other hand, if it is being done indirectly through the choice of values being assumed for the model’s physical parameterizations, then on what basis are the assumed values being chosen?
It would be very useful if someone having extensive knowledge of how the IPCC climate models are designed and written could explain to us how Soden & Held’s feedback mechanism is being incorporated into the model designs, and also how that incorporation is being accomplished; i.e., through direct or indirect means, or possibly through some combination of the two.
Topic #2 — Can positive feedback amplification be activated by natural processes?
Let’s get back to another issue I raised earlier: the possible existence of processes other than the continuous addition of CO2 and methane to the atmosphere which can raise the temperature at the earth’s surface and thus activate a positive water vapor feedback mechanism.
Back in July of 2019, both Joe Born and Nick Stokes said that Soden and Held’s postulated mechanism can be activated by other processes which can raise the surface temperature.
If Soden & Held’s water vapor feedback mechanism does in fact exist, but sources of a rise in surface temperature other than CO2 and methane can in fact cause it to become active, then what are the implications for the IPCC models if positive feedback amplification can be activated by natural processes?
For one example, if the addition of CO2 and methane to atmosphere is amplifying water vapor’s GHG effects, and if some natural process is also amplifying those GHG effects at the same time, then on what basis does one quantify and allocate the respective effects of each possible source?
This is why a non-physics-based model is questionable at best. All kinds of assumptions must be made in creating the model if the physics is not well known. How do you judge the appropriateness of the assumptions? The wide spread in model results indicates that the assumptions in each model vary significantly, so how does one judge which are right and which are not?
It is a well-respected principle of debate — or at least it was before the 21st Century came around — that the more assumptions which have to be made to prove the case for an argument, the weaker is the case for that argument.
Let’s take a look at how this principle might be applied to today’s climate science enterprise.
What I did with my Year 2100 GMT Prediction Envelope graphical analysis was to condense a few smaller assumptions into a single large assumption, one which appears quite reasonable on its face.
The three smaller assumptions behind the graphical analysis, the ones which have been combined into the single larger assumption shown on the illustration, are these:
(1) HADCRUT4 GMT is a reasonably accurate index for past trends in global warming between 1850 and 2019. Not perfect, but good enough for the purpose intended.
(2) The HADCRUT4 GMT record reflects the combined effects of all natural and anthropogenic climate change processes as these have evolved through time between 1850 and 2019.
(3) If the physical processes which influence the HADCRUT4 GMT record remain operative, the patterns of change which might be seen between 2020 and 2100 will be similar to those which occurred between 1850 and 2019.
These three smaller assumptions are visible on the graphic as visual elements. They have been combined into a single larger assumption: “The HADCRUT4 Global Mean Temperature record includes the combined effects of all natural and anthropogenic climate change processes as these have evolved through time, and that similar processes will operate from 2020 through 2100.”
How does my deliberately simplistic analytical approach contrast with the IPCC’s massively complex and expensive analytical approach?
Today’s IPCC climate models contain tens if not hundreds of physical assumptions, some of which can have a very significant impact on the output of the models, but which are highly speculative as to their physical reality.
Moreover, as far as we can tell, subjective judgement often plays a greater role in deciding which IPCC model runs look reasonable, and which don’t, than does a science-based qualitative and quantitative confidence in the physical assumptions.
On the other hand, the approach used for my Year 2100 graphical analysis is a drastically simpler alternative to the hundreds if not thousands of model runs the IPCC produces for their AR series of climate reports.
In total, the IPCC model runs produce a prediction envelope which contains a wide range of possible outcomes for 2100. My consciously simplistic graphical analysis also delivers a prediction envelope, but with a narrower range of possible outcomes.
In evaluating the most likely outcomes for the year 2100, the IPCC approach relies upon a number of partially or wholly subjective decisions which occur at various points within their analytical process.
In contrast, in looking at the range of possible values for that year, my own analysis relies on only one subjective judgement made at the conclusion of the analytical process; i.e., that a + 2 C rise above pre-industrial by 2100 is the most likely scenario, simply because it is the one which most closely follows the GMT trend pattern of 1850-2019.
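The extrapolation step behind that judgement can be sketched in a few lines of code. This is only an illustration of the method, assuming a round figure of roughly 1.0 K of warming over 1850-2019 rather than any exact HADCRUT4 value:

```python
# Illustrative sketch of the trend-extrapolation judgement described above.
# The ~1.0 K warming over 1850-2019 is an assumed round figure, not an
# exact HADCRUT4 number.
def extrapolate_trend(warming_so_far_k, start_year, end_year, target_year):
    """Extend a constant linear warming rate to a future year."""
    rate = warming_so_far_k / (end_year - start_year)  # K per year
    return warming_so_far_k + rate * (target_year - end_year)

# Roughly 1.0 K of warming 1850-2019, carried forward at the same rate
print(round(extrapolate_trend(1.0, 1850, 2019, 2100), 2))  # about 1.48 K
```

Any such extrapolation stands or falls with the assumption that the 1850-2019 pattern persists, which is exactly the single large assumption stated above.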
What is the bottom line?
Which of the two analytical methods, mine or the IPCC’s, produces a more credible predictive outcome, if we apply these criteria:
— Conformance with long-established principles of scientific debate.
— The numbers and impacts of the physical assumptions being made.
— The transparency of the analytical process being employed.
— The validity of the overall scientific methodology being employed.
In my own subjective opinion, I win. Hands down. With that said, I shall reward myself with another slice of cherry pie, secure in the knowledge it was kept cool with highly reliable nuclear generated electricity.
If you could give me a credible reason for assuming that ocean temperatures were measured prior to 2004, I could give you another thumbs up.
EdB, I use the HADCRUT4 global mean surface temperature record as a quantification tool of convenience for producing what is deliberately and consciously intended to be a drastically simplified approach to predicting a range of Year 2100 surface temperature outcomes.
Every one of the hundreds of lower tier physical assumptions anyone could ever make about why the earth has been warming at a moderate pace for the last one-hundred and seventy years — all of those lower tier assumptions are subsumed within my Assumption #2, “The HADCRUT4 GMT record reflects the combined effects of all natural and anthropogenic climate change processes as these have evolved through time between 1850 and 2019.”
That said, some facets of the analysis might be cause for further discussion depending upon one’s own knowledge of how carbon emissions have increased since 1950.
Massive volumes of CO2 have been added to the atmosphere since 1950, and yet the rate of global warming as measured by HADCRUT4 isn’t that much more than what was being experienced before 1950.
That the increase in global mean surface temperature isn’t much faster raises obvious questions about the credibility of the IPCC’s models.
Cherry pie is indicated, but you have made more assumptions than you may think. HADCRUT4 has a huge number of assumed parameters, and probably even code divergences which depend on the model’s partial output temperature. Clearly the atmosphere does not work this way; the real parameters are of Christopher’s form, a single equation (however complex) which behaves in an entirely predictable manner.

We are in a place where “feedbacks” are assumed rather than measured, and they are difficult or impossible to measure. The usual descriptions of these feedbacks are generally not in accordance with thermodynamics, e.g. cooler bodies passing sensible heat to hotter bodies, and are therefore very dubious from square one. An exponential response to real heat is possible, but it has yet to be shown and is probably very nearly linear. This is all contained in Christopher’s equation. Forget the “feedback” model: an exponential response is not really positive feedback anyway; it is only seen that way by badly written computer programs, because it is mathematically simple to implement. The reason is that the algorithm is not defined first and then turned into software; rather, the programmer does not know the correct algorithm, so he uses the output as an input to get the result he wants.
“If Soden & Held’s water vapor feedback mechanism does in fact exist”
It actually goes back to Arrhenius 1896. It was a big part of his calculation.
“For one example, if the addition of CO2 and methane to atmosphere is amplifying water vapor’s GHG effects”
No, that is backwards. Water vapor amplifies the driving effects of CO2 and methane. And yes, it will amplify any other driver.
“It would be very useful if someone having extensive knowledge of how the IPCC climate models are designed and written could explain to us how Soden & Held’s feedback mechanism”
It isn’t done explicitly at all. But the GCMs model physics. More warmth evaporates more water, which more hinders IR, etc. It happens in the GCMs, just as in reality. No intervention is needed to make that happen.
But there is a lot less water vapour feedback than the models have imagined, probably because specific humidity is only increasing at the Clausius-Clapeyron rate in the boundary layer, above which there has been no trend in 80 years. In the mid-troposphere specific humidity has been declining for 80 years, which is why there is not much of a tropical mid-troposphere hot spot, without which water vapour feedback is likely to be small.
And Arrhenius realized ten years after his 1896 paper that he had overegged the water-vapour feedback, and divided his warming prediction by 3. Time for IPCC et hoc genus omne to do the same.
Nick, thanks for your comments. My reply follows:
—————
Beta Blocker said: If Soden & Held’s water vapor feedback mechanism does in fact exist ….
Nick Stokes replied: It actually goes back to Arrhenius 1896. It was a big part of his calculation.
For my response to that comment by Nick, I refer to Lord Monckton’s comment:
Monckton of Brenchley: But there is a lot less water vapour feedback than the models have imagined, probably because specific humidity is only increasing at the Clausius-Clapeyron rate in the boundary layer, above which there has been no trend in 80 years. In the mid-troposphere specific humidity has been declining for 80 years, which is why there is not much of a tropical mid-troposphere hot spot, without which water vapour feedback is likely to be small.
And Arrhenius realized ten years after his 1896 paper that he had overegged the water-vapour feedback, and divided his warming prediction by 3. Time for IPCC et hoc genus omne to do the same.
—————
Beta Blocker said: For one example, if the addition of CO2 and methane to atmosphere is amplifying water vapor’s GHG effects, and if some natural process is also amplifying those GHG effects at the same time, then on what basis does one quantify and allocate the respective effects of each possible source?
Nick Stokes replied: No, that is backwards. Water vapor amplifies the driving effects of CO2 and methane. And yes, it will amplify any other driver.
My question should have been written as follows: “If the effects of the addition of CO2 and methane to the atmosphere are being amplified by the water vapor feedback mechanism, and if some natural process is also enabling that feedback mechanism at the same time, then on what basis does one quantify and allocate the respective effects of each possible driver source?”
—————
Beta Blocker said: It would be very useful if someone having extensive knowledge of how the IPCC climate models are designed and written could explain to us how Soden & Held’s feedback mechanism is being incorporated into the model designs, and also how that incorporation is being accomplished; i.e., through direct or indirect means, or possibly through some combination of the two.
Nick Stokes replied: It isn’t done explicitly at all. But the GCMs model physics. More warmth evaporates more water, which more hinders IR, etc. It happens in the GCMs, just as in reality. No intervention is needed to make that happen.
I take your response to mean that modeling of the feedback mechanism’s postulated effects is being handled among a variety of internal algorithms whose outputs are affected in one way or another by the choice of values being assumed for the model’s physical parameterizations.
As someone who has spent part of my career working as a software QA specialist for projects in the nuclear industry, I would very much like to examine the software design documentation which currently exists for the IPCC’s climate models.
In response to Beta Blocker, we have taken the radical approach of actually deriving the feedback strength of the climate system from 170 years of real-world measurements. It is so small that one can actually leave it out of the climate-sensitivity calculation without much error.
For myself, I view the IPCC’s climate models as being very expensive video games whose primary function is to produce pretty graphics which appear to the uninformed person to have the look and feel of real science.
In that same regard, my own Year 2100 prediction envelope graphic is as much a commentary on how mainstream climate science arguments are being formulated and presented as it is an analysis of where GMT trends are heading.
I think what Lord Monckton has done in chasing this error baked into the IPCC thinking is really, really good!
Using the IPCC formula and showing its errors will be a game changer.
I understand that some people here do not like the concept of using data from the IPCC and argue about how it all works. But what better way of showing how catastrophically wrong the IPCC and its scientists are, than by using their data against them!
However, the Hansen feedback analogy was flawed at the outset, and using it means the analysis is still flawed. Positive feedback cannot be used.
If Lord Monckton would be so kind as to ask his learned control expert to jot down a few lines and formulae about how a system with positive feedback could exist, we would all be most grateful.
Perhaps the difficulty is multiple conflicting systems: the basic flaw as described in the head post, and feedback systems which cannot be described sufficiently clearly in such a simple manner.
More clarity would be welcome.
Keep up the good work LoB!
Steve Richards is very kind. Our calculations are simple, but it has taken years to refine them to the point where just about anyone with a little effort and goodwill can follow the argument. The scientific paper now in draft (in case our present version is rejected) is only 1400 words long. But, if there is any rationality in the world, those 1400 words spell doom for the doomsters.
It is astonishing how many people imagine that feedback amplifiers with positive feedback cannot be stable. If the feedback fraction of the output signal is well below unity, the system can of course be stable.
To convert the feedback fraction f to the system-gain factor A, use this equation:
A = 1 / (1 – f).
Obviously, as f approaches unity, instability will arise. But if f is, say, below 0.5 (and we calculate from observation that it is not much above 0.1), then A is the sum of the infinite convergent series {f^0, f^1, …}, under the convergence criterion 0 < f < 1.
Indeed, historically, this particular infinite convergent series was the first to be summed in closed form.
It is quite easy to prove the sum of the series using nothing more complex than a little linear algebra. In one of our annexes, we prove it by two different methods, and we also tested it at a government laboratory, just to make assurance doubly sure.
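For readers who want to check the algebra themselves, here is a minimal numerical sketch (not the annex proof itself) confirming that the closed form A = 1/(1 − f) matches the partial sums of the geometric series for 0 < f < 1:

```python
# Check that the system-gain factor A = 1 / (1 - f) equals the sum of
# the convergent geometric series f^0 + f^1 + f^2 + ... for 0 < f < 1.
def gain_closed_form(f):
    return 1.0 / (1.0 - f)

def gain_series(f, terms=200):
    return sum(f ** k for k in range(terms))

for f in (0.1, 0.5):
    assert abs(gain_closed_form(f) - gain_series(f)) < 1e-9

print(gain_closed_form(0.1))  # about 1.11: f near 0.1 barely amplifies
print(gain_closed_form(0.5))  # 2.0: still stable, well short of runaway
```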
I’ve pointed this out in other messages. In the case of positive feedback:
Output = input x [G/(1-GH)]
Where H is the feedback and G is the open loop gain.
If the product GH is > 0 then you will have a runaway condition, no matter how small you make H.
Making H small only determines how fast the runaway happens. If you will, “H” determines the slope of the increasing output.
If you want to say that making H small makes the slope so small it can’t be detected then I might buy that. But that also means that the increase in temperature we will see will also be so small that it will be undetectable since it would be within the uncertainty interval.
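One way to probe this disagreement numerically is to iterate the loop directly. The sketch below is a generic linear feedback loop, not a climate model, and the values of G, H and x are arbitrary illustrations; on this simple assumption the iteration settles to G/(1 − GH) whenever |GH| < 1:

```python
# Iterate y = G * (x + H * y) and watch whether the output settles.
# Generic linear-feedback sketch; G, H and x are arbitrary test values.
def closed_loop(x, G, H, steps=1000):
    y = 0.0
    for _ in range(steps):
        y = G * (x + H * y)
    return y

G, H, x = 1.0, 0.1, 1.0          # loop gain G*H = 0.1, well below unity
y = closed_loop(x, G, H)
assert abs(y - G * x / (1 - G * H)) < 1e-9  # settles to the closed form
print(round(y, 6))  # 1.111111
```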
In my response to Nick Stokes above, I remarked that having spent part of my career as a software QA specialist in the nuclear industry, I would really like to examine whatever software design documentation exists for the IPCC’s climate models. I have to suspect there isn’t very much of it that is ‘nuclear grade’ in how well it does the job of describing and documenting how the models do what they do.
I am absolutely certain of that, Beta. Just look at the Covid model: it is so bad as to be laughable. It is easy for a student to write some poor software and get the result he wants. It is hard to write good software which produces a genuine result; the underlying algorithm has to be correct first! Software nonsense leads to instant PhDs, which is very sad, and shows the state of the universities and the lack of basic ability in the departments. Publish or be damned has a lot to answer for, as does the unworkable “peer review” process, where one incapable programmer passes another very bad scientist. The clue is easy: present the exact algorithm first, then the alleged results. None of them can do this! That is exactly where Christopher has it right.
I think that the biggest change to thinking is the realisation that thermodynamics does not work on current temperature; it works on absolute temperature. Thus the idea that the temperature at some point in the past, perhaps 1850, is important is shown to be flawed. Once one starts with absolute temperature, all the new ideas which Christopher and his group have discovered become much clearer. Very well done; progress against the mass view is possible!
Christopher Monckton:
I have greatly enjoyed your articles on WUWT over the years including this one, but I want to raise a potential quibble with your observation that “climatologists had forgotten the Sun was shining”. Perhaps your explanation-in-a-nutshell was not meant to apply specifically to the Hansen et al 1984 paper, but if it was I’d like to offer an alternative (and admittedly speculative) explanation of the thinking going into that paper (and follow on thinking) that goes like this:
A1) Hansen et al (HEA) knew the sun was shining but did not have the theory and/or computer resources and/or patience to shmoo its effect over the entire range of insolation (0-1361 W/m^2), corresponding to a temperature range of 0-287 K (assuming no internal heating from radioactive decay), nor to shmoo the effect of CO2 from 0-200% of the 1980 value.
A2) Lacking these things, they opted for the two simpler tasks of estimating the result of two forcings relative to 1980 conditions: a 2% increase in insolation and a doubling of CO2. To estimate the results they simulated the earth’s climate using a 3D climate model called model II. For each of the two simulations, they started the model with 1980 conditions, applied one of the forcings, and then ran the simulation through 35 simulated years.
A3) The outputs of the model II simulations were roughly a 4 degree warming even though only about 1.2 degrees of warming would be expected on the theoretical basis of radiative energy balance as a direct result of either of the two forcings alone without follow-on effects from increased water vapor, cloudiness, and snow/ice cover (albedo).
A4) Based on (A3), they concluded that ECS was between 3 and 4 (4/1.2=3.3).
Assuming the model II simulator correctly modeled reality (which clearly it did not), I don’t see anything wrong with the HEA thinking and I don’t think it can be said that the team had forgotten the sun was shining when doing that paper.
In your correspondence with Sir John Houghton, he outlines ECS thinking something like this:
B1) There is 8 K direct warming (reference sensitivity).
B2) There is a 32 K difference between emission temperature with greenhouse gases (287 K) and without (255 K).
B3) This implies an ECS of 4 (=32/8)
I wonder if this thinking actually developed the other way around, like this:
C1) HEA gave an ECS of 4.
C2) There is a 32 K difference between emission temperature with and without greenhouse gases (same as B2 above).
C3) At this point climatologists did forget the sun was shining and used (C1) and (C2) to derive 8 K direct warming.
C4) Climatologists then begin tweaking their models to give the “known correct” 8 K direct warming and ECS of 4.
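For concreteness, the arithmetic in chains B and C is the same relation read in opposite directions, as this trivial sketch (illustrative only) shows:

```python
# Chains B and C both use ECS = greenhouse_difference / direct_warming.
greenhouse_difference = 32.0                    # K, per (B2) and (C2)

# Chain B: start from 8 K direct warming and derive ECS
ecs_b = greenhouse_difference / 8.0             # (B3): gives 4

# Chain C: start from ECS = 4 and derive direct warming
direct_warming_c = greenhouse_difference / 4.0  # (C3): gives 8 K

print(ecs_b, direct_warming_c)  # 4.0 8.0
```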
Ric