The New Pause Pauses

By Christopher Monckton of Brenchley

The New Pause has paused. As in July, so in August, there has been a zero trend in global warming for 7 years 11 months according to the UAH satellite lower-troposphere dataset –

For completeness, here is the whole dataset since it went live in December 1978 –

Woe, woe and thrice woe! [Britain has had Mediterranean weather, and we like it, which is why we holiday there: I am about to give a piano recital in Malta]. Lake Mead is drying up!! [through over-extraction, not global warming]. The whole of Europe, including the customarily rain-sodden United Kingdom, is in the worst drought evaaah!!! [since the medieval warm period]. Rivers are running dry!!!! [nothing new there: it’s called summer]. Temperatures are higher than climate scientists ever done thunk!!!!! [warming is little more than a third of the originally-predicted rate]. To keep the lights on in London during a recent heatwave, the British grid authority had to pay more than $11,300 per MWh to Putin’s profit!!!!!! [compared with $30 per MWh at the coal-fired power stations wastefully and needlessly torn down to “Save The Planet”]. And it’s all because global heating!!!!!!! [actually weather]. And it’s a mast year for oaks!!!!!!!! [such heavy crops of acorns occur every few years]. Even the trees are alarmed!!!!!!!!! [Nope].

The Grand Master’s Oak at Harrietsham, Kent, is laden with acorns in this mast year

It’s not global warming. It’s regional weather, resulting chiefly from the prolonged La Niña that has contributed in no small part to the New Pause in global warming over the past eight years, and from a sudden southerly airflow from the Sahara.

The Marxstream media uncritically blamed global warming for the drought, just as a few years ago they blamed it for the floods. Come on, comrades – it’s one or the other but not both. In reality, floods are more likely than droughts when the weather warms, because more water is evaporated from the ocean by the warmer weather, whereupon, by the Clausius-Clapeyron relation, the capacity of the atmospheric space to carry water vapor increases with temperature, moistening the air. As far back as 1981 it was already being reported (Nicholson et al.) that the Sahara had shrunk by 300,000 km2 as moister air allowed desert margins to bloom in areas where humans had not been able to settle in living memory.
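The roughly 7%-per-Kelvin growth in the air’s water-carrying capacity implied by the Clausius-Clapeyron relation is easy to verify. A minimal Python sketch, using the Magnus approximation to saturation vapor pressure (an assumption of this sketch, not a formula given in the text):

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation to saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# Fractional growth in the air's water-carrying capacity per 1 K of warming near 15 C
e15 = saturation_vapor_pressure(15.0)
e16 = saturation_vapor_pressure(16.0)
print(f"{100 * (e16 / e15 - 1):.1f}% more water vapor per kelvin")  # about 6-7% per K
```

The increase per Kelvin is somewhat temperature-dependent, but near typical surface temperatures it sits in the 6–7% range.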

Another reason for lingering floods and droughts is that the very tall offshore windmills now being installed at crippling expense to taxpayers in subsidies grossly interfere with the laminar flow of the wind at altitudes now exceeding that of the spire of Salisbury Cathedral, Britain’s tallest, slowing both high-pressure and low-pressure systems down and leading to more prolonged and intense weather of all kinds.

What, then, is the simplest possible demonstration that global warming was, is and will continue to be small, slow, harmless and net-beneficial? Anything complicated will either baffle the 99% of the population uncomfortable with equations or allow the usual suspects to make it look more complicated still, just to make the bafflement certain.

As the repeated long Pauses suggest, despite the continuing perturbation by anthropogenic greenhouse gases the climate is in near-perfect thermostasis, or temperature equilibrium. The chief sensitivity-relevant direct forcings by noncondensing greenhouse gases and the chief indirect or feedback forcings, particularly by additional water vapor in the air, operate on timescales of hours, days or years at most. Even IPCC admits this.

Therefore, we may obtain a simple, first-order estimate of the likely rate of global warming by assuming that because all sensitivity-relevant forcings act on timescales of years at most there is little or no unrealized global warming in the pipeline as a result of our past sins of emission. On that assumption (which will be qualified later) such further warming as we can expect will chiefly arise not from the influence of our past sins of emission but from our future emissions.

Assuming solar irradiance of 1363.5 W m–2, mean surface albedo 0.29, mean surface emissivity 0.94 and the Stefan-Boltzmann constant 5.6704 x 10–8 W m–2 K–4, the emission or sunshine temperature that would obtain near the surface if there were no greenhouse gases in the air at the outset would be 259.58 K by the Stefan-Boltzmann equation. That is the naïve method favoured by climatologists, who exclude both the fact that albedo would be half of the current 0.29 if there were no greenhouse gases (for there would be no clouds) and the somewhat countervailing fact of Hölder’s inequalities between integrals, which climatologists too often neglect.
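As a check on the arithmetic, the 259.58 K sunshine temperature follows directly from the Stefan-Boltzmann equation with the stated inputs:

```python
# Emission (sunshine) temperature by the Stefan-Boltzmann equation,
# using the figures stated in the text
SOLAR = 1363.5        # total solar irradiance, W m^-2
ALBEDO = 0.29         # mean surface albedo
EMISSIVITY = 0.94     # mean surface emissivity
SIGMA = 5.6704e-8     # Stefan-Boltzmann constant, W m^-2 K^-4

absorbed_flux = SOLAR * (1 - ALBEDO) / 4   # averaged over the sphere
t_emission = (absorbed_flux / (EMISSIVITY * SIGMA)) ** 0.25
print(f"{t_emission:.2f} K")  # 259.58 K
```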

Based on the changes in greenhouse-gas concentrations in Meinshausen (2017) and the formulae for greenhouse-gas forcings in IPCC (2007, table 6.2), directly-forced warming by naturally-occurring noncondensing greenhouse gases to 1850 was 7.52 K.

In 1850, the 267.1 K reference temperature (before allowing for feedback response) was the sum of 259.58 K and 7.52 K. However, the HadCRUT5 observed temperature at the equilibrium in 1850 (there would then be no trend in global temperature for the following 80 years) was 287.5 K. Therefore, in 1850 the system-gain factor – the ratio of equilibrium temperature after feedback response to reference temperature before it – was 287.5 / 267.1, or 1.0764.

Compare the system-gain factors in 1850 and today. There has been 1.04 K HadCRUT5 warming since 1850. The anthropogenic forcing since 1850 was 3.23 W m–2 (NOAA AGGI). The period reference sensitivity was then the product of that period anthropogenic forcing and the Planck parameter 0.3 K W–1 m2: i.e., 0.97 K.

Therefore, today’s reference temperature (the temperature before accounting for feedback response) is 267.1 + 0.97 = 268.07 K, and the current equilibrium temperature is 287.5 + 1.04 = 288.54 K. The current system-gain factor is then 288.54 / 268.07, or 1.0764, just as it was in 1850. Nonlinearity bores, take note: on the basis of mainstream, midrange data, it is possible that nonlinearity in feedback response over time in the industrial era is zero.
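The two system-gain factors can be verified in a few lines, using only the figures given above:

```python
# System-gain factors in 1850 and today, from the figures in the text
T_EMISSION = 259.58    # emission temperature, K (Stefan-Boltzmann step above)
T_GHG_1850 = 7.52      # directly forced warming by natural noncondensing GHGs, K

ref_1850 = T_EMISSION + T_GHG_1850     # 267.1 K reference temperature in 1850
eq_1850 = 287.5                        # HadCRUT5 observed equilibrium temperature, K
gain_1850 = eq_1850 / ref_1850

ref_now = ref_1850 + 3.23 * 0.3        # + anthropogenic forcing x Planck parameter
eq_now = eq_1850 + 1.04                # + observed HadCRUT5 warming since 1850
gain_now = eq_now / ref_now

print(f"system-gain factor 1850: {gain_1850:.4f}; today: {gain_now:.4f}")  # both 1.0764
```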

Now, it may be argued that no allowance has been made for the delay in the realization of warming caused by the vast heat capacity and slow overturning of the oceans. However, the delay is to a large extent already allowed for in IPCC’s estimates that the principal forcings and feedbacks are short-acting – over periods of years at most. Indeed, the principal feedback – the water-vapor feedback – has a timescale of hours only.

Furthermore, the ocean, like any heat-sink, acts as a buffer. It delays the return of warming to the atmosphere from the mixed layer, so that any warming in the pipeline will be distributed harmlessly over centuries to millennia.

To first order, then, feedbacks in the industrial era are not growing stronger with temperature. The Le Chatelier principle applies: there are checks and balances – such as the earlier onset of tropical afternoon convection with warming noted by Eschenbach, or the growth of Antarctic ice extent, ditto – that tend to keep the climate near-perfectly thermostatic.

Therefore, even if anthropogenic greenhouse-gas forcing were to continue to increase in a near-perfect straight line over the next 78 years at the rate of about 0.033 watts per square meter per year that has prevailed over the past three decades (NOAA AGGI), the reference temperature in 2100 would be today’s 268.07 K plus (78 x 0.033 x 0.3) K, or 268.84 K. Applying the so-far constant system-gain factor 1.0764 gives an equilibrium temperature of 289.4 K for 2100.
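Carrying the arithmetic through at full precision, using only the figures above:

```python
# First-order projection to 2100 on the stated assumptions
ref_now = 268.07    # today's reference temperature, K
eq_now = 288.54     # today's equilibrium temperature, K
gain = 1.0764       # system-gain factor, held constant

ref_2100 = ref_now + 78 * 0.033 * 0.3   # 78 yr at 0.033 W m^-2 yr^-1 x 0.3 K W^-1 m2
eq_2100 = ref_2100 * gain
# prints 268.84 K and 289.4 K: roughly 0.8-0.9 K above today's equilibrium
print(f"reference 2100: {ref_2100:.2f} K; equilibrium 2100: {eq_2100:.1f} K")
```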

The bottom line is that we can perhaps expect as little as 289.4 – 288.5 = 0.9 K more warming in the rest of this century to 2100. Not exactly planet-threatening. And that is on the basis of midrange, mainstream data suggesting that – very much as one might expect a priori of a system bounded by the atmosphere and the ocean – in the presence of very small direct warming the system-gain factor, the measure of the potency of all the feedback processes acting on the climate system, has not changed and will not change in the industrial era.

Climatology gets its elementary control theory wrong in neglecting the fact – which, though it annoys certain trolls infesting the comments section, is nonetheless objectively true – that the feedback processes subsisting at any moment must perforce respond equally to each Kelvin of the reference temperature then obtaining. Feedbacks do not respond solely to that tiny fraction of reference temperature directly forced by greenhouse gases.

Climatology’s error has many serious consequences. Not the least of these is the notion that one may usefully express individual feedback strengths in Watts per square meter per Kelvin of the change in reference temperature that is reference sensitivity rather than of the absolute reference temperature, which is the sum of emission temperature and all natural and anthropogenic reference sensitivities.

There are two problems with climatologists’ approach. The first is that, like it or not, feedback processes respond to the entire reference temperature. The second is that for calculus to succeed it is necessary to know the equation either of the absolute system-gain factor, whereupon differential calculus will yield its first derivative, or of the first derivative itself, whereupon integral calculus will yield the original equation.

However, the form of the potentially relevant equations is unknown and unknowable. Worse, the underlying data informing the potentially relevant equations are neither known nor knowable to anything like a sufficient precision to provide any legitimate scientific basis whatsoever for making the various profitably exaggerated global-warming predictions spewed out by the computer models of climate.

Pat Frank, in one of the most important papers ever published in global-warming climatology, demonstrated that fact definitively in 2019 after Professor Carl Wunsch had reviewed it and had not been able to find any error in it sufficient to prevent publication. The paper has stood unrefuted in the scientific literature since then, though online there have been some spectacularly half-baked attempts to overthrow it, chiefly on the part of climate Communists but even, in one or two unfortunate instances, on the part of skeptics unfamiliar with the fact that they are unfamiliar with the relevant math.

Dr Frank’s admirable paper, which was rejected 13 times by the climate-Communist gatekeepers of the once-learned journals before it met an honest and competent reviewer, proves that the published uncertainty in a single one of the thousands of input variables informing the general-circulation models – the low-cloud fraction – is so large that, when that uncertainty is propagated over this century, any predicted global warming or cooling of less than 15 K compared with the present is mere guesswork. Dr Frank’s paper formally proves that result using standard and well-established statistical methods.
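Dr Frank’s result rests on standard root-sum-square propagation of a per-step calibration error. Purely as an illustration of the mechanism – the specific figures below (a ±4 W m–2 annual long-wave cloud-forcing calibration error against about 33.3 W m–2 of total greenhouse forcing, scaled by a 0.42 × 33 K linear emulation of the models) are as commonly quoted in summaries of the 2019 paper and are assumptions of this sketch, not re-derived here – the growth of the uncertainty envelope over an 81-year projection looks like this:

```python
import math

# Assumed figures, as commonly quoted in summaries of Frank (2019)
PER_STEP_ERROR = 0.42 * 33.0 * 4.0 / 33.3   # about +/-1.66 K per annual simulation step

def propagated_uncertainty(years):
    """Root-sum-square growth of an uncorrelated per-step calibration error."""
    return math.sqrt(sum(PER_STEP_ERROR ** 2 for _ in range(years)))

print(f"after 81 years: +/-{propagated_uncertainty(81):.0f} K")  # +/-15 K
```

Because uncorrelated per-step uncertainties add in quadrature, the envelope grows with the square root of the number of steps, which is why a modest annual calibration error swamps a centennial projection.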

By a different method, we may likewise demonstrate the incompetence of the general-circulation models to predict global warming. We begin with table 7.10 of IPCC (2021).

The table lists the principal sensitivity-relevant temperature feedbacks. IPCC, thanks to climatology’s asinine error of physics, denominates feedback strengths λ (here in purple) in Watts per square meter per Kelvin of reference sensitivity, rather than of reference temperature.

Such a choice might be pardonable if, as is often the case in electronics, the perturbation signal were a very large fraction of the entire input signal. Today, however, the perturbation signal is minuscule: it is just 7.52 + 0.97 = 8.49 K in 268.07 K, or about 3% of the input signal.

Derivation of ECS via the differential system-gain factor is in red, while derivation of ECS based on IPCC’s data but via the absolute system-gain factor is in green. The two methods, of course, both show identical values of ECS. However, there are several problems with IPCC’s method, as the derivations therefrom show.

Problem 1: Though at midrange the +2.06 W m–2 K–1 sum of the individual feedback strengths, combined with the Planck parameter (expressed by IPCC as though it were a “feedback” of –3.22 W m–2 K–1), equals the published –1.16 W m–2 K–1 net feedback strength, the lower and upper bounds of the individual strengths do not sum to the published totals. No doubt there are good reasons, but the discrepancy adds to the already enormous uncertainty in the interval of feedback strengths.

Problem 2: The Planck parameter P forms part of the reference frame for derivation of equilibrium temperatures: it should, therefore, more properly be expressed in K W–1 m2 of reference temperature. P is the first derivative of the Stefan-Boltzmann equation with respect to absolute surface temperature (288 K today) and top-of-atmosphere radiative flux density (242 W m–2 today): P = 288 / (4 x 242) = 0.3 K W–1 m2, close enough to the reciprocal of IPCC’s current midrange value of 3.22 W m–2 K–1.

Problem 3: The Planck parameter is known to a far smaller uncertainty than the ±6.5% imagined by IPCC, for it is derived from a ratio of absolute quantities whose values are well constrained. Take today’s surface temperature as 288 ± 2 K, and the top-of-atmosphere radiative flux density as 242 ± 2 W m–2. Then the Planck parameter, using IPCC’s reciprocal form, falls on 3.36 [3.31, 3.41] W m–2 K–1, an interval of about ±1.5%, and not IPCC’s ±6.5%. Rectifying that error would correct one of the many daftnesses evident in the table, by which it appears that, on the absolute basis, smaller feedback strengths engender larger ECS values.
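The Planck-parameter arithmetic, including the bounds, is easy to verify (taking the extremes of the ratio at the stated ±2 error bars):

```python
t_surface = 288.0   # surface temperature, K (+/- 2 K)
flux_toa = 242.0    # top-of-atmosphere radiative flux density, W m^-2 (+/- 2 W m^-2)

planck = t_surface / (4 * flux_toa)          # 0.298 K W^-1 m^2, i.e. about 0.3
mid = 4 * flux_toa / t_surface               # IPCC's reciprocal form, W m^-2 K^-1
lo = 4 * (flux_toa - 2) / (t_surface + 2)    # least flux with greatest temperature
hi = 4 * (flux_toa + 2) / (t_surface - 2)    # greatest flux with least temperature
half_width = 100 * (hi - lo) / (2 * mid)     # half-width of the interval, percent

print(f"{mid:.2f} [{lo:.2f}, {hi:.2f}] W m^-2 K^-1; about +/-{half_width:.1f}%")
```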

Problem 4: Until IPCC (2021), it had long been thought that the CO2 forcing was known to within ±10%. It was thus thought to be reasonably constrained. However, though Andrews (2012), based on the 15 models then available, concluded that the midrange doubled-CO2 forcing was 3.45 W m–2, IPCC now says it is 3.93 W m–2, an increase of 14%, well outside what had been thought to be the uncertainty interval of the doubled-CO2 forcing. If the uncertainty in the CO2 forcing is as large as IPCC’s increase compared with previous reports implies, a fortiori the uncertainty in the strength of the feedback forcing is greater.

Problem 5: IPCC shows the cloud feedback as positive. However, the primary effect of the increased cloud cover that is to be expected with warming – i.e., an increase in the Earth’s albedo – is of course a cooling effect, more than enough to overwhelm the warming effect of clouds inhibiting radiation to space at night.

Problem 6: The absolute total feedback strengths implicit in IPCC’s ECS interval actually decline as its estimate of ECS increases. The reason is that the absolute feedback strengths are a great deal smaller than the grossly uncertain and hence meaningless differential feedback strengths, and are accordingly smaller in relation to the Planck parameter than the differential feedback strengths.

Problem 7: The checksum lower-bound and midrange values of ECS derived from IPCC’s feedback strengths by the standard control-theoretic method confirm its stated lower-bound 2 K and midrange 3 K. The method shown is accordingly a fair representation of IPCC’s method. However, the upper-bound value 11.5 K thus calculated in the table is more than double IPCC’s stated 5 K value. The reason is that the shape of the response curve of ECS in the presence of feedback is rectangular-hyperbolic, so that, at imagined closed-loop gain factors (feedback responses as fractions of ECS) exceeding 0.5, runaway warming would be expected. But runaway warming does not arise, or we should certainly have noticed by now. Instead, there has been a succession of long Pauses with brief bursts of el Niño-driven warming in between. These Pauses, then, provide readily-comprehensible evidence that the runaway warming confidently predicted by the climate Communists is simply not occurring. Hence the shrieks of the Kremlin’s shills in comments.
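The rectangular-hyperbolic response is readily illustrated: with a hypothetical doubled-CO2 reference sensitivity of 1.18 K (3.93 W m–2 × 0.3 K W–1 m2), equilibrium sensitivity ECS = 1.18 / (1 – h) blows up as the imagined closed-loop gain factor h approaches 1:

```python
# ECS = reference sensitivity / (1 - h): rectangular-hyperbolic in the closed-loop gain h
ref_sensitivity = 1.18   # hypothetical doubled-CO2 reference sensitivity, K (3.93 x 0.3)

for h in (0.3, 0.5, 0.65, 0.83, 0.95):
    ecs = ref_sensitivity / (1 - h)   # runs away as h -> 1
    print(f"h = {h:.2f}  ->  ECS = {ecs:.1f} K")
```

The steep right-hand limb of the hyperbola is why small overestimates of feedback strength translate into wildly inflated upper-bound sensitivities.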

Problem 8: The runaway global warming arising from the rectangular-hyperbolicity of the response curve combined with IPCC’s excessive estimate of feedback strength at the upper bound renders ECS unconstrainable by models. For instance, the upper-bound estimate, far too large to be credible, elevates the implicit closed-loop gain factor h = 1 – 1 / A to 0.83, implying that five-sixths of ECS is forced by feedbacks, and only one-sixth by reference sensitivity directly forced by the noncondensing greenhouse gases. In an essentially thermostatic system, any such conclusion is so inherently implausible as to be nonsense.

Problem 9: The interval of the system-gain factor as derived on the differential basis is 2.7762 [1.8783, 5.8824], but that interval is meaningless. When deriving the uncertainty in feedback strength and thus in the system-gain factor, it is necessary to do the sums on the basis that the Sun is shining and that, therefore, feedbacks respond to the entire input signal and not just to any perturbation therein. Climatology’s method does not take explicit account of the fact that feedbacks respond to the entire reference temperature. The interval of the absolute system-gain factor implicit in IPCC’s table takes that fact into account. It is 1.0822 [1.0781, 1.0872].

Problem 10: Very small changes in the total feedback strength and hence in the system-gain factor would deliver the very large ECS interval imagined by IPCC. The bounds differ from the midrange estimates by little more than 0.5%: yet they would be enough to generate the absurdly elevated and absurdly broad 3.4 [2.2, 11.5] K interval of ECS implicit in IPCC’s overwrought data for feedback strengths. However, given the uncertainties in the data and the propagation of those uncertainties over time, climatologists cannot constrain the bounds either of the feedback strength or of the system-gain factor anything like as tightly as within 0.5%. This is one of the most serious problems with the GCMs’ global-warming predictions. It would have been spotted decades ago if it had not been for climatology’s error of physics.

Consider the minuscule interval of IPCC’s implicitly-predicted absolute system-gain factor (the entire interval is only about 1% of the midrange estimate) in the light of the very large (±15 K) ECS uncertainty envelope given in Dr Frank’s paper, and it becomes all too evident that, whatever other purposes the general-circulation models may have, they are of no value whatsoever in attempting to constrain ECS. In this respect, they are costly guesswork machines that could be inexpensively replaced with a set of dice without any significant loss of rigor.

These numerous problems cannot be brushed aside by maintaining that one can do feedback calculations by the differential just as well as by the absolute method. As we have seen, doing those calculations by the differential method has the effect of concealing many of the problems briefly described above. IPCC’s method, then, provides no satisfactory basis for the decision of scientifically-illiterate governments panicked by fear of Rufmord (reputational death) at the hands of climate Communists to commit the economic and political hara-kiri that is now occurring.

Eventually the Marxstream media will realize that they can no longer get away with concealing the fact that the root cause of the surge in Siberian gas and Chinese lithium-carbonate prices, and of the consequent dangerous spike in Western energy prices, was the careless abandonment of the free market in energy and the foolish and wasteful closure of the West’s coal-fired power stations, which generated power at just $30 per MWh. Europe is now fatally dependent upon Siberian gas, a costly strategic error.

It is clear both from dozens of papers on climate sensitivity and from discussions with climatologists on both sides of the debate, and with the Kremlin’s witting, unwitting or witless shills here, that they had not realized that any feedback processes extant at a given moment must respond to the entire input signal, and not just to any perturbation thereof.

Had they known that, they would not have dreamed of trying to make predictions based on the differential rather than absolute feedback strengths. They would have realized that, because their very small uncertainties in feedback strength would, if real, lead to very large and consequently unconstrainable changes in equilibrium sensitivity, attempting to diagnose feedbacks at all from the models is necessarily doomed to failure.

That raises the question: what is the soundest method of deriving climate sensitivities? We favor the corrected energy-budget method, of which the very simple but quite robust version earlier in this column showed that, on business as usual, we can expect only 0.9 K of further global warming to 2100. A more sophisticated version generates much the same result. Midrange equilibrium doubled-CO2 sensitivity is just 3.45 x 0.3 x 1.0764 = 1.1 K. Hardly life-threatening, now, is it?
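The closing calculation, in two lines:

```python
# The column's corrected energy-budget estimate of midrange doubled-CO2 sensitivity
forcing_2xco2 = 3.45   # doubled-CO2 forcing, W m^-2 (Andrews 2012)
planck = 0.3           # Planck parameter, K W^-1 m^2
gain = 1.0764          # empirically derived system-gain factor

ecs = forcing_2xco2 * planck * gain
print(f"midrange doubled-CO2 ECS = {ecs:.1f} K")  # 1.1 K
```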

279 Comments
John Hultquist
September 3, 2022 8:57 am

Nice photo of the Oak. None grow where I live (central Washington State), but the local university has dozens, and during a “mast year” the trees and then the ground and sidewalks are covered. I need to get a photo, ’cause I failed to do so the last time I noticed.

Bruce Cobb
September 3, 2022 9:00 am

You can tell how “unimportant” the idea of a Pause is by how viciously it gets attacked by Pause Deniers.

NeedleFactory
September 3, 2022 10:00 am

Viscount Monckton:

Could you provide a link to “the whole dataset”?
I’d like to use it in some simulations.

Monckton of Brenchley
Reply to  NeedleFactory
September 4, 2022 10:27 am

Just about every graph I publish here has the data source clearly shown on the graph itself.

macias
September 3, 2022 10:23 am

” Problem 5: IPCC shows the cloud feedback as positive. However, the primary effect of the increased cloud cover that is to be expected with warming – i.e., an increase in the Earth’s albedo ”

— The author probably does not know the difference between the cooling cloud radiative effect (CRE), about 19 W/m², and the warming cloud feedback, about 0.42 W/m² per K. Our cloud cover has shrunk by roughly 2–2.5% of late.

If you wonder why – here is an explanation

https://wattsupwiththat.com/2022/09/02/uah-global-temperature-update-for-august-2022-0-28-deg-c/#:~:text=In%20addition%20to,climateprotectionhardware.wordpress.com/

Monckton of Brenchley
Reply to  macias
September 4, 2022 10:26 am

The furtively pseudonymous “macias” should not assume ignorance on the part of its betters. One of my earliest published peer-reviewed papers on the climate question concerned the naturally-occurring reduction in global cloud cover between 1984 and 2001, which on its own explained most of the warming over that period. However, the fluctuations in global cloud cover bear scant relation to the near-perfectly monotonic increase in anthropogenic forcing. And the primary effect of more water vapor in the atmosphere should be more clouds. Like it or not, the primary effect of increased cloud cover is to reflect more sunlight back into space.

James Schrumpf
September 3, 2022 11:06 am

Looking at the graph of the entire UAH record, around 2012 the anomaly was -0.4 and by 2016 or so it was at +0.7, and then back down around zero by 2018. Within four years the anomaly swung 1.1 C. There are several 0.7 degree swings in that record.

How can anyone look at that record and think that CO2 has the power to raise the entire planet’s temperature, but can’t prevent 0.7-1.0 deg swings in a five year period?

Did anyone even try to come up with an alternate hypothesis for the warming since the late 1800s that didn’t involve GHGs?

bdgwx
Reply to  James Schrumpf
September 3, 2022 1:03 pm

JS said: “How can anyone look at that record and think that CO2 has the power to raise the entire planet’s temperature, but can’t prevent 0.7-1.0 deg swings in a five year period?”

The reason is that CO2 (and other polyatomic gas species) only impedes the escape of upwelling infrared radiation. It does not impede other agents from also contributing to the energy flux into and out of the atmosphere. Even the most trivial model, as shown below, is consistent both with CO2 providing a small but persistent systematic upward effect and with other random or cyclic phenomena providing a large but transient chaotic effect.

[linked chart not reproduced]

JS said: “Did anyone even try to come up with an alternate hypothesis for the warming since the late 1800s that didn’t involve GHGs?”

There have been attempts. They just don’t match the evidence as well. My challenge to anyone reading this is to provide a monthly timeseries from 1880 to present from a no-GHG model that has a lower root mean square error than CMIP5 and which is also consistent with the cooling stratosphere, the magnitude of the Quaternary Period glacial cycles, the brightening of the Sun, the PETM and other ETMx events, and countless other lines of evidence in the contemporary and paleoclimate record.

James Schrumpf
Reply to  bdgwx
September 3, 2022 2:41 pm

CO2 can’t account for the warming from the late 1800s until human-produced CO2 became a noticeable component of the atmosphere sometime after the 1940s. There’s nearly a century of warming from the frost fairs on the Thames to 1940, but man-made CO2 wasn’t nearly abundant enough to have any effect.

bdgwx
Reply to  James Schrumpf
September 3, 2022 6:18 pm

CO2 and only CO2 can’t account for many climatic change events including the contemporary warming period. That does not mean that CO2 can’t participate in some of the events including the contemporary warming period with a spectrum of contribution.

MarkW
Reply to  bdgwx
September 4, 2022 2:52 pm

There is no evidence that CO2 has played a measurable role in any warming period.

Doonman
Reply to  bdgwx
September 3, 2022 5:20 pm

Why limit your time period? No one can account for the warming of the Roman period or Renaissance either. Or the cooling that followed both. We already know it wasn’t human generated CO2 or the lack of it, even the IPCC says that. So it must be something else entirely natural. There is also no reason to suspect that whatever the “something else” was has ceased to be operative.

It really isn’t helpful to claim that modern warming is all from human released CO2 because “we can’t think of anything else” when there is lots of evidence that there is something else.

bdgwx
Reply to  Doonman
September 3, 2022 6:26 pm

I’m not limiting the period. Quite the contrary. I want whatever theory is presented to be consistent with all the evidence over all time periods. And assuming there is an adequate response to the first challenge, the second challenge will be to see if the theory can match the skill of the Willeit et al. 2019 model of the last 3 million years. But let’s take it one step at a time.

Monckton of Brenchley
Reply to  bdgwx
September 4, 2022 10:21 am

Even if one attributes all of the warming since 1850 to anthropogenic forcing, the observed warming rate is a very long way below what was and is predicted at midrange, and the effects of the mildly warmer weather are proving handsomely net-beneficial.

bdgwx
Reply to  Monckton of Brenchley
September 4, 2022 12:31 pm

CMoB said: “Even if one attributes all of the warming since 1850 to anthropogenic forcing, the observed warming rate is a very long way below what was and is predicted at midrange,”

Let’s talk about that. You have repeatedly said the IPCC overestimated the warming by a significant amount. I have posted what the IPCC actually predicted multiple times, and I will continue to repost it every time you make the claim.

Here are the scenarios put forth by the IPCC in 1990. As of 2020 there was 413 ppm of CO2 which puts us a hair above scenario B. There was 1900 ppb of CH4 which puts us right on scenario C. And there was 225 ppt of CFC11 which puts us well below scenario C/D.

[linked chart not reproduced]

I then compared the scenarios with the actual RF of all GHGs up to 2020. At 3.2 W/m2 we are still under scenario D.

comment image

I think based on these graphics we can reasonably say the actual course humans selected was close to scenario C. The warming the IPCC predicted for scenario C is 0.55 C. A blend of HadCRUT and UAH shows that it actually warmed about 0.53 C from 1990 to 2020. Based on this it looks like the IPCC did not overestimate the warming by a factor of 3x or even 2x. It was nearly spot on.

[linked chart not reproduced]

Monckton of Brenchley
Reply to  bdgwx
September 4, 2022 2:43 pm

IPCC predicted warming equivalent to 0.34 K/decade by 2030, as its midrange case. Playing with various graphs won’t help: that is what it predicted.

bdgwx
Reply to  Monckton of Brenchley
September 4, 2022 3:23 pm

CMoB said: “IPCC predicted warming equivalent to 0.34 K/decade by 2030″

Prove me wrong. Show me where the IPCC predicted +0.34 K/decade for the emission pathway that humans chose.

Monckton of Brenchley
Reply to  bdgwx
September 5, 2022 4:26 am

That’s an old climate-Communist dodge. Predict more forcing than one is now willing to admit to, base the early warming predictions on that exaggeration, be proved wrong by events and then say that if one had predicted far lower forcing one would have been right.

What IPCC actually said, rather than the ingenious but misguided re-interpretation offered by bdgwx, was that by 2030 there would have been enough warming to indicate a warming rate equivalent to 0.34 K/decade compared with 1990. That was the central prediction, like it or not, and it was a vast exaggeration.

bdgwx
Reply to  Monckton of Brenchley
September 5, 2022 6:16 am

CMoB said: “What IPCC actually said, rather than the ingenious but misguided re-interpretation offered by bdgwx, was that by 2030 there would have been enough warming to indicate a warming rate of 0.34 K compared with 1990. That was the central prediction, like it or not, and it was a vast exaggeration.”

There it is again! Last chance. PROVE IT. Quote the verbiage and exactly what page number it appears on where the IPCC predicted +0.34 K/decade for the emission scenario that humans chose. And just to be clear, if you think scenario A (business-as-usual) is the emission scenario that humans chose then post the CO2, CH4, and CFC11 concentrations for that scenario as they appear in the text and compare them to what is observed and let’s see just how closely they match.

Monckton of Brenchley
Reply to  bdgwx
September 5, 2022 10:26 am

Asked and answered.

Carlo, Monte
Reply to  Monckton of Brenchley
September 5, 2022 10:41 am

bgwxyz thrives on playing “stump the professor”, where he lays what he thinks are traps with questions to which he thinks he already knows the answer. This one is a perfect example of this silly game.

bdgwx
Reply to  Monckton of Brenchley
September 5, 2022 12:02 pm

CMoB said: “Asked and answered.”

Let it be known that I gave you the benefit of the doubt and the opportunity to prove your case, and to prove the graphs I posted above wrong, multiple times over the last several months. Instead you deflected and diverted each time. And now you’re claiming you answered the question even though the answer is nowhere to be found. I have no choice at this point but to believe that you know the IPCC didn’t predict +0.34 K/decade for the emission pathway humans chose, and that your continued proclamation as such is disinformation meant to intentionally deceive your audience.

Monckton of Brenchley
Reply to  bdgwx
September 5, 2022 12:35 pm

Asked and answered.

Old Cocky
Reply to  bdgwx
September 4, 2022 2:49 pm

That’s an interesting analysis. It seems to indicate that the main drivers to date have been CH4 and CFC11.

JBP
September 3, 2022 4:41 pm

Thanks LMoB. What is your take on the article here a few days ago, put forward by Dr Javier and Andy May, concerning climate shifts? Does it provide potential model inputs?

https://wattsupwiththat.com/2022/08/23/the-sun-climate-effect-the-winter-gatekeeper-hypothesis-iv-the-unexplained-ignored-climate-shift-of-1997/

R, JBP

Izaak Walton
September 3, 2022 6:14 pm

So now we are blaming windmills for climate change without any evidence at all:

“Another reason for lingering floods and droughts is that the very tall offshore windmills now being installed at crippling expense to taxpayers in subsidies grossly interfere with the laminar flow of the wind at altitudes now exceeding that of the spire of Salisbury Cathedral, Britain’s tallest, slowing both high-pressure and low-pressure systems down and leading to more prolonged and intense weather of all kinds.”

Is there any evidence for this assertion at all?

Richard Page
Reply to  Izaak Walton
September 3, 2022 8:10 pm

Yes, there have been numerous studies highlighting the ‘wind stilling’ effect of wind turbines: the more energy you remove to power the turbine, the less energy remains. Given a dense enough turbine field, eventually enough energy is removed that the air moves very slowly, if at all. It is a well-known and well-researched phenomenon, and it has resulted in recent guidance on the spacing of new wind turbines to ensure greater efficiency. Honestly, I’m shocked that you weren’t aware of this, given the amount of time you’ve been visiting WUWT; it comes up again and again.

Izaak Walton
Reply to  Richard Page
September 3, 2022 9:10 pm

The fact that wind turbines change the flow is not the issue. What is new is the claim that this small effect can cause a drought hundreds of kilometres away.

Richard Page
Reply to  Izaak Walton
September 4, 2022 8:15 am

Who says it’s a small effect? Europe has the highest density and number of wind turbines on the planet. If you look at a map of where the wind farms are located alongside a map of the recent ‘heat dome’ over central Europe, you will notice a certain amount of correlation. While I must point out that correlation is not causation, it IS a very interesting coincidence, isn’t it?

Izaak Walton
Reply to  Richard Page
September 4, 2022 7:19 pm

That is a coincidence. Rainfall arises from the evaporation of water over the oceans, which is then transported by the atmosphere at altitudes significantly above the height of the tallest wind turbine. If the rain clouds form at heights above 1 km, then wind turbines will have no effect.

Monckton of Brenchley
Reply to  Izaak Walton
September 4, 2022 10:18 am

“Izaak Walton” cannot seriously imagine that there are no windmills within “hundreds of kilometres” of the Rhine, to take one example. He may care to study a textbook of elementary physics, with particular reference to the consequences of disrupting a laminar flow.

Izaak Walton
Reply to  Monckton of Brenchley
September 4, 2022 11:55 am

And where is the evidence that it is a large effect? The atmosphere contains roughly 5 x 10^21 J of available energy (see “Atmospheric Available Energy” at https://doi.org/10.1175/JAS-D-12-059.1 for details). Compared with that, the total installed wind-turbine capacity is just over 800 GW, so even if all the wind turbines were working 100% of the time, they would extract only about 0.0014% of the available energy each day, during which time the sun would have injected far more energy into the atmosphere.
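Izaak Walton’s order-of-magnitude claim can be checked in a few lines of Python. The 5 × 10^21 J and 800 GW figures are taken from the comment itself, not independently verified, and the assumption that every turbine runs at 100% capacity for a full day is the commenter’s deliberately generous one:

```python
# Back-of-envelope check of the wind-turbine energy-extraction fraction.
# Figures from the comment: ~5e21 J of available atmospheric energy and
# ~800 GW of installed global wind capacity.
available_energy_j = 5e21        # available energy in the atmosphere, J
installed_capacity_w = 800e9     # installed wind capacity, W (= 800 GW)
seconds_per_day = 86_400

# Energy extracted in one day if every turbine ran at 100% capacity.
daily_extraction_j = installed_capacity_w * seconds_per_day

fraction = daily_extraction_j / available_energy_j
print(f"daily extraction: {daily_extraction_j:.3e} J")
print(f"fraction of available energy per day: {fraction:.2e} ({fraction:.4%})")
```

The fraction works out to roughly 1.4 × 10⁻⁵, i.e. about 0.0014% of the available energy per day, which is the basis of the “small effect” argument.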

Monckton of Brenchley
Reply to  Izaak Walton
September 4, 2022 2:42 pm

It really would be helpful if “Izaak Walton” were to read a textbook of physics on the disproportionate influence of disruption of a laminar flow.

Ireneusz Palmowski
September 4, 2022 3:42 am

Blocking highs in the Arctic are a bad sign for autumn in Europe. At the same time, a blocked low in the Atlantic is bringing heavy precipitation to the British Isles.
https://earth.nullschool.net/#2022/09/04/1800Z/wind/surface/level/overlay=mean_sea_level_pressure/orthographic=14.84,69.99,709

Ireneusz Palmowski
Reply to  Ireneusz Palmowski
September 5, 2022 12:17 am

The looping jet stream, cut off in the Atlantic, is evident in the strong drop in the NAO index, which is measured at 500 hPa.

TheSunDoesShine
September 4, 2022 3:09 pm

Thank you for yet another great post.

I must politely point out that in your energy-budget calculation you have not included the IPCC’s estimate of aerosol forcing, which is -1.1 W/m2 in AR6. Additionally, the NOAA AGGI does not include ozone, which has a forcing of 0.5 W/m2 in AR6.

To my understanding, your forcing now becomes 3.23 - 1.1 + 0.6 = 2.73 W/m2.

Therefore the system-gain factor becomes:
1.04 / (2.73 × 0.3) ≈ 1.269
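The arithmetic in this comment can be reproduced directly. The 1.04 K of observed warming, the revised 2.73 W/m2 forcing, and the 0.3 K per W/m2 Planck parameter are all figures taken from the comment, not independently verified values:

```python
# Reproduce the revised system-gain-factor arithmetic from this comment.
observed_warming_k = 1.04         # warming since 1850, K (commenter's figure)
forcing_w_m2 = 3.23 - 1.1 + 0.6   # AGGI forcing minus aerosols plus ozone, W/m^2
planck_k_per_w_m2 = 0.3           # Planck sensitivity parameter, K per W/m^2

# Reference warming before feedback, then the implied system-gain factor.
reference_warming_k = forcing_w_m2 * planck_k_per_w_m2
system_gain = observed_warming_k / reference_warming_k
print(f"forcing = {forcing_w_m2:.2f} W/m^2")
print(f"system-gain factor = {system_gain:.3f}")
```

Note that the revised forcing uses +0.6 W/m2 for ozone in the sum even though 0.5 W/m2 is quoted in the text; the code simply follows the arithmetic as written.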

A little higher but certainly not catastrophic.

I strongly suspect that they have, firstly, overstated the warming of 1.04 degrees since 1850, owing to dishonest tampering and insufficient correction for the urban heat island effect. Connolly et al. (2021) corrected for UHI: doing so eliminates quite a lot of warming and shows a more cyclic signature in the temperature record.

Secondly, Wu et al. show that only 70% of the warming since 1850 is man-made. In addition, the Connolly et al. (2021) paper shows some evidence that the warming since 1850 is due to an increase in solar activity as we recovered, beneficially, from the cold and miserable Little Ice Age.

Finally, I agree with you that the so-called “missing heat”, the idea that heat somehow disappears mysteriously into the ocean, is incorrect, simply because feedback operates on timescales of years. We are looking at inter-decadal timescales, so we should see the fully realised warming in the climate system, with little to no warming hiding in the oceans.

Thank you for reading my comment.

Best Regards 
TheSunDoesShine

bdgwx
Reply to  TheSunDoesShine
September 4, 2022 6:50 pm

Good post. Here is the IPCC AR6 WG1 forcing table for those interested.

comment image

Monckton of Brenchley
Reply to  bdgwx
September 5, 2022 4:15 am

Very helpful table. If one allows for further growth in forcing in the three years since 2019, and also corrects the exaggeratedly negative aerosol forcings in accordance with prevailing opinion in the peer-reviewed journals, one reaches around 3.2 W/m^2 total anthropogenic forcing.

Mike
Reply to  TheSunDoesShine
September 4, 2022 8:07 pm

“Secondly Wu et al shows only 70% of the warming since 1850 is man made.”

Oh, they do not.

Monckton of Brenchley
Reply to  Mike
September 5, 2022 4:16 am

Actually, the table in Wu et al., if one performs a duly apportioned calculation, does show that only 70% of the warming is man-made, and that conclusion is also stated in Wu’s paper; but Wu himself does not stand by it.

Monckton of Brenchley
Reply to  TheSunDoesShine
September 5, 2022 4:23 am

The Sun Does Shine has made the sort of thoughtful and interesting contribution that is constructive.

As to the markedly negative aerosol contribution, Professor Lindzen, now backed by the great majority of opinions in the peer-reviewed journals, considers it to be a fudge-factor intended to reduce the apparent forcing so as falsely to increase the corresponding climate sensitivity. If one takes the forcing in IPCC (2021), which runs up to 2019, and makes a small adjustment for the over-negative aerosol forcings, one reaches around 3.2 W/m^2 forcing, just as the AGGI shows, since the non-GHG forcings not included therein broadly self-cancel after correction.

As to Wu et al, Wu himself does not adhere to the indication in his paper that 70% of warming is manmade, even though that value is derivable from the primary table therein.

If the system-gain factor were indeed 1.269, the temperature today would be as follows –

1.269 × (259.58 + 7.52 + 0.97) > 340 K

One must apply the system-gain factor to the entire reference temperature, not just to the most recent perturbation.

Therefore, we know that the forcings are not as low as some would suggest.
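As a minimal sketch of the check above: the three reference-temperature components and the 1.269 gain factor are the figures from this exchange; the comparison with today’s roughly 288 K mean surface temperature is my addition for context:

```python
# Apply a candidate system-gain factor to the entire reference temperature,
# as the comment argues one must, rather than to the recent perturbation only.
gain = 1.269
reference_components_k = (259.58, 7.52, 0.97)   # figures from the comment, K
reference_temperature_k = sum(reference_components_k)

implied_temperature_k = gain * reference_temperature_k
print(f"reference temperature: {reference_temperature_k:.2f} K")
print(f"implied temperature:   {implied_temperature_k:.1f} K")
# Today's observed mean surface temperature is roughly 288 K (my context
# figure, not from the comment), far below the implied value above.
```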

Beta Blocker
September 5, 2022 11:25 am

Regarding Lord Monckton’s verbal jousting with his critics, especially with Richard Greene, who is also a regular participant on Francis Menton’s Manhattan Contrarian blog:

Mr. Greene has his own role on that blog as the Michigan Curmudgeon. He is no less opinionated over there than he is over here. But that is OK, since we should always value a diversity of opinions, honestly expressed.

Back in early August, in a comment over on the Manhattan Contrarian, I posted a link to my one-page graphical analysis of where I think global mean temperature will be going over the next eighty years. Here is that graphical analysis:

Beta Blocker’s Year 2100 GMT Prediction Envelope (April, 2020)

Roughly 30 man-hours of work on my part were needed to produce the analysis. Richard Greene, a.k.a. the Michigan Curmudgeon, said in reply to my MC post that it is not possible to predict where global mean temperature is headed, and that the 30 hours I spent creating my GMT prediction envelope were therefore wasted.

As for my own background, I have spent a good bit of time over the last two decades doing feasibility cost-and-schedule analysis and project risk analysis for nuclear construction projects in the ten-to-fifteen-billion-dollar range.

In the world of risk management, one is often faced with the problem of predicting the future using information which is incomplete, which is uncertain, and which contains known issues of one kind or another that will impact the accuracy and reliability of a risk management analysis.

But you still have to do as good a job of making a prediction as you can, something which often requires the application of subjective judgement.

For those of you who have been following the endless debates over the climate models and their disparate estimates of ECS, recognize that my one-page graphical analysis is as much a commentary on the methods and means climate scientists use in constructing their computerized climate models as it is a prediction of where I think global mean temperature will be going over the next 80 years.

As analysis tools and techniques go, how does my graphical GMT prediction envelope compare with the IPCC’s ensemble of climate models?

* Basis Assumptions *

The IPCC’s climate models contain numerous assumptions large and small concerning how the physics of the atmosphere operate. Many of these assumptions involve parameterizations; i.e., estimates of the value of physical parameters as opposed to definitively known physical values.

In contrast, my own graphical analysis relies upon only one very large, very sweeping assumption: “The 1850-2020 HADCRUT4 Global Mean Temperature Record includes the combined effects of all natural and anthropogenic climate change processes as these have evolved through time. Similar processes will operate from 2020 through 2100.”

The more assumptions a hypothesis relies upon, the weaker the case for that hypothesis. The IPCC’s climate models embrace a number of assumptions about how the earth’s climate system operates, so their veracity is inherently open to debate.

My prediction envelope uses only one assumption. But it is a big one — the mother of all sweeping scientific assumptions, in fact.

* Verification of the Models *

Verification of climate models ought to be simple. You measure how closely observed temperatures match the model’s predicted temperatures over some chosen time frame. But there is a complication here if we are employing a verification approach within a policy risk management context.

The earth has been warming for at least 150 years, maybe even 300 years. Any model ensemble which predicts a trend in future warming that includes the past 150-year warming trend somewhere inside its predictive envelope can be labeled ‘verified’, implying that the upper boundary of the predictive envelope for an IPCC model ensemble is just as likely to occur as the lower boundary.

My own GMT prediction envelope assumes that the earth’s climate system as it actually operates in nature places limits on the upper and lower boundaries of any long term warming trend. From a risk management perspective, the earth’s actual climate system is the experimental apparatus being relied upon for making a prediction.

A rate of increase of roughly 0.2 C per decade is the highest trend seen so far in the earth’s climate system. That rate will produce +3C of warming above pre-industrial by the year 2100 if it continues. On that basis, it is chosen as the upper boundary for my GMT prediction envelope.

That said, statistically significant pauses in GMT warming have occurred within the past hundred years. My subjective opinion is that if such pauses have occurred in the past, they will occur again in the future.

In a risk management context, subjective opinions are allowed if they produce a more reliable range of predicted outcomes. Hence the warming scenario I judge as most likely to occur is +2C warming above pre-industrial by the year 2100.
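The boundary arithmetic described above can be sketched in a few lines. The 0.2 C/decade upper trend and the +2 C and +3 C envelope values come from the comment; the ~1.3 C of warming above pre-industrial assumed for 2020 is my illustrative baseline, not a figure from the comment:

```python
# Sketch of the GMT prediction-envelope boundary arithmetic.
# ASSUMPTION (mine, for illustration): ~1.3 C of warming above
# pre-industrial had already occurred by 2020.
baseline_warming_c = 1.3
decades_2020_to_2100 = 8

# Upper boundary: the highest trend yet observed (~0.2 C/decade) continues.
upper_boundary_c = baseline_warming_c + 0.2 * decades_2020_to_2100

# Most-likely scenario per the comment: +2 C by 2100, which implies a
# slower effective trend once past-style pauses are allowed for.
implied_trend_c_per_decade = (2.0 - baseline_warming_c) / decades_2020_to_2100

print(f"upper boundary by 2100: ~{upper_boundary_c:.1f} C above pre-industrial")
print(f"trend implied by the +2 C scenario: {implied_trend_c_per_decade:.3f} C/decade")
```

Under these assumptions the continued-trend case lands at roughly +3 C, matching the envelope’s stated upper boundary.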

* Costs of Developing, Maintaining, and Operating a Climate Model *

The IPCC’s ensemble climate models cost millions of dollars to program and maintain, employ hundreds if not thousands of people in their maintenance and operation, and require highly expensive supercomputing resources to run.

In great contrast, my GMT prediction envelope is produced using an old version of Adobe Photoshop whose expense has long since been written off. My data source is HADCRUT data, generated online for free using the woodfortrees temperature-data tool.

The input data entering the analysis and the output data leaving it are, methodologically, the same set of data: HADCRUT information is both the basis of the analysis model and the means by which the GMT predictive envelope will be verified over time.

If we assume a minimum-wage labor rate of $15 per hour for 30 hours, and an equipment charge of $5 per hour for my computer and my copy of Adobe Photoshop, Beta Blocker’s Year 2100 GMT Prediction Envelope cost about $600 to produce.

The Bottom Line:

OK, here is the question you have all been waiting for: is my one-page, cheaply produced graphical analysis of GMT trends as useful a tool for predicting the future as the IPCC’s horrifically complex and expensive ensemble of climate models?

Absolutely! No question about it, really. But I do admit to being highly opinionated on that score.
