CMIP6 models overshoot: Charney sensitivity is not 4.1 K but < 1.4 K

By Christopher Monckton of Brenchley

Recently the indefatigable Dr Willie Soon, who reads everything, sent me a link to the projections of equilibrium global warming in response to doubled CO2. This standard yardstick for global-warming prediction is known in the trade as “Charney sensitivity” after Dr Jule Charney, who wrote a report in 1979 saying that doubled CO2 would warm the world by 1.5-4.5 K, with a midrange estimate of 3 K. Most IPCC reports adopted the same sort of interval in making their predictions.

The Coupled Model Intercomparison Project’s 5th-generation models projected 2.1-4.7 K Charney sensitivity, with a midrange estimate of 3.35 K (from data in Andrews et al. 2012).

Now the sixth generation of these cybernetic behemoths, the CMIP6 ensemble, predicts 3 to 5.2 K Charney sensitivity, with a midrange estimate of 4.1 K (Fig. 1). Charney's original 3 K midrange projection has become the lower bound.


Fig. 1. OTT: Projected Charney sensitivity in 21 CMIP6 models, September 2019.

In reality, the midrange Charney sensitivity to be expected on the basis of observed warming as well as total and realized forcing to 2011, the year to which climate data were updated in time for IPCC’s 2013 Fifth Assessment Report, is less than 1.4 K. That would take at least a century to happen.

Here, then, is a giant error of logic right at the heart of official climatology. CMIP6 models project 4.1 K warming in response to doubled CO2 when, on the basis of officially published data, they should be projecting only 1.4 K. They are overshooting threefold.

No surprise, then, that children relentlessly propagandized by the sub-Marxist educational establishment are either collecting Nobel Peace Prize nominations for making snarly faces at President Trump in the U.N. General Dissembly or committing suicide, as one Communized child did recently in the English Midlands, because “climate emergency”.

Teaching children about the ever-more-absurd hyper-predictions of global warming is child abuse. It should surely be outlawed before anyone else is driven to death. Unfortunately, the Socialist Party in Britain, which has been taken over by Communists in recent years, is proposing mandatory global-warming indoctrination classes even for five-year-olds.

Official climatology’s own mainstream data and methods would lead it to expect a midrange Charney sensitivity no more than one-third of the latest models’ 4.1 K projection.

We shall take the approach, revolutionary in climatology, of deriving the true midrange Charney-sensitivity estimate directly from real-world data. Models are not needed, except at the margins. Future global warming can be derived from the observed warming over the period 1850-2011, from official estimates of the reference anthropogenic radiative forcing over that period, and from the radiative imbalance that subsisted at the end of that period.

As far as I can discover, the Intergovernmental Panel on Climate Change, to name but one, has never attempted to derive a midrange estimate of future global warming by that most obvious and direct method: from real-world data.


Fig. 2. Not much warming: Monthly temperature anomalies, 1850-2011 (HadCRUT4).

First, we need the observed warming ΔR1 over the period 1850-2011. The answer, from HadCRUT4, the only global dataset that covers the whole period, is 0.75 K, a rate of less than 0.5 K century–1 (Fig. 2).

Next, we need the Planck sensitivity parameter P – the factor by which a radiative forcing is multiplied to yield the corresponding warming before accounting for feedback. Roe (2009) calls this pre-feedback warming the “reference sensitivity”.

A respectable approximation to P is the Schlesinger ratio (Schlesinger 1985), the ratio of the global mean surface temperature at a given moment to four times the net incoming radiative-flux density at the top of the atmosphere.

In (1), total solar irradiance S is 1363.5 W m–2 (deWitte & Nevens 2017); albedo α is 0.29 (Stephens 2015); and the flat-Earth fudge-factor d is the ratio of the Earth’s spherical surface area to that of its great circle: i.e., 4. No allowance is made for Hölder’s inequalities between integrals (though it should be made), for we are using official climatology’s methods.

Q0 = S (1 – α) / d = 1363.5 × (1 – 0.29) / 4 ≈ 242.0 W m–2.          (1)

In (2), the Planck parameter P is derived on the basis that the global mean surface temperature TS in 2011 was 288.4 K (HadCRUT4: Morice et al. 2012).

P = TS / (4 Q0) = 288.4 / (4 × 242.0) ≈ 0.2979 K W–1 m2.          (2)
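For readers who wish to check the arithmetic as we go, (1) and (2) amount to the following (a minimal sketch in Python; the variable names are merely illustrative):

```python
# Net incoming flux density at top of atmosphere, and the Planck parameter
S     = 1363.5   # total solar irradiance, W/m^2 (deWitte & Nevens 2017)
alpha = 0.29     # albedo (Stephens 2015)
d     = 4        # flat-Earth factor: sphere area / great-circle area
T_S   = 288.4    # global mean surface temperature in 2011, K (HadCRUT4)

Q0 = S * (1 - alpha) / d     # eq. (1): ~242.0 W/m^2
P  = T_S / (4 * Q0)          # eq. (2): Schlesinger ratio, ~0.2979 K per W/m^2
print(round(Q0, 1), round(P, 4))   # 242.0 0.2979
```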

Knowing P gives us the reference sensitivity ΔRC to doubled CO2 in (3). I do not yet have the CO2 forcing ΔQC from the CMIP6 models, so we shall take it as the mean of the estimates in 15 CMIP5 models (Andrews et al. 2012): i.e., 3.447 W m–2.

ΔRC = P ΔQC = 0.2979 × 3.447 ≈ 1.027 K.          (3)

Next, we need the reference (pre-feedback) anthropogenic radiative forcing ΔQref from 1850-2011. IPCC (2013, fig. SPM.5) gives a midrange 2.29 W m–2, to which subsequent papers (e.g. Armour 2017) have added 0.2 W m–2 to correct an overestimate of the negative aerosol forcing. Call it 2.5 W m–2.

We also need to know how much of that forcing has been realized: i.e., how much of it is reflected in the 0.75 K observed warming to 2011. Smith (2016) gives an estimated radiative imbalance, or unrealized forcing, of 0.6 W m–2. Therefore, the realized forcing ΔQrlz is 2.5 – 0.6, or 1.9 W m–2.

In (4), the system-gain factor A implicit in the real-world data from 1850-2011 is derived.

A = ΔR1 / (P ΔQrlz) = 0.75 / (0.2979 × 1.9) ≈ 1.325.          (4)

We can now derive the midrange estimate of Charney sensitivity ΔEC in (5).

ΔEC = A ΔRC = 1.325 × 1.027 ≈ 1.36 K.          (5)
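The whole chain (3)-(5) can likewise be verified in a few lines (a sketch, again using only the values stated above):

```python
# Charney sensitivity from observed warming and published forcings
P      = 0.2979   # Planck parameter from (2), K per W/m^2
dQ_C   = 3.447    # CO2-doubling forcing, W/m^2 (mean of 15 CMIP5 models)
dQ_ref = 2.5      # total anthropogenic forcing 1850-2011, W/m^2
imbal  = 0.6      # radiative imbalance (unrealized forcing) in 2011, W/m^2
dR_1   = 0.75     # observed warming 1850-2011, K (HadCRUT4)

dR_C   = P * dQ_C               # eq. (3): reference sensitivity, ~1.03 K
dQ_rlz = dQ_ref - imbal         # realized forcing, 1.9 W/m^2
A      = dR_1 / (P * dQ_rlz)    # eq. (4): system-gain factor, ~1.325
dE_C   = A * dR_C               # eq. (5): midrange Charney sensitivity, ~1.36 K
print(round(dR_C, 3), round(A, 3), round(dE_C, 2))   # 1.027 1.325 1.36
```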

Fig. 4 shows the startling discrepancy between the Charney sensitivity expected on the basis of observed warming, reference forcing and its realized fraction to 2011, on the one hand, and, on the other, the untenably exaggerated Charney sensitivities predicted by the CMIP models.


Fig. 4. Overstated midrange Charney sensitivities (CMIP5 3.35 K, red bar; CMIP6 4.05 K, purple bar) are 2.5-3 times the 1.35 K (orange bar) to be expected given 0.75 K observed warming from 1850-2011 (HadCRUT4: green bar), 2.5 W m–2 total anthropogenic forcing to 2011 (IPCC 2013, fig. SPM.5; Armour 2017) and 0.6 W m–2 unrealized forcing to 2010 (Smith 2015).

The models’ projections flatly contradict the published data on manmade forcing and radiative imbalance. Global warming will be about one-third of their overblown midrange estimates. Scientifically speaking, that ought to be enough to end the climate “emergency”.

Since three-quarters of the CMIP6 models’ midrange 4.1 K projection of Charney sensitivity is feedback response, the error in the models is likely to be in their treatment of the water vapor feedback. Indeed, the tropical mid-troposphere “hot spot” predicted in model after model is not evident in observed reality (Fig. 5). Without it, the water vapor feedback response must be small, and could not quadruple reference sensitivities.


Fig. 5. Models’ projected tropical mid-troposphere “hot spot” (a) is not observed (b).

Policymakers, therefore, should assume a Charney sensitivity not of 3 or 4 K but of less than 1.4 K. Since that warming is small, slow and net-beneficial, and since climatology has never asked, let alone answered, the key question of what the ideal global mean surface temperature would be, there is no rational justification for assuming that so mild a warming requires any action at all, beyond the courage to face down the screeching Communists and do nothing.

211 Comments
CO2isLife
October 9, 2019 7:44 am

CO2’s forcing in W/m2 falls off logarithmically as CO2 increases. At the current level of 410 ppm, each additional molecule of CO2 adds very, very little to the energy balance. More importantly, the oceans dominate atmospheric temperatures. To explain the atmosphere, you have to explain the oceans and how CO2 warms them, which LWIR between 13 and 18 microns doesn’t do. Assuming, however, that it does, there is a rate of change associated with the warming due to a marginal 1.64 W/m2, and El Niños can cool the oceans by a full degree or more. For CO2 to be the cause (which is laughable, considering that the variation between a cloudy and a clear day is astronomical, as is the energy associated with visible radiation), one would have to explain how CO2 could replace all the energy lost to an El Niño in the time between El Niños. CO2 is like trying to fill an Olympic-sized pool with a syringe. The facts are, Mother Nature has a natural pressure valve called an El Niño that rapidly reduces the energy in the climate system. That is why in 600 million years of geologic history there has NEVER been catastrophic warming, even when CO2 was 7,000 ppm. The rate at which CO2 can replace lost energy is simply way, way too slow. The only way for anything to cause catastrophic warming would be to place the Earth in a thermos and not let any energy escape to outer space.

Anyway, Lord Monckton, I created an interesting experiment for high-school students, and it looks like NASA has fouled it up by promptly removing the data. You may want to try this kind of experiment yourself. My experience is that the simpler you make your arguments, the more effective they will be.

WUWT, you may want to look into this as well. NASA playing with the data isn’t something that should be tolerated. Otherwise all we will get is GIGO.

An Easy High School Experiment to Debunk CO2 Driven Climate Change
https://co2islife.wordpress.com/2019/09/15/an-easy-high-school-experiment-to-debunk-co2-driven-climate-change/

The Case of the Disappearing Data
https://co2islife.wordpress.com/2019/10/08/the-case-of-the-disappearing-data/
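To put a number on the logarithmic point above: using the widely cited Myhre et al. (1998) approximation ΔF ≈ 5.35 ln(C/C0) W/m2 (an assumption here, since the comment does not cite a formula), the marginal forcing of one further ppmv at 410 ppmv is tiny:

```python
import math

def co2_forcing(C, C0=280.0):
    """Myhre et al. (1998) approximation: CO2 forcing in W/m^2 above a 280 ppmv baseline."""
    return 5.35 * math.log(C / C0)

print(round(co2_forcing(410), 2))                     # ~2.04 W/m^2 since 280 ppmv
print(round(co2_forcing(411) - co2_forcing(410), 4))  # ~0.013 W/m^2 for one more ppmv
```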

Stephen Philbrick
Reply to  CO2isLife
October 9, 2019 2:36 pm

The closing sentence of the 4th paragraph states:
“That would take at least a century to happen.”

Is something missing? What would take a century to happen?

John Q Public
October 9, 2019 9:11 am

Is there enough data available yet (for creating a linear response function) for Pat Frank to take a stab at doing an uncertainty analysis on CMIP6 models?

Pat Frank
Reply to  John Q Public
October 9, 2019 7:07 pm

We’d need to know the average annual long wave cloud forcing error of the CMIP6 models to do the uncertainty analysis, John.

I doubt that will be known any time soon.

On the other hand, it’s very doubtful that the physics of cloud modeling is much improved over the CMIP5 models. The improvement from CMIP3 to CMIP5 was pretty much zero.

John Q Public
Reply to  Pat Frank
October 9, 2019 8:00 pm

Thank’s Pat. Basically you would need a lot of runs across a lot of models, then try and build a parametric based emulator which would be validated against the runs, right? That could then be compared to the actual data from Lauer as you did with CIMP5? Basically the first problem is having enough CIMP6 output to work with?

Pat Frank
Reply to  John Q Public
October 10, 2019 9:59 am

Basically, John, someone would need to run a whole slew of CMIP6 simulations hindcasting the past climate over 20 years or so, say, 1996-2015.

Then compare the global total cloud fraction retrodicted in those hindcast simulations with the cloud fraction actually observed over those years by satellite.

Then calculate the (simulated minus observed) cloud fraction error grid-point by grid-point. Sum it all up for each model across that 20 years to get an annual average error in global total cloud fraction.

The cloud fraction error must then get converted to average annual error in long wave cloud forcing.

Then get a bunch of air temperature projections from different CMIP6 GCMs, their RCP projections, say, and see if paper eqn. 1 will emulate those projections, using the same RCP forcings.

It almost certainly will be successful (unless someone has decided to add some non-linearity to muddy the emulation waters).

Get all that, and the uncertainty analysis is a go.

At a guess that sort of detailed data on CMIP6 models will not be available for some time.
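A minimal sketch of the first step described above, the grid-point (simulated minus observed) cloud-fraction error reduced to an annual-average global mean, assuming both fields are already on a common latitude-longitude grid (the function name and array shapes are illustrative, not any standard tool):

```python
import numpy as np

def annual_mean_cloud_error(sim, obs, lat):
    """
    sim, obs : total cloud fraction, arrays of shape (n_years, n_lat, n_lon)
    lat      : latitude centres in degrees, shape (n_lat,)
    Returns the time-averaged, area-weighted global mean of (simulated - observed).
    """
    err = sim - obs                                   # grid-point error, year by year
    w = np.cos(np.deg2rad(lat))[None, :, None]        # simple area weighting by latitude
    w = np.broadcast_to(w, err.shape)
    yearly = (err * w).sum(axis=(1, 2)) / w[0].sum()  # global mean error for each year
    return yearly.mean()                              # annual-average error over the period

# Hypothetical usage with placeholder random fields for a single model
# (prints a value near zero, since the noise fields are unbiased):
rng = np.random.default_rng(0)
sim = rng.uniform(0, 1, (20, 90, 144))
obs = rng.uniform(0, 1, (20, 90, 144))
print(annual_mean_cloud_error(sim, obs, np.linspace(-89, 89, 90)))
```

Converting that figure to an annual average long-wave cloud-forcing error, and checking that an emulator reproduces the CMIP6 projections from the same forcings, would then follow as described above, once the CMIP6 output is available.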

John Q Public
Reply to  Pat Frank
October 10, 2019 3:51 pm

Ok. Basically, repeat Lauer’s work, but with CMIP6 models, to get the CMIP6 uncertainty; then your part starts. So, basically, there is not enough data at this point.

You could build your emulator for CMIP6 (if there is even enough data for that), then compare it to CMIP5 and see if there is any difference. If not, the CMIP5 results still stand.

Phil Salmon
October 9, 2019 10:27 am

Can someone explain, in qualitative terms, why CO2’s property of absorbing and re-radiating IR photons does not cause incoming solar IR radiation to be back-radiated out to space?

If it did, increasing CO2 in the air would reduce the heating of the atmosphere by IR, by back-radiating incoming solar IR to space.

At all heights above the emission height, the sky above is transparent to IR. So in the stratosphere and above CO2’s radiative properties will reduce TSI and cool the planet.

In the troposphere the hypothesised back radiation back to earth might indeed conceivably cause some equilibration warming. But this would only offset the cooling caused by CO2 in the upper atmosphere re-radiating solar IR back out to space.

Tell me it ain’t so.

John Q Public
Reply to  Phil Salmon
October 9, 2019 4:52 pm

CO2 absorbs around 3.4 microns. Incoming radiation is in the visible range (predominantly). So, incoming radiation needs to be absorbed on the earth’s surface, then re-radiated in the thermal range (say 2-7 microns or so) before CO2 can absorb it. So basically, CO2 is not able to absorb incoming radiation, but can absorb re-radiated radiation.

Pat Frank
Reply to  Phil Salmon
October 9, 2019 7:13 pm

Phil, the collisional decay of vibrationally excited CO2 completely dominates radiative decay in the troposphere. The absorbed radiant energy goes from CO2 into the kinetic energy of the troposphere.

The real question is, what does the climate do with that kinetic energy. Climate model(er)s assume it all shows up as sensible heat. But in fact, no one knows what happens.

Does the kinetic energy convect away? Does global cloud extent and distribution change? What happens, if anything, to tropical thunderstorms or tropical precipitation?

No one knows.

Phil Salmon
Reply to  Pat Frank
October 11, 2019 3:06 pm

Pat
Einstein wrote in 1917:

During absorption and emission of radiation there is also present a transfer of momentum to the molecules. This means that just the interaction of radiation and molecules leads to a velocity distribution of the latter. This must surely be the same as the velocity distribution which molecules acquire as the result of their mutual interaction by collisions, that is, it must coincide with the Maxwell distribution. We must require that the mean kinetic energy which a molecule per degree of freedom acquires in a Planck radiation field of temperature T be

kT / 2

this must be valid regardless of the nature of the molecules and independent of the frequencies which the molecules absorb and emit.

“Independent of the molecules and frequencies…”
This hardly sounds like a resounding endorsement of CO2 backradiation warming theory from Einstein.

Phil Salmon
Reply to  Phil Salmon
October 10, 2019 2:31 am

John Q

CO2 is not able to absorb incoming radiation

Wrong. There is plenty of IR in sunlight at all the wavelengths absorbed by CO2 – 2.7, 4.3 and 15 microns – see the 4th figure here:

https://www.researchgate.net/publication/226227567_Solar_radiative_output_and_its_variability_Evidence_and_mechanisms/figures?lo=1

Since the spectrum reaching the surface is depleted in the CO2 absorption bands, this proves that sunlight on its way down to the surface has IR removed by CO2. Some of this is back-radiated out to space. So increasing atmospheric CO2 WILL REDUCE TSI – it will reduce solar IR reaching earth.

Granted, that IR does not reach the earth’s surface, but it does warm the atmosphere. This also adds to earth’s heat budget. Has this been ignored??

John Q Public
Reply to  Phil Salmon
October 10, 2019 3:55 pm

I kind of botched my answer, I see that. Around 15 microns is the magic range that classic GHG theory is built around.

https://twitter.com/dan613/status/1182415210268844032

Phil Salmon
Reply to  Phil Salmon
October 10, 2019 5:24 am

John and Pat

Thanks.

Eric Eikenberry
October 9, 2019 11:21 am

There are clearly only three major movers of climate on a geological scale: the oceans (or specifically the quantity of water on our planet), the planet’s precession of the equinoxes, and our sun. Everything else we have observed, and called “weather”, is merely the chaotic balancing act of these forces in our atmosphere. Whether 4 parts per 10k or 70 parts per 10k, a trace molecule, which releases its extra “stored” energy within nanoseconds, will still produce only a trace effect upon temperature, far outside the ability of modern instrumentation to quantify.

When scientists (and I use that term loosely) can adequately explain the cause and longevity of an ice age, I might begin to listen to them regarding “climate science” (used equally loosely). To judge by their current accuracy, and the earlier WUWT story regarding the temperature tipping point around 1000 AD, we will be well into the start of the next ice age before they’ll get around to noticing. And perhaps we already are…

Much ado about nothing, methinks. Akin to arranging the deck chairs on the Titanic.

Phil Salmon
October 9, 2019 12:48 pm

13.9 billion years ago the Big Bang occurred, energy-matter exploded into existence. Or maybe out of a string-world 11 or so dimensions sitting happily at the Planck length, three of them accidentally popped out and became spatially extended – whatever.

After a crazy fraction of a second, including an inflationary phase, there followed a period of 300,000 years during which the photon ruled. Light was so intense that matter particles could not form, and the universe, though light-dominated, was opaque to light. This is called the light epoch.

At the end of that 300,000 years, the universe had expanded enough to become transparent to photons and matter particles could form. From that moment until the present we are in the matter epoch.

Now the reason for saying all this is that, to read the theory behind CO2 back-radiation warming, one could be forgiven for wondering whether the writer is confused as to which epoch we are in. Heat is discussed only in terms of radiation fluxes, and matter is almost ignored.

We are firmly in the matter, not the light, epoch and one implication of this is that radiation plays a minor role in the heat dynamics of the atmosphere. The most important process is convection. This of course occurs only in the troposphere, but most of the atmosphere is the troposphere. Only the mesosphere and ionosphere are fully radiation dominated since matter is so sparse – it’s practically space.

Maybe there is an emission height. But heat is transported to the emission height from the surface primarily by convection, not radiation. Radiative fluxes play a negligible role, easily snuffed out by water vapour feedbacks and attractor-seeking nonlinear adaptive and optimising responses.

John Tillman
Reply to  Phil Salmon
October 9, 2019 2:30 pm

With more plant nutrient in the air, what a wonderful world it would be!

October 9, 2019 1:04 pm

Ja. Ja.
But it seems you have all forgotten that it will get hotter at the higher lats due to the natural climate change. Click on my name to read my report on that.

Mike Maddocks
October 9, 2019 1:06 pm

Are they using CMIP6 or CMIP6 without particle forcing? Because if it’s the latter then it is simply junk science.

You don’t need to go into any more detail than this. If they are selectively ignoring data because they don’t like the conclusions then they are propagandists, not scientists.

whiten
October 9, 2019 1:07 pm

I am sorry, for being so direct.

But from my point of view it has to be considered as a total imbecility case that totally burns as bright and as splendid.

As far as I know and can tell, in consideration of models and experiment proper, as per GCM simulation the clause of CO2 doubling does not even exist in the matter of merit there ever.

The only GCM proper simulations that may qualify as per the clause of CO2 doubling are simulations that have an initial condition below 240 ppm, completely unrealistic, where the entirety of the rest of GCM simulations proper above the 240 ppm initial condition never do a doubling there.

I am not sure yet, as how long for ppl it will take to understand that CS, ECS and TCR are simply AGW components or “man-made climate change” components, with no any value or any meaning out of such as a given.

When considering it by this angle the question requiring an answer stands as;

“Who happens to be more imbecile there, the ones that challenge for a feces dirty fight, or the ones who readily accept and comply with that challenge, every time as often as possible in the very means of that premise!??”

Sorry for being so strictly direct with this one, only so as due to my proposition of understanding and addressing of this issue in that regard.

It is like keep head banging the same wall for ever in the same method, and expecting a different result.

really really sorry!

cheers

ferd berple
October 9, 2019 1:32 pm

A question for MoB.

One thing that has troubled me for some time is what appears to me to be single minded obsession with radiation to calculate temperatures.

Consider this. During the day, non-radiative processes remove energy from the surface. At night a portion of this energy is returned to the surface, also by non-radiative processes. A similar process happens between the tropics and the poles.

The net effect of non radiative processes is to reduce maximum temperatures and increase minimum temperatures, while the average temperatures are unchanged.

However, because radiation is a fourth-power function, reducing the variance in this fashion MUST also reduce outgoing radiation, which according to GHG theory must lead to an increase in surface temperatures to maintain the radiative balance at TOA.

Thus, non GHG processes also create an increase in surface temperatures according to the same logic used to explain the increase in surface temperatures due to GHG.

And if non-GHG and GHG processes both create the GHG effect, then the GHG effect is nothing of the sort. What we believe is due to GHGs is in reality overestimated, because we have incorrectly ignored the non-GHG contribution to the GHG effect.

John Q Public
Reply to  ferd berple
October 9, 2019 5:01 pm

My understanding is that GCMs contain CFD code, so should be modeling some convective heat transfer.

Monckton of Brenchley
Reply to  ferd berple
October 10, 2019 1:33 pm

In response to Ferd Berple, non-radiative transports redistribute heat within the atmosphere, keeping it in overall thermodynamic equilibrium, while radiative processes – if Einstein (1917) is wrong and the greenhouse effect can cause warming at all in an atmosphere in overall thermodynamic equilibrium – will cause warming. It is only the radiative processes that are subject to the radiation law.

whiten
Reply to  Monckton of Brenchley
October 10, 2019 3:13 pm

Monckton of Brenchley
October 10, 2019 at 1:33 pm
————-

if Einstein (1917) is wrong
—————————–

Lord Monckton,

Really me falling there for the “bait”, with the mentioning of the Einstein….

From my position of understanding, the GCM proper model experiment,
apart from any alleged or otherwise contribution in the consideration of AGW or whatever else there in these given lines of climate,
still happens also to be and standing as the main and the only proof there, so far, even as in the case of an experiment,
but still valuable in consideration of the claim that:

t=nt,
where “t” the time as standard time of and for any given point in space,,,,
the only responsive variable there in the GCM simulations, that ends up as “effected”…
where “t” as standard of time, or time standard….

No way considering the possibility of contemplation that light may be considered as bending, outside the clause of:

t=nt…. as Einstein clearly claimed… if I happen to not be wrong with this one!

oh, well, that could be actually considered as way over the top…

But from my point of view, the GCM simulations in the summery, apart from the AGW and climate thingy, do propagate strongly as the main experimental platform thus far proving, or from some point of view considered as proper valid proof,
that Einstein being correct with his “punchy” claim of :

t=nt…
(even when such an experiment (GCM) was not set-up to provide or prove such as a condition).

All math or numbers or equations applied in consideration of GCMs, will not much make any sense there, unless including and being subject to the clause of:

delta “t”, where “t” the time as per time standard.

Oh, well, this really over the top, yes, most probably… but just saying it!

Please do ignore this, if missing the point, made or claimed here…
no hard feelings there, anyway… 🙂

Clever intelligible reaction there, if I may say… 🙂

But still bound to the “heartless” valuation of the scientific method. as all else there… prior or after.

Thank you.

cheers

Dan Pangburn
October 9, 2019 2:33 pm

According to alarmists, water vapor increase depends only on temperature increase of the liquid surface water and has increased an average of 0.88% per decade. Actual measurements show the global average WV increase to be about 1.47% per decade.

This proves WV increase, not CO2 increase, has contributed to temperature increase.

CO2 increase in 3 decades, 1988 to 2018 = 407 – 347 = 60 ppmv
Water vapor increase from graph trend of NASA/RSS TPW data = 1.47 % per decade
From NASA graphic, average global WV = 10,000 ppmv
WV increase in 3 decades = .0147 * 10,000 * 3 = 441 ppmv
Per calculations from Hitran, each WV molecule is 5+ times more effective at absorbing energy from radiated heat than a CO2 molecule.
Therefore, WV has been 441/60 * 5 = 36+ times more effective at increasing ground level temperature than CO2.
The increased cooling by more CO2 well above the tropopause counters and apparently fully compensates for the tiny added warming from CO2 increase at ground level. Climate Sensitivity is not significantly different from zero.

Accounting for the WV increase, ocean surface temperature cycles and the solar effect (quantified by the sunspot number anomaly time-integral) matches 5-year smoothed HadCRUT4 measured temperatures 96+ % 1895-2018.
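Restated as a short sketch, taking the figures above at face value:

```python
co2_increase  = 407 - 347      # ppmv CO2 added over 3 decades, 1988-2018
wv_trend      = 0.0147         # fractional water-vapour increase per decade (NASA/RSS TPW)
wv_baseline   = 10_000         # average global water vapour, ppmv
effectiveness = 5              # claimed per-molecule WV:CO2 absorption ratio (Hitran-based)

wv_increase = wv_trend * wv_baseline * 3            # about 441 ppmv over the 3 decades
ratio = wv_increase / co2_increase * effectiveness  # about 36.8, i.e. the "36+ times" above
print(round(wv_increase, 1), round(ratio, 1))
```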

William Haas
Reply to  Dan Pangburn
October 9, 2019 9:05 pm

So Monckton is saying that, according to the temperature data, assuming that all of the warming was caused by adding CO2 to the atmosphere, the climate sensitivity of CO2 cannot be more than 1.4 degrees K. You are saying that at least 96% of the warming can be attributed to natural processes, which would imply that the climate sensitivity of CO2 is less than .056 degrees K. Through some different logic I came up with a value of less than .017 degrees K. Either value is quite small and for all practical purposes is essentially zero. It has been my conclusion that there is no real evidence that CO2 has any effect on climate.

Dan Pangburn
Reply to  William Haas
October 10, 2019 10:41 am

WH,
Results summarized in Table 1 of my blog/analysis show 0.62 K from increase in water vapor 1895 to 2018. If from CO2 increase at ground level this would be 0.62/36 = 0.017 K, same as you got.

My assessment at http://diyclimateanalysis.blogspot.com (second paragraph after Fig 1) shows that the added cooling from added CO2 above the tropopause more than compensates for this tiny amount at ground level, corroborating that CS is not significantly different from zero.

The 96% is how well the combination of ocean cycles, water vapor, and solar effect match the HadCRUT4 measurements. The WV increase correlates with irrigation increase (both exhibit major upturns around 1960). If that can be shown to be cause and effect, humanity had a substantial contribution to the temperature increase.

Average global temperature is being propped up by WV increase and el Ninos. Eventually the quiet sun and ocean cycle downtrend will prevail and an extended temperature decline will follow.

William Haas
Reply to  Dan Pangburn
October 10, 2019 1:38 pm

WOW! It may be only coincidence, but we both arrived at the number 0.017 by two different methods that, taken together, include both physics and measurement data. I am sure that the IPCC would reject our lines of reasoning, because it means that the increase in CO2 caused by mankind’s use of fossil fuel does not cause a significant change in climate, and hence the IPCC should no longer be funded.

Phil Salmon
Reply to  Dan Pangburn
October 10, 2019 3:01 pm

What water vapour increase?
Water vapour has decreased in recent decades:

comment image

In China it’s gone down since 1990.

https://link.springer.com/article/10.1007/s11430-013-4792-1

Dan Pangburn
Reply to  Phil Salmon
October 11, 2019 1:41 pm

Phil,
Those links both refer to RELATIVE humidity which depends also on atmospheric temperature. One goes back before satellites so its claim to be ‘global’ is a bit of a stretch. The other used data only from China. Both end several years ago.

Relative humidity is relevant to comfort and weather. Global average absolute humidity is what is relevant to global warming. It is expressed in different units as TPW.

NASA/RSS have been measuring TPW (Total Precipitable Water, i.e. sum of water vapor molecules all the way up) by satellite and reporting it monthly since 1988 at http://www.remss.com/measurements/atmospheric-water-vapor/tpw-1-deg-product . Fig 3 in my blog/analysis at http://globalclimatedrivers2.blogspot.com is a graph of the NASA/RSS numerical data and includes a rational extrapolation back to 1700. When normalized by dividing by the averages, the NASA/RSS data are corroborated by NCEP R1 and NCEP R2. https://wattsupwiththat.com/2018/06/09/does-global-warming-increase-total-atmospheric-water-vapor-tpw/

Phil Salmon
Reply to  Phil Salmon
October 11, 2019 2:50 pm

Thanks Dan

Michael S. Kelly LS, BSA Ret.
October 9, 2019 5:42 pm

The average insolation at is 1363.5 W m^-2. But January 5th 2020, it will be 1,410.3 W m^-2. That is the date Earth is at aphelion, the closest distance to the Sun. On July 4 2020, the insolation will be 1,319.0 W m^-2. That is the date of aphelion, when the Earth is furthest from the Sun.

The difference is due to the eccentricity of the Earth’s orbit, and the indisputable (and undisputed) fact that light intensity falls off as the inverse square of the distance from the source.

In January, the Earth is experiencing summer in the southern hemisphere. Most of the area presented to the Sun is ocean, which has an albedo of 0.06. In July, summer is in the northern hemisphere, and most of the area presented to the Sun is land. Land can have an albedo from 0.06 (dark wet soil) to 0.29 (desert). Virtually all of the northern hemisphere is more reflective than the southern.

So riddle me this: If the Earth is receiving 91.3 W m^-2 more power when it is at its least reflective (in the southern hemisphere), how come the northern hemisphere is experiencing more warming?

But apart from that, how in the world could anyone pick out a signal of 2 or 3 W m^-2 (actually, 8 or 12 when one takes out the ludicrous fudge factor of 4) in a world where the absolute, rock-solid, no schiffing around variation is 91.3 W m^-2 every 182.5 days?

Someone come up with a plausible explanation, please.
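The roughly 91 W/m^2 swing follows directly from the inverse-square law and the orbit’s eccentricity; a minimal sketch (the eccentricity, about 0.0167, is assumed here rather than taken from the comment):

```python
# TOA insolation at perihelion and aphelion from the inverse-square law
S_mean = 1363.5                  # W/m^2 at the mean Sun-Earth distance
e      = 0.0167                  # orbital eccentricity (assumed nominal value)

S_peri = S_mean / (1 - e)**2     # early January, Earth closest to the Sun
S_aph  = S_mean / (1 + e)**2     # early July, Earth farthest from the Sun
print(round(S_peri, 1), round(S_aph, 1), round(S_peri - S_aph, 1))
# about 1410.2, 1319.1 and 91.1, close to the 1,410.3, 1,319.0 and 91.3 quoted above
```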

Michael S. Kelly LS, BSA Ret.
Reply to  Michael S. Kelly LS, BSA Ret.
October 9, 2019 7:30 pm

Sorry, the second sentence should have read “But January 5th 2020, it will be 1,410.3 W m^-2. That is the date Earth is at perihelion, the closest distance to the Sun.” And while I’m at it, the first sentence should have read “The average insolation at the top of the atmosphere is 1363.5 W m^-2.

Pardon my egregious gaffes.

Michael Moon
October 9, 2019 10:25 pm

Scientific calculations do not involve the word “If.” The Little Ice Age ended when it ended, and clearly had nothing to do with CO2.

“If” the unclearly-defined warming since 1850, or 1880, or 1901, or some other year, was due to Natural Variation, which the Little Ice Age clearly was, then, this new unclearly-defined warming, which has happened more than once or twice in the past, has nothing to do with CO2.

Questionable temperature records adjusted by Advocates, who have cooled the past to make the present look scary, and ice records beginning in 1979, which I can assure you from my junior year at the U of M was savagely cold. None of this is rigorous science! None of it!

Calculate a Transient Climate Sensitivity with this assumption. Flunk out of my engineering school.

I understand, If it is 100% due to CO2, we must assume the worst case. But, this is not science, Einstein never assumed the worst case and called it Science.

And another thing, S-B calculations Cannot, Cannot, assume an averaged flux. The Flux, not Flux Density, not Flux Capacitor, is proportional to the 4th power of the Delta T. Resulting temperature will never ever be related to 1/4 of the flux! The Earth is heated by the Sun without averaging, ruins all calculations based on averaging. Sun is hot, heats the Surface of the Earth with some absorption of IR by the atmosphere, and some Albedo, incredibly hard to measure, those darn clouds. And then, it gets dark, called Night-Time, with no flux from the Sun.

Ruin our economy by continuing with this nonsense, try to beat them at their own game, which is based on fundamentally flawed assumptions which you do not see.

It is like this: Sun Shines on Earth. Sun is at 11,000 Degrees F, or thereabouts. Flux is based on the Delta T, really hot Sun, much cooler Earth. Geometry and angles of incidence, all good. But then, Sun goes down, Darkness!

Flux is proportional to the 4th power of the Delta T!

Stop with the averaging.

Stop with calling Albedo 0.29 or any other number, incredibly hard to measure and changing second by second.

Professor Wang is thanking me for this somewhere. Trenberth made a cartoon to fake out those without a technical education. Well I had one, difficult, but completed.

Do not over-simplify any of this.

Wow….

Monckton of Brenchley
Reply to  Michael Moon
October 10, 2019 1:27 pm

Mr Moon appears not to have read the head posting, which draws attention to the fact that no allowance is made in current calculations for Hoelder’s inequalities between integrals.

And he has yet to learn that the art of mathematics is to find a simple but effective way to address an apparently complex problem.

Michael Moon
October 9, 2019 10:48 pm

Can you guys understand this? The Temp controls the Flux. 0 Flux cannot be averaged with 1366 W/m2 Flux! Far more complex than that. That is the amount of energy arriving, but the Temp is proportional to (W/m2)^(1/4), not the average.

You guys never passed Heat Transfer, also known as Transport of Heat and Mass. Go back to school, good luck

Ruins all of this.

The Sun never transfers heat at less than 11,000 degrees F. The amount of heat corresponds to the Delta T. So, we cannot reverse the calculation, and report the temperature resulting from an averaged calculation of flux…

Bit tricky to work that out.

Monckton of Brenchley
Reply to  Michael Moon
October 10, 2019 1:24 pm

In response to Mr Moon, radiative flux density from the Sun can be measured directly by cavity radiometers mounted on satellites. A respectable value at present is about 1363.5 Watts per square meter, as stated and referenced in the head posting.

October 9, 2019 11:05 pm

Heat Transfer cannot be averaged, it can be integrated with traditional laws. Hugely complex integral, 11,000 degrees at a normal angle over the surface of the Earth as it rotates, angles of incidence, albedo, just nothing like Trenberth’s cartoon. And this 1,366 W/m2 is not right either, much more at the Equator, much less at the Poles, proportional to the temp difference in each square inch.

I will research the derivation of this number. Hugely complex. See what I can find out.

Phil Salmon
October 10, 2019 5:39 am

Miskolczi measured with radiosonde balloons both the optical thickness (emission height) of the atmosphere and atmospheric humidity.

He found that over 60 years of increasing CO2, the emission height did not change.
This was because humidity was decreasing to compensate for increasing CO2.
Here’s the humidity decrease:

comment image

Everyone who refutes Miskolczi does so on the basis of flawed theory and maths.
But his main point was an instrumental observation, not a theory.

Has anyone tried to confirm Miskolczi’s measurements?
Has the optical thickness changed or not?
Has humidity changed or not?

Same issue with Nikolov and Zeller.
Everyone criticises their theory and maths.
But their point was also an instrumental observation.
A log-log (ln-ln) relationship between normalised pressure and atmospheric temperature.

You don’t criticise an instrumental observation by arguing theory.
You try to explain the observation.

Bindidon
Reply to  Phil Salmon
October 11, 2019 12:59 am

Phil Salmon

It’s better to link to an original article than to show a picture found in a place referring to it.

https://friendsofscience.org/assets/documents/TPW-and-GHE.pdf

I read the stuff, and at first glance I didn’t quite understand why the decrease of relative humidity shown on page 4 is more relevant than the absolute humidity remaining constant on page 5.

Interesting paper.

Phil Salmon
Reply to  Phil Salmon
October 11, 2019 2:10 pm

Bindidon
The article shows that, based on humidity data from a major reanalysis dataset, declining humidity in the upper atmosphere offsets the greenhouse effect of increasing humidity in the lower atmosphere.

Could a drier stratosphere explain the colder stratosphere – less energy to lose?

Another element of Miskolczi’s work was involvement of the Virial theorem. While this was roundly trashed for theoretical errors, the motivation for it seems justified. I’m no expert on it but the general idea is to replace temperature with a more all-embracing measure of contained energy which would include the water vapour component. Water vapour is no doubt the key to atmospheric heat dynamics.

Phil Salmon
October 10, 2019 5:50 am
October 10, 2019 9:05 am

Alan,

“Please explain this”

As the temperature increases, each incremental W/m^2 of solar energy results in incrementally more evaporation. It reaches a point where the next W/m^2 evaporates enough water that 1 W/m^2 of latent heat is also removed. What else will keep ocean surface temperatures from exceeding about 80F (300K)?

Examine this plot of atmospheric water vapor vs. temperature:

http://www.palisad.com/sens/st_wc.png

Y is atmospheric water content in grams/m^2 (Y label is wrong) and X is the surface temperature in degrees K. Like all of my scatter plots, each little dot is the average of 1 month of data for each 2.5-degree slice of latitude, and the larger green and blue dots are the per-slice averages over about 3 decades of ISCCP weather satellite data covering almost every m^2 of the surface every 4 hours.

Notice how, above about 300 K, atmospheric water increases exponentially, which indicates that evaporation is also increasing exponentially, which in turn indicates that the latent heat removed from the surface is also increasing exponentially.

It’s also interesting that while water vapor increases dramatically in the tropics, which should significantly increase the GHG effect, the ocean surface temperature still rarely exceeds 80F.

tom0mason
October 10, 2019 11:01 am

Kevin Trenberth, one of the top climate scientists in the world, said the following in the journal Nature (“Predictions of Climate”) about climate models in 2007, and it is still the case in 2019:

None of the models used by the IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models. There is neither an El Nino sequence nor any Pacific Decadal Oscillation that replicates the recent past; yet these are critical modes of variability that affect Pacific rim countries and beyond. The Atlantic Multidecadal Oscillation, that may depend on the thermohaline circulation and thus oceanic currents in the Atlantic, is not set up to match today’s state, but it is a critical component of the Atlantic hurricanes and it undoubtedly affects forecasts for the next decade from Brazil to Europe. Moreover, the starting climate state in several of the models may depart significantly from the real climate owing to model errors. I postulate that regional climate change is impossible to deal with properly unless the models are initialized.

Therefore the problem of overcoming this shortcoming, and facing up to initializing climate models means not only obtaining sufficiently reliable observations of all aspects of the climate system, but also overcoming model biases. So this is a major challenge.

Monckton of Brenchley
Reply to  tom0mason
October 10, 2019 1:22 pm

If the head posting is right, one doesn’t need models at all if all one wants to know is how much global warming we may cause. It’s simple arithmetic.

Mike George
October 11, 2019 4:34 am

A couple of points here that I find confusing.

“First, we need the warming ΔR1 from 1850-2011”

My understanding is that we do not have that in any reliable, instrumental and global form at all. Instrumental temperature coverage of the Southern Hemisphere does not really exist much before the beginning of the 20th century. So, anything before that relies on proxies and they are not a reliable substitute for instrumental data. OK for determining trends but not so good for economists and politicians making life and death decisions about future climate scenarios and their social impact. Not to mention Extinction Rebellion, Saint Greta and the Catastrophic Anthropogenic Global Climate Heating Emergency Crisis Hysteria (CAGHECH).

Thus there is no reliable, instrumental global temperature data for ‘pre-industrial times’ at all. Not only that but;

Your graph – Figure 2 – does indeed show warming, but really, anything much before global instrumental data from radiosonde balloons starting in 1950, is hypothetical. I mean, surely by now, we’ve all done the rounds of weather stations beneath air-con vents, on tarmac, near airports, highways, etc. – not to mention poor maintenance – and all interpolated and adjusted. Not what I would call reliable. Not from a scientific perspective.

Fig 2 shows warming but does not, as far as I can tell, differentiate natural climate variation at all. So, how do you extract the ‘human signal’ from the natural background rate of change? Where is the curve that shows natural climate variation? I don’t see any ‘filtering equations’ there. Perhaps I’m missing something.

In other words, what should the global average temperature be today, without us pumping extra stuff into the atmosphere? If I subtract a number based proportionally on your ECS, would that be correct? Will that give me the answer I desire?

So, please , I’d be very grateful if you would fill in the gaps in my understanding. Thanks.

October 11, 2019 8:56 am

Lots of news coming about CMIP6!

“How accurately can the climate sensitivity to CO2 be estimated from historical climate change?”
J. M. Gregory, T. Andrews, P. Ceppi, T. Mauritsen, M. J. Webb
Climate Dynamics, online 10 October 2019.
https://link.springer.com/article/10.1007/s00382-019-04991-y

A summary: “The CMIP6 landscape”
Editorial in Nature Climate Change, October 2019
“CMIP6 output is growing rapidly and will afford a re-examination of important aspects of the climate system.”
https://www.nature.com/articles/s41558-019-0599-1

I assume we’ll get much more info from the papers at these:

The 2019 CFMIP Meeting on Clouds, Precipitation, Circulation, and Climate Sensitivity.
Sept 30 to Oct 4 in Mykonos, Greece.
No abstracts or papers yet published on its website.
https://www.giss.nasa.gov/meetings/cfmip2019/

And, of course, at the AGU Fall meeting in December – in San Francisco.

This could be big. Bigger than big. An increase in temperature forecasts by CMIP6 models might blast away the last remaining restraints, allowing activists to make the WGI report of AR6 go full doomster. The resulting propaganda explosion, as activists further exaggerate it, might be record setting.

Now I have to go pick my few remaining ripe tomatoes, as frost is expected tonight. As usual. No delay yet from global warming, unfortunately (most of them are still green).

https://kwwl.com/weather/schnacks-weather-blog/2019/10/05/first-frost-when-it-typically-happens-and-when-we-might-see-one/

ferd berple
October 11, 2019 6:39 pm

MoB

I’m not sure if my point was clear. Consider a world with no atmospheric circulation. For argument, let the average temp on the sunlit side be 330 K and the dark side 270 K, for an average temp of 300 K. The radiation emitted from the surface without GHG will be proportional to 330^4 + 270^4 = 1.72 x 10^10

Now add circulation to the atmosphere such that the average temp is unchanged but the high is now 315 K and the low is 285 K. The radiation will be proportional to 315^4 + 285^4 = 1.64 x 10^10. A 4.3% drop in outgoing radiation without any change in average temp or GHG.

And according to GHG theory, this must increase the average surface temperature by about 4C to restore equilibrium!!

330^4+270^4 approx = (315+4)^4+(285+4)^4

In other words, the moderating of surface temperatures due to atmospheric circulation appears likely to cause significant “greenhouse” warming, without the need for any greenhouse gasses.

Thus the 33 C GHG effect cannot all be due to GHGs, and thus GHGs play a smaller role than provided for by GHG theory.
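A quick check of the fourth-power sums above (a sketch using the same illustrative temperatures):

```python
no_circulation   = 330**4 + 270**4     # ~1.72e10: sunlit/dark extremes, mean 300 K
with_circulation = 315**4 + 285**4     # ~1.64e10: moderated extremes, same 300 K mean

drop = 1 - with_circulation / no_circulation
print(f"{drop:.1%}")                   # ~4.3% less outgoing radiation at the same mean temp

# Adding roughly 4 K to both extremes restores the original total to within about 1%:
print((315 + 4)**4 + (285 + 4)**4, no_circulation)   # ~1.733e10 vs ~1.717e10
```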

Johann Wundersamer
October 17, 2019 12:40 am

“committing suicide, as one Communized child did recently in the English Midlands, because “climate emergency”.”

The classic https://www.google.com/search?client=ms-android-huawei&sxsrf=ACYBGNSwG8kU97b-gVuUuWlRsZm6sTLl4w%3A1571297870541&ei=ThqoXZLaIKrLmwWLp5vIBg&q=solipsism+syndrome&oq=solipsism+syndrome&gs_l=mobile-gws-wiz-serp.3

In ~12 years the world will end with Greta’s future.

Johann Wundersamer
October 17, 2019 2:33 am

Thomas Homer October 10, 2019 at 7:19 am

“The approximately logarithmic temperature response to change in CO2 concentration does not commence until 100 ppmv.”

Even at coldhouse, “snowball Earth”, CO2 never sank beneath 100 ppm.

Why not 100 ppm. You want the Planck constant?

After all, detonation of pulverised fertilizers, aka dynamite, doesn’t take minutes. Nor milliseconds.

With starting / ignition it happens “Now”.