CMIP6 models overshoot: Charney sensitivity is not 4.1 K but < 1.4 K

By Christopher Monckton of Brenchley

Recently the indefatigable Dr Willie Soon, who reads everything, sent me a link to the projections of equilibrium global warming in response to doubled CO2. This standard yardstick for global-warming prediction is known in the trade as “Charney sensitivity” after Dr Jule Charney, who wrote a report in 1979 saying that doubled CO2 would warm the world by 1.5-4.5 K, with a midrange estimate of 3 K. Most IPCC reports adopted the same sort of interval in making their predictions.

The Coupled Model Intercomparison Project’s 5th-generation models projected 2.1-4.7 K Charney sensitivity, with a midrange estimate of 3.35 K (from data in Andrews et al. 2012).

Now the sixth generation of these cybernetic behemoths, the CMIP6 ensemble, predicts 3.0-5.2 K Charney sensitivity, with a midrange estimate of 4.1 K (Fig. 1). The original midrange projection has become the models’ lower bound.


Fig. 1. OTT: Projected Charney sensitivity in 21 CMIP6 models, September 2019.

In reality, the midrange Charney sensitivity to be expected on the basis of observed warming as well as total and realized forcing to 2011, the year to which climate data were updated in time for IPCC’s 2013 Fifth Assessment Report, is less than 1.4 K. That would take at least a century to happen.

Here, then, is a giant error of logic right at the heart of official climatology. CMIP6 models project 4.1 K warming in response to doubled CO2 when, on the basis of officially-published data, they should be projecting only 1.4 K. They are overshooting threefold.

No surprise, then, that children relentlessly propagandized by the sub-Marxist educational establishment are either collecting Nobel Peace Prize nominations for making snarly faces at President Trump in the U.N. General Dissembly or committing suicide, as one Communized child did recently in the English Midlands, because “climate emergency”.

Teaching children about the ever-more-absurd hyper-predictions of global warming is child abuse. It should surely be outlawed before anyone else is driven to death. Unfortunately, the Socialist Party in Britain, which has been taken over by Communists in recent years, is proposing mandatory global-warming indoctrination classes even for five-year-olds.

Official climatology’s own mainstream data and methods would lead it to expect a midrange Charney sensitivity no more than one-third of the latest models’ 4.1 K projection.

We shall take the approach, revolutionary in climatology, of deriving the true midrange Charney-sensitivity estimate directly from real-world data. You don’t need models, except at the margins. It is possible to derive future global warming from the observed period warming from 1850-2011, from official estimates of the reference anthropogenic radiative forcing over the same period, and from the radiative imbalance that subsisted at the end of that period.

As far as I can discover, the Intergovernmental Panel on Climate Change, to name but one, has never attempted to derive a midrange estimate of future global warming by that most obvious and direct method – from real-world data.


Fig. 2. Not much warming: Monthly temperature anomalies, 1850-2011 (HadCRUT4).

First, we need the warming ΔR1 from 1850-2011. The answer, from HadCRUT4, the only global dataset that covers the whole period, is 0.75 K, a rate of less than 0.5 K century⁻¹ (Fig. 2).

Next, we need the Planck sensitivity parameter P – the factor by which a radiative forcing is multiplied to yield the corresponding warming before accounting for feedback. Roe (2009) calls this pre-feedback warming the “reference sensitivity”.

A respectable approximation to P is the Schlesinger ratio (Schlesinger 1985), the ratio of the global mean surface temperature at a given moment to four times the net incoming radiative-flux density at the top of the atmosphere.

In (1), total solar irradiance S is 1363.5 W m⁻² (deWitte & Nevens 2017); albedo α is 0.29 (Stephens 2015); and the flat-Earth fudge-factor d is the ratio of the Earth’s spherical surface area to that of its great circle: i.e., 4. No allowance is made for Hölder’s inequalities between integrals (though it should be made), for we are using official climatology’s methods.

S (1 − α) / d = 1363.5 × (1 − 0.29) / 4 ≈ 242.0 W m⁻² (1)

In (2), the Planck parameter P is derived on the basis that the global mean surface temperature TS in 2011 was 288.4 K (HadCRUT4: Morice et al. 2012).

P = TS / (4 × 242.0) = 288.4 / 968.1 ≈ 0.298 K W⁻¹ m² (2)
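The arithmetic of (1) and (2) is easy to verify for oneself. A minimal sketch in Python (the variable names are illustrative only, not official notation):

```python
# Verify (1) and (2): net incoming flux density and the Planck parameter.
S = 1363.5       # total solar irradiance, W/m^2 (deWitte & Nevens 2017)
albedo = 0.29    # albedo (Stephens 2015)
d = 4            # ratio of spherical surface area to great-circle area

flux = S * (1 - albedo) / d        # net incoming flux density at TOA, eq. (1)
T_S = 288.4                        # global mean surface temperature in 2011, K
P = T_S / (4 * flux)               # Schlesinger ratio, eq. (2)
print(round(flux, 1), round(P, 3))  # 242.0 and 0.298
```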

Knowing P gives us the reference sensitivity ΔRC to doubled CO2 in (3). I do not yet have the CO2 forcing ΔQC from the CMIP6 models, so we shall take it as the mean of the estimates in 15 CMIP5 models (Andrews et al. 2012): i.e., 3.447 W m⁻².

ΔRC = P ΔQC = 0.298 × 3.447 ≈ 1.027 K (3)

Next, we need the reference (pre-feedback) anthropogenic radiative forcing ΔQref from 1850-2011. IPCC (2013, fig. SPM.5) gives a midrange 2.29 W m⁻², to which subsequent papers (e.g. Armour 2017) have added 0.2 W m⁻² to correct an overestimate of the negative aerosol forcing. Call it 2.5 W m⁻².

We also need to know how much of that forcing has been realized: i.e., how much of it is reflected in the 0.75 K observed warming to 2011. Smith (2016) gives an estimated radiative imbalance, or unrealized forcing, of 0.6 W m⁻². Therefore, the realized forcing ΔQrlz is 2.5 − 0.6, or 1.9 W m⁻².

In (4), the system-gain factor A implicit in the real-world data from 1850-2011 is derived.

A = ΔR1 / (P ΔQrlz) = 0.75 / (0.298 × 1.9) ≈ 1.325 (4)

We can now derive the midrange estimate of Charney sensitivity ΔEC in (5).

ΔEC = A ΔRC = 1.325 × 1.027 ≈ 1.36 K (5)
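Readers who wish to check the whole chain from (1) to (5) can do so in a few lines. A sketch only (the names are illustrative; the inputs are those given above):

```python
# Midrange Charney sensitivity from the 1850-2011 record, steps (1)-(5).
S, albedo, d = 1363.5, 0.29, 4     # solar irradiance, albedo, sphere/disc ratio
T_S = 288.4                        # K, surface temperature in 2011 (HadCRUT4)
dQ_C = 3.447                       # W/m^2, CO2-doubling forcing (CMIP5 mean)
dR_1 = 0.75                        # K, observed warming 1850-2011 (HadCRUT4)
dQ_ref = 2.5                       # W/m^2, total anthropogenic forcing to 2011
imbalance = 0.6                    # W/m^2, unrealized forcing (Smith 2016)

P = T_S / (4 * S * (1 - albedo) / d)   # (1)-(2): Planck parameter, ~0.298
dR_C = P * dQ_C                        # (3): reference sensitivity, ~1.03 K
dQ_rlz = dQ_ref - imbalance            # realized forcing, 1.9 W/m^2
A = dR_1 / (P * dQ_rlz)                # (4): system-gain factor, ~1.33
dE_C = A * dR_C                        # (5): Charney sensitivity
print(round(dE_C, 2))                  # 1.36, i.e. less than 1.4 K
```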

Fig. 4 shows the startling discrepancy between the Charney sensitivity expected on the basis of observed warming, reference forcing and its realized fraction to 2011, on the one hand, and, on the other, the untenably-exaggerated Charney sensitivities predicted by the CMIP models.


Fig. 4. Overstated midrange Charney sensitivities (CMIP5 3.35 K, red bar; CMIP6 4.05 K, purple bar) are 2.5-3 times the 1.35 K (orange bar) to be expected given 0.75 K observed warming from 1850-2011 (HadCRUT4: green bar), 2.5 W m⁻² total anthropogenic forcing to 2011 (IPCC 2013, fig. SPM.5; Armour 2017) and 0.6 W m⁻² unrealized forcing to 2011 (Smith 2015).

The models’ projections flatly contradict the published data on manmade forcing and radiative imbalance. Global warming will be about one-third of their overblown midrange estimates. Scientifically speaking, that ought to be enough to end the climate “emergency”.

Since three-quarters of the CMIP6 models’ midrange 4.1 K projection of Charney sensitivity is feedback response, the error in the models is likely to lie in their treatment of the water vapor feedback. Indeed, the tropical mid-troposphere “hot spot” predicted in model after model is not evident in observed reality (Fig. 5). Without it, the water vapor feedback response must be small, and could not quadruple reference sensitivities.


Fig. 5. Models’ projected tropical mid-troposphere “hot spot” (a) is not observed (b).

Policymakers, therefore, should assume a Charney sensitivity not of 3 or 4 K but of less than 1.4 K. Since that warming is small, slow and net-beneficial, and since climatology has never asked, let alone answered, the key question of what the ideal global mean surface temperature is, there is no rational justification for assuming that a mild warming requires any action at all, except the courage to face down the screeching Communists and do nothing.


211 thoughts on “CMIP6 models overshoot: Charney sensitivity is not 4.1 K but < 1.4 K”

  1. “Recently the indefatigable Dr Willie Soon, who reads everything, sent me a link to the projections of equilibrium global warming in response to doubled CO2. This standard yardstick for global-warming prediction is known in the trade as “Charney sensitivity” after Dr Jule Charney, who wrote a report in 1979 saying that doubled CO2 would warm the world by 1.5-4.5 K, with a midrange estimate of 3 K. Most IPCC reports adopted the same sort of interval in making their predictions.”

    Maybe so, but what about the thermodynamics? And what about the carbon budgets constructed from the very stable TCRE? Why do deniers like to talk about the ECS and hide from the TCRE?
    Is it because the TCRE is a little too stable and predictable to mess with?

    Please see

    https://tambonthongchai.com/2019/10/08/quoraclimate/

    https://tambonthongchai.com/2019/09/21/boondoggle/

        • I DID read it! Evidently, you Read Part 1- the prosecution, but ignored Part 2- the defense.

          Continue reading Thongchai’s excellent blog- it’s very informative, and you may learn something.

          • Chaamjamal has figured out that he will get more people reading his blog if he pretends to be a bedwetter.

            The 1st chart is fraudulent. Dr. Roger Pielke has tracked storms and they are not increasing. Floods have nothing to do with warming of the atmosphere. I suspect that the hydrological numbers are fraudulent as well. There are no more droughts than usual. Forest fires are caused by lightning strikes and arson. There are fewer heat waves now than in the 1930s. This fraudulent chart extremely upsets me and I am determined to get to the bottom of this fraud and who initiated it.

          • Alan, I think that chart, coming from the insurance industry, is probably based on the usual meaningless metric of how much “economic damage” (perhaps, in particular, insured damage) is caused by various weather events. Which shows nothing more than the serial stupidity of building more (and more expensive) housing in vulnerable coastal, flood-plain, wildfire-threatened, and other such “high-risk” areas, NOT that the “weather” has gotten any “worse.”

          • Alan Tomalty -I would say the huge conflagrations these days are caused by forest mismanagement whereby large fuel loads are allowed to accumulate on the forest floors.

    • I humored you enough to look at the first image of your first link, even though it’s seven (7) years out of date.

      Counts of global “storm” events, “floods, mass movement” and “extreme temperature, drought, forest fire?” Give me a break.

      Any accounted event with a fatality counts as “1”…doesn’t matter if one person dies or 10,000. Whether or not a non-fatal event “counts” otherwise is determined by the level of loss in USD, which is different country-by-country.

      Absurd methodology, which is hampered by the level of reporting in the earlier parts of the period, increases in population, increases in population density, more vulnerable infrastructure, etc. Population growth alone explains nearly all of the increase. Normalize to that and you’ll find practically nothing.

      “Extreme temperature” is included…safe to say that includes extreme COLD. Storms include COLD weather events.

      Poppycock.

      • Bangcock Poppycock. Never could make any sense out of his stuff, stopped looking long ago. Thanks for trying.

    • You know that CO2-vs-temp graph is fake, right? It takes 100 years for ice to form in the Antarctic, and direct CO2 measurement only began in 1960. So there is a huge gap in the middle where data doesn’t exist, spliced to a proxy that cannot have been calibrated. The proxy also showed C14 levels anomalous with the ice layers, indicating contamination. Stomata proxies, on the other hand, show pre-industrial CO2 as closer to 310, indicating the warming commenced well before CO2 became a factor. Additionally, you have failed to consider observed changes to albedo. Cloud cover decreased by 5% in the 1990s according to the International Satellite Cloud Climatology Project. This change is verified by the moonshine project, and corresponds to the 2 W/m2 increase in OLR. That’s a natural forcing twice the size of CO2 over the same period.

    • Re: “CMIP6 models overshoot: Charney sensitivity is not 4.1 K but < 1.4 K”

      Directionally correct, as stated below (but imo, real climate sensitivity, if it exists at all in measurable quantity, is probably even lower than 1C/doubling of CO2):

      Excerpted from:
      CO2, GLOBAL WARMING, CLIMATE AND ENERGY
      by Allan M.R. MacRae, B.A.Sc., M.Eng. June 2019
      https://wattsupwiththat.com/2019/06/15/co2-global-warming-climate-and-energy-2/

      9. Even if ALL the observed global warming is ascribed to increasing atmospheric CO2, the calculated maximum climate sensitivity to a hypothetical doubling of atmospheric CO2 is only about 1 degree C, which is too low to cause dangerous global warming.

      Christy and McNider (2017) analysed UAH Lower Troposphere data since 1979:
      Reference: https://wattsupwiththat.files.wordpress.com/2017/11/2017_christy_mcnider-1.pdf

      Lewis and Curry (2018) analysed HadCRUT4v5 Surface Temperature data since 1859:
      Reference: https://journals.ametsoc.org/doi/10.1175/JCLI-D-17-0667.1

      Climate computer models used by the IPCC and other global warming alarmists employ climate sensitivity values much higher than 1C/doubling, in order to create false fears of dangerous global warming.

      • Yes. Strange that CoB goes off on his own without even mentioning more serious published papers from competent statisticians, like Lewis and Curry’s work.

        Now the sixth generation of these cybernetic behemoths, the CMIP6 ensemble predict 3 to 5.2 K Charney sensitivity, with a midrange estimate of 4.1 K (Fig. 1). The original midrange projection has become the lower bound.

        What is the source of that graph? How are those calculations done? GCMs do not output ECS, someone has to calculate it. How was that done? Global Climate Intelligence Group sounds highly suspect. CoB gives no reference to where this BS comes from. As ever he is not reliable.

        • So, using either method, if we can geoengineer Earth to a comfortable 800ppm of CO2 – we only get less than 3deg of warming. And, since the effectiveness of CO2 to warm declines logarithmically, going to 1200ppm doesn’t get us beyond 4deg of warming.

          Going to be hard to get back to a warm, green Earth using only CO2.

    • Your FIGURE 5: NASA INFRARED SPECTROGRAPH, cites an IR spectra mapped to wavenumber space.

      This is a surprisingly common error. A Planck function mapped to wavenumber space does indeed exhibit a peak close to 667 cm-1, which appears to be close to the dominant Q branch of the CO2 v2 band spectra. However, such a peak is illusory and arises because of the known distortion of the Planck function when mapped to a wavenumber space.

      Mapping the Planck function to wavelength space exhibits a peak at around 9.6 um, close to the O3 feature you allude to in your earlier quote from Wikipedia. In such a spectrum the role of CO2 is relegated to the margins of the energy distribution, where it can have but little direct effect.

      • In response to Mr Stavros, the units in Fig. 5 are latitude (x axis), pressure altitude (y axis), also shown as km a.g.l., and temperature change (color coded). There is nothing about wavenumber. And I never quote Wikipedia, which is the most unreliable information source on Earth. I suspect Mr Stavros’ comment may be directed at another commenter.

    • In response to the furtively anonymous “Chaamjamal”, who labels those who disagree with the climate-Communist Party Line as “deniers” from behind a cowardly cloak of anonymity, equilibrium climate sensitivity is – as the head posting explains if he will get someone to read it to him – the standard metric for studying rates of global warming. If “Chaamjamal” wishes to challenge the use of this metric, he should direct his concerns to the Secretariat of IPeCaC, which will pay no more attention to this fatuous attempt to avoid the main point than I shall.

      The main point is that the rate at which the world is warming (the transient climate response) is adjusted to calculate the equilibrium response by taking account not only of total anthropogenic forcing but also of that fraction of the anthropogenic forcing over a given period that has not yet been realized. The useless models are predicting three times as much warming as they should, and they are in any event unnecessary because the simple method outlined in the head posting allows a direct calculation of equilibrium sensitivity.

    • The “Total Carbon Budget” idea doesn’t make sense and seems to be a political idea that removes nations’ ability to compensate for their emissions. TCRE looks at the total human CO2 emitted over time and doesn’t seem to care how much has been absorbed. Therefore the Net-Zero crap becomes the latest fad. If climate alarmists really cared about the Earth and thought CO2 was a threat, then they would stop getting their science from a truant child and start investigating ocean fertilization, which would also help the environment by providing food to restore fish stocks reduced by ever-increasing fishing.

    • Thong of Thailand argues for a runaway warming from CO2 radiation dynamics.

      The palaeo climate record totally contradicts and refutes this. CO2 has been much higher than now for most of the history of earth with multicellular life. And runaway warming from CO2 has never once occurred.

  2. “, the error in the models is likely to be in their treatment of the water vapor feedback.”

    singular “error”……..there’s several errors…all going on at the same time

    adjusting past temps down..to show an agenda driven faster rate of warming…and then “tuning” the models to that…and wonder why the models show the same faster rate of warming

    “it’s not a flaw..it’s a feature”

    • In response to “Latitude”, it is generally thought (and is claimed by climatologists) that there is little uncertainty as to most radiative forcings, which, in any event, account for only a quarter of global warming. The other three-quarters comes from temperature feedback response. But IPCC (2013) finds that midrange temperature feedback responses to all feedback processes other than the water-vapor feedback self-cancel. For this reason, the most likely source of the error that has led to the over-prediction of global warming demonstrated in the head posting is the water vapor feedback. As Fig. 5 shows, there is indeed an empirically-discernible discrepancy between prediction and reality: the water vapor feedback response cannot be anything like as great as predicted, because in the crucial tropical mid-troposphere the increase in specific humidity that the models predict (the cause of the predicted but in reality absent “hot spot”) is in reality a decline.

      • I’ve always thought that the fact that ENSO didn’t cause runaway warming to be reasonable evidence that water vapor feedback was grossly overstated.

        rip

      • “But IPCC (2013) finds that midrange temperature feedback responses to all feedback processes other than the water-vapor feedback self-cancel.”

        The whole misinterpretation starts with the word “feedback”.

        When a machine is broken by “feedbacks” then the whole concept is dumpable from the start;

        There could be shock absorbers to replace / to renew or dampeners.

        The complete climate machine comes to a grinding halt when crucial parts get damaged. And does anyone think “feedbacks” are the same as external energy supply.

        Did this planet function for 4.8 billion years only to get broken by the latest newcomers in 2 million years: godlike newcomers?

  3. It only got up to 0 degrees C here in the Edmonton area of Alberta today. It would sure be nice if we could see some of that warming they claim is happening or keep promising us, but never happens.

  4. “Teaching children about the ever-more-absurd hyper-predictions of global warming is child abuse. ”

    You’re d***ned right it’s child abuse. But until or unless someone takes those lying bozos to court for fraud, it won’t stop. And then what? An entire generation of children who become adults and realize that they’ve been lied to, and how will they get even with the scurrilous jerks who did this to them?

    • In response to Sara, the climate scam is indeed moving inexorably towards the mother of all fraud prosecutions. First, however, it will be necessary to demonstrate by the very simplest means that the predictions on which the fraud is based are false. The head posting demonstrates just one of three or four distinct ways to show that the rate of global warming is about 1.4 K per CO2 doubling, not 4.1 K and, therefore, not the far higher predictions on the basis of which scientifically-illiterate governments implement the insane policies that are destroying the economies of the West.

      • I doubt that proving that the predictions upon which the fraud is based are false will have any effect whatsoever. The alarmist momentum is based upon a moral conviction that they are right (and making a lot of money, gathering power and propagating a socialist philosophy), and the veracity or otherwise of the science is simply lost in the noise.

        From various IPCC -related utterances:

        Timothy Wirth:
        “We’ve got to ride the global-warming issue. Even if the theory of global warming is wrong, we will be doing the right thing in terms of economic policy and environmental policy.”

        Ottmar Edenhofer
        “Climate policy has almost nothing to do anymore with environmental protection. The next world climate summit in Cancun is actually an economy summit during which the distribution of the world’s resources will be negotiated.”

        I address this in my recent essays (links to the first two at the beginning):

        https://wattsupwiththat.com/2019/10/06/understanding-the-climate-movement-part-3-follow-the-money/


          • Yep. It’s a cult, not science. The belief must be challenged through a morality argument. Just disproving the ‘settled science’ has no effect on the Gretins.

        • In response to Mr Rossiter, one should not underestimate the power of absolute proof in science. It is now possible to prove the climate Communists wrong, scientifically.

    • Sara, and here’s one for the children “taught about the ever-more-absurd hyper-predictions of global warming for child abuse”

      “But IPCC (2013) finds that midrange temperature feedback responses to all feedback processes other than the water-vapor feedback self-cancel.”

      The whole misinterpretation starts with the word “feedback”.

      When a toy is broken by “feedbacks” then the whole thing is dumpable from the start;

      The complete toy breaks up evermore with parts damaged. And does anyone think toys get broken by itself.

      The children will come to ask for a new one. A better one!

  5. Given the amount of rain at the equator and the insolation, it looks to me as if almost all the solar radiation landing on the equator is taken up in evaporating water. Said water vapor is carried upward in the atmosphere where it condenses, creating thunderstorms. With all that heat being carried skyward, I’m somewhat perplexed that there is no equatorial hot spot.

    • Above about 300K, the incremental latent heat removed by ocean evaporation is almost completely offsetting the incremental solar energy evaporating the water. There’s an equatorial cold spot in the tropical oceans, where ocean surface temperatures generally don’t exceed 300K, or else hurricanes start to form, which dissipate the excess energy.

      The temperatures of atmospheric gases and clouds are slaved to the surface temperatures and since there’s no tropical hot spot on the surface, there will be no tropical hot spot in the atmosphere. This next plot demonstrates the roughly linear relationship between cloud top emissions and surface emissions where both are converted with Stefan-Boltzmann from the reported average cloud top and surface temperatures.

      http://www.palisad.com/co2/sens/se/ce.png

      You can see the 300K bump at about 460 W/m^2 of surface emissions corresponding to a surface temperature of 300K. Note that like all of the relationships between W/m^2 and other W/m^2, this one is roughly linear since all Joules are equivalent making superposition applicable to calculations of relationships involving Joules, Watts and Watts/m^2.
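(For what it’s worth, the “300K bump at about 460 W/m^2” is simply the Stefan-Boltzmann emission at 300 K, as a one-line check shows; a sketch only:)

```python
# Stefan-Boltzmann emission at 300 K, for comparison with the quoted 460 W/m^2.
SIGMA = 5.670374419e-8        # W m^-2 K^-4, Stefan-Boltzmann constant (CODATA)
print(round(SIGMA * 300**4))  # 459 W/m^2, i.e. roughly the quoted 460
```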

      • You said “Above about 300K, the incremental latent heat removed by ocean evaporation is almost completely offsetting the incremental solar energy evaporating the water.”

        Please explain this. This sounds gibberish to me.

    • CommieB

      Glad you’re still up. You mention tropical evaporation. I’d like to point out (indirectly to Christopher) that in the tropics the sun beats down on the ground where also falls said rain. Copiously. Said rain absorbs heat from said hot surface, cooling it and transferring the heat energy to the atmosphere.

      This is called convective heat transfer and is admitted by the likes of Trenberth 2009 and small children.

      The calculations above ignore this effect, widely provable over the surface of the planet, or any other planet with an atmosphere. The radiative portion of atmospheric heating is indeed caused by GHG’s, and the rest is caused by convective heat transfer. As can easily be seen, the whole temperature difference between CO2 concentration before and now is credited for providing all the heat gained from the hot surface and the GH effect. They forget Trenberth and his cartoon of energy flow. The contribution to the air temperature by the hot surface should first be deducted from the quantity currently attributed to GHG’s, and only then should the Charney-sensitivity calculation be attempted on what remains. It is probably 1.4/2 = 0.7 C per doubling. Now that’s more like the truth.

      • Something like that.

        We have the simplified energy budget. link The biggest numbers are surface radiation, back radiation, and TOA incoming solar radiation. IMHO, the contribution of convection and phase change are understated.

        The trouble is that the Earth’s energy flux is not uniform. It appears that the simplified model grossly understates the role of convection and phase change at the equator.

        What got me going is the observation that the tropics are warmer than the equator. link Someone also pointed out that the equator is quite cloudy, but that is also not uniform. It changes dramatically over the day and does reduce the insolation at the surface once the thunderstorms get going.

        I wonder if what’s going on at the equator is enough to explain Trenberth’s missing heat.

    • With all that heat being carried skyward, I’m somewhat perplexed that there is no equatorial hot spot.

      The “hot spot” is what is predicted relative to the climate without “too much” of the magic gas. You will note that fig. 5 is temperature “anomaly”. The RH panel shows the observed hotspot is barely detectable (even using a more expanded colour map).

      That implies that the reduced outgoing LWIR caused by additional GHGs is NOT being compensated by extra evaporation and convection. This must mean that some other action is limiting the INCOMING SW radiation. Either that is a coincidental drop in TSI, cosmic rays or other external forcing, or it is negative cloud feedback.

      Until we understand clouds we will never understand climate.

    • commieBob, don’t get somewhat perplexed that there is no equatorial hot spot.

      Every new day brings a new sunshine morning. Following a cold freezing night with fresh winds up to the clouds. At morning due.

  6. CMoB wrote: “CMIP5 models project 4.1 K warming in response to doubled CO2 when, on the basis of officially-published data, they should be projecting only 1.4 K. They are overshooting threefold.”

    CMIP 6???

      • Moderators, please fix: the reference cited by Mr O’Bryan should be CMIP5, not CMIP5.

        [?? CMIP5 is not already CMIP5? .mod]

        • Dear moderators, – Please correct the reference cited by Mr O’Bryan to read “CMIP6”, not “CMIP5”.

  7. There is an easier way to do the calculation, but yes, essentially this is the case. However, you have failed to note that most of this warming occurred during a period not affected by human CO2; when adjusted to remove this, the evidence suggests human-induced warming of less than 0.5 deg C per doubling.

    It’s probably useful to point out that there is no evidence of any cooling influence over that period either (i.e., the solar input increased from 1850-2011).

    For your interest, you can also do this calculation by looking at the level of CO2 energy saturation (85%) and the supposed GH warming of the planet (ca. 10 deg). If 85% CO2 energy saturation represents 10 deg of GH warming (assuming it’s all CO2), then the remaining 15% of available energy represents just 10/85 × 15 = 1.76 degrees of rise for a 100%-saturated CO2 atmosphere. Yes, I know the energy-to-temperature relationship is not linear, but the power law establishes it as less than the linear result, so the linear extrapolation can establish a boundary. There are reasonably around 5 doublings to reach 99% saturation, so we can conclude that the warming per doubling is strictly less than 1.76/5 or 0.35 C per doubling. This is an area where the established total GH warming for the planet contradicts the idea of high sensitivity. For example, 8 doublings from 1 ppm to 256 ppm at 4.1 K per doubling would represent total CO2 atmospheric GH warming of 32 degrees, which is essentially impossible.

    These simple tests constrain AGW sensitivity to well under 1 C per doubling
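The boundary arithmetic above can be sketched in a couple of lines, taking the quoted figures at face value (85% saturation, 10 deg of GH warming, roughly 5 doublings; a sketch, not an endorsement):

```python
# Boundary test: warming left over if the CO2 band is 85% saturated at 10 deg.
saturation = 0.85
gh_warming = 10.0                    # deg, generously attributed all to CO2
remaining = gh_warming / saturation * (1 - saturation)   # 10/85 * 15, ~1.76
per_doubling = remaining / 5         # spread over ~5 doublings to 99% saturation
print(round(remaining, 2), round(per_doubling, 2))       # 1.76 and 0.35
```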

    • Yes, the number of doublings from zero to 256ppm gives more heating than the apparent greenhouse effect, so it must be called out as BS just on that.

      Then there is Mars, which has an atmosphere that is 95% CO2, although at only around 1/160th of the atmospheric pressure. There is still 3.6 times more CO2 in the Martian atmosphere compared with Earth, but why is the greenhouse warming just 0.5 degrees on Mars??? It simply does not add up at all. I think applying the PV=nRT equation would explain the apparent greenhouse effect vastly more accurately. Basic high-school science.

      Looks like the warmists need to go back to school.

    • bobl,
      I am not reading you about doublings.
      The first doubling is 1 molecule to 2. The second doubling is 2 molecules to 4. A doubling here, a doubling there, and sooner or later your doublings start to mean real values.
      What base values are you using for your first doubling, and why?

        • It’s just illustrative that you can look at overall GH heating and energy saturation of the CO2 band and derive an estimate of sensitivity. I used 1 ppm just to illustrate this point, but the point at which the log relation breaks down is lower than 1 ppm (hence, as you point out, the situation is worse). It will take someone better than I to develop this further. There is a point where the log relationship breaks down, at both low concentrations (linearisation) and high (saturation); where this actually occurs needs some research, especially given that the bandwidth of the CO2 lines is related to the temperature of the CO2 (Doppler effect) and not the concentration of the CO2 (another physics error by the environmental “science” grads – wish they would take physics and stats). The CO2 effect is not strictly a log law and does saturate.

        Essentially what I've outlined is a simple “boundary test” that I use to constrain the possible value of sensitivity, i.e. it must be below around 0.4 °C per doubling, otherwise overall planetary greenhouse warming would have to be higher. I don't need more than that to disprove high-sensitivity theories. The boundary tests I outline here show that sensitivities above around 0.4–0.5 °C per doubling are implausible.

      • Geoff Sherrington October 9, 2019 at 4:05 am
        ,
        You’re not reading bobl about doublings:

        The first doubling is 1 molecule to 2. The second doubling is 2 molecules to 4. –>

        The first doubling is 1 photon to 2. The second doubling is 2 photons to 4 photons / energy packs.

        – since e=mc2, no such readings represent the physically real, existing world. Game over.


    • It would be most helpful if Bobl were able to provide references for his intriguing suggestions that the CO2 energy saturation is 85% (and a definition of the term would also help); that 85% represents 10 K warming from CO2; and that there are 5 doublings to reach 99% saturation. I should like to study this question further.

      The approximately logarithmic temperature response to change in CO2 concentration does not commence until 100 ppmv.

      • Hi Christopher,

        Why 100 ppm?
        Did someone use this as a convention?
        Or is that the supposed lower limit before all vegetation dies?
        I am more puzzled than before.
        Regards Geoff S from Melbourne

      • “The approximately logarithmic temperature response to change in CO2 concentration does not commence until 100 ppmv.”

        That is pitiful! I’m disappointed.

        I've previously shown that the ‘approximately logarithmic temperature response to change in CO2 concentration' is absurd for very small concentrations of CO2. I've asked for the lower bound of CO2 concentration at which this logarithmic ‘property' of CO2 begins, and you arbitrarily proclaim ‘100 ppmv' with no scientific analysis. That itself is absurd! And awfully convenient.

        What changes in CO2 behavior between 99 ppmv and 101 ppmv?

        You’re just conveniently attempting to dismiss the clear absurdity of the ‘logarithmic’ claim across all densities of CO2. Shall I show how it is equally absurd for very high concentrations of CO2? Then you’ll need to define the upper bound.

        You've previously seemed to agree with the odd notion that ‘greenhouse gases' make the Earth 33 °C warmer than it would be without them. If 100 ppmv CO2 is the lower bound of its ‘greenhouse gas' property, and we're at 400 ppmv now, then there are only two doublings of CO2 from 100 ppmv to 400 ppmv. Therefore you are claiming a sensitivity of 33/2 = 16.5 °C per doubling.

        Again, I’m disappointed.

        • If Mr Horner is disappointed with official climatology for its decision that the approximately logarithmic CO2 response occurs only from about 100 ppmv onward, he should take that matter up not with me but with official climatology. My approach is to accept official climatology’s methods and data ad argumentum, except where I can prove them to be wrong. That, like it or not, is how Socratic elenchus works.

          Formulae for calculating the warming from a zero concentration are available in IPCC (2001, ch. 6.1).

          • Monckton of Brenchley – Thank you for your response.

            I can’t speak for ‘Mr. Horner’, but I remain disappointed.

            You just deferred to ‘official climatology” in defense of your own claim. Much like Greta and ‘The Science’, it’s always ‘over there’.

            I had thought that “Socratic elenchus” includes questioning an absurd claim.

            What changes in CO2 behavior between 99 ppmv and 101 ppmv? Aren’t you curious?

            What’s the upper bound for “the approximately logarithmic CO2 response “? Is it a conveniently round number as well?

        • Thomas,
          No, no. And Monckton must also be wrong. Again, the problem is being able to intercept energy (at the right wavelength) to drive the warming. It works at low concentrations, but at high concentrations (say 100 ppm plus) the loss of available energy to intercept must distort the log assumption.

          In your case there is nothing to say where the log C/C0 assumption breaks down at the low end, but you can't say 33/2, for two reasons: 1. even the most committed climate scientists don't claim the whole 33 degrees is GHG warming; 2. if the log relationship breaks down at 100 ppm, then below that there is another relationship, and therefore some warming effect (probably more than the log relation would give) occurs below that point. Monckton did NOT say that 100 ppm was a threshold. More than likely, at the low end the relationship increasingly approaches a linear one.

          The question does remain as to what actually happens at low concentrations and where the transition occurs. If we were to assume the log law all the way down, then at our starting point we would have ΔT = k · ln(C/0), and C/0 is non-physical; it cannot be. Between zero and where the log case dominates, the warming is not related to C/C0; it could be related to C − C0, though. Let me say from the beginning that I have no idea what happens at low concentrations, other than that at C0 = 0 there is a pole which cannot be physically realised.
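
          The pole described above is easy to see numerically; a minimal sketch, in which k is an arbitrary illustrative constant, not a fitted climate parameter:

```python
import math

def delta_t_log(c, c0, k=1.0):
    """Warming under a pure log law, dT = k*ln(C/C0); k is an arbitrary
    illustrative constant, not a measured climate parameter."""
    return k * math.log(c / c0)

# Well behaved for one doubling...
print(round(delta_t_log(400.0, 200.0), 3))   # 0.693 (= ln 2)

# ...but the response grows without bound as the reference concentration
# approaches zero: the non-physical pole described above.
for c0 in (10.0, 1.0, 0.1, 0.001):
    print(c0, round(delta_t_log(400.0, c0), 2))
```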

          • bobl – Thank you for your response

            Yes, I introduced a preposterous claim in an effort to showcase how preposterous Monckton of Brenchley’s unqualified claim is. His unqualified claim:

            “The approximately logarithmic temperature response to change in CO2 concentration does not commence until 100 ppmv.”

            is absurd and clearly exposes that this entire spectacle is not based on actual measurements but merely extended theories of a theory of a ‘property’ that can’t be measured.

            He said it, he didn’t qualify it, and he won’t defend it.

        • Thomas Homer October 10, 2019 at 7:19 am

          “The approximately logarithmic temperature response to change in CO2 concentration does not commence until 100 ppmv.”

          Why not 100 ppm? Do you want the Planck constant?

          After all, the detonation of pulverised fertiliser, aka dynamite, doesn't take minutes. Nor milliseconds.

          With starting/ignition, it happens “now”.

      • Chris,
        I hope you get back here

        Gee, now you are stretching the memory; this is something I've known for years. From memory, the 85% comes from the fact that the energy emission of the atmosphere across the CO2 lines is 15% of the peak spectral emission; that is, the bottom of the dip is at 15%. You can find that satellite radiometry information online quite easily. What this says is that if CO2 is treated as an absorber (paint/blanket), as the global warmists do, then 85% of upwelling energy is absorbed into the atmosphere and 15% is radiated.

        If we double the concentration of the absorber, then theoretically we halve the energy transmission, so for the next 5 doublings we get saturations of 92.5%, 96.25%, 98.125%, 99.0625% and 99.53125%.

        The real problem for climate science, though, is that because of saturation effects the incremental energy intercepted by the absorber is decreasing faster than the logarithmic effect of CO2 absorption is increasing. This comes about because of a few science errors by the environment grads trying to model climate without a physics education.

        The log assumption for CO2 implicitly assumes that the supply of energy within the absorption band is unlimited. It is NOT; it is in fact very limited. No available energy, no heating, regardless of the amount of CO2. I think this is a key error. It is exaggerated by the Bode requirement that gain needs an energy source for positive feedback to happen: no energy, no feedback either, just saturation. From what I can see, the climate scientists (environment grads) assume that doubling CO2 will double the energy absorbed (which requires going from 85% absorption to 170% absorption). This violates the law of conservation of energy, i.e. the models can't be constrained to conserve energy within the CO2 absorption band. For their mechanism to work, this constraint needs to be enforced.

        There has been some poor physics sprayed around claiming that the width of the CO2 band will increase as CO2 partial pressure increases, and that this is where the unlimited energy comes from. This was inferred from comparing spectral line widths from Mars and Venus, and the assumption was that the smearing of the CO2 lines from Venus is due to its dense atmosphere. This is wrong: when molecules are travelling very fast toward or away from the observer, the apparent frequency of the emission changes due to the Doppler effect; the velocity of a gas is related to temperature, and so the CO2 lines on Venus are wider than on Mars or Earth because the CO2 is hotter on Venus.

        Finally, there is an implicit assumption that the added CO2 adds to the mass of the atmosphere, meaning the heating effect can increase forever; but when carbon is burnt, CO2 replaces O2; it does not add to atmospheric mass. All things being equal, then, the mass of the atmosphere with respect to man-made emissions is constant. In this case energy absorption must limit, if only because the atmosphere is finite and constant.

        Then you need an estimate of the total GHG warming of the planet above blackbody, exclusive of compression effects. There are some papers on this, and my understanding is that 10 °C of all-cause GHG warming was the consensus. This is all-cause, and CO2 is only a part of that, water being the major part. My boundary test simply makes the assumption that it's all CO2 and still gets low sensitivity; do this properly by working out the CO2 (+ feedbacks) fraction versus the water-only fraction, and the CO2 warming all but disappears.

        Bear in mind that I use this as a simple boundary test to place an upper limit on sensitivity. To be sure that my upper-limit estimate is above what's possible, I make assumptions in favour of AGW, for example that the whole 10 °C is caused by CO2, when clearly it's not.

        The logic here is simple:

        **** If absorbing 85% of the available energy causes a 10 °C rise, then absorbing the other 15% cannot cause more than a 1.76 °C rise, even for a 100% CO2 atmosphere.
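
        The doubling series quoted earlier follows directly from the comment's simplifying assumption that each doubling of the absorber halves the transmitted fraction; a minimal sketch:

```python
# Reproduce the saturation series above: each doubling of the absorber
# halves the transmitted fraction (the commenter's simplifying assumption).
transmitted = 15.0   # percent of CO2-band energy currently transmitted (assumed)
series = []
for _ in range(5):
    transmitted /= 2.0
    series.append(100.0 - transmitted)
print(series)   # [92.5, 96.25, 98.125, 99.0625, 99.53125]
```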

        Happy to help further and Anthony can pass on my email address to you.

  8. Can't find a reference to that suicide. There are so many stories of predictions of climate change leading to more suicides that it's impossible to find one of the real suicides due to the scaremongering. An economist who committed suicide because his scaremongering didn't win him a Nobel Prize was all I could find.

  9. If this assessment is correct, then get it published in a reputable scientific journal. That way you may change some views that count rather than our echo chamber here.

    • In response to Mr Morris, our paper is currently out for peer review at a journal where we hope that we shall either be given a clear explanation of why we are wrong or the opportunity to publish. An editor has been appointed and we shall expect to hear more later this year.

    • Warwick Morris October 8, 2019 at 7:24 pm

      If this assessment is correct, then get it published in a reputable scientific journal. That way you may change some views that count rather than our echo chamber here.
      _________________________________________

      What's wrong with echo chambers?

      When the Vikings, after a dangerous Atlantic crossing, rowed through the mist, they knew they were near an island. An island they wouldn't find in the fog. So they called out to Odin in all four heavenly directions.

      From wherever the echo answered “Odin”, to there they rowed.

      Try that once, Warwick.

  10. “, the error in the models is likely to be in their treatment of the water vapor feedback.”

    The error is in the mishandling of clouds. The system will balance for any amount of clouds, which makes determining the proper amount of clouds problematic. The more clouds there are, the more energy is reflected and the less energy is emitted into space, as clouds emit at a colder temperature than the surface below, and the two effects nearly exactly offset. The average cloud coverage is relatively constant, thus there must be something other than the radiant balance driving their behavior.

    If we plot the monthly average amount of the surface covered by clouds vs. the average surface emissions across 2.5-degree slices of latitude, a non-linear behavior reminiscent of a tunnel diode emerges.

    http://www.palisad.com/co2/sens/se/ca.png

    The small dots are the monthly averages while the larger green and blue dots are the averages for the N and S hemispheres respectively. The average surface emissions are calculated by applying the Stefan-Boltzmann Law to the reported average temperatures.
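
    The Stefan-Boltzmann conversion described here is straightforward; a minimal sketch, assuming unit emissivity:

```python
# Stefan-Boltzmann conversion from a reported average temperature to
# equivalent black-body surface emissions (unit emissivity assumed).
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def bb_emissions(t_kelvin):
    return SIGMA * t_kelvin ** 4

print(round(bb_emissions(288.0)))   # 390 W/m^2 at the ~288 K global mean
```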

    By modulating the rate of energy arriving at and leaving the planet, the amount of clouds primarily affects the relationship between the surface emissions and the planet's emissions at TOA. That being said, we should expect to see the same shape in the relationship between the surface emissions and the emissions at TOA. What we observe is quite different.

    http://www.palisad.com/co2/sens/se/po.png

    The wiggle near 300 K seems present in the monthly dots, but largely absent in the long-term averages. Furthermore, the long-term averages are the same in both hemispheres, even though the cloud behavior per hemisphere is quite different. Further examination shows that the slope is steeper between 350 W/m^2 and 400 W/m^2, which is roughly centered on the planet's average, approaching 1 W/m^2 of surface emissions per W/m^2 emitted at TOA. The linear average across the entire range is about 1.62 W/m^2 of surface emissions per W/m^2 emitted at TOA.

    This MEASURED behavior can be explained in 1 of 2 ways. First is that a really bizarre, hemisphere specific relationship between clouds and surface emissions coincidentally results in a mostly linear average relationship between surface emissions and emissions at TOA. Second is that the goal of the climate system is a mostly linear average relationship between surface emissions and planet emissions and clouds adapt to achieve this goal from pole to pole. There can be no question which one makes sense and no climate model comes close to manifesting the observed behavior by either method.

    • In response to “co2isnotevil”, Spencer & Braswell have published an interesting paper demonstrating that the cloud feedback is net-negative, not – as is currently imagined – net-positive. However, the cloud feedback is too small to make much difference to overall feedback response. Since that and all other feedbacks self-cancel except the water-vapor feedback, it is the water-vapor feedback that is likely to be the largest contributor to overstatement of equilibrium sensitivities.

      • “Spencer & Braswell have published an interesting paper demonstrating that the cloud feedback is net-negative”
        The concept of ‘feedback', as developed by Bode to analyze linear feedback amplifiers, doesn't apply to the climate system, especially with regard to the seriously flawed idea that feedback ‘amplifies' the sensitivity. What seems to be misinterpreted as cloud feedback per Bode is neither positive nor negative, but is the causal reaction of clouds acting as the free variable driving the system towards its goal, which can easily be confused with negative feedback. In fact there are two goals. One is the surface temperature that results in a steady-state equilibrium, which can be achieved for any amount of clouds, and the other is the aforementioned constant average ratio, from pole to pole, between the SB emissions of the surface and the emissions at TOA, which determines how much cloud is required.

        Much like a thermostat that keeps the temperature constant, the clouds keep the ratio between the SB emissions of the surface and the emissions at TOA constant, and the surface temperature follows. This average ratio is easily measured at about 1.62 W/m^2 of surface emissions per W/m^2 of solar forcing, is demonstrably constant from pole to pole, and applies equally to each of the 240 W/m^2 of solar forcing. That this goal happens to be within 0.2% of the golden mean (1.61803…) may just be a coincidence, but the fact that this ratio emerges from a chaotically self-organized system, where clouds are what's being organized, indicates something quite different.

        Note that the analysis used to describe a feedback amplifier is very different from that used to describe a feedback control system, although linear feedback amplifiers are often found as components of control systems. It seems that climate science is incorrectly conflating feedback amplifiers with feedback control systems.

  11. “First, we need the warming ΔR1 from 1850-2011. The answer, from HadCRUT4, the only global dataset that covers the whole period, is 0.75 K – at less than 0.5 K century–1 (Fig. 2).”

    Wrong.

    1. HadCrut is not global
    2. BerkeleyEarth and Cowtan and Way are.

    • Mosh pup,
      and yet the observed Charney Sensitivity calculated from those other datasets is still far below the CMIP5 and CMIP6 pseudo-science garbage.

      And the modellers do what in response to 30 years of failure?
      They model ever onwards, like good cargo-cultist islanders seeking rent, selling witch-doctor science to pay their salaries.

      No matter how you slice it, the IPCC CMIP ensemble efforts for 30+ years now are pure Cargo Cult Science writ large.

    • drive-by Mosh is wrong once again when he said
      “1. HadCrut is not global”.

      from the met office’s own website:
      HadCRUT4 is a gridded dataset of global historical surface temperature anomalies relative to a 1961-1990 reference period. Data are available for each month since January 1850, on a 5 degree grid.

    • Mr. Mosher; I’m curious why you say that. Here is how they have described their dataset. Are you saying this is not credible or something else? “Brief description of the data; The gridded data are a blend of the CRUTEM4 land-surface air temperature dataset and the HadSST3 sea-surface temperature (SST) dataset. The dataset is presented as an ensemble of 100 dataset realisations that sample the distribution of uncertainty in the global temperature record given current understanding of non-climatic factors affecting near-surface temperature observations. This ensemble approach allows characterisation of spatially and temporally correlated uncertainty structure in the gridded data, for example arising from uncertainties in methods used to account for changes in SST measurement practices, homogenisation of land station records and the potential impacts of urbanisation.”

  12. Chris's 1.4 K midrange (or high-end) sensitivity estimate is still 50% too high, because it assumes that all warming over the reference period was caused by CO2.

    CO2 provides no explanation at all for the 300 years of warming before 1975. The IPCC itself proposes that only since 1950 has there been enough anthropogenic CO2 increase to have a significant effect on global temperature, and temperatures were flat or cooling from 1950 to 1975.

    So what caused the warming from the bottom of the Little Ice Age to the mid-20th century? The leading theory as of 1975 was that climate is somehow driven by solar activity, a theory that also explains the modicum of late-20th century warming and the leveling off of temperatures in the 21st century.

    Solar being the better-evidenced theory, we probably ought to attribute more than half of the warming since 1850 (or from 1700) to solar-magnetic variation. That would reduce the amount of warming attributable to CO2 by more than half, which would reduce the Charney sensitivity by more than half.

    Of course Chris, and especially Willie, know this better than anyone. Just reminding everyone else that Lord Monckton's proposed conservative sensitivity estimate is actually hyper-conservative.

    • Mr Rawls is of course correct: I should have made it explicit in the head posting that, in line with only 0.3% of published papers in the reviewed journals, I am assuming that all recent warming was manmade. To the extent that it was not manmade, the Charney sensitivity of 1.4 K should indeed be diminished.

    • Yes, there are latent heat feedbacks. Of the 342 average watts/square meter falling on Earth’s surface, 30% is reflected away by clouds. Those clouds are part of a negative feedback from warming.

      Take away all clouds, leaving only polar icecaps, deserts, etc. affecting albedo, and I've read that the amount of reflected sunlight would drop to 15%.

      85% of 342 watts/square meter is about 290 watts per square meter. From Trenberth’s “Earth’s Global Energy Budget”,

      http://www.cgd.ucar.edu/staff/trenbert/trenberth.papers/BAMSmarTrenberth.pdf

      thanks to the “greenhouse effect”, Earth’s surface receives about 493 watts/meter squared on average.

      Using round numbers, the greenhouse effect has increased the original 290 watts per square meter coming in to 490 watts per square meter. Of that 200-watt-per-square-meter increase, HALF goes into latent heat, HALF into sensible temperature. Presuming the same continues for future increases, rather than 1 °C to 1.5 °C of warming from doubled CO2, actual warming should be 0.5 °C to 0.75 °C, with the balance of the energy going into increased convection, evaporation and rainfall. And THAT'S assuming ALL of the warming was due to CO2, which it wasn't.
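
      The arithmetic above can be laid out explicitly; a minimal sketch, in which the 15% no-cloud albedo and the even latent/sensible split are the commenter's assumptions:

```python
# The comment's round-number arithmetic; the 15% no-cloud albedo and the
# even latent/sensible split are the commenter's assumptions.
solar_in = 342.0                      # mean insolation, W/m^2
no_cloud_absorbed = solar_in * 0.85   # 15% reflected without clouds
print(round(no_cloud_absorbed))       # 291 ("about 290" above)

greenhouse_gain = 490.0 - 290.0       # round numbers from the comment, W/m^2
sensible_fraction = 0.5               # half to sensible heat, half to latent

# Halving the sensible share roughly halves the temperature response:
print(1.0 * sensible_fraction, 1.5 * sensible_fraction)   # 0.5 0.75
```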

      • Alan,

        Since the Moon and Earth are made of the same stuff, the albedo of Earth would be the same as the Moon's (0.11) if the Earth had no GHGs or clouds. The non-reflected energy would be 304 W/m^2, corresponding to an average surface temperature of 271 K. The effects of GHGs and clouds increase this to about 288 K, or 390 W/m^2 of surface emissions, but at the same time increase reflectivity, raising the albedo and reducing the incident power.

        The effects of GHGs and clouds may increase 255 K to 288 K, but this increase is accompanied by a decrease in the no-GHG/no-cloud temperature from 271 K to 255 K. In other words, the 33 °C increase from GHGs and clouds cannot be decoupled from the 16 °C decrease arising from increased albedo. The total temperature increase from clouds and GHGs is only 33 − 16 = 17 °C, and not the 33 °C often claimed in an effort to make the effects of GHGs seem larger than they actually are.
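
        The temperatures quoted can be checked against the standard equilibrium-temperature formula; a minimal sketch, in which using the lunar albedo as a stand-in for a GHG- and cloud-free Earth is the commenter's assumption, not an established result:

```python
# Equilibrium temperature of a rotating body, T = (S*(1-a)/(4*sigma))**0.25.
# Using the lunar albedo (0.11) as a stand-in for a GHG- and cloud-free
# Earth is the commenter's assumption, not an established result.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1366.0        # solar constant, W/m^2

def t_eq(albedo):
    absorbed = S0 * (1.0 - albedo) / 4.0   # mean absorbed flux, W/m^2
    return (absorbed / SIGMA) ** 0.25

print(round(t_eq(0.11)))   # 271 K: the no-GHG, no-cloud case above
print(round(t_eq(0.30)))   # 255 K: with today's higher albedo
```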

        • co2isnotevil
          You said, “Since the Moon and Earth are made of the same stuff, the albedo of Earth would be the same as the Moon (0.11) if the Earth had no GHG’s or clouds.”
          Strictly speaking, that is not true. The presence of water and life on Earth has produced lots of minerals that cannot be found on the moon. Most notably, the two have conspired to produce dark shales and light limestones, which are abundant on Earth and non-existent on the moon.

          • Clyde,

            Yes, but the issue was the albedo in the absence of GHG’s and clouds, which means that there’s no water and as a consequence, no life either. The salient point I was making is that you can’t separate GHG effects dominated by water vapor from the co-dependent effects of liquid or solid water, clouds and even life, as is done when GHG effects are claimed to increase the surface temperature by 33C.

            If it were the case that there had been liquid water for billions of years and it then disappeared, the albedo might be more like that of Mars, which is about 0.15, although some of that is contributed by the Martian ice caps and thin clouds.

  13. Criticizing ECS numbers is useless. The IPCC itself wrote in AR5, p. 1110: “ECS determines the eventual warming in response to stabilization of atmospheric composition on multi-century time scales, while TCR determines the warming expected at a given time following any steady increase in forcing over a 50- to 100-year time scale.” And further, on page 1112, the IPCC states that “TCR is a more informative indicator of future climate than ECS”. I fully agree with this opinion. It is enough to analyze temperature trends during this century.

    And by the way, the positive water feedback cannot be found in the humidity measurements. What is the TCR according to the IPCC if there is no water feedback?
    In AR4, The Physical Science Basis, p. 630: “The diagnosis of global radiative feedbacks allows better understanding of the spread of equilibrium climate sensitivity estimates among current GCMs. In the idealised situation that the climate response to a doubling of atmospheric CO2 consisted of a uniform temperature change only, with no feedbacks operating (but allowing for the enhanced radiative cooling resulting from the temperature increase), the global warming from GCMs would be around 1.2°C (Hansen et al., 1984; Bony et al., 2006).”

    • Mr Ollila is correct that IPCC thinks reference sensitivity to doubled CO2 (i.e., sensitivity before accounting for feedback) is 1.2 K. However, the CMIP5 models' results imply a mean reference sensitivity of just 1.02 K, as stated in the head posting. I don't have the equivalent value for the CMIP6 models yet.

      But Mr Ollila is wrong to state that “criticizing equilibrium climate sensitivity numbers is useless”. As the head posting shows, the transient warming from 1850-2011 was 0.75 K, but one can convert that to an equilibrium warming by the method shown in the head posting, taking account of the measured radiative imbalance to 2010.

      If Mr Ollila disagrees with the use of equilibrium sensitivity to doubled CO2 as the standard metric for global warming studies, he should direct his concerns not to me but to the IPCC.

      • “If Mr Ollila disagrees with the use of equilibrium sensitivity to doubled CO2 as the standard metric for global warming studies, he should direct his concerns not to me but to the IPCC.”

        I did this as part of my recent expert review of AR6, but no doubt they will ignore the problem. The only sensitivity metric that has any physical justification is in the energy domain and expressed as W/m^2 of EQUIVALENT BB surface emission per W/m^2 of forcing. This has an average value of 1.62 W/m^2 of surface emissions per W/m^2 of forcing and is demonstrably independent of the surface temperature, total solar forcing, latitude or ratio of land to ocean. In other words, this relationship is strictly linear.

        A linear relationship between W/m^2 of forcing and W/m^2 of emissions unambiguously exists as 1.62 W/m^2 of surface emissions per W/m^2 of forcing, so there can be no legitimate reason to convert W/m^2 of emissions into a temperature and assert that the linear requirement for applying feedback analysis can be met by considering the relationship between W/m^2 of forcing and temperature as approximately linear over a narrow range of T around the average. Not only is this improper, the IMPLICIT source of Joules powering the gain is still missing, so even as an incorrect ‘fix’, approximate linearity around the mean is still insufficient for applying the analysis being used.

        IMHO, the improper linearization of the relationship between temperature and W/m^2 is the keystone of the broken science used to support climate alarmism. Take this away and climate alarmism goes the way of a flat Earth.
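
        The 1.62 figure discussed above is essentially the ratio of mean surface emissions to mean absorbed solar forcing, and the thread's golden-ratio comparison can be checked directly; a minimal sketch, using standard round numbers (a numerical aside, not a physical argument):

```python
import math

# The ~1.62 figure is essentially the ratio of mean surface emissions
# (~390 W/m^2 at ~288 K) to mean absorbed solar forcing (~240 W/m^2);
# both are standard round numbers.
print(round(390.0 / 240.0, 3))   # 1.625

# The thread's golden-ratio comparison (a numerical coincidence, not a
# physical argument): the quoted 1.62 is within ~0.2% of phi.
phi = (1.0 + math.sqrt(5.0)) / 2.0
print(round(phi, 5))                              # 1.61803
print(round(abs(1.62 - phi) / phi * 100.0, 2))    # 0.12 (percent)
```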

  14. –Policymakers, therefore, should assume a Charney sensitivity not of 3 or 4 K but of less than 1.4 K. Since that warming is small, slow and net-beneficial, and since climatology has never asked, let alone answered, the key question what is the ideal global mean surface temperature,—

    This is something always asked (though it's not really supposed to be answered; a rhetorical question): what is the ideal global mean surface temperature?
    I think the decision processes should take into account our natural global climate.
    Our “natural global climate” is an icebox climate, or an Ice Age.
    And we are in this natural global climate of an Ice Age due to global tectonic activity.
    Or simply, as a result of natural geological processes which have occurred over millions of years.

    I regard the “Greenhouse Effect Theory” as pseudo-science and religious ideology, closely resembling a State religion, one that even includes followers who want to imprison people who do not agree with the State's religious tenets. No one wants to lock up people who don't agree about General Relativity or any other concept related to science. But there are people, who seem to want to be taken seriously, who want to lock up “climate deniers”. So obviously there are religious fanatics, and the State is part of indoctrinating them with “wild climate science” which has nothing to do with science.
    Anyhow, this religious faith started before anyone knew what caused glacial and interglacial periods, or much about plate tectonics, or that we are currently in an Ice Age due to global geological changes.

    So anyway, if you don't allow for these natural geological changes, you might think that Earth's average global temperature should be around 20 °C or more, because Earth's normal/typical state is not being in an Ice Age.
    But with our present natural arrangement of land and oceans, and the results of geological processes, it is “normal” to be in an Ice Age. What is also natural at this time is a low level of CO2, and during our Ice Age we come close to dangerously low levels of CO2 (about 180 ppm).
    But just because it is natural to have dangerously low levels of CO2 does not mean we should have such low levels of CO2. And it's normal/natural in our Ice Age to spend a long time in glacial periods. Likewise, we don't need to live in a glacial period just because it's “natural” to do so.
    Instead, in terms of the ideal global mean surface temperature, the temperature should not be so low as to risk descending into a glacial period. And I think global average temperatures should be warmer than they are now. But the question is how much warmer should it be?
    Now, I don't think it should be about air temperature; instead, the metric I think is important is the average volume temperature of the ocean. And currently this average temperature is about 3.5 °C.

    And during our Ice Age this ocean temperature has been in the range of 1 °C to 5 °C.
    I think it could be a problem if our average volume temperature of the ocean were 3 °C or cooler.
    And I think it could be a problem if the average ocean were 4.5 °C or warmer.
    So I think the optimum is somewhere between 3.5 and 4.0 °C, and I tend to think 4 °C is better than 3.5 °C.
    And I don't think we will get to 4 °C within 100 years.
    And in terms of climate change, I think we should green the Sahara Desert.
    And I think that if our ocean were 4 °C, our Sahara Desert would be greening.
    But if I am wrong that a 4 °C ocean will green the Sahara Desert, I think we should make it green anyway.

    In terms of global air temperature, what would a 4 °C ocean do? I think it would make the polar sea ice-free during the summer (every year). I think it would increase Canada's and Russia's average temperatures by about 3 °C, and their current average temperature is about −4 °C. So some years it could be above 0 °C and others less than −1 °C. And more trees growing in Canada and Russia will also have a slight warming effect. And higher CO2 levels will also have a warming effect upon these northern regions. Adding those in, they could have averages well above 0 °C, or a 4 °C increase in their average temperatures.
    China's average temperature is currently above 7.5 °C:
    http://berkeleyearth.lbl.gov/regions/China
    and a 4 °C ocean could cause it to have an average temperature of about 10 °C.
    Europe is about 9 °C, and could increase to 13 °C.
    The US is about 12 °C, and could increase to 14 °C.
    And there would not be much change in the tropics; the average ocean temperature has little effect on tropical temperature, for the tropics have tens of meters of warm ocean water at the surface, though a thousand meters deeper it could be around 2 °C.
    In other words, increasing the average temperature of the entire ocean is mostly about warming the polar regions. And it is not clear to me that a 4 °C ocean would have much effect upon Antarctic polar sea ice, though winter polar sea-ice growth would decrease by some amount (it fluctuates wildly); also, for whatever reason, our current warming is having little effect upon Antarctica, and I roughly expect that to continue. But a 4 °C ocean will have some warming effect, because adding 0.5 °C to our ocean is a huge global effect. And it is unlikely to happen within 100 years.
    And it would cost a lot of money to make this occur faster.

    • In response to GBaikie, there are many opinions on what the ideal global mean surface temperature should be. Me, I’d like it to be tropical everywhere during the day and temperate at night.

      The point is, however, that the question of what the ideal temperature would be is one that climatology has not even asked, let alone answered. The fact that the question has not been asked tells us that the debate about global warming is political, not scientific. The fact that it has not been answered tells us that there is no basis for any suggestion that warmer worldwide weather will entail a net global welfare loss.

      • “The fact that it has not been answered tells us that there is no basis for any suggestion that warmer worldwide weather will entail a net global welfare loss.”

        Far from it. As Dick Lindzen put it (paraphrasing), “a basic book on meteorology will tell you that in a warmer climate, extra tropical storminess would be reduced” – NOT increased as the AGW pseudo-science would have us believe.

        Colder climate = colder poles (that “polar amplification” they keep trying to use as a “melting ice cap” scare point) and larger equator-to-pole temperature differential, which means more atmospheric turbulence than in a warmer climate. The AGW pseudo-science is anti-logic.

      • “In response to GBaikie, there are many opinions on what the ideal global mean surface temperature should be. Me, I’d like it to be tropical everywhere during the day and temperate at night.”

        Ok, but you aren’t going to get it, because we are living in an icebox global climate. And we are in a global Ice Age, due to the geological features of our present world {i.e., plate-tectonic activity}.

        But in our icebox climate, our entire ocean’s average temperature can become about 5 C. Or: our icebox climate in the last couple of million years has had oceans which have become 5 C.
        Or: we have “proof” that our ocean can become 5 C.
        BUT there seems to be something about our icebox climate which “stops” the ocean from becoming much colder than 1 C or much warmer than 5 C.
        And I don’t think a 4 C ocean “triggers” “something” to cause the world to become colder.
        If we knew how our climate actually works, then perhaps we could have an ocean with an average temperature nearer to 5 C.
        So we currently have an ocean which is about 3.5 C, and it’s been around this temperature for about 4000 years {or more}. And a 3.5 C ocean doesn’t stop us from having such things as the Little Ice Age {during which time the ocean cooled slightly {barely measurably} and sea level fell {more easily measured}}.
        And currently our ocean might be 0.1 C warmer than 100 years ago, and compared to the lowest ocean temperature in the LIA, perhaps 0.2 C warmer. But in the same sense that we say our global average temperature is “about” 15 C, our ocean is about 3.5 C. Or: neither is very accurate.
        So I say that over the last 4000 years the ocean has been about 3.5 C.
        But if you “want” to state that our ocean is exactly 3.5 C, then over the last 4000 years perhaps our ocean has been 3.5 +/- 0.3 C.

        But I am sticking with “our ocean is about 3.5 C” until such time as it’s been more accurately measured. Just as I am sticking with saying Earth’s average surface air temperature is about 15 C.
        Though sometimes I will mention that the Southern Hemisphere is about 1 C cooler than the Northern Hemisphere {and this has been known for more than a hundred years, and has been argued about for over a hundred as to why this is the case}.

        Anyhow, if the ocean became 4 C within a couple of centuries, you would get a world which is more what you want, more of “tropical everywhere during the day and temperate at night”, and that is roughly what happens when you get any global warming. And it decreases the amount of desert in the world. Why do people want so much land to be desert?
        I want some land as desert, for the sake of variety, but even with a lot more of the world being “tropical everywhere during the day and temperate at night” you will still have huge regions which are desert. Because we are in an Ice Age, and there is nothing practical that can be done to change this reality {{other than “localized” terraforming, as in: LA was once a desert, and we made dams and added water to the place; or: California’s best farming region was once {and still is} a dry desert region {the San Joaquin Valley}}}.

        So, if we understand and can manage it, we could have an ocean of 5 C, but an ocean of 4 C should only raise sea level by about 1 foot. Plus, it is not going to be possible to get a 4 C ocean within a century. But if we terraform the Sahara Desert {we could do this within decades}, this will cause a significant warming effect {which has nothing to do with ocean average temperature} and will make the lands across the pond more tropical {not tropical, but mostly warmer nights}. Maybe even “more tropical” than a 4 C ocean would.

  15. May I offer you a very simple estimate for climate sensitivity? Trenberth & Kiehl 1997 estimate the CO₂ greenhouse effect at 32W/m² for a cloudless sky, and 24W/m² under clouds. Given that cloud cover is currently ca. 65%, that gives us a total GHECO₂ of 26.8W/m². Now, we know that GHECO₂ is logarithmic, therefore it can be described as GHECO₂ = C*ln(pCO₂), therefore C = 26.8/ln(410) = 4.45W/m². This gives us the radiative forcing for CO₂ doubling: 4.45W/m²*ln(2) = 3W/m². Taking the derivative of the Stefan-Boltzmann equation at the earth’s surface temperature (who cares what happens at 17,000 feet!) gives us 5.4W/m²K. Therefore ECS = 3/5.4 = 0.57K. And the entire anthropogenic warming is 4.45*ln(410/280) / 5.4 = 0.3K. Nothing to worry about.
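    The arithmetic above can be checked in a few lines. This is merely a reproduction of the commenter’s own assumptions (the Trenberth & Kiehl figures and a Planck response evaluated at the surface), not an endorsement of the method:

```python
import math

# Reproduction of the back-of-envelope estimate above; all inputs are
# the commenter's assumptions, not established values.
ghe_clear  = 32.0   # W/m^2, CO2 greenhouse effect, cloudless sky
ghe_cloudy = 24.0   # W/m^2, CO2 greenhouse effect, under cloud
cloud_frac = 0.65   # assumed current cloud fraction

ghe_co2 = (1 - cloud_frac) * ghe_clear + cloud_frac * ghe_cloudy  # 26.8 W/m^2

# Logarithmic fit GHE = C * ln(pCO2), anchored at 410 ppm:
C = ghe_co2 / math.log(410)      # ~4.45 W/m^2
forcing_2x = C * math.log(2)     # ~3.1 W/m^2 per doubling

# Planck response 4*sigma*T^3, taken at the surface temperature:
SIGMA = 5.670374419e-8           # Stefan-Boltzmann constant, W m^-2 K^-4
planck = 4 * SIGMA * 288.0**3    # ~5.42 W/m^2 per K

ecs = forcing_2x / planck                            # ~0.57 K
warming_to_date = C * math.log(410 / 280) / planck   # ~0.31 K
print(round(ecs, 2), round(warming_to_date, 2))
```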

    Seriously, I find attempts to derive ECS from the (flawed) temperature series completely stupid, because we don’t know the other forcings, which can be negative or positive, and are certainly greater than the CO₂ influence.

    PS I also find Trenberth’s number a bit high, because he attributes most of the forcing from the overlap with the water-vapour bands to CO₂. He also assumes earth’s emissivity to be equal to 1. Also, 280ppm is probably not the correct baseline, since some of the increase in CO₂ is attributable to the higher ocean temperature. Anyway, that doesn’t matter much. Given the method above, ECS will always stay between 0.45 and 0.65K.

      • Mr Van Doren makes the mistake I made when first studying global warming in 2006: he assumes that one can derive the Planck sensitivity parameter using surface rather than incoming radiative flux density. As Professor Lindzen was kind enough to point out to me when I first made that mistake, the emission temperature would indeed apply at the surface in the absence of greenhouse gases, but rises through the atmosphere as greenhouse gases are added to the mix. Therefore, official climatology’s approach, with which I have not argued in the head posting, is to derive the Planck sensitivity parameter as the ratio of surface temperature to four times the incoming solar irradiance net of albedo at the top of the atmosphere.

        • This has nothing to do with albedo, incoming solar flux or anything else. This is a question of basic thermodynamics – to raise an object’s equilibrium temperature from 288K to 289K you have to apply 5.4W/m². Period.
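          The 5.4W/m² figure itself is straightforward to verify from the Stefan-Boltzmann law, whatever one makes of the dispute over where to apply it:

```python
# Quick check of the 5.4 W/m^2 figure: the extra blackbody emission
# needed to hold an equilibrium temperature of 289 K rather than 288 K.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

delta_flux = SIGMA * (289**4 - 288**4)
print(round(delta_flux, 2))  # ~5.45
```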

        • If Mr Van Doren wishes to disagree with Professor Lindzen, he should take up the matter not with me but with the Professor, who explains in his papers how a change in net (down minus up) radiative flux density at the characteristic-emission altitude changes the temperature at that altitude and, therefore, through the lapse rate, all the way down through the atmosphere, whereupon, in response to warming, the altitude at which the emission temperature obtains rises.

          For this reason, Schlesinger (1985) showed that one could obtain a reasonable approximation to the Planck parameter by the method shown in the head posting.

          • Lord Monckton, where do you see the 100W/m² difference between the theoretical GHE-less incoming solar energy and the emitted energy? At the surface. Take all GHGs away and you will get 299W/m² absorbed by the Earth’s surface. And according to Trenberth 398W/m² are emitted – 99W/m² more, which is the GH effect. At the surface!! You won’t see the 100W/m² difference at 5km height.

    • My research study, based on a reproduction of the Myhre et al. formula, gave a CS value of 0.6 degrees, and present warming by CO2 of about 0.25 degrees. Surprisingly, the same numbers.
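      The study’s exact inputs are not given here, but as a rough cross-check one can combine the standard Myhre et al. (1998) simplified CO₂ forcing expression, ΔF = 5.35 ln(C/C₀), with the same surface Planck response used in comment 15 above; the concentrations assumed below (410 ppm now, 280 ppm baseline) are illustrative:

```python
import math

# Myhre et al. (1998) simplified CO2 forcing, combined with a
# no-feedback Planck response evaluated at the surface (the parent
# thread's assumption). Concentrations are illustrative.
planck = 4 * 5.670374419e-8 * 288.0**3                 # ~5.42 W/m^2 per K

cs_doubling = 5.35 * math.log(2) / planck              # ~0.68 K
warming_to_date = 5.35 * math.log(410 / 280) / planck  # ~0.38 K
print(round(cs_doubling, 2), round(warming_to_date, 2))
```

      These come out slightly above the 0.6 K and 0.25 K quoted, presumably because the study used somewhat different inputs.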

  16. This seems a good place to ask a question which has niggled me for some time. Despite all these erudite articles, it is very basic and simple.

    As I understand it, since the end of the last glaciation the level of atmospheric CO2 gradually rose until the mid 19th century, when a more rapid rise began which has accelerated since the mid 20th century. During those 9-10k years the global temperature rose and fell but was generally warmer than today. So my question is: what was the climate sensitivity during the Minoan, Roman and other warm periods, and what was it at the end of these warm periods, for example when the LIA started? Is there any research on that, and why is it different today?

    • Dear Mr Ben Vorlich

      Thank you for that very interesting question
      I did not go back that far, but even just across the range of global mean temperature reconstructions (HadCRUT, GISTEMP, Berkeley Earth) a large range of ECS values can be found {ECS 7}.

      Please see Figure 6 in this post
      https://tambonthongchai.com/2019/01/22/empirical-sensitivity/

      Post Script:
      It appears that the Holocene warming and cooling trends are chaotic and cyclical at millennial and centennial time scales. Please see
      https://tambonthongchai.com/2019/06/11/chaoticholocene/

    • Ben Vorlich,

      Your understanding is incorrect. Atmospheric CO2 increased during deglaciation until 10,000 BP, according to Monnin et al. 2004, then decreased until 7,000 BP, when it started to increase again until 600 years ago, when it decreased during the Little Ice Age.
      https://i.imgur.com/Ffs00Le.png

      The modeling of climate change during the Holocene leads to a near constant warming driven by changes in GHGs, while temperature proxy records indicate a near constant cooling in what has been termed the Holocene temperature conundrum.

      • Javier

        In that case it’s worse than I thought, although a 7,000-year mismatch is still pretty serious. Michael Mann needs to start work on getting rid of the earlier warming ASAP.

  17. https://wikispooks.com/wiki/Document:Some_Big_Lies_of_Science#endnote_8

    Global Warming as a Threat to Humankind
    In 2005 and 2006, several years before the November 2009 Climategate scandal burst the media bubble that buoyed public opinion towards acceptance of carbon credits, cap and trade, and the associated trillion dollar finance bonanza that may still come to pass, I exposed the global warming cooptation scam in an essay that Alexander Cockburn writing in The Nation called “one of the best essays on greenhouse myth-making from a left perspective” [4][5][6].

    “The majority of politicians, on the evidence available to us, are interested not in truth but in power and in the maintenance of that power. To maintain that power it is essential that people remain in ignorance, that they live in ignorance of the truth, even the truth of their own lives. What surrounds us therefore is a vast tapestry of lies, upon which we feed.” – Harold Pinter, Nobel Prize Lecture (Literature), 2005

    https://longhairedmusings.wordpress.com/2019/10/09/through-the-needles-eye-the-future-of-global-competition-author-gopi-kumar-bulusu-grubstreetjournal/comment-page-1/#comment-5870
    Through The Needle’s Eye The Future of Global Competition Author Gopi Kumar Bulusu #GrubStreetJournal

  18. Incident sunlight contains a lot of IR:

    https://images.app.goo.gl/ZwburEGRX3VoKvke8

    It is sometimes said or implied that sunlight is shortwave visible light only with no IR, and IR is only present in radiation out to space from earth. But this is not true. Sunlight has a lot of IR. That’s why it warms your face.

    Therefore CO2 will backradiate out to space some of the incident IR in sunlight. (This phenomenon does not only occur at the earth’s solid surface.) So increasing CO2 in the atmosphere will actually reduce the incident solar energy at the top of atmosphere due to this back-radiation.

    The heat energy transferred from earth’s surface upward into the atmosphere, within the troposphere at least, is transferred primarily by convection, not radiation. Warm land surface warms the air, which rises, etc., and we get clouds and weather. The actual flux of radiation including IR upward at the surface is much less than the incident downward sunlight, since most vertical heat transport is by convection, not radiation. Upward and downward radiated heat would only balance on the moon or another space body with no atmosphere – and especially no water in that atmosphere.

    (Heat export to space for overall global heat balance at the emission height is of course by radiation. But transport of heat to that height, upward from the surface is by both radiation and convection, mostly the latter.)

    Here’s the temperature profile of the atmosphere:

    http://apollo.lsc.vsc.edu/classes/met130/notes/chapter1/graphics/vert_temp.gif

    In the troposphere, temperature declines with height as higher air layers receive less and less convective warming from below. Here, radiation will play a much lesser role especially in the presence of water vapour with its large heat capacity. Flow of water heat in the atmosphere is predominantly by convection not radiation. And water vapour dominates the heat budget in the troposphere.

    Above the troposphere temperature zig-zags up through the stratosphere, mesosphere and ionosphere. Air is now much less dense with no water present. So the thermal effect of convection declines and direct solar irradiation predominates. However the picture is complicated by local solar heating phenomena restricted to specific layers: radiative heating of ozone at the top of the stratosphere and by cosmic ray ionisations occurring in the (thus named) uppermost ionosphere. Thus the temperature zig-zags.

    (As the stratosphere is heated from above by the sun, CO2 back radiation rejects some of this radiative solar heat energy. Therefore increasing CO2 reduces the solar warming of the stratosphere, causing the stratospheric cooling that is observed.)

    Some downward convective movement of heat in the atmosphere also occurs in the troposphere. This is the only layer that is dense enough for convection to be significant. Thus some downward convective heat transfer will occur from the upper sunlight-warmed troposphere. If downward and upward thermal convection equalled each other then a uniform temperature would pertain. But this is not the case. Downward convective heat movement does not balance upward because of the density gradient. Warmer air heats neighbouring cooler air more effectively when the warm air is more dense – it has disproportionately more energy. Thus on balance convective heat movement upward predominates over downward, and as a consequence the troposphere cools – rather than warms – with height. And density, dare one say it, is caused by gravity. If one is still allowed to believe in that g-word.

    It is at the bottom of the troposphere, at the earth’s surface, where relevant weather and climate occur. Here the radiation fluxes are not in balance (as AGW erroneously implies): the flux downward from the sun is much stronger than the flux upward from the land surface. Remember, energy from the sun arrives by radiation only (with much less convection, due to the density gradient), while upward heat transport in the troposphere is mostly convective. If it were otherwise, then we would warm our faces by stooping and facing the ground, rather than looking up toward the sun. (You might warm your face from car-park asphalt, but not from the surface of the sea.) The upward and downward fluxes of heat are of course in approximate equilibrium, but the majority of the upward heat transport is by convection.

    CO2 cuts out some IR photons out of a mixed electromagnetic wavetrain. It cuts out more IR from incident sunlight at the upper atmosphere (mainly stratosphere) by back-radiation. It cuts out less IR from radiation from earth’s surface since this radiation flux is much weaker than incident sunlight. Therefore overall CO2 will exert a cooling effect on climate since it effectively reduces TSI, the heat energy incident on earth from the sun, due to the CO2 back-radiation to space. The atmosphere is well mixed with CO2 all the way up to the top of the mesosphere.

    One’s face is a sufficiently sensitive and accurate radiation detector to allow one to physically perceive the falsehood of the CO2 back-radiation warming conjecture. It’s enough just to feel the warmth of the sun on your face.

    • Phil Salmon
      You said, “Sunlight has a lot of IR. That’s why it warms your face.”
      Surely you aren’t suggesting that only IR can warm something! What happens when light with a wavelength shorter than 700 nanometres is absorbed by anything?

  19. Not clear why only data up to December 2011 is used when HadCRUT4 is current to August 2019. Also, the IPCC global surface temperature reference is a combination of at least 3 global temperature data sets (HadCRUT4, GISS and NOAA that I know of) to compensate for acknowledged gaps in coverage; in particular large gaps in Arctic and African coverage by HadCRUT4.

    Starting the data at 1850 is also questionable if what we are looking for is the man-made influence. The IPCC anomaly reference period for pre-industrial is 1850-1900, a period which is not expected to show a significant man-made warming effect; indeed, in HadCRUT4 the period has no discernible trend. Incorporating that period into the calculation for ΔEC will necessarily bias it low.

    If we use a combination of the 3 main surface data sets rather than just the coolest-running one (all set to the same anomaly base), draw the data out to the most recent common month rather than stopping it in 2011, and start the data ‘after’ the pre-industrial period of reference (i.e. from Jan 1901), then the value for ΔR1, which Lord Monckton gives as 0.75 K or 0.7 K per century.

    Plugging that into the other equations gives an ΔEC of 1.6 K, which, while still at the low end, is within the IPCC 2013 confidence range for equilibrium climate sensitivity (1.5 – 4.5 K ) and well within their estimate for transient climate response (1.0 – 2.5 K).

    • The text of the 3rd paragraph above is clipped for some reason. It should read:

      “…the value for ΔR1, which Lord Monckton gives as 0.75 K or 0.7 K per century.”

      • Sorry, my use of greater-than and less-than signs messed up the formatting. I also miscalculated the ΔR1, which changes ΔEC from 1.6 to 1.9 K. At the 3rd attempt, then, the end of the above post should read:

        “If we use a combination of the 3 main surface data sets rather than just the coolest-running one (all set to the same anomaly base), draw the data out to the most recent common month rather than stopping it in 2011, and start the data ‘after’ the pre-industrial period of reference (i.e. from Jan 1901), then the value for ΔR1, which Lord Monckton gives as 0.75 K or < 0.5 K per century, becomes 1.07 K or just under 1.1 K per century.

        Plugging that into the other equations gives an ΔEC of 1.9 K, which, while still at the low end, is within the IPCC 2013 confidence range for equilibrium climate sensitivity (1.5 – 4.5 K ) and well within their estimate for transient climate response (1.0 – 2.5 K)."

        • TheFinalNail asks why I have used 1850-2011 as the reference period. The reason is that all climate data were updated to 2011 from various sources in time for IPCC (2013).

          The FinalNail also says we should use all three surface datasets. IPCC (2013) used HadCRUT4, which is the mean of the five longest-standing datasets, two of which are lower-troposphere and three of which are surface.

          TheFinalNail also appears not to have adjusted the total-forcing and radiative-imbalance data to 2019. As a result, his calculation is excessive, though still very much below the current midrange estimate in the models.

          • I thank Lord Monckton for his reply. He says he used the period 1850-2011 as his reference period because all climate data were updated to 2011 from various sources in time for IPCC (2013). In fact IPCC (2013) uses data up to and including 2012, but my point is that his formula does not rely on “all climate data”; it relies on the global surface temperature record, which is currently updated to August 2019. By omitting data after 2011, even only using HadCRUT4, Lord Monckton omits the 5 warmest years in the surface temperature record (in descending order: 2016, 2015, 2017, 2018 & 2014). One can see how this might lend his results a cool bias.

            I’m afraid I must contradict Lord Monckton when he implies that IPCC (2013) only used HadCRUT4 as its reference surface data set. It used all 3 surface data sets set to the 1961-90 anomaly base. See for example Figure SPM.1(a) from the summary for policy makers which is marked “Observed global mean combined land and ocean surface temperature anomalies, from 1850 to 2012 from three data sets.” These are identified as HadCRUT4, GISS and NOAA in the main body of the report. By using only HadCRUT4, the coolest of the surface data sets, another cool bias is introduced to Lord Monckton’s result. The 2013 SPM is here (PDF, 22 MB): https://www.ipcc.ch/site/assets/uploads/2018/02/WG1AR5_SPM_FINAL.pdf

            Lord Monckton may be correct about my calculating abilities, however, really the only value I changed was the one for ΔR1, which is derived from linear regression of the temperature data and is independent, as far as I can see, from the forcing and imbalance adjustments (I stand to be corrected). He bases his calculation for ΔR1 on HadCRUT4 only and for the period Jan 1850- Dec 2011. This gives 0.75 K. I get that figure too, for that data. However, for the reasons outlined above, I used the average of the 3 surface data sets, started in Jan 1901 and used data right up to August 2019. This gives an ΔR1 of 1.07 K. If you plug that into the remaining calculations as presented by Lord Monckton, you get a value of 1.88 K for A leading to a value of 1.91 K for ΔEc.

            That is on the low side of the IPCC range for ECS (1.5-4.5 K), but this ΔEc value does not represent ECS; it is far more akin to transient climate response (TCR), which IPCC (2013) suggested is likely to be in the region of 1.0 to 2.5 K (see section D2 of the IPCC 2013 SPM linked to above).

          • TheFinalNail:

            How did you calculate the natural warming through the entire period? What evidence do you use to identify natural warming from anthropogenic?

          • TheFinalNail has not understood the basis on which the calculation in the head posting was done. All of the data – not just the temperature data – were updated to 2011 (not 2012) for IPCC (2013). I needed authoritative data to the same year if possible, not only for temperature but also for radiative forcing and radiative imbalance. TheFinalNail appears not to have adjusted the forcing and radiative-imbalance data to whatever date he used for his warming estimate.

    • “The IPCC anomaly reference period for pre-industrial is 1850-1900 …”

      When defining “pre-industrial (temperatures)” the IPCC managed to shoot itself in the foot by acknowledging that different definitions exist, but failing to decide which one to use internally. This applies to both the time period corresponding to “pre-industrial” and the (absolute / anomaly) reference temperature associated with that time period.

      For the IPCC AR5 report (WG1, 2013) they include at least the following statements and comments.

      – – – – –

      Table 5.1, page 389 :
      “Pre-industrial period” … “refers to times before 1850 or 1850 values (h)”

      Note (h) : In this chapter, when referring to comparison of radiative forcing or climate variables, pre-industrial refers to 1850 values in accordance with Taylor et al. (2012). Otherwise it refers to an extended period of time before 1850 as stated in the text. Note that Chapter 7 uses 1750 as the reference pre-industrial period.

      Section 7.5.1, page 614 :
      The reference year of 1750 is chosen to represent pre-industrial times …

      Section 7.5.3, page 618 :
      The reference years for pre-industrial and present-day conditions vary between estimates. Studies can use 1750, 1850, or 1890 for pre-industrial …

      Section 8.1.2, page 668 :
      Analysis of forcing due to observed or modelled concentration changes
      between pre-industrial, defined here as 1750 …

      Figure 12.40, panel (b), page 1100 :
      PI temperature (anomalies) = 1980-1999 average – 0.5°C (apparently …)

      • Fair point. The period 1850-1900 is commonly used in studies as the anomaly base for ‘pre-industrial’, perhaps because there are instrument data for the period and it does not contain any trend. A 2018 IPCC FAQ defined 1850-1900 as an “approximation” for pre-industrial but was open to improvement.

        However, the point remains that if you include a lengthy early period of the global instrument temperature record, one when human influence on climate was likely negligible, then any results you get regarding the past or future rate of human-caused warming are likely to have a cool bias.

    • Not clear why only data up to December 2011 is used…

      I don’t know why you aren’t clear, considering Lord M points out why in the article: “…2011, the year to which climate data were updated in time for IPCC’s 2013 Fifth Assessment Report…”. You did read the entire article, I hope.

  20. It is even less than 1.4 K; it is in the range 0.5-0.7 K, depending on what feedback on the convective lapse rate is assumed. If there is equipartition of the convectively and radiatively transported heat fluxes, then the number is 0.5 K. A trifling amount.

    • I like trifle…. But, as David Middleton said, we should worry about people who make laws but can’t add up, not about trifling (and beneficial) AGW.

  21. Viscount Monckton,

    You say,
    “As far as I can discover the Intergovernmental Panel on Climate Change, to name but one, has never attempted to derive a midrange estimate of future global warming by that most obvious and direct method – from real-world data.”

    Unfortunately, you cannot “discover” the information there because the Intergovernmental Panel on Climate Change (IPCC) studiously refuses to mention it.

    Empirical – n.b. not model-derived – determinations indicate climate sensitivity is less than 1.0deg.C for a doubling of atmospheric CO2 equivalent. This is indicated by the studies of
    Idso from surface measurements http://www.warwickhughes.com/papers/Idso_CR_1998.pdf
    Lindzen & Choi from ERBE satellite data http://www.drroyspencer.com/Lindzen-and-Choi-GRL-2009.pdf
    and Gregory from balloon radiosonde data http://www.friendsofscience.org/assets/documents/OLR&NGF_June2011.pdf

    I especially commend the suite of 8 (yes, eight) “natural experiments” reported by Idso.

    I hope this is helpful.

    Richard

    • Most grateful to Richard Courtney for his comment. It is intriguing that IPCC carefully avoids looking at real-world methods of deriving equilibrium sensitivity.

      Lindzen and Choi published a correction to their 2009 paper in 2011, for the original paper had been based on defective ERBE data that were later corrected for orbital decline in the satellite.

      But the approach they took was a beautifully elegant one.

    • Once the nomenclature is corrected in Figure 4, it will be apparent why the data is unimportant.

      The Figure as currently constructed compares “Observed” with “Expected” values. Once the figure is corrected to read “Observed” and “Actual”, that’ll clarify why the dataset is superfluous to the task at hand and should be properly ignored.

      Yes: it’s a joke. Absurd? I witnessed a graph being presented wherein the data was marked “observed” and the model prediction “actual”. The worker concluded from the copious disagreement between the two that the experiments needed to be repeated. He hadn’t worked around to applying corrections to the observed data to close the discrepancy.

      Nice piece of work. It makes some simplifying assumptions, but ones that generally favor the case for CO2 forcing. Talking about how models fail to match measured temperatures is (conceptually) the same thing, but has an esoteric flavor that can be lost on the non-scientific. Taking their own concept and simply running the numbers is more approachable.

      One can quibble about some of the values and the exact estimation, but the bracing difference between the predictions and reality doesn’t leave much oxygen to argue for CGW unless they wish to string together some pretty magical scenarios.

      It will be entertaining to watch them try, as they must. They have gone way too far down that road to turn back. For them, it’s with their shields or on them.

      • The method of deriving Charney sensitivity from given values of global warming, total and unrealized forcing over a given period is well established in the journals. The head posting shows how a system-gain factor can be derived from these values and then applied to future reference warming to obtain equilibrium warming. In particular, a doubling of CO2 concentration would warm the world by about 1.4 K at equilibrium, not 4.1 K.
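        As a sketch only, the system-gain method described above can be put in a few lines. The inputs below are illustrative IPCC-AR5-era figures (observed warming, period forcing, radiative imbalance, Planck parameter), not necessarily the exact values used in the head posting:

```python
# Hedged sketch of the system-gain method; all input values are
# illustrative AR5-era figures, not the head posting's exact numbers.
dT_obs   = 0.75    # K, observed warming 1850-2011 (HadCRUT4 trend)
dQ_total = 2.29    # W/m^2, total anthropogenic forcing to 2011
dN       = 0.60    # W/m^2, radiative imbalance (unrealized forcing)
planck   = 0.3125  # K per (W/m^2), Planck sensitivity parameter
f_2xco2  = 3.45    # W/m^2, forcing from doubled CO2

# Equilibrium warming implied by the observed warming once the
# unrealized fraction of the forcing is allowed for:
dE_obs = dT_obs * dQ_total / (dQ_total - dN)

# Reference (pre-feedback) warming from the same forcing:
dR_ref = planck * dQ_total

# System-gain factor, applied to reference doubled-CO2 warming:
A = dE_obs / dR_ref
charney = A * planck * f_2xco2
print(round(A, 2), round(charney, 2))
```

        With these inputs the gain factor comes out near 1.4 and the Charney sensitivity near 1.5 K, in the neighbourhood of the 1.4 K figure in the head posting.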

  22. Lewis & Curry (2018) derive a best estimate for ECS = 1.50 K implicitly assuming that all warming comes from CO2 increase.

    https://www.nicholaslewis.org/wp-content/uploads/2018/07/LewisCurry_The-impact-of-recent-forcing-and-ocean-heat-uptake-data-on-estimates-of-climate-sensitivity_JCli2018.pdf

    When taking into account that some of the warming since 1850 has natural reasons we get these values:
    25% natural: ECS = 1.03 K
    50% natural: ECS = 0.65 K

    http://www.drroyspencer.com/2018/02/diagnosing-climate-sensitivity-assuming-some-natural-warming/

    So the reported CO2 sensitivity in CMIP6 seems to be far away from reality.
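    Under the simplest energy-budget assumption, an ECS estimate scales roughly in proportion to the anthropogenic fraction of the observed warming. A naive sketch follows; Spencer’s fuller treatment, which also re-attributes ocean heat uptake, yields the somewhat lower figures quoted above:

```python
# Naive linear-attribution sketch: scale the Lewis & Curry (2018)
# all-anthropogenic ECS estimate by the assumed anthropogenic fraction.
ecs_all_anthro = 1.50  # K, Lewis & Curry (2018) best estimate

scaled = {nat: round(ecs_all_anthro * (1 - nat), 3)
          for nat in (0.25, 0.50)}
print(scaled)  # {0.25: 1.125, 0.5: 0.75}
```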

    • 100% Natural: ECS = 0.0°K
      200% Natural (without CO2 it would have gotten even warmer): ECS = -1.5°K
      Unknown % Natural: ECS = Anyone’s Guess

      When allowing for “some of the warming to be natural”, should one not take into account the full range of possibilities, particularly when starting out with the belief that no one really knows for sure?

    • I agree with Mr Haas that there may be a zero temperature response to atmospheric CO2 enrichment. If, for instance, Michael and Ronan Connolly are correct to find in the radiosonde data that the atmosphere taken as a whole acts as an ideal gas, and if they are correct to draw the conclusion from Einstein’s 1917 paper on quantum radiation that in an ideal gas the greenhouse effect, though real and measurable, cannot cause any net warming, the temperature response to all greenhouse gases is indeed zero. But they have not yet proven their case, though they are working on it.

    • Would the Connollys’ case for a nonexistent greenhouse effect (from Einstein’s quantum theory), bear any resemblance to the case and theory of Ferenc Miskolczi? Nonlinear self-reorganisation to maintain a constant optical depth? Or is it a different argument entirely?

      • In response to Mr Salmon, the Connollys’ argument is not on all fours with Miskolczi. The Connollys demonstrate that the atmosphere behaves as an ideal gas: i.e., the boundary layer, the troposphere and the tropopause are each in overall thermodynamic equilibrium, wherefore the entire atmosphere is in thermodynamic equilibrium. In an ideal gas, the greenhouse effect cannot cause warming, if the Connollys are interpreting Einstein’s 1917 paper correctly. It’s some way above my pay-grade, but they are working hard on a series of papers. If they are right, this is definitely Nobel Prize material.

      • MoB
        Einstein wrote in that 1917 paper:

        During absorption and emission of radiation there is also present a transfer of momentum to the molecules. This means that just the interaction of radiation and molecules leads to a velocity distribution of the latter. This must surely be the same as the velocity distribution which molecules acquire as the result of their mutual interaction by collisions, that is, it must coincide with the Maxwell distribution. We must require that the mean kinetic energy which a molecule per degree of freedom acquires in a Planck radiation field of temperature T be

        kT / 2

        This must be valid regardless of the nature of the molecules and independent of the frequencies which the molecules absorb and emit.

        “Independent of the molecules and frequencies…”
        This hardly sounds like a resounding endorsement of CO2 backradiation warming theory from Einstein.

    • Let us start with a reference sensitivity of 1.02 K. A researcher from Japan pointed out that such calculations completely ignore the fact that a doubling of CO2 in the Earth’s atmosphere will cause a slight decrease in the dry lapse rate in the troposphere, which is a cooling effect that reduces the climate sensitivity of CO2 by more than a factor of 20, yielding a reference sensitivity of less than 0.051 K. The major feedback effect would be that of H2O. However, adding H2O to the atmosphere very significantly lowers the lapse rate, as evidenced by the fact that the wet lapse rate is significantly less than the dry lapse rate in the troposphere. So instead of amplifying CO2-based warming by a factor of 3, it may be more realistic that H2O feedback retards CO2 warming by a factor of 3, yielding a climate sensitivity of less than 0.017 K for a doubling of CO2 in the Earth’s atmosphere. Less than 0.017 K is trivial, but there is some additional logic that will lead one to the conclusion that the climate sensitivity of CO2 is essentially zero.

      The reality is that the Earth’s climate has been changing for eons, yet the change is so small that it takes networks of very sophisticated sensors decades to even detect it. One must not mix up true global climate change with weather cycles that are part of the current climate. Based on the paleoclimate record and the work done with models, the climate change we are experiencing today is caused by the sun and the oceans, over which mankind has no control. Despite the hype, there is no real evidence that CO2 has any effect on climate, and there is plenty of scientific rationale to support the idea that the climate sensitivity of CO2 is zero.

      AGW is not a proven theory but rather a conjecture. AGW sounds plausible at first, but upon a more detailed examination one finds that the AGW conjecture is based on only partial science and is full of holes. For example, there is the idea that CO2 acts as a thermostat and that increasing CO2 in the atmosphere causes warming because CO2 has LWIR absorption bands that cause it to trap heat. CO2-based warming causes more H2O to enter the atmosphere, which causes even more warming because H2O also has LWIR absorption bands and hence traps even more heat. So according to the AGW conjecture, H2O acts to amplify any warming that CO2 might cause. Al Gore, in his movie “An Inconvenient Truth”, presents a chart showing CO2 and temperature for the past 650,000 years. There is an obvious correlation between CO2 and temperature, which Al Gore claims shows that CO2 works as a thermostat and that more CO2 in our atmosphere causes warming. But a closer look at the data shows that CO2 follows instead of leads temperature. It is higher temperatures that cause more CO2 to enter the atmosphere, because warmer water does not hold as much CO2 as cooler water does. Contrary to what Al Gore claims, there is no evidence that the additional CO2 causes warming. On the plot, Al Gore included where CO2 is today. CO2 is much higher than one would expect from the warming of the oceans, and the proximate cause of the increase in CO2 is mankind’s burning of fossil fuels. According to the chart, if CO2 were the thermostat of global warming, then it should be a heck of a lot warmer than it actually is, but it is not. If anything, Al Gore’s chart shows that CO2 does not cause global warming as Al Gore claims.

      H2O is actually a stronger absorber of IR than CO2 on a molecule-per-molecule basis. According to the AGW conjecture, the idea is that CO2 warming causes more H2O to enter the atmosphere, which causes even more warming, which causes even more H2O to enter the atmosphere, and so forth. This positive feedback effect does not really require CO2-based warming but will operate on H2O-based warming alone. This positive feedback effect, if true, would make the Earth’s climate very unstable, with H2O-based warming causing more H2O to enter the atmosphere, causing even more warming, until all the bodies of water on Earth boiled away. Such an event would cause the barometric pressure and temperature at the Earth’s surface to be much higher than they are on Venus, but such a thing has never happened. What the AGW conjecture ignores is that, besides being the primary greenhouse gas, H2O is a major coolant in the Earth’s atmosphere, moving heat energy from the Earth’s surface to where clouds form and where heat energy is more readily radiated to space. The overall cooling effect of H2O is evidenced by the fact that the wet lapse rate is significantly less than the dry lapse rate in the troposphere. So instead of providing a positive feedback amplifying any warming that CO2 might provide, H2O provides negative feedback and retards any warming that CO2 might provide. Negative feedback systems are inherently stable, as the Earth’s climate has been for the past 500 million years, long enough for life to evolve, because we are here.

      The AGW conjecture depends upon the existence of a radiant greenhouse effect in the Earth’s atmosphere caused by trace gases with LWIR absorption bands. A real greenhouse does not stay warm because of the action of heat-trapping gases; rather, it stays warm because the glass limits cooling by convection. It is entirely a convective greenhouse effect that keeps a real greenhouse warm; no radiant greenhouse effect has been observed there. So too on Earth, where gravity and the heat capacity of the atmosphere act to limit cooling by convection. Derived from first principles, the Earth’s convective greenhouse effect causes the surface of the Earth to be roughly 33 degrees C warmer than it would otherwise be. 33 degrees C is the amount derived from first principles, and 33 degrees C is what has been observed. Any additional warming caused by a radiant greenhouse effect has not been observed. The radiant greenhouse effect has not been observed in a real greenhouse, in the Earth’s atmosphere, nor on any planet in the solar system with a thick atmosphere. The radiant greenhouse effect is nothing but science fiction, so the AGW conjecture is nothing but science fiction as well. This is all a matter of science.

      • While one has some sympathy with Mr Haas’ viewpoint, it is a question of what one can prove, and not what one wishes was so.

        Nothing less than absolute proof will eventually budge the climate Communists. The head posting provides proof by an elementary but robust method that midrange Charney sensitivity is less than 1.4 K. And that value is small enough not to require any mitigation action at all.

        • Monckton of Brenchley and his group of associates have done a good job showing that, if all the warming since 1850 were caused by the increase in CO2, then based on measured data the climate sensitivity of CO2 could not be greater than 1.2 K per one article and 1.4 K per another. But those who believe in the AGW conjecture might argue that if it were not for the increase in CO2 there would have been net cooling during that period, so that the CO2 sensitivity is still much higher. That is why it is so important, as a next step, to get a good handle on what non-CO2-related climate change most likely happened during that time frame. One cannot really prove anything in climate science. Proponents of the AGW conjecture have stated that CO2-based warming is the only explanation for warming since the middle of the 20th century. We who do not believe in the AGW conjecture need to show that there are other explanations that are quite plausible and do not involve any CO2-based warming. Because no one can really prove anything either way, it comes down more to a preponderance of the evidence to sway public opinion so as to stop the very wasteful spending on trying to stop the Earth’s climate from changing.

    • WH, thank you for making this point, and I see nearby that MoB agrees. I understand the logic MoB has been using in exposing the error of mainstream climate science concerning feedback, using the widely accepted temperature data and forcings. But I also cringe at the conclusion of even a low Charney sensitivity (“less than 1.4 K”) when there are such good reasons to expect zero.

      • In response to Mr Dibbell, there are indeed good reasons to expect Charney sensitivity to be close to zero, but there is a difference between what one imagines to be the case and what one can prove, particularly when the establishment is determined not to give way to anything other than absolute proof.

        • But there is no absolute proof in climate science because one cannot run definitive global climate experiments like rerunning the last 150 years but with no increase in CO2.

  23. bobl,
    I am not following you about doublings.
    The first doubling is 1 molecule to 2. The second doubling is 2 molecules to 4. A doubling here, a doubling there, and sooner or later your doublings start to mean real values.
    What base values are you using for your first doubling, and why?

    • Thanks for asking the question that most posters don’t seem to bother with as we are currently between 9 & 10 actual doublings.
      Where does that put us on the logarithmic scale?

      • We’re between eight and nine, if the first doubling be from one to two ppm.

        At present, the other GHGs are negligible, so consider just H2O, at an average concentration of ~25,000 ppm, and CO2, at 415 ppm. If, as alarmists claim, the GHE is responsible for a warming of 33 degrees C, and assuming equal effect from both gases, then CO2’s share is about 1.6%, or 0.5 degree C. There are places where the presumed effect of CO2 might be greater, as over the arid polar regions and lower latitude dry deserts, and higher in the atmosphere, where there is less water vapor.

        https://www.acs.org/content/acs/en/climatescience/energybalance/planetarytemperatures.html

        Yet the one place on Earth’s surface where the GHE of CO2 should be most pronounced, ie the South Pole, shows no warming since records have been kept there, despite steadily rising magic gas levels.

        Also, the stratosphere is supposed to cool under CO2-induced global warming of the troposphere.
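The two calculations earlier in this comment (the doubling count from 1 ppm, and CO2's share of a 33 degree C greenhouse effect) are quick to check; the equal-per-ppm effect for H2O and CO2 is the comment's simplifying assumption, not a radiative-transfer result:

```python
import math

CO2_PPM = 415.0      # today's concentration
H2O_PPM = 25_000.0   # average water vapour, as given above
GHE_K   = 33.0       # claimed total greenhouse warming, K

# Doublings from a 1 ppm baseline to today's concentration
doublings = math.log2(CO2_PPM / 1.0)
print(round(doublings, 2))  # ~8.7: between eight and nine

# CO2's share of the GHE under the comment's equal-per-ppm
# assumption for H2O and CO2 (a strong simplification):
share = CO2_PPM / (CO2_PPM + H2O_PPM)
print(round(100 * share, 1))    # ~1.6 %
print(round(GHE_K * share, 2))  # ~0.54 K
```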

          • You’re welcome.

            In the Silurian Period, 444 to 419 Ma, CO2 averaged an estimated 4500 ppm, ie 16 times “pre-industrial” level, yet best guess temperature averaged only some three degrees C higher than now, implying 0.75 degrees per doubling.

          • However, in the following Devonian Period, the spread of land plants drew down CO2 to an estimated average of 2200 ppm, eight times “pre-industrial”, for three doublings, with average T around six degrees C over the ~14 degrees at the end of the LIA, c. AD 1850. This implies ECS of 2.0 degrees C per doubling.

            During the Cretaceous Period, CO2 averaged an estimated 1700 ppm (six times mid-19th century’s supposed 285 ppm), with mean T higher by four degrees than pre-industrial, implying 1.6 degrees C per doubling.
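The implied sensitivities quoted for the Silurian, Devonian and Cretaceous can be reproduced from the CO2 multiples and temperature offsets above; a 285 ppm pre-industrial baseline is assumed throughout, and all inputs are the comment's own estimates:

```python
import math

BASELINE_PPM = 285.0  # assumed pre-industrial CO2

def ecs_per_doubling(co2_ppm, delta_t_K, baseline=BASELINE_PPM):
    """Implied warming per CO2 doubling: dT / log2(C / C0)."""
    return delta_t_K / math.log2(co2_ppm / baseline)

# (period, estimated mean CO2 ppm, estimated mean T above pre-industrial)
for period, ppm, dT in [("Silurian", 4500, 3.0),
                        ("Devonian", 2200, 6.0),
                        ("Cretaceous", 1700, 4.0)]:
    print(period, round(ecs_per_doubling(ppm, dT), 2))
# Silurian ~0.75, Devonian ~2.0, Cretaceous ~1.6 K per doubling
```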

  24. When I see “Charney sensitivity is 4.1 K”, I have to think “Carney sensitivity” (as if coined by carnival employees). Similar to the Chimp5 models.

  25. CO2 shows logarithmic decay in W/m2 forcing as CO2 increases. At the current level of 410 ppm, each additional molecule of CO2 adds very, very little to the energy balance. More importantly, the oceans dominate atmospheric temperatures. To explain the atmosphere, you have to explain the oceans and how CO2 warms them, which LWIR between 13 and 18 microns doesn’t do. Assuming however that it does, there is a rate of change associated with the warming due to a marginal 1.64 W/m2, and El Niños can cool the oceans by a full degree or more. For CO2 to be the cause, which is laughable considering that the variation between a cloudy and a clear day is astronomical, along with the energy associated with visible radiation, one would have to explain how CO2 can replace all that energy lost to an El Niño in the time period between El Niños. CO2 is like trying to fill an Olympic-sized pool with a syringe. The facts are, Mother Nature has a natural pressure valve called an El Niño that rapidly reduces the energy in the climate system. That is why in 600 million years of geologic history there has NEVER been catastrophic warming, even when CO2 was 7,000 ppm. The rate at which CO2 can replace lost energy is simply way too slow. The only way for anything to cause catastrophic warming would be to place the Earth in a thermos and not let any energy escape to outer space. Anyway, Lord Monckton, I created an interesting experiment for high-school students, and it looks like NASA has fouled it up by promptly removing the data. You may want to try this kind of experiment yourself. My experience is that the simpler you make your arguments, the more effective they will be.
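The logarithmic decay in the first sentence is commonly written using the Myhre et al. (1998) fit, ΔF = 5.35 ln(C/C0) W/m² (an assumed parameterization, not a figure from the comment); the marginal forcing of a single additional ppm at today's 410 ppm is then tiny:

```python
import math

ALPHA = 5.35  # W/m^2, Myhre et al. (1998) CO2 coefficient (assumed fit)

def co2_forcing(c_ppm, c0_ppm):
    """Forcing change for CO2 going from c0 to c: 5.35 * ln(c/c0)."""
    return ALPHA * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(560, 280), 2))  # full doubling: ~3.71 W/m^2
print(round(co2_forcing(411, 410), 4))  # one extra ppm today: ~0.013 W/m^2
```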

    WUWT, you may want to look into this as well. NASA playing with the data isn’t something that should be tolerated. Otherwise all we will get is GIGO.

    An Easy High School Experiment to Debunk CO2 Driven Climate Change
    https://co2islife.wordpress.com/2019/09/15/an-easy-high-school-experiment-to-debunk-co2-driven-climate-change/

    The Case of the Disappearing Data
    https://co2islife.wordpress.com/2019/10/08/the-case-of-the-disappearing-data/

    • The closing sentence of the 4th paragraph states:
      “That would take at least a century to happen.”

      Is something missing? What would take a century to happen?

  26. Is there enough data available yet (for creating a linear response function) for Pat Frank to take a stab at doing an uncertainty analysis on CMIP6 models?

    • We’d need to know the average annual long wave cloud forcing error of the CMIP6 models to do the uncertainty analysis, John.

      I doubt that will be known any time soon.

      On the other hand, it’s very doubtful that the physics of cloud modeling is much improved over the CMIP5 models. The improvement from CMIP3 to CMIP5 was pretty much zero.

      • Thank’s Pat. Basically you would need a lot of runs across a lot of models, then try and build a parametric based emulator which would be validated against the runs, right? That could then be compared to the actual data from Lauer as you did with CIMP5? Basically the first problem is having enough CIMP6 output to work with?

        • Basically, John, someone would need to run a whole slew of CMIP6 simulations hindcasting the past climate over 20 years or so, say, 1996-2015.

          Then compare the global total cloud fraction retrodicted in those hindcast simulations with the cloud fraction actually observed over those years by satellite.

          Then calculate the (simulated minus observed) cloud fraction error grid-point by grid-point. Sum it all up for each model across that 20 years to get an annual average error in global total cloud fraction.

          The cloud fraction error must then get converted to average annual error in long wave cloud forcing.

          Then get a bunch of air temperature projections from different CMIP6 GCMs, their RCP projections, say, and see if paper eqn. 1 will emulate those projections, using the same RCP forcings.

          It almost certainly will be successful (unless someone has decided to add some non-linearity to muddy the emulation waters).

          Get all that, and the uncertainty analysis is a go.

          At a guess that sort of detailed data on CMIP6 models will not be available for some time.
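Once those pieces exist, the final propagation step can be sketched. This assumes the CMIP5-era values from Frank (2019) (a ±4 W/m² annual long-wave cloud-forcing calibration error, a 0.42 emulator coefficient, and 33.3 W/m² of total greenhouse forcing) carry over to CMIP6, which is precisely what is not yet known:

```python
import math

# Sketch of the per-step uncertainty in Pat Frank's linear GCM
# emulator (Frank 2019, Frontiers in Earth Science).  The numbers
# are the CMIP5-era values; the CMIP6 equivalents are unknown.
F_GHG    = 33.3   # W/m^2, total greenhouse forcing in the emulator
COEFF    = 0.42   # dimensionless emulator coefficient
GHE_K    = 33.0   # K, total greenhouse temperature effect
LWCF_ERR = 4.0    # W/m^2, annual average long-wave cloud forcing error

# Uncertainty contributed by each annual step of a projection
u_step = COEFF * GHE_K * LWCF_ERR / F_GHG

def projection_uncertainty(n_years):
    """Root-sum-square growth of the per-step uncertainty."""
    return math.sqrt(n_years) * u_step

print(round(u_step, 2))                      # ~1.66 K per step
print(round(projection_uncertainty(100), 1)) # ~16.6 K after a century
```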

            • Ok. Basically, repeat Lauer’s work, but with CMIP6 models to get the CMIP6 uncertainty, and then your part starts. So, basically, there is not enough data at this point.

              You could build your emulator for CMIP6 (if there is even enough data for that), then compare it to CMIP5 and see if there is any difference. If not, the CMIP5 results still stand.

  27. Can someone explain, in qualitative terms, why CO2’s property of absorbing and re-radiating IR photons does not cause incoming solar IR radiation to be back-radiated out to space?

    Such that increasing CO2 in air would reduce the heating of the atmosphere by IR by backradiating incoming solar IR to space.

    At all heights above the emission height, the sky above is transparent to IR. So in the stratosphere and above CO2’s radiative properties will reduce TSI and cool the planet.

    In the troposphere the hypothesised back radiation back to earth might indeed conceivably cause some equilibration warming. But this would only offset the cooling caused by CO2 in the upper atmosphere re-radiating solar IR back out to space.

    Tell me it ain’t so.

    • CO2 absorbs around 3.4 microns. Incoming radiation is in the visible range (predominantly). So, incoming radiation needs to be absorbed on the earth’s surface, then re-radiated in the thermal range (say 2-7 microns or so) before CO2 can absorb it. So basically, CO2 is not able to absorb incoming radiation, but can absorb re-radiated radiation.

    • Phil, the collisional decay of vibrationally excited CO2 completely dominates radiative decay in the troposphere. The absorbed radiant energy goes from CO2 into the kinetic energy of the troposphere.

      The real question is, what does the climate do with that kinetic energy. Climate model(er)s assume it all shows up as sensible heat. But in fact, no one knows what happens.

      Does the kinetic energy convect away? Does global cloud extent and distribution change? What happens, if anything, to tropical thunderstorms or tropical precipitation?

      No one knows.
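The collisional dominance Pat describes can be put in rough numbers; both timescales below are generic order-of-magnitude values assumed for illustration, not figures from the thread:

```python
# Order-of-magnitude comparison of the two decay channels for a
# vibrationally excited CO2 molecule near the surface.  Both values
# are generic textbook magnitudes, assumed here for illustration:
t_radiative = 1.0    # s, radiative lifetime of the 15-micron bending mode
t_collision = 1e-9   # s, mean time between collisions at sea-level density

# The chance an excited molecule radiates before it is collisionally
# quenched is roughly the ratio of the two timescales:
p_radiate = t_collision / t_radiative
print(p_radiate)  # ~1e-9: essentially all absorbed IR is thermalized
```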

      • Pat
        Einstein wrote in 1917:

        During absorption and emission of radiation there is also present a transfer of momentum to the molecules. This means that just the interaction of radiation and molecules leads to a velocity distribution of the latter. This must surely be the same as the velocity distribution which molecules acquire as the result of their mutual interaction by collisions, that is, it must coincide with the Maxwell distribution. We must require that the mean kinetic energy which a molecule per degree of freedom acquires in a Planck radiation field of temperature T be

        kT / 2

        This must be valid regardless of the nature of the molecules and independent of the frequencies which the molecules absorb and emit.

        “Independent of the molecules and frequencies…”
        This hardly sounds like a resounding endorsement of CO2 backradiation warming theory from Einstein.

    • John Q

      CO2 is not able to absorb incoming radiation

      Wrong. There is plenty of IR in sunlight at all the wavelengths absorbed by CO2 – 2.7, 4.3 and 15 microns – see the 4th figure here:

      https://www.researchgate.net/publication/226227567_Solar_radiative_output_and_its_variability_Evidence_and_mechanisms/figures?lo=1

      Since the spectrum reaching the surface is depleted in the CO2 absorption bands, this proves that sunlight on its way down to the surface has IR removed by CO2. Some of this is back-radiated out to space. So increasing atmospheric CO2 WILL REDUCE TSI – it will reduce solar IR reaching earth.

      Granted, this IR does not reach the earth’s surface, but it does warm the atmosphere. This also adds to the earth’s heat budget. Has this been ignored??
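How much solar energy is actually at stake here can be estimated from the Planck function: the fraction of a 5778 K blackbody's output falling inside the 2.7, 4.3 and 15 micron CO2 bands. A rough numerical sketch (blackbody Sun assumed, band widths chosen arbitrarily):

```python
import math

# Physical constants (SI)
H  = 6.62607015e-34     # Planck constant, J s
C  = 2.99792458e8       # speed of light, m/s
KB = 1.380649e-23       # Boltzmann constant, J/K
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck(lam, T):
    """Blackbody spectral radiance per unit wavelength (W m^-2 sr^-1 m^-1)."""
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

def band_fraction(lo_um, hi_um, T=5778.0, n=4000):
    """Fraction of total blackbody output between two wavelengths,
    by trapezoidal integration; total radiance is sigma*T^4/pi."""
    lo, hi = lo_um * 1e-6, hi_um * 1e-6
    dlam = (hi - lo) / n
    s = 0.5 * (planck(lo, T) + planck(hi, T))
    for i in range(1, n):
        s += planck(lo + i * dlam, T)
    return s * dlam / (SIGMA * T**4 / math.pi)

# Share of solar (5778 K blackbody) output in the three CO2 bands
# mentioned above; the band widths are arbitrary choices:
for lo, hi in [(2.6, 2.8), (4.2, 4.4), (14.0, 16.0)]:
    print(f"{lo}-{hi} um: {100 * band_fraction(lo, hi):.3f} %")
```

Each band carries well under one percent of the solar output, so the depletion the spectrum shows is real but small.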

  28. There are clearly only three major movers of climate on a geological scale; the oceans (or specifically the quantity of water on our planet), the planet’s precession of the equinoxes, and our sun. Everything else we have observed, and called “weather” is merely the chaotic balancing act of these forces in our atmosphere. Whether 4 parts per 10k or 70 parts per 10k, a trace molecule, which releases its extra “stored” energy within nanoseconds, will still produce only a trace effect upon temperature far outside the ability of modern instrumentation to quantify.

    When scientists (and I use that term loosely) can adequately explain the cause and longevity of an ice age, I might begin to listen to them regarding “climate science” (used equally loosely). To judge by their current accuracy, and the earlier WUWT story regarding the temperature tipping point around 1000 AD, we will be well into the start of the next ice age before they’ll get around to noticing. And perhaps we already are…

    Much ado about nothing, methinks. Akin to arranging the deck chairs on the Titanic.

  29. 13.8 billion years ago the Big Bang occurred; energy-matter exploded into existence. Or maybe out of a string-world of 11 or so dimensions sitting happily at the Planck length, three of them accidentally popped out and became spatially extended – whatever.

    After a crazy fraction of a second, including an inflationary phase, there followed a period of 300,000 years during which the photon ruled. Light was so intense that matter particles could not form, and the universe, though light-dominated, was opaque to light. This is called the light epoch.

    At the end of that 300,000 years, the universe had expanded enough to become transparent to photons and matter particles could form. From that moment until the present we are in the matter epoch.

    Now the reason for saying all this is that, to read the theory behind CO2 back-radiation warming, one could be justified in wondering whether the writer is confused as to which epoch we are in. Heat is discussed only in terms of radiation fluxes, and matter is almost ignored.

    We are firmly in the matter, not the light, epoch and one implication of this is that radiation plays a minor role in the heat dynamics of the atmosphere. The most important process is convection. This of course occurs only in the troposphere, but most of the atmosphere is the troposphere. Only the mesosphere and ionosphere are fully radiation dominated since matter is so sparse – it’s practically space.

    Maybe there is an emission height. But heat is transported to the emission height from the surface primarily by convection, not radiation. Radiative fluxes play a negligible role, easily snuffed out by water-vapour feedbacks and attractor-seeking nonlinear adaptive and optimising responses.

  30. Ja. Ja.
    But it seems you have all forgotten that it will get hotter at the higher lats due to the natural climate change. Click on my name to read my report on that.

  31. Are they using CMIP6 or CMIP6 without particle forcing? Because if it’s the latter then it is simply junk science.

    You don’t need to go into any more detail than this. If they are selectively ignoring data because they don’t like the conclusions then they are propagandists, not scientists.

  32. I am sorry, for being so direct.

    But from my point of view it has to be considered as a total imbecility case that totally burns as bright and as splendid.

    As far as I know and can tell, in consideration of models and experiments proper, a clause of CO2 doubling does not even exist as a matter of merit in GCM simulation.

    The only GCM simulations proper that may qualify under the clause of CO2 doubling are simulations with an initial condition below 240 ppm, completely unrealistic, where the entirety of the rest of the GCM simulations above that 240 ppm initial condition never do a doubling.

    I am not sure yet how long it will take for people to understand that CS, ECS and TCR are simply AGW or “man-made climate change” components, with no value or meaning outside of such a given.

    When considering it by this angle the question requiring an answer stands as;

    “Who happens to be more imbecile there, the ones that challenge for a feces dirty fight, or the ones who readily accept and comply with that challenge, every time as often as possible in the very means of that premise!??”

    Sorry for being so strictly direct with this one, only so as due to my proposition of understanding and addressing of this issue in that regard.

    It is like keep head banging the same wall for ever in the same method, and expecting a different result.

    really really sorry!

    cheers

  33. A question for MoB.

    One thing that has troubled me for some time is what appears to me to be a single-minded obsession with radiation to calculate temperatures.

    Consider this. During the day, non-radiative processes remove energy from the surface. At night a portion of this energy is returned to the surface, also by non-radiative processes. A similar process happens between the tropics and the poles.

    The net effect of non radiative processes is to reduce maximum temperatures and increase minimum temperatures, while the average temperatures are unchanged.

    However, because radiation is a fourth-power function, reducing the variance in this fashion MUST also reduce outgoing radiation, which according to GHG theory must lead to an increase in surface temperatures to maintain the radiative balance at TOA.

    Thus, non GHG processes also create an increase in surface temperatures according to the same logic used to explain the increase in surface temperatures due to GHG.

    And if non-GHG and GHG processes both create the GHG effect, then the GHG effect is nothing of the sort. What we believe is due to GHGs is in reality overestimated, because we have incorrectly ignored the non-GHG contribution to the GHG effect.
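The fourth-power argument above is Jensen's inequality in disguise: for the same mean temperature, a smaller spread means less total σT⁴ emission. A toy two-patch sketch with made-up temperatures:

```python
SIGMA = 5.670e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

def mean_emission(temps_K):
    """Average sigma*T^4 over equal-area patches (W/m^2)."""
    return sum(SIGMA * t**4 for t in temps_K) / len(temps_K)

# Two surfaces with the same 288 K mean temperature (made-up numbers):
varied  = [268.0, 308.0]  # big day/night or equator/pole contrast
uniform = [288.0, 288.0]  # non-radiative transport has evened things out

print(round(mean_emission(varied), 1))   # ~401 W/m^2
print(round(mean_emission(uniform), 1))  # ~390 W/m^2
# Same mean temperature, less outgoing radiation once the variance is
# reduced -- so the evened-out surface must warm slightly to restore
# the TOA balance, which is exactly the point being made above.
```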

    • My understanding is that GCMs contain CFD code, so should be modeling some convective heat transfer.

    • In response to Ferd Berple, non-radiative transports redistribute heat within the atmosphere, keeping it in overall thermodynamic equilibrium, while radiative processes – if Einstein (1917) is wrong and the greenhouse effect can cause warming at all in an atmosphere in overall thermodynamic equilibrium – will cause warming. It is only the radiative processes that are subject to the radiation law.

      • Monckton of Brenchley
        October 10, 2019 at 1:33 pm
        ————-

        if Einstein (1917) is wrong
        —————————–

        Lord Monckton,

        Really me falling there for the “bait”, with the mentioning of the Einstein….

        From my position of understanding, the GCM proper model experiment,
        apart from any alleged or otherwise contribution in the consideration of AGW or whatever else there in these given lines of climate,
        still happens also to be and standing as the main and the only proof there, so far, even as in the case of an experiment,
        but still valuable in consideration of the claim that:

        t=nt,
        where “t” the time as standard time of and for any given point in space,,,,
        the only responsive variable there in the GCM simulations, that ends up as “effected”…
        where “t” as standard of time, or time standard….

        No way considering the possibility of contemplation that light may be considered as bending, outside the clause of:

        t=nt…. as Einstein clearly claimed… if I happen to not be wrong with this one!

        oh, well, that could be actually considered as way over the top…

        But from my point of view, the GCM simulations in summary, apart from the AGW and climate thingy, do propagate strongly as the main experimental platform thus far proving, or from some point of view considered as proper valid proof,
        that Einstein being correct with his “punchy” claim of :

        t=nt…
        (even when such an experiment (GCM) was not set-up to provide or prove such as a condition).

        All math or numbers or equations applied in consideration of GCMs, will not much make any sense there, unless including and being subject to the clause of:

        delta “t”, where “t” the time as per time standard.

        Oh, well, this really over the top, yes, most probably… but just saying it!

        Please do ignore this, if missing the point, made or claimed here…
        no hard feelings there, anyway… 🙂

        Clever intelligible reaction there, if I may say… 🙂

        But still bound to the “heartless” valuation of the scientific method. as all else there… prior or after.

        Thank you.

        cheers

  34. According to alarmists, water vapor increase depends only on temperature increase of the liquid surface water and has increased an average of 0.88% per decade. Actual measurements show the global average WV increase to be about 1.47% per decade.

    This proves WV increase, not CO2 increase, has contributed to temperature increase.

    CO2 increase in 3 decades, 1988 to 2018 = 407 – 347 = 60 ppmv
    Water vapor increase from graph trend of NASA/RSS TPW data = 1.47 % per decade
    From NASA graphic, average global WV = 10,000 ppmv
    WV increase in 3 decades = .0147 * 10,000 * 3 = 441 ppmv
    Per calculations from Hitran, each WV molecule is 5+ times more effective at absorbing energy from radiated heat than a CO2 molecule.
    Therefore, WV has been 441/60 * 5 = 36+ times more effective at increasing ground level temperature than CO2.
    The increased cooling by more CO2 well above the tropopause counters and apparently fully compensates for the tiny added warming from CO2 increase at ground level. Climate Sensitivity is not significantly different from zero.

    Accounting for the WV increase, ocean surface temperature cycles and the solar effect (quantified by the sunspot number anomaly time-integral) matches 5-year smoothed HadCRUT4 measured temperatures 96+ % 1895-2018.
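The arithmetic in the comment above can be checked directly; the 10,000 ppmv mean WV, the 1.47 %/decade trend and the 5x per-molecule factor are all the comment's own inputs, reused here as given:

```python
# Inputs exactly as given in the comment above
co2_increase_ppmv   = 407 - 347   # CO2 rise, 1988-2018
wv_trend_per_decade = 0.0147      # +1.47 % per decade (NASA/RSS TPW)
wv_mean_ppmv        = 10_000      # average global water vapour
wv_per_molecule     = 5           # claimed Hitran-based WV:CO2 factor

wv_increase_ppmv = wv_trend_per_decade * wv_mean_ppmv * 3  # three decades
print(round(wv_increase_ppmv, 1))  # 441.0 ppmv

relative_effect = wv_increase_ppmv / co2_increase_ppmv * wv_per_molecule
print(round(relative_effect, 1))   # ~36.8, i.e. the "36+ times" above
```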

    • So Monckton is saying that, according to the temperature data, assuming that all of the warming was caused by adding CO2 to the atmosphere, the climate sensitivity of CO2 cannot be more than 1.4 K. You are saying that at least 96% of the warming can be attributed to natural processes, which would imply that the climate sensitivity of CO2 is less than 0.056 K. Through some different logic I came up with a value of less than 0.017 K. Either value is quite small and for all practical purposes essentially zero. It has been my conclusion that there is no real evidence that CO2 has any effect on climate.

      • WH,
        Results summarized in Table 1 of my blog/analysis show 0.62 K from increase in water vapor 1895 to 2018. If from CO2 increase at ground level this would be 0.62/36 = 0.017 K, same as you got.

        My assessment at http://diyclimateanalysis.blogspot.com (second paragraph after Fig 1) shows that the added cooling from added CO2 above the tropopause more than compensates for this tiny amount at ground level, corroborating that CS is not significantly different from zero.

        The 96% is how well the combination of ocean cycles, water vapor, and solar effect match the HadCRUT4 measurements. The WV increase correlates with irrigation increase (both exhibit major upturns around 1960). If that can be shown to be cause and effect, humanity had a substantial contribution to the temperature increase.

        Average global temperature is being propped up by WV increase and el Ninos. Eventually the quiet sun and ocean cycle downtrend will prevail and an extended temperature decline will follow.

        • WOW! It may be only coincidence, but we both arrived at the number 0.017 by two different methods that, taken together, include both physics and measurement data. I am sure that the IPCC would reject our lines of reasoning, because it means that the increase in CO2 caused by mankind’s use of fossil fuel does not cause a significant change in climate, and hence the IPCC should no longer be funded.

  35. The average insolation at is 1363.5 W m^-2. But January 5th 2020, it will be 1,410.3 W m^-2. That is the date Earth is at aphelion, the closest distance to the Sun. On July 4 2020, the insolation will be 1,319.0 W m^-2. That is the date of aphelion, when the Earth is furthest from the Sun.

    The difference is due to the eccentricity of the Earth’s orbit, and the indisputable (and undisputed) fact that light intensity falls off as the inverse square of the distance from the source.

    In January, the Earth is experiencing summer in the southern hemisphere. Most of the area presented to the Sun is ocean, which has an albedo of 0.06. In July, summer is in the northern hemisphere, and most of the area presented to the Sun is land. Land can have an albedo from 0.06 (dark wet soil) to 0.29 (desert). Virtually all of the northern hemisphere is more reflective than the southern.

    So riddle me this: If the Earth is receiving 91.3 W m^-2 more power when it is at its least reflective (in the southern hemisphere), how come the northern hemisphere is experiencing more warming?

    But apart from that, how in the world could anyone pick out a signal of 2 or 3 W m^-2 (actually, 8 or 12 when one takes out the ludicrous fudge factor of 4) in a world where the absolute, rock-solid, no schiffing around variation is 91.3 W m^-2 every 182.5 days?

    Someone come up with a plausible explanation, please.

    • Sorry, the second sentence should have read “But January 5th 2020, it will be 1,410.3 W m^-2. That is the date Earth is at perihelion, the closest distance to the Sun.” And while I’m at it, the first sentence should have read “The average insolation at the top of the atmosphere is 1363.5 W m^-2.

      Pardon my egregious gaffes.
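      The corrected figures follow directly from the inverse-square law. A minimal check, assuming the standard mean TSI of about 1363.5 W/m^2 (as in the head posting) and an orbital eccentricity of about 0.0167:

```python
# Perihelion/aphelion insolation from the inverse-square law.
# Inputs are standard textbook values, not taken from the comment.
E_MEAN = 1363.5   # W/m^2, mean top-of-atmosphere insolation (TSI)
ECC = 0.0167      # eccentricity of Earth's orbit

# Distance scales as a*(1 -/+ e); intensity as the inverse square of distance.
perihelion = E_MEAN / (1 - ECC) ** 2   # early January, closest approach
aphelion = E_MEAN / (1 + ECC) ** 2     # early July, farthest

print(round(perihelion, 1))             # ~1410 W/m^2
print(round(aphelion, 1))               # ~1319 W/m^2
print(round(perihelion - aphelion, 1))  # ~91 W/m^2 seasonal swing
```

      which reproduces the 1,410.3, 1,319.0 and 91.3 W/m^2 figures to within rounding.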

  36. Scientific calculations do not involve the word “If.” The Little Ice Age ended when it ended, and clearly had nothing to do with CO2.

    “If” the unclearly-defined warming since 1850, or 1880, or 1901, or some other year, was due to Natural Variation, which the Little Ice Age clearly was, then, this new unclearly-defined warming, which has happened more than once or twice in the past, has nothing to do with CO2.

    Questionable temperature records adjusted by advocates, who have cooled the past to make the present look scary, and ice records beginning in 1979 (which, I can assure you from my junior year at the U of M, was savagely cold). None of this is rigorous science! None of it!

    Calculate a Transient Climate Sensitivity with this assumption and you would flunk out of my engineering school.

    I understand: if it is 100% due to CO2, we must assume the worst case. But this is not science. Einstein never assumed the worst case and called it science.

    And another thing: S-B calculations cannot, cannot, assume an averaged flux. The flux (not flux density, not flux capacitor) is proportional to the 4th power of the delta-T. The resulting temperature will never, ever be related to 1/4 of the flux! The Earth is heated by the Sun without averaging; averaging ruins all calculations. The Sun is hot and heats the surface of the Earth, with some absorption of IR by the atmosphere and some albedo, which is incredibly hard to measure (those darn clouds). And then it gets dark, called night-time, with no flux from the Sun.

    We ruin our economy by continuing with this nonsense, trying to beat them at their own game, which is based on fundamentally flawed assumptions that you do not see.

    It is like this: Sun Shines on Earth. Sun is at 11,000 Degrees F, or thereabouts. Flux is based on the Delta T, really hot Sun, much cooler Earth. Geometry and angles of incidence, all good. But then, Sun goes down, Darkness!

    Flux is proportional to the 4th power of the Delta T!

    Stop with the averaging.

    Stop calling albedo 0.29, or any other number; it is incredibly hard to measure and changes second by second.

    Professor Wang is thanking me for this somewhere. Trenberth made a cartoon to fake out those without a technical education. Well, I had one: difficult, but completed.

    Do not over-simplify any of this.

    Wow….

    • Mr Moon appears not to have read the head posting, which draws attention to the fact that no allowance is made in current calculations for Hoelder’s inequalities between integrals.

      And he has yet to learn that the art of mathematics is to find a simple but effective way to address an apparently complex problem.

  37. Can you guys understand this? The temp controls the flux. A flux of 0 cannot be averaged with a flux of 1366 W/m2! It is far more complex than that: 1366 W/m2 is the amount of energy arriving, but the temp is proportional to the fourth root of the flux, (W/m2)^(1/4), not to the average.

    You guys never passed Heat Transfer, also known as Transport of Heat and Mass. Go back to school; good luck.

    Ruins all of this.

    The Sun never transfers heat at less than 11,000 degrees F. The amount of heat corresponds to the delta-T. So we cannot reverse the calculation and report the temperature resulting from an averaged calculation of flux.

    Bit tricky to work that out.

    • In response to Mr Moon, radiative flux density from the Sun can be measured directly by cavity radiometers mounted on satellites. A respectable value at present is about 1363.5 Watts per square meter, as stated and referenced in the head posting.

  38. Heat transfer cannot be averaged; it can be integrated with traditional laws. It is a hugely complex integral: 11,000 degrees at a normal angle over the surface of the Earth as it rotates, angles of incidence, albedo, nothing like Trenberth’s cartoon. And this 1,366 W/m2 is not right either: much more at the Equator, much less at the Poles, proportional to the temperature difference in each square inch.

    I will research the derivation of this number. Hugely complex. See what I can find out.

  39. Miskolczi measured with radiosonde balloons both the optical thickness (emission height) of the atmosphere and atmospheric humidity.

    He found that over 60 years of increasing CO2, the emission height did not change.
    This was because humidity was decreasing to compensate for increasing CO2.
    Here’s the humidity decrease:

    https://i0.wp.com/clivebest.com/blog/wp-content/uploads/2013/03/GlobalRelativeHumidity300_700mb.jpg

    Everyone who refutes Miskolczi does so on the basis of flawed theory and maths.
    But his main point was an instrumental observation, not a theory.

    Has anyone tried to confirm Miskolczi’s measurements?
    Has the optical thickness changed or not?
    Has humidity changed or not?

    Same issue with Nikolov and Zeller.
    Everyone criticises their theory and maths.
    But their point was also an instrumental observation.
    A log-log (ln-ln) relationship between normalised pressure and atmospheric temperature.

    You don’t criticise an instrumental observation by arguing theory.
    You try to explain the observation.

    • Bindidon
      The article shows that, based on humidity data from a major reanalysis dataset, declining humidity in the upper atmosphere offsets the greenhouse effect of increasing humidity in the lower atmosphere.

      Could a drier stratosphere explain the colder stratosphere: less energy to lose?

      Another element of Miskolczi’s work was involvement of the Virial theorem. While this was roundly trashed for theoretical errors, the motivation for it seems justified. I’m no expert on it but the general idea is to replace temperature with a more all-embracing measure of contained energy which would include the water vapour component. Water vapour is no doubt the key to atmospheric heat dynamics.

  40. Alan,

    “Please explain this”

    As the temperature increases, each incremental W/m^2 of solar energy results in incrementally more evaporation. It reaches a point where the next W/m^2 evaporates enough water that 1 W/m^2 of latent heat is also removed. What else will keep ocean surface temperatures from exceeding about 80F (300K)?

    Examine this plot of atmospheric water vapor vs. temperature:

    http://www.palisad.com/sens/st_wc.png

    Y is atmospheric water content in grams/m^2 (the Y label is wrong) and X is the surface temperature in degrees K. Like all of my scatter plots, each little dot is the average of 1 month of data for each 2.5-degree slice of latitude, and the larger green and blue dots are the per-slice averages over about 3 decades of ISCCP weather satellite data covering almost every m^2 of the surface every 4 hours.

    Notice how above about 300K, atmospheric water increases exponentially which indicates evaporation is also increasing exponentially which indicates that the latent heat removed from the surface is also increasing exponentially.

    It’s also interesting that while water vapor increases dramatically in the tropics, which should significantly increase the GHG effect, ocean surface temperatures still rarely exceed 80F.
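    The roughly exponential rise of atmospheric water with surface temperature is at least qualitatively what the Clausius–Clapeyron relation predicts: saturation vapor pressure grows about 6-7% per kelvin at these temperatures. A minimal sketch using the standard Magnus approximation (the constants are textbook values, not derived from the plot above):

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Magnus approximation to saturation vapor pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# Fractional growth per kelvin near 300 K (~27 C) tropical sea surface.
p27 = saturation_vapor_pressure_hpa(27.0)
p28 = saturation_vapor_pressure_hpa(28.0)
print(f"{p28 / p27 - 1:.1%} per K")  # ~6% per K, i.e. exponential in T
```

    This says nothing about the 80F ceiling itself, only that exponential growth of vapor content with temperature is the expected behavior.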

  41. As one of the top climate scientists in the world, Kevin Trenberth said in journal Nature (“Predictions of Climate”) about climate models in 2007 and is still the case in 2019:

    None of the models used by the IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models. There is neither an El Nino sequence nor any Pacific Decadal Oscillation that replicates the recent past; yet these are critical modes of variability that affect Pacific rim countries and beyond. The Atlantic Multidecadal Oscillation, that may depend on the thermohaline circulation and thus oceanic currents in the Atlantic, is not set up to match today’s state, but it is a critical component of the Atlantic hurricanes and it undoubtedly affects forecasts for the next decade from Brazil to Europe. Moreover, the starting climate state in several of the models may depart significantly from the real climate owing to model errors. I postulate that regional climate change is impossible to deal with properly unless the models are initialized.

    Therefore the problem of overcoming this shortcoming, and facing up to initializing climate models means not only obtaining sufficiently reliable observations of all aspects of the climate system, but also overcoming model biases. So this is a major challenge.

    • If the head posting is right, one doesn’t need models at all if all one wants to know is how much global warming we may cause. It’s simple arithmetic.

  42. A couple of points here that I find confusing.

    “First, we need the warming ΔR1 from 1850-2011”

    My understanding is that we do not have that in any reliable, instrumental and global form at all. Instrumental temperature coverage of the Southern Hemisphere does not really exist much before the beginning of the 20th century. So, anything before that relies on proxies and they are not a reliable substitute for instrumental data. OK for determining trends but not so good for economists and politicians making life and death decisions about future climate scenarios and their social impact. Not to mention Extinction Rebellion, Saint Greta and the Catastrophic Anthropogenic Global Climate Heating Emergency Crisis Hysteria (CAGHECH).

    Thus there is no reliable, instrumental global temperature data for ‘pre-industrial times’ at all. Not only that but;

    Your graph – Figure 2 – does indeed show warming, but really, anything much before global instrumental data from radiosonde balloons starting in 1950, is hypothetical. I mean, surely by now, we’ve all done the rounds of weather stations beneath air-con vents, on tarmac, near airports, highways, etc. – not to mention poor maintenance – and all interpolated and adjusted. Not what I would call reliable. Not from a scientific perspective.

    Fig 2 shows warming but does not, as far as I can tell, differentiate natural climate variation at all. So, how do you extract the ‘human signal’ from the natural background rate of change? Where is the curve that shows natural climate variation? I don’t see any ‘filtering equations’ there. Perhaps I’m missing something.

    In other words, what should the global average temperature be today, without us pumping extra stuff into the atmosphere? If I subtract a number based proportionally on your ECS, would that be correct? Will that give me the answer I desire?

    So, please , I’d be very grateful if you would fill in the gaps in my understanding. Thanks.

  43. Lots of news coming about CMIP6!

    “How accurately can the climate sensitivity to CO2 be estimated from historical climate change?”
    J. M. Gregory, T. Andrews, P. Ceppi, T. Mauritsen, M. J. Webb
    Climate Dynamics, online 10 October 2019.
    https://link.springer.com/article/10.1007/s00382-019-04991-y

    A summary: “The CMIP6 landscape”
    Editorial in Nature Climate Change, October 2019
    “CMIP6 output is growing rapidly and will afford a re-examination of important aspects of the climate system.”
    https://www.nature.com/articles/s41558-019-0599-1

    I assume we’ll get much more info from the papers at these:

    The 2019 CFMIP Meeting on Clouds, Precipitation, Circulation, and Climate Sensitivity.
    Sept 30 to Oct 4 in Mykonos, Greece.
    No abstracts or papers yet published on its website.
    https://www.giss.nasa.gov/meetings/cfmip2019/

    And, of course, at the AGU Fall meeting in December – in San Francisco.

    This could be big. Bigger than big. An increase in temperature forecasts by CMIP6 models might blast away the last remaining restraints, allowing activists to make the WGI report of AR6 go full doomster. The resulting propaganda explosion, as activists further exaggerate it, might be record setting.

    Now I have to go pick my few remaining ripe tomatoes, as frost is expected tonight. As usual. No delay yet from global warming, unfortunately (most of them are still green).

    https://kwwl.com/weather/schnacks-weather-blog/2019/10/05/first-frost-when-it-typically-happens-and-when-we-might-see-one/

  44. MoB

    I’m not sure if my point was clear. Consider a world with no atmospheric circulation. For argument let the average temp on the sunlit side be 330k and the dark side 270k, for an average temp of 300k. The radiation emitted from the surface without GHG will be proportional to 330^4 + 270^4 = 1.72 x 10^10

    Now add circulation to the atmosphere such that the average temp is unchanged but the high is now 315k and the low is 285k. The radiation will be proportional to 315^4 + 285^4 = 1.64 x 10^10. A 4.3% drop in outgoing radiation without any change in average temp or GHG.

    And according to GHG theory, this must increase the average surface temperature by about 4C to restore equilibrium!!

    330^4+270^4 approx = (315+4)^4+(285+4)^4

    In other words, the moderating of surface temperatures due to atmospheric circulation appears likely to cause significant “greenhouse” warming, without the need for any greenhouse gasses.

    Thus the 33C GHG effect cannot all be due to GHGs, and thus GHGs play a smaller role than provided for by GHG theory.
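    The two-hemisphere arithmetic in this comment is easy to verify. A minimal sketch (the temperatures are the commenter’s illustrative values, and T^4 is used only as a proxy for emitted flux):

```python
# Because Stefan-Boltzmann emission goes as T^4 (a convex function),
# a planet with a smaller day/night spread emits less at the SAME
# mean temperature: the commenter's point, in a few lines of arithmetic.
def emission_proxy(t_day, t_night):
    """Sum of T^4 over the two hemispheres (proportional to emitted flux)."""
    return t_day ** 4 + t_night ** 4

no_circulation = emission_proxy(330.0, 270.0)    # ~1.72e10
with_circulation = emission_proxy(315.0, 285.0)  # ~1.64e10

drop = 1 - with_circulation / no_circulation
print(f"{drop:.1%}")  # 4.3% less outgoing radiation, same 300 K mean

# Raising both hemispheres by ~4 K restores the original emission:
restored = emission_proxy(319.0, 289.0)
print(abs(restored / no_circulation - 1) < 0.01)  # True (within ~1%)
```

    This confirms the quoted 4.3% drop and the roughly 4 K offset; whether real circulation behaves like this two-box toy is of course a separate question.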

  45. Thomas Homer October 10, 2019 at 7:19 am

    “The approximately logarithmic temperature response to change in CO2 concentration does not commence until 100 ppmv.”

    Even in a coldhouse (“snowball Earth”), CO2 never sank beneath 100 ppm.

    Why not 100 ppm? You want the Planck constant?

    After all, detonation of pulverized fertilizer, a.k.a. dynamite, doesn’t take minutes. Nor milliseconds.

    With starting/ignition it happens “now”.

Comments are closed.