The credibility gap between predicted and observed global warming

By Christopher Monckton of Brenchley

The prolonged El Niño of 2016-2017, not followed by a La Niña, has put paid to the great Pause of 18 years 9 months in global warming that gave us all such entertainment while it lasted. However, as this annual review of global temperature change will show, the credibility gap between predicted and observed warming remains wide, even after some increasingly desperate and more or less openly prejudiced revisions in most datasets (ever-upward adjustments of recent temperatures, ever-downward depressions of early-20th-century temperatures) whose effect is to increase the apparent rate of global warming. For the Pause continues to exert its influence by keeping down the long-run rate of global warming.

Let us begin with IPCC’s global warming predictions. In 2013 it chose four scenarios, one of which, RCP 8.5, was stated by its authors (Riahi et al., 2007; Rao & Riahi, 2006) to be a deliberately extreme scenario, based upon such absurd population and energy-use criteria that it may safely be ignored.

For the less unreasonable, high-end-of-plausible RCP 6.0 scenario, the 21st-century net anthropogenic radiative forcing is 3.8 Watts per square meter from 2000-2100:

[Figure 1: Net anthropogenic radiative forcing on the RCP scenarios, 2000-2100.]

The CO2 concentration of 370 ppmv in 2000 was predicted to rise to 700 ppmv in 2100 (AR5, fig. 8.5) on the RCP 6.0 scenario: thus, the predicted centennial CO2 forcing is 4.83 ln(700/370), or 3.1 Watts per square meter, almost five-sixths of the total forcing. Predicted centennial reference sensitivity (i.e., warming before accounting for feedback) is the product of the 3.8 Watts per square meter total forcing and the Planck sensitivity parameter 0.3 Kelvin per Watt per square meter: i.e., about 1.15 K.
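The two steps above may be verified in a few lines. The logarithmic coefficient 4.83 and the Planck parameter 0.3 are the values used in this review, not settled constants:

```python
import math

# The two steps above, using the values stated in this review: the CO2
# forcing coefficient 4.83 W/m^2 per natural-log unit and the Planck
# parameter 0.3 K per W/m^2 (neither is a settled constant).
co2_forcing = 4.83 * math.log(700 / 370)         # ~3.1 W/m^2 centennial CO2 forcing
total_forcing = 3.8                              # RCP 6.0 net anthropogenic forcing, W/m^2
planck = 0.3                                     # Planck sensitivity parameter, K/(W/m^2)
reference_sensitivity = total_forcing * planck   # ~1.14 K, quoted above as 1.15 K
```

The exact product is 1.14 K; the text rounds it to 1.15 K.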

The CMIP5 models predict 3.37 K midrange equilibrium sensitivity to CO2 doubling (Andrews+ 2012), against 1 K reference sensitivity before accounting for feedback, implying a midrange transfer function 3.37 / 1 = 3.37. The transfer function, the ratio of equilibrium to reference temperature, encompasses by definition the entire operation of feedback on climate.

Therefore, the 21st-century warming that IPCC should be predicting, on the RCP 6.0 scenario and on the basis of its own estimates of CO2 concentration and the models’ estimates of CO2 forcing and Charney sensitivity, is 3.37 x 1.15, or 3.9 K.

Yet IPCC actually predicts only 1.4 to 3.1 K 21st-century warming on the RCP 6.0 scenario, giving a midrange estimate of just 2.2 K warming in the 21st century and implying a transfer function of 2.2 / 1.15 = 1.9, little more than half the midrange transfer function 3.37 implicit in the equilibrium-sensitivity projections of the CMIP5 ensemble.
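The inconsistency is easily made explicit in a short sketch, using only the figures above:

```python
# A sketch of the inconsistency described above, using the post's figures.
equilibrium_2xco2 = 3.37   # CMIP5 midrange Charney sensitivity, K (Andrews+ 2012)
reference_2xco2 = 1.0      # reference sensitivity to doubled CO2, K
f_cmip5 = equilibrium_2xco2 / reference_2xco2    # transfer function: 3.37

ref_21st_century = 1.15    # centennial reference sensitivity, K (derived above)
implied_warming = f_cmip5 * ref_21st_century     # ~3.9 K: warming the models imply
ipcc_midrange = 2.2        # IPCC's actual RCP 6.0 midrange prediction, K
f_ipcc = ipcc_midrange / ref_21st_century        # ~1.9: IPCC's implied transfer function
```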

[Figure 2: IPCC’s 21st-century global-warming predictions on the four RCP scenarios.]

Note that Fig. 2 disposes of any notion that global warming is “settled science”. IPCC, taking all the scenarios and hedging its bets, is predicting somewhere between 0.2 K cooling and 4.5 K warming by 2100. Its best estimate is its RCP 6.0 midrange estimate of 2.2 K.

Effectively, therefore, given 1 K reference sensitivity to doubled CO2, IPCC’s 21st-century warming prediction implies 1.9 K Charney sensitivity (the standard metric for climate-sensitivity studies, which is equilibrium sensitivity to doubled CO2 after all short-acting feedbacks have operated), and not the 3.4 [2.1, 4.7] K imagined by the CMIP5 models.

Since official predictions are thus flagrantly inconsistent with one another, it is difficult to deduce from them a benchmark midrange value for the warming officially predicted for the 21st century. It is somewhere between the 2.2 K that IPCC gives as its RCP 6.0 midrange estimate and the 3.9 K deducible from IPCC’s midrange estimate of 21st-century anthropogenic forcing using the midrange CMIP5 transfer function.

So much for the predictions. But what is actually happening, and does observed warming match prediction? Here are the observed rates of warming in the 40 years 1979-2018. Let us begin with GISS, which suggests that for 40 years the world has warmed at a rate equivalent not to 3.9 C°/century nor even to 2.2 C°/century, but only to 1.7 C°/century.

[Figure: GISS global temperature trend, 1979-2018, equivalent to 1.7 C°/century.]

Next, NCEI. Here, perhaps to make a political point, the dataset is suddenly unavailable:

[Figure: NCEI dataset unavailable.]

Next, HadCRUT4, IPCC’s preferred dataset. The University of East Anglia is rather leisurely in updating its information, so the 40-year period runs from December 1978 to November 2018, but the warming rate is identical to that of GISS, at 1.7 C°/century equivalent, below the RCP 6.0 midrange 2.2 C°/century rate.

[Figure: HadCRUT4 global temperature trend, December 1978 to November 2018, equivalent to 1.7 C°/century.]

Next, the satellite lower-troposphere trends, first from RSS. It is noticeable that, ever since RSS, whose chief scientist publicly describes those who disagree with him about the climate as “deniers”, revised its dataset to eradicate the Pause, it has tended to show the fastest apparent rate of global warming, now at 2 C°/century equivalent.

[Figure: RSS lower-troposphere temperature trend, 1979-2018, equivalent to 2 C°/century.]

Finally, UAH, which Professor Ole Humlum (climate4you.com) regards as the gold standard for global temperature records. Before UAH altered its dataset, it used to show more warming than the others. Now it shows the least, at 1.3 C°/century equivalent.

[Figure: UAH lower-troposphere temperature trend, 1979-2018, equivalent to 1.3 C°/century.]

How much global warming should have occurred over the 40 years since the satellite record began in 1979? CO2 concentration has risen by 72 ppmv. The period CO2 forcing is thus 0.94 W m–2, implying 0.94 x 6/5 = 1.13 W m–2 net anthropogenic forcing from all sources. Accordingly, period reference sensitivity is 1.13 x 0.3, or 0.34 K, and period equilibrium sensitivity, using the CMIP5 midrange transfer function 3.37, should have been 1.14 K. Yet the observed period warming was 0.8 K (RSS), 0.7 K (GISS & HadCRUT4) or 0.5 K (UAH): a mean observed warming of about 0.7 K.
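The chain of multiplications above can be checked as follows; the endpoint concentrations (roughly 337 rising to 409 ppmv) are reconstructed from the stated 72 ppmv rise and are an assumption, not figures given in the text:

```python
import math

# The 40-year (1979-2018) chain above. The endpoint concentrations
# (roughly 337 rising to 409 ppmv) are reconstructed from the stated
# 72 ppmv rise and are an assumption, not figures given in the text.
co2_forcing = 4.83 * math.log(409 / 337)   # ~0.94 W/m^2 period CO2 forcing
net_forcing = co2_forcing * 6 / 5          # CO2 taken as five-sixths of the total
ref_sens = net_forcing * 0.3               # ~0.34 K period reference sensitivity
eq_sens = ref_sens * 3.37                  # ~1.13 K predicted equilibrium warming
observed = (0.8 + 0.7 + 0.7 + 0.5) / 4     # RSS, GISS, HadCRUT4, UAH: ~0.7 K mean
```

Small differences from the figures in the text arise from rounding at each step.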

A more realistic picture may be obtained by dating the calculation from 1950, when our influence first became appreciable. Here is the HadCRUT4 record:

[Figure: HadCRUT4 global temperature trend, 1950-2018.]

The CO2 forcing since 1950 is 5.35 ln(410/310), or 1.5 Watts per square meter, which becomes 1.8 Watts per square meter after allowing for non-CO2 anthropogenic forcings, a value consistent with IPCC (2013, Fig. SPM.5). Therefore, period reference sensitivity from 1950-2018 is 1.8 x 0.3, or 0.54 K, while the equivalent equilibrium sensitivity, using the CMIP5 midrange transfer function 3.37, is 0.54 x 3.37 = 1.8 K, of which only 0.8 K actually occurred. Using the revised transfer function 1.9 derived from the midrange RCP 6.0 predicted warming, the post-1950 warming should have been 0.54 x 1.9 = 1.0 K.
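For verification: the stated 1.5 Watts per square meter corresponds to the standard Myhre coefficient 5.35, rather than the 4.83 used earlier in this review; the rest of the chain follows directly:

```python
import math

# The post-1950 chain. Note: the stated 1.5 W/m^2 is reproduced by the
# standard Myhre coefficient 5.35, not by the 4.83 used earlier in the post.
co2_forcing = 5.35 * math.log(410 / 310)   # ~1.5 W/m^2 CO2 forcing since 1950
net_forcing = 1.8                          # W/m^2 after non-CO2 anthropogenic forcings
ref_sens = net_forcing * 0.3               # 0.54 K period reference sensitivity
eq_cmip5 = ref_sens * 3.37                 # ~1.8 K predicted; only ~0.8 K occurred
eq_revised = ref_sens * 1.9                # ~1.0 K with the revised transfer function
```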

It is also worth showing the Central England Temperature Record for the 40 years 1694-1733, long before SUVs, during which the temperature in most of England rose at a rate equivalent to 4.33 C°/century, compared with just 1.7 C°/century equivalent in the 40 years 1979-2018. Therefore, the current rate of warming is not unprecedented.

This record also shows that even the large, naturally occurring temperature change, seen not only in England but worldwide as the Sun recovered after the Maunder minimum, is small compared with the large annual fluctuations in global temperature.

[Figure: Central England Temperature record, 1694-1733.]

The simplest way to illustrate the very large discrepancy between predicted and observed warming over the past 40 years is to show the results on a dial.

[Figure: Dial comparing predicted and observed warming.]

Overlapping projections by IPCC (yellow & buff zones) and CMIP5 (Andrews et al. 2012: buff & orange zones) of global warming from 1850-2011 (dark blue scale), 1850 to 2xCO2 (dark red scale) and 1850-2100 (black scale) exceed observed warming of 0.75 K from 1850-2011 (HadCRUT4), which falls between the 0.7 K period reference sensitivity to midrange net anthropogenic forcing in IPCC (2013, fig. SPM.5) (cyan needle) and the expected 0.9 K period equilibrium sensitivity to that forcing after adjustment for radiative imbalance (Smith et al. 2015) (blue needle). The CMIP5 models’ midrange projection of 3.4 K Charney sensitivity (red needle) is about thrice the value consistent with observation. The revised interval of global-warming predictions (green zone), correcting an error of physics in models, whose feedbacks do not respond to emission temperature, is visibly close to observed warming.

Footnote: I undertook to report on the progress of my team’s paper explaining climatology’s error of physics in omitting from its feedback calculation the observable fact that the Sun is shining. The paper was initially rejected early last year on the ground that the editor of the top-ten journal to which it was sent could not find anyone competent to review it. We simplified the paper, whereupon it was sent out and, after many months’ delay, only two reviews came back. The first was a review of a supporting document giving results of experiments conducted at a government laboratory, but it was clear that the reviewer had not actually read the laboratory’s report, which answered the question the reviewer had raised. The second was ostensibly a review of the paper, but the reviewer stated that, because he found the paper’s conclusions uncongenial, he had not troubled to read the equations that justified those conclusions.

We protested. The editor then obtained a third review. But that, like the previous two reviews, was not a review of the present paper. It was a review of another paper that had been submitted to a different journal the previous year. All of the points raised by that review had long since been comprehensively answered. None of the three reviewers, therefore, had actually read the paper they were ostensibly reviewing.

Nevertheless, the editor saw fit to reject the paper. Next, the journal’s management got in touch to say that it was hoped we were content with the rejection and to invite us to submit further papers in future. I replied that we were not at all satisfied with the rejection, for the obvious reason that none of the reviewers had actually read the paper that the editor had rejected, and that we insisted, therefore, on being given a right of appeal.

The editor agreed to send out the paper for review again, and to choose the reviewers with greater care this time. We suggested, and the editor accepted, that in view of the difficulty the reviewers were having in getting to grips with the point at issue, which was clearly catching them by surprise, we should add to the paper a comprehensive mathematical proof that the transfer function that embodies the entire action of feedback on climate is expressible not only as the ratio of equilibrium sensitivity after feedback to reference sensitivity before feedback but also as the ratio of the entire, absolute equilibrium temperature to the entire, absolute reference temperature.

We said we should explain in more detail that, though the equations for both climatology’s transfer function and ours are valid, climatology’s equation is not useful: even small uncertainties in the sensitivities, which are two orders of magnitude smaller than the absolute temperatures, lead to large uncertainty in the value of the transfer function, while even large uncertainties in the absolute temperatures lead to small uncertainty in it. The transfer function can thus be very simply and very reliably derived and constrained without using general-circulation models.

My impression is that the editor has realized we are right. We are waiting for a new section from our professor of control theory on the derivation of the transfer function from the energy-balance equation via a leading-order Taylor-series expansion. That will be with us at the end of the month, and the editor will then send the paper out for review again. I’ll keep you posted. If we’re right, Charney sensitivity (equilibrium sensitivity to doubled CO2) will be 1.2 [1.1, 1.3] C°, far too little to matter, and not, as the models currently imagine, 3.4 [2.1, 4.7] C°, and that, scientifically speaking, will be the end of the climate scam.


240 thoughts on “The credibility gap between predicted and observed global warming”

  1. and that, scientifically speaking, will be the end of the climate scam.

    Hardly. If you want it to end, proclaim the truth:

    CO2 is mainly produced by the ocean via outgassing per Henry’s Law, with atmospheric CO2 change following ocean temperature change. Therefore CO2 is not warming the ocean, and since the climate follows the ocean, CO2 doesn’t warm the climate either.

    https://www.dropbox.com/s/74c6xxrxn1kjwqm/AGU%20Fig12.JPG?dl=0

    Therefore all calculations of ECS of CO2 are science fiction, including those in this post.

      • Post 3 of 3 for Louis.

        https://wattsupwiththat.com/2019/01/09/a-sea-surface-temperature-picture-worth-a-few-hundred-words/#comment-2583665

        Further comments on MacRae 2008 and Humlum et al 2013, referenced above.

        I generally agree with the first three conclusions from Humlum 2013, as follows:
        1– Changes in global atmospheric CO2 are lagging 11–12 months behind changes in global sea surface temperature.
        2– Changes in global atmospheric CO2 are lagging 9.5–10 months behind changes in global air surface temperature.
        3– Changes in global atmospheric CO2 are lagging about 9 months behind changes in global lower troposphere temperature.

        Points 2 and 3 are similar to my 2008 conclusions.

        Critiques of Humlum failed to refute the three conclusions above. In general, I regard all the critiques of these three conclusions as specious nonsense, which tend to obfuscate the clear observations in these papers.

        One hint: It is not necessary that ALL the increase in atmospheric CO2 is due to temperature – part of the CO2 increase can be due to other causes such as fossil fuel combustion, deforestation, etc., but part of it is clearly due to temperature – and that part demonstrates that CO2 trends lag, and do not lead, temperature trends in the modern data record, and that observation DISPROVES the CAGW hypothesis.

        Another highly credible disproof of the CAGW meme is that fossil fuel consumption accelerated strongly after 1940, as did atmospheric CO2 concentrations, but global temperatures COOLED from ~1945 to 1977, warmed for over a decade, and have been relatively constant since – so the correlation with increasing atmospheric CO2 was NEGATIVE, POSITIVE AND NEAR-ZERO. To claim that atmospheric CO2 is the “control knob” for global temperature is a bold falsehood that is refuted by observations at all measured time scales.

        Regards, Allan

      • I suggest that if one wants to understand the science, one has to understand the observations of MacRae 2008 and Humlum et al 2013 (references above, assuming that my post exits moderation).

        If one wants to disprove global warming alarmism, then calculations of the maximum probable climate sensitivity based on full-Earth-scale data are a suitable way to do so. Below are two such calculations, both of which disprove the catastrophic human-made global warming hypothesis.

        Regards, Allan

        https://wattsupwiththat.com/2018/09/03/the-great-debate-part-d-summary/#comment-2447187
        Excerpt:

        Lewis and Curry (2018) estimate climate sensitivity at 1.6C/doubling for ECS and 1.3C/doubling for TCR, using HadCRUT4 surface temperatures. These surface temperatures probably have a significant warming bias due to poor siting of measurements, UHI effects, other land use changes, etc.

        Christy and McNider (2017) estimate climate sensitivity at 1.1C/doubling for UAH Lower Tropospheric temperatures.

        Both analyses are “full-earth-scale”, which have the least room for errors. Both are “UPPER BOUND” estimates of sensitivity, derived by assuming that ~ALL* warming is due to increasing atmospheric CO2. It is possible, in fact probable, that less of the warming is driven by CO2, and most of it is natural variation.
        (*Note – Christy and McNider make allowance for major volcanoes El Chichon in 1982 and Pinatubo in 1991+).

        The slightly higher sensitivity values for Lewis and Curry are due to the higher warming estimates of HadCRUT4 surface temperatures versus UAH LT temperatures.

        Practically speaking, however, these sensitivity estimates are similar, and are far too low to support any runaway or catastrophic man-made global warming.

        Higher estimates of climate sensitivity have little or no credibility and there is no real global warming crisis.

        Increased atmospheric CO2, from whatever cause will at most drive minor, net-beneficial global warming, and significantly increased plant and crop yields.

        The total impact of increasing atmospheric CO2 is hugely beneficial to humanity and the environment. Any politician who contradicts this statement is a scoundrel or an imbecile and is destructive to the well-being of society. It IS that simple.

        Best, Allan

        • Most grateful to Allan Macrae for his long note on other papers whose authors have concluded that equilibrium sensitivity is a lot less than official climatology profits by asking us to believe. We should perhaps reference some of these in the final draft of our paper.

          In our submission, the advantage of our approach is that it demonstrates official climatology’s definition of temperature feedback to be erroneous: it takes the transfer function to be solely the ratio of equilibrium to reference sensitivities, failing to note that the transfer function is also the ratio of absolute equilibrium to absolute reference temperature. The latter definition allows much more reliable derivation and constraint of equilibrium sensitivity, because even quite large uncertainties in absolute temperatures two orders of magnitude greater than the sensitivities entail only a small uncertainty in the transfer function, while even small uncertainties in the minuscule sensitivities entail a large uncertainty in it. That is why the current interval of equilibrium sensitivities is so large and has resisted constraint for so long.

    • In response to Mr Weber, science is quantitative, not qualitative. To provide a formal demonstration that the influence of anthropogenic sins of emission on global temperature is small, it is not sufficient to demonstrate that changes in the rate of CO2 emission lag changes in sea-surface temperature by some months. It is necessary to demonstrate that, even if the CO2 radiative forcing is as official climatology imagines, the resulting warming will necessarily be small. That is what my team claims to have achieved.

      • Monckton
        It appears that the original reviewers were intimidated by the mathematics. It would then seem that a possible solution to your impasse would be to ask the editor to find a mathematician, or someone with a double major in math and physics, to review your paper.

        • Mr Spencer raises an excellent point. Actually, all that we really need is someone with training in Classical logic. Our argument is in reality a very simple argumentum ex definitione, which runs thus:

          There subsists an equilibrium global mean surface temperature after all temperature feedbacks of subdecadal duration have acted. In 1850, before any anthropogenic perturbation, that temperature was the observed temperature of about 287.5 K. We define that temperature as the equilibrium temperature in 1850.

          In the absence of any temperature feedback, there would subsist a reference temperature in 1850. That temperature comprises the sum of emission temperature and the reference sensitivity (before accounting for feedback) to the presence of the noncondensing greenhouse gases as they were in 1850 (note in passing that water vapor, the key noncondensing greenhouse gas, is treated as a temperature feedback, not as a forcing). Starting with an ice planet of albedo 0.66, the global mean surface temperature (which is also the emission temperature in the absence of any forcing or temperature feedback) would be 221.5 K. Add to that the reference sensitivity of about 11.5 K to the noncondensing greenhouse gases as they were in 1850. Define 233 K, therefore, as the reference temperature (before accounting for feedback) in 1850.

          Then, ex definitione, the transfer function that encompasses the entire action of temperature feedback on the climate is simply the ratio of absolute equilibrium to reference temperature: i.e., 287.5 / 233 = 1.3, and not the 3.4 [2.1, 4.7] that is the implicit interval of the transfer function in the CMIP5 models (Andrews+ 2012).

          Of course, the 0.7 K industrial-era reference sensitivity from 1850-2011 (based on IPCC 2013 fig. SPM.5) and the 0.9 K period equilibrium sensitivity after allowing for the radiative imbalance are nowhere near enough to alter the small transfer function. And that’s it, really.
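          The whole argumentum can be reduced to a few lines of arithmetic, using only the values asserted above (note that 287.5 / 233 is more precisely 1.23):

```python
# The argumentum ex definitione above, reduced to arithmetic. All values
# are those asserted in this comment, not independently established.
emission_temp = 221.5      # K, for an ice planet of albedo 0.66 (as claimed)
ghg_reference = 11.5       # K, reference sensitivity to 1850 noncondensing GHGs
reference_temp = emission_temp + ghg_reference    # 233 K
equilibrium_temp = 287.5   # K, observed global mean surface temperature in 1850
transfer = equilibrium_temp / reference_temp      # ~1.23, rounded above to 1.3
```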

          • “the action of temperature feedback on the climate is simply the ratio of absolute equilibrium to reference temperature: i.e., 287.5 / 233 = 1.3,”
            Actually, all that we really need is someone with training in Classical logic, and some more attempts at explaining it all in a simpler way, if possible, to those of us who get lost in the hyperbole and the overcomplicated language and maths sometimes used.
            ATTP has an article under discussion and in part states “Researchers should be aware of how the manner in which they present information might influence how people interpret the significance of that information.”
            I really like LM’s past presentations on the pause and his current work on trying to address the ECS.
            If his paper can do what he says it does it needs to be published, promulgated and discussed widely.
            Unfortunately at the moment I feel he leaves 80% of his audience behind.
            I wonder if Rud or Smith could rephrase it or soften it for more general appreciation though the latter might have to moderate his rhetoric as well.
            I am not wanting the general theory of relativity re-explained and feel there is a lot here if it can only be better expressed.
            Similar to ” Credit to Willis Eschenbach for setting the Nikolov-Zeller silliness straight”
            Sorry.

          • Angech likes the content but not the form of our result. In defence of the apparently complicated mathematics, we can no longer rely on our audience to be familiar with the Classical modes of thought, in particular Classical logic, which allows the argument to be expressed in a very simple form.

            Also, we are faced with sullen, bemused resistance from just about all of the climate establishment and just about all of the political establishment. Therefore, statement of a simple argumentum ex definitione, which is all that is really necessary to establish the rightness of our argument, will not be enough.

            Therefore, it is necessary that, in presenting our result in the form of a scientific paper, we should not only state the Classical argument (which, on its own, is definitive) but also proceed to demonstrate it formally, both in the physics of systems theory and in the number theory of infinite convergent series.

            Unfortunately, in an environment in which, for reasons of totalitarian conformity to the Party Line and commercial profiteering from the scientific illiteracy of politicians, simple and reasonable arguments are not enough, it is necessary to provide formal demonstrations that leave absolutely no room for argument.

            That said, let me express the outline of our argument simply. Official climatology, at a vital point in its treatment of temperature feedbacks, has neglected to take account of the observable fact that the Sun is shining.

            Climatologists do not realize that the feedback processes present in the climate at a chosen moment necessarily respond to the entire, absolute reference temperature at that moment, which comprises not only anthropogenic but natural warming as well as the emission temperature that would be present without either those warmings or temperature feedbacks.

            Climatology’s formal definition of temperature feedback is confined to the notion that feedback processes respond only to an anthropogenic perturbation in temperature, driving the further perturbation that is the feedback response.

            Frankly, one should not expect that overthrowing a Party Line that has been sedulously insisted upon for decades will be an entirely simple matter. A certain minimum of learning is necessary both to achieve that overthrow and to understand that the overthrow has been achieved. In the end, there are no short cuts. But we are making steady progress, and more and more people – not least thanks to WattsUpWithThat – are coming to an understanding that the game is up and the scare is over.

    • Louis Hooffstetter, I made the graphic for my 2018 AGU poster on solar irradiance extremes. A few days ago someone emailed me about this graphic and made a similar remark. Thank you both. I have one correction to make on the poster unrelated to this graphic, and if I can get it done today I’ll be back with a link. If you look long enough I’ll eventually get around to posting it.

      Christopher Monckton – I understand what you’re doing, and yes, I question its necessity.

      I challenge you to examine the quantitative relationship between SST and atmCO2 as indicated in my graphic and then quantitatively pluck out the man-made part. If you really like doing science that should be a fun work for you. Or anyone else.

      A simple test for CO2’s supposed sensitivity: with CO2 at record levels in 2016, CO2 did not uphold either the ocean or atmospheric temperatures during the almost three years of cooling since the peak in 2016. CO2 is therefore virtually powerless. BTW I say this as the sole person to have predicted both the solar cause of the 2016 El Niño and the solar cause of the temperature decline thereafter, mathematically and empirically.

      Here’s a quick story from the AGU meeting:

      A young man probably in his late twenties to early thirties who worked for NOAA had a poster that started out in the abstract talking about ‘deniers’. I told him that was a pejorative word that had no place in a science paper.

      He went on to argue with me that while there is no discernible year-to-year CO2 influence on temperatures, CO2 is controlling the overall background rise in temperature, such as over 20 years, his figure used solely as an example.

      I proceeded to show him a 20-year shuffle where, for every few years, CO2 isn’t doing anything per his assertion, but then all of a sudden after 20 years it controls the background. I said, “Don’t you see the contradiction in your position? How can it not control year-to-year variability while controlling 20-or-more-year variability?” I ended the conversation with a handshake and said, “We’re at an impasse, and I think we should stop now,” to which he agreed.

      People like him and others are not dealing with objective reality, rather ‘consensus reality’, as are those promoting ever-lower ECS figures. Obviously I don’t consider ECS calculations scientifically legitimate.

      While CM is a fine person with good intentions, I disagree with these presentations on principle.

      • Bob Weber: I proceeded to show him a 20-year shuffle, where for every few years CO2 isn’t doing anything per his assertion, but then all of a sudden after 20 years, it controls the background. I said, “Don’t you see the contradiction in your position?” How can it not control year-to-year variability while controlling 20 or more year variability? I ended the conversation with a handshake and said “we’re at an impasse, and I think we should stop now.” To which he agreed.

        Accurate estimates of the effects of the internal dynamics of the energy flows and of all other external “forcings” are required in order to get accurate estimates of the sensitivity of temperature to CO2 change. You are quite right about that, and such accurate estimates are not available. However, CM of B’s work is still valuable in showing that, even if you take the IPCC conceptual model seriously, the estimates of CO2 sensitivity are quite low.

        • I am grateful to Mr Marler for his support, and would add that once it is accepted – as it must be – that feedback processes respond not merely to anthropogenic reference sensitivity but also to natural reference sensitivity and, most importantly, to the emission temperature that arises from the observable fact that the Sun is shining (a fact that takes climatologists by surprise), one does not require a very precise knowledge of the values of the relevant climate variables.

          For the transfer function that encompasses the entire action of temperature feedback on climate at any given moment is the ratio not of minuscule sensitivities but of absolute temperatures which exceed the reference sensitivities by two orders of magnitude.

          Climatology takes the transfer function solely as the ratio of sensitivities. Therefore, even a small uncertainty in the sensitivities entails a large uncertainty in the transfer function.

          Mainstream science, by contrast, takes the transfer function not merely as the ratio of sensitivities but as the ratio of absolute equilibrium temperature at a given moment to absolute reference temperature at that moment. Even quite large uncertainties in the values of equilibrium and reference temperature entail only small uncertainty in the value of the transfer function that is their ratio. The transfer function, therefore, falls on the interval [1.1, 1.3], and not, as at present imagined, on [2.1, 4.7].

          Accordingly, equilibrium sensitivity to doubled CO2 concentration is not 3.4 K, as the models currently imagine, but about 1.2 K, with a very small uncertainty either side of that value. And there is an end of the climate scare.
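          A short numerical sketch illustrates the uncertainty point; the ±0.5 K uncertainty is an illustrative choice, not a value from the paper:

```python
# A numerical sketch of the uncertainty argument above. The +/-0.5 K
# uncertainty is an illustrative choice, not a value from the paper.
def ratio_bounds(num, den, err):
    """Extreme values of num/den when each input carries +/-err."""
    return (num - err) / (den + err), (num + err) / (den - err)

# Climatology's form: ratio of small sensitivities. The spread is enormous.
sens_lo, sens_hi = ratio_bounds(3.4, 1.0, 0.5)      # ~1.9 to ~7.8

# The paper's form: ratio of large absolute temperatures. The spread is tiny.
temp_lo, temp_hi = ratio_bounds(287.5, 233.0, 0.5)  # ~1.229 to ~1.239
```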

      • Mr Weber seems to be saying that because the profiteers of doom are “not dealing with objective reality”, attempting to address their Party Line by scientific methods is pointless, or even objectionable.

        Well, I was brought up to appreciate the value of objective truth, to try to find it in my scientific studies, and to present my findings once they were ready, regardless of whether those findings were congenial to some faction or another, however venerable, and regardless of whether those findings ran counter to some transient consensus, however widespread. The truth is the truth and, whether or not it be convenient to Mr Weber, it remains the truth.

        If we are right, then it does not matter how many hysterical fanatics proclaim that we are wrong. In the end, the truth will prevail, provided that there is someone with the diligence to find it and the guts to proclaim it in the face of indifference or even hostility.

        On this question of the definition of feedback, I shall be happy to retire from the field if a rational, scientific argument can be presented to the effect that we are scientifically incorrect. We shall not, however, be one whit deterred by the news that, to the likes of Mr Weber, our result – whether right or wrong – is merely inconvenient.

    • “since the climate follows the ocean,”
      seems a little presumptuous and too certain to me.
      While a fan of warmer oceans causing more CO2, you seem to be missing a small step, namely what is actually causing the ocean warming and cooling. To say the Sun, only, is too simplistic. I like Roy Spencer’s cloud-cover theory, and clouds are part of climate, as are currents, land masses, air circulation and water vapor.
      Given this, the little bit of CO2 added by man can certainly play a part as well, and denying it any validity is just as bad as saying it is the one and only control knob.

    • Many thanks to Mr Magness. I’ve been ill but am now recovering and look forward to a much stronger year than last year.

      • Lord Monckton,

There are always several competing pro- and anti-AGW booths at CPAC: https://cpacregistration.com/

I think you would enjoy engaging with the movers and shakers as well as the >10,000 young people who will attend this year. It is so awesome that you would spend some of your valuable time posting here.

        All the best health and success to you!

          • We need to plan for next year I’m afraid. But yes, you SHOULD be the herald of the end of the AGW scam at CPAC.

  2. A typically excellent post … but your calculations appear to have overlooked the effect of the vast amount of hot air put into the atmosphere as a result of Brexit!

    • Henry Keswick is right: the totalitarian flatulence from the Brussels Broadcasting Commissariat alone in its lamentably alarmist coverage of Britain’s campaign for independence has proven enough to keep the United Kingdom comfortably warm in a mild winter while in the United States and in Europe there has been record snowfall.

  3. but also as the ratio of the entire, absolute equilibrium temperature to the entire, absolute reference temperature.
==========
    While it has been some months, this to me was most significant. Perhaps because it was counter intuitive. Yet the mathematics was inescapable.

    In science, progress comes when we discover something we did not expect.

    • Most grateful to Ferd for his kind comment. The result came as quite a surprise to us, too, when we first fumbled our way to it. I began about seven years ago by plotting (by hand) the graph of equilibrium sensitivity in response to various values of the feedback fraction. There was a startling discontinuity [at] a feedback fraction of 1. That suggested there was something wrong with climatology’s treatment of feedbacks, but it took us a long time to realize just how basic was the mistake that official climatology had made in ignoring the fact that the transfer function is the ratio not only of equilibrium to reference sensitivity but also of absolute equilibrium to reference temperature. Once that fact is established (and we have proven it by reference to a useful result in number theory), deriving and constraining equilibrium sensitivities about one-third of those imagined by official climatology becomes straightforward.

  4. Using what are apparently differing methods, Lord Brenchley is deriving much the same level of ESS as Judith Curry, which gives me some confidence that the conclusions are valid.

    • Tom Halla’s point is excellent. Coherence of a result derived by distinct methods indeed makes it possible that both methods are reaching the correct answer.

  5. Christopher Monckton of Brenchley

    Go’on yersel big man!

    Hopefully, some enterprising politicians and MSM editors will pick up on this when it’s published.

I mean, what a coup for a politician in the UK considering the state of our politics right now. There’s also Trump’s rejection of Paris, Brazil’s reluctance to continue, Poland’s contempt for renewables, China withdrawing subsidies, and the increasingly obvious failure of renewables to do anything but cost people pots of money and blight our home country.

Stupid, useless SNP, imagine blighting Scotland with wind farms when their tourist industry is driven by the natural environment. Cutting off their nose to spite their face.

    A courageous politician could seize on this and turn the tide (metaphorically of course). Get the UK back to work after Brexit, ditch the ridiculous Climate Change Act and start fracking (we note all’s gone quiet on that front with no more ‘catastrophic’, barely detectable earth tremors). Nigel Farage perhaps? He’s bound to be looking for a new angle to have a run at Theresa May’s job when it comes up for grabs soon.

    Like him or loathe him, he’s a determined man.

    Good luck Chris. Resist the Marxist hordes!

You’ve forgotten the German in vivo experiment called “Energiewende”, which is about to show the world that “renewables” (a) don’t work properly and (b) are a most expensive and expropriating piece of botchery by lying politicians.

In response to HotScot, I’m a huge admirer of Nigel Farage, without whom Britain would not have been given the chance to win back her independence. We are quietly nursing our paper through peer review and, if it passes, that will indeed be the end of the climate scam, and politicians everywhere can stop subsidizing unreliables and stop shutting down profitable and clean coal-fired power stations such as Longannet, the last of its kind in Scotland, which now has to order cinder-blocks for housing from hundreds of miles away because builders can no longer get the fly-ash from Longannet. And that’s just one of many downstream losses from shutting the coal-fired plant. Meanwhile, coal delivers precisely 38% of total worldwide electricity demand, just as it did 30 years ago. The screeching of the climate fanatics has achieved precisely nothing.

      • Hello HotScot and Lord Monckton,

        Not only has coal maintained its share of global primary energy, but so have fossil fuels in total.

        Fully ~85% of global primary energy is from fossil fuels (oil, natural gas and coal), essentially unchanged in decades. The remaining 15% is mostly hydro and nuclear, and less than 2% is green energy, despite trillions in squandered subsidies.

        Global warming alarmists advocate the elimination of fossil fuels – do that tomorrow and almost everyone in the developed world will be dead in a month from starvation and exposure.

        Best regards, Allan

    • In reply to Chaamjamal, he or someone has gone to a lot of trouble to produce all those graphs. The challenge, in my submission, is to produce graphs that even politicians can understand. Hence the dial.

      • …is to produce graphs that even politicians can understand.

Good luck, Mylord. Some of that ilk even have problems with straight lines.

      • The challenge, in my submission, is to produce graphs that even politicians can understand.

        That’s a massive challenge indeed, if US politicians are anything to judge by.

An excellent article. However, your dial diagram might be improved if the different pointers were of a suitable length to point to the particular scale to which they refer.

        • The whole point of the three dials is to show what would happen at each of three stages in the evolution of global temperature. That is why the needles are designed as they are.

  6. I object to the anti-science pro-warming bias
    of using surface temperature “data”
    in years after 1979 when more accurate,
    less biased, much less infilling, UAH weather
    satellite data are available.

    This article uses GISS and HadCRUT data
    in years where better UAH data are available.

    For surface “data”, a majority of surface grids
    have no data, or incomplete data, so there is
    wild guessing by government bureaucrats
    required to compile a global average temperature.

    The “majority infilled” surface data show more warming
    than weather satellite and weather balloon
    temperature data.

    Therefore, the surface data are suspect, and should
    be ignored, especially because better data are available
    after 1979.

    The starting point for the
    “era of man made greenhouse gases”
    is roughly 1940.

    Not 1950, as the IPCC might say.

    Not 1979, simply because satellite
    temperature data collection
    began that year.

    Since 1940, global warming has been
    mild, harmless, irregular and not even
    “global” most of the time — definitely
    not matching the steadier, global rise of CO2.

    “Irregular” =
    no global warming from 1940 to 1975
    and a flat trend from the 2003 peak through 2018.

    “Not global” =
    no warming of Antarctica since the 1960s,
    and much more warming in the northern half
    of the Northern Hemisphere, than in the
    southern half of the Southern Hemisphere
    since 1975.

    The causes of climate change
    are a list of the usual suspects,
    with no one knowing the actual
    causes with any precision.

    Without that precision,
    a correct climate change
    physics model can not exist.

    That means the so called
    general circulation models
    are nothing more than opinions
    … and obviously wrong opinions,
    because they lead to wrong
    climate forecasts when compared
    with temperature observations.

    It is unfortunate, and a huge
    conflict of interest, that actual
    temperature observations
    are controlled by the same
    government bureaucrats who
    have made “climate model”
    predictions of significant
    global warming.

    So it’s no surprise to me that after
    their many “adjustments”, their surface data
    show more warming than satellite and
    weather balloon data.

    The little real science behind
    “climate change”, the infrared
    spectroscopy done in laboratory
    closed system experiments,
    and climate observations
    since 1940 (using more accurate
    UAH weather satellites after 1979)
    both suggest the same thing:
    The TCS of CO2
    is no more than
    roughly +1.0 degrees C.
    per doubling of CO2.

    A +1.0 TCS would lead to
    a harmless worst case of
    +1 degree C.
    of global warming
    in the next 200 years,
    assuming +2 ppm
    of CO2 increase per year.
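As a rough check, the arithmetic behind that worst case can be sketched (the ~410 ppmv present-day concentration below is an assumption; the comment does not state a baseline):

```python
import math

# Rough check of the commenter's worst case: +2 ppmv/yr for 200
# years from an assumed ~410 ppmv start, at a TCS of 1.0 K per
# CO2 doubling. The 410 ppmv baseline is an assumption.
c0 = 410.0
c_end = c0 + 2.0 * 200            # 810 ppmv after 200 years
doublings = math.log2(c_end / c0) # just under one doubling
warming = 1.0 * doublings         # ~1 K, matching the claim
print(round(warming, 2))
```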

    Which all adds up to the
    obvious conclusion:
    Adding CO2 to the air has caused
    no harm so far, and is unlikely
    to cause any harm in the future.

    If you also consider the positive effect
    of increasing CO2 levels on plant growth,
    as done inside most greenhouses,
    then adding CO2 to the air is beneficial
    for our planet.

    My climate science blog,
    with over 29,000 page views:
    http://www.elOnionBloggle.Blogspot.com

      • By “wild guessing”, I mean
        the infilled numbers can never
        be verified … and even though
        the surface data are “contradicted”
        by the weather satellite and weather
        balloon data, that are similar to each other
        the surface data apparently
        can never be falsified !

        GUESSTEMP is a perfect name for
        all surface temperature “measurements”
        (it’s hard to use the term “measurements”
        when the surface global average temperature
        consists of more infilling than actual
        measurements … and the minority
        of actual measurements that are used,
        are “adjusted” before they are used !)

  7. You are generous to a fault regarding the El Nino NOT followed by a La Nina. Continent-sized cold blobs in both hemispheres have decoupled ENSO to a considerable degree from its domination of global temperature trends. If temperature keepers can keep their thumbs off the scales, there is an excellent chance we will see the return to a lengthening ‘Pause’.

  8. Observed: catastrophic climate change hits the Alpine winter resorts – too much snow.
    Our grandchildren
    sadly will never get to know
    the Alps without the winter snow.
    /sarc

  9. The simplest way to illustrate the very large discrepancy between predicted and observed warming over the past 40 years is to show the results on a dial.

    With three different scales, and four different hands, each giving three different measurements, it is certainly not simple. I would suggest 3 dials each clearly labelled what they measure.

    • A single large dial shows the entire picture in a larger scale than three smaller dials would. A minimum of effort will allow the observer to read any of the three dials.

I agree. The dials are complicated. Pollies have trouble reading a simple dial with just a pointer and big numbers. Trouble is, if it does not look a little bit complex they will conclude that the story behind it is simple and not worth following. Somehow or other we have to get each of the powers that be to adopt a sciency, engineering type as an independent advisor on all the tech, physicsy stuff they have to digest.

      • In response to 4 Eyes, the dial was designed to illustrate a scientific paper. That is why there is so much information in it. But anyone can see at a glance that the observed warming rate does not fall anywhere within the enormous interval of official predictions.

        If 4 Eyes would like to design a graphic that will convey the information to scientifically illiterate politicians, I should be most interested to see it.

  10. The prolonged el Niño of 2016-2017, not followed by a la Niña, has put paid to the great Pause of 18 years 9 months in global warming that gave us all such entertainment while it lasted.

    Two things:

    1. According to the NOAA ENSO Index there have been 2 periods of la Niña conditions since the end of the last el Niño, and ENSO neutral conditions since the second of these ended: http://origin.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ONI_v5.php

    2. The “great Pause of 18 years 9 months in global warming” does not exist in any current global temperature data set. It was an artifact of RSS TLT v3, which the producers of that data set repeatedly said was the result of a known cooling bias in their processing procedure. RSS V3 was replaced by v4 in 2017, accompanied by a peer reviewed paper. The “great pause” did not survive the transition and the warming rate in RSS v4 since 1998 (the start of the “great pause” referred to) is currently 0.153 ±0.157 °C/decade (2σ); which is 0.004 °C/dec shy of being a statistically significant warming trend.
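Trend figures of the form quoted above (a slope with a 2σ interval) can be reproduced in outline with an ordinary least-squares fit. The sketch below runs on synthetic monthly anomalies, not the actual RSS series, and omits the autocorrelation correction that published trend calculators apply:

```python
import numpy as np

def trend_2sigma(monthly_anomalies):
    """OLS trend in degC/decade with a naive 2-sigma uncertainty.

    Published trend calculators also correct for autocorrelation,
    which this sketch deliberately omits.
    """
    y = np.asarray(monthly_anomalies, dtype=float)
    t = np.arange(y.size) / 120.0            # months -> decades
    A = np.column_stack([t, np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    s2 = resid @ resid / (y.size - 2)        # residual variance
    var_slope = s2 / ((t - t.mean()) ** 2).sum()
    return coef[0], 2.0 * np.sqrt(var_slope)

# Synthetic 20-year series with a built-in 0.15 degC/decade trend
rng = np.random.default_rng(42)
t = np.arange(240) / 120.0
series = 0.15 * t + rng.normal(0.0, 0.1, 240)
slope, err = trend_2sigma(series)
```

If the 2σ interval straddles zero, the trend is not statistically significant in the sense used in the comment above.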

    • “The “great Pause of 18 years 9 months in global warming” does not exist in any current global temperature data set.”

      Doesn’t exist in UAH?

      • Between early 1997 and late 2015 there was indeed a Pause in the UAH data of about the same length as the original RSS Pause. That Pause is evident in the current UAH dataset.

        • There is a best estimate zero warming trend in UAH TLT v6 of 18 years and 6 months between July 1997 and Dec 2015. I can’t find one longer than that. RSS TLT v4 shows a best estimate of slight warming over the same period.

          • ” RSS TLT v4 shows a best estimate of slight warming over the same period.”

Is v4 before or after the Mearsization of RSS?

          • “best estimate zero warming trend in UAH TLT v6 of 18 years and 6 months”

            So you admit that there was a pause.

        • “Pause” is a bad (misleading) word.

          It implies the trend before the “pause”
          is expected to continue.

          But no one knows that.

          Our global average temperature measurements,
          especially the surface data / infilling,
          (that keeps “changing” every year),
          when the year-to-year variations
          are within a 1.0 degree C. range,
          are not precise enough for statistical analyses
          unless you completely ignore reasonable margins of error!

          There are only three possible trends that make sense:
          An up trend
          A down trend
          A flat trend.

          Any attempt to be more precise
          is ignoring reasonable margins of error,
          which is not real science!

          • 2 sigma margins of error are applied to the trends quoted at University of York site: http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html

            The original “great Pause” referred to the RSS v3 TLT data set, with a negative trend that began in June 1997 and ended early 2016. This was a ‘best estimate’ trend that ignored the 2 sigma error margins (-0.000 ±0.171 °C/decade (2σ)).

            Using the updated RSS v4 TLT data set over the same range results in ‘best estimate’ warming of 0.100 ±0.177 °C/decade (2σ) and from June 1997 to the present the ‘best estimate’ warming in RSS v4 is now statistically significant, at 0.155 ±0.149 °C/decade (2σ).

          • DWR54 January 10, 2019 at 2:53 pm
I find it astonishing that anybody of integrity could put forward datasets that have obviously been modified to remove non-CAGW-conforming data, such as the sea-surface and surface temperatures, as scientific proof that the non-conformance did not exist.
Shame on you.

    • DWR54
      The pause was not an artefact.
      It existed for varying lengths of time in a lot of data sets as LM has demonstrated numerous times in numerous past articles.
      The longer temperatures stayed low the longer the pause grew time wise in both directions.
      It is puerile to argue that a pause did not exist and cannot be seen.
      It is facile to argue that RSS does not now show a pause but yet admit that it did do so before the data was manipulated to remove it in 2017.
      “There is a best estimate zero warming trend in UAH TLT v6 of 18 years and 6 months between July 1997 and Dec 2015.”
      That seems long. Enough to independently qualify and back up LM on the existence of a great pause in another data set that currently exists.
      Even for nit pickers.

    • DWR54,
      Your series of posts on January 10 are very confusing:

      1) At 8:04 am listed time, you stated: “The ‘great Pause of 18 years 9 months in global warming’ does not exist in any current global temperature data set. It was an artifact of RSS TLT v3 . . . The ‘great pause’ did not survive the transition and the warming rate in RSS v4 since 1998 (the start of the “great pause” referred to) . . .”

      2) Then at 9:44 am you posted: “There is a best estimate zero warming trend in UAH TLT v6 of 18 years and 6 months between July 1997 and Dec 2015.”

      3) Then at 2:53 pm you posted: “The original ‘great Pause’ referred to the RSS v3 TLT data set, with a negative trend that began in June 1997 and ended early 2016. . . . Using the updated RSS v4 TLT data set over the same range results in ‘best estimate’ warming of 0.100 ±0.177 °C/decade (2σ) and from June 1997 to the present the ‘best estimate’ warming in RSS v4 is now statistically significant, at 0.155 ±0.149 °C/decade (2σ).

Are you taking issue with the fact that RSS TLT v3 showed 18 years 9 months of “pause” whereas UAH TLT v6 today shows 18 years 6 months of pause (a duration shorter by only 3 months, or 1.3%)? Therefore, doesn’t the equivalent of a “great pause” really exist in UAH TLT v6 based on Item 2 above, and in contradiction to your first sentence in Item 1 above?

Taking your asserted RSS TLT v4 2σ “best estimates” of warming at face value (not that that means they are credible), you appear to ignore that, based on your statistical analysis, there must be a much greater than 55% increase in warming rates after “early 2016” to drive the warming rate from June 1997 up to the present so much higher than that for the period of June 1997 through “early 2016” (0.155 vs 0.100 °C/decade, respectively). That 0.055 °C/decade difference in warming slope is certainly statistically significant based on the ±2σ uncertainties that you stated. So, one must conclude that your asserted 55% change in warming rate occurred relatively rapidly despite atmospheric CO2 content increasing continuously upward on a slight exponential trend. How do you explain that???

      And while you’re at it, please also explain how the UAH TLT v6 zero warming trend of 18 years 6 months (that you admit happened per Item 2 above) happened despite the ever increasing atmospheric CO2 content.

  11. We are waiting for a new section from our professor of control theory …

    That’s what’s missing from most published papers as far as I can tell. Usually scientists blithely throw math around without actually understanding what they’re doing. Give them a half hour tutorial on Matlab and they have at their hands more different ways to torture data than they ever knew existed. If they throw enough different tools at the data they will find one that results in a publishable p value and they’re off to the races. /rant

    • My applied math prof had a simple rule; trust the eye before the math.

I was actually quite surprised at the time, because I had graphed the data, then ran it through a statistical package. He pointed to the graph, and remarked “this I trust”. Then he pointed to the stats and said “this not so much”. At the time, it was the complete opposite of what I had expected.

In reply to Mr Anderson, just look at the steampunk graph. Predicted equilibrium sensitivities (with the red needle showing the midrange estimate) occupy approximately the right-hand half of the graph. Sensitivities based on observation are the blue and green arrows, clustering about the “revised” interval of predictions that arises from our correction of climatology’s misdefinition of temperature feedback. They are on the left-hand side of the graph. It is visible that the true rate of warming is about a third of the mid-range officially-predicted rate. And by using the three scales you can see what happened from 1850 to 2011, what would happen at doubling of CO2 and (the sum of these two) what would happen from 1850 to 2100. Read the caption and study the graph a little and all will become clear.

Since RSS, whose chief scientist publicly describes those who disagree with him about the climate as “deniers”, revised its dataset to eradicate the Pause, it has tended to show the fastest apparent rate of global warming, now at 2 C°/century equivalent.

    Before UAH altered its dataset, it used to show more warming than the others. Now it shows the least, at 1.3 C°/century equivalent.

Both RSS and UAH changed their data for the same reason: inconsistencies in the record from different satellites (changes in orbit and decay in the sensors), notably after the switch from MSU to AMSU. In the case of UAH, they changed their method and produced a different product which is sensitive to a higher region of the troposphere (4 km vs 2 km), although it still has the same name.

    • Here are the real reasons why UAH changed their method:

      “Version 6 of the UAH MSU/AMSU global satellite temperature dataset is by far the most extensive revision of the procedures and computer code we have ever produced in over 25 years of global temperature monitoring. The two most significant changes from an end-user perspective are (1) a decrease in the global-average lower tropospheric (LT) temperature trend from +0.140 C/decade to +0.114 C/decade (Dec. ’78 through Mar. ’15); and (2) the geographic distribution of the LT trends, including higher spatial resolution. We describe the major changes in processing strategy, including a new method for monthly gridpoint averaging; a new multi-channel (rather than multi-angle) method for computing the lower tropospheric (LT) temperature product; and a new empirical method for diurnal drift correction. We also show results for the mid-troposphere (“MT”, from MSU2/AMSU5), tropopause (“TP”, from MSU3/AMSU7), and lower stratosphere (“LS”, from MSU4/AMSU9). The 0.026 C/decade reduction in the global LT trend is due to lesser sensitivity of the new LT to land surface skin temperature (est. 0.010 C/decade), with the remainder of the reduction (0.016 C/decade) due to the new diurnal drift adjustment, the more robust method of LT calculation, and other changes in processing procedures.”

      It will be seen from this description that the “lesser sensitivity of the new lower-troposphere dataset to land-surface skin temperature” is only 0.01 K/decade, implying that that lesser sensitivity affects the global value by only 0.003 K/decade.
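The step from 0.010 K/decade over land to 0.003 K/decade globally presumably scales by the land fraction of the Earth’s surface; a sketch of that arithmetic (the ~29% land fraction is an assumption, not stated in the quoted text):

```python
# Hedged arithmetic behind the 0.003 K/decade figure: the land-only
# skin-temperature effect scaled by the global land fraction (~29%,
# an assumed value not given in the quoted UAH text).
land_effect = 0.010      # K/decade, over land only
land_fraction = 0.29     # approximate fraction of Earth's surface
print(round(land_effect * land_fraction, 3))  # 0.003
```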

The difference in temperature between a cloudless winter night and one with cloud cover can be as much as 20°F. The problem investigated here is what can account for this extreme temperature difference. The argument being presented is that the effect of greenhouse gases alone cannot account for that temperature difference. The greenhouse effect is where molecules in the atmosphere absorb infrared radiation and radiate it in all directions. This means that about one half is radiated downward toward Earth’s surface. The term cloud blanket effect is used to denote the phenomenon in which the underside of a cloud reflects back down the infrared radiation that the Earth’s surface is radiating upward.

    The greenhouse effect from a thin layer of the atmosphere can result in at most 50 percent of the thermal radiation absorbed from the surface being returned to the surface. The amount of radiation absorbed even from a thick layer of the atmosphere is quite small. The amount returning due to reflection from the underside of clouds can be higher. The albedo of cumulus clouds for the visible light range can be as high as 90 percent and that for the infrared range is of the same order of magnitude.

Reflection of electromagnetic radiation generally occurs at the interface between two types or two densities of conducting media. This means that when a cloud is resting on the land surface as a fog, there is no reflection of infrared radiation. Instead the effect of fog on surface temperature would be entirely through the greenhouse effect. This would mean that, all other conditions being equal, the surface temperature on a foggy night would be noticeably cooler than when there are clouds but no fog. The foggy night, however, would be warmer than a clear night.

    Clouds are an overwhelming influence on the climate of the Earth. Despite this the focus of climate modelers on the greenhouse effect of carbon dioxide has resulted in a neglect of cloud phenomena. The climate models fail to adequately represent the matter of clouds and cloudiness.
    The usual focus of this display is that the climate modelers, one and all, do not know much about cloud coverage. But another salient element is the difference between actual cloud coverage in the Arctic compared with the Antarctic. At the North Pole it is 70 percent; at the South Pole it is about 3 percent. Since carbon dioxide is supposedly well mixed in the atmosphere the greenhouse effect of carbon dioxide should be the same at both poles. But consider the record for temperature by latitude.

    Robert C. Balling, Jr. gives a graph relevant to comparing a model prediction with the actual record. It is given in his article, “Observational Surface Temperature Records versus Model Prediction,” which is published in Shattered Consensus: The True State of Global Warming (page 53).
Here we have the real world in all its complexity. Over the period 1970 to 2001 the Arctic region did have a greater temperature increase than the tropical region. It was not a five-to-one ratio, however. The north polar region increased in temperature about 83 percent more than the tropic region. However, in the south polar region there was no larger increase than the tropics. If anything the south polar region increased less in temperature than the tropics, with Antarctica actually decreasing in temperature. If the increase in temperature in the north polar region is taken as a verification of the theory and global warming model, then the record in the south polar regions is a denial of the validity of the theory and model. This is the real world in all its complexity and the climate models definitely do not capture that complexity. In particular, the models, driven as they are simply by the level of carbon dioxide in the atmosphere, cannot account for the discrepancy in temperature change in the two polar regions. The cloud blanket effect does account for that difference.
    The Greenhouse Effect With and Without Cloud Cover

    At the first level of analysis the greenhouse effect can be estimated using Beer’s Law. Beer’s Law implies that the proportion P of radiation absorbed in passing through a medium is
P = 1 − e^(−D)

    where D is the optical depth of the medium and this is given by
D = ∫₀ᴸ α ρ(s) ds

    where α is the absorption coefficient of the material of the medium, ρ is the molecular density and L is the physical depth of the medium. If there are two or more absorbing substances in the medium the optical depth for each is determined and their sum is the overall optical depth.

    The absorption coefficient for a substance may vary with the wavelength of the radiation. There may be certain critical wavelengths at which the medium absorbs. For example, the absorption spectra for water vapor and for carbon dioxide are given in Radiative Efficiencies. What is needed is an average absorption coefficient over a range of wavelengths. It would be a weighted average based upon the distribution of radiation of different wavelengths. The radiation from a body depends upon its temperature. The total energy in that radiation is proportional to the fourth power of the absolute temperature. The wavelength distribution of that radiation looks something like the following.

    The frequency scale runs from left to right whereas the wavelength scale runs from right to left.

Simple absorption coefficients for water vapor and carbon dioxide, averaged over the range of infrared radiation relevant to Earth’s emissions, do not appear to be available. In lieu of them, and while continuing to pursue the technical information needed to carry out an estimation of the absorption of infrared radiation using Beer’s Law, some ballpark estimates will be made using the bits and pieces of relevant information that are available.

    One datum that appears to be relevant is the estimate that 90 percent of the infrared radiation emitted by the Earth’s surface does not go out into Space. This 90 percent would be made up of several components, which include:

    The absorption by greenhouse gases in the clear sky. The cloud coverage averages about 60 percent so there is about 40 percent clear sky.
    The absorption by greenhouse gases in the space below the cloud cover.
    The absorption by water, liquid and solid, in the clouds.
    The reflection from the undersides of clouds.

Let α be the proportion absorbed by greenhouse gases in a clear sky. The amount reradiated downward would then be (1/2)(0.4)α. Suppose the proportion absorbed by greenhouse gases before the infrared radiation reaches the undersides of the clouds is one half of that for traversing the full atmosphere, i.e., ½α. This would then be 0.6(½α). The infrared radiation reaching the clouds would be 1 − 0.6(½α). If the reflectivity of the clouds to infrared radiation is 70 percent (as opposed to 90 percent for visible light) then infrared reflection accounts for 0.7[1 − 0.6(½α)]. The rest would be absorbed in the clouds and half radiated back down to the surface. The other half would heat the cloud; eventually that heat would find its way to the top surface of the cloud, and half of it would be radiated out into Space. In terms of the fraction that does not go into Space, this is [0.7 + 0.3(0.5 + 0.25)][1 − 0.6(½α)] = 0.925[1 − 0.6(½α)]. Thus
    0.5(0.4)α + (0.925)[1-0.6(½α)] = 0.9
    which reduces to
    0.2α – 0.2775α = 0.9 – 0.925
    and hence
−0.0775α = −0.025
    which means that α would have to be
    α = 0.322

    This would mean that the cloud blanket effect (reflection of infrared radiation from the undersides of clouds) accounts for about 63 percent of the return of energy to the Earth’s surface and the greenhouse effect accounts for only about 37 percent. In an area without clouds there would be only 16 percent of the infrared radiation returned to the surface instead of the 86 percent returned to the surface under clouds. This 86 percent is made up of 59 percent from the cloud reflectivity, 8 percent from the effect of the greenhouse gases below the clouds and 19 percent from the greenhouse effect in the clouds. This is compatible with the experience of the cold clear winter night compared with a cloud-covered night.

    A proportion absorbed of 0.322 means that 0.678 is transmitted. Thus the optical depth of the atmosphere due to the greenhouse gases is −ln(0.678) = 0.39. At an altitude that includes one half of the greenhouse gases the transmission would be exp(−0.39/2) = 0.823 and thus the absorption would be 0.177.
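    The algebra above is easy to check numerically. A minimal Python sketch, using only the coefficients already given in the balance equation (0.2 = 0.5×0.4, 0.925, 0.3 = 0.6×½, and the target 0.9):

```python
import math

# Solve 0.2*a + 0.925*(1 - 0.3*a) = 0.9 for a, the proportion of
# infrared absorbed by greenhouse gases in a clear sky.
a = (0.9 - 0.925) / (0.2 - 0.925 * 0.3)
print(round(a, 3))                        # proportion absorbed, ~0.323

transmitted = 1 - a                       # note 1 - 0.323 = 0.677, not 0.618
tau = -math.log(transmitted)              # optical depth of the greenhouse gases
absorption_half = 1 - math.exp(-tau / 2)  # absorption above the half-gas altitude
print(round(tau, 2), round(absorption_half, 2))   # ~0.39 and ~0.18
```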

    Some insights may be gained by looking at equilibrium temperatures. However, nighttime temperatures are not equilibrium temperatures: at night the temperature decreases roughly along a negative exponential curve.
    Energy Balance Models for Equilibrium Temperature

    Without greenhouse gases or clouds the equilibrium temperature of a planet’s surface would be given by
    πR²(1−α)ψ = 4πR²σεT⁴
    which reduces to
    (1−α)ψ = 4σεT⁴
    which means that
    T = [(1−α)ψ/(4σε)]^(1/4)

    where T is the equilibrium absolute temperature, R is the planet radius, ψ is the intensity of the solar radiation, ε is the surface emissivity, α is the planet surface albedo and σ is the Stefan-Boltzmann constant.

    If the greenhouse gases in the atmosphere absorb a proportion β and radiate half of it back to the surface then the equilibrium temperature satisfies the condition:
    (1−α)ψ = 4σε(1−½β)T⁴
    and hence
    T = [(1−α)ψ/(4σε(1−½β))]^(1/4)
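    As a quick sanity check of these two formulas, a Python sketch with rough Earth values (solar constant 1361 W/m², albedo 0.3, emissivity 1). The β = 0.8 used below is purely an illustrative assumption, not a measured value:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def t_equilibrium(psi, albedo, eps=1.0, beta=0.0):
    """Equilibrium surface temperature; beta is the proportion of thermal
    radiation absorbed by greenhouse gases, half of it reradiated down."""
    return ((1 - albedo) * psi / (4 * SIGMA * eps * (1 - beta / 2))) ** 0.25

t_bare = t_equilibrium(1361, 0.3)             # no greenhouse gases: ~255 K
t_ghg = t_equilibrium(1361, 0.3, beta=0.8)    # with an assumed beta of 0.8
print(round(t_bare), round(t_ghg))
```

    The bare-planet value of about 255 K is the familiar textbook figure, and a large assumed β lifts it toward the observed surface temperature range.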

    Now clouds can be brought into the picture. Let α₀ be the albedo of the surface, α₁ the albedo of the top of the clouds to short wave radiation, and α₂ the albedo of the bottom of the clouds to long wave radiation. Let β₀ be the proportion of long wave (thermal) radiation absorbed by atmospheric greenhouse gases below the clouds and β₁ the proportion absorbed by those gases in the atmosphere above the lower level of the clouds. Let β₂ be the proportion absorbed by the greenhouse substances in the clouds.

    Then for a clear sky
    Tclear = [(1−α₀)ψ/(4σε(1−½(β₀+β₁)))]^(1/4)

    For the case of the cloud cover
    (1−α₀)(1−α₁)ψ = 4σε(1−½(β₀+β₂)−α₂)Tcloudy⁴
    and hence
    Tcloudy = [(1−α₀)(1−α₁)ψ/(4σε(1−½(β₀+β₂)−α₂))]^(1/4)

    Ratio of the clear and cloudy equilibrium temperatures is then:
    Tcloudy/Tclear = [(1−α₁)(1−½(β₀+β₁))/(1−½(β₀+β₂)−α₂)]^(1/4)

    As seen above, the common factors, such as the solar intensity and the short wave surface albedo, cancel out of the ratio.
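    A numerical evaluation of the ratio makes the direction of the effect concrete. Every parameter value below is an assumed illustration, not a measurement:

```python
# Illustrative evaluation of the cloudy/clear temperature ratio above.
alpha1 = 0.5    # cloud-top albedo to short wave radiation (assumed)
beta0 = 0.16    # sub-cloud greenhouse absorption (assumed)
beta1 = 0.16    # greenhouse absorption above the cloud base (assumed)
beta2 = 0.30    # absorption by greenhouse substances in the clouds (assumed)
alpha2 = 0.50   # cloud-underside albedo to long wave radiation (assumed)

ratio = ((1 - alpha1) * (1 - 0.5 * (beta0 + beta1))
         / (1 - 0.5 * (beta0 + beta2) - alpha2)) ** 0.25
print(round(ratio, 3))   # Tcloudy/Tclear > 1: the cloudy case is warmer
```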
    The Dynamics of Diurnal Temperature Cycles

    The equation for the dynamics of temperature is
    C(dT/dt) = S₀ψ(t) − S₁εσT⁴

    where C is the heat capacity of the body, T is its absolute temperature, S₀ is the surface area over which the body receives solar radiation, and S₁ is the surface area over which the body emits thermal radiation. The heat capacity is proportional to the body volume; say C = γV, where γ is the heat capacity per unit volume.

    The net inflow of radiant energy ψ(t) is a cyclic function of time. Let ψmean and Tmean be the mean energy inflow and temperature, respectively. Then
    0 = S₀ψmean − S₁εσTmean⁴

    This equation may be subtracted from the dynamic equation to give:
    C(dΔT/dt) = S₀Δψ(t) − S₁εσ(T⁴ − Tmean⁴)

    where ΔT and Δψ are (T-Tmean) and (ψ-ψmean), respectively.

    The term (T⁴ − Tmean⁴) on the right can be approximated by 4Tmean³ΔT. Thus the equation for the dynamics of diurnal temperature is of the form
    C(dΔT/dt) = S₀Δψ(t) − S₁εσβΔT
    where
    β = 4Tmean³

    For material on the solution to this type of equation see Diurnal Temperature.
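    The linearized equation implies a characteristic e-folding time C/(S₁εσβ). A sketch with S₁ε taken as 1 per unit area and an assumed shallow soil column (the depth and heat capacity are illustrative, not measured values):

```python
# e-folding time of the linearized cooling C dDT/dt = -S1*eps*sigma*beta*DT,
# per unit area.  Column depth and heat capacity are assumed values.
SIGMA = 5.670e-8               # Stefan-Boltzmann constant, W m^-2 K^-4
t_mean = 288.0                 # K, assumed mean temperature
beta = 4 * t_mean ** 3         # linearization coefficient, as defined above
C = 2.0e6 * 0.1                # J K^-1 m^-2: soil heat capacity x 0.1 m depth
tau_hours = C / (SIGMA * beta) / 3600
print(round(tau_hours, 1))     # relaxation time in hours, ~10
```

    A time constant of order ten hours is consistent with the observation that surfaces cool appreciably, but not completely, over one night.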

    Comparison of a Case of Ground Temperature
    With and Without Cloud Cover

    On November 22, 2008, John Bryant of the WMCTV Weather Team in Memphis, Tennessee noted that the temperature at the airport under cloud cover was about 42°F, whereas at Dyersburg, a small city near Memphis, the sky was clear and the temperature at midnight was almost twenty degrees colder.

    Comparing night time temperatures is not a matter of comparing equilibria. After the sun goes down the temperature is in disequilibrium and it decreases approximately like a negative exponential function; i.e.,
    T(t) = T₀e^(−γt)

    Suppose the temperature under the clear sky at sunset was 45°F and at midnight seven hours later it was 22°F. In absolute temperature these were 505°R and 482°R, respectively. The value of the coefficient in a negative exponential function for these two temperatures is
    γ = −ln(Tmid/T₀)/7 = −ln(482/505)/7 = 0.00666 per hour.
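    The coefficient is easy to verify, converting the Fahrenheit readings to degrees Rankine (°F + 459.67):

```python
import math

# Cooling coefficient for the clear-sky (Dyersburg) case:
# 45 F at sunset, 22 F at midnight, seven hours apart.
t0 = 45 + 459.67        # ~505 R at sunset
t_mid = 22 + 459.67     # ~482 R at midnight
gamma = -math.log(t_mid / t0) / 7
print(round(gamma, 5))  # per hour, matching the ~0.00666 above
```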

    According to the case data the value of γ for the cloud covered situation was 0.

    Let β1 be the proportion of the thermal radiation absorbed in a clear sky. The value of γ1 for the clear sky case is
    γ₁ = K(1−½β₁)

    where K is a coefficient depending upon all of the other factors besides the greenhouse gas absorption.

    Let β₂ be the proportion of the thermal radiation absorbed by the atmosphere under the cloud layer, and α₂ the proportion of thermal radiation returned to Earth by the undersides of the clouds or by the greenhouse effect in the clouds. Then γ₂ for the cloud-cover case is
    γ₂ = K(1−½β₂)(1−α₂)

    Since for the case under examination γ₂ = 0, α₂ must equal 1.0. This can occur only with reflection of the thermal radiation. Since β₂ can be at most 1.0, the proportion of radiation returned to Earth by the sub-cloud greenhouse effect can be at most 0.5; at that extreme, 0.5 of the thermal radiation would be returned by the greenhouse effect of the atmosphere and 0.5 by the reflectivity of the clouds and their greenhouse effect. If β₂ is 0.3, then 15 percent of the thermal radiation would be returned by the greenhouse effect of the atmosphere and 85 percent by the reflectivity and greenhouse effect of the clouds. The overwhelming proportion of the effect of the clouds has to come from their reflectivity of infrared radiation.
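    The 15/85 split asserted above, for the assumed β₂ = 0.3, can be written out directly:

```python
# With alpha2 = 1 (all thermal radiation returned in the cloudy case),
# the sub-cloud greenhouse gases return half of what they absorb.
beta2 = 0.3                      # assumed value from the text
gas_share = 0.5 * beta2          # returned by the greenhouse effect: 15%
cloud_share = 1.0 - gas_share    # cloud reflectivity + in-cloud greenhouse: 85%
print(gas_share, cloud_share)
```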
    Conclusion

    Most of the effect of clouds in moderating the night temperatures is from their reflectivity of infrared radiation.
    ****************************************************************
    The above was the work of Thayer Watkins. Below are my conclusions.

    Therefore, clouds overwhelm the Downward Infrared Radiation (DWIR) produced by CO2. At night the temperature difference with and without clouds can be as much as 11 C. The amount of warming provided by DWIR from CO2 is negligible, but it is a real quantity. We take the average amount of DWIR due to CO2 and H2O, or whatever else causes the DWIR, convert it to a temperature increase, and call that increase Tcdiox. The pyrgeometers assume an emission coefficient of 1 for CO2, but CO2 is NOT a blackbody. Clouds contribute 85% of the DWIR and GHGs contribute 15%; see the analysis above. The IR that hits clouds does not get absorbed; instead it gets reflected. When IR is absorbed by GHGs it is re-emitted, either spontaneously or via collisions with N2 and O2, and in both cases the emitted IR is weaker than the absorbed IR. Don’t forget that the IR re-radiated by CO2 is emitted in all directions, so a little less than 50% of the IR absorbed by the CO2 gets re-emitted downward to the Earth’s surface. Since CO2 is not transitory like clouds or water vapour, it remains well mixed at all times. Therefore, since the Earth is always giving off IR (probably at a maximum at 5 pm every day), the so-called greenhouse effect (not really a greenhouse, but the term is always used) is always present, and there will always be some downward IR from the atmosphere.

    When there are no clouds there is still DWIR, which causes a slight warming. We have an indication of its size from the measured temperature increase of 0.65 C from 1950 to 2018. This slight warming arises for reasons other than just clouds, so it is happening all the time. Therefore, on a particular night with the maximum effect, you have 11 C + Tcdiox. We can put a number to Tcdiox, though it may change over the years as CO2 increases in the atmosphere. At the present time, with 409 ppm CO2, the global temperature is 0.65 C higher than it was in 1950, the year when mankind started to put significant amounts of CO2 into the air. So at a maximum, Tcdiox = 0.65 C. We do not know the exact cause of Tcdiox, whether it is all H2O, both H2O and CO2, the sun, or something else, but we do know the rate of warming. This analysis will assume that CO2 and H2O are the only possible causes. That assumption will pacify the alarmists, because they say there is no other cause worth mentioning. They like to forget about water vapour, but in any average local temperature calculation you cannot forget about water vapour unless it is a desert.
    A proper calculation of the mean physical temperature of a spherical body requires an explicit integration of the Stefan-Boltzmann equation over the entire planet surface. That means taking the 4th root of the absorbed solar flux at every point on the planet, doing the same for the outgoing flux at the top of the atmosphere at each of those points, subtracting point by point, turning each result into a temperature field, and then averaging that temperature field across the entire globe. This gets around the Hölder-inequality problem that arises when calculating temperatures from fluxes on a global spherical body. In this analysis, however, we simply take averages applied to one local situation, because we are not after the exact effect of CO2 but only its maximum effect.
    In any case, Tcdiox represents the real temperature increase over the last 68 years. You have to add Tcdiox to the overall temperature difference of 11 C to get the maximum temperature effect of clouds, H2O and CO2. So the maximum effect of any temperature changes caused by clouds, water vapour or CO2 on a cloudy night is 11.65 C. We will ignore methane and any other GHG except water vapour.

    So, from the analysis above, clouds represent 85% of the total temperature effect, and therefore have a maximum temperature effect of 0.85 × 11.65 C = 9.90 C. That leaves 1.75 C for water vapour and CO2. CO2 will have relatively more of an effect in deserts than in wet areas, but still can never go beyond this 1.75 C. Since desert areas are 33% of 30% (land versus oceans) = 10% of the Earth’s surface, CO2 has a maximum effect of 10% of 1.75 C + 90% of Twet, where we define Twet as the CO2 temperature effect over all the world’s oceans and the non-desert areas of land. There is an argument for less IR being radiated from the world’s oceans than from land, but we will ignore that for the purpose of maximizing the effect of CO2, to keep the alarmists happy for now. So CO2 has a maximum effect of 0.175 C + (0.9 × Twet).

    So all we have to do is calculate Twet.

    Reflected IR from clouds is not weaker. Water vapour is in the air and in clouds; even without clouds, water vapour is in the air. No one knows the ratio of the amount of water vapour that has condensed to water/ice in the clouds to the total amount of water vapour/H2O in the atmosphere, but the ratio cannot be very large. Even though clouds cover on average 60% of the lower layers of the troposphere, since the troposphere is approximately 8.14 x 10^18 m^3 in volume, the total cloud volume in relation must be small; certainly not more than 5%. H2O is a GHG, and water vapour outnumbers CO2 by a factor of 25 to 1, assuming 1% water vapour. So of the original 15% contribution by GHGs to the DWIR, we have 0.15 x 0.04 = 0.006, or 0.6%, to account for CO2. Now we apply an adjustment factor to account for the fact that some water vapour at any one time is condensed into the clouds: adding 5% to the 0.006 gives 0.0063, or 0.63%. CO2 therefore contributes 0.63% of the DWIR in non-deserts. We will neglect the fact that the IR emitted downward from the CO2 is a little weaker than the IR that is reflected by the clouds. Since, as above, a cloudy night can be 11 C warmer than a clear-sky night, CO2, i.e. Twet, contributes a maximum of 0.0063 x 1.75 C = 0.011 C.

    Therefore, since Twet = 0.011 C, we have from the above equation: CO2 max effect = 0.175 C + (0.9 x 0.011 C) = ~0.185 C. As I said before, this will increase as the level of CO2 increases, but we have had 68 years of heavy fossil fuel burning, and this is the absolute maximum of the effect of CO2 on global temperature.
    So how could any average global temperature increase by 7 C, or even 2 C, if the maximum temperature warming effect of CO2 today from DWIR is only 0.185 C? This means that the effect of clouds = 85%, the effect of water vapour = 13.5% and the effect of CO2 = 1.5%.
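    The arithmetic chain above, collected in one place (the 11 C night-time difference, the 85% cloud share, the 25:1 vapour ratio and the 5% adjustment are all the commenter’s assumptions, reproduced as stated):

```python
# Reproduce the maximum-CO2-effect arithmetic from the comment above.
total = 11 + 0.65                  # cloudy-night difference + Tcdiox, in C
clouds = 0.85 * total              # cloud share: ~9.90 C
ghg = total - clouds               # water vapour + CO2: ~1.75 C
co2_dwir_share = 0.15 * (1 / 25) * 1.05   # CO2 share of the GHG DWIR: ~0.0063
twet = co2_dwir_share * 1.75       # ~0.011 C over non-desert areas
co2_max = 0.10 * 1.75 + 0.90 * twet
print(round(co2_max, 3))           # ~0.185 C claimed maximum CO2 effect
```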

    Sure, if we quadruple the CO2 in the air, which at the present rate of increase would take 278 years, we would increase the effect of CO2 (if it is a linear effect) to 4 x 0.185 C = 0.74 C. Whoopedy doo!

    • If the greenhouse gases in the atmosphere absorb a proportion β and radiate half of it back to the surface then the equilibrium temperature satisfies the condition:

      That is true in the upper atmosphere but it’s not what happens near the surface where thermalization is the predominant mode of heat transfer from the excited CO2.

      • If by thermalization you mean convection, that does not affect the argument of my post. Convection goes on whether you have clouds or not. My sole point in the post is to show the maximum possible effect that CO2 could have had in the last 68 years. It may well be ZERO, but if Thayer Watkins is correct, then CO2 cannot have had any more effect than 0.185 C since mankind has been emitting major amounts of CO2 into the atmosphere. That is an average increase of 0.00272 C per year, or 0.2 C per century, well under Lord Monckton’s or any other scenario of climate sensitivity.

    • That 11 C temperature variation due to clouds at night can occur daily. It’s like saying that since day and night solar insolation varies from 1,000 W/m^2 to zero, solar variability of 1 or 2 W/m^2 has a negligible effect on climate. Go figure.

  14. Hi Louis
    Recently posted:

    https://wattsupwiththat.com/2019/01/09/a-sea-surface-temperature-picture-worth-a-few-hundred-words/#comment-2583524

    In the Vostok cores, peak CO2 was never able to maintain peak temperature; in fact, peak CO2 WAS CAUSED BY temperature, BECAUSE CO2 always LAGGED TEMPERATURE IN TIME.

    CO2 TRENDS LAG TEMPERATURE TRENDS AT ALL MEASURED TIME SCALES.
    – by hundreds of years in the ice core record;
    – by ~9 months in the modern data record.

    REFERENCES:

    CARBON DIOXIDE IS NOT THE PRIMARY CAUSE OF GLOBAL WARMING: THE FUTURE CANNOT CAUSE THE PAST
    by Allan MacRae
    http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/

    http://www.woodfortrees.org/plot/esrl-co2/from:1979/mean:12/derivative/plot/uah5/from:1979/scale:0.22/offset:0.14

    THE PHASE RELATION BETWEEN ATMOSPHERIC CARBON DIOXIDE AND GLOBAL TEMPERATURE
    by Ole Humlum, Kjell Stordahl, Jan-Erik Solheim
    Global and Planetary Change, Volume 100, January 2013, Pages 51-69
    https://www.sciencedirect.com/science/article/pii/S0921818112001658

    [I deleted the earlier, identical comment. Mod]

  15. What is the justification for using 4.83 instead of 5.35 as the coefficient multiplying the natural logarithm of the ratio of after/before CO2 levels to obtain the W/m^2 forcing from the change?

    What is the justification of using 1 degree K as reference sensitivity after deriving 1.15 K? (I figure 1.11 K using 5.35 times ln (2) times .3)

    • In reply to Mr Klipstein, the CMIP5 model ensemble (whose outputs are summarized in Andrews+ 2012) has a mean CO2 radiative forcing of 3.346 Watts per square meter. The product of 3.346 Watts per square meter and the Planck sensitivity parameter 0.3 Kelvin per Watt per square meter gives the reference sensitivity to doubled CO2: it is 1.0 K.

      The value 1.15 K is the centennial midrange predicted reference warming on the basis of RCP 6.0’s predicted CO2 concentration of 700 ppmv in 2100, compared with 368 ppmv in 2000, enhanced by 20% to allow for other anthropogenic forcings.

  16. One thing to consider is that the forcings and temperature rises predicted by the RCPs include forcings from increase of GHGs other than CO2 such as methane. The RCPs are named after W/m^2 forcings from all GHGs (other than water vapor) added to the atmosphere by human activity.

    Another thing to consider is that warming after 2000 is not limited to that caused by GHGs added to the atmosphere after 2000, because of lags.

    • One need not worry about lags in the operation of the short-acting feedbacks, precisely because they are short-acting, with delays of years at most and usually hours or days.

        • The atmosphere does not warm the oceans; the heat flow is the other way, through evaporation. When water evaporates it loses heat. Put a pail of room-temperature water inside a room in the tropics where the temperature does not vary much: there will be a steady evaporation of the water to the air in the room until there is no water left. In the Earth system the oceans on average are slightly warmer than the air despite the fact that 77.76 W/m^2 of evaporation and 8.64 W/m^2 of transpiration go on. The reason is that the oceans are receiving 114 W/m^2 of sunlight while the land is receiving 49 W/m^2 of sunlight.

          • Uhhhh . . . I believe that, absent cloud effects, both lands and oceans RECEIVE the same amount of sunlight at the same latitude for any particular day of the year.

  17. I’ve always been suspicious about the ignoring of the feedback and would be very interested in reading your paper. If you have it on hand, I’d also like a good reference on climatology’s perspective, just so I can see the most recent stuff.

    Is it possible for you to send it to me?

    • If Mr Combs were to email me at monckton{at}mail.com, I should be able to send him a short version of our paper, which will present the argumentum ex definitione in Classical logic that is, on its own, enough to establish our main point.

      The paper currently under review contains a much more mathematical treatment of the subject, providing formal proof that what is in any event self-evident in Classical logic is also demonstrably true in physics and in mathematics.

      • Many thanks to Mr Marler for his kind comments. And I’m glad he likes the dial. It takes a bit of getting used to, but it does give a very clear presentation of the wide divergence between excitable prediction and sober observed reality.

        • Lord Monckton-

          I have been a fan of all your activities, both scholarly and in the activist realm (who can forget your actions at the COP meeting in South Africa?). And the work you describe in this post is eye-opening. Thanks for taking the time to discuss it here.

          However, I would like to humbly comment on your “dial” graphic. I would describe your graphic as elegant, but I am not sure that elegance is what you should be striving for. I recently had cause to go back and review what I knew of Edward Tufte’s work in the visual display of quantitative information. He points out that the purpose of visual display of information is to make the difficult and complex more easily understood. I am not sure that the dial graphic has made your results easier to understand.

          After a forty-year career in engineering research, having had to present my fair share of information graphically, I had a hard time understanding what you were trying to convey. (True, I’ve been retired for 20 years, so I could be a little rusty.) It’s not that the dial itself is a bad idea; I just think that there is too much information crammed into too little space.

          • In response to Old Engineer, I should be most grateful if he would create, and convey to me at monckton[at]mail.com, a simpler graphic that conveys the same information.

            Most people who take one look at the graph can see at once that there is a very large discrepancy between both the magnitude and the interval of official predictions (cunningly labeled “official predictions” in large letters) and the magnitude and interval of observed warming.

        • Lord Monckton, it is said that a picture is worth a hundred words. In the case of your dial graphic, I fear it needs more than 100 words to explain just what it is trying to express. Perhaps bar charts would convey the message in a simple and digestible manner. That said, may I thank you for sharing the results of your team’s sterling efforts.

          • Perhaps oldscouser would like to draw a suitable bar-chart for me. Most people who have seen the dial get the point at once that the needles indicating the measured or inferred observed temperature do not fall anywhere within the enormous interval of warming predicted by official climatology.

  18. “Therefore, the 21st-century warming that IPCC should be predicting, on the RCP 6.0 scenario and on the basis of its own estimates of CO2 concentration and the models’ estimates of CO2 forcing and Charney sensitivity, is 3.37 x 1.15, or 3.9 K.

    Yet IPCC actually predicts only 1.4 to 3.1 K 21st-century warming on the RCP 6.0 scenario”
    No, it should not be predicting that. As said, 3.37 is equilibrium sensitivity. The change when the effects of the rise in CO2 during C21 have settled. That won’t have happened by end century.

    “Finally, UAH, which Professor Ole Humlum (climate4you.com) regards as the gold standard for global temperature records. Before UAH altered its dataset, it used to show more warming than the others. Now it shows the least, at 1.3 C°/century equivalent.”
    So, more or least. Which one is gold?

    • In reply to Mr Stokes, the short-acting feedbacks have delay times of years at most, and usually hours or days. There is, therefore, a clear discrepancy between the detuned predictions of 21st-century warming made by IPCC, of which the RCP 6.0 prediction is studied in the head posting, and IPCC’s far more extreme prediction of equilibrium sensitivity to doubled CO2.

      The second question raised by Mr Stokes is not for me but for Professor Humlum.

      • In reply to Mr Stokes, the short-acting feedbacks have delay times of years at most, and usually hours or days.

        To my limited understanding that seems at odds with what the IPCC say, based on CMIP5.

        According to AR5 (WG1 Box 12.2) the CMIP5 exercise produces an ECS of 2.1°C to 4.7°C, but they also say that CMIP5 gives an estimate for TCR of 1.2°C to 2.4°C.

      • “In reply to Mr Stokes, the short-acting feedbacks have delay times of years at most, and usually hours or days.”
        The time of actions of feedbacks have little to do with the time scale. It’s a much simpler issue; how fast does a kettle heat when you turn on the gas. That has nothing to do with feedbacks; it is just thermal inertia. You have to maintain a heat flux for a while to add enough heat to raise the temperature. Same with AGW and the oceans, and the scale there can be centuries.

        Bellman makes the right point. TCR predicts only half the heating of ECS on its timescale. And that timescale is seventy years (of rising flux).
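        The kettle point can be sketched with a minimal one-box energy-balance model: hold the feedback parameter fixed, ramp the forcing linearly for seventy years, and the transient warming comes out near half the equilibrium value. Every parameter below (feedback parameter, effective ocean depth, forcing) is an assumed, illustrative number, not a CMIP5 value:

```python
# One-box energy balance: C dT/dt = F(t) - lam*T, with the forcing
# ramped linearly over 70 years.  All parameter values are assumed.
lam = 1.2              # W m^-2 K^-1, feedback parameter (assumed)
C = 4.2e6 * 400        # J m^-2 K^-1, ~400 m effective ocean depth (assumed)
F_final = 3.7          # W m^-2, canonical doubled-CO2 forcing
year = 3.156e7         # seconds per year
dt = 86400.0           # one-day Euler step
steps = int(70 * year / dt)
T = 0.0
for i in range(steps):
    F = F_final * (i * dt) / (70 * year)   # linear ramp up to F_final
    T += dt * (F - lam * T) / C
print(round(T, 2), round(F_final / lam, 2))  # transient vs equilibrium warming
```

        With these assumptions the 70-year transient is roughly half the 3.1 K equilibrium, which is the TCR-versus-ECS relationship being discussed.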

        • “The time of actions of feedbacks have little to do with the time scale.”
          Please Nick.
          Time is time.
          Thermal inertia is part of how fast a feedback works, they are interdependent.
          Your argument, such as it is, needs to be expressed differently.

          • Nick Stokes is still thinking within the frame of reference of the existing, unduly limited transfer-function equation.

            In 1850 the equilibrium temperature was 287.5 K. Even allowing for Mr Stokes’ 70-year delay, it was still 287.5 K, for there was no trend in global temperature for 80 years after 1850 (HadCRUT4). The reference temperature was somewhere between 220 and 265 K. The transfer function accordingly fell on [1.1, 1.3].
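            The quoted interval follows directly from the two bounds on the reference temperature:

```python
# Transfer function = equilibrium temperature / reference temperature,
# using the 1850 values stated in the comment above.
t_equilibrium = 287.5                        # K, 1850 equilibrium temperature
for t_ref in (265.0, 220.0):                 # stated bounds on reference temperature
    print(round(t_equilibrium / t_ref, 2))   # ~1.08 and ~1.31
```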

  19. The result came as quite a surprise to us, too
    ========
    MoB, I would recommend submitting your results for peer review by physicists with a background in theoretical mathematics rather than climatologists. From your previous postings, we saw that the climatology definition of feedback is so highly ingrained in climatology that you are unlikely to overcome human bias, even when the climatologist is sympathetic.

    The problem is not PhD climatology. It is first or second year physics and mathematics. It is the age old problem of specialization: Knowing more and more about less and less.

    • In reply to Ferd Berple, I agree that the journal should really get a professor of applied control theory to read what our professor of control theory has written. It would also be a good idea for the journal to find a number theorist to read the latest draft of our paper, for it contains the standard number-theoretic demonstration that the transfer function is simply the sum of an infinite convergent geometric series in the feedback fraction, under the convergence condition that the absolute value of the feedback fraction be less than 1.

      The current draft of the paper provides multiple demonstrations – one in Classical logic (which is far and away the simplest for third parties to understand); one in number theory; and one in control theory. In this way, we have left no room for doubt.
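      The number-theoretic point referred to here is easy to illustrate: for |f| < 1 the partial sums of the geometric series in the feedback fraction f converge to 1/(1−f). A minimal sketch, with f = 0.5 as an arbitrary illustrative value:

```python
# Partial sums of sum_{n>=0} f^n converge to 1/(1-f) when |f| < 1.
f = 0.5                      # assumed feedback fraction for illustration
partial = sum(f ** n for n in range(60))
closed_form = 1 / (1 - f)
print(partial, closed_form)  # both ~2.0
```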

  20. The result came as quite a surprise to us, too
    ========
    Here is a simple example to demonstrate this result; showing that feedback works on both change and absolutes, contrary to the formal theory of climatology.

    This single contradiction is sufficient via falsification to show that climatology theory is incorrect. It also shows that MoB’s paper is correct in the specific. Readers can easily test this for themselves.

    Typically, passenger cars have feedback in their steering, as a result of the geometry of the suspension. The purpose of this feedback is for safety; to return the vehicle to a straight course when the driver releases the steering wheel and the vehicle is in motion forward.

    Select a large empty lot. A vehicle with manual steering works best, but power steering will also work. The forces will simply be less obvious.

    Put the vehicle in motion. To test positive feedback, drive the vehicle in reverse.
    To test negative feedback, drive the vehicle in forward. For our test, drive the vehicle in reverse because climatology theory is that feedback is positive.

    Hold a constant speed, and turn the steering wheel to the left. You will feel feedback from the steering geometry via the steering wheel as you turn. Stop turning the steering wheel, but do not allow it to come back to center (or to slam outward to the left because you are in reverse). Hold the steering wheel in a constant position to the left.

    In both cases, when you turned the steering wheel from neutral (absolute zero) and when you hold the steering wheel in a constant turn (no change) you will continue to feel feedback from the suspension geometry, via the steering wheel. This feedback does not disappear simply because there is no further change in the system.

    Now turn the steering wheel further to the left while maintaining constant speed. You will notice that the further you turn to the left, the more force that will feedback from the suspension geometry to the steering wheel. This is the feedback from a delta.

    Now hold the steering wheel at the extreme left position. The feedback via the steering wheel does not go to zero, rather it remains at the highest level of the entire test. In fact, if you are going fast enough in reverse, you may have considerable trouble holding the steering wheel in position against the feedback force. This is the no change feedback.

    Under climatology theory, this “no change” feedback should go to zero when you hold the steering wheel in a constant position, but it does not. The feedback only goes to zero when the steering geometry returns to the neutral position.

    • Now hold the steering wheel at the extreme left position.
      ===========
      note: obviously, not so extreme that the suspension hits the stops.

    • Ferd Berple’s feedback analogy is quite a nice one. However, we were not surprised by the fact that the transfer function is the ratio of equilibrium to reference temperature, and not merely the ratio of equilibrium to reference sensitivity. The matter is self-evident from the mathematics and physics, as well as ex definitione. We were, however, surprised by the fact that official climatology simply had no idea that the transfer function is the ratio of absolute equilibrium to reference temperature.

  21. Lord Monckton,

    I strongly hope your paper will be published shortly. The sooner, the better. In my opinion it’s strong, elegant and just brief enough for a lay person to understand (I’m not sure about Dutch politicians). In the Netherlands policy makers are about to take the most absurd measures to reduce carbon dioxide emissions. To them it’s a pollutant, more dangerous than anything else around. Please keep us posted on which journal will publish it. As soon as it is available I’ll buy a couple of dozen copies to hand personally to our policy makers in The Hague!

    • Mr Duiker is very kind. We, too, hope the paper will be published in due course. The fact that the editor of the journal withdrew his rejection of the paper based on the manifest inadequacies of the reviews is a promising start. As soon as the paper is published, WUWT will be the first to know.

      If official climatology had not made the mistake of adopting an erroneously restrictive definition of temperature feedback, no one would ever have tried to maintain that global warming caused by us could possibly be catastrophic. The whole nonsense is based on a fundamental and elementary error of physics perpetrated by climatologists borrowing mathematics from another branch of physics without understanding what they had borrowed.

  22. If a paper can be rejected because reviewers hadn’t bothered to read the paper, it’s probably the case that papers can be accepted under the same circumstances. The deciding factor on acceptance seems to be based not on the quality of the science, but rather on the extent to which the paper conforms to the groupthink.

    • The paper should certainly not have been rejected on the basis of the three reviews the editor received. The editor himself, however, realized this and was good enough to withdraw his rejection, allowing us to submit the paper again. My offer to reinforce the argument with additional demonstrations, which I outlined in correspondence with the editor, was accepted with alacrity, and we now await the final confirmatory section from our professor of applied control theory.

  23. So much for the predictions. But what is actually happening, and does observed warming match prediction? Here are the observed rates of warming in the 40 years 1979-2018. Let us begin with GISS, which suggests that for 40 years the world has warmed at a rate equivalent not to 3.9 C°/century nor even to 2.2 C°/century, but only to 1.7 C°/century.

    I haven’t waded through all the arguments above, but I’m guessing this 3.9°C / century figure is an example of Lord Monckton’s creative reinterpretation of the IPCC.

    The IPCC AR5 actually predicts (for scenario RCP 6.0) 2.2°C warming over 95 years (from 1986-2005 to 2081-2100). This would amount to 2.3°C / century, with a likely lower bound of 1.5°C / century. They project a lower rate of warming over the first 30 years.

    Is the complaint here that the IPCC are not being alarmist enough?

    • How typical of Bellhop to comment without reading the head posting he is commenting on. The answer to his point will be found in the head posting.

      • I presume the relevant part is where you say

        Therefore, the 21st-century warming that IPCC should be predicting, on the RCP 6.0 scenario and on the basis of its own estimates of CO2 concentration and the models’ estimates of CO2 forcing and Charney sensitivity, is 3.37 x 1.15, or 3.9 K.

        As I suggested, you say here what you think the IPCC should be predicting, but then go on to say what the IPCC actually predict. So who agrees with your calculations, and why do you think the IPCC gave a lower prediction?

          • Surely the question is why the IPCC is predicting something other than that which is derived from its own models. That is a question for the IPCC, not MoB.

          • They’re not. Monckton of Brenchley is the only one deriving his high prediction from the IPCC models, possibly because he doesn’t understand the difference between ECS and TCR – see comments above.

            I really struggle to see how this argument makes sense. He’s saying that the CMIP5 models predict 2.2°C warming by the end of the century, then he’s saying the sensitivity derived from the same models can be used to calculate much more warming. But this sensitivity is based on how much warming the models show.

          • Savage is right and Bellman wrong. Let us set out the argument step by step:

            1. Since the CO2 forcing (taken as the mean of 15 CMIP5 models’ results: Andrews 2012) is 3.346 Watts per square meter, and the current value of the Planck sensitivity parameter is 0.3 Kelvin per Watt per square meter, the reference sensitivity to doubled CO2 is the product of 3.346 and 0.3: i.e., 1 K.

            2. Since the general form of the CO2 forcing function is the product of a coefficient and the natural logarithm of the proportionate change in concentration, the coefficient implicit in the CMIP5 models’ values of the forcing function is 3.346 / ln(2), or 4.83.

            3. Since RCP 6.0 posits 700 ppmv CO2 concentration by 2100 compared with 368 ppmv in 2000, the 21st-century predicted CO2 forcing is 4.83 ln(700 / 368), or 3.1 Watts per square meter. This should be increased by about 20% to give the net all-sources centennial anthropogenic forcing of 3.8 Watts per square meter. The product of this value and the Planck parameter is 1.14 K.

            4. The midrange CMIP5 models’ estimate of Charney sensitivity (i.e., equilibrium sensitivity to doubled CO2) is 3.37 K. Since the reference sensitivity is 1 K (see step 1 above), the implicit midrange transfer function is 3.37.

            5. The product of the 1.14 K anthropogenic reference sensitivity and the transfer function is about 3.9 K.

            6. Yet IPCC says its midrange warming for RCP 6.0 is only 2.2 K. It has thus detuned its predictions of 21st-century warming to bring them within shouting distance of observation. But it has not correspondingly reduced its headline estimates of Charney sensitivity. Q.E.D.
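The arithmetic in these six steps can be checked directly. Below is a sketch using only the figures quoted above: the 3.8 W/m² net forcing and the roughly 20% non-CO2 uplift are taken as stated in step 3, and the rounding of the ~1 K reference sensitivity follows step 4.

```python
import math

# Figures quoted in steps 1-6 above (Andrews 2012 / RCP 6.0 as cited there)
F_2X = 3.346                    # W/m^2, CMIP5 mean forcing for doubled CO2
PLANCK = 0.3                    # K per W/m^2, Planck sensitivity parameter

# Step 1: reference sensitivity to doubled CO2 (comes out at ~1 K)
ref_2x = F_2X * PLANCK

# Step 2: implicit coefficient in the logarithmic forcing function (~4.83)
coeff = F_2X / math.log(2)

# Step 3: 21st-century CO2 forcing under RCP 6.0 (~3.1 W/m^2); the stated
# ~20% uplift for other anthropogenic sources gives the 3.8 W/m^2 used below
f_co2 = coeff * math.log(700 / 368)
ref_centennial = 3.8 * PLANCK   # ~1.14 K

# Steps 4-5: transfer function 3.37 / 1, applied to the reference sensitivity
warming = ref_centennial * (3.37 / round(ref_2x))   # ~3.9 K, vs the published 2.2 K
```

Whether that 3.9 K figure is the right quantity to compare with IPCC’s published 2.2 K projection is exactly what the rest of this exchange disputes.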

          • Savage is right and Bellman wrong. Let us set out the argument step by step:

            I’d be happy to accept I’m wrong if you show where CMIP5 actually predicts 3.9°C warming by the end of the century. But if your assumptions are wrong then no amount of repeating your calculations will prove your conclusions.

            Your first 3 steps are redundant, as the IPCC say that CMIP5 gives an ECS of between 2.1°C and 4.7°C, with a midpoint exactly as you state in step 4. So we can all agree that CMIP5 suggests equilibrium sensitivity of around 3.4°C. And I’ll assume your derived 3.9°C warming is correct in step 5.

            Which just leaves the question as to when this 3.9°C warming should arrive. You seem to believe that this should happen at the end of the century, but in step 6 you accept that the CMIP5 models say there will be only 2.2°C warming by the end of the century.

            Your conclusion is that the IPCC “detuned” the model predictions in order to show less warming than they actually predict. I find it difficult to understand why they would do this if, as I keep being told, the IPCC is an alarmist organization who always exaggerate the amount of warming we can expect. But I’m also puzzled as to how they do this: if they are detuning the CMIP5 models to show less warming, that would also mean the sensitivity derived from the models should decrease.

            To my mind a simpler explanation is that the CMIP5 models show warming of 2.2°C out to the end of the century and then show more warming after that as we approach equilibrium. And this is supported by the IPCC stating that the transient climate response (TCR), as estimated by the CMIP5 models, is much lower than the ECS.

            You might not agree with this and think that TCR should be almost the same as ECS, but you cannot use that argument to claim that the models are predicting faster warming than they actually are.
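The transient-versus-equilibrium lag described here can be illustrated with a toy one-box energy-balance model. The 30-year ocean response time below is an illustrative assumption, not an IPCC figure, and the linear forcing ramp is idealized.

```python
# One-box energy balance: dT/dt = (F(t) - LAM*T) / C, integrated by Euler steps
F_2X = 3.346                 # W/m^2, forcing for doubled CO2
ECS = 3.37                   # K, midrange CMIP5 equilibrium sensitivity
LAM = F_2X / ECS             # W/m^2/K, net feedback parameter
TAU = 30.0                   # years, assumed ocean response time (illustrative)
C = LAM * TAU                # effective heat capacity, W yr m^-2 K^-1

T, dt = 0.0, 0.1
for step in range(int(100 / dt)):
    forcing = 3.8 * (step * dt) / 100   # ramp to 3.8 W/m^2 over the century
    T += dt * (forcing - LAM * T) / C

equilibrium = 3.8 / LAM      # ~3.8 K once fully equilibrated
# T at year 100 comes out around 2.7 K: the transient response lags the
# equilibrium value even though the sensitivity itself is unchanged
```

On these assumptions the end-of-century warming (~2.7 K) falls well short of the eventual equilibrium (~3.8 K) without any alteration to the sensitivity, which is the ECS/TCR distinction at issue in this exchange.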

          • In response to Bellman, IPCC states quite clearly that it has substituted its “expert judgment” for the outputs of models. If it had, as Bellman ingeniously but inaccurately attempts to suggest, merely provided predictions in line with the models, it would not have had to make any such statement.

            I was an expert reviewer for AR5 and have the advantage of having seen the earlier drafts of the report. When several of us protested that IPCC’s predictions were wildly in excess of observation, IPCC responded by reducing those predictions by almost half, but, inconsistently, leaving its headline predictions of Charney sensitivity unaltered.

            It is because of that declared detuning that IPCC’s midrange CMIP5 estimate is only 2.2 K rather than the equilibrium warming of 3.9 K that Bellman now accepts is consistent with the midrange CMIP5 prediction of Charney sensitivity, from which the crucial coefficient in the CO2 forcing function is derivable.

            One can, of course, quibble about whether the equilibrium response to 21st-century warming will have come through in full by 2100. Looking at the IPCC graph, one would expect about 85% of it – i.e. 3.3 K – to have eventuated by 2100. However, the great Professor Lindzen, whom I once consulted about the response time, considered that it was much shorter than IPCC imagined. Either way, it is clear that IPCC should have reduced, but did not reduce, its estimated interval of Charney sensitivities.

          • Monckton of Brenchley,

            In response to Bellman, IPCC states quite clearly that it has substituted its “expert judgment” for the outputs of models.

            That had occurred to me, but the figure of 2.2°C was taken from table 2.1 of the synthesis report, which says that it is based on CMIP5.

            I cannot find the reference to “expert judgement” being used for the long term warming projection, only to assessing risks and uncertainties. Could you point me in the right direction?

            In any event, if true, it adds to the question of why you would be using raw model output for your projections rather than referring to the IPCC’s expert judgement.

            It is because of that declared detuning that IPCC’s midrange CMIP5 estimate is only 2.2 K …

            Now I’m confused. Did they use their expert judgement or did they change the models?

            rather than the equilibrium warming of 3.9 K that Bellman now accepts is consistent with the midrange CMIP5 prediction of Charney sensitivity, from which the crucial coefficient in the CO2 forcing function is derivable.

            I don’t think AR5 gives a figure for ECS, nor do they base their likely range of values on models alone.
            They actually say that ECS is likely between 1.5 and 4.5°C, which is based on multiple lines of evidence, not on the models, which show slightly higher ranges.

            Looking at the IPCC graph, one would expect about 85% of it – i.e. 3.3 K – to have eventuated by 2100.

            Which graph would that be?

            Figures 2.1 and 2.8 of the Synthesis Report show temperatures consistent with the 2.2°C by the end of the century, as do various graphs in WG1 (Figure 12.5 for example).

            BTW, the 2.2°C figure isn’t for what will have “eventuated” by 2100. It’s for the mean of temperatures between 2081-2100, i.e. the 20 years centered on 2090.

            Either way, it is clear that IPCC should have reduced, but did not reduce, its estimated interval of Charney sensitivities.

            As I said above, the IPCC’s interval for ECS is not based on the models.

            But you again fail to mention the estimates for TCR from the models. Is it your contention that the models have been changed to lower TCR but not ECS?

          • I am disinclined to play Bellman’s game of picking nits so as to circumvent the main point. The main point is that I read the earlier drafts of AR5, and between the earlier drafts and the published version IPCC inserted its statement about substituting its “expert judgment” for models’ outputs and, at the same time, approximately halved its estimate of medium-term warming, while leaving the equilibrium sensitivities unaltered.

            Now, there is a real inconsistency worthy of Bellman’s attention, if it can only raise its attention from counting nits to look up at the stars instead.

            Unfortunately, Bellman is not interested in the objective truth. So it wastes its time on multiplying futilities. The objective truth is that IPCC’s definition of feedback is inconsistent with the long-proven mainstream definition, and that, if the mainstream definition is used, all equilibrium sensitivities are about a third of official climatology’s midrange estimates.

          • The main pont is that I read the earlier drafts of AR5…

            I thought the main point was your claim that there was a large gap between current trends and what IPCC models predicted.
            My argument being that this gap relies upon your reimagining of what the models should say.

            You never mentioned the draft report in the head post, and I cannot see why it is relevant – it’s the final IPCC report that describes what they are predicting, and that which should be used to determine the size of the gap.

            …and between the earlier drafts and the published draft IPCC inserted its statement about substituting its “expert judgment” for models’ outputs…

            And my nit-pick is that you haven’t pointed to where in the IPCC report they said that.

            Another nit-pick is that you claimed that there was a graph showing 3.3K warming by the end of the century. I asked what graph that was – I still await an answer.

            You’d probably also consider it a nit-pick if I pointed out that 3.3K is still a lot less than the 3.9K you claimed the models actually showed.

            Another nit-pick was your claim that the IPCC based their range of values for ECS on model output. I pointed out the IPCC specifically said they didn’t and that their range was lower than that obtained from the models.

            Finally I point out the difference in the IPCC report between model estimates for ECS and TCR and the fact that they show TCR as being a lot lower than ECS.
            You ignore this point yet keep wanting me to explain why ECS shows more warming than predicted for the end of the century.

            The objective truth is that IPCC’s definition of feedback is inconsistent with the long-proven mainstream definition…

            I think we are going to have to disagree about what “objective truth” means. But this is the point you were arguing all last summer, and it is completely irrelevant to the question of what the IPCC predicted and how close it is to the current trend, which is what I thought was your point in the post.

          • Bellman should not whine at me but at the IPCC: it is there that the inconsistencies to which it draws attention arise.

          • Nah, I think it’s more productive to whine (ask questions) of you than of the IPCC, as I’m not the one seeing inconsistencies in the IPCC report. I’m more interested in your own inconsistencies such as saying the models predict 3.9K warming, but then claiming a graph shows 3.3K warming, and failing to say which graph that is.

          • Bellman now admits his prejudice in that he is not interested in the large discrepancy between the centennial warming originally predicted by IPCC in the early drafts of its 2013 report and that which was published in the final version; and he is not interested in the discrepancy between the IPCC’s near-halving of its predictions for the coming decades and its failure to make any corresponding adjustment to its equilibrium-sensitivity predictions. He says he is not aware of these inconsistencies, even though they have been pointed out to him.

            The basis of calculation for the values in the head posting is set out therein.

          • It’s not that I’m not interested in the large discrepancy, it’s that I don’t think it exists. You think there is a discrepancy, so I assume you’ve asked the IPCC about it. What was their response?

            You still have not acknowledged that the IPCC report shows model estimates of ECS and TCR are very different, and fail to understand that this explains why the models estimate for warming by the end of the century will be different to that derived from ECS alone.

            You still haven’t said whether you think the CMIP5 models were predicting 3.9 or 3.3 K of warming, or provided a reference to the graph you claimed shows 3.3 K of warming.

            You claim the IPCC says ECS is ~3.4K. You don’t say where they say that. You imply they use the model output to determine their range of ECS, when I’ve pointed out that they say the opposite and their likely range for ECS is lower than what the models estimate.

            These are some of the inconsistencies I’d like answered, and I don’t think the IPCC is in a position to answer them.

          • Bellman, not having read the early drafts of IPCC’s 2013 report, is in no position to comment on whether or not the published draft is consistent with the earlier drafts.

          • I’ve made no comment on differences between the draft and final versions.

            When you suggested I contact the IPCC about the alleged inconsistencies, I assumed you meant those within the final IPCC report. If you meant those in the draft, then, as you say, how can I ask about something I haven’t seen? And why should I care what was in a preview version?

  24. The missing or minimalist la nina is interesting. I wonder if Dr Easterbrook has any take on that seeing as ENSO is his specialty.

    • In reply to TRM, I frequently noted in my monthly postings during the Great Pause that it might well be brought to an end by an el Nino, and occasionally also mentioned that el Ninos are not always followed by equally significant la Ninas. Welcome to natural variability.

      • Quite right in the general sense. This type of behaviour is to be expected from the semi-chaotic nature of ENSO. I was thinking more about his thoughts (if anyone knows) about this one in particular.

  25. Alan: I like your post. Where is the program which will predict cloud cover in the long term?
    Carbon dioxide: just too many questions, like whether CO2 leads or follows temperature change.
    I also question the calculations of the time the CO2 remains excited before re-radiating.
    Since the whole game is one of taking control of the world economy, and has nothing to do with weather, I don’t buy any of it.

    • In the long run cloud cover varies around 60%. However we do not need to know that. My analysis simply takes Thayer Watkin’s number of 85% effectiveness of clouds in reflecting the back radiation and calculates the maximum possible temperature effect based on the real increase of observed temperatures since 1950. The real temp effect may well be ZERO, but at least I have shown that the maximum effect has been 0.185C. With increasing amounts of net CO2 in the atmosphere in the coming years, that maximum CO2 effect will go up of course, but unless you are arguing a reverse logarithmic effect of CO2 , the increased effect will be very small indeed. The only possible negative effect in the long run will be that in 500 years the net CO2 may pass 5000 ppm, and we would start to become lethargic at that point. However, I believe that we could burn all the fossil fuels on earth and never put that much CO2 into the atmosphere.

      • Sorry Alan, but Cloud Cover variation is not that simple.
        A reduction in cloud cover may reduce the reflected LWIR from the surface, but increases the much more potent Sunlight (ie UV & White Light) IN to the Oceans.
        Take a look at the graph of Cloud cover versus Tropical Sea Surface Temps during the late 20th century.

    • The leftist socialist MSM refuses to mention the real cause of the worldwide economic slowdown. It is the increasing cost of energy all over the world. Energy was becoming less and less of an overall cost in the consumer budget until governments started to tax the hell out of it and ballooned the price of electricity with the solar and wind subsidy scams and the CO2 taxes. We need a worldwide yellow-vest protest. Vive la France.

      • Could any of us ever have imagined what a mess the science world could get itself into? I shake my head in amazement every day and can barely believe that we are all living in a real-life nightmare fantasy of climate scientists.

        • I’ve had the same thoughts. I don’t have enough knowledge of where to go to get the information needed to associate electricity costs to inflation and product costs. Still, it has to have some effect as manufacturers don’t use direct fossil fuel to power factories, nor do assembly plants or retail stores. Consequently there must be some pressure put on prices due to the cost of electricity.

          I would hope that Trump would have the government working on this since it would validate much of his energy policy.

          • Mr Tomalty and Mr Gorman are both correct: the cost of electricity is startlingly high – about five or six times what it would have been were it not for global warming policies. This cost, inflicted selectively on the West (Russia and China do not have to endure it, for instance), is a strategic threat to our economies.

  26. Milord Brenchley,

    Excellent discussion of the key scientific issue behind the global warming debate.

    Unfortunately, your comment that RCP 8.5 “can safely be ignored” is not widely agreed by government bureaucrats. Quite the contrary. The bureaucrats who calculated the “social cost of carbon” based their figures on RCP 8.5, not the more plausible (and greatly reduced) concentrations from RCP 4.5 or 2.6.

    The result is that “social cost of carbon” numbers used in government are hugely inflated, well out of contact with reality. But it is these inflated figures which the government bureaucrats propose to write into regulations and tax structure.

    It is only with greatly inflated “social cost of carbon” figures that favored treatment of non-dispatchable electricity from wind and solar, and the continued persecution of fossil fuels, can be justified.

    Thank you for your essay above, and best of luck with your paper. For the last 25 years, publishing anything which contradicts even a jot or tittle of the catastrophist dogma has been a modern-day Labor of Sisyphus.

    • In response to Mr McIntyre, I ignore RCP 8.5 on scientific grounds. The probability of that scenario actually occurring is as near nil as makes no difference. Politically, I well understand that the totalitarians and the profiteers of doom will wish to cling to it as to a liferaft, but it will not save them from sinking.

  27. “… and that, scientifically speaking, will be the end of the climate scam.”

    It will not!

    When money and prestige are at stake you may be assured that those with vested interests will defend them to the end. (I wonder how much trouble there would be getting the paper past the review process had it reached a more ‘mainstream’ conclusion.)

    If publication in a mainstream, peer-reviewed journal proves impossible, then the paper can be routinely dismissed by the faithful as deficient or worthless. The gatekeepers aren’t about to let their cash-cow be slaughtered. Real science in action!

    • In response to “skeptical lefty”, I was careful to state that, scientifically speaking, our result will be the end of the climate scam. Politically speaking, it will take the totalitarians some time to adjust to the fact that the “settled science” is indeed settled, but in a direction diametrically opposite to what they had imagined.

  28. Christopher Monckton of Brenchley
    M’Lord
    My compliments on your clarity of presentation, breakthrough, and persistence. May I refer to:
    McKitrick, R. and Christy, J., 2018. A Test of the Tropical 200- to 300-hPa Warming Rate in Climate Models. Earth and Space Science, 5(9), pp. 529-536.
    https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2018EA000401
    Abstract: “Overall climate sensitivity to CO2 doubling in a general circulation model results from a complex system of parameterizations in combination with the underlying model structure. We refer to this as the model’s major hypothesis, and we assume it to be testable. We explain four criteria that a valid test should meet: measurability, specificity, independence, and uniqueness. We argue that temperature change in the tropical 200- to 300-hPa layer meets these criteria. Comparing modeled to observed trends over the past 60 years using a persistence-robust variance estimator shows that all models warm more rapidly than observations and in the majority of individual cases the discrepancy is statistically significant. We argue that this provides informative evidence against the major hypothesis in most current climate models.”
    In plain language, McKitrick & Christy (2018) invalidated the IPCC’s general circulation models (GCMs).
    I encourage you to further model the climate sensitivity of this anthropogenic “hotspot” (aka majority anthropogenic tropospheric tropical temperature warming) with your absolute method vs the IPCC method and compare both against this independent Satellite and Balloon temperature evidence.
    Following McKitrick & Christy (2018) will give you the most sensitive and far sounder test of your climate sensitivity model and method vs the IPCC.

    • Mr Hagen is right: we should certainly cite McKitrick and Christy on the absence of the tropical mid-troposphere hot spot, and the corresponding absence of the model-predicted increase in specific humidity at that pressure altitude. The consequence of these highly significant absences is that the water vapor feedback must necessarily be small, wherefore the feedback transfer function must also be small.

  29. Lord Monckton of B

    I have to ask what may be an elementary question, but I haven’t been able to find an answer in what I’ve seen of your excellent and admirable contributions:

    You repeatedly, over the years, have used the term “noncondensing greenhouse gases”. I had previously assumed that you were referring to CO2, CH4, N2O etc and not water vapour, which does condense from time to time. Indeed, when I lived in your part of the world, it seemed to be condensing all the bloody time.

    In your comment upthread at 1:13 pm PST, you use the phrase “water vapor, the key noncondensing greenhouse gas”. So may I ask what is a condensing greenhouse gas?

  30. It is also worth showing the Central England Temperature Record for the 40 years 1694-1733, long before SUVs, during which the temperature in most of England rose at a rate equivalent to 4.33 C°/century, compared with just 1.7 C°/century equivalent in the 40 years 1979-2018.

    CET between 1979-2018 was warming at 3.02°C / century. Not 1.7°C / century.

      • Alan Tomalty

        Neither of your graphs show the Central England Temperature, or indicate what the trend is between 1979-2018.

    • During the early period of the Central England Temperature Record, it was largely uncontaminated by the urban heat-island effect. The rapid warming following the Little Ice Age is evidenced in many historical sources. Now, however, Britain is one of the most densely-populated countries on Earth, and the CETR is for this reason likely to show a considerable warm bias. South-East England, for instance, is the most densely-populated region on Earth with the sole exception of Bangladesh.

      Therefore, I used the CETR as a proxy for global temperature change over the 40 years 1694-1733, and HadCRUT4 as the index of global temperature change over the 40 years 1979-2018.

      It is, of course, self-evident that the 4.33 K/century warming during the former period is greater than the 3 K/century shown in CETR during the latter period.

      • Monckton of Brenchley,

        During the early period of the Central England Temperature Record, it was largely uncontaminated by the urban heat-island effect

        Anything up to 1720 is considered unreliable, based on extrapolations of readings of highly imperfect instruments.

        Therefore, I used the CETR as a proxy for global temperature change over the 40 years 1694-1733, and HadCRUT4 as the index of global temperature change over the 40 years 1979-2018.

        Then you are not making an equivalent comparison. You cannot compare changes in a small island with unusual weather patterns to global temperatures.

        Also, if you want to compare CET with global temperatures you shouldn’t be using monthly absolute values for your trends; that exaggerates the warming trend. Use annual temperatures and the trend drops to around 4°C / century.

        • I do not propose to quibble about whether the Central England Temperature Record shows 4 K or 4.33 K warming for the recovery from the Little Ice Age. The usual rule in determining a statistical trend is to aim for as many degrees of freedom as possible: and there are 12 times as many of those in a monthly record as in an annual record.

          • The usual rule in determining a statistical trend is to aim for as many degrees of freedom as possible: and there are 12 times as many of those in a monthly record than in an annual record.

            Not when the data are seasonal.

          • It would be most helpful if Bellman would provide a reference so that I may discern the error that may arise owing to seasonal contamination. Of course, it is self-evident that global temperature records are not significantly influenced by seasonality: indeed, I provided evidence of the lack of a seasonal signal in the global monthly temperature records here some years ago.

            Either way, it makes very little difference. Bellman says that using annual data a 4.33 K warming becomes a 4 K warming. So we are in quibble territory here – as is so often the case with Bellman, who appears uninterested in the objective truth.

          • This is becoming a bit of a distraction from my main point, which is you cannot compare trends over small parts of the globe with global trends.

            I don’t have a specific reference for why it’s a problem to base linear trends on seasonal data, but the problem seems self-evident.
            Depending on where you start in the cycle you will get a positive or negative adjustment to the trend, simply because you can be starting in a cold or hot part of the cycle.
            I’m sure any competent textbook on time series analysis will explain how to adjust for seasonality.

            Here’s the Wikipedia page on Seasonality

            https://en.wikipedia.org/wiki/Seasonality

            Note the line “The resulting seasonally adjusted data are used, for example, when analyzing or reporting non-seasonal trends over durations rather longer than the seasonal period”.

            I should also have mentioned the problem with your claim that using monthly data improves the accuracy of a trend by multiplying the degrees of freedom by 12. This only works if each monthly value is independent.

            Of course, it is self-evident that global temperature records are not significantly influenced by seasonality

            That’s because global temperatures use monthly anomalies, not absolute temperatures as you used for CET.

            Either way, it makes very little difference. Bellman says that using annual data a 4.33 K warming becomes a 4 K warming. So we are in quibble territory here – as is so often the case with Bellman, who appears uninterested in the objective truth.

            I agree it makes little difference, I only mentioned it in passing. The main point is that global trends are likely to be weaker than local trends. But it’s odd that you complain I’m not interested in the objective truth whilst saying it’s quibbling to worry about losing almost 10% of the warming.

          • Bellman says one cannot compare regional with global trends. However, if the only trend available is a regional trend, then that is all one has to go on. And if that regional trend was higher in the early 18th century than in the 21st, then there is nothing special about today’s rate of global warming. No more futile quibbling, please.

          • Disappointing. Do you seriously think global trends are going to be the same as for a small patch of England?

          • Monckton says: ” if that regional trend was higher in the early 18th century than in the 21st, then there is nothing special about today’s rate of global warming.”

            Monckton makes a serious error here. Monckton is comparing a regional trend with a global trend. That is like comparing an apple to an orange.

          • Mr Heins is too hasty in finding fault. The Central England Temperature Record is just about the only record we have that goes back as far as 1659. And, like it or not, it shows 4.33 K/century equivalent global warming for 40 years from 1694-1733, while the rate for the most recent 40 years is less than that. So far, then, global warming does not seem to have caused a warming rate in central England that exceeds the natural warming rate from 1694-1733. Why, then, should we worry about global warming?

            In the world as a whole, there is an even lower rate of warming than in central England. Yet the rate of warming in central England has not been damaging: it is cold that is the real killer here, not heat. Indeed, warming in the whole of Europe would be net-beneficial to human life, as the European Kommissars found out to their horror when they had hoped to demonstrate the opposite.

            The truth is that the world is warming at a far slower rate than the Central England Temperature Record shows to have been the case in the late 17th and early 18th centuries, when solar activity and consequently temperature recovered after the Maunder Minimum. There are other historical indications of how cold it was during that Grand Minimum: see, for instance, the Dutch and English paintings of frozen waterways that have not thus frozen since. But was the sudden, sharp warming harmful? No, of course not. So why should a slow, gentle further warming prove to be net-harmful now?

          • Bellman seems to think that the least-squares linear-regression trend on a dataset expressed as absolute values will differ from that on the same dataset expressed as anomalies.

            He also seems to think that a difference of 0.33 Celsius degrees between an annual and a seasonal temperature anomaly constitutes a 10% difference. Since the absolute temperature in question is of order 287 K, a difference of 0.33 K over a century is actually a difference of 0.1%, which, in the context of climate sensitivity discussions, is de minimis. His percentage estimate, therefore, is excessive by two orders of magnitude. That is what happens if one attempts to divert threads such as this away from their main point by futile quibbling, which is Bellman’s specialty.

          • Bellman seems to think that the least-squares linear-regression trend on a dataset expressed as absolute values will differ from that on the same dataset expressed as anomalies.

            I pointed out that using anomalies is a way to remove seasonal cycles. I do indeed seem to think that; Monckton of Brenchley does not say whether it seems to him to be true or not. I assume you’ve tried it for yourself. What was the result?

            He also seems to think that a difference of 0.33 Celsius degrees between an annual and a seasonal temperature anomaly constitutes a 10% difference.

            I said almost 10%. If you prefer, it’s about 8.2%.

            Since the absolute temperature in question is of order 287 K, a difference of 0.33 K over a century is actually a difference of 0.1%, which, in the context of climate sensitivity discussions, is de minimis.

            What? No. We are talking about the rate of change. It makes no difference if you measure it in Celsius, Fahrenheit, or Kelvin.
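The two percentage figures being traded here come from different denominators; a quick sketch (plain Python, with the numbers taken from the thread) makes the disagreement explicit:

```python
# The same 0.33 C/century trend difference expressed against two bases.
# 4.07 C/century is the anomaly trend quoted in the thread; 287 K is the
# rough absolute surface temperature Monckton uses.
trend_diff = 0.33       # C/century, difference between two trend estimates
anomaly_trend = 4.07    # C/century, the rate being compared against
absolute_temp = 287.0   # K, mean absolute surface temperature

pct_of_rate = 100 * trend_diff / anomaly_trend      # share of the rate
pct_of_absolute = 100 * trend_diff / absolute_temp  # share of absolute T

print(round(pct_of_rate, 1), round(pct_of_absolute, 2))
```

Both "about 8.2%" and "0.1%" are arithmetically defensible; they simply answer different questions (share of the rate of change versus share of the absolute temperature).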

            That is what happens if one attempts to divert threads such as this away from their main point by futile quibbling, which is Bellman’s specialty.

            I mentioned in passing that it would be better to use annual rather than monthly temperatures. It’s mostly incidental to my complaint that you cannot compare a change in global temperatures of 1.7°C / century with a regional change of 4.3°C / century. It’s Monckton who has spent the last few days disputing this and trying to divert attention from the central issue.

          • I thought I better test this myself. Here are my results:

            The trend using absolute monthly temperatures from January 1694 to December 1733 is 4.33°C / century.
            The same linear regression, after converting the monthly data to anomalies with the base period 1694-1733, drops to 4.07°C / century, similar to the figure obtained using annual averages.

            Let’s see what happens if we start in the summer.
            Using absolute temperature data, the linear trend from July 1694 to June 1733 is 3.77°C / century.
            Using anomaly data, the linear trend from July 1694 to June 1733 is 4.09°C / century.

            So, yes. I do seem to think that the least-squares linear-regression trend on a dataset expressed as absolute values will differ from that on the same dataset expressed as anomalies.
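The effect Bellman reports can be reproduced with synthetic data: a pure linear trend plus a seasonal cycle, no noise. The numbers below (4.33 °C/century trend, 7 °C seasonal amplitude, 40 years of months) are illustrative assumptions, not the CET data itself:

```python
import numpy as np

# Synthetic monthly series: linear trend plus a sinusoidal seasonal cycle.
months = np.arange(480)                      # 40 years of monthly data
slope = 4.33 / 1200                          # C per month (4.33 C/century)
temps = 9.0 + slope * months + 7.0 * np.sin(2 * np.pi * months / 12)

def trend_per_century(y):
    # Degree-1 least-squares fit; coefficient [0] is the slope per month.
    return np.polyfit(np.arange(y.size), y, 1)[0] * 1200

# Anomalies: subtract each calendar month's mean over the full base period.
clim = np.array([temps[m::12].mean() for m in range(12)])
anoms = temps - clim[months % 12]

print(trend_per_century(temps))   # seasonal cycle biases the absolute trend
print(trend_per_century(anoms))   # anomalies recover roughly 4.33 C/century
```

Even over a whole number of years, the seasonal cycle covaries with the within-year time index, so the absolute-value trend picks up a spurious component that the anomaly trend does not; this is why the two regressions differ.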

          • Bellman continues to quibble. He now finds that the difference between the trends on a dataset expressed as anomalies and those on a dataset expressed as absolute values constitutes less than 0.1% of the absolute temperatures, which is de minimis. He finds a similar de-minimis difference between annual and monthly results. So what?

            The central issue is that official climatology has erroneously defined temperature feedback in such a way as not to include any mention of the fact that feedback processes respond not only to perturbations in the input signal but to the entire reference signal, which includes the input signal. In the climate, the input signal is the Earth’s emission temperature, which arises from the observable fact that the Sun is shining. Climatology, at a vital point in its calculations, has forgotten to take account of the fact that the Sun is shining.

            And Bellman maunders on and on and on and on about differences of 0.1% in absolute temperatures.

          • By “quibble” you mean addressing the points you raised, and going to the trouble of checking my assumptions, something you seem unwilling to do.

            He now finds that the difference between the trends on a dataset expressed as anomalies and those on a dataset expressed as absolute values constitutes less than 0.1% of the absolute temperatures, which is de minimis.

            Fine. I tried to explain above why this is wrong. We are talking about the difference in the rate of change, not the difference in absolute temperatures.

            The central issue is that official climatology has erroneously defined temperature feedback…

            You know this isn’t the central issue of this discussion? We were discussing the appropriateness of your use of CET as a proxy for global temperatures, which then got diverted into an attempt to explain to you some elementary time-series analysis.

            And Bellman maunders on and on and on and on about differences of 0.1% in absolute temperatures.

            If you are now so certain that the only way to measure temperatures is as percentages of absolute temperatures, what has this post been about? When you claim there is a big divergence between what you think the IPCC should have predicted and what they actually predicted, what is this difference as a percentage of absolute temperature? When you maunder about how damaging the Maunder Minimum was, why should you care if the UK was less than 0.5% colder?

  31. Is the global average temperature anomaly global warming? This website is filled with article after article using the trend as global warming. Do such reports have any meaning? In fact the trend includes local changes not associated with the anthropogenic greenhouse-gas effect.

    Dr. S. Jeevananda Reddy

    • (continued)

      Also, it includes the natural variability, the 60-year cycle. This also provides ups and downs.

      sjreddy

      • Dr Reddy raises a sensible point. In our approach, we have assumed ad argumentum that all of the warming of the past century and a half was anthropogenic. To the extent that some fraction of the warming was a continuation of the natural recovery from the Little Ice Age, equilibrium sensitivity empirically derived via our method would, a fortiori, be even less than the values we have found.

        • Wouldn’t this imply that, even with the finding of a CO2 climate sensitivity of 1.14 in the calculations above, there is still the possibility that CO2 has no special climate sensitivity at all (as Nikolov and Zeller claim)?
          Because the calculations, as climate models do, are based on the assumption that CO2 has a special effect, which would be some kind of circular argument.

          I really appreciate that this new calculation clearly shows why climate models fail so greatly as they didn’t do the math right on a fundamental level. But only the future will show if CO2 really acts as a GHG or not.

          It would be great for the planet if we didn’t have to care about CO2 rising, as it definitely helps the planet’s greening, will help to nurture the growing world population, and fossil fuels will not be replaceable in the near future in the fight against poverty.

          • Yes, Ron, Lord M’s approach was to take as true all the assumptions of official climatology except where his team could specifically show that official climatology was wrong, which means assuming “that CO2 has a special effect”, as you put it.

            Thing is, if Lord M’s team is correct and climate sensitivity is as low as they have concluded, then there is no need to worry about CO2 rising regardless of its specialness (not that there is any particular reason to worry: CO2 is net-beneficial even if it causes the amount of warming claimed by official climatology), as the increase in temperatures would be so small as to be insignificant and vastly outweighed by the benefits CO2 brings to plant life the world over.

          • Mr Endicott is right: our approach is to accept all of official climatology except what we can disprove.

            It is not really possible to state that greenhouse gases in the atmosphere have no warming effect: for it can be measured in the laboratory, and it is understood theoretically down to the quantum level. However, it is possible to argue that they have only a modest warming effect. If we are right that a CO2 doubling will cause between 1.1 and 1.3 K of global warming, rather than the 3.4 [2.1, 4.7] K imagined in the CMIP5 models, then our result falls well within the error margin of Nikolov and Zeller’s results.

            One problem with their model, as applied to the Earth, is that it assumes a bare-rock Earth with a regolith surface similar to that of the Moon. However, the Earth is 71% covered with water to a depth of several miles. Therefore, the emission temperature – if the Earth were an ice planet with open water in a belt around the Equator – would be about 255 K, and not the 197 K they imagine, for, in the tropical region, where most of the sunlight that reaches Earth comes in, the albedo would only be 0.06.

            Also, there is no adequate physical explanation for their imagined warming owing merely to the thickening of the atmosphere closer to the surface.

        • It should also be remembered that 0.5 C to 0.6 C of the increase in the last 150 years of records is a direct result of the various adjustments made to the historical thermometer temperature values, as per the Menne et al. 2009 paper.

          • A. C. Osborn makes an excellent point. Whether or not the numerous and usually upward adjustments of global temperature in recent decades, with the effect of increasing the apparent global-warming rate, are legitimate, the fact that these alterations constitute nearly all of the global warming of the past 150 years does raise legitimate questions about whether our current methods of attempting to measure global temperature are at all capable of measuring it with sufficient accuracy even to state with certainty that global warming has occurred at all.

            However, our approach is to accept ad argumentum all of official climatology, including its ridiculously uncertain and constantly-varying estimates of observed global warming, so as to focus only on what we can disprove. Official climatology’s definition of feedback is manifestly erroneous, in that it does not include the fact that feedback processes respond not only to some arbitrarily-selected perturbation in global temperature but to the entire reference temperature, which includes not only the natural perturbation driven by the pre-industrial noncondensing greenhouse gases but also the emission temperature caused by the observable fact – neglected by climatology in its feedback calculations – that the Sun is shining.

  32. Dear Lord Monckton,
    please have a look at our paper, Climate Pattern Recognition in the Holocene (1600-2050), Part 8, at: http://www.knowledgeminer.eu/climate-papers.html
    We show that since the LIA, in the 17th century, temperatures moved upward to reach a peak around 2004, producing, until that date, an inclined upward line. Since 2004 the top plateau has been reached, at UAH +0.25 C above their 40-year average line. This is the level at which the horizontal line of temperature evolution (the plateau) begins. This plateau will, of course, always be perforated by El Niño upward and La Niña downward spikes. Obviously, those two types of spikes, up and down, must be separated from the continuing horizontal temperature line.
    This present plateau will continue at the +0.25 C level, while those spikes abate (which is visibly the case in UAH).
    The climate-pattern analysis starts in 8,500 BC, and the calculation formulae were empirically derived from that date on. Part 1 is most important, because it shows how natural temperature swings over the past 10,000 years develop and continue after emergence from the last ice age.

    • Mr Seifert makes some interesting points about the behavior of the historical temperature record. There is plenty of evidence that such warming as has arisen in the past few decades is largely attributable to natural variability.

      However, to bring the climate scam to an end it is necessary to demonstrate that it arose solely from an elementary and significant error of physics. That is what we think we have done. Once our paper is published, there will of course be vigorous attempts to shoot it down: but, as far as we can see, those attempts are likely to prove unsuccessful. As Churchill once said, “This is not yet the beginning of the end: but it is perhaps the end of the beginning.” [He was fond of chiasmus].

      • If you’re going to quote Churchill there’s no excuse for not getting it right:
        “Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.”

          • And you still wouldn’t have got it right, if you put text between quotation marks you have a responsibility to be accurate. Surely they taught you that when you were studying for your diploma in Journalism Studies at UC Cardiff or when you were a cub reporter for The Yorkshire Post?

  33. Such climate equations suffer from the ‘butterfly effect’: when you change one variable slightly, the outcome can vary significantly. I don’t think it is currently possible to ‘nail down’ some of the key climate variables accurately enough to constrain the outcomes to even a reasonably reliable longer-term estimate, especially if key climate variables are non-linear.

    Both alarmists and skeptics often claim to have key climate variables ‘nailed down’ to a certain value or range, but in reality the values and ranges of these are often just plucked out of thin air.

    A long while ago I came across a paper by Hansen and others (can’t remember the title exactly-but relating to equilibrium and the energy budget in the atmosphere), where I was really determined to find out how they came to ‘nail-down’ one key climate variable. What I found, after some research, was that this key variable had been adeptly ‘nailed down’ by the following invalid process: simply by quoting another paper’s conclusions on the variable as a fact, when the other paper actually used nothing but circular assumptions.

    I could scarcely believe it, but the quoted paper’s argument went something like this: because we know the climate is out of equilibrium, the value of the atmosphere’s ‘radiative constant’ (or whatever it was; I can’t quite remember) was X. ‘X’ was just plucked out of thin air, but you would never know this from reading other papers which quoted it. The process went something like this:

    - Publish a paper (gu)estimating a key climate variable, and use this estimate to calculate important mathematically-based climate conclusions. (Often the best way to do this is to assume a related outcome is a fact when it is actually uncertain, which ‘fact’ then allows you to derive the value of the key variable.)
    - Play down or simply fail to note any assumptions behind the estimate in both the abstract and the conclusions, and certainly never mention that the ‘outcome’ may not even be true in the first place.
    - Have a like-minded colleague then quote your results and conclusions, but without reference to any of the assumptions and caveats contained in the previous paper, to come to another important conclusion.
    - Repeat the process several times, until no one notices any of the original assumptions, including the use of key variables as mathematical ‘facts’ which were originally (gu)estimated out of thin air.
    - All you need to do when using a key variable value in your new equations is to reference the papers which invalidly (gu)estimated it in the first place.

    I would have to conduct the same kind of research to test the equations and key variables in the article above, but my faith in such a process has long ago been lost.

    • In response to thingadonta, consider the following argumentum ex definitione, which begins with two definitional propositions. The conclusion of the argument follows from the definitions.

      Proposition 1: There subsists at any chosen moment of radiative equilibrium a global mean surface “equilibrium temperature” after all short-term feedbacks have acted.

      Proposition 2: In the absence of any temperature feedback, there would subsist at that chosen moment a global mean surface “reference temperature”, defined as the sum of the emission temperature of a waterbelt Earth and any natural or anthropogenic perturbations thereto.

      Conclusion: The transfer function, the ratio of equilibrium temperature to reference temperature, necessarily encompasses the entire action of the short-term temperature feedbacks on global temperature.

      In my submission, the premises are self-evidently true (there would be one temperature before feedback and there is another after feedback), and the premises self-evidently entail the conclusion.

      Put some numbers in, for science is quantitative. At today’s insolation, the emission temperature of a waterbelt Earth, making due allowance for the albedo 0.06 of the equatorial open ocean and the ice albedo 0.66 of the remaining two-thirds of the surface and also allowing for Hoelder’s inequalities between integrals, is about 255 K, coincidentally about the same as the emission temperature of a planet with mean albedo 0.3 but making no allowance for Hoelder’s inequalities.
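The 255 K figure for mean albedo 0.3 follows from the standard emission-temperature formula; here is a minimal sketch, assuming a total solar irradiance of about 1364 W/m² and omitting the latitude-weighted waterbelt calculation and the Hoelder adjustment, which the comment only summarizes:

```python
# Emission temperature T_e = (S * (1 - albedo) / (4 * sigma)) ** 0.25.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1364.0       # total solar irradiance, W m^-2 (assumed value)

def emission_temperature(albedo):
    """Effective radiating temperature of a planet with the given Bond albedo."""
    return (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(round(emission_temperature(0.3), 1))   # ~254.7 K, i.e. "about 255 K"
```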

      The reference sensitivity to the pre-industrial greenhouse gases present in 1850 is about 11.5 K, a good midrange estimate.

      The equilibrium temperature in 1850 (it was an equilibrium, for there would be no change in temperature for another 80 years) was 287.5 K (HadCRUT4).

      Therefore the transfer function in 1850 was 287.5 / (255 + 11.5), or 1.1. By messing around with the quantities a bit, one might push that to 1.3, but not much higher than that. Why? Because once one recognizes, as one must from the argumentum ex definitione, that the transfer function is the ratio not merely of equilibrium to reference sensitivities but of equilibrium to reference temperatures, even quite large uncertainties in absolute temperatures, two orders of magnitude greater than the sensitivities, entail little uncertainty in the transfer function.

      For this reason, knowing to an unattainably great precision and certainty the values of the variables that inform the transfer function expressed (as at present) merely as the ratio of sensitivities becomes unnecessary once one deploys the mainstream equation for the transfer function, expressing it as the ratio of absolute temperatures. Even with imperfect knowledge of these quantities, it is possible to derive and constrain the transfer function and hence all equilibrium sensitivities, which turn out to be approximately one-third of current midrange estimates.
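The arithmetic of the argument, with the comment's own illustrative numbers, can be laid out in a few lines (a sketch of the stated ratios, not of the underlying physics):

```python
# Transfer function as the ratio of equilibrium to reference temperature,
# using the 1850 values quoted in the comment above.
emission_temp = 255.0      # K, waterbelt emission temperature (as stated)
preindustrial_ghg = 11.5   # K, reference sensitivity to pre-industrial GHGs
equilibrium_1850 = 287.5   # K, HadCRUT4 figure cited in the comment

reference_1850 = emission_temp + preindustrial_ghg   # 266.5 K
transfer = equilibrium_1850 / reference_1850

print(round(transfer, 2))   # ~1.08, the "1.1" of the comment
```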

      • The equilibrium temperature in 1850 (it was an equilibrium, for there would be no change in temperature for another 80 years) was 287.5 K (HadCRUT4).

        . . . the transfer function is the ratio not merely of equilibrium to reference sensitivities but of equilibrium to reference temperatures . . .

        Pardon me if my questions are stolid . . . layman here. It would seem your argument depends upon at least the presupposition of an equilibrium temperature as well as your transfer function remaining constant.

        If so, is it enough to establish the assumption of an equilibrium temperature that there “would be no change in temperature for another 80 years,” or has this assumption been established in some other way? Why does an equilibrium temperature necessarily follow from no change in temperature for any particular stretch of time, especially in this case (or so it seems to me) for such a short historical time period? Could we assume a new equilibrium temperature if, say, in some future time period temperature remained constant for 50 years? What about 40, or 30, or 20, etc?

        Further, given the assumption of an equilibrium temperature is established (regardless of time period), is it then enough to claim that the reference period in question (1850) is a proper reference point from which to calculate your transfer function (and I’m assuming this function determines the sensitivity of temperature to CO2, along with other feedbacks, and therefore must remain constant to be valid), or are there other possible past reference points that might also be just as valid? If there are other valid reference points, couldn’t the ratios in the transfer function change? In other words, when you say there is “little uncertainty in the transfer function,” does the lack of uncertainty follow necessarily from accepting as true the assumption of an equilibrium temperature in 1850, or is the lack of uncertainty applicable to any period where a different equilibrium temperature might be derivable (if such a thing is possible)?

        Or have I missed the boat entirely here?

        Thanks for your patience.

        • Sycomputing raises some sensible questions, which can be answered quite simply a posteriori.

          Consider the position in 1850, with the following illustrative data: emission temperature 255 K (the usual value adopted by climatology, albeit that it ignores Hoelder’s inequalities between integrals); preindustrial reference sensitivity to noncondensing greenhouse gases 11 K; equilibrium temperature in 1850 287 K. The transfer function is then 287 / (255 + 11), or 1.1.

          Suppose the temperature in 1850 had not yet reached equilibrium following the Little Ice Age. Suppose there was another 1 K to go before equilibrium was attained. In that event the transfer function is 288 / 266, or 1.1.

          The point is that the moment one accepts, as one must, that the transfer function is not only the ratio of minuscule sensitivities but also of absolute temperatures two orders of magnitude greater than the sensitivities, even large uncertainty in the value of equilibrium or reference temperature entails only a small uncertainty in the transfer function and hence in equilibrium sensitivity.
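The insensitivity claim in the last paragraph is easy to check numerically: perturb both temperatures by a kelvin either way and watch the ratio. A sketch with the comment's illustrative 287 K and 266 K values:

```python
# Spread of the transfer function when the equilibrium and reference
# temperatures are each perturbed by +/-1 K.
base_eq, base_ref = 287.0, 266.0   # K, illustrative values from the comment

ratios = [(base_eq + d_eq) / (base_ref + d_ref)
          for d_eq in (-1.0, 0.0, 1.0)
          for d_ref in (-1.0, 0.0, 1.0)]

spread = max(ratios) - min(ratios)
print(round(min(ratios), 3), round(max(ratios), 3), round(spread, 3))
```

A 1 K uncertainty in either absolute temperature moves the ratio by well under 1%, which is the sense in which large absolute-temperature uncertainties entail small transfer-function uncertainty; whether 1850 was truly an equilibrium is a separate question.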

          • Actually, if you have another moment and the question isn’t so dense as to deserve a hearty rebuking, I’m still puzzled as to the following:

            Why does an equilibrium temperature necessarily follow from no change in temperature for any particular stretch of time, especially in this case (or so it seems to me) for such a short historical time period? Could we assume a new equilibrium temperature if, say, in some future time period temperature remained constant for 50 years? What about 40, or 30, or 20, etc?

          • I have already answered Sycomputing’s point, by explaining that even if the global mean surface temperature were not in equilibrium in 1850 the value of the transfer function would not be much altered, because the absolute temperatures of which it is the ratio are so large, while the subsequent anthropogenic perturbation is two orders of magnitude smaller by comparison.

          • I have already answered Sycomputing’s point . . .

            Well I thank you for the flattery, but I wouldn’t reference anything I’ve said heretofore as attempting to make a point, rather, my ignorance of the subject matter only allows questions.

            It would seem a different question to query one about the very assumption of an “equilibrium temperature” versus to query one about using said assumption for any particular purpose, e.g., deriving a transfer function.

            Nevertheless, I appreciate your time. I see there are other avenues for further study I may choose.

          • Sycomputing should understand that our objective is to derive sensible and scientifically defensible estimates of equilibrium sensitivity. The moment one accepts, as one must, that such feedback processes as are present at a given moment must perforce act not only upon the anthropogenic perturbation of emission temperature but also upon the pre-industrial, natural perturbation forced by the presence of the noncondensing greenhouse gases already present in 1850, and also upon the emission temperature that obtains owing to the observable fact that the Sun is shining, all one needs is an approximate idea of what the reference and equilibrium temperatures were in 1850.

            If the temperature was not at equilibrium in 1850, the implication is that not all of the warming caused by the recovery of solar activity following the Maunder Minimum has yet come through, in which event some fraction of the warming that has occurred since 1850 was natural and not anthropogenic. Reducing the anthropogenic component reduces climate sensitivity below the values we have obtained by assuming, ad argumentum and per impossibile, that the climate was indeed at equilibrium in 1850 and also that all warming since then was anthropogenic.

          • Thanks again for your time. I’d wish you luck on your endeavor, but since luck is a myth – Godspeed!

  34. send the paper out for review again. I’ll keep you posted. If we’re right, Charney sensitivity (equilibrium sensitivity to doubled CO2) will be 1.2 [1.1, 1.3] C°, far too little to matter, and not, as the models currently imagine, 3.4 [2.1, 4.7] C°, and that, scientifically speaking, will be the end of the climate scam.

    And hopefully in the real world too.

    “At today’s insolation, the emission temperature of a waterbelt Earth, making due allowance for the albedo 0.06 of the equatorial open ocean and the ice albedo 0.66 of the remaining two-thirds of the surface and also allowing for Hoelder’s inequalities between integrals, is about 255 K, coincidentally about the same as the emission temperature of a planet with mean albedo 0.3 but making no allowance for Hoelder’s inequalities.”
    That’s better, I was wondering about the previous
    ” Starting with an ice planet of albedo 0.66, the global mean surface temperature (which is also the emission temperature in the absence of any forcing or temperature feedback) would be 221.5 K. Add to that the reference sensitivity of about 11.5 K to the noncondensing greenhouse gases as they were in 1850. Define 233 K, therefore, as the reference temperature (before accounting for feedback) in 1850.”
    and “equilibrium global mean surface temperature In 1850, before any anthropogenic perturbation, that temperature was the observed temperature of about 287.5 K.”

    For the layman, then:
    The Earth’s temperature, minus GHGs and the extra water/ice albedo, would be like the Moon’s. Black-body temperature: Moon 270.4 K, Earth 254.0 K; the difference is due to albedo (geometric albedo: Moon 0.12, Earth 0.434).
    Real temperature 1850: 287.5 K, or 14.5 C?
    Real temperature 2018: 288.5 K, or 15.5 C?

    This raises the interesting point that, if the emission temperature is merely the incident energy less the reflected energy for all planets, what part does CO2 actually play?
    Of course the trap here is that the layer of atmosphere containing CO2 adjacent to the surface is only part of the whole surface layer, which can have parts hotter [land] or colder [sea], plus a variable atmosphere, and can be quite different from the temperature at the putative emission height.

    • In answer to Angech, the emission temperature is derived before taking account of any forcings from greenhouse gases etc., and before taking account of any feedbacks.

  36. My Lord of the Rings,
    The same flawed paper you posted many times that Dr. Roy Spencer rejected?

    “Just the place for a Snark!” the Bellman cried,
    As he landed his crew with care;
    Supporting each man on the top of the tide
    By a finger entwined in his hair.

    “Just the place for a Snark! I have said it twice:
    That alone should encourage the crew.
    Just the place for a Snark! I have said it thrice:
    What I tell you three times is true.”

    – The Hunting of the Snark

    • The furtively pseudonymous “Dr Strangelove”, from behind a craven security blanket of anonymity, says that our paper is “flawed”. But he has not read it. And he says, on no evidence, that Dr Spencer rejected it. Dr Spencer has not reviewed it, though he has commented that he does not think feedbacks respond to the entire, absolute reference temperature. Well, they do.

      Perhaps “Dr Strangelove” would be kind enough to reveal who it is, and to state whether it has read the latest draft of our paper [hint: he hasn’t], and to state why, not having read it, it considers itself in a position to make pronouncements on whether it is flawed, and to state what flaws the paper it has not read contains, and how it knows that the paper it has not read contains flaws. It is out of its depth here.

      • I see my Lord of the Rings has not corrected the error that Dr. Spencer reported. No wonder his paper has been rejected repeatedly.

        “Insanity is doing the same thing over and over again and expecting different results.”
        – Albert Einstein

        Is it Lewis Carroll or Alice Liddell? Only the Cheshire Cat knows 🙂

        • And what error is it that the furtively pseudonymous “Dr Strangelove” conceives that Dr Spencer said we had made, and what evidence does it offer that, if there were such an error, it has survived in the current draft of our paper [which it has not read]?

  37. So, basically, by doing the total math and creating a conversion factor we find that the entire claim that CO2 increases atmospheric temperature is well within the margin of error of historic measurements of total solar irradiation heating of the Earth.

    Can we move on now to pointing out that environmental destruction is causing the increases in global CO2 concentrations?

  38. I join Lord Monckton in believing that, to the extent that it’s a meaningful concept, equilibrium climate sensitivity is low. But Lord Monckton’s theory is a poor argument for that proposition.

    Suppose that in the previous second a car has traveled 100 feet along a highway. Of course, you don’t know how far it will travel in the next second. But what would your estimate be?

    Since it traveled 100 feet in the previous second, 100 feet would be my estimate; that is, I would base my extrapolation on the local slope of distance as a function of time, i.e., on a ratio of “perturbations.” If we followed the logic of the “end of the global warming scam in a single slide” in Lord Monckton’s WUWT piece from last summer, though, we would base the extrapolation on average slope rather than local slope: if the car has traveled 5000 feet in the entire 100 seconds since it left the garage, for example, we would estimate 50 feet instead of 100 feet. That means we would assume a deceleration of about 3 g’s.

    To see that, consider Lord Monckton’s slide. Equilibrium temperature E (analogous to the distance the car traveled) changes by 1.0 K for the previous 0.7 K change in what he calls the “reference” temperature R (analogous to the time since the car started). For the previous 0.34 K change in reference temperature R, that is, the equilibrium temperature E changed about 1.0/0.7 × 0.34 K = 0.49 K. But the slide says that instead of another 0.49 K for the next 1.04 K – 0.7 K = 0.34 K change in reference temperature R the equilibrium temperature E will change by only 1.17 K – 1.0 K = 0.17 K.

    Thus replacing local slope (“perturbation” ratio) with average slope (ratio of “entire” quantities) yields a highly questionable result. Why should E change by 0.17 K for one 0.34 K change in R when it changed by 0.49 K for the immediately adjacent 0.34 K change in R?
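Mr Born's two extrapolation rules can be written out directly (his numbers; nothing here decides which rule is physically right):

```python
# Car analogy: 5000 ft covered in 100 s overall, 100 ft in the last second.
average_rule = 5000 / 100          # 50 ft predicted for the next second
local_rule = 100                   # 100 ft predicted for the next second

# The slide: E rose 1.0 K while R rose 0.7 K; R is about to rise 0.34 K more.
local_slide = (1.0 / 0.7) * 0.34   # ~0.49 K, extrapolating the local slope
slide_figure = 1.17 - 1.0          # 0.17 K, the slide's stated increment

print(average_rule, round(local_slide, 2), round(slide_figure, 2))
```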

    Of course, Monckton-theory partisans will argue that the analogy is flawed because the car scenario includes no feedback. But they won’t be able to explain how that distinction makes a difference. If Lord Monckton ever does reveal his feedback-implementing “test rig,” moreover, we will see that the normal, “perturbation” approach he criticizes works just fine. And it is straightforward to show that the “perturbation” approach works better than his proposed entire-quantity approach when the test rig is nonlinear, as the earth system is. For all Lord Monckton’s talk of test rigs and professors of applied control theory, he has provided little in the way of specifics.

    In contexts such as this, Lord Monckton is wont to divert attention from the question at hand. He may observe, for instance, that using local slope on this slide’s numbers, as I would, rather than the average slope, as he does, would still imply a relatively low equilibrium climate sensitivity (“ECS”). And so it would. Moreover, there are indeed many good reasons for believing that ECS is low.

    But none of this establishes that it is an error, as Lord Monckton contends, to employ perturbations, much less that doing so is the error responsible for the high ECS values in IPCC reports. For all his talk of logic, Lord Monckton leaves a yawning logical gap between his arguments and the conclusions he would have us draw from them.

    I am sorry to hear that Lord Monckton has been ill, and I hope he is indeed recovering and will have “a much stronger year than last year.” But I also hope his renewed strength will enable him to acknowledge his theory’s errors and disabuse WUWT denizens of the misapprehensions into which he has led so many of them.

      • Mr Born is confused, and adds to his own confusion by using an analogy which, like all analogies, breaks down at some point.

        Let us start with the fact – long enough established both in theory and by millions of empirical demonstrations over the past century – that such feedback processes as subsist in a dynamical system at a given moment will perforce respond to the entire reference signal then present, and not merely to some arbitrary fraction thereof.

        Consider the position in 1850. The initial emission temperature (taking ad argumentum the extreme starting-point of an icebound Earth at today’s insolation, which is an impossibility) is 221.6 K. The reference sensitivity to the pre-industrial, naturally-occurring greenhouse gases (again taking the extreme low estimate) is 8.9 K. The reference temperature is the sum of these two: say, 230.5 K. But the equilibrium temperature in 1850 was 287.5 K. The ratio of equilibrium to reference temperature in 1850, the transfer function that encompasses the entire action of feedback on climate, was thus 1.3. Using more realistic values, make that 1.1. The transfer-function interval in 1850 thus fell on [1.1, 1.3].

        Now add 0.7 K midrange anthropogenic reference sensitivity from 1850-2011, deducible from IPCC (2013, fig. SPM.5), to reference temperature, bringing it up to 231.2 K. To establish the transient period sensitivity, take the 0.7 K reference sensitivity and multiply it by 2.3 / (2.3 – 0.6), where 2.3 Watts per square meter was IPCC’s midrange estimate of net period anthropogenic forcing, while 0.6 Watts per square meter was the midrange estimate of the radiative imbalance to 2010 (Smith+, 2015). The transient sensitivity, then, was 0.9 K. Add 10% to yield equilibrium sensitivity, and one gets around 1.0 K.

        Add this 1.0 K to the 287.5 K equilibrium temperature in 1850 to get the 288.5 K equilibrium temperature applicable to 2011. And what is 288.5 / 231.2? It is 1.3, a transfer function identical to that which obtained in 1850.

        Compare this with official climatology’s approach, which would give a transfer function 1 / 0.7 = 1.4 in 2011. That is not much greater than the 1.3 derived using the mainstream transfer function, but it is subject to far greater uncertainty.
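The arithmetic in the foregoing paragraphs can be reproduced in a few lines (a sketch; all values are those stated above, and the ~1.25 ratios are what the text rounds to 1.3):

```python
# Transfer function in 1850, per the comment's own figures.
R_1850 = 221.6 + 8.9          # reference temperature, K (emission temp + GHG sensitivity)
E_1850 = 287.5                # equilibrium temperature, K
A_1850 = E_1850 / R_1850      # ~1.25

# Industrial-era (1850-2011) equilibrium sensitivity.
dR = 0.7                              # anthropogenic reference sensitivity, K
transient = dR * 2.3 / (2.3 - 0.6)    # ~0.95 K transient sensitivity
equilibrium = transient * 1.1         # "around 1.0 K" after the 10% uplift

# Transfer function in 2011.
R_2011 = R_1850 + dR          # 231.2 K
E_2011 = E_1850 + 1.0         # 288.5 K
A_2011 = E_2011 / R_2011      # ~1.25, near-identical to the 1850 value

# Climatology's variant: the ratio of sensitivities.
A_climatology = 1.0 / 0.7     # ~1.43
```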

        For one thing, we do not know what fraction of the warming from 1850-2011 was anthropogenic. Our calculations assume ad argumentum that it was 100%: but only 0.3% of the abstracts of 11,944 papers on climate and related topics published in the learned journals in the 21 years 1991-2011 stated that at least half of the warming of recent decades was anthropogenic.

        For another, we do not know how much delay is caused by the very large heat capacity of the ocean. If there is little delay, there is little difference between transient and equilibrium sensitivity. If, however, the delay may be hundreds of years, as Mr Stokes argues upthread, then some of the warming we are now seeing may be the result of the restoration of global temperatures following the end of the Little Ice Age.

        It is precisely owing to complexities of this kind that one cannot place much reliance on equilibrium sensitivities derived from climatology’s current transfer function.

        It is self-evident that in a linear, time-invariant feedback regime the transfer function used by climatology and the mainstream transfer function will be identical, but that where the regime is nonlinear there will be a difference between the two transfer functions.

        We can, however, start with the reliable interval [1.1, 1.3] for the mainstream transfer function to 1850 and again to 2011; and we can observe that, even if we take the extreme position of imagining that all of the warming of recent decades was anthropogenic (for which, as I have said, there is remarkably little explicit support in the journals), the transfer function obtained climatology’s way is just 1.4 or thereby in 2011, suggesting that the nonlinearity in the feedback regime is small.

        Since all of the short-acting temperature feedbacks relevant to the derivation of equilibrium sensitivity self-cancel except that of water vapor, one need only test the extent to which the water vapor feedback (to some extent countervailed against by the lapse-rate feedback) is nonlinear.

        Though the Clausius-Clapeyron relation mandates that there should be 7% more water vapor in the atmosphere per Kelvin of warming, a mildly nonlinear response, water vapor (unlike CO2) is not a well-mixed greenhouse gas. In the upper mid-troposphere, at a pressure altitude of 300 mb, the burden of specific humidity is not rising by 7% per Kelvin, as it is at the surface. It is falling, directly contrary to the predictions of the models (see Paltridge 2009 for an interesting discussion). Therefore, the model-predicted tropical mid-troposphere hot spot, without which there cannot be a large water vapor feedback, does not in fact exist: temperature is rising no faster in the tropical mid-troposphere than at the surface. See, for instance, McKitrick & Christy 2017 for the latest discussion of this glaring discrepancy between prediction and observation.

        Why is this consideration of the water-vapor feedback important to the present discussion? Without a large water vapor feedback, there is no reason to suppose that a markedly nonlinear departure from the transfer function on [1.1, 1.3] derivable for 1850 and again for 2011 using the mainstream transfer-function equation will occur. Therefore, the expectation is that the transfer function likely to obtain in the remainder of the 21st century will be little different from that which currently obtains – and the ballpark value for that transfer function is only reliably discernible using the mainstream transfer function.

        Why is this? The reason is that even quite large uncertainties in the absolute temperatures whose ratio is the mainstream transfer function entail only small uncertainties in that transfer function, because the absolute temperatures exceed the sensitivities by two orders of magnitude. By contrast, even quite small uncertainties in the minuscule sensitivities whose ratio is climatology’s variant transfer function entail the large uncertainty therein that is the principal cause of the extraordinarily broad [1.5, 4.7] K interval of official Charney sensitivities.
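The uncertainty-propagation point can be illustrated numerically. In this sketch the ±1 K and ±0.2 K error bars are illustrative assumptions, not figures from the comment; only the central values are taken from it:

```python
import math

def ratio_rel_unc(x, dx, y, dy):
    """Relative uncertainty of the ratio x/y, combining the two
    relative uncertainties in quadrature."""
    return math.hypot(dx / x, dy / y)

# Absolute temperatures: large values, so even a 1 K error is a small
# relative error, and the ratio's uncertainty is well under 1%.
big = ratio_rel_unc(287.5, 1.0, 230.5, 1.0)

# Sensitivities: tiny values, so a modest 0.2 K error is a large relative
# error, and the ratio's uncertainty is tens of percent.
small = ratio_rel_unc(1.0, 0.2, 0.7, 0.2)

print(big, small)   # ~0.006 vs ~0.35
```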

        Mr Born, therefore, is perhaps being unduly hasty in finding our result wanting and in demanding, on no sound evidence and on nothing but an uncommonly confused argument, that we should recant.

        He should recall that among my co-authors are a professor of applied control theory, a professor of climatology, a professor of statistics, an award-winning solar astrophysicist, several control engineers, an engineer who teaches control theory where it matters, at a nuclear power station, etc., etc. The undoubted expertise of our team does not guarantee that we are right, but it does mean that there is a statable case that the argument we are putting forward is not as unsoundly founded as Mr Born, with no particular expertise or experience in any relevant field, would like readers here to believe.

        • A very poor analogy indeed.
          His car has just descended a 1 in 30 hill and is fast approaching a 1 in 10 uphill section of road.
          What use his “local” trend now?

          Or you could use Solar Cycles 22 & 23 compared to Solar Cycles 24 & 25.

          • Even a vestigial grasp of high-school physics enables one to see that A C Osborn’s example actually proves my point.

            Since all Mr. Osborn has given us is the road’s grades, let’s assume it’s a frictionless track. About 94 seconds from the start, the vehicle will have traveled 4711 feet, 100 feet of that in the previous second. That means my “perturbation” method predicts 100 feet for the next second, whereas Lord Monckton’s entire-quantity method predicts 50.3 feet.

            If the grade suddenly changes from 1 in 30 down to 1 in 10 up as Mr. Osborn says, the actual distance over that next second will be 98.9 feet. So Lord Monckton’s approach results in over forty times as much error as my conventional approach.

            I leave to others to judge how Lord Monckton’s approval (complete with the gratuitous Latin) of Mr. Osborn’s aperçu reflects on his grasp of the subject.
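Mr. Born's figures can be reconstructed under the stated frictionless, small-angle assumptions (the ~93.7 s start time is an assumption chosen here to match the quoted "about 94 seconds"):

```python
# Frictionless track, small-angle approximation: acceleration = g * grade.
g = 32.2                 # ft/s^2
a_down = g / 30          # downhill acceleration on the 1 in 30 grade
t = 93.7                 # seconds since the start (assumed, to match ~94 s)

dist = 0.5 * a_down * t**2                       # ~4711 ft total
prev_sec = dist - 0.5 * a_down * (t - 1)**2      # ~100 ft in the previous second

# The two predictions for the next second:
pred_local = prev_sec    # perturbation (local slope): ~100 ft
pred_avg = dist / t      # entire-quantity (average slope): ~50.3 ft

# Actual distance in the next second if the grade flips to 1 in 10 uphill:
v = a_down * t           # current speed
a_up = g / 10            # uphill deceleration
actual = v - 0.5 * a_up  # ~98.9 ft

err_local = abs(actual - pred_local)   # ~1.1 ft
err_avg = abs(actual - pred_avg)       # ~48.7 ft, over forty times larger
```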


        • As is his wont, Lord Monckton evades the central issue and jumps to another—where he begs the question.

          Lord Monckton’s theory is that the reason for high ECS estimates is that the IPCC bases its calculations on “perturbations” rather than “entire” quantities. In “the end of the global warming scam in a single slide” he has summarized the entire-quantity approach he would use instead of the perturbation approach.

          In that slide he says that the with-feedback temperature E changed by 1.0/0.7 = 1.43 times the corresponding change in the without-feedback temperature R as R rose from 254.8 to 255.5. By contending that it’s better to extrapolate by using average slope than, as is more typical, by using local slope (perturbation ratio), he draws the implausible inference that the further change in E corresponding to the further change in R from 255.5 to 255.84 will not be 1.43 times that 0.34 K change in R but will only be 0.5 times that change: 1.17 K – 1.0 K = 0.17 K.

          He evaded the obvious question: Does he really believe it’s plausible that E would change by only half the change in R from 255.5 to 255.84 when it changed by 1.43 times the change in R from 254.8 to 255.5? What brought about this abrupt change in the relationship between “reference” temperature R and equilibrium temperature E?

          Instead of addressing that obvious question directly he went off into various reasons why ECS should be low and how he came up with the slide numbers (or what he’s using in their stead these days; he’s pretty whimsical about the values he uses). But the issue isn’t whether ECS is low, it’s whether Lord Monckton’s views of feedback theory justify concluding that the IPCC’s erroneous use of perturbations is the reason for the IPCC’s high ECS values.

          And on that issue he fails. We just showed, in fact, that using perturbations as he contends the IPCC uses them doesn’t result in ECS values nearly as high as what the IPCC gets.

          Moreover, he has based a lot of his arguments over the past year on Hendrik Bode and feedback in electronic circuits, and his comment above again trots out that tired old makeweight that “among my co-authors are a professor of applied control theory, a professor of climatology, a professor of statistics, an award-winning solar astrophysicist, several control engineers, an engineer who teaches control theory where it matters, at a nuclear power station, etc., etc.”

          Really, he should put up or shut up. I could lay out a feedback-amplifier circuit illustrating that the perturbation approach is superior to his, for example, and he could have his platoon of co-authors make the case that I’m all wet. That should be easy, shouldn’t it? I mean, all those eminences against a single tired old lawyer. How could they lose?

          Unless, of course, they aren’t all he makes them out to be. We are entitled to speculate that this is why he won’t accept that challenge.

          Now, among his confused ramblings there actually is an additional argument that we could clean up into something that’s at least internally consistent. It would go something like the following.

          The extrapolation slope is a ratio of with- to without-feedback temperatures. Noise in the with-feedback temperature E, i.e., in the numerator of that ratio, detracts much more from slope accuracy if the denominator is a small, perturbation value than if it’s a large, entire-quantity value. So it’s better to use Lord Monckton’s large-signal, entire-quantity-based slope value even if in the absence of noise the small-signal, perturbation-based slope value would be a little more accurate.

          That noise argument has a surface plausibility, but as a justification for substituting Lord Monckton’s approach it begs the ultimate question: it assumes that ECS isn’t high.

          To appreciate this we need to step back and recognize that “climatology” doesn’t really estimate ECS in the way Lord Monckton claims it does. By assuming, e.g., net-positive water-vapor feedback it instead calculates high ECS values from forcings three or more times carbon-dioxide doubling’s direct effect, necessarily making the change in with-feedback temperature E three or more times the (small) change in the without-feedback temperature R.

          In other words, high water-vapor feedback necessarily implies that the small-signal slope of E as a function of R is high even if the large-signal slope E/R is only, say, 1.13. Conversely, therefore, the noise argument’s assumption that if measured accurately the function’s real small-signal slope wouldn’t be much greater than its large-signal slope is tantamount to assuming the ultimate conclusion, which is that ECS is low. In other words, Lord Monckton’s noise argument begs the question.
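The noise argument's surface plausibility can be seen numerically. In this sketch the ±0.1 K noise figure is an illustrative assumption; the denominators are the slide's perturbation (0.7 K) and entire-quantity (254.8 K) values:

```python
# The same noise in the numerator E perturbs the perturbation-based slope
# far more than the entire-quantity slope, simply because the two
# denominators differ by more than two orders of magnitude.
noise = 0.1   # K, assumed measurement noise in E

# Change in each slope estimate caused by the noise:
slope_pert = (1.0 + noise) / 0.7 - 1.0 / 0.7             # ~0.14
slope_entire = (287.5 + noise) / 254.8 - 287.5 / 254.8   # ~0.0004

print(slope_pert / slope_entire)   # the perturbation slope is ~360x as sensitive
```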

          If anyone is really interested in digging into Lord Monckton’s theory with an open mind, he could do worse than start by using values from “the end of the global warming scam in a single slide” to make a graph of E as a function of R and asking himself whether that function is even remotely plausible. Then he could apply both Lord Monckton’s entire-quantity extrapolation technique and the conventional perturbation extrapolation technique to an electronic amplifier that has nonlinear feedback and observe which approach works better.

          Spoiler alert: the conventional technique will be superior.

          • Mr Born does not seem to wish to get to grips properly with our approach. As I had carefully explained in my earlier answer to him, the difficulty with using climatology’s unduly restrictive definition of the transfer function is that the values of the sensitivities of which it is the ratio are so small that even small uncertainties in those sensitivities entail so large an uncertainty in the transfer function as to render it unconstrainable, which is why the interval of equilibrium sensitivities has remained so very large for so very long. Using climatology’s variant, one can tweak the numbers to obtain any desired result.

            Climatology has not hitherto understood that the transfer function is also expressible as the ratio of absolute output to reference signals. Since the absolute equilibrium and reference temperatures exceed the sensitivities by two orders of magnitude, even quite large uncertainties in those absolute temperatures entail only a quite small uncertainty in the transfer function.

            In our submission, therefore, climatology should accept that the definition of “climate feedback” on p. 1450 of the 2013 Fifth Assessment Report is incorrect, in that it does not encompass the derivation of the transfer function as the ratio of absolute temperatures.

            One can then obtain a reliable transfer function for 1850: it falls on the interval [1.1, 1.3], depending chiefly on the view one takes of the magnitude of the initial emission temperature: it is 1.1, for instance, if the usual 255 K is adopted.

            The question remains whether the nonlinearities that subsist in the temperature feedback regime are such as to deliver a transfer function significantly furth of the interval which can be demonstrated to have obtained in 1850. Considering that the magnitude of the anthropogenic reference sensitivity is of order 0.7 K, it would be necessary to posit some nonlinearity extravagant enough to engender a transfer function significantly different from that which obtained in 1850. Using climatology’s transfer function, that simply cannot be done, because the uncertainties in the tiny industrial-era sensitivities, and in the radiative imbalance, etc., etc., are simply too large. We do not even know what fraction of the industrial-era warming is anthropogenic: official climatology assumes that all of it was anthropogenic, but only 0.3% of climate-related papers say that.

            The difficulty that official climatology now faces is this. Given that the global temperature in 1850 appears to have been at equilibrium, in that there was no trend for 80 years thereafter, and given that the transfer function in 1850 was small, what evidence is there that a small perturbation of a mere 0.7 K in global mean surface temperature will engender so large a disequilibrium as to increase the equilibrium rate of global warming from 1.3 K per CO2 doubling to 3.4 or 4.7 or even 10 K?

            If official climatology had realized that the transfer function in 1850 was at most 1.3, would it have dared to suggest that the transfer function is now, by some magical process, 3.4? Of course not.

            And consider the difficulty faced by the modelers of NASA GISS. Their view was that the pre-industrial transfer function was 4.0 and that that is the value that should also obtain henceforth. But the industrial-era transfer function is about 1.3 (one does not know exactly, because the uncertainties are too great). However, it really does not seem to have been as high as 4.0. How can one legitimately justify a progression of transfer functions 4.0, then 1.3 or thereby, then 4.0 again?

            With respect, it seems that those who follow what Mr Born is pleased to call “the conventional technique” have a lot of questions to answer. And, whether he likes it or not, the mainstream definition of the transfer function is to the effect that it is the ratio not merely of a perturbation of the output signal to some arbitrarily-selected perturbation of the reference signal but also of the absolute output signal to the absolute reference signal. See any elementary textbook of control theory. That, like it or not, is the conventional definition; and official climatology, in having borrowed feedback theory from control theory, did not realize it.

            the mainstream definition of the transfer function is to the effect that it is the ratio not merely of a perturbation of the output signal to some arbitrarily-selected perturbation of the reference signal but also of the absolute output signal to the absolute reference signal. See any elementary textbook of control theory. That, like it or not, is the conventional definition; and official climatology, in having borrowed feedback theory from control theory, did not realize it.

            There’s little to be gained by my responding to most of Lord Monckton’s last comment. Basically, he has merely repeated his remarkable position that in a feedback context the average slope, not the local slope, should be used for extrapolation. And, of course, he has ducked my offer to lay out a circuit that will illustrate my position so that his eminent co-authors can attack it.

            But for the benefit of those who have little familiarity with control theory I’ll just mention that the excerpt I quoted above completely mischaracterizes “any elementary textbook of control theory.” Although I left mine at my summer home, I’m familiar enough with it to know that (1) the large-signal treatment it does start out with does not imply that average rather than local slope should be used for extrapolation in non-linear systems and (2) it deals mostly with linearized treatments, which, for non-linear systems, are indeed based on “perturbations” rather than entire quantities.

            Mr Born seems temperamentally incapable of telling the truth. When I draw his attention, patiently, to the readily-verifiable fact that the textbook definition of feedback is to the effect that the transfer function is expressible not only as the ratio of perturbations (in climate, of sensitivities) but also as the ratio of the absolute output signal to the absolute reference signal (in climate, of equilibrium to reference temperature), he says I am saying instead something quite different: i.e., that “in a feedback context the average slope, not the local slope, should be used for extrapolation”. Yet nothing in the comment by me to which he was replying came anywhere close to saying that.

            Instead, I pointed out that the difficulty that official climatology now faces is this. Given that the global temperature in 1850 appears to have been at equilibrium, in that there was no trend for 80 years thereafter, and given that the transfer function in 1850 was between 1.1 and 1.3, implying Charney sensitivity 1.1-1.3 K, what evidence is there that a small perturbation of a mere 0.7 K in global mean surface temperature will engender so large a nonlinearity as to increase the equilibrium rate of global warming from 1.3 K per CO2 doubling to 3.4 or 4.7 or even 10 K?

            I also pointed out that, however desirable it might be to derive the transfer function as the ratio of sensitivities, the sensitivities in question were very small and that, therefore, even small uncertainties in those sensitivities entailed very large uncertainty in the transfer function, rendering climatology’s transfer function useless in practice, however desirable it might have been in theory.

            However, precisely because the absolute temperatures exceed the sensitivities by two orders of magnitude, even quite large uncertainties in absolute temperatures entail only a small uncertainty in the transfer function. That is how we know that in 1850 the transfer function was between 1.1 and 1.3. The question that Mr Born keeps ducking remains: how can one plausibly jump from a transfer function of (at most) 1.3 in 1850 to a transfer function of 3.4 or 4 or 4.7 or even 10 in 2011? Where is the enormous nonlinearity that justifies so wildly implausible a leap in the transfer function? On this, as on much else, Mr Born is tellingly silent.

            Mr Born does, however, offer to design some sort of electronic circuit. But he is an ex-lawyer, and not a scientist and from the very outset of these exchanges he has been utterly uninterested in the truth and all too willing to write what he knows to be untruthful. Perhaps he took no oath as a lawyer to tell the truth, but in this forum he has a reputation for not telling it – as instanced by his artful mischaracterization of the comment by me to which he was replying. He has earned himself a reputation for being scientifically incompetent, ill-tempered, snide, mendacious, sneering and often vicious.

            And now he proposes to substitute his non-expertise and his ill intent for the expertise of three control theorists, one of them a tenured professor in the subject, and of a government laboratory, which has already designed and operated a perfectly satisfactory circuit, demonstrating beyond all doubt that feedbacks necessarily respond not only to arbitrarily selected perturbations of the input signal but also to the entire reference signal, including the input signal. It is of course possible to design a circuit in which feedbacks respond only to the perturbations, but there is absolutely no evidence that any such circuit represents the climate. Thanks, but no. The debate has now passed well beyond Mr Born – or, rather, well over his head. Our paper will be reviewed by competent scientists in the relevant disciplines, and that will be that.

          • Since much of Lord Monckton’s technique is to use changing and unconventional terminology to mask his physics and logic errors, he calls it “mendacious” for someone to replace his obscure terms with clear ones.

            As is readily seen in his “end of the global warming scam in a single slide,” his theory concerns the with-feedback equilibrium temperature E as a function of the without-feedback, “reference” temperature R. I merely dispelled the obscurity by calling quantities by the terms we learned in high school: the average and local slopes of that function.

            In contrast, Lord Monckton’s latest terminology for those slopes is “transfer function.” This is a non-conventional use of that term. Those of us who actually do have some familiarity with control systems know that it’s most frequently used for the ratio of a response’s Laplace, Fourier, or z transform to the stimulus’s, not, as he uses it, for a ratio of closed-loop gain to open-loop gain.

            The latest incarnation of his theory seems to be “that, however desirable it might be to derive the transfer function as the ratio of sensitivities, the sensitivities in question were very small and that, therefore, even small uncertainties in those sensitivities entailed very large uncertainty in the transfer function, rendering climatology’s transfer function useless in practice, however desirable it might have been in theory.” I note in passing that he originally based his criticism of “climatology’s transfer function”—i.e., local slope—on its failing in theory, not only in practice. It was only around the middle of last year, after I and others had pointed out the local slope’s theoretical superiority, that he quietly switched to the new explanation—although he still characterizes his paper as “explaining climatology’s error of physics in omitting from its feedback calculation the observable fact that the Sun is shining.”

            However that may be, I expressed his theory in more-accessible terms as that “in a feedback context the average slope, not the local slope, should be used for extrapolation.” Obviously, this formulation is applicable independently of whether theoretical reasons or practical ones are what led Lord Monckton to avoid local-slope-based extrapolation. Yet Lord Monckton responded to my more-accessible formulation by saying I’m “temperamentally incapable of telling the truth.”

            That’s a classic case of projection.

          • The question that Mr Born keeps ducking remains: how can one plausibly jump from a transfer function of (at most) 1.3 in 1850 to a transfer function of 3.4 or 4 or 4.7 or even 10 in 2011? Where is the enormous nonlinearity that justifies so wildly implausible a leap in the transfer function? On this, as on much else, Mr Born is tellingly silent.

            I just realized that I actually had ignored that question—because it’s a silly one. Lord Monckton may as well have asked how one can explain why Mao Zedong consistently voted Republican: the premise is wrong. There’s no “wildly implausible leap in the transfer function”; he makes it sound as though there is by comparing apples to oranges, or, more precisely, average slopes to local slopes.

            The “transfer function of (at most) 1.3 in 1850” he’s referring to is the average slope of the with-feedback equilibrium temperature E as a function of the with-feedback, “reference” temperature R, whereas the “transfer function of 3.4 or 4 or 4.7 or even 10 in 2011” he’s referring to is that function’s local slope. Lord Monckton is confusing two different quantities.

            Perhaps Lord Monckton could benefit from a little remedial math. Local and average slopes are uniformly equal only for linear functions. They are not in general equal for nonlinear functions. Suppose that E is related to R by E = kR^a, for example, where k and a are constants. Then the local slope is a times the average slope. There’s no “wildly implausible leap in the transfer function” just because the local slope doesn’t equal the average slope.

            To see this let’s take a = 3. With k = 1.738 x 10^(-5) we have the 1850 values Lord Monckton gave us in his “end of the global warming scam in a single slide”: R = 254.8 K, E = 287.55 K. And the average slope, which Lord Monckton calls a “transfer function” expressed as “the ratio of the absolute output signal to the absolute reference signal” is indeed 1.13, as the slide says.

            With the 1.04 K value given by that slide as the amount by which the “reference signal” R changes in response to a doubling of CO2, Lord Monckton’s method therefore tells us that the corresponding change in E—i.e., the equilibrium climate sensitivity—is 1.17 K even though it’s actually 3.54 K. So the fact that Lord Monckton’s technique produces a small equilibrium-climate-sensitivity value gives us little basis for believing the value is indeed that low.

            And the equilibrium climate sensitivity is 3.54 K without any “wildly implausible a leap in the transfer function.” Between the 1850 and doubled-CO2 values the average slope has changed only from 1.13 to 1.14, and the local slope has changed only from 3.39 to 3.41. The “leap” Lord Monckton thinks he sees results from his inability to distinguish between the two slope varieties.
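The cubic illustration is easy to verify (a sketch using only the values stated in the comment above):

```python
# E = k * R**a with a = 3 and k chosen to reproduce the slide's 1850 values.
k, a = 1.738e-5, 3
R0 = 254.8                 # 1850 reference temperature, K
E0 = k * R0**a             # ~287.5 K, matching the slide

R1 = R0 + 1.04             # reference temperature after CO2 doubling
E1 = k * R1**a

ecs = E1 - E0              # ~3.54 K: the model's actual sensitivity
monckton = (E0 / R0) * 1.04   # ~1.17 K: the entire-quantity extrapolation

# Both slope varieties barely move between the two states:
avg_slope_0, avg_slope_1 = E0 / R0, E1 / R1               # 1.13 -> 1.14
local_slope_0, local_slope_1 = a * avg_slope_0, a * avg_slope_1   # 3.39 -> 3.41
```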

          • Mr Born, having been caught out in yet another lie, tries to wriggle out with his characteristic snide, arrogant bluster. He is an ex-lawyer, not a scientist. He knows nothing of control theory, and certainly nothing in comparison to the tenured professor and two practitioners who are co-authors of our paper. He considers that the ratio of the output signal to the reference signal is not known in control theory as the transfer function: but, as usual, he is wrong. This term is often used in this context, which is why we use it.

            For his education, a function maps one set of variables to another. The transfer function is, therefore, inescapably a function. Get over it.

            He imagines that the reference temperature is a “with-feedback” temperature when, ex definitione, it is the temperature that would prevail in the absence of feedback. The rest of his nonsense follows from that error.

            His bluster also conceals the fact, which he is eventually going to have to confront, that the transfer function (or, if he prefers, the system-gain factor, or the Pearl of the East, for there are many names for it) is expressible as the ratio not only of a perturbation in the output signal to some arbitrary perturbation in the input signal but also of the absolute output signal to the absolute input signal. Whether he likes it or not, IPCC is in error in not including the latter part of the definition in its own definition of temperature feedback.

            Where the feedback regime is close to linear, the values of the two transfer functions are near-identical. Where it is nonlinear, they differ.

            However, even small uncertainties in the values of the sensitivities whose ratio is climatology’s transfer function lead to large uncertainties in the transfer function. That is why, on its own, it is valueless for predicting climate sensitivity. That is why the interval of equilibrium sensitivities has remained unconstrained at [1.5, 4.5] K per CO2 doubling – a whopping 3 K interval – for 40 years since Charney 1979.

            Derivation of a more credible and less broad interval of Charney sensitivities only becomes possible when one uses the mainstream transfer function, which takes account of the observable fact that the Sun is shining by including the Earth’s emission temperature and the naturally-forced warming from the pre-industrial greenhouse gases with the anthropogenically-forced warming to constitute the reference temperature (before accounting for temperature feedback). The mainstream transfer function at the temperature equilibrium in 1850 is readily found. It falls on [1.1, 1.3].

            Now consider the position from 1850-2011, in the industrial era. Using IPCC’s midrange estimates of net period anthropogenic forcing and a midrange estimate of the radiative imbalance that allows for the time-delay arising from the large heat capacity of the global ocean, it is again easy to determine climatology’s transfer function as being about 1.4. Now that, of course, is only an estimate, because the uncertainties, as previously explained to Mr Born, are very large.

            It is perfectly possible that some – perhaps even most – of the global warming since 1850 was natural. It is perfectly possible, indeed likely, that the net anthropogenic forcing, stripped of the aerosol fudge-factor that IPCC has long used as a means of keeping it down and hence trying to force up climate sensitivity, is quite a bit larger than IPCC imagines. It is possible that the radiative imbalance is smaller than IPCC’s midrange estimate: its interval is very wide.

            But even if one takes IPCC’s value as gospel, one can see that there is not much nonlinearity over the industrial era, and certainly nothing like enough to justify a midrange transfer function of 3.4, which is the CMIP5 models’ current implicit midrange estimate. The game is up.

          • I just showed that Lord Monckton’s question that “Mr Born keeps ducking” is nonsensical: I demonstrated to a mathematical certainty that Lord Monckton’s premise was wrong: the IPCC’s high equilibrium-climate-sensitivity estimate does not imply a “jump from a transfer function of (at most) 1.3 in 1850 to a transfer function of 3.4 or 4 or 4.7 or even 10 in 2011.”

            Instead of standing up like a man and admitting that he was wrong to see a “jump” in the IPCC’s “transfer function” and that his error lay in confusing average slope with local slope, he resorted to the tactic he often uses when he’s faced with an incontrovertible mathematical fact: he claims that I’ve “been caught out in yet another lie”—without, of course, ever identifying precisely what that “lie” might have been.

          • He considers that the ratio of the output signal to the reference signal is not known in control theory as the transfer function: but, as usual, he is wrong. This term is often used in this context, which is why we use it.

            What I really said was that in control theory transfer function is used most frequently for what I said it was used for. My point was that Lord Monckton uses varying and imprecise terminology to obscure that he’s doing nothing more than extrapolating E’s value as a function of R and that he’s given us no reason to believe that ordinary extrapolation techniques should not apply.

            There are indeed occasions in which transfer function is used for other things, but that’s among the evils of using it here: it adds another layer of ambiguity. And that’s why I used plain, specific language, i.e., why I expressed things in terms of a function’s average and local slopes: those are concepts that the casual reader can readily grasp.

            No, transfer function didn’t confuse me. But it does tend to make it harder for the casual readers to see Lord Monckton’s errors.

          • His bluster also conceals the fact, which he is eventually going to have to confront, that the transfer function (or, if he prefers, the system-gain factor, or the Pearl of the East, for there are many names for it) is expressible as the ratio not only of a perturbation in the output signal to some arbitrary perturbation in the input signal but also of the absolute output signal to the absolute input signal.

            What I call it is the slope of E as a function of R. I call it that to make the matter clear to as many readers as possible.

            Contrary to Lord Monckton’s assertion, moreover, I have emphasized, not concealed, the fact that slope can be either local (“the ratio . . . of a perturbation in the output signal to some arbitrary perturbation in the input signal”) or average (“the ratio . . . of the absolute output signal to the absolute input signal”). I just used plain language to do so.

            But I have also shown that Lord Monckton’s fundamental error is failing to use the two types of slope properly.

          • He knows nothing of control theory.

            It’s true that I made my living as a lawyer, not as a scientist. But I had already made a study of control systems when Lord Monckton was still a schoolboy.

            It’s true that I didn’t speak with Hendrik Bode himself at the time. But I did discuss some of the finer points of nonlinear feedback with his Bell Labs colleagues.

            It’s true that they fill books with things I don’t know about control-systems theory. But I do know more about it than most people.

            In particular, if the silly things Lord Monckton writes at this site are any indication, I know vastly more about it than he does.

          • He imagines that the reference temperature is a “with-feedback” temperature when, ex definitione, it is the temperature that would prevail in the absence of feedback. The rest of his nonsense follows from that error.

            Yes, in one instance I did slip and call R the with-feedback temperature rather than the without-feedback temperature. But three times in this thread I had already referred to R as the without-feedback temperature, and what I meant would have been apparent even if I hadn’t.

            Contrary to what Lord Monckton said, moreover, nothing “follows from that error.” You will note that, try as he might, he was unable to identify anything that did.

          • Whether he likes it or not, IPCC is in error in not including the latter part of the definition in its own definition of temperature feedback.

            On the contrary, since (as Lord Monckton describes what it does) the IPCC uses a linearized treatment, the precisely correct approach is indeed to exclude “the latter part of the definition.” Nothing in the thousands of words Lord Monckton has spewed over the past year proves otherwise.

            Lord Monckton likes to refer to the fact that I made my living as a lawyer, not as a scientist. But it is Lord Monckton, not I, who uses tactics that lawyers are so often accused of: when he can’t argue the facts, he pounds the table.

  39. My only issue with your dial is that I start to view the dial from the top…..you need to rotate the dial to the right so zero is at the top.

    I understand that a dial starts at the bottom, but having it at the top really changes your perception in my humble opinion.

    • In response to Derg, nearly all dials start at SW and go via N to SE. I do not think it would be helpful to reinvent the dial. The dial in the head posting has the values plainly marked: one can see where 0 is, for instance, and it is about a point S of SW.

  40. In order to connect the increase in CO2 concentration to a change in surface temperature, a theoretical construct known as radiative forcing has been adopted.  This was introduced into climate simulation in its present form by Manabe and Wetherald in 1967.  It is assumed, without justification or validation, that long term averages of transient, non-equilibrium variables can be analyzed as a system that is in equilibrium.  The upward and downward fluxes at some rather ill defined boundary such as an ‘average tropopause’ are assumed to be equal and equivalent.  A change in CO2 concentration is introduced to ‘perturb’ this ‘equilibrium’ and the change in flux is used to calculate a new ‘equilibrium surface temperature’.  This calculated ‘equilibrium surface temperature’ produced by such models has no physical meaning.  It really assumes that the sun is shining all the time and that the unperturbed surface is always receiving and emitting a flux of 390 W/m².  The heat capacity of the surface is set to zero.  Small, 1 to 4 W/m² changes in flux in a stratospheric layer of air at 217 K and 0.22 atm. are assumed to be capable of warming a surface at 288 K through 11 km of warmer, higher density air.  This requires a flagrant violation of the Second Law of Thermodynamics.  The cooling of the surface by convection, the conversion of IR radiation into other forms of energy and the heat capacity of the surface are also conveniently ignored.

    — Roy Clark, from “Dynamic Greenhouse Effect and the Climate Averaging Paradox”

    Consequently, I think the concepts of ‘forcing’, and ‘climate sensitivity’, and reality of GCMs are pseudoscience. Why be surprised when alarmist ‘science’ makes no sense in reality, after it previously made no sense as theory?

    • Mr Pawelek may or may not be correct about the implementation in general-circulation models of the concepts of radiative perturbation of a dynamical system and of the sensitivity of such a system to that perturbation. Our own approach accepts ad argumentum all of official climatology that we cannot demonstrate to be false, so as to concentrate on correcting those aspects of the official position that arise from errors of physics that we can demonstrate to be false.

  41. Milord,

    Congratulations on your impressive work, and for sharing it with us here. Discovering such a fundamental misunderstanding of feedback mechanisms in official climatology would indeed be quite remarkable in the history of science. Thus, good luck with your paper. I’m glad to hear that your article may be reviewed again and potentially published. Yet official climatology won’t give up so easily. I anticipate that publication of your paper may be delayed ‘ad kalendas graecas’ under any pretext. Alternatively, there will be some sort of damage control: ‘yes, there is some confusion, but it does not affect the main message of AGW’. Still, I’m sure your work will find its way to the public. Good things are not easily lost!

    • In response to Paramenter, we think it likely that many difficulties will be strewn in our path as we try to get our result published: but, in the end, if we are right, the truth will not be silenced. If we are wrong, then we shall expect a rational response from the reviewers explaining in what respect we are wrong. And that won’t be easy, because our theoretical approach is supported not only by the fact that the rate of global warming is a great deal less than originally predicted but also by the fact that, even using IPCC’s wrong definition of temperature feedback, the transfer function is only 1.4 from 1850-2011. Therefore, anyone wanting to shoot down our argument must shoot all three ducks: the error of physics made by official climatology in its definition of temperature feedback; the manifest discrepancy between prediction and observed reality; and the fact that even the IPCC’s method gives a transfer function little more than a third of the one it tries to apply to Charney sensitivity.

  42. Lord Monckton, please can you say whether your professor in control theory took into account my objections raised back in August, at https://wattsupwiththat.com/2018/08/21/temperature-tampering-temper-tantrums/#comment-2444535 ?

    In that comment, building on earlier ones, I showed – using your own equations – that small changes in the feedback rate over time lead to relatively large changes in the estimate of Charney sensitivity. This is because they get multiplied by the whole of the temperature difference from 0 to 280 K (or whatever), rather than by a smaller difference like 250 K to 280 K when using IPCC’s version of feedback. The result, using your own figures, is that sensitivity is more like 1.7 K than 1.2 K – still not too scary, provided that feedback rates do not increase in the future.

    Rich.

    • Let us do the math in reply to “see-owe to Rich”. In a feedback regime that is close to linear, the mainstream transfer function derived as the ratio of absolute temperatures will be near-identical to climatology’s variant derived as the ratio of sensitivities.

      Since emission temperature is somewhere between 210 and 260 K, and pre-industrial reference sensitivity to naturally-occurring non-condensing greenhouse gases is about 11 K, reference temperature in 1850 was between 221 and 271 K, while equilibrium temperature was 287.5 K. The transfer function expressed as ratio of equilibrium to reference temperature was, therefore, somewhere between 1.1 and 1.3.

      Now consider the de-minimis anthropogenic perturbation from 1850-2011. The net anthropogenic radiative forcing was 2.3 Watts per square meter, which, given a Planck sensitivity parameter 0.3 Kelvin per Watt per square meter, gives a period anthropogenic reference sensitivity of just 0.7 K. Now allow for the 0.6 Watts per square meter of midrange radiative imbalance given in Smith+ (2015). The equilibrium sensitivity, after allowing for that imbalance, is 0.7 x 2.3 / (2.3 – 0.6), or 0.9 K. The transfer function in 2011, expressed climatology’s way, was then 0.9 / 0.7, or 1.4, compared with (287.5 + 0.9) / (221 + 0.7) = 1.3 expressed using the mainstream transfer-function equation.

      But consider the numerous uncertainties doing things climatology’s way. The reference and equilibrium sensitivities over the period 1850-2011 are minuscule – two orders of magnitude smaller than the reference and equilibrium temperatures that mainstream control theory would start with. We do not know what the true anthropogenic forcing is: it too has been much tampered with, having been reduced compared with the original estimates so as to make climate sensitivity seem higher. We do not know what the radiative imbalance is: it is subject to a very large uncertainty. We do not even know what fraction of the warming of 0.75 K from 1850-2011 (HadCRUT4) was anthropogenic. So IPCC’s implicit midrange estimate of the period transfer function is little better than guesswork.

      It is precisely because very small uncertainties in the values of very small sensitivities entail a very large uncertainty in the transfer function that the interval of estimated Charney sensitivities has remained unconstrained at [1.5, 4.5] for 40 years since Charney (1979).

      If climatology had realized that it could derive a well-constrained interval of Charney sensitivities by using the absolute values of reference and equilibrium temperatures, it would at least have realized the ballpark in which its estimates should fall.

      We have no means of knowing whether there is anything other than a minuscule increase in the transfer function with warming, for the data are wholly inadequate to derive the necessary transfer function using climatology’s ratio of sensitivities.

      There is, therefore, no scientific basis for assuming that there will be a sudden departure from the small transfer function evident in 1850 and even in climatology’s transfer function for 1850-2011.

      The very same considerations that suggest there would be a very strong feedback response to a quite small change in the feedback regime with warming if one were to use the mainstream transfer function also suggest that any such change is likely to be minuscule.
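The arithmetic in the reply above can be checked in a few lines of Python – a sketch using only the round figures quoted there (the variable names are illustrative, not drawn from any source):

```python
# Transfer function at the 1850 equilibrium: ratio of absolute
# equilibrium temperature to absolute reference temperature.
emission_lo, emission_hi = 210.0, 260.0  # K: emission-temperature interval
preindustrial_ghg = 11.0                 # K: reference sensitivity to non-condensing GHGs
equilibrium_1850 = 287.5                 # K

tf_1850_lo = equilibrium_1850 / (emission_hi + preindustrial_ghg)  # ~1.06
tf_1850_hi = equilibrium_1850 / (emission_lo + preindustrial_ghg)  # ~1.30

# Climatology's transfer function for 1850-2011: ratio of equilibrium
# sensitivity to reference sensitivity over the period.
forcing = 2.3     # W/m2: net anthropogenic forcing, 1850-2011
planck = 0.3      # K per W/m2: Planck sensitivity parameter
imbalance = 0.6   # W/m2: midrange radiative imbalance (Smith+ 2015)

ref_sens = forcing * planck                           # ~0.7 K
eq_sens = ref_sens * forcing / (forcing - imbalance)  # ~0.9 K
tf_period = eq_sens / ref_sens                        # ~1.35
```

On these inputs the 1850 transfer function falls on roughly [1.1, 1.3] and the period transfer function comes out at about 1.35, which the comment rounds to 1.4.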

      • Lord Monckton, I am afraid that you have not answered my question. Does anyone else here think you have, I wonder?

        In following your work I am denying myself any interest in what official climatology has done, as I merely want to understand what you have done. And I believe I have found a flaw in what you have done, which as per the comment in August referenced above, showed that a very small increase in feedback rate leads to a sizeable increase in equilibrium sensitivity S to doubled CO2. Using your very own figures, in which a feedback factor of 0.1139 might rise to 0.1147 (a small increase I’m sure you will agree) I showed that it increased S from 1.17K to 1.69K.

        Do you wish me to copy over the relevant mathematics to here, to help you understand the point?

        Or are you able to follow the link and then the mathematics?

        Or are you able to prove that feedback cannot increase by 8 parts in 1100, making my point moot?

        Or do you just want me to go away to leave you in a deluded sense of correctness of your theory?

        • Aha! The tone of the furtively pseudonymous “See-Owe to Rich” indicates that it is prejudiced.

          I have made it plain from the start that such feedbacks as subsist in the climate at any chosen moment respond to the entire reference signal, which is the sum of the emission temperature and any subsequent perturbations thereof.

          Like it or not, at the temperature equilibrium in 1850 the transfer function was 1.1-1.3. IPCC’s implicit best estimate of the industrial-era transfer function from 1850-2011, derived climatology’s way as the ratio of equilibrium to reference sensitivity, is 1.4. So let us apply that transfer function to the reference temperature of 212.6 + 11.5 + 0.7 = 224.8 K (emission temperature plus reference sensitivities to the pre-industrial and anthropogenic greenhouse-gas forcings) that obtained in 2011. That year, the equilibrium temperature, after allowing for the radiative imbalance that accounts for the delay owing to the vast heat capacity of the ocean, was 288.5 K. But 224.8 x 1.4 is equal to 314.7 K, some 26 K above the equilibrium temperature.

          It is precisely because the observed equilibrium temperature falls so far short of that implied value that we know the transfer function is not much changed from what it was in 1850. Therefore, absent some major but currently unforeseeable phase-transition in one of the feedback processes, the expectation is that the transfer function will remain much as it has been since 1850 – i.e., about 1.1-1.3.
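The reductio in the comment above can be sketched numerically in Python, using only the figures quoted there (the variable names are illustrative):

```python
# Apply climatology's implicit 1850-2011 transfer function (1.4) to the
# absolute reference temperature in 2011 and compare with observation.
emission = 212.6       # K: emission temperature
preindustrial = 11.5   # K: reference sensitivity to pre-industrial GHGs
anthropogenic = 0.7    # K: 1850-2011 anthropogenic reference sensitivity

reference_2011 = emission + preindustrial + anthropogenic  # 224.8 K
tf_period = 1.4

implied_equilibrium = reference_2011 * tf_period  # ~314.7 K
observed_equilibrium = 288.5                      # K, in 2011

excess_over_equilibrium = implied_equilibrium - observed_equilibrium  # ~26 K
excess_over_reference = implied_equilibrium - reference_2011          # ~90 K
```

The implied 314.7 K exceeds the observed 2011 equilibrium temperature by about 26 K (and the reference temperature by almost 90 K), which is the discrepancy the comment relies on.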

          • Well, sorry about the tone… Being disappointed about being effectively ignored, i.e. my specific questions not answered, does not amount to prejudice. Or rather, I am prejudiced against bad mathematics (ignoring terms because they ought to be negligible but are not).

            Then again, has your mathematics changed since last August? I am finding it hard to square your recent analysis with that previously. Heretofore you had published on WUWT a summary of your mathematical theory – would you be willing to do so again with the latest version?

            I may have more specific questions on your numbers tomorrow.

          • Lord Monckton, here indeed are more specific questions on the numbers. First, for the sake of concreteness, may we go back to your original (August 2018) numbers for reference and equilibrium temperature? They are at least inside your new wider ranges, so that should not be a problem.

            So in 1850 we had R1 = 254.8 (reference temperature in Kelvin), E1 = 287.55 (equilibrium temperature), leading to the transfer coefficient A1 = E1/R1 = 1.1285.

            In 2011 we had R2 = 255.48, E2 = 288.57, so A2 = 1.1295.

            In the year when we reach CO2 doubling from 1850, we will have values R3, E3, A3. The value of R3 used in August was 255.84. Consider 3 possible values for A3, namely A1, A2, A2+A2-A1. These lead to

            E3 = 288.72, 288.97, 289.23 respectively, and S = E3-E1 = 1.17, 1.42, 1.68 respectively.

            Now, these putative values for A3 lie very close together, when compared with your large range 1.1-1.3 at the end of your last comment, but nevertheless the values result in noticeable differences in S (Equilibrium Climate Sensitivity (Charney) to doubled CO2).

            What is your position on the value of A3 and its uncertainty, and does the latest draft of your paper express that position clearly? (Again, if you were kindly to post the draft or a mathematical summary of it, that would become evident.)
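The three scenarios above can be reproduced with a short Python sketch, using the commenter’s own figures throughout:

```python
# Extrapolating the equilibrium temperature E from the reference
# temperature R under three assumptions about the transfer coefficient A.
R1, E1 = 254.8, 287.55   # 1850 reference and equilibrium temperatures (K)
R2, E2 = 255.48, 288.57  # 2011 values (K)
R3 = 255.84              # reference temperature at CO2 doubling (K)

A1 = E1 / R1             # ~1.1285
A2 = E2 / R2             # ~1.1295

# A3 held at A1, held at A2, or linearly extrapolated to 2*A2 - A1.
sensitivities = []
for A3 in (A1, A2, 2 * A2 - A1):
    E3 = A3 * R3
    sensitivities.append(round(E3 - E1, 2))  # Charney sensitivity S = E3 - E1
# sensitivities comes out close to [1.17, 1.42, 1.68]: near-identical values
# of A3 yield noticeably different Charney sensitivities.
```

The point of the comment survives the sketch: the three candidate A3 values differ only in the third decimal place, yet the resulting sensitivities span roughly half a kelvin.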

  43. Lord Monckton,
    There is no inconsistency in the IPCC data which you show. They are self-consistent within the epistemic structure of the IPCC. The amplification factor which you have calculated was founded – via your own construct – on equilibrium temperature. As several people have already pointed out, you cannot hope to apply this factor to calculate a transient temperature in 2100.

    A fairer consistency test can be carried out by noting that the median Transient Climate Response (TCR) from the CMIP5 models is around 1.8 deg C, corresponding to a central estimate of effective radiative forcing (ERF) of 3.4 W/m2 for a doubling of CO2. An increase in ERF forcing of 3.8 W/m2 (your RCP 6.0 example) should then yield a temperature change of a little over 1.8×3.8/3.4 or around 2 deg C to be consistent with the model results for 2100. There is no evident consistency problem.

    I would have been extremely surprised if the IPCC had indeed accidentally under-forecast a temperature gain.
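The consistency check in the comment above is simple enough to sketch in Python (the figures are the comment’s own):

```python
# Scale the CMIP5 median transient climate response (TCR) by the ratio
# of the RCP 6.0 forcing to the forcing for a doubling of CO2.
tcr = 1.8          # K: median TCR for doubled CO2
erf_2xco2 = 3.4    # W/m2: effective radiative forcing, doubled CO2
erf_rcp60 = 3.8    # W/m2: 21st-century net forcing, RCP 6.0

transient_warming = tcr * erf_rcp60 / erf_2xco2  # ~2.0 K
```

The linear scaling yields a little over 2 K of transient warming for 2100, matching the comment’s figure.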

    • In response to Kribaez, I am wearily familiar with the (largely notional) distinction between transient and equilibrium sensitivity. However, one can gain some idea of the difference between the two by considering the 161-year industrial-era period from 1850-2011. IPCC’s mid-range anthropogenic reference sensitivity is a mere 0.7 K, which becomes all of 0.9 K after allowing for a putative (and probably much overstated) “radiative imbalance”. And that gives us a transfer function – even if one uses climatology’s unduly restrictive definition – of 0.9 / 0.7, or all of 1.4, compared with 1.3 using the mainstream definition.

      The fact remains that IPCC has very greatly reduced its estimate of the 21st-century warming, but without having reduced its estimate of the equilibrium sensitivity. Indeed, it has all but halved that estimate. It has, of course, had to adjust various numbers to make the new and far lower estimate of 21st-century warming appear consistent with the unaltered equilibrium-sensitivity interval, but one should ask the straightforward question why, given that the estimate of 21st-century warming was all but halved, was the estimate of Charney sensitivity not also all but halved?
