By Christopher Monckton of Brenchley
The prolonged el Niño of 2016-2017, not followed by a la Niña, has put paid to the great Pause of 18 years 9 months in global warming that gave us all such entertainment while it lasted. However, as this annual review of global temperature change will show, the credibility gap between predicted and observed warming remains wide, even after some increasingly desperate and more or less openly prejudiced revisions in most datasets, ever upward for recent temperatures and ever downward for the temperatures of the early 20th century, with the effect of increasing the apparent rate of global warming. For the Pause continues to exert its influence by keeping down the long-run rate of global warming.
Let us begin with IPCC’s global warming predictions. In 2013 it chose four scenarios, one of which, RCP 8.5, was stated by its authors (Riahi et al., 2007; Rao & Riahi, 2006) to be a deliberately extreme scenario, and is based upon such absurd population and energy-use criteria that it may safely be ignored.
For the less unreasonable, high-end-of-plausible RCP 6.0 scenario, the 21st-century net anthropogenic radiative forcing is 3.8 Watts per square meter from 2000-2100:

CO2 concentration of 370 ppmv in 2000 was predicted to rise to 700 ppmv in 2100 (AR5, fig. 8.5) on the RCP 6.0 scenario (thus, the centennial predicted CO2 forcing is 4.83 ln(700/370), or 3.1 Watts per square meter, almost five-sixths of total forcing). Predicted centennial reference sensitivity (i.e., warming before accounting for feedback) is the product of 3.8 Watts per square meter and the Planck sensitivity parameter 0.3 Kelvin per Watt per square meter: i.e., 1.15 K.
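For readers who want to check this arithmetic, here is a minimal sketch in Python; the 4.83 forcing coefficient, the 0.3 Planck parameter and the concentrations are simply the values quoted in the paragraph above, not independent data:

```python
import math

K_CO2 = 4.83   # CO2 forcing coefficient, W/m^2 per unit of ln(concentration ratio), as used above
PLANCK = 0.3   # Planck sensitivity parameter, K per W/m^2, as used above

co2_2000, co2_2100 = 370.0, 700.0   # ppmv, RCP 6.0 (AR5, fig. 8.5)
total_forcing_21c = 3.8             # W/m^2, net anthropogenic forcing, 2000-2100, RCP 6.0

co2_forcing_21c = K_CO2 * math.log(co2_2100 / co2_2000)
ref_sensitivity_21c = total_forcing_21c * PLANCK

print(f"Centennial CO2 forcing: {co2_forcing_21c:.2f} W/m^2")                    # ~3.08, i.e. the 3.1 quoted above
print(f"CO2 share of total forcing: {co2_forcing_21c / total_forcing_21c:.2f}")  # ~0.81, near five-sixths
print(f"Centennial reference sensitivity: {ref_sensitivity_21c:.2f} K")          # 1.14 K (the article quotes 1.15 K)
```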
The CMIP5 models predict 3.37 K midrange equilibrium sensitivity to CO2 doubling (Andrews+ 2012), against 1 K reference sensitivity before accounting for feedback, implying a midrange transfer function 3.37 / 1 = 3.37. The transfer function, the ratio of equilibrium to reference temperature, encompasses by definition the entire operation of feedback on climate.
Therefore, the 21st-century warming that IPCC should be predicting, on the RCP 6.0 scenario and on the basis of its own estimates of CO2 concentration and the models’ estimates of CO2 forcing and Charney sensitivity, is 3.37 x 1.15, or 3.9 K.
Yet IPCC actually predicts only 1.4 to 3.1 K 21st-century warming on the RCP 6.0 scenario, giving a midrange estimate of just 2.2 K warming in the 21st century and implying a transfer function of 2.2 / 1.15 = 1.9, little more than half the midrange transfer function 3.37 implicit in the equilibrium-sensitivity projections of the CMIP5 ensemble.
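The two competing transfer functions discussed above can be reproduced the same way; all inputs are values already quoted in the text:

```python
charney_cmip5 = 3.37     # K, CMIP5 midrange equilibrium sensitivity to doubled CO2 (Andrews+ 2012)
ref_doubling = 1.0       # K, reference sensitivity to doubled CO2 before feedback
ref_21c = 1.15           # K, centennial reference sensitivity on RCP 6.0 (derived above)
ipcc_midrange_21c = 2.2  # K, IPCC's midrange 21st-century warming prediction on RCP 6.0

tf_cmip5 = charney_cmip5 / ref_doubling   # transfer function implicit in the CMIP5 models: 3.37
tf_ipcc = ipcc_midrange_21c / ref_21c     # transfer function implicit in IPCC's own prediction: ~1.9

print(f"Warming implied by the CMIP5 transfer function: {tf_cmip5 * ref_21c:.1f} K")   # ~3.9
print(f"Transfer function implied by IPCC's 2.2 K prediction: {tf_ipcc:.1f}")          # ~1.9
```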

Note that Fig. 2 disposes of any notion that global warming is “settled science”. IPCC, taking all the scenarios and hedging its bets, is predicting somewhere between 0.2 K cooling and 4.5 K warming by 2100. Its best estimate is its RCP 6.0 midrange estimate 2.2 K.
Effectively, therefore, given 1 K reference sensitivity to doubled CO2, IPCC’s 21st-century warming prediction implies 1.9 K Charney sensitivity (the standard metric for climate-sensitivity studies, which is equilibrium sensitivity to doubled CO2 after all short-acting feedbacks have operated), and not the 3.4 [2.1, 4.7] K imagined by the CMIP5 models.
Since official predictions are thus flagrantly inconsistent with one another, it is difficult to deduce from them a benchmark midrange value for the warming officially predicted for the 21st century. It is somewhere between the 2.2 K that IPCC gives as its RCP 6.0 midrange estimate and the 3.9 K deducible from IPCC’s midrange estimate of 21st-century anthropogenic forcing using the midrange CMIP5 transfer function.
So much for the predictions. But what is actually happening, and does observed warming match prediction? Here are the observed rates of warming in the 40 years 1979-2018. Let us begin with GISS, which suggests that for 40 years the world has warmed at a rate equivalent not to 3.9 C°/century nor even to 2.2 C°/century, but only to 1.7 C°/century.

Next, NCEI. Here, perhaps to make a political point, the dataset is suddenly unavailable:

Next, HadCRUT4, IPCC’s preferred dataset. The University of East Anglia is rather leisurely in updating its information, so the 40-year period runs from December 1978 to November 2018, but the warming rate is identical to that of GISS, at 1.7 C°/century equivalent, below the RCP 6.0 midrange 2.2 C°/century rate.

Next, the satellite lower-troposphere trends, first from RSS. It is noticeable that, ever since RSS, whose chief scientist publicly describes those who disagree with him about the climate as “deniers”, revised its dataset to eradicate the Pause, it has tended to show the fastest apparent rate of global warming, now at 2 C°/century equivalent.

Finally, UAH, which Professor Ole Humlum (climate4you.com) regards as the gold standard for global temperature records. Before UAH altered its dataset, it used to show more warming than the others. Now it shows the least, at 1.3 C°/century equivalent.

How much global warming should have occurred over the 40 years since the satellite record began in 1979? CO2 concentration has risen by 72 ppmv. The period CO2 forcing is thus 0.94 W m–2, implying 0.94 x 6/5 = 1.13 W m–2 net anthropogenic forcing from all sources. Accordingly, period reference sensitivity is 1.13 x 0.3, or 0.34 K, and period equilibrium sensitivity, using the CMIP5 midrange transfer function 3.37, should have been 1.14 K. Yet the observed period warming was 0.8 K (RSS), 0.7 K (GISS & HadCRUT4) or 0.5 K (UAH): a mean observed warming of about 0.7 K.
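The same arithmetic for the 40-year satellite era, again in a few lines; the forcing and the dataset figures are those stated above, and the simple average over the four datasets is only a shortcut for the "mean observed warming of about 0.7 K":

```python
PLANCK = 0.3      # Planck sensitivity parameter, K per W/m^2
TF_CMIP5 = 3.37   # CMIP5 midrange transfer function

co2_forcing = 0.94                 # W/m^2, period CO2 forcing from the 72 ppmv rise (article value)
net_forcing = co2_forcing * 6 / 5  # ~1.13 W/m^2, net anthropogenic forcing from all sources
ref_sens = net_forcing * PLANCK    # ~0.34 K, period reference sensitivity
expected = ref_sens * TF_CMIP5     # ~1.14 K, expected period warming on the CMIP5 midrange

observed = {"RSS": 0.8, "GISS": 0.7, "HadCRUT4": 0.7, "UAH": 0.5}   # K, observed 1979-2018 warming
mean_observed = sum(observed.values()) / len(observed)

print(f"Expected 1979-2018 warming: {expected:.2f} K")
print(f"Mean observed 1979-2018 warming: {mean_observed:.2f} K")
```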
A more realistic picture may be obtained by dating the calculation from 1950, when our influence first became appreciable. Here is the HadCRUT4 record:

The CO2 forcing since 1950 is 4.83 ln(410/310), or 1.5 Watts per square meter, which becomes 1.8 Watts per square meter after allowing for non-CO2 anthropogenic forcings, a value consistent with IPCC (2013, Fig. SPM.5). Therefore, period reference sensitivity from 1950-2018 is 1.8 x 0.3, or 0.54 K, while the equivalent equilibrium sensitivity, using the CMIP5 midrange transfer function 3.37, is 0.54 x 3.37 = 1.8 K, of which only 0.8 K actually occurred. Using the revised transfer function 1.9 derived from the midrange RCP 6.0 predicted warming, the post-1950 warming should have been 0.54 x 1.9 = 1.0 K.
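And the post-1950 version of the same chain, taking the 1.8 Watts per square meter net forcing and the two transfer functions above at face value:

```python
PLANCK = 0.3                  # Planck sensitivity parameter, K per W/m^2
net_forcing_1950_2018 = 1.8   # W/m^2, net anthropogenic forcing since 1950 (article value)

ref_sens = net_forcing_1950_2018 * PLANCK   # 0.54 K, period reference sensitivity
for label, tf in [("CMIP5 midrange transfer function (3.37)", 3.37),
                  ("Transfer function implied by RCP 6.0 midrange (1.9)", 1.9)]:
    print(f"{label}: expected post-1950 warming {ref_sens * tf:.1f} K")
print("Observed post-1950 warming (HadCRUT4): about 0.8 K")
```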
It is also worth showing the Central England Temperature Record for the 40 years 1694-1733, long before SUVs, during which the temperature in most of England rose at a rate equivalent to 4.33 C°/century, compared with just 1.7 C°/century equivalent in the 40 years 1979-2018. Therefore, the current rate of warming is not unprecedented.
It is evident from this record that even the large and naturally-occurring temperature change evident not only in England but worldwide as the Sun recovered following the Maunder minimum is small compared with the large annual fluctuations in global temperature.

The simplest way to illustrate the very large discrepancy between predicted and observed warming over the past 40 years is to show the results on a dial.

Overlapping projections by IPCC (yellow & buff zones) and CMIP5 (Andrews et al. 2012: buff & orange zones) of global warming from 1850-2011 (dark blue scale), 1850 to 2xCO2 (dark red scale) and 1850-2100 (black scale) exceed observed warming of 0.75 K from 1850-2011 (HadCRUT4), which falls between the 0.7 K period reference sensitivity to midrange net anthropogenic forcing in IPCC (2013, fig. SPM.5) (cyan needle) and expected 0.9 K period equilibrium sensitivity to that forcing after adjustment for radiative imbalance (Smith et al. 2015) (blue needle). The CMIP5 models’ midrange projection of 3.4 K Charney sensitivity (red needle) is about thrice the value consistent with observation. The revised interval of global-warming predictions (green zone), correcting an error of physics in models, whose feedbacks do not respond to emission temperature, is visibly close to observed warming.
Footnote: I undertook to report on the progress of my team’s paper explaining climatology’s error of physics in omitting from its feedback calculation the observable fact that the Sun is shining. The paper was initially rejected early last year on the ground that the editor of the top-ten journal to which it was sent could not find anyone competent to review it. We simplified the paper, whereupon it was sent out and, after many months’ delay, only two reviews came back. The first was a review of a supporting document giving results of experiments conducted at a government laboratory, but it was clear that the reviewer had not actually read the laboratory’s report, which answered the question the reviewer had raised. The second was ostensibly a review of the paper, but the reviewer stated that, because he found the paper’s conclusions uncongenial, he had not troubled to read the equations that justified those conclusions.
We protested. The editor then obtained a third review. But that, like the previous two reviews, was not a review of the present paper. It was a review of another paper that had been submitted to a different journal the previous year. All of the points raised by that review had long since been comprehensively answered. None of the three reviewers, therefore, had actually read the paper they were ostensibly reviewing.
Nevertheless, the editor saw fit to reject the paper. Next, the journal’s management got in touch to say that it was hoped we were content with the rejection and to invite us to submit further papers in future. I replied that we were not at all satisfied with the rejection, for the obvious reason that none of the reviewers had actually read the paper that the editor had rejected, and that we insisted, therefore, on being given a right of appeal.
The editor agreed to send out the paper for review again, and to choose the reviewers with greater care this time. We suggested, and the editor accepted, that in view of the difficulty the reviewers were having in getting to grips with the point at issue, which was clearly catching them by surprise, we should add to the paper a comprehensive mathematical proof that the transfer function that embodies the entire action of feedback on climate is expressible not only as the ratio of equilibrium sensitivity after feedback to reference sensitivity before feedback but also as the ratio of the entire, absolute equilibrium temperature to the entire, absolute reference temperature.
We said we should explain in more detail that, though the equations for both climatology’s transfer function and ours are valid equations, climatology’s equation is not useful because even small uncertainties in the sensitivities, which are two orders of magnitude smaller than the absolute temperatures, lead to large uncertainty in the value of the transfer function, while even large uncertainties in the absolute temperatures lead to small uncertainty in the transfer function, which can thus be very simply and very reliably derived and constrained without using general-circulation models.
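As a rough numerical illustration of that uncertainty argument, consider the sketch below; the 287.5 K and 233 K absolute temperatures are the figures given in the comments further down this page, while the ±0.3 K error bars on the sensitivities and the ±1 K error bars on the absolute temperatures are illustrative assumptions, not values from the paper:

```python
def ratio_interval(num, num_err, den, den_err):
    """Crudest possible interval arithmetic: the range of num/den given +/- errors on each."""
    return (num - num_err) / (den + den_err), (num + num_err) / (den - den_err)

# Transfer function as a ratio of small sensitivities (illustrative +/-0.3 K uncertainties)
lo, hi = ratio_interval(3.37, 0.3, 1.0, 0.3)
print(f"From sensitivities 3.37 / 1.0: transfer function anywhere from {lo:.1f} to {hi:.1f}")

# Transfer function as a ratio of large absolute temperatures (illustrative +/-1 K uncertainties)
lo, hi = ratio_interval(287.5, 1.0, 233.0, 1.0)
print(f"From absolute temperatures 287.5 / 233: transfer function from {lo:.2f} to {hi:.2f}")
```

The first ratio spans roughly 2.4 to 5.2; the second stays within about 1.22 to 1.24, which is the point being made.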
My impression is that the editor has realized we are right. We are waiting for a new section from our professor of control theory on the derivation of the transfer function from the energy-balance equation via a leading-order Taylor-series expansion. That will be with us at the end of the month, and the editor will then send the paper out for review again. I’ll keep you posted. If we’re right, Charney sensitivity (equilibrium sensitivity to doubled CO2) will be 1.2 [1.1, 1.3] C°, far too little to matter, and not, as the models currently imagine, 3.4 [2.1, 4.7] C°, and that, scientifically speaking, will be the end of the climate scam.
Hardly. If you want it to end, proclaim the truth:
CO2 is mainly produced by the ocean via outgassing per Henry’s Law, with atmospheric CO2 change following ocean temperature change. Therefore CO2 is not warming the ocean, and since the climate follows the ocean, CO2 doesn’t warm the climate either.
Therefore all calculations of ECS of CO2 are science fiction, including those in this post.
What is the supposed current w/m2 back radiation of CO2, and how do they measure that?
Bob Weber: Nice graph!
Can you provide a link to the paper/publication from which it came?
Thanks
Hi again Louis – I posted two references that are in moderation now, but will appear.
Regards, Allan
https://wattsupwiththat.com/2019/01/09/a-sea-surface-temperature-picture-worth-a-few-hundred-words/#comment-2583649
[excerpt]
I leave you with a plot from Humlum et al 2013, which is prettier than the ones in my originating paper of January 2008.
https://www.facebook.com/photo.php?fbid=1551019291642294&set=a.1012901982120697&type=3&theater
Those pictures on FB are ‘not available right now’. better use something like dropbox, no censorship.
Thank you Jaap – does this work for you?
Louis – the two references mentioned above (MacRae 2008 and Humlum 2013) wound up far down this page at
https://wattsupwiththat.com/2019/01/10/the-credibility-gap-between-predicted-and-observed-global-warming/#comment-2584020
Post 3 of 3 for Louis.
https://wattsupwiththat.com/2019/01/09/a-sea-surface-temperature-picture-worth-a-few-hundred-words/#comment-2583665
Further comments on MacRae 2008 and Humlum et al 2013, referenced above.
I generally agree with the first three conclusions from Humlum 2013, as follows:
1– Changes in global atmospheric CO2 are lagging 11–12 months behind changes in global sea surface temperature.
2– Changes in global atmospheric CO2 are lagging 9.5–10 months behind changes in global air surface temperature.
3– Changes in global atmospheric CO2 are lagging about 9 months behind changes in global lower troposphere temperature.
Points 2 and 3 are similar to my 2008 conclusions.
Critiques of Humlum failed to refute the three conclusions above. In general, I regard all the critiques of these three conclusions as specious nonsense, which tend to obfuscate the clear observations in these papers.
One hint: It is not necessary that ALL the increase in atmospheric CO2 is due to temperature – part of the CO2 increase can be due to other causes such as fossil fuel combustion, deforestation, etc., but part of it is clearly due to temperature – and that part demonstrates that CO2 trends lag, and do not lead temperature trends in the modern data record, and that observation DISPROVES the CAGW hypothesis.
Another highly credible disproof of the CAGW meme is that fossil fuel consumption accelerated strongly after 1940 as did atmospheric CO2 concentrations, but global temperatures COOLED from ~1945 to 1977, warmed for over a decade, and then were relatively constant since – so the correlation with increasing atmospheric CO2 was NEGATIVE, POSITIVE AND NEAR-ZERO. To claim that atmospheric CO2 is the “control knob” for global temperature is a bold falsehood, that is refuted by observations at all measured time scales.
Regards, Allan
I suggest that if one wants to understand the science, one has to understand the observations of MacRae 2008 and Humlum et al 2013 (references above, assuming that my post exits moderation).
If one wants to disprove global warming alarmism, then calculations of the maximum probable climate sensitivity based on full-Earth-scale data are a suitable way to do so. Below are two such calculations, both of which disprove the catastrophic human-made global warming hypothesis.
Regards, Allan
https://wattsupwiththat.com/2018/09/03/the-great-debate-part-d-summary/#comment-2447187
Excerpt:
Lewis and Curry (2018) estimate climate sensitivity at 1.6C/doubling for ECS and 1.3C/doubling for TCR, using HadCRUT4 surface temperatures. These surface temperatures probably have a significant warming bias due to poor siting of measurements, UHI effects, other land use changes, etc.
Christy and McNider (2017) estimate climate sensitivity at 1.1C/doubling for UAH Lower Tropospheric temperatures.
Both analyses are “full-earth-scale”, which have the least room for errors. Both are “UPPER BOUND” estimates of sensitivity, derived by assuming that ~ALL* warming is due to increasing atmospheric CO2. It is possible, in fact probable, that less of the warming is driven by CO2, and most of it is natural variation.
(*Note – Christy and McNider make allowance for major volcanoes El Chichon in 1982 and Pinatubo in 1991+).
The slightly higher sensitivity values for Curry and Lewis are due to the higher warming estimates of HadCRUT4 surface temperatures versus UAH LT temperatures.
Practically speaking, however, these sensitivity estimates are similar, and are far too low to support any runaway or catastrophic man-made global warming.
Higher estimates of climate sensitivity have little or no credibility and there is no real global warming crisis.
Increased atmospheric CO2, from whatever cause will at most drive minor, net-beneficial global warming, and significantly increased plant and crop yields.
The total impact of increasing atmospheric CO2 is hugely beneficial to humanity and the environment. Any politician who contradicts this statement is a scoundrel or an imbecile and is destructive to the well-being of society. It IS that simple.
Best, Allan
Most grateful to Allan Macrae for his long note on other papers whose authors have concluded that equilibrium sensitivity is a lot less than official climatology profits by asking us to believe. We should perhaps reference some of these in the final draft of our paper.
In our submission, the advantage of our approach is that it demonstrates official climatology’s definition of temperature feedback to be erroneous in that it considers the transfer function to be solely the ratio of equilibrium to reference sensitivities, failing to include the fact that the transfer function is also the ratio of absolute equilibrium to reference temperatures. The latter definition allows much more reliable derivation and constraint of equilibrium sensitivity, because even quite large uncertainties in absolute temperatures two orders of magnitude greater than the sensitivities entail only a small uncertainty in the transfer function, while even small uncertainties in the minuscule sensitivities entail large uncertainty in the transfer function, which is why the current interval of equilibrium sensitivities is so large and has resisted constraint for so long.
In response to Mr Weber, science is quantitative, not qualitative. To provide a formal demonstration that the influence of anthropogenic sins of emission on global temperature is small, it is not sufficient to demonstrate that changes in the rate of CO2 emission lag changes in sea-surface temperature by some months. It is necessary to demonstrate that, even if the CO2 radiative forcing is as official climatology imagines, the resulting warming will necessarily be small. That is what my team claims to have achieved.
Monckton
It appears that the original reviewers were intimidated by the mathematics. It would then seem that a possible solution to your impasse would be to ask the editor to find a mathematician, or someone with a double major in math and physics, to review your paper.
Mr Spencer raises an excellent point. Actually, all that we really need is someone with training in Classical logic. Our argument is in reality a very simple argumentum ex definitione, which runs thus:
There subsists an equilibrium global mean surface temperature after all temperature feedbacks of subdecadal duration have acted. In 1850, before any anthropogenic perturbation, that temperature was the observed temperature of about 287.5 K. We define that temperature as the equilibrium temperature in 1850.
In the absence of any temperature feedback, there would subsist a reference temperature in 1850. That temperature comprises the sum of emission temperature and the reference sensitivity (before accounting for feedback) to the presence of the noncondensing greenhouse gases as they were in 1850 (note in passing that water vapor, the key noncondensing greenhouse gas, is treated as a temperature feedback, not as a forcing). Starting with an ice planet of albedo 0.66, the global mean surface temperature (which is also the emission temperature in the absence of any forcing or temperature feedback) would be 221.5 K. Add to that the reference sensitivity of about 11.5 K to the noncondensing greenhouse gases as they were in 1850. Define 233 K, therefore, as the reference temperature (before accounting for feedback) in 1850.
Then, ex definitione, the transfer function that encompasses the entire action of temperature feedback on the climate is simply the ratio of absolute equilibrium to reference temperature: i.e., 287.5 / 233 ≈ 1.23, and not the 3.4 [2.1, 4.7] that is the implicit interval of the transfer function in the CMIP5 models (Andrews+ 2012).
Of course, the 0.7 K industrial-era reference sensitivity from 1850-2011 (based on IPCC 2013 fig. SPM.5) and the 0.9 K period equilibrium sensitivity after allowing for the radiative imbalance are nowhere near enough to alter the small transfer function. And that’s it, really.
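A few lines reproduce this argumentum ex definitione using exactly the figures stated above:

```python
emission_temp_1850 = 221.5   # K, temperature with no greenhouse gases and no feedback (figure stated above)
ghg_ref_sens_1850 = 11.5     # K, reference sensitivity to the 1850 noncondensing greenhouse gases
equilibrium_1850 = 287.5     # K, observed equilibrium temperature in 1850
ref_sens_2xco2 = 1.0         # K, reference sensitivity to doubled CO2 (value used in the head post)

reference_1850 = emission_temp_1850 + ghg_ref_sens_1850   # 233 K
transfer_function = equilibrium_1850 / reference_1850     # ~1.23

print(f"Reference temperature in 1850: {reference_1850:.0f} K")
print(f"Transfer function (equilibrium / reference): {transfer_function:.2f}")
print(f"Implied Charney sensitivity: {transfer_function * ref_sens_2xco2:.1f} K")   # ~1.2
```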
Actually, all that we really need is someone with training in Classical logic.
Oh dear . . .
“the action of temperature feedback on the climate is simply the ratio of absolute equilibrium to reference temperature: i.e., 287.5 / 233 ≈ 1.23,”
Actually, all that we really need is someone with training in Classical logic, and some more attempts at explaining it all in a simpler way, if possible, to those of us who get lost in the hyperbole and over-complicated language and maths sometimes used.
ATTP has an article under discussion and in part states “Researchers should be aware of how the manner in which they present information might influence how people interpret the significance of that information.”
I really like LM’s past presentations on the pause and his current work on trying to address the ECS.
If his paper can do what he says it does it needs to be published, promulgated and discussed widely.
Unfortunately at the moment I feel he leaves 80% of his audience behind.
I wonder if Rud or Smith could rephrase it or soften it for more general appreciation though the latter might have to moderate his rhetoric as well.
I am not wanting the general theory of relativity re-explained and feel there is a lot here if it can only be better expressed.
Similar to ” Credit to Willis Eschenbach for setting the Nikolov-Zeller silliness straight”
Sorry.
Angech likes the content but not the form of our result. In defence of the apparently complicated mathematics, we can no longer rely on our audience to be familiar with the Classical modes of thought, in particular Classical logic, which allows the argument to be expressed in a very simple form.
Also, we are faced with sullen, bemused resistance from just about all of the climate establishment and just about all of the political establishment. Therefore, statement of a simple argumentum ex definitione, which is all that is really necessary to establish the rightness of our argument, will not be enough.
Therefore, it is necessary that, in presenting our result in the form of a scientific paper, we should not only state the Classical argument (which, on its own, is definitive) but also proceed to demonstrate it formally, both in the physics of systems theory and in the number theory of infinite convergent series.
Unfortunately, in an environment in which, for reasons of totalitarian conformity to the Party Line and commercial profiteering from the scientific illiteracy of politicians, simple and reasonable arguments are not enough, it is necessary to provide formal demonstrations that leave absolutely no room for argument.
That said, let me express the outline of our argument simply. Official climatology, at a vital point in its treatment of temperature feedbacks, has neglected to take account of the observable fact that the Sun is shining.
Climatologists do not realize that the feedback processes present in the climate at a chosen moment necessarily respond to the entire, absolute reference temperature at that moment, which comprises not only anthropogenic but natural warming as well as the emission temperature that would be present without either those warmings or temperature feedbacks.
Climatology’s formal definition of temperature feedback is confined to the notion that feedback processes respond only to an anthropogenic perturbation in temperature, driving the further perturbation that is the feedback response.
Frankly, one should not expect that overthrowing a Party Line that has been sedulously insisted upon for decades will be an entirely simple matter. A certain minimum of learning is necessary both to achieve that overthrow and to understand that the overthrow has been achieved. In the end, there are no short cuts. But we are making steady progress, and more and more people – not least thanks to WattsUpWithThat – are coming to an understanding that the game is up and the scare is over.
Thank you.
Louis Hooffstetter, I made the graphic for my 2018 AGU poster on solar irradiance extremes. A few days ago someone emailed me about this graphic and made a similar remark. Thank you both. I have one correction to make on the poster unrelated to this graphic, and if I can get it done today I’ll be back with a link. If you look long enough I’ll eventually get around to posting it.
Christopher Monckton – I understand what you’re doing, and yes I question its necessity.
I challenge you to examine the quantitative relationship between SST and atmCO2 as indicated in my graphic and then quantitatively pluck out the man-made part. If you really like doing science that should be a fun work for you. Or anyone else.
A simple test for CO2 supposed sensitivity: with CO2 at record levels in 2016, CO2 did not uphold either the ocean or atmospheric temperatures during the almost three years of cooling since the peak in 2016. CO2 is therefore virtually powerless. BTW I say this as the sole person to have predicted both the solar cause of the 2016 El Nino and the solar cause of the temperature decline thereafter, mathematically, empirically.
Here’s a quick story from the AGU meeting:
A young man probably in his late twenties to early thirties who worked for NOAA had a poster that started out in the abstract talking about ‘deniers’. I told him that was a pejorative word that had no place in a science paper.
He went on to argue with me that, while there is no discernible year-to-year CO2 influence on temperatures, CO2 is controlling the overall background rise in temperature, such as over 20 years, his figure used solely as an example.
I proceeded to show him a 20-year shuffle, where for every few years CO2 isn’t doing anything per his assertion, but then all of a sudden after 20 years, it controls the background. I said, “Don’t you see the contradiction in your position?” How can it not control year-to-year variability while controlling 20-or-more-year variability? I ended the conversation with a handshake and said “we’re at an impasse, and I think we should stop now.” To which he agreed.
People like him and others are not dealing with objective reality, rather ‘consensus reality’, as are those promoting ever-lower ECS figures. Obviously I don’t consider ECS calculations scientifically legitimate.
While CM is a fine person with good intentions, I disagree with these presentations on principle.
Bob Weber: I proceeded to show him a 20-year shuffle, where for every few years CO2 isn’t doing anything per his assertion, but then all of a sudden after 20 years, it controls the background. I said, “Don’t you see the contradiction in your position?” How can it not control year-to-year variability while controlling 20-or-more-year variability? I ended the conversation with a handshake and said “we’re at an impasse, and I think we should stop now.” To which he agreed.
Accurate estimates of the effects of the internal dynamics of the energy flows and of all other external “forcings” are required in order to get accurate estimates of the sensitivity of temperature to CO2 change. You are quite right about that, and that such accurate estimates are not available. However, CM of B’s work is still valuable in showing that even if you take the IPCC conceptual model seriously, the estimates of CO2 sensitivity are quite low.
I am grateful to Mr Marler for his support, and would add that once it is accepted – as it must be – that feedback processes respond not merely to anthropogenic reference sensitivity but also to natural reference sensitivity and, most importantly, to the emission temperature that arises from the observable fact that the Sun is shining (a fact that takes climatologists by surprise), one does not require a very precise knowledge of the values of the relevant climate variables.
For the transfer function that encompasses the entire action of temperature feedback on climate at any given moment is the ratio not of minuscule sensitivities but of absolute temperatures which exceed the reference sensitivities by two orders of magnitude.
Climatology takes the transfer function solely as the ratio of sensitivities. Therefore, even a small uncertainty in the sensitivities entails a large uncertainty in the transfer function.
Mainstream science, by contrast, takes the transfer function not merely as the ratio of sensitivities but as the ratio of absolute equilibrium temperature at a given moment to absolute reference temperature at that moment. Even quite large uncertainties in the values of equilibrium and reference temperature entail only small uncertainty in the value of the transfer function that is their ratio. The transfer function, therefore, falls on the interval [1.1, 1.3], and not, as at present imagined, on [2.1, 4.7].
Accordingly, equilibrium sensitivity to doubled CO2 concentration is not 3.4 K, as the models currently imagine, but about 1.2 K, with a very small uncertainty either side of that value. And there is an end of the climate scare.
Mr Weber seems to be saying that because the profiteers of doom are “not dealing with objective reality”, attempting to address their Party Line by scientific methods is pointless, or even objectionable.
Well, I was brought up to appreciate the value of objective truth, to try to find it in my scientific studies, and to present my findings once they were ready, regardless of whether those findings were congenial to some faction or another, however venerable, and regardless of whether those findings ran counter to some transient consensus, however widespread. The truth is the truth and, whether or not it be convenient to Mr Weber, it remains the truth.
If we are right, then it does not matter how many hysterical fanatics proclaim that we are wrong. In the end, the truth will prevail, provided that there is someone with the diligence to find it and the guts to proclaim it in the face of indifference or even hostility.
On this question of the definition of feedback, I shall be happy to retire from the field if a rational, scientific argument can be presented to the effect that we are scientifically incorrect. We shall not, however, be one whit deterred by the news that, to the likes of Mr Weber, our result – whether right or wrong – is merely inconvenient.
“since the climate follows the ocean,”
seems a little presumptuous and too certain to me.
While I’m a fan of warmer oceans causing more CO2, you seem to be missing a small step, namely what is actually causing the ocean warming and cooling. To say the sun, only, is too simplistic. I like Roy Spencer’s cloud cover theory, and clouds are part of climate. As are currents, land masses, air circulation and water vapor.
Given this the little bit of CO2 added by man can certainly play a part as well and denying it any validity is just as bad as saying it is the one and only control knob.
Welcome back Lord Monckton!
Your contributions have been missed.
Many thanks to Mr Magness. I’ve been ill but am now recovering and look forward to a much stronger year than last year.
Good to hear sir. Look forward to hearing a good outcome from upcoming reviews. Cheers.
Lord Monckton,
There are always several competing pro- and anti-AGW booths at CPAC https://cpacregistration.com/
I think you would enjoy engaging with the movers and shakers as well as the >10,000 young people who will attend this year. It is so awesome that you would spend some of your valuable time posting here.
All the best health and success to you!
Should I not be speaking at CPAC? Perhaps Mr Schaefer would like to recommend me to the organizers.
We need to plan for next year I’m afraid. But yes, you SHOULD be the herald of the end of the AGW scam at CPAC.
A typically excellent post … but your calculations appear to have overlooked the effect of the vast amount of hot air put into the atmosphere as a result of Brexit!
Henry Keswick is right: the totalitarian flatulence from the Brussels Broadcasting Commissariat alone in its lamentably alarmist coverage of Britain’s campaign for independence has proven enough to keep the United Kingdom comfortably warm in a mild winter while in the United States and in Europe there has been record snowfall.
but also as the ratio of the entire, absolute equilibrium temperature to the entire, absolute reference temperature.
======+=++
While it has been some months, this to me was most significant. Perhaps because it was counter intuitive. Yet the mathematics was inescapable.
In science, progress comes when we discover something we did not expect.
Most grateful to Ferd for his kind comment. The result came as quite a surprise to us, too, when we first fumbled our way to it. I began about seven years ago by plotting (by hand) the graph of equilibrium sensitivity in response to various values of the feedback fraction. There was a startling discontinuity [at] a feedback fraction of 1. That suggested there was something wrong with climatology’s treatment of feedbacks, but it took us a long time to realize just how basic was the mistake that official climatology had made in ignoring the fact that the transfer function is the ratio not only of equilibrium to reference sensitivity but also of absolute equilibrium to reference temperature. Once that fact is established (and we have proven it by reference to a useful result in number theory), deriving and constraining equilibrium sensitivities about one-third of those imagined by official climatology becomes straightforward.
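For anyone wishing to reproduce that hand-plotted curve, here is a minimal sketch of the standard feedback relation, in which equilibrium sensitivity is reference sensitivity divided by (1 − f); the singularity as the feedback fraction f approaches 1 is the discontinuity referred to:

```python
# Equilibrium sensitivity blows up as the feedback fraction f approaches 1.
ref_sensitivity = 1.0   # K, reference sensitivity to doubled CO2, as used in the head post

for f in [0.0, 0.3, 0.5, 0.7, 0.8, 0.9, 0.95, 0.99]:
    ecs = ref_sensitivity / (1.0 - f)
    print(f"feedback fraction {f:4.2f} -> equilibrium sensitivity {ecs:6.1f} K")
```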
Using what are apparently differing methods, Lord Brenchley is deriving much the same level of ECS as Judith Curry, which gives me some confidence that the conclusions are valid.
Tom Halla’s point is excellent. Coherence of a result derived by distinct methods indeed makes it possible that both methods are reaching the correct answer.
Christopher Monckton of Brenchley
Go’on yersel big man!
Hopefully, some enterprising politicians and MSM editors will pick up on this when it’s published.
I mean, what a coup for a politician in the UK considering the state of our politics right now. There’s also Trump’s rejection of Paris, Brazil’s reluctance to continue, Poland’s contempt for renewables, China withdrawing subsidies, and the increasingly obvious failure of renewables to do anything but cost people pots of money and blight our home country.
Stupid, useless SNP, imagine blighting Scotland with wind farms when their tourist industry is driven by the natural environment. They cut off their nose to spite their face.
A courageous politician could seize on this and turn the tide (metaphorically of course). Get the UK back to work after Brexit, ditch the ridiculous Climate Change Act and start fracking (we note all’s gone quiet on that front with no more ‘catastrophic’, barely detectable earth tremors). Nigel Farage perhaps? He’s bound to be looking for a new angle to have a run at Theresa May’s job when it comes up for grabs soon.
Like him or loathe him, he’s a determined man.
Good luck Chris. Resist the Marxist hordes!
You’ve forgotten the German in vivo experiment called “Energiewende”, which is about to show the world that “renewables” a) don’t work properly and b) are a most expensive and expropriating piece of botchery by lying politicians.
In response to HotScot, I’m a huge admirer of Nigel Farage, without whom Britain would not have been given the chance to win back her independence. We are quietly nursing our paper through peer review and, if it passes, that will indeed be the end of the climate scam, and politicians everywhere can stop subsidizing unreliables and stop shutting down profitable and clean coal-fired power stations such as Longannet, the last of its kind in Scotland. Scotland now has to order cinder-blocks for housing from hundreds of miles away because builders can no longer get the fly-ash from Longannet, and that’s just one of many downstream losses from shutting the coal-fired plant. Meanwhile, coal delivers precisely 38% of total worldwide electricity demand, just as it did 30 years ago. The screeching of the climate fanatics has achieved precisely nothing.
Hello HotScot and Lord Monckton,
Not only has coal maintained its share of global primary energy, but so have fossil fuels in total.
Fully ~85% of global primary energy is from fossil fuels (oil, natural gas and coal), essentially unchanged in decades. The remaining 15% is mostly hydro and nuclear, and less than 2% is green energy, despite trillions in squandered subsidies.
Global warming alarmists advocate the elimination of fossil fuels – do that tomorrow and almost everyone in the developed world will be dead in a month from starvation and exposure.
Best regards, Allan
Another way to compare predicted and observed temperatures.
https://tambonthongchai.com/2018/09/08/climate-change-theory-vs-data/
In reply to Chaamjamal, he or someone has gone to a lot of trouble to produce all those graphs. The challenge, in my submission, is to produce graphs that even politicians can understand. Hence the dial.
Good luck, Mylord.
That’s a massive challenge indeed, if US politicians are anything to judge by. Some of that ilk even have problems with straight lines.
Only the last graph matters, sir. The others simply explain to the curious how that graph was constructed.
An excellent article. However, your dial diagram might be improved if the different pointers were of a suitable length to point to the particular scale to which they refer.
The whole point of the three dials is to show what would happen at each of three stages in the evolution of global temperature. That is why the needles are designed as they are.
I object to the anti-science, pro-warming bias of using surface temperature “data” in years after 1979, when more accurate, less biased UAH weather satellite data, with much less infilling, are available. This article uses GISS and HadCRUT data in years where better UAH data are available.
For surface “data”, a majority of surface grids have no data, or incomplete data, so wild guessing by government bureaucrats is required to compile a global average temperature. The “majority infilled” surface data show more warming than weather satellite and weather balloon temperature data. Therefore, the surface data are suspect, and should be ignored, especially because better data are available after 1979.
The starting point for the “era of man made greenhouse gases” is roughly 1940. Not 1950, as the IPCC might say. Not 1979, simply because satellite temperature data collection began that year.
Since 1940, global warming has been mild, harmless, irregular and not even “global” most of the time, definitely not matching the steadier, global rise of CO2. “Irregular” = no global warming from 1940 to 1975 and a flat trend from the 2003 peak through 2018. “Not global” = no warming of Antarctica since the 1960s, and much more warming in the northern half of the Northern Hemisphere than in the southern half of the Southern Hemisphere since 1975.
The causes of climate change are a list of the usual suspects, with no one knowing the actual causes with any precision. Without that precision, a correct climate-change physics model cannot exist. That means the so-called general circulation models are nothing more than opinions … and obviously wrong opinions, because they lead to wrong climate forecasts when compared with temperature observations.
It is unfortunate, and a huge conflict of interest, that actual temperature observations are controlled by the same government bureaucrats who have made “climate model” predictions of significant global warming. So it’s no surprise to me that after their many “adjustments”, their surface data show more warming than satellite and weather balloon data.
The little real science behind “climate change” (the infrared spectroscopy done in laboratory closed-system experiments) and climate observations since 1940 (using the more accurate UAH weather satellites after 1979) both suggest the same thing: the TCS of CO2 is no more than roughly +1.0 degrees C per doubling of CO2. A +1.0 TCS would lead to a harmless worst case of +1 degree C of global warming in the next 200 years, assuming +2 ppm of CO2 increase per year.
Which all adds up to the obvious conclusion: adding CO2 to the air has caused no harm so far, and is unlikely to cause any harm in the future. If you also consider the positive effect of increasing CO2 levels on plant growth, as done inside most greenhouses, then adding CO2 to the air is beneficial for our planet.
My climate science blog, with over 29,000 page views: http://www.elOnionBloggle.Blogspot.com
“..so there is wild guessing by government bureaucrats”
And non-remarkably, the end result is GUESSTEMP
By “wild guessing”, I mean the infilled numbers can never be verified … and even though the surface data are “contradicted” by the weather satellite and weather balloon data, which are similar to each other, the surface data apparently can never be falsified!
GUESSTEMP is a perfect name for all surface temperature “measurements” (it’s hard to use the term “measurements” when the surface global average temperature consists of more infilling than actual measurements … and the minority of actual measurements that are used are “adjusted” before they are used!)
Thanks to Goddard, NASA (National Aeronautics and Space Administration) is becoming Not Actually Seeing Anything (we’re just guessing).
NASA’s Goddard Space Flight Center has become the Goddard Flights of Fancy Center
Excellent word!
You are generous to a fault regarding the El Nino NOT followed by a La Nina. Continent-sized cold blobs in both hemispheres have decoupled ENSO to a considerable degree from its domination of global temperature trends. If temperature keepers can keep their thumbs off the scales, there is an excellent chance we will see the return to a lengthening ‘Pause’.
Observed: catastrophic climate change hits the Alpine winter resorts – too much snow.
Our grandchildren
sadly will never get to know
the Alps without the winter snow.
/sarc
With three different scales, and four different hands, each giving three different measurements, it is certainly not simple. I would suggest 3 dials each clearly labelled what they measure.
A single large dial shows the entire picture in a larger scale than three smaller dials would. A minimum of effort will allow the observer to read any of the three dials.
I agree. The dials are complicated. Pollies have trouble reading a simple dial with just a pointer and big numbers. Trouble is, if it does not look a little bit complex they will conclude that the story behind it is simple and not worth following. Somehow or other we have to get each of the powers that be to adopt a sciency, engineering type as an independent advisor on all the tech, physicsy stuff they have to digest.
In response to 4 Eyes, the dial was designed to illustrate a scientific paper. That is why there is so much information in it. But anyone can see at a glance that the observed warming rate does not fall anywhere within the enormous interval of official predictions.
If 4 Eyes would like to design a graphic that will convey the information to scientifically illiterate politicians, I should be most interested to see it.
Two things:
1. According to the NOAA ENSO Index there have been 2 periods of la Niña conditions since the end of the last el Niño, and ENSO neutral conditions since the second of these ended: http://origin.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ONI_v5.php
2. The “great Pause of 18 years 9 months in global warming” does not exist in any current global temperature data set. It was an artifact of RSS TLT v3, which the producers of that data set repeatedly said was the result of a known cooling bias in their processing procedure. RSS v3 was replaced by v4 in 2017, accompanied by a peer reviewed paper. The “great pause” did not survive the transition and the warming rate in RSS v4 since 1998 (the start of the “great pause” referred to) is currently 0.153 ±0.157 °C/decade (2σ), which is 0.004 °C/dec shy of being a statistically significant warming trend.
“The “great Pause of 18 years 9 months in global warming” does not exist in any current global temperature data set.”
Doesn’t exist in UAH?
Between early 1997 and late 2015 there was indeed a Pause in the UAH data of about the same length as the original RSS Pause. That Pause is evident in the current UAH dataset.
There is a best estimate zero warming trend in UAH TLT v6 of 18 years and 6 months between July 1997 and Dec 2015. I can’t find one longer than that. RSS TLT v4 shows a best estimate of slight warming over the same period.
” RSS TLT v4 shows a best estimate of slight warming over the same period.”
Is v4 before or after the Mears-ization of RSS?
“best estimate zero warming trend in UAH TLT v6 of 18 years and 6 months”
So you admit that there was a pause.
“Pause” is a bad (misleading) word.
It implies the trend before the “pause”
is expected to continue.
But no one knows that.
Our global average temperature measurements,
especially the surface data / infilling,
(that keeps “changing” every year),
when the year-to-year variations
are within a 1.0 degree C. range,
are not precise enough for statistical analyses
unless you completely ignore reasonable margins of error!
There are only three possible trends that make sense:
An up trend
A down trend
A flat trend.
Any attempt to be more precise
is ignoring reasonable margins of error,
which is not real science!
2 sigma margins of error are applied to the trends quoted at the University of York site: http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html
The original “great Pause” referred to the RSS v3 TLT data set, with a negative trend that began in June 1997 and ended early 2016. This was a ‘best estimate’ trend that ignored the 2 sigma error margins (-0.000 ±0.171 °C/decade (2σ)).
Using the updated RSS v4 TLT data set over the same range results in ‘best estimate’ warming of 0.100 ±0.177 °C/decade (2σ) and from June 1997 to the present the ‘best estimate’ warming in RSS v4 is now statistically significant, at 0.155 ±0.149 °C/decade (2σ).
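For readers who want to check such trends themselves, here is a minimal sketch of an ordinary least-squares trend with a naive 2σ uncertainty; note that the York trend calculator linked above also corrects for autocorrelation in monthly data, which this sketch does not, so its error bars are wider than these. The data below are made up purely to show the mechanics:

```python
import numpy as np

def decadal_trend(time_years, anomalies):
    """OLS trend in C per decade with a naive 2-sigma uncertainty (no autocorrelation correction)."""
    x = np.asarray(time_years, dtype=float)
    y = np.asarray(anomalies, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    se_slope = np.sqrt(residuals.var(ddof=2) / np.sum((x - x.mean()) ** 2))
    return 10 * slope, 10 * 2 * se_slope

# Placeholder monthly anomalies, June 1997 to late 2018 (not a real dataset)
t = np.arange(1997.5, 2019.0, 1 / 12)
anoms = 0.015 * (t - t[0]) + np.random.normal(0.0, 0.12, t.size)

trend, two_sigma = decadal_trend(t, anoms)
print(f"Trend: {trend:.3f} +/- {two_sigma:.3f} C/decade (2 sigma)")
```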
DWR54 January 10, 2019 at 2:53 pm
I find it astonishing that anybody of integrity could put forward datasets that have obviously been modified to remove data that does not confirm CAGW, like the sea and surface temperatures, as scientific proof that the non-conformance did not exist.
Shame on you.
The pause from 2001-2015 is still there in RSS v4.
Current temperature after the 2015/16 transient is pretty much on that zero-trend line.
Stop using the 1998 El Nino step and 2015/16 transient to create FAKE trends, it’s petty!
DWR54
The pause was not an artefact.
It existed for varying lengths of time in a lot of data sets as LM has demonstrated numerous times in numerous past articles.
The longer temperatures stayed low the longer the pause grew time wise in both directions.
It is puerile to argue that a pause did not exist and cannot be seen.
It is facile to argue that RSS does not now show a pause but yet admit that it did do so before the data was manipulated to remove it in 2017.
“There is a best estimate zero warming trend in UAH TLT v6 of 18 years and 6 months between July 1997 and Dec 2015.”
That seems long. Enough to independently qualify and back up LM on the existence of a great pause in another data set that currently exists.
Even for nit pickers.
Amen to that!
DWR54,
Your series of posts on January 10 are very confusing:
1) At 8:04 am listed time, you stated: “The ‘great Pause of 18 years 9 months in global warming’ does not exist in any current global temperature data set. It was an artifact of RSS TLT v3 . . . The ‘great pause’ did not survive the transition and the warming rate in RSS v4 since 1998 (the start of the “great pause” referred to) . . .”
2) Then at 9:44 am you posted: “There is a best estimate zero warming trend in UAH TLT v6 of 18 years and 6 months between July 1997 and Dec 2015.”
3) Then at 2:53 pm you posted: “The original ‘great Pause’ referred to the RSS v3 TLT data set, with a negative trend that began in June 1997 and ended early 2016. . . . Using the updated RSS v4 TLT data set over the same range results in ‘best estimate’ warming of 0.100 ±0.177 °C/decade (2σ) and from June 1997 to the present the ‘best estimate’ warming in RSS v4 is now statistically significant, at 0.155 ±0.149 °C/decade (2σ).
Are your taking issue with the fact RSS TLT v3 showed 18 years 9 months of “pause” whereas UAH TLT v6 today shows 18 years 6 months of pause (a duration shorter by only 3 months, or 1.3%)? Therefore, doesn’t the equivalent of a “great pause” really exist in UAH TLT v6 based on Item 2 above, and in contradiction to your first sentence in Item 1 above?
Taking your asserted RSS TLT v4 2σ “best estimates” of warming on face value (not that that means they are credible), you appear to ignore that based on your statistical analysis there must be a much greater than 55% increase in warming rates after “early 2016” to drive the warming rate from June 1997 up to the present so much higher than the period of June 1997 through “early 2016” (0.155 vs 0.100 °C/decade, respectively). That 0.055 °C/decade difference in warming slope is certainly statistically significant based on the ±2σ uncertainties that you stated. So, one must conclude that your asserted 55% change in warming rate occurred relatively rapidly despite atmospheric CO2 content increasing continuously upward on a slight exponential trend. How do you explain that???
And while you’re at it, please also explain how the UAH TLT v6 zero warming trend of 18 years 6 months (that you admit happened per Item 2 above) happened despite the ever increasing atmospheric CO2 content.
That’s what’s missing from most published papers as far as I can tell. Usually scientists blithely throw math around without actually understanding what they’re doing. Give them a half hour tutorial on Matlab and they have at their hands more different ways to torture data than they ever knew existed. If they throw enough different tools at the data they will find one that results in a publishable p value and they’re off to the races. /rant
My applied math prof had a simple rule; trust the eye before the math.
I was actually quite surprised at the time, because I had graphed the data, then ran it through a statistical package. He pointed to the graph, and remarked “this I trust”. Then he pointed to the stats and said “this not so much”. At the time, it was the complete opposite of what I had expected.
That dial is pure steam punk. Love it. Not that I can make heads nor tails of it.
In reply to Mr Anderson, just look at the steampunk graph. Predicted equilibrium sensitivities (with the red needle showing the midrange estimate) occupy approximately the right-hand half of the graph. Sensitivities based on observation are the blue and green arrows, clustering about the “revised” interval of predictions that arises from our correction of climatology’s misdefinition of temperature feedback. They are on the left-hand side of the graph. It is visible that the true rate of warming is about a third of the mid-range officially-predicted rate. And by using the three scales you can see what happened from 1850 to 2011, what would happen at doubling of CO2 and (the sum of these two) what would happen from 1850-2100. Read the caption and study the graph a little and all will become clear.
RSS, whose chief scientist publicly describes those who disagree with him about the climate as “deniers”, revised its dataset to eradicate the Pause, it has tended to show the fastest apparent rate of global warming, now at 2 C°/century equivalent.
Before UAH altered its dataset, it used to show more warming than the others. Now it shows the least, at 1.3 C°/century equivalent.
Both RSS and UAH changed their data for the same reason, inconsistencies in the record from different satellites (change in orbits and decay in the sensors), notably after the switch from MSU to AMSU. In the case of UAH they changed their method and produced a different product which is sensitive to a higher region of the troposphere (4km vs 2km), although it still has the same name.
Here are the real reasons why UAH changed their method:
“Version 6 of the UAH MSU/AMSU global satellite temperature dataset is by far the most extensive revision of the procedures and computer code we have ever produced in over 25 years of global temperature monitoring. The two most significant changes from an end-user perspective are (1) a decrease in the global-average lower tropospheric (LT) temperature trend from +0.140 C/decade to +0.114 C/decade (Dec. ’78 through Mar. ’15); and (2) the geographic distribution of the LT trends, including higher spatial resolution. We describe the major changes in processing strategy, including a new method for monthly gridpoint averaging; a new multi-channel (rather than multi-angle) method for computing the lower tropospheric (LT) temperature product; and a new empirical method for diurnal drift correction. We also show results for the mid-troposphere (“MT”, from MSU2/AMSU5), tropopause (“TP”, from MSU3/AMSU7), and lower stratosphere (“LS”, from MSU4/AMSU9). The 0.026 C/decade reduction in the global LT trend is due to lesser sensitivity of the new LT to land surface skin temperature (est. 0.010 C/decade), with the remainder of the reduction (0.016 C/decade) due to the new diurnal drift adjustment, the more robust method of LT calculation, and other changes in processing procedures.”
It will be seen from this description that the “lesser sensitivity of the new lower-troposphere dataset to land-surface skin temperature” is only 0.01 K/decade, implying that that lesser sensitivity affects the global value by only 0.003 K/decade.
The difference in temperature for a cloudless winter night compared with one with cloud cover can be as much as 20°F. The problem investigated here is what can account for this extreme temperature difference. The argument being presented is that the effect of greenhouse gases alone cannot account for that temperature difference. The greenhouse effect is where molecules in the atmosphere absorb infrared radiation and radiate it in all directions. This means that that about one half is radiated downward toward Earth’s surface. The term cloud blanket effect is used to denote phenomenon in which the underside of a cloud reflects back down the infrared radiation that the Earth’s surface is radiating upward.
The greenhouse effect from a thin layer of the atmosphere can result in at most 50 percent of the thermal radiation absorbed from the surface being returned to the surface. The amount of radiation absorbed even from a thick layer of the atmosphere is quite small. The amount returning due to reflection from the underside of clouds can be higher. The albedo of cumulus clouds for the visible light range can be as high as 90 percent and that for the infrared range is of the same order of magnitude.
Reflection of electromagnetic radiation generally occurs at the interface between two types or two densities of conducting media. This means that when a cloud is resting on the land surface as a fog there is no reflection of infrared radiation. Instead the effect of fog on surface temperature would be entirely through the greenhouse effect. This would mean that, all other conditions being equal, the surface temperature on a foggy night would be noticeably cooler than when there are clouds but no fog. The foggy night, however, would be warmer than a clear night.
Clouds are an overwhelming influence on the climate of the Earth. Despite this the focus of climate modelers on the greenhouse effect of carbon dioxide has resulted in a neglect of cloud phenomena. The climate models fail to adequately represent the matter of clouds and cloudiness.
The usual point drawn from this display is that the climate modelers, one and all, do not know much about cloud coverage. But another salient element is the difference between actual cloud coverage in the Arctic and the Antarctic: at the North Pole it is about 70 percent; at the South Pole it is about 3 percent. Since carbon dioxide is supposedly well mixed in the atmosphere, the greenhouse effect of carbon dioxide should be the same at both poles. But consider the record for temperature by latitude.
Robert C. Balling, Jr. gives a graph relevant to comparing a model prediction with the actual record. It is given in his article, “Observational Surface Temperature Records versus Model Prediction,” which is published in Shattered Consensus: The True State of Global Warming (page 53).
Here we have the real world in all its complexity. Over the period 1970 to 2001 the Arctic region did have a greater temperature increase than the tropical region, though not a five-to-one ratio: the north polar region warmed about 83 percent more than the tropical region. In the south polar region, however, there was no larger increase than in the tropics; if anything the south polar region warmed less than the tropics, with Antarctica actually cooling. If the increase in temperature in the north polar region is taken as a verification of the theory and the global warming models, then the record in the south polar region is a denial of the validity of the theory and models. This is the real world in all its complexity, and the climate models definitely do not capture that complexity. In particular the models, driven as they are simply by the level of carbon dioxide in the atmosphere, cannot account for the discrepancy in temperature change between the two polar regions. The cloud blanket effect does account for that difference.
The Greenhouse Effect With and Without Cloud Cover
At the first level of analysis the greenhouse effect can be estimated using Beer’s Law. Beer’s Law implies that the proportion P of radiation absorbed in passing through a medium is
P = 1 − e^(−D)
where D is the optical depth of the medium and this is given by
D = ∫₀^L α·ρ(s) ds
where α is the absorption coefficient of the material of the medium, ρ is the molecular density and L is the physical depth of the medium. If there are two or more absorbing substances in the medium the optical depth for each is determined and their sum is the overall optical depth.
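As an illustration of Beer’s Law, here is a minimal Python sketch (not part of the original essay) for the special case of a uniform layer, where the optical depth reduces to D = αρL; the absorption coefficient, density and depth used are arbitrary placeholder values chosen only to show the behaviour.

from math import exp

def absorbed_fraction(alpha, rho, L):
    """Beer's Law for a uniform layer: D = alpha*rho*L, P = 1 - exp(-D)."""
    D = alpha * rho * L
    return 1.0 - exp(-D)

# Arbitrary illustrative values: the absorbed proportion rises toward 1
# as the optical depth grows.
for L in (1.0, 5.0, 10.0):
    print(L, absorbed_fraction(alpha=0.05, rho=1.0, L=L))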
The absorption coefficient for a substance may vary with the wavelength of the radiation. There may be certain critical wavelengths at which the medium absorbs. For example, the absorption spectra for water vapor and for carbon dioxide are given in Radiative Efficiencies. What is needed is an average absorption coefficient over a range of wavelengths. It would be a weighted average based upon the distribution of radiation of different wavelengths. The radiation from a body depends upon its temperature. The total energy in that radiation is proportional to the fourth power of the absolute temperature. The wavelength distribution of that radiation looks something like the following.
The frequency scale runs from left to right whereas the wavelength scale runs from right to left.
Simple absorption coefficients for water vapor and carbon dioxide, averaged over the range of infrared wavelengths relevant to Earth’s emissions, do not appear to be readily available. In lieu of that information, and while continuing to pursue the technical data needed to carry out an estimate of infrared absorption using Beer’s Law, some ballpark estimates will be made from the bits and pieces of relevant information that are available.
One datum that appears to be relevant is the estimate that 90 percent of the infrared radiation emitted by the Earth’s surface does not go out into Space. This 90 percent would be made up of several components, which include:
The absorption by greenhouse gases in the clear sky. The cloud coverage averages about 60 percent so there is about 40 percent clear sky.
The absorption by greenhouse gases in the space below the cloud cover.
The absorption by water, liquid and solid, in the clouds.
The reflection from the undersides of clouds.
Let α be the proportion absorbed by greenhouse gases in a clear sky. The amount reradiated downward from the clear-sky fraction would then be (1/2)(0.4)α. Suppose the proportion absorbed by greenhouse gases before the infrared radiation reaches the undersides of the clouds is one half of that for traversing the full atmosphere, i.e. ½α; over the cloudy fraction this is 0.6(½α). The infrared radiation reaching the clouds would be 1−0.6(½α). If the reflectivity of the clouds to infrared radiation is 70 percent (as opposed to 90 percent for visible light) then infrared reflection accounts for 0.7[1−0.6(½α)]. The rest would be absorbed in the clouds and half radiated back down to the surface. The other half would heat the cloud; eventually that heat would find its way to the top surface of the cloud and half would be radiated out into Space. In terms of that which does not go into Space this is [0.7+0.3(0.5+0.25)][1−0.6(½α)]. Thus
0.5(0.4)α + (0.925)[1-0.6(½α)] = 0.9
which reduces to
0.2α – 0.2775α = 0.9 – 0.925
and hence
−0.0775α = −0.025
which means that α would have to be
α = 0.322
This would mean that the cloud blanket effect (reflection of infrared radiation from the undersides of clouds) accounts for about 63 percent of the return of energy to the Earth’s surface and the greenhouse effect accounts for only about 37 percent. In an area without clouds there would be only 16 percent of the infrared radiation returned to the surface instead of the 86 percent returned to the surface under clouds. This 86 percent is made up of 59 percent from the cloud reflectivity, 8 percent from the effect of the greenhouse gases below the clouds and 19 percent from the greenhouse effect in the clouds. This is compatible with the experience of the cold clear winter night compared with a cloud-covered night.
A proportion absorbed of 0.322 means that 0.678 is transmitted. Thus the optical depth of the atmosphere due to the greenhouse gases is −ln(0.678) ≈ 0.39. At an altitude that included one half of the greenhouse gases the transmission would be exp(−0.39/2) ≈ 0.82 and thus the absorption would be about 0.18.
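For readers who want to check the arithmetic, here is a minimal Python sketch of my own that solves the equation above for α and reproduces the return fractions and the implied optical depth. The 0.4/0.6 clear/cloudy split, the 70 percent infrared cloud reflectivity and the 90 percent “not lost to Space” figure are taken from the text; everything else follows from them.

from math import log, exp

# Solve 0.5*0.4*a + 0.925*(1 - 0.6*0.5*a) = 0.9 for a (clear-sky absorption):
a = (0.9 - 0.925) / (0.5*0.4 - 0.925*0.6*0.5)        # ~0.32

# Downward return under a clear sky: half of what the gases absorb.
clear_return = 0.5 * a                                # ~0.16

# Components of the return under cloud, per the text's assumptions:
reflection    = 0.7 * (1 - 0.5*a)                     # ~0.59  cloud-base reflection
below_cloud   = 0.5 * (0.5*a)                         # ~0.08  greenhouse gases below cloud
in_cloud      = 0.3 * (0.5 + 0.25) * (1 - 0.5*a)      # ~0.19  absorbed in cloud, eventually returned
cloudy_return = reflection + below_cloud + in_cloud   # ~0.86

# Beer's-law optical depth implied by the clear-sky absorption:
D = -log(1 - a)                                       # ~0.39
half_column_absorption = 1 - exp(-D/2)                # ~0.18

print(a, clear_return, cloudy_return, D, half_column_absorption)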
Some insights may be gained by looking at equilibrium temperatures. However the nighttime temperatures are not equilibrium temperatures. At night the temperature is decreasing roughly according to a negative exponential curve.
Energy Balance Models for Equilibrium Temperature
Without greenhouse gases or clouds the equilibrium temperature of a planet’s surface would be given by
πR²(1−α)ψ = 4πR²σεT^4
which reduces to
(1−α)ψ = 4σεT^4
which means that
T = [(1−α)ψ/(4σε)]^(1/4)
where T is the equilibrium absolute temperature, R is the planet radius, ψ is the intensity of the solar radiation, ε is the surface emissivity, α is the planet surface albedo and σ is the Stefan-Boltzmann constant.
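As a quick check on this familiar formula, the sketch below (mine, not part of the essay) evaluates it with commonly used illustrative values, a solar constant of about 1361 W/m², an albedo of 0.3 and an emissivity of 1, and recovers the well-known effective temperature of roughly 255 K.

# Equilibrium temperature of a planet with no greenhouse gases or clouds:
# T = [(1 - albedo) * psi / (4 * sigma * eps)]**0.25
sigma = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
psi   = 1361.0        # solar constant, W m^-2 (illustrative)
alb   = 0.30          # planetary albedo (illustrative)
eps   = 1.0           # emissivity

T = ((1 - alb) * psi / (4 * sigma * eps)) ** 0.25
print(round(T, 1), "K")   # ~254.6 K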
If the greenhouse gases in the atmosphere absorb a proportion β and radiate half of it back to the surface then the equilibrium temperature satisfies the condition:
(1−α)ψ = 4σε(1−½β)T^4
and hence
T = [(1−α)ψ/(4σε(1−½β))]^(1/4)
Now clouds can be brought into the picture. Let α0 be the albedo of the surface, α1 the albedo of the top of the clouds to short wave radiation, and α2 the albedo of the bottom of the clouds to long wave radiation. Let β0 be the proportion of long wave (thermal) radiation absorbed by atmospheric greenhouse gases below the clouds and β1 the proportion absorbed by those gases in the atmosphere above the lower level of the clouds. Let β2 be the proportion absorbed by the greenhouse substances in the clouds.
Then for a clear sky
Tclear = [(1−α0)ψ/(4σε(1−½(β0+β1)))]^(1/4)
For the case of the cloud cover
(1−α0)(1−α1)ψ = 4σε(1−½(β0+β2)−α2)Tcloudy^4
and hence
Tcloudy = [(1−α0)(1−α1)ψ/(4σε(1−½(β0+β2)−α2))]^(1/4)
The ratio of the cloudy to the clear equilibrium temperature is then:
Tcloudy/Tclear = [(1−α1)(1−½(β0+β1))/(1−½(β0+β2)−α2)]^(1/4)
As seen above the common factors like the solar intensity and short wave surface albedo are eliminated.
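To make the ratio concrete, here is a small sketch of my own that evaluates Tcloudy/Tclear using rough values suggested earlier in the essay: a cloud-top shortwave albedo of 0.9, a cloud-base infrared reflectivity of 0.7, clear-sky greenhouse absorptions summing to roughly 0.32, and an in-cloud absorption of 0.3. The numbers are purely illustrative of how the formula behaves, not a measurement.

def temperature_ratio(a1, a2, b0, b1, b2):
    """Tcloudy/Tclear from the two energy-balance equations above."""
    return ((1 - a1) * (1 - 0.5*(b0 + b1)) / (1 - 0.5*(b0 + b2) - a2)) ** 0.25

# Rough values drawn from earlier in the essay:
#   a1 = 0.9   cloud-top albedo to shortwave
#   a2 = 0.7   cloud-base reflectivity to longwave
#   b0 = 0.18, b1 = 0.14   greenhouse absorption below/above the cloud base (sum ~0.32)
#   b2 = 0.30  absorption by the greenhouse substances in the cloud
print(temperature_ratio(a1=0.9, a2=0.7, b0=0.18, b1=0.14, b2=0.30))   # ~1.09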
The Dynamics of Diurnal Temperature Cycles
The equation for the dynamics of temperature is
C(dT/dt) = S0ψ(t) − S1εσT^4
where C is the heat capacity of the body, T is its absolute temperature, S0 is the surface over which the body receives solar radiation, S1 is the surface area over which the body emits thermal radiation. The heat capacity is proportional to the body volume; say C=γV, where γ is the heat capacity per unit volume.
The net inflow of radiant energy ψ(t) is a cyclic function of time. Let ψmean and Tmean be the mean energy inflow and temperature, respectively. Then
0 = S0ψmean − S1εσTmean^4
This equation may be subtracted from the dynamic equation to give:
C(dΔT/dt) = S0Δψ(t) − S1εσ(T^4 − Tmean^4)
where ΔT and Δψ are (T-Tmean) and (ψ-ψmean), respectively.
The term (T^4 − Tmean^4) on the right can be approximated by 4Tmean^3·ΔT. Thus the equation for the dynamics of diurnal temperature is of the form
C(dΔT/dt) = S0Δψ(t) − S1εσβΔT
where
β = 4Tmean^3
For material on the solution to this type of equation see Diurnal Temperature.
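As a sketch of how the linearized equation behaves, the code below (my own, with arbitrary assumed values for the heat capacity per unit area and the amplitude of the diurnal forcing) integrates C·dΔT/dt = S0·Δψ − S1εσβ·ΔT with a sinusoidal Δψ; the temperature departure lags the forcing and its swing shrinks as the heat capacity grows.

from math import sin, pi

sigma = 5.670e-8
Tmean = 288.0                   # assumed mean temperature, K
beta  = 4 * Tmean**3            # linearization factor, K^3
C     = 1.0e6                   # assumed heat capacity per unit area, J m^-2 K^-1
S0 = S1 = 1.0                   # per unit area
eps   = 1.0

dt  = 60.0                      # time step, s
day = 86400.0
dT  = 0.0                       # departure from the mean temperature, K
amp = 300.0                     # assumed diurnal swing of the net input, W m^-2

for step in range(int(3 * day / dt)):        # run three days so the cycle settles
    t = step * dt
    dpsi = amp * sin(2 * pi * t / day)       # cyclic departure from the mean input
    ddT  = (S0 * dpsi - S1 * eps * sigma * beta * dT) / C
    dT  += ddT * dt

print(round(dT, 2), "K departure from the mean at the end of day 3")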
Comparison of a Case of Ground Temperature With and Without Cloud Cover
On November 22, 2008 John Bryant of the WMCTV Weather Team in Memphis, Tennessee noted that the temperature at the airport under cloud cover was about 42°F, whereas at Dyersburg, a small city near Memphis, the sky was clear and the temperature at midnight was almost twenty degrees colder.
Comparing night time temperatures is not a matter of comparing equilibria. After the sun goes down the temperature is in disequilibrium and it decreases approximately like a negative exponential function; i.e.,
T(t) = T0·e^(−γt)
Suppose the temperature under the clear sky at sunset was 45°F and at midnight seven hours later it was 22°F. In absolute temperature these were 505°R and 482°R, respectively. The value of the coefficient in a negative exponential function for these two temperatures is
γ = −ln(Tmid/T0)/7 = -ln(482/505)/7 = 0.00666 per hour.
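A quick check of that coefficient (a trivial sketch of my own):

from math import log, exp

T0   = 45 + 459.67      # sunset temperature, deg R
Tmid = 22 + 459.67      # midnight temperature, deg R, seven hours later

gamma = -log(Tmid / T0) / 7                         # ~0.00666 per hour
print(round(gamma, 5), "per hour")
print(round(T0 * exp(-gamma * 7) - 459.67, 1), "deg F at midnight (check)")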
According to the case data the value of γ for the cloud covered situation was 0.
Let β1 be the proportion of the thermal radiation absorbed in a clear sky. The value of γ1 for the clear sky case is
γ1 = K(1−½β1)
where K is a coefficient depending upon all of the other factors besides the greenhouse gas absorption.
Let β2 be the proportion of the thermal radiation absorbed by the atmosphere under the cloud layer and α2 the proportion of thermal radiation returned to Earth from the underside of the clouds or the greenhouse effect in the clouds. Then the γ2 for the cloud cover case is
γ2 = K(1−½β2)(1−α2)
Since for the case under examination γ2 = 0, α2 must equal 1.0. This can occur only with reflection of the thermal radiation. Since β2 can be at most 1.0, the proportion of radiation returned to Earth by the greenhouse effect of the atmosphere below the clouds can be at most 0.5; at that extreme, 0.5 of the thermal radiation would be returned to Earth by the greenhouse effect of the atmosphere and 0.5 by the reflectivity of the clouds and their greenhouse effect. If β2 is 0.3 then 15 percent of the thermal radiation would be returned by the greenhouse effect of the atmosphere and 85 percent by the reflectivity and greenhouse effect of the clouds. The overwhelming proportion of the effect of the clouds has to come from their reflectivity of infrared radiation.
Conclusion
Most of the effect of clouds in moderating the night temperatures is from their reflectivity of infrared radiation.
****************************************
The above was the work of Thayer Watkins. Below are my conclusions.
Therefore, clouds overwhelm the downward infrared radiation (DWIR) produced by CO2. At night, the temperature difference with and without clouds can be as much as 11 C. The amount of warming provided by DWIR from CO2 is small but real. We take the average amount of DWIR due to CO2 and H2O (or some other cause of DWIR), convert it to a temperature increase, and call it Tcdiox. The pyrgeometers assume an emission coefficient of 1 for CO2, but CO2 is NOT a blackbody. Clouds contribute 85% of the DWIR; GHGs contribute 15% (see the analysis in the link above). The IR that hits clouds does not get absorbed; it gets reflected. When IR is absorbed by GHGs it is reemitted, either spontaneously or via collisions with N2 and O2, and in both cases the emitted IR is weaker than the absorbed IR. The IR reradiated by CO2 is emitted in all directions, so a little less than 50% of the IR absorbed by CO2 is reemitted downward to the Earth’s surface. Since CO2 is not transitory like clouds or water vapour, it remains well mixed at all times. Therefore, since the Earth is always giving off IR (probably at a maximum at about 5 pm each day), the so-called greenhouse effect (not really a greenhouse, but the term is always used) is always present, and there will always be some downward IR from the atmosphere.
When there are no clouds there is still DWIR, which causes a slight warming. We have an indication of what this is from the measured temperature increase of 0.65 C from 1950 to 2018. This slight warming occurs for reasons other than clouds alone, so it is happening all the time. Therefore on a particular night that has the maximum effect, you have 11 C + Tcdiox. We can put a number to Tcdiox. It may change over the years as CO2 increases in the atmosphere. At the present time, with 409 ppm CO2, the global temperature is 0.65 C higher than it was in 1950, the year when mankind started to put significant amounts of CO2 into the air. So at a maximum Tcdiox = 0.65 C. We do not know the exact cause of Tcdiox, whether it is all H2O, both H2O and CO2, the sun, or something else, but we do know the rate of warming. This analysis will assume that CO2 and H2O are the only possible causes. That assumption will pacify the alarmists, because they say there is no other cause worth mentioning. They like to forget about water vapour, but in any average local temperature calculation you cannot forget about water vapour unless it is a desert.
A proper calculation of the mean physical temperature of a spherical body requires an explicit integration of the Stefan-Boltzmann equation over the entire planet surface: take the fourth root of the absorbed solar flux at every point on the planet, do the same for the outgoing flux at the top of the atmosphere at each of those points, difference the two point by point, turn each point result into a temperature field, and then average that temperature field across the entire globe. This gets around the Hölder-inequality problem that arises when calculating temperatures from fluxes on a spherical body. In this analysis, however, we are simply applying averages to one local situation, because we are not after the exact effect of CO2 but only its maximum effect.
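The point about averaging fluxes versus averaging temperatures can be seen in a few lines. The sketch below is my own, using a crude cos-latitude flux profile as a stand-in for real data; it shows that the mean of the point-by-point temperatures is lower than the temperature computed from the mean flux, which is the Hölder (Jensen) inequality at work.

import numpy as np

sigma = 5.670e-8
lat = np.deg2rad(np.linspace(-89.5, 89.5, 180))
w = np.cos(lat); w /= w.sum()                    # area weights on a sphere

# Crude illustrative absorbed-flux profile: more flux near the equator.
F = 400.0 * np.cos(lat)                          # W m^-2, placeholder only

T_pointwise = np.sum(w * (F / sigma) ** 0.25)    # average of the local temperatures
T_from_mean = (np.sum(w * F) / sigma) ** 0.25    # temperature of the average flux

print(round(T_pointwise, 1), round(T_from_mean, 1))   # the point-wise mean is the lower one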
In any case Tcdiox represents the real temperature increase over the last 68 years. You have to add Tcdiox to the overall temperature difference of 11 C to get the maximum temperature difference attributable to clouds, H2O and CO2. So the maximum effect of any temperature changes caused by clouds, water vapour or CO2 on a cloudy night is 11.65 C. We will ignore methane and any other GHG except water vapour.
So, from the link above, clouds represent 85% of the total temperature effect, and therefore have a maximum temperature effect of 0.85 × 11.65 C = 9.90 C. That leaves 1.75 C for the water vapour and CO2. CO2 will have relatively more of an effect in deserts than in wet areas, but it can never go beyond this 1.75 C. Since the desert areas are 33% of the 30% of the surface that is land, i.e. about 10% of the Earth’s surface, CO2 has a maximum effect of 10% of 1.75 C plus 90% of Twet, where Twet is defined as the CO2 temperature effect over all the world’s oceans and the non-desert areas of land. There is an argument for less IR being radiated from the world’s oceans than from land, but we will ignore that for the purpose of maximizing the effect of CO2, to keep the alarmists happy for now. So CO2 has a maximum effect of 0.175 C + (0.9 × Twet).
So all we have to do is calculate Twet.
Reflected IR from clouds is not weaker. Water vapour is in the air and in clouds; even without clouds, water vapour is in the air. No one knows the ratio of the amount of water vapour that has condensed to water/ice in the clouds compared with the total amount of water vapour in the atmosphere, but the ratio cannot be very large. Even though clouds cover on average 60% of the lower layers of the troposphere, the troposphere is approximately 8.14 × 10^18 m^3 in volume, so the total cloud volume in relation must be small, certainly not more than 5%. H2O is a GHG, and water vapour outnumbers CO2 by a factor of about 25 to 1, assuming 1% water vapour. So of the original 15% contribution to DWIR by GHGs, we have 0.15 × 0.04 = 0.006, or 0.6%, to account for CO2. Now we apply an adjustment factor to account for the fact that some water vapour at any one time is condensed into the clouds: add 5% onto the 0.006 and we get 0.0063, or 0.63%. CO2 therefore contributes 0.63% of the DWIR in non-deserts. We will neglect the fact that the IR emitted downward from CO2 is a little weaker than the IR reflected by the clouds. Since, as above, a cloudy night can be 11 C warmer than a clear-sky night, CO2, i.e. Twet, contributes a maximum of 0.0063 × 1.75 C = 0.011 C.
Therefore, since Twet = 0.011 C, we have from the above equation: CO2 maximum effect = 0.175 C + (0.9 × 0.011 C) ≈ 0.185 C. As I said before, this will increase as the level of CO2 increases, but we have had 68 years of heavy fossil-fuel burning and this is the absolute maximum of the effect of CO2 on global temperature.
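To make the chain of estimates above easy to audit, here is a short sketch of my own that simply reproduces the commenter’s ballpark numbers as stated; none of the inputs are independently derived here.

# Reproducing the ballpark chain of estimates in the comment above.
max_cloudy_night_diff = 11.0 + 0.65        # 11 C cloud effect + Tcdiox -> 11.65 C
cloud_share           = 0.85               # clouds' assumed share of the DWIR
cloud_effect          = cloud_share * max_cloudy_night_diff            # ~9.90 C
vapour_plus_co2       = max_cloudy_night_diff - cloud_effect           # ~1.75 C

co2_share_of_dwir = 0.15 * 0.04 * 1.05     # 15% GHG share x ~1/25 CO2:H2O ratio, +5% adjustment
Twet              = co2_share_of_dwir * vapour_plus_co2                # ~0.011 C

desert_fraction = 0.10                     # the comment's rounded "33% of 30%" figure
co2_max_effect  = desert_fraction * vapour_plus_co2 + (1 - desert_fraction) * Twet

print(round(cloud_effect, 2), round(vapour_plus_co2, 2),
      round(Twet, 3), round(co2_max_effect, 3))    # ~9.9, ~1.75, ~0.011, ~0.185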
So how could any average global temperature increase by 7 C, or even 2 C, if the maximum warming effect of CO2 today from DWIR is only 0.185 C? On this reckoning the effect of clouds is 85%, the effect of water vapour 13.5% and the effect of CO2 1.5%.
Sure, if we quadruple the CO2 in the air, which at the present rate of increase would take 278 years, we would increase the effect of CO2 (if the effect is linear) to 4 × 0.185 C = 0.74 C. Whoopedy doo!
“If the greenhouse gases in the atmosphere absorb a proportion β and radiate half of it back to the surface then the equilibrium temperature satisfies the condition:”
That is true in the upper atmosphere but it’s not what happens near the surface where thermalization is the predominant mode of heat transfer from the excited CO2.
If by thermalization you mean convection, that does not affect the argument of my post. Convection goes on whether you have clouds or not. My sole point in the post is to show the maximum possible effect that CO2 could have had in the last 68 years. It may well be ZERO, but if Thayer Watkins is correct, then CO2 cannot have had any more effect than 0.185 C since mankind has been emitting major amounts of CO2 into the atmosphere. That is an average increase of 0.00272 C per year, or about 0.27 C per century, well under Lord Monckton’s or any other scenario of climate sensitivity.
Alan T, you should organize this with illustrations for an article here on WUWT.
I tried.
That 11 C temperature variation due to clouds at night can occur daily. It is like saying that since day-and-night solar insolation varies from 1,000 W/m^2 to zero, solar variability of 1 or 2 W/m^2 has a negligible effect on climate. Go figure.
Hi Louis
Recently posted:
https://wattsupwiththat.com/2019/01/09/a-sea-surface-temperature-picture-worth-a-few-hundred-words/#comment-2583524
In the Vostok cores, peak CO2 was never able to maintain peak temperature; in fact, peak CO2 WAS CAUSED BY temperature, BECAUSE CO2 always LAGGED TEMPERATURE IN TIME.
CO2 TRENDS LAG TEMPERATURE TRENDS AT ALL MEASURED TIME SCALES.
– by hundreds of years in the ice core record;
– by ~9 months in the modern data record.
REFERENCES:
Allan MacRae, “Carbon Dioxide Is Not the Primary Cause of Global Warming: The Future Can Not Cause the Past,” http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/
http://www.woodfortrees.org/plot/esrl-co2/from:1979/mean:12/derivative/plot/uah5/from:1979/scale:0.22/offset:0.14
Ole Humlum, Kjell Stordahl and Jan-Erik Solheim, “The Phase Relation Between Atmospheric Carbon Dioxide and Global Temperature,” Global and Planetary Change, Volume 100, January 2013, Pages 51–69, https://www.sciencedirect.com/science/article/pii/S0921818112001658
What is the justification for using 4.83 rather than 5.35 as the coefficient multiplied by the natural log of the ratio of after/before CO2 levels to obtain the W/m^2 forcing from the change?
And what is the justification for using 1 K as the reference sensitivity after deriving 1.15 K? (I figure 1.11 K using 5.35 × ln(2) × 0.3.)
In reply to Mr Klipstein, the CMIP5 model ensemble (whose outputs are summarized in Andrews+ 2012) has a mean CO2-doubling radiative forcing of 3.346 Watts per square meter. The product of 3.346 Watts per square meter and the Planck sensitivity parameter 0.3 Kelvin per Watt per square meter gives the reference sensitivity to doubled CO2: it is 1.0 K.
The value 1.15 K is the centennial midrange predicted reference warming on the basis of IPCC’s declared RCP 6.0 CO2 concentration of 700 ppmv in 2100 compared with 368 ppmv in 2000, with the resulting CO2 forcing enhanced by 20% to allow for other anthropogenic forcings.
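A few lines of arithmetic (my own, using only the figures quoted in this thread and in the head posting) reproduce both numbers approximately:

from math import log

planck = 0.3                                 # Planck sensitivity parameter, K per W m^-2

# Reference sensitivity to doubled CO2 from the CMIP5 mean doubling forcing:
ref_2xCO2 = 3.346 * planck                   # ~1.0 K

# Centennial reference sensitivity on RCP 6.0: CO2 forcing for 368 -> 700 ppmv,
# enhanced by 20% for the other anthropogenic forcings:
co2_forcing    = 4.83 * log(700.0 / 368.0)   # ~3.1 W m^-2
ref_centennial = co2_forcing * 1.2 * planck  # ~1.12 K, close to the 1.15 K quoted

print(round(ref_2xCO2, 2), round(ref_centennial, 2))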
One thing to consider is that the forcings and temperature rises predicted by the RCPs include forcings from increase of GHGs other than CO2 such as methane. The RCPs are named after W/m^2 forcings from all GHGs (other than water vapor) added to the atmosphere by human activity.
Another thing to consider is that warming after 2000 is not limited to that caused by GHGs added to the atmosphere after 2000, because of lags.
One need not worry about lags in the operation of the short-acting feedbacks, precisely because they are short-acting, with delays of years at most and usually hours or days.
There are lags other than ones in feedbacks. For example, the lag in temperature of the upper ocean in response to a forcing. This takes many years.
The atmosphere does not warm the oceans; the heat flow is the other way, through evaporation. When water evaporates it loses heat. Put a pail of room-temperature water inside a room in the tropics where the temperature does not vary much: there will be a steady evaporation of the water into the air of the room until no water is left. In the Earth system the oceans on average are slightly warmer than the air despite the fact that 77.76 W/m^2 goes into evaporation and 8.64 W/m^2 into transpiration. The reason is that the oceans are receiving 114 W/m^2 of sunlight while the land is receiving 49 W/m^2.
Uhhhh . . . I believe that, absent cloud effects, both land and oceans RECEIVE the same amount of sunlight at the same latitude for any particular day of the year.
In addition to sunlight, the land and the ocean receive downwelling IR from clouds and greenhouse gases in the atmosphere.
I’ve always been suspicious about the ignoring of the feedback and would be very interested in reading your paper. If you have it on hand, I’d also like a good reference to the climatology perspective on it, just so I can see the most recent work.
Is it possible for you to send it to me?
If Mr Combs were to email me at monckton{at}mail.com, I should be able to send him a short version of our paper, which will present the argumentum ex definitione in Classical logic that is, on its own, enough to establish our main point.
The paper currently under review contains a much more mathematical treatment of the subject, providing formal proof that what is in any event self-evident in Classical logic is also demonstrably true in physics and in mathematics.
Christopher Monckton of Brenchley, thank you for your essay.
Good luck with the paper.
I forgot to add: I like the dial presentation.
Many thanks to Mr Marler for his kind comments. And I’m glad he likes the dial. It takes a bit of getting used to, but it does give a very clear presentation of the wide divergence between excitable prediction and sober observed reality.
Lord Monckton-
I have been a fan of all your activities, both scholarly and in the activist realm (who can forget your actions at the COP meeting in South Africa?). And the work you describe in this post is eye-opening. Thanks for taking the time to discuss it here.
However, I would like to humbly comment on your “dial” graphic. I would describe your graphic as elegant, but I am not sure that elegance is what you should be striving for. I recently had cause to go back and review what I knew of Edward Tufte’s work in the visual display of quantitative information. He points out that the purpose of visual display of information is to make the difficult and complex more easily understood. I am not sure that the dial graphic has made your results easier to understand.
After a forty-year career in engineering research, having had to present my fair share of information graphically, I had a hard time understanding what you were trying to convey. (True, I’ve been retired for 20 years, so I could be a little rusty.) It is not that the dial itself is a bad idea; I just think that there is too much information crammed into too little space.
In response to Old Engineer, I should be most grateful if he would create, and convey to me at monckton[at]mail.com, a simpler graphic that conveys the same information.
Most people who take one look at the graph can see at once that there is a very large discrepancy between both the magnitude and the interval of official predictions (cunningly labeled “official predictions” in large letters) and the magnitude and interval of observed warming.
Lord Monckton, it is said that a picture is worth a hundred words. In the case of your dial graphic, I fear it needs more than a hundred words to explain just what it is trying to express. Perhaps bar charts would convey the message in a simpler and more digestible manner. That said, may I thank you for sharing the results of your team’s sterling efforts.
Perhaps oldscouser would like to draw a suitable bar-chart for me. Most people who have seen the dial get the point at once that the needles indicating the measured or inferred observed temperature do not fall anywhere within the enormous interval of warming predicted by official climatology.
“Therefore, the 21st-century warming that IPCC should be predicting, on the RCP 6.0 scenario and on the basis of its own estimates of CO2 concentration and the models’ estimates of CO2 forcing and Charney sensitivity, is 3.37 x 1.15, or 3.9 K.
Yet IPCC actually predicts only 1.4 to 3.1 K 21st-century warming on the RCP 6.0 scenario”
No, it should not be predicting that. As said, 3.37 is the equilibrium sensitivity: the change once the effects of the rise in CO2 during the 21st century have settled. That will not have happened by the end of the century.
“Finally, UAH, which Professor Ole Humlum (climate4you.com) regards as the gold standard for global temperature records. Before UAH altered its dataset, it used to show more warming than the others. Now it shows the least, at 1.3 C°/century equivalent.”
So, more or least. Which one is gold?
In reply to Mr Stokes, the short-acting feedbacks have delay times of years at most, and usually hours or days. There is, therefore, a clear discrepancy between the detuned predictions of 21st-century warming made by IPCC, of which the RCP 6.0 prediction is studied in the head posting, and IPCC’s far more extreme prediction of equilibrium sensitivity to doubled CO2.
The second question raised by Mr Stokes is not for me but for Professor Humlum.
“In reply to Mr Stokes, the short-acting feedbacks have delay times of years at most, and usually hours or days.”
To my limited understanding that seems at odds with what the IPCC say, based on CMIP5.
According to AR5 (WG1 Box 12.2) the CMIP5 exercise produces an ECS of 2.1°C to 4.7°C, but they also say that CMIP5 gives an estimate for TCR of 1.2°C to 2.4°C.
“In reply to Mr Stokes, the short-acting feedbacks have delay times of years at most, and usually hours or days.”
The time of actions of feedbacks have little to do with the time scale. It’s a much simpler issue; how fast does a kettle heat when you turn on the gas. That has nothing to do with feedbacks; it is just thermal inertia. You have to maintain a heat flux for a while to add enough heat to raise the temperature. Same with AGW and the oceans, and the scale there can be centuries.
Bellman makes the right point. TCR predicts only half the heating of ECS on its timescale. And that timescale is seventy years (of rising flux).
“The time of actions of feedbacks have little to do with the time scale.”
Please Nick.
Time is time.
Thermal inertia is part of how fast a feedback works; they are interdependent.
Your argument, such as it is, needs to be expressed differently.
Nick Stokes is still thinking within the frame of reference of the existing, unduly limited transfer-function equation.
In 1850 the equilibrium temperature was 287.5 K. Even allowing for Mr Stokes’ 70-year delay, it was still 287.5 K, for there was no trend in global temperature for 80 years after 1850 (HadCRUT4). The reference temperature was somewhere between 220 and 265 K. The transfer function accordingly fell on the interval [1.1, 1.3].
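The interval quoted follows directly from the two reference-temperature bounds (a trivial check of my own):

equilibrium = 287.5                 # K, 1850 equilibrium temperature
for reference in (265.0, 220.0):    # K, the two bounds on the reference temperature
    print(round(equilibrium / reference, 2))   # ~1.08 and ~1.31, i.e. roughly [1.1, 1.3]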
“The result came as quite a surprise to us, too”
MoB, I would recommend submitting your results for peer review by physicists with a background in theoretical mathematics rather than climatologists. From your previous postings, we saw that the climatology definition of feedback is so highly ingrained in climatology that you are unlikely to overcome human bias, even when the climatologist is sympathetic.
The problem is not PhD climatology. It is first or second year physics and mathematics. It is the age old problem of specialization: Knowing more and more about less and less.
In reply to Ferd Berple, I agree that the journal should really get a professor of applied control theory to read what our professor of control theory has written. It would also be a good idea for the journal to find a number theorist to read the latest draft of our paper, for it contains the standard number-theoretic demonstration that the transfer function is simply the sum of an infinite convergent geometric series in the feedback fraction, under the convergence condition that the absolute value of the feedback fraction be less than 1.
The current draft of the paper provides multiple demonstrations – one in Classical logic (which is far and away the simplest for third parties to understand); one in number theory; and one in control theory. In this way, we have left no room for doubt.
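The number-theoretic point can be illustrated in a few lines (a sketch of mine, not taken from the paper): for any feedback fraction f with |f| < 1, the partial sums of the geometric series converge to the transfer function 1/(1 − f).

def partial_sum(f, n_terms):
    """Sum of the geometric series 1 + f + f^2 + ... with n_terms terms."""
    return sum(f**n for n in range(n_terms))

f = 0.4                                   # illustrative feedback fraction, |f| < 1
for n in (1, 2, 5, 10, 20):
    print(n, partial_sum(f, n))           # approaches 1/(1 - f) = 1.666...
print("closed form:", 1 / (1 - f))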
“The result came as quite a surprise to us, too”
Here is a simple example to demonstrate this result, showing that feedback works on both change and absolute values, contrary to the formal theory of climatology.
This single contradiction is sufficient, via falsification, to show that the climatology theory is incorrect. It also shows that MoB’s paper is correct on this specific point. Readers can easily test this for themselves.
Typically, passenger cars have feedback in their steering, as a result of the geometry of the suspension. The purpose of this feedback is for safety; to return the vehicle to a straight course when the driver releases the steering wheel and the vehicle is in motion forward.
Select a large empty lot. A vehicle with manual steering works best, but power steering will also work. The forces will simply be less obvious.
Put the vehicle in motion. To test positive feedback, drive the vehicle in reverse.
To test negative feedback, drive the vehicle forward. For our test, drive the vehicle in reverse, because climatology theory holds that the feedback is positive.
Hold a constant speed, and turn the steering wheel to the left. You will feel feedback from the steering geometry via the steering wheel as you turn. Stop turning the steering wheel, but do not allow it to come back to center (or to slam outward to the left because you are in reverse). Hold the steering wheel in a constant position to the left.
In both cases, when you turn the steering wheel away from neutral (absolute zero) and when you hold it in a constant turn (no change), you will continue to feel feedback from the suspension geometry via the steering wheel. This feedback does not disappear simply because there is no further change in the system.
Now turn the steering wheel further to the left while maintaining constant speed. You will notice that the further you turn to the left, the more force feeds back from the suspension geometry to the steering wheel. This is the feedback from a delta.
Now hold the steering wheel at the extreme left position. The feedback via the steering wheel does not go to zero, rather it remains at the highest level of the entire test. In fact, if you are going fast enough in reverse, you may have considerable trouble holding the steering wheel in position against the feedback force. This is the no change feedback.
Under climatology theory, this “no change” feedback should go to zero when you hold the steering wheel in a constant position, but it does not. The feedback only goes to zero when the steering geometry returns to the neutral position.
“Now hold the steering wheel at the extreme left position.”
Note: obviously, not so extreme that the suspension hits the stops.
Ferd Berple’s feedback analogy is quite a nice one. However, we were not surprised by the fact that the transfer function is the ratio of equilibrium to reference temperature, and not merely the ratio of equilibrium to reference sensitivity. The matter is self-evident from the mathematics and physics, as well as ex definitione. We were, however, surprised by the fact that official climatology simply had no idea that the transfer function is the ratio of absolute equilibrium to reference temperature.
Lord Monckton,
I strongly hope your paper will be published shortly; the sooner, the better. In my opinion it is strong, elegant and just brief enough for a lay person to understand (I am not sure about Dutch politicians). In the Netherlands, policy makers are about to take the most absurd measures to reduce carbon-dioxide emissions. To them it is a pollutant, more dangerous than anything else around. Please keep us posted on which journal will publish it. As soon as it is available I will buy a couple of dozen copies to hand personally to our policy makers in The Hague!
Mr Duiker is very kind. We, too, hope the paper will be published in due course. The fact that the editor of the journal withdrew his rejection of the paper based on the manifest inadequacies of the reviews is a promising start. As soon as the paper is published, WUWT will be the first to know.
If official climatology had not made the mistake of adopting an erroneously restrictive definition of temperature feedback, no one would ever have tried to maintain that global warming caused by us could possibly be catastrophic. The whole nonsense is based on a fundamental and elementary error of physics perpetrated by climatologists borrowing mathematics from another branch of physics without understanding what they had borrowed.