By Christopher Monckton of Brenchley

I make no apology for returning to the topic of the striking error of physics unearthed by my team of professors, doctors and practitioners of climatology, control theory and statistics. Our discovery that climatology forgot the Sun is shining brings the global-warming scare to an unlamented end. My last article discussing our result attracted more than 800 comments. Here, I propose to answer some of the more frequently occurring comments, which will be in bold face. Replies are in regular face.

In a temperature feedback loop, the input signal is the surface reference temperature R before feedback acts. The output signal is the equilibrium temperature E after feedback has acted. The feedback factor f (= 1 – R / E) is the ratio of the feedback response fE (= E – R) to E. Then E = R + fE = R(1 – f)⁻¹. By definition, E = RA, where A, the system-gain factor or transfer function, is equal to (1 – f)⁻¹ and to E / R.
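These identities are easy to verify numerically. Here is a minimal sketch in Python, using as illustration the 1850 values discussed below (any R, and any f with |f| < 1, would do):

```python
# Check the loop identities: E = R + f*E = R*(1 - f)^-1, and A = E/R = (1 - f)^-1.
R = 254.8      # reference temperature before feedback, K (the 1850 value used below)
f = 0.1139     # feedback factor; the loop converges only if |f| < 1

A = 1.0 / (1.0 - f)   # system-gain factor
E = R * A             # equilibrium temperature after feedback

assert abs(E - (R + f * E)) < 1e-9      # E is reference temperature plus feedback response fE
assert abs(f - (1.0 - R / E)) < 1e-9    # f recovered from the two temperatures
print(round(A, 4), round(E, 2))         # ~1.1285  ~287.55
```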

But your result is too complex. Please state it in simpler terms.

Erroneously, IPCC (2013, p. 1450) defines temperature feedback as responding only to changes in reference temperature. However, feedback also responds to the entire reference temperature. Climatology thus omits the sunshine from its sums and loses the opportunity to find, directly and reliably, the Holy Grail of climate-sensitivity studies – the system-gain factor.

Lacis+ (2010) imagined that in 1850 feedback response accounted for 75% of the equilibrium warming of ~44 K driven by the pre-industrial non-condensing greenhouse gases, implying a feedback factor of 0.75, a system-gain factor of 4 and an equilibrium sensitivity of 4.2 K, i.e. 4 times the reference sensitivity of 1.04 K (Andrews 2012). Lacis misattributed to the non-condensing greenhouse gases the large feedback response to the emission temperature from the Sun.

In reality, absolute emission temperature in 1850 with no non-condensing greenhouse gases would have been 243.3 K and the warming from those gases 11.5 K, giving a reference temperature of 254.8 K before feedback. The HadCRUT4 equilibrium temperature after feedback was 287.55 K. Thus, the system-gain factor, the ratio of equilibrium to reference temperature, was 287.55 / 254.8, or 1.13.

By 2011, if all warming since 1850 was anthropogenic, reference temperature had risen by 0.68 K to 255.48 K. Equilibrium temperature had risen by the sum of the 0.75 K observed warming (HadCRUT4) and 0.27 K to allow for delay in the emergence of manmade warming: thus, 287.55 + 1.02 = 288.57 K.

Climatology would thus calculate the system-gain factor as 1.02 / 0.68, or 1.5. Yet the models’ current mid-range estimate of 3.4 K warming per CO2 doubling implies an impossible system-gain factor of 3.25.

In reality, the system-gain factor was 288.57 / 255.48, or 1.13, much as in 1850. It barely changed over the 161 years 1850-2011 because the 254.8 K reference temperature in 1850 was 375 times the manmade reference sensitivity of 0.68 K from 1850-2011. Sun big, man small: nonlinearities in feedback response are not an issue.

Given 1.04 K reference warming from doubled CO2, equilibrium warming from doubled CO2 is 1.04 x 1.13, or 1.17 K, not the 3.4 [2.1, 4.7] K imagined in the CMIP5 models (Andrews, op. cit.). And that, in just 350 words, is the end of the climate scare. There will be too little warming to cause harm.
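For readers who would like to check the arithmetic of the last few paragraphs for themselves, the whole calculation fits in a few lines of Python (a sketch using only the figures quoted above):

```python
# The headline calculation, using only the figures quoted in the text above.
R_1850 = 243.3 + 11.5        # 1850 reference temperature: emission temp + GHG warming, K
E_1850 = 287.55              # 1850 HadCRUT4 equilibrium temperature, K
R_2011 = R_1850 + 0.68       # + anthropogenic reference sensitivity, 1850-2011
E_2011 = E_1850 + 1.02       # + observed 0.75 K plus 0.27 K delayed warming

A_1850 = E_1850 / R_1850     # absolute system-gain factor, 1850
A_2011 = E_2011 / R_2011     # absolute system-gain factor, 2011
A_delta = (E_2011 - E_1850) / (R_2011 - R_1850)   # climatology's delta version

charney = 1.04 * A_2011      # equilibrium sensitivity to doubled CO2, K

print(round(A_1850, 2), round(A_2011, 2), round(A_delta, 2), round(charney, 2))
# -> 1.13 1.13 1.5 1.17
```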

The feedback-loop diagram simplifies to this black-box block diagram

But your result is too simple. Bringing 122 years of climatology to an end in 350 words? It can’t be as simple as that. Really it can’t. It has to be complicated. Models take account of a dozen individual feedbacks and the interactions between them. IPCC (2013) mentions “feedback” more than 1000 times. Feedback accounts for 85% of the uncertainty in equilibrium sensitivity (Vial et al. 2013). You can’t just jump straight to the answer without even mentioning, let alone quantifying, even one individual feedback. Look, in climatology we just don’t do simple.

Inanimate feedback processes cannot “know” that they must not respond to the very large emission temperature but only to the comparatively small subsequent perturbations. Once it is accepted that feedback responds to the entire input signal, it becomes possible to derive the system-gain factor reliably and immediately. It is simply the ratio of equilibrium to reference temperature at any chosen time. Equilibrium sensitivity to doubled CO2 (after feedback has acted) is simply the product of the system-gain factor and the reference sensitivity to doubled CO2 (before feedback has acted). And that’s that. To find the system-gain factor, one does not need the value of any individual feedback. We can treat the transfer function between reference and equilibrium temperatures simply as a black box.

But each of the five Assessment Reports of the IPCC is thousands of pages long. You can’t just get the answer that has eluded the world’s experts in a few paragraphs.

To quote a former occupier of the office of President of the United States, “Yes We Can.” The “experts” had borrowed feedback math from control theory without understanding it. James Hansen of NASA first explicitly perpetrated the error of forgetting the sunshine in a lamentable paper of 1984. Michael Schlesinger perpetuated it in a confused paper of 1985. Thereafter, everyone in official climatology copied the mistake without checking it. Correcting the error makes it easy to constrain the system-gain factor and hence equilibrium sensitivity.

But climate sensitivity in models is what it is. The science is settled.

All honest experts in control theory will agree that feedback processes in dynamical systems respond to the entire input signal and not just to some arbitrary fraction of that signal. The math is the same for all feedback-moderated dynamical systems – electronic op-amp circuits, process-control systems, climate. Build a test rig. All you need is an input signal, a feedback loop and an output signal. Set the input signal and the feedback factor to any value you like. Now measure the output signal. The circuit doesn’t respond only to some fraction of the input signal. It responds to all of it. We checked by building our own test rig and then getting a government lab to build one for us and to measure the output under a variety of conditions.

Feedback amplifier test circuit built and operated for us by a government lab

But the circuits you built are too simple. Any undergraduate could have built them. You didn’t need to go to a government lab.

We knew official climatology and its devotees would kick and scream and whinge and throw all their toys out of the stroller when they learned of our result. Trillions are at stake. So we checked what did not really need to be checked. Feedback theory has been around for 100 years. To borrow a phrase, it’s settled science. But we checked anyway. Oh, and we went right back to basics and proved the long-established feedback system-gain equation by two distinct methods.

But you didn’t need to prove the equation by two methods. All you needed to do was to prove it by linear algebra.

Yes, indeed. The proof by linear algebra is very simple. Since the feedback factor is the ratio of the feedback response in Kelvin to equilibrium temperature, the feedback response is the product of the feedback factor and equilibrium temperature. Then equilibrium temperature is the sum of reference temperature and the feedback response. With a little elementary algebraic manipulation, it follows that equilibrium temperature is the product of reference temperature and the reciprocal of (1 minus the feedback factor). That reciprocal is, by definition, the system-gain factor.

But we also obtained the system-gain factor as the sum of an infinite series of powers of the feedback factor. Under the convergence condition that the absolute value of the feedback factor is less than 1, the system-gain factor is the sum of the infinite series of powers of the feedback factor, which is the reciprocal of (1 minus the feedback factor), as before. We are guilty of double-checking. Get over it.
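Both derivations are easy to check numerically. A few lines of Python confirm that the partial sums of the series 1 + f + f² + … converge on the closed form 1/(1 – f); the f used here is the value implied by the 1850 temperatures quoted in the article:

```python
# Compare the closed-form system-gain factor 1/(1 - f) with the partial sum
# of the power series 1 + f + f^2 + ..., which converges when |f| < 1.
f = 1.0 - 254.8 / 287.55     # feedback factor implied by the 1850 temperatures (~0.114)
assert abs(f) < 1.0          # the convergence condition

closed_form = 1.0 / (1.0 - f)
partial_sum = sum(f ** k for k in range(60))   # 60 terms is ample at this |f|

print(round(closed_form, 6), round(partial_sum, 6))   # the two agree: ~1.128532
```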

Convergence upon the truth

But the equation you use is not derived from any known physical theory.

Yes, it is. See the above answer. But all you really need to know about feedback is that the system-gain factor is the ratio of equilibrium temperature (after feedback has acted) to reference temperature (before feedback acts). For 1850 and for 2011, we know both temperatures to quite a small margin of error. So we know the system-gain factor, and from that we can derive equilibrium sensitivity to doubled CO2.

But climatology’s version of the system-gain equation is derived from the energy-balance equation via a Taylor-series expansion. It can’t be wrong.

It isn’t wrong. It’s just not useful, because there is much more uncertainty in the delta temperatures than in the well constrained absolute temperatures we use. Neither the energy-balance equation nor the leading-order term in the Taylor-series expansion reliably gives the system-gain factor. It is only when you remember the Sun is shining that you can find the value of that factor directly and reliably.

Climatology in the dark

But if you’re saying climatology isn’t wrong, why are you saying it’s wrong?

Climatology’s system-gain equation, using reference and equilibrium temperature changes rather than absolute temperatures, is a correct equation as far as it goes. It is the difference between two instances of the absolute-value equation. But climatology erroneously limits its definition of feedback to responding only to changes, effectively subtracting out the sunshine. Feedback also responds to the absolute input signal, making it easy to find the system-gain factor and thus equilibrium sensitivity.

But you’re starting your calculation from zero Kelvin. You’re literally Switching On The Sun.

No. We have looked out of the window and noticed that the Sun is already Switched On and shining (well, not in Scotland, obviously, but everywhere else). Our calculation starts not with zero Kelvin but with the reference temperature of 254.8 K in 1850. The feedback processes in the climate respond to that temperature and not to any other or lesser temperature. They neither know nor care whether or to what extent they may have existed at any other temperature. They neither know nor care how they might have responded to some other temperature. They respond as they are, and they respond only to the temperature they find. We know the magnitude of the response they engender, for we can measure the equilibrium temperature, calculate the reference temperature and deduct the latter from the former.

But the Earth exhibits bistability. It can have two different temperatures for the same forcing.

Given the variability of the climate, Earth can have several temperatures for a single forcing. But not in the short industrial era. The system-gain factors for 1850 and 2011 are close to identical, indicating that at present there is insufficient inherent instability to disturb our result.

The scrambled account of feedback math in Hansen (1984)

But the feedback system-gain equation is not appropriate for climate sensitivity studies.

Interesting how the true-believers abandon their “settled science” when it suits them. The system-gain equation is mentioned in Hansen (1984), Schlesinger (1985), Bony (2006), IPCC (2007, p. 631 fn.), Bates (2007, 2016), Roe (2009), Monckton of Brenchley (2015ab), etc., etc., etc. If feedback math were not applicable to the climate, there would be no excuse for trying to pretend that equilibrium sensitivity to doubled CO2 is anything like 2.1-4.7 K, still less the values up to 10 K in some extremist papers. As it is, all such values are nonsense anyway, as we have formally proven.

But Wikipedia shows the following feedback-loop block diagram, which proves that feedback only responds to changes, or “disturbances”, in the input signal and not to the whole signal –

A feedback loop diagram from the world’s chief source of fake news

Our professor of control theory trumps the CreepyMedia diagram with the following diagram. And behold, the reference or input signal is at left; the perturbations (in pink) descend from above to their respective summative nodes; and the feedback block (here labeled the “output transducer”) acts on all of these inputs, specifically including the reference signal –

Mainstream block diagram for a control feedback loop

But the models don’t use the system-gain equation. They don’t even use the concept of feedback.

No, they don’t (not these days, at any rate, though until recently their outputs were fed into the system-gain equation to derive equilibrium sensitivity). However, we took some care to calibrate the models’ predicted [2.1, 4.7] K interval of Charney sensitivities using the system-gain equation, which produced exactly the same interval based on the excessive feedback factors derivable from Vial+ 2013. The system-gain equation is, therefore, directly relevant.

The models try valiantly to simulate the multitudinous microphysical processes, many of them at sub-grid scale, that give rise to feedback, as well as the complex interactions between them. But that is a highly uncertain and error-prone method – and even more prone to abuse by artful tweaking than the temperature records themselves: see e.g. Steffen+ (2018) for a deplorable recent example. Besides, no feedback can be quantified or distinguished from other feedbacks or even from the forcings that triggered it by any measurement or observation. The uncertainties are just too many and too large.

Our far simpler and more reliable black-box method proves that the models have, unsurprisingly, failed in their impossible task. By correcting climatology’s error of definition, we have cut the Gordian knot and found the correct equilibrium sensitivity directly and with very little uncertainty.

But you talk of reference and equilibrium temperature when radiative fluxes drive the climate.

Well, they’re called “temperature feedbacks”, denominated in Watts per square meter per Kelvin of the temperature that induced them. They are diagnosed from the models and summed. The feedback sum is multiplied by the Planck sensitivity parameter in Kelvin per Watt per square meter to give the feedback factor. Because the feedback factor is unitless, it makes no difference whether the loop calculation is done in flux densities or temperatures. Besides, our method requires no knowledge of individual feedbacks at all. We find the reference and equilibrium temperatures, whereupon the ratio of equilibrium to reference temperature is the feedback system-gain factor. Anyway, if you want to be pedantic it’s radiative flux densities in Watts per square meter, not fluxes in Watts, that are relevant.
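For what it is worth, the bookkeeping just described is easily sketched. The feedback sum and Planck parameter below are illustrative round numbers of the order commonly cited in the feedback literature, assumptions for the sketch rather than figures from this article (only the 1.04 K reference sensitivity is taken from the text):

```python
# Convert a diagnosed feedback sum (W m^-2 K^-1) to a unitless feedback factor,
# and thence to a system-gain factor and an equilibrium sensitivity.
# feedback_sum and planck are illustrative assumptions, not figures from the article.
feedback_sum = 2.2       # sum of individual feedbacks, W m^-2 K^-1 (assumed)
planck = 0.3125          # Planck sensitivity parameter, K W^-1 m^2 (~1/3.2; assumed)
ref_sensitivity = 1.04   # reference sensitivity to doubled CO2, K (as in the text)

f = feedback_sum * planck    # the units cancel: f is dimensionless
A = 1.0 / (1.0 - f)          # system-gain factor
ecs = ref_sensitivity * A    # equilibrium sensitivity, K

print(round(f, 4), round(A, 2), round(ecs, 2))   # ~0.6875  ~3.2  ~3.33
```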

Ten handsome unpersons

But you’re not a scientist.

My co-authors include Professors of climatology, applied control theory and statistics. We also have an expert on the global electricity industry, a doctor of science from MIT, an environmental consultant, an award-winning solar astrophysicist, a nuclear engineer and two control engineers, to say nothing of our pre-submission reviewers, two of whom are the world’s most famous physicists.

But there’s a consensus of expert opinion. All those general-circulation model ensembles and scientific societies and intergovernmental agencies and governments just can’t be wrong.

Yes They Can. In suchlike bodies, totalitarianism prevails (though not for much longer). For them, the Party Line is all, and mightily profitable it is – at taxpayers’ and energy-users’ expense. But the trouble with adherence to the Party Line is that it is a narcotic substitute for independent, rational, scientific thought. The Party Line replaces the heady peril of mental exploration and the mounting excitement of the first glimmer of a discovery with a dull, passive, cringing, acquiescent uniformity.

Worse, since the totalitarians who have captured academe ruthlessly enforce the Party Line, they deter terrorized scientists from asking the very questions it is the purpose of scientists to ask. It is no accident that most of my distinguished co-authors now live and move and have their being furth of the dismal scientific establishment of today: for if we were prisoners of that grim, cheerless, regimented, unthinking, inflexible, totalitarian mindset we should not have been free to think the thinkworthy. For these malevolent entities, and the paid or unpaid trolls who mindlessly support them in comments here regardless of the objective truth, punish everyone who dares to think what is to them the utterly unthinkable and then to utter the utterly unutterable. Several of my co-authors have suffered at their hands. Nevertheless, we remain unbowed.

But no one agrees with you.

Here is one of many supportive emails we have had. I get ten supportive emails for every whinger –

“Hi and congratulations on what I believe may have the potential to put the final nail in the coffin of the anthropogenic global warming hysteria. The work of you and your team is very promising and I cannot wait to see how alarmists will go about to attack this. Bring out the popcorn, as we say. The application of feedback theory in this case is simple, physics-wise elegant, mathematically beautiful, and understandable to a wider audience. I am especially excited about how the equation grasps the whole feedback problem without having to deal with all the impossible little details of trying to distinguish which gas does what and without relying on hopelessly complex computer models. And that is why I think it will stick. I will be following this eagerly in the coming months and years and I am considering going to Porto [on 7-8 September: portoconference2018.org: b there or b2] to catch all the latest from others as well, even though your work is the current crown jewel of the anthropogenic global warming debate so far.”

But global temperature is rising as originally predicted.

No, it isn’t –

Our prediction is close to reality: official climatology’s predictions are far out

But you have averaged the two global-temperature datasets that show the least global warming.

Yes, we have. The other three longest-standing datasets – RSS, NOAA and GISS – have all been tampered with to such an extent that they are no longer reliable. They are a waste of taxpayers’ money. We consider the UAH and HadCRUT4 datasets to be less unreliable. IPCC uses the HadCRUT dataset as its normative record. Our result explains why the pause of 18 years 9 months in global warming occurred. Because the underlying anthropogenic warming rate is so small, when natural processes act to reduce warming it is possible for long periods without warming to occur. NOAA’s State of the Climate report in 2008 admitted that if there were no warming for 15 years or more the discrepancy between the models and reality would be significant. It is indeed significant, and now we know why it occurs.

But …

But me no buts. Here’s the end of the global warming scam in a single slide –

The tumult and the shouting dies: The captains and the kings depart …

Lo, all their pomp of yesterday Is one with Nineveh and Tyre

UzUrBrain
August 15, 2018 11:24 am

Finally, an explanation that agrees with my four years of engineering math and 20+ years of designing, developing, aligning, calibrating and tuning process control systems for nuclear power plants. And my knowledge of process control theory was good enough that the final system needed minimal tuning during the startup phase of the plant. From all of the false theories I have read about this feedback/forcing BS associated with climate change, I was beginning to think I needed a refresher course in process control systems. I do not think any man has designed a process control system to date that can keep a nuclear power plant, spacecraft, airplane or even an autonomous automobile as stable as the present inherent climate control system for global temperature.

David L. Hagen
August 15, 2018 1:09 pm

Usurbrain: Yes, Earth’s climate is operating well.
God originally stated it to be “very good”. Genesis 1:31
https://biblehub.com/genesis/1-31.htm

Theo
August 15, 2018 1:25 pm

Earth’s climate has not always been so hospitable to life over the past four billion years. There have been many mass extinctions, and Snowball Earth episodes lasting hundreds of millions of years, in which average global temperature may have dropped to around -50 degrees C.

JonScott
August 16, 2018 10:22 pm

Extinction is a NATURAL part of the system. Nothing lasts for ever and the natural demise of one genus creates space for another. Irrespective of the actual cause of the demise. Time to send the Canutes back to school because they obviously were not paying attention the first time.

Greg
August 15, 2018 1:15 pm

The problem with this lies here:

Our calculation starts not with zero Kelvin but with the reference temperature of 254.8 K in 1850.

The so-called reference temperature – whatever the Earth’s temperature would be without GHG feedbacks – is unknowable. The only way is to guesstimate what the Earth’s albedo would be in that state, or to know how strong the GHG warming is and work backwards.

It’s circular logic.

UzUrBrain
August 15, 2018 1:28 pm

Why is it not possible to obtain a fairly good approximation from the temperature of Earth’s Moon and other moons? Especially when I hear celebrated “scientists” claiming they can determine whether there are humanoids on newly discovered planets from the presence of GHGs in their atmospheres.

Theo
August 15, 2018 1:44 pm

Please state which scientists have claimed that they can determine the presence of “humanoids” on other planets based upon GHGs in their atmospheres. This claim has escaped my notice.

Thanks!

UzUrBrain
August 15, 2018 2:37 pm

It was on a Discovery Channel documentary about the discovery of planets a few years back (not more than 4 or 5). His logic absolutely flabbergasted me, as I was responsible for calibrating instruments to NBS-traceable requirements while in the military and understood the impossibility of this. His name and university were in the credits, so I pulled up the university web page and wrote him a respectful letter asking how this was possible, in that you are dealing with single-digit parts per million in a device that does not have that accuracy, and all I got back was gobbledygook professor-speak, an I-am-smarter-than-you lecture. Millions have seen it, so millions believe it. I do not remember the title, only that it dealt with the discovery of exoplanets early on, when it was proven and verified.

Theo
August 15, 2018 3:24 pm

Thanks. Couldn’t find it on YouTube.

Yes, I’m still in the evil clutches of Alphabet.

August 15, 2018 2:32 pm

In response to Usurbrain, we have used a nifty wrinkle in geometric number theory (the spherical-surface areas of equialtitudinal spherical segments are equal) to derive the mean dayside temperature of the Moon: it is around 306 K. On the nightside, it is about 94 K. Mean lunar temperature is about 200 K, not the 270 K imagined by NASA, which has failed to allow for Hoelder’s inequalities between integrals.

Using a similar technique on Earth, the mean dayside temperature is 275 K and the nightside temperature around 246 K, based on a study of Earthlike aquaplanets by Merlis+ (2010), after allowing for the lower albedo in Merlis (0.38, against Lacis’ 0.42). Thus, the mean terrestrial surface temperature in the absence of noncondensing greenhouse gases is about 260.5 K, and the reference temperature after adding 11.5 K of pre-industrial non-condensing greenhouse-gas warming is about 272 K.

This would imply a system-gain factor of 287.55 / 271.95, or 1.06, implying a Charney sensitivity of 1.10 K. We noted this result in passing in our paper, but adhered to climatology’s erroneous method that does not allow for Hoelder’s inequalities, so as to obtain the system-gain factor 1.13, implying a Charney sensitivity of 1.17 K. Hope this helps.

Richard G.
August 16, 2018 1:16 pm

Dear MoB, I greatly appreciate your efforts to de-fuse the climate bomb. Your comment above illustrates my main objection to the foundation of modern climate science. It is founded on several logical fallacies, primarily False Premise and Misplaced Precision.

To use your lunar temperature model as an example, it describes a day side temperature, a night side temperature, and a mean temperature as if these are real and not simply a mathematical abstraction. The mean exists nowhere and equilibrium between day and night exists nowhere.

Likewise the model for the Earth. There is no equilibrium, as the system is continually chasing its tail, seeking an equilibrium that stays out of reach through the diurnal and seasonal cycles. This is actually perpetual dis-equilibrium, and it pervades the entire atmospheric/hydrospheric system. I therefore propose, in the spirit of truth in advertising, that the term Equilibrium Temperature be henceforth changed to Disequilibrium Temperature.

It is also patently obvious after the Climategate capers that the HadCRUT database is absolutely corrupted by errors. It follows that the Reference Temperature is actually a wild-ass guess. Thus the derivation of the term Climathemajics.

This from a biologist who takes his own local temperature readings from an old-fashioned mercury/glass high/low recording thermometer calibrated in two-degree increments, not tenths or hundredths, and understands that a reading of 55 1/2 degrees is actually translated from: “Hmm, that looks like less than 56 and way more than 54, so let’s say, oh, call it 55 1/2.”

Thanks for so publicly fighting the good fight.

August 16, 2018 5:15 pm

In response to Richard G, there is so much wrong with official climatology’s methods and data that one hardly knows where to start. However, we took the simple approach of accepting ad argumentum all of official climatology except what we could disprove, and then demonstrating what we could disprove.

The influence of the sunshine, once it is taken into account in feedback calculations, is such as to overwhelm small differences in estimates of surface temperature etc. The value of our method is that, within reasonable limits, one can vary all the input parameters without much affecting the final answer.

Kristi Silber
August 16, 2018 2:05 pm

I’m really trying to understand the background of your calculations, and why you believe them to be better than others.

So you are deriving the lunar temperature through mathematics, and believe that is a better estimate than measuring it? Do you take into account the temperature of the craters?

You use a single study of “Earthlike aquaplanets,” compensating for nothing but different albedo, to estimate pre-industrial terrestrial surface temperature? Am I understanding that correctly?

Can you define Hoelder’s inequalities in layman’s terms, and why it should be applied to estimation of climate sensitivity?

Why do you call it “Charney sensitivity”? Was he not the one who first came up with the 1.5-4 C range – and shouldn’t THAT be called the “Charney sensitivity,” if anything?

Where in your calculations of system gain do you account for the heat absorbed by the oceans?

August 16, 2018 5:12 pm

Ms Silber asks several interesting questions.

First, the lunar temperature. Unfortunately, we were unable to find anything in the Lunar Diviner mission’s papers that stated the lunar global mean surface temperature. Accordingly, we were compelled to compute it. On the dayside, we performed latitudinal calculations using the fundamental equation of radiative transfer and integrated, to find a mean temperature of 306 K. On the nightside, we used the Lunar Diviner data (for the nightside temperature varies little, and it is relatively easy to deduce the mean nightside temperature). That was about 94 K. So the lunar mean surface temperature is about 200 K.

We did a similar dayside calculation for the Earth, again calculating the mean temperature at each latitude and then integrating, using a useful device from geometric number theory that reduced the problem from a double integral (lat. and long.) to a single integral (for the spherical-surface areas of equialtitudinal spherical segments are equal). We used Merlis only to gain an idea of the nightside temperature, which depends far more on the heat capacity of the first 7 m of the ocean, treated as a slab, than on anything else. On Earth, we found that the emission temperature in the absence of greenhouse gases would not be the 243.25 K obtainable by a single global application of the fundamental equation of radiative transfer but more like 260.4 K. This consideration would reduce the system-gain factor from our 1.13 to about 1.06, in turn reducing Charney sensitivity from 1.17 to 1.10 K. We mentioned this result only in passing, as a consideration worthy of further work and, eventually, of a separate paper.

As to Hoelder’s inequalities between integrals, note that on the Moon a single use of the fundamental equation of radiative transfer suggests a lunar mean surface temperature of 270 K. However, it is in fact about 200 K, because the fundamental equation of radiative transfer is a fourth-power relation and the sum of a series of fourth powers differs from the fourth power of the sum of a series.
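The fourth-power effect is easy to reproduce. The sketch below assumes a solar constant of 1361 W m⁻² and a lunar Bond albedo of about 0.11 (neither figure is stated above); it treats the dayside as a stack of equal-area spherical segments whose temperature follows the fourth root of the cosine of the solar zenith angle, and takes the 94 K nightside mean quoted from Diviner:

```python
# Dayside mean lunar temperature by averaging over many equal-area spherical
# segments, versus a single global application of the radiative-transfer equation.
# S and ALBEDO are assumed values, not figures stated in the comment above.
SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0                # solar constant at 1 AU, W m^-2 (assumption)
ALBEDO = 0.11             # lunar Bond albedo (assumption)

t_subsolar = (S * (1.0 - ALBEDO) / SIGMA) ** 0.25   # ~382 K at the subsolar point

# Equal-height spherical segments have equal area (Archimedes' hat-box theorem),
# so averaging over mu = cos(zenith angle), uniform on (0, 1], is area-weighted.
n = 200_000
dayside_mean = sum(t_subsolar * ((k + 0.5) / n) ** 0.25 for k in range(n)) / n

nightside_mean = 94.0     # mean of the Diviner nightside data, as quoted above
global_mean = (dayside_mean + nightside_mean) / 2.0

# Naive single-shot estimate: spread the absorbed flux over the whole sphere
# first, then take one fourth root (the 270 K figure attributed to NASA above).
naive = (S * (1.0 - ALBEDO) / (4.0 * SIGMA)) ** 0.25

print(round(dayside_mean), round(global_mean), round(naive))   # ~306  ~200  ~270
```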

On Earth, owing to the formidable heat capacity of the ocean, the error is in the opposite direction: the temperature correctly calculated as the sum of a series of fourth powers (one for each spherical segment) is greater than the incorrectly-calculated single global value obtained from that equation.

Equilibrium sensitivity to doubled CO2 concentration is known to climatologists as “Charney sensitivity”. It is the standard metric or yardstick in equilibrium-sensitivity studies.

In our system-gain calculation we do not need to make any allowance for any individual feedback. All we need to know is the reference and equilibrium temperature for any chosen date for which respectable data are available. The system-gain factor is then the ratio of the latter to the former.

To allow for the possibility of time-delay occasioned by the heat capacity of the oceans, we performed not one but two calculations – one for 1850 and the other for 2011. The system-gain factor was the same in both years, at just 1.13 (or 1.50 if one uses the delta system-gain equation rather than the absolute-value equation). Therefore, time delay is not making much difference.

I do hope that these answers help.

RACookPE1978
Editor
August 16, 2018 8:12 pm

Monckton of Brenchley

First, the lunar temperature. Unfortunately, we were unable to find anything in the Lunar Diviner mission’s papers that stated the lunar global mean surface temperature. Accordingly, we were compelled to compute it. On the dayside, we performed latitudinal calculations using the fundamental equation of radiative transfer and integrated, to find a mean temperature of 306 K. On the nightside, we used the Lunar Diviner data (for the nightside temperature varies little, and it is relatively easy to deduce the mean nightside temperature). That was about 94 K. So the lunar mean surface temperature is about 200 K.

But does this not fall to the “insolated, isolated, insulated flat grey body in space” fallacy?
(A grey flat body is assumed uniformly insolated at a uniform rate in a perfect vacuum, and must lose enough energy from one side of the flat body to come in thermal equilibrium with the inbound radiation.)

Should not the moon be calculated as a (near-uniform) sphere illuminated on one side, rotating every 28 days at the Earth’s actual orbital distance and losing energy from its entire surface?
Yes, thermal near-equilibrium can at best only be assumed if the weight, density, thermal mass and thermal conduction of the first meter of the moon’s surface are approximated/estimated/guessed to be uniform. But the result would be 10-degree bands that can be proved usably correct by the Apollo instruments left on the surface.
If anything, the vacuum of the moon and the slow rotation of the spherical surface would make a thermal radiation model of the moon easy for any of the GCM models to work with.

August 17, 2018 9:02 am

In response to Mr Cook, we took ten billion spherical segments on the dayside hemisphere and derived the temperature of each by a calculation based on the zenith angle at the centerline of each segment. Then we took the average. Answer: 306 K. For the nightside, we averaged the Diviner measurements. Answer: 94 K. The mean of the two gives about 200 K.
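That dayside integration can be reproduced approximately in a few lines. The solar constant and lunar Bond albedo below are my own assumed values, not figures from Lord Monckton’s team, and a simple zenith-angle grid stands in for the ten billion segments:

```python
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0               # solar constant at 1 AU, W m^-2 (assumed)
ALBEDO = 0.11            # lunar Bond albedo (assumed)

# Local radiative-equilibrium temperature at zenith angle theta:
# T(theta) = [S (1 - albedo) cos(theta) / sigma]^(1/4)
T_subsolar = (S * (1 - ALBEDO) / SIGMA) ** 0.25

theta = np.linspace(0.0, np.pi / 2, 100_000, endpoint=False)
T = T_subsolar * np.cos(theta) ** 0.25
weights = np.sin(theta)   # surface area of each annulus about the subsolar point

T_day = np.average(T, weights=weights)   # area-weighted dayside mean, ~306 K
T_night = 94.0                           # Diviner nightside mean, from the comment
print(round((T_day + T_night) / 2))      # lunar mean surface temperature, ~200 K
```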

RACookPE1978
Editor
August 17, 2018 9:04 am

Thank you.

richard verney
August 17, 2018 4:23 pm

Did you account for the varying albedo? If so, how was the albedo assessed at each point?


ThinkingScientist
August 17, 2018 8:14 am

Hi Christopher,

I don’t think my comments detract from your overall theory and presentation, but I must again take issue with the day, night and average temperatures of the moon that you are using. In doing so I will try and use clear language so others can perhaps see where this is going in terms of definitions.

The Diviner dataset provides detailed lunar temperature profiles from pole-to-pole and every 1 hour in longitude. I have taken the published Diviner dataset profiles and digitized them. I have then performed an integration over the lunar surface and also looked at running 12-hour windows to find the minimum and maximum hemisphere averages. The results of various ways of computing the average are:

1. Entire lunar surface and naively averaging the temperatures gives 180 K
2. Entire lunar surface averaged as T^4 gives 253 K
3. Entire lunar surface averaged as T^4 and weighted by area gives 270 K

To obtain an “average” which relates to the mean energy flux we must obviously use method (3). This agrees with NASA.

The coldest hemisphere lunar surface average (T^4 with area weights) gives 103 K
The hottest hemisphere lunar surface average (T^4 with area weights) gives 320 K

(The two hemispheres do not overlap in the above calculation – just to be clear!)

The naïve average of the hot and cold hemispheres is (103+320)/2 = 211 K
The T^4 average of these two numbers is 270 K – the same answer as integrating over the whole surface, exactly as we would expect.

These numbers and calculations are not in doubt, but perhaps their meaning and utility are in dispute. If we want to state a single average temperature that summarises the average flux being radiated by the whole moon at any time, the correct answer is 270 K. This is rooted in physics. If we want to state what the average thermometer reading on the surface of the moon would be, we would use a number closer to 211 K, or your 200 K.

The T^4*area calculation is the one that relates the physical quantity temperature to the physical quantity energy flux in a meaningful way.
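The distinction between the two averages is easy to verify numerically. A sketch using the two hemisphere figures above (the full calculation would integrate the digitized Diviner profiles with area weights):

```python
# Naive (arithmetic) mean of temperatures vs the flux-preserving T^4 mean.
T_cold, T_hot = 103.0, 320.0   # hemisphere averages from the Diviner data, K

naive = (T_cold + T_hot) / 2                       # thermometer-style average
flux_mean = ((T_cold**4 + T_hot**4) / 2) ** 0.25   # average that conserves radiated flux

print(round(naive, 1), round(flux_mean))  # → 211.5 270
```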

Hope that helps everyone. I don’t think this really has any direct relevance to your theory.

For others wanting to know about models with different physical properties of the atmosphere excluded, I would suggest they refer back to Manabe & Strickler (1964), “Thermal Equilibrium of the Atmosphere with a Convective Adjustment”, J. Atmospheric Sci. 21, pp. 361-385. There you will discover that the temperature with GHGs but without weather would be +40 to +60 °C, contradicting the GHG = +33 meme completely. It’s the big effects of evaporation and convection that keep us cool.

Regards,

TS

Editor
August 17, 2018 8:36 am

TS, I fear that both Lord Monckton and you (along with Manabe & Strickler) have greatly underestimated the complexity of estimating the earth’s blackbody temperature. See here for Dr. Robert Brown’s clear examination of the problems involved in that calculation.

w.

August 17, 2018 9:16 am

Mr Eschenbach is quite right that the problem of deriving an emission temperature in the absence (and still more in the presence) of greenhouse gases is not easy. Our approach has been to accept official climatology’s method ad argumentum, but to note that if one performs one of the earliest steps recommended by the ever-interesting Dr Brown – namely a T^4 calculation at each point on the sphere – one comes closer to the true value both on the Moon (where the mean temperature is some 70 K less than using official climatology’s single, naive calculation) and on the Earth (where the error appears to be 10-20 K in the opposite direction). Our conclusion is that further work needs to be done on this question, which has little implication for equilibrium sensitivity obtained by our method (it reduces our 1.17 K mid-range estimate to about 1.10 K) but probably has major implications for official climatology’s method.

richard verney
August 17, 2018 4:38 pm

In the case of the Earth, the problem is more acute since the Earth is nothing like a blackbody with all the thermal inertia and lags.

Further, most energy is not absorbed at the surface. The oceans, which cover ~70% of the planet, absorb energy several metres below the surface, and some of this is carried to depth by oceanic overturning; in the atmosphere, energy is absorbed at cloud height and across the height profile of water vapour; and in tropical rain forests little sunlight reaches the surface, since solar energy is absorbed at canopy height and converted to power photosynthesis and tree growth.

Yet further, energy absorbed in one place is often reradiated in another (e.g., as a consequence of oceanic currents), and there is the problem caused by latent energy in evaporation, ice melt, etc.

This is a 3D system where the incoming watts are simply not absorbed at one uniform height.

August 18, 2018 6:19 pm

Mr Verney rightly points out some of the numerous complexities in reaching an estimate of global mean emission temperature. However, these complexities have a very small influence compared with the very large influence of official climatology’s failure to make allowance for Hölder’s inequality between integrals in arriving at its estimates of emission temperature.

August 15, 2018 2:23 pm

In response to Greg, the values for albedo and reference temperature in the absence of non-condensing greenhouse gases were derived from a GCM by Lacis+ (2010). Assuming today’s insolation and Lacis’ albedo, the emission temperature in the absence of those gases was derived using the fundamental equation of radiative transfer in the usual way.

In practice, it makes very little difference what the reference temperature is: it can vary quite widely without much influencing equilibrium sensitivities.

Kristi Silber
August 16, 2018 2:39 pm

Why do you use Lacis+ GCM, rather than another?

“In practice, it makes very little difference what the reference temperature is: it can vary quite widely without much influencing equilibrium sensitivities.”

What does “quite widely” mean here? Seems strange that your calculations are quite insensitive to actual conditions.

How did you pick the years (just two!) to estimate system gain?

“Lacis misattributed to the non-condensing greenhouse gases the large feedback response to the emission temperature from the Sun.”

It seems to me that once you bring the Sun into the equation, you are creating an open loop. The Sun’s emission is independent of any feedbacks, and that invalidates control theory.

If you are looking at the top-of-atmosphere energy balance, the input is the sun’s energy, and the output is the radiation from the planet into space. This cannot be described by control theory.

If you are looking at the temperature on the surface of the Earth, the actual emission temperature of the Sun is to some extent irrelevant since its energy is greatly modified by the time it hits the surface, and is dependent on some of the same controls that come into play in the feedbacks (clouds, albedo, etc.).

Where am I wrong here? Really, I want to know and understand, and I’d appreciate it if any response is in layman’s terms and refers to actual climate parameters rather than engineering/electronics models.

August 16, 2018 4:57 pm

Ms Silber asks some sensible questions. I am happy to answer them.

We used Lacis+ 2010 as our starting point at the suggestion of Dr Mojib Latif, whom I had the pleasure of meeting at a climate conference organized by the City Government of Moscow last year. The virtue of using Lacis is that the co-authors are known to take a rather extreme position on global warming: few, therefore, would argue with their findings, though we are able to demonstrate that the feedback factor they imagine is absurdly high.

The reason why reference temperature R(1) may vary quite widely is that the influence of the Sun overwhelms the comparatively small influence from greenhouse gases. We allowed R(0), the emission temperature in the absence of greenhouse gases, to vary by 5% up or down on Lacis’ 243.25 K. We then conducted a 30,000-trial Monte Carlo simulation and derived the uncertainty interval of about 0.08 K either side of our mid-range estimate of 1.17 K equilibrium sensitivity to doubled CO2.
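The shape of such a Monte Carlo run can be sketched as follows. The propagation model here is my own assumption (perturb the emission temperature, recompute the system-gain factor from the 1850 equilibrium temperature, then scale the 1.04 K reference sensitivity); the team’s actual procedure may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 30_000   # number of Monte Carlo trials

# Perturb the no-GHG emission temperature by up to +/-5% (uniform, assumed).
R0 = 243.25 * rng.uniform(0.95, 1.05, N)
R = R0 + 11.5            # K: add reference warming from pre-industrial non-condensing GHGs
A = 287.55 / R           # system-gain factor in each trial
ecs = A * 1.04           # equilibrium sensitivity = A x 1.04 K reference sensitivity

print(round(ecs.mean(), 2))   # mid-range estimate, ~1.17 K
```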

We selected 1850 as the start-point for our calculations because there had been little if any anthropogenic influence before that date and it was at that date that the first global-temperature measurement was conducted, albeit with an uncertainty of some 0.35 K either side of the mid-range estimate.

We selected 2011 as the end-point because that was the year to which IPCC and its contributors updated their data and methods in time for the most recent Assessment Report.

However, we also conducted an empirical campaign based on ten separate estimates of net anthropogenic forcing to various dates, four from IPCC’s reports and six from mainstream, peer-reviewed sources. In all cases the equilibrium sensitivity to doubled CO2 was found to be 1.17 K.

It is incorrect to state that including the emission temperature “creates an open loop”. It does no such thing. Build a test rig (or, if engineering is not your thing, just set up the Bode system-gain equation). Set the gain block to unity. Set the input signal to represent the 243.25 K emission temperature. Set the feedback block to any nonzero value. Measure the output. It is not 243.25 K. The entire difference between the input and output signal, where the gain block has been set to unity, must come from feedback. Case closed.
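For anyone without a test rig, the same experiment can be run on the general Bode system-gain equation E = μR / (1 − μf), which with the gain block μ set to unity reduces to the head post’s E = R(1 − f)⁻¹; a sketch:

```python
def equilibrium_temperature(R, f, mu=1.0):
    """Closed-loop output E = mu*R / (1 - mu*f): input R, feedback fraction f, gain mu."""
    return mu * R / (1 - mu * f)

R = 243.25   # K: emission temperature as the input signal
for f in (0.05, 0.10, 0.25):          # any nonzero feedback fraction
    E = equilibrium_temperature(R, f)
    print(f, round(E, 2))             # E exceeds R in every case, so the entire
                                      # difference is the feedback response to R
```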

If Ms Silber thinks control theory is inapplicable to climate, she makes our case for us a fortiori: for in that event there is no basis for imagining that temperature feedbacks operate. Control theory is feedback theory.

As to the complications caused by the multiplicity of forcings and feedbacks, including water vapor, albedo and cloud feedbacks, our black-box method does not need to take them individually into account. All we need to know is the reference and equilibrium temperature for a given year, whereupon the system-gain factor is simply the ratio of the latter to the former.

I do hope these answers help.

Editor
August 17, 2018 8:38 am

Monckton of Brenchley said:

We used Lacis+ 2010 as our starting point at the suggestion of Dr Mojib Latif, whom I had the pleasure of meeting at a climate conference organized by the City Government of Moscow last year.

What? Clear proof of Russian climate collusion, notify Robert Mueller’s team immediately!

w.

August 17, 2018 9:17 am

Guilty as charged! Where do I collect my money?

commieBob
August 15, 2018 2:12 pm

I’m not convinced that feedback analysis is appropriate in the first place.

What CM et al have done is to accept, for sake of argument, Hansen’s feedback analysis but to demand that it be done correctly. Absolutely brilliant.

The only way CO2 will produce catastrophic warming is if there is positive feedback. Take that away, as CM et al have done and there is no Catastrophic Anthropogenic Global Warming (CAGW).

Editor
August 15, 2018 2:19 pm

cB,

Pretty sure the feedback is still considered positive…it’s just much much smaller than “consensus” science thinks…

rip

commieBob
August 15, 2018 2:41 pm

… much much smaller … by around an order of magnitude: 0.08 vs 0.67 or 0.75.

Given the lack of precision and accuracy of the data, the feedback might as well be zero.

Editor
August 15, 2018 3:23 pm

Hahn. Agreed. Although, upon consideration, CM’s analysis seems to rest on all other things being equal. Just something to keep in mind, I guess. Not that I truly think insolation changes or orbital fluxes would impact this materially, but just sayin.

rip

August 17, 2018 9:19 am

Ripshin makes the fair point that we are not stating that we know so much about the climate that we are sure that Charney sensitivity is 1.17 K. We are saying that, accepting ad argumentum all of official climatology except what we can prove to be in error, and after correcting the error we can prove, Charney sensitivity is 1.17 K plus or minus 0.08 K, to 95.4% certainty.

Editor
August 17, 2018 12:12 pm

Understood and agreed!

And, I’ll note two things in passing.

First, I’m exceedingly grateful that you are narrow in your conclusions. This is how science (and, as an aside, the law) should be conducted. Conclusions (and verdicts) should be narrowly constructed, giving due consideration to the limitations that produced them (whether, in the case of science, it was an experiment, or in the case of law, the trial/arguments). Broad, sweeping conclusions are almost always filled with errors and fallacies. Your careful description of your conclusion, taking in the caveats, is much appreciated. That a significant portion of the “research” of an entire branch of science is, in one fell stroke, invalidated by such a simple and narrow conclusion as yours is as much a kudos to you and your team as it is (or should be) a censure of the ringleaders who’ve perpetuated this fallacy. [For some reason, I really wanted to write “farcical aquatic ceremony” there instead of “fallacy”: https://youtu.be/dt-a6sovg_k?t=1m41s ]

Secondly, I found your youtube presentation to be quite charming. You did a nice job, but I have to be frank here: I’m not sure I totally bought the cowboy hat and bolo combo. Once you started speaking, I still knew you were a villain (https://www.thecut.com/2017/01/why-so-many-movie-villains-have-british-accents.html).

At any rate, thanks again for keeping our community here updated and for the constant engagement to answer questions and comments.

Sincerely,

Brian Lindauer (rip)

August 17, 2018 2:50 pm

Mr Lindauer’s comments are most kind and helpful. He has entirely understood both the limitations and (precisely because of the limitations) the scope and power of our result.

As to the Stetson with its associated gear, that was presented to me by the Republican Party of Montana as a thank-you for making the keynote speech at a fund-raiser for Mr Trump before the Presidential election. I wore it at Camp Constitution because the sunlight was very bright and it provides better shade even than the excellent Stetson leather baseball cap that I bought in Cannes some years ago; I have recently had a couple of operations on my eyes, which are more sensitive than usual at present. Also, the schoolkids who were my audience liked it. It is, of course, incongruous when talking of scientific matters, but I shall be giving a highly-focused 20-minute presentation on our result at a high-level scientific conference in Porto next month. That will be professionally filmed and posted on YouTube, and I shall be soberly suited and looking serious.

UzUrBrain
August 15, 2018 4:33 pm

Math tells me that positive feedback has to exceed all negative feedback to go into runaway warming. I do not think there is enough for that, and geological history also shows no runaway. With levels of CO2 at 7,000 ppm, the earth’s temperature was only some 15 °C higher than now. Thus there is a negative feedback. My guess is H2O.

old construction worker
August 15, 2018 6:01 pm

I believe you are right. The water cycle is a net negative feedback. We live in a swamp-cooler atmosphere.

August 15, 2018 2:33 pm

Many thanks to Commiebob for his comments. It does seem clear from our analysis that net-positive feedback is acting, but that the effect is small.

CheshireRed
August 16, 2018 4:08 am

Always been my position. WHERE is this highly positive feedback we hear of re CO2 – H2O? It seems to me ‘they’ recognised there’s a trifling amount of human CO2 and any GHE it produces (?) is overwhelmed by natural H2O, so they had to shoe-horn in an ‘explanation’ for it. I’ve never once bought into it.

August 15, 2018 2:20 pm

I am very grateful to usurbrain for his kind comment.

If anyone would like a simple account for high-school students, go to:

HotScot
August 15, 2018 4:36 pm

Monckton of Brenchley

Chris, that blew me away.

Whilst I barely understood a word, nor, I suspect, did the attendees, I understand why you had to go into the detail you did, for the benefit of your sceptics.

Nor should there be any doubt: the sceptics’ boot is now on the other foot.

We ‘sceptics’ are in the ascendancy and are rapidly becoming the mainstream of climate change opinion. It is a bull market for us, invest!

No one believes in the concept of Catastrophic Anthropogenic Climate Change any longer other than the deluded hardliners, who are being exposed as totalitarian elitists determined to use CAGW to further their despicable ends, change the political world order and impose global governance.

I’ll write to the Swiss Embassy in the UK irrespective of my late submission and I await their reply with interest.

And when I retire back to Scotland in the next 4 or 5 years, I will turn up on your doorstep with a bottle of our finest to toast your good health, even if you’re not there.

Lang may yer lum reek Chris.

Thank you,

HotScot.

August 15, 2018 4:54 pm

Most grateful to HotScot for his very kind comment, and I look forward to sharing a dram with him one day. The math was a bit too much for the high-school students, but the value of spelling it out is that the far larger audience on YouTube can watch the presentation and get some idea of the actually quite simple result.

If we are right, this is the end of the climate scare, which is no doubt why the IPCC has not complied with its own error-reporting protocol to the extent of acknowledging my report of its error.

old construction worker
August 15, 2018 6:06 pm

‘If we are right, this is the end of the climate scare.’ Sad truth: it won’t be. As long as some people want Governance X, CO2 will be the villain.

August 15, 2018 9:00 pm

Well, I am not a quitter. If we have succeeded in demonstrating a major error in official climatology’s math, and if we are right that in consequence of that error the equilibrium sensitivity to doubled CO2 is of order 1.2 K rather than 3.4 K, then that is indeed the end of the scare. One should not underestimate the power of a mathematical proof.

hanelyp
August 15, 2018 9:19 pm

I expect we will not see the final end of the current climate hysteria until after they roll out the next hysteria to blame on modern industry and use as an excuse for a power grab.

AGW is not Science
August 16, 2018 9:52 am

Sad, but true. They have already gone from “global cooling” to “global warming,” in BOTH cases blaming human industry and in BOTH cases proposing the same “solution” to the non-existent “problem” – limits/controls on energy use. The fact is what they’re after has always been control of energy use, through which they can gain control of *everything.*

Steve O
August 16, 2018 5:27 am

The True Believers have stated their positions so forcefully, for so long, that they are unable to reverse themselves. They will take their positions to the grave. How could they stand the humiliation of admitting they were wrong? Their brains won’t allow the possibility.

August 16, 2018 8:13 am

Steve O is right that the true-believers in the New Religion – or, rather, the New Superstition – are not going to be willing to abandon the Party Line without a struggle. However, our result does make it quite plain that equilibrium sensitivity is between one-third and one-half of their mid-range predictions – enough of a reduction to bring the global-warming scare to an end.

The first stage will be to see whether we can get our paper past peer review. There will be a lot of snapping and snarling by true-believing reviewers – indeed, there already has been – but, if the generally feeble opposition to our result from the true-believers here is anything to go by, there is nothing much for us to worry about.

Once our paper has been peer-reviewed and published, it will be up to the wider scientific establishment to see if it can find any significant holes in our argument. That may prove a great deal more difficult than some of them may think.

AGW is not Science
August 16, 2018 9:56 am

LMOB, I have much faith in the work of you and your team, but unfortunately little faith that the “journal” gatekeepers will allow your work to be published, because they prefer to refuse to publish anything that would kill the “golden goose.”

Reply to  AGW is not Science
August 16, 2018 4:43 pm

In response to AGW is not Science, we know it will be difficult to persuade the journals that the game is up and the scare is over, but there will come a point where, unless they can produce valid reasons to reject our actually quite simple argument, they will have to publish or face prosecution for fraud.

Ian Magness
August 17, 2018 12:40 am

You and me both Scottie!
I’m quite partial to older versions of Bunnahabhain.

sycomputing
August 16, 2018 8:00 am

“Likewise, Science has as its end and object the truth in the physical world…Truth is the objective of science, and objective Truth is the objective of Science.”

Wonderful!

August 17, 2018 9:21 am

Thank you, sycomputing, for having said how much you enjoyed my little excursus into the philosophy of science and of religion. For what is the scientist but a seeker after truth, as al-Haytham used to say.

kevinK
August 15, 2018 5:25 pm

“I do not think any man has designed a process control system to date that can keep a Nuclear power plant, Spacecraft, Airplane, even autonomous automobile as stable as the present inherent climate control system for global temperature”

With respect to your experience with control systems, the climate is very stable simply because of the huge thermal mass of the Oceans (caused by the huge volume of the Oceans).

No feedback(s) necessary (or present). It takes a very long time for the temperature of the Ocean to change, and then the other things that respond faster (CO2, polar ice) simply go along for the ride.

Cheers, KevinK.

August 15, 2018 9:01 pm

KevinK appears to imagine that there are no feedbacks present in the climate. However, it is readily demonstrable that there are. It has been demonstrated. A mere assertion to the contrary, unsupported by any evidence, does not constitute an effective refutation.

Meh
August 15, 2018 11:29 am

I understand very little of this, but that prediction-observation graph looks a bit dishonest to me, seeing how it misses the last 6 years.

Latitude
August 15, 2018 11:50 am

but it doesn’t miss all the years the IPCC, models, etc were wrong…and that’s the point

Theo
August 15, 2018 11:54 am

Not dishonest. Just reusing an old graphic.

Extending the graph to July 2018 would only slightly raise the angle of the blue line. It still wouldn’t make it up to the lower MoE line, ie into the IPCC’s yellow prediction zone.

August 15, 2018 2:37 pm

Theo is correct. It is only if one uses the much-tampered-with GISS data to 2018 that the trend-line barely makes it into the very bottom of IPCC’s prediction region.

Robert W Turner
August 15, 2018 1:40 pm

The 1998 El Nino raised the global average temperature above the model predictions, the 2015/16 El Nino raised it to the average of the models (and they cheered that they were accurately modelling global average temp), but since then it has cooled whereas the model predictions continue to warm. They will be lucky if the next major El Nino reaches their lowest forecast models.

August 15, 2018 2:36 pm

Mr Turner makes an excellent point. It does seem clear that a large fraction of the observed warming in most datasets arises from adjustments (whether legitimate or otherwise).

August 15, 2018 2:35 pm

In response to Meh, we shall of course update our analysis once up-to-date data from IPCC are to hand: but we chose 2011 as our end date because that was the date to which IPCC had derived its predictions and data.

However, we also ran an empirical campaign studying ten distinct estimates of net anthropogenic forcing over various periods, together with the observed industrial-era warming since 1850 for each period. In every case, the Charney sensitivity was found to be 1.17 K.

Warren
August 15, 2018 5:09 pm

Dishonest?
Meh did something magical happen in the last six years?
If there was anything ‘dishonest’ about 1850-2011 this thread would have gone into alarmist meltdown.

Nick Stokes
August 15, 2018 8:39 pm

“I understand very little of this, but that prediction-observation graph looks a bit dishonest to me”

Omission of recent warming is just one of the problems. The FAR century trend was, as usual, based on scenarios. Naturally this plot uses only Scenario A, which is for the highest estimate of GHG increase, which is not what happened in 1990-2011. The scenarios are similar to Hansen’s. I have plotted here the graph with data to date, proper surface data as postulated by IPCC (and TLT for those who like that sort of thing), all three FAR scenario century trends, and the MoB green line.

Theo
August 15, 2018 8:49 pm

Omission of 2011-18 means not only omission of warming but also of cooling.

Surely you’ve noticed that the world has cooled since the totally natural warming associated with the super El Nino of 2015-16.

August 15, 2018 9:15 pm

Mr Stokes continues to quibble. Since there have been no reductions in the rate of CO2 concentration growth, it is the business-as-usual scenario that is relevant. On that scenario, the mid-range prediction of medium-term warming made by IPCC in 1990 was 2.8 K or 3.3 K, depending on which version of the medium-term prediction one relies upon. I chose the lesser of the two, so as to be kind to IPCC.

Mr Stokes maintains, disingenuously, that the forcings imagined by IPCC in 1990 have not come to pass. They have; but IPCC, realizing that they were not producing the desired warming rate, introduced a very large fudge-factor in the form of the negative aerosol forcings.

honest liberty
August 16, 2018 12:48 pm

and this is what happens MoB when a scholar attempts to discuss logic with a fanatic: they will resort to obfuscation and self-deception to insulate their ego. If we plummeted into another LIA in the next ten years… Mr. Stokes would still blame CO2.

Thanks again sir, you rock!
Nick, you can continue kicking them

Nick Stokes
August 16, 2018 1:51 pm

“and this is what happens MoB when a scholar attempts to discuss logic”
A scholar would at least tell you that there were scenarios involved, and that he was choosing the most extreme in terms of GHG growth. And then give some positive justification as to why that choice was justified by the events that unfolded. MoB has given no detail about the scenarios at all.

August 16, 2018 4:41 pm

The accident-prone Mr Stokes should perhaps check to see whether the emissions growth to 2011 was below or above the IPCC’s business-as-usual scenario in 1990. Hint: it was above. IPCC’s prediction, therefore, was way off beam. It has realized this and has approximately halved its medium-term prediction since then – yet, unaccountably and inconsistently, it has left its longer-term predictions unaltered.

Bellman
August 17, 2018 6:53 am

Monckton of Brenchley,

I seem to remember a few years back you were claiming that the increase in CO2 was well below what the IPCC predicted. Have you changed your mind since then?

August 17, 2018 9:23 am

As the ever-tiresome and unconstructive Bellhop will have realized by now, in climatology it is necessary to adapt one’s position as the data and the science change. If Bellman were to provide a reference to what I said, for I do not recollect having said any such thing, I should be able to recall the circumstances and provide a more detailed answer.

Bellman
August 17, 2018 1:03 pm

From February 2009,

“It is important to draw the distinction between the increase in CO2 emission, which has been at the high end of the IPCC’s projections, and the corresponding increase in CO2 concentration, which has recently been very near linear, and is running well below the least of the exponential rates of increase projected by the IPCC.

On the current, linear observed trend, CO2 concentration in 2100 will be just 575 ppmv (IPCC central estimate 836 ppmv), requiring the IPCC’s central projection of temperature increase to 2100 to be halved from 3.9 to a harmless 1.9 C°.”

Global Warming is Not Happening

At the hearing of March 25, 2009: “Carbon dioxide is accumulating in the air at less than half the rate that the United Nations had imagined. This century we may warm the world by just half a Fahrenheit degree, if that.”

August 17, 2018 3:01 pm

Excellent. The CO2 concentration is still heading for about 575 ppmv by 2100, and that alone requires IPCC’s prediction of global warming to be reduced. The emissions, however, are – like it or not – above the business-as-usual scenario in IPCC (1990). The IPCC is, therefore, wrong on the following counts:

1. Despite decades of rhetoric and annual bilious climate conferences, IPCC, UNFCCC, UN et hoc genus omne have utterly failed in their primary mission of bullying the West into making heavy enough cuts in emissions to bring the emissions growth rate even down to its high-end or business-as-usual case.

2. Notwithstanding that emissions are running above the business-as-usual scenario envisioned by IPCC in 1990, CO2 concentrations are – as I had said they would – running at well below IPCC’s then mid-range estimate.

3. As our present result shows, equilibrium sensitivity is approximately one-half to one-third of IPCC’s mid-range estimate.

Now, put these telling facts together and it is indeed perfectly possible that the anthropogenic component in the global warming of the 21st century will be only 0.5 K. However, our present result concentrates solely on the question of equilibrium sensitivity to doubled CO2. Bearing in mind that consideration alone, we should expect the anthropogenic component in global warming this century to be of order 1.2 K.

However, if one were to bear in mind that IPCC was also wrong about the relationship between emissions and concentrations, one might well conclude that the anthropogenic component in 21st-century warming will be as little as 0.5 K. But that step, though relevant to my testimony before Congress, was not relevant to our present paper, which is narrowly focused.

Bellman
August 17, 2018 4:46 pm

Excellent. The CO2 concentration is still heading for about 575 ppmv by 2100, and that alone requires IPCC’s prediction of global warming to be reduced.

I’m probably missing something here, but I thought that was the whole point of Nick Stokes argument. CO2 is not rising in accord with the original IPCC business as usual scenario, and therefore you would not expect as much warming as they predicted.

Now, put these telling facts together and it is indeed perfectly possible that the anthropogenic component in the global warming of the 21st century will be only 0.5 K.

Yet you accept temperatures are currently rising three times faster than that.

Also, you predicted 0.5°F, not K – though a few minutes later you suggest it could be as much as 2°F.

August 18, 2018 7:34 am

Bellman continues to be deliberately obtuse. The CO2 emissions, and those from other greenhouse gases, are rising at somewhat above the IPCC’s business-as-usual rate as predicted in 1990. Yet the concentration is not rising as fast as IPCC had predicted. This is one of IPCC’s many mistakes.

However, our paper concentrates only on one mistake: the erroneous definition of “temperature feedback” in IPCC’s reports.

At present, temperatures are rising at 1.6 K/century equivalent on the HadCRUT dataset, less on the RSS dataset and more on the others. However, there has recently been a large El Niño, a naturally-occurring event, which has pushed up the warming rate. Before the El Niño, there had been no warming at all on the satellite datasets for 18 years 9 months, and none on the terrestrial datasets for about 15 years until they were tampered with (for whatever reason) in a fashion calculated to raise the apparent warming rate of recent decades compared with the original measurements.

Since most of the current century has yet to happen, it is interesting that Bellman – who perhaps lacks the statistical knowledge to understand that shorter-term trends tend to fluctuate more than longer-term trends – assumes, on no evidence, that the current rate of warming will persist throughout the century.

As I have already explained twice (and one does understand that Bellman is calculatedly slow on the uptake, for it is paid to be so), it remains possible that the rate of warming this century will be as little as 0.5 K. However, for present purposes our argument takes no account of IPCC’s predictive failure that led it to assume a far greater concentration of greenhouse gases as a function of the emission rate than has actually occurred. That is why our paper, which is confined to the question how much global warming will occur after temperature feedbacks have acted, finds that equilibrium sensitivity to doubled CO2 will be 1.2 K.

Bellman
August 18, 2018 12:47 pm

Since most of the current century has yet to happen, it is interesting that Bellman – who perhaps lacks the statistical knowledge to understand that shorter-term trends tend to fluctuate more than longer-term trends – assumes, on no evidence, that the current rate of warming will persist throughout the century.

That’s patent nonsense. I’ve argued here on numerous occasions that short term trends will fluctuate and cannot be used to predict future changes.

it remains possible that the rate of warming this century will be as little as 0.5 K.

Anything’s possible. We might plunge into a new ice-age by the end of the century, or warming might accelerate in line with the 1990 IPCC estimates. The problem is that you claim the period from 1990 – 2011 verifies your prediction of 1.2°C / century. If it does, then it doesn’t provide evidence for your 0.5°C warming, let alone your 0.5°F projections.

August 18, 2018 6:15 pm

The furtively pseudonymous coward Bellman continues, pointlessly, to pick nits and to lie. It is contemptible. And, when caught out in one of the lies in which it specializes – for that is what cowards do when they are caught out in their bottomless ignorance time and again – it doubles down with further lies.

For the reasons I have explained, our analysis is confined to quantifying the effect of the error of definition on which official climatology has hitherto foolishly relied. Taking that matter on its own, one would expect about 1.2 K of global warming this century, on the assumption that the net centennial warming from all anthropogenic sources is approximately equivalent to the equilibrium warming in response to doubled CO2.

However, there remains the fact that IPCC’s original business-as-usual prediction of CO2 emissions growth is the track currently being followed by inferred CO2 emissions (see e.g. Le Quere’s annual papers), and yet the CO2 concentration growth is well below IPCC’s original business-as-usual prediction. It is for that reason, combined with official climatology’s error of definition, that I consider it possible that anthropogenic global warming this century may prove to be as little as 0.5 K. However, for the purposes of our present paper, 1.2 K this century is enough. If the warming turns out to be 1.6 K – the current HadCRUT4 trend – then our prediction will still be very much closer to reality than IPCC’s original business-as-usual prediction of about 4 K warming over the 21st century.

If one considers IPCC’s medium-term predictions rather than its centennial predictions, then our 1.2 K would be considerably closer to a centennial warming of 1.6 K than IPCC’s 2.8 K/century equivalent or 3.3 K/century equivalent medium-term business-as-usual predictions.

Bellman
August 19, 2018 7:47 am

it doubles down with further lies.

It would be a real help if you could quote a specific lie. I’m sure I make mistakes and correct them when pointed out, but to be accused of making non-specific lies makes it impossible for me to respond.

on the assumption that the net centennial warming from all anthropogenic sources is approximately equivalent to the equilibrium warming in response to doubled CO2.

And that’s one of my problems: where does that assumption come from? Logically it should depend on factors such as how much CO2 increases. Even if a sensitivity can be translated into a rate of warming over the 21st century, there is no reason to suppose this would be the same over a shorter period.

However, there remains the fact that IPCC’s original business-as-usual prediction of CO2 emissions growth is the track currently being followed by inferred CO2 emissions (see e.g. Le Quere’s annual papers), and yet the CO2 concentration growth is well below IPCC’s original business-as-usual prediction. It is for that reason, combined with official climatology’s error of definition, that I consider it possible that anthropogenic global warming this century may prove to be as little as 0.5 K.

I still don’t follow the logic of this. You appear to be confusing predicted rates of warming with climate sensitivity. If your argument is that we may only see 0.5°C warming over the 21st century because there will less of an increase in CO2 over that period, you are not saying that climate sensitivity is only 0.5°C.

then our prediction will still be very much closer to reality than IPCC’s original business-as-usual prediction of about 4 K warming over the 21st century.

I assume that’s a typo, the IPCC predicted 3 °C over the 21st century.

If one considers IPCC’s medium-term predictions rather than its centennial predictions, then our 1.2 K would be considerably closer to a centennial warming of 1.6 K than IPCC’s 2.8 K/century equivalent or 3.3 K/century equivalent medium-term business-as-usual predictions.

Again, the IPCC never predicted 3.3 K/century mid-term warming.

But the main issue is that, as you so rightly said elsewhere, you cannot project a short-term warming trend over the next century. Your argument is that if warming over the next century is only 1.6 °C, the IPCC will have been wrong and you will have been less wrong. If.

August 20, 2018 3:06 am

Here are just a few of Bellman’s lies, uttered from behind its cloak of cowardly anonymity.

1. IPCC predicted 3 K, not 4 K, business-as-usual warming over the 21st century. Yet the business-as-usual diagram quite clearly shows 4 K warming.

2. IPCC never predicted 3.3 K mid-term business-as-usual warming. This lie is repeated twice. Yet IPCC did predict 3.3 K mid-term business-as-usual warming.

3. Bellman, having lied and lied again about IPCC’s predictions, then says it is not interested in IPCC’s predictions. It should at least make some attempt to make its lies self-consistent.

Bellman
August 20, 2018 4:01 am

1. IPCC predicted 3 K, not 4 K, business-as-usual warming over the 21st century. Yet the business-as-usual diagram quite clearly shows 4 K warming.

The 4K warming is the warming since pre-industrial times, not the warming over the 21st century.

2. IPCC never predicted 3.3 K mid-term business-as-usual warming. This lie is repeated twice. Yet IPCC did predict 3.3 K mid-term business-as-usual warming.

You can assert they did predict it ad infinitum, it doesn’t make it true. I’ve tried to explain why I think you are wrong. You have ignored what I say and just repeat your claim. The facts are that the IPCC FAR do state a figure of 3.3K / century up to 2030. They do not state there will be 1.35°C warming between 1990 and 2030. They did predict, as a crude estimate, 1°C warming up to 2025. Their graph does not show short term warming at a rate of 3.3°C / century.

If I’m wrong then Lord Monckton only has to point to the passage stating that and I’ll withdraw my assertion, but up to now all he has done is make some suppositions based on unrelated parts of the IPCC report to derive his own figure.

3. Bellman, having lied and lied again about IPCC’s predictions, then says it is not interested in IPCC’s predictions. It should at least make some attempt to make its lies self-consistent.

If I said I wasn’t interested in IPCC’s predictions it was in the context of thinking them irrelevant in confirming your predictions. Of course I’m interested in IPCC predictions, I just think their crude predictions from 28 years ago are less interesting than their current predictions.

August 20, 2018 6:05 am

I am glad that Bellman now concedes, contrary to his previous assertions, that IPCC do state a figure of 3.3 K/century up to 2030: or, rather, they state that there will be 1.8 K warming compared with pre-industrial. Now, there had been 0.45 K warming to 1990, as IPCC knew at the time. Therefore, it was predicting 1.35 K warming from 1990-2030, which is equivalent to at least 3.3 K/century. It is pleasing that we are now agreed on this.
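The arithmetic behind this per-century rate can be checked in a few lines (a sketch only; the 1.8 K and 0.45 K figures are those stated in the comment above):

```python
# IPCC FAR's stated 1.8 K warming from pre-industrial to 2030, less the
# 0.45 K already observed by 1990, spread over 1990-2030 and expressed
# as a per-century equivalent rate.
warming_to_2030 = 1.8   # K vs pre-industrial (figure stated in the comment)
warming_to_1990 = 0.45  # K already realized by 1990
years = 2030 - 1990

rate_per_century = (warming_to_2030 - warming_to_1990) / years * 100
print(round(rate_per_century, 3))  # 3.375, i.e. "at least 3.3 K/century"
```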

And Bellman did indeed spend a lot of time citing IPCC’s predictions, in the hope of demonstrating some imagined inconsistency or another in the head posting. Only when it realized that I knew it had misstated IPCC’s position did it suddenly and falsely pretend that it was not, after all, interested in IPCC’s predictions. It is deliberate falsehoods like these that Bellman should work on avoiding, for they undermine whatever case it conceives it is supporting in its equivocations and mendacities here, delivered from behind a coward’s cloak of anonymity.

Bellman
August 20, 2018 6:55 am

but for the record my “concession” is the result of missing the word “not” from the above comment. The offending sentence should read “The facts are that the IPCC FAR do not state a figure of 3.3K / century up to 2030.”

Theo
August 15, 2018 11:29 am

The HadCRU books have also been cooked, but maybe just not extra crisply like the Karlmelized GISS.

August 15, 2018 2:38 pm

Another good point from Theo. If we had confined ourselves to the UAH data, our trend-line would have coincided with the observed trend-line almost exactly.

Peter Schell
August 15, 2018 11:30 am

I fear that this debate is no longer one that can be influenced by science and facts. It has become a religion and a handy dog whistle for politicians looking for an excuse either to bring in some sort of social program or to explain how it was not their neglect that was responsible for their city currently being underwater.

And we have an entire generation that has been indoctrinated at a faith-and-belief level. It will take a lot more than math and science they don’t understand to convince them. The Great Lakes freezing solid and staying frozen through the summer might not be enough.

Greg
August 15, 2018 1:23 pm

Hey, haven’t you heard? The Great Lakes freezing over all summer is EXACTLY what climate models have predicted will happen.

Greg
August 15, 2018 1:46 pm

Sadly, you are right, except that there never was a debate, just a shouting match.

The alarmists have done their best to indoctrinate the younger generation, but half the population remain unconvinced. Trump has been elected and proved you do not need to kow-tow to the priests of the cult of AGW to get elected.

He has cut off funding to the UN climate green slush fund and hopefully at some stage will live up to his promise to pull the USA out of the Paris “agreement”.

Academics have completely blown their image as fair-minded, objective experts and have mostly come out as dishonest, bigoted political activists. And that, folks, is about all we get for our money.

August 15, 2018 2:41 pm

Peter Schell should not underestimate the power of a formal mathematical demonstration of the actually rather elementary error of physics perpetrated by climatology in recent decades. We have demonstrated that global warming in response to doubled CO2, after correcting the error, will only be about 1.2 K. On any view, that is not enough to be regarded as catastrophic.

No doubt the journals will continue to resist publishing our result. But, in the end, they cannot be seen to participate in a fraud. If we are correct, then our result deserves to be published and, if it is not published, questions will arise. If we are wrong, then at least we tried.

GregK
August 15, 2018 5:07 pm

Dear Monckton of Brenchley,

What would the curve look like for CO2 doubling/warming… i.e. from 10 ppm to 20 ppm, 20 ppm to 40 ppm, 50 ppm to 100 ppm, 100 ppm to 200 ppm, 500 ppm to 1000 ppm etc., all other things being equal [which they wouldn’t be, but..]

Basically what is the nature of the curve ?

Theo
August 15, 2018 5:17 pm

In the lab, it’s a logarithmic curve, with about 1.1 to 1.2 degrees C per doubling. If you start at one ppm, we’re working on the ninth doubling.

In the real, complex climate system, with positive and negative feedback effects, it’s still probably not much different from that value. IPCC however imagines that the ECS range is 1.5 to 4.5 degrees C per doubling.

August 15, 2018 8:57 pm

In response to GregK’s excellent question, from 50 ppmv to 1000 ppmv the CO2 forcing curve is approximately logarithmic. The latest value of the CO2 radiative forcing is approximately 5 times the natural logarithm of the proportionate change in CO2 concentration: thus, for every doubling of CO2 concentration within the interval [50, 1000] ppmv, the forcing is 5 ln 2, or 3.466 W/m^2. The product of this value and the current Planck sensitivity parameter 0.299 K/W/m^2 is the reference sensitivity to doubled CO2: i.e., 1.0363 K, which I have rounded to 1.04 K in the head posting. Previous values of the coefficient in the forcing function were 5.35 (Myhre et al., 1998, cited in IPCC, 2001) and 6.3 (IPCC up to 1995).
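These figures can be verified in a few lines (the coefficient 5 and the Planck parameter 0.299 K/W/m^2 are the values stated in the comment above):

```python
import math

# Coefficients as stated in the comment above
FORCING_COEFF = 5.0   # W/m^2 per unit of ln(concentration ratio)
PLANCK_PARAM = 0.299  # K per W/m^2 (current Planck sensitivity parameter)

def co2_forcing(ratio, coeff=FORCING_COEFF):
    """Radiative forcing for a proportionate change in CO2 concentration."""
    return coeff * math.log(ratio)

delta_f = co2_forcing(2.0)                      # 5 ln 2 ~ 3.466 W/m^2
reference_sensitivity = delta_f * PLANCK_PARAM  # ~1.0363 K per doubling

print(round(delta_f, 3))                # 3.466
print(round(reference_sensitivity, 4))  # 1.0363
```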

Helge Ankjær
August 15, 2018 3:48 pm

I fear you are right, Peter, especially since one to two generations have received AGW almost intravenously. We have schoolbooks in my country (revised in 2016) which say that the Arctic ice would disappear by 2013.
How can books that were actually revised in 2016 still claim that the ice should have disappeared by 2013? It tells a lot about how big this “climate hysteria” really is.
But now it’s all about fighting: we must get this message out, especially those of us who live in non-English-speaking countries. The climate deception is possibly more entrenched here than many English speakers believe. The biggest battle is most likely to be in Europe.

Thank you, Monckton of Brenchley, thank you so much for never giving up.

August 15, 2018 4:17 pm

I am most grateful to Helge Ankjaer for a most generous comment. Once we have succeeded in persuading a learned journal to publish our result, we expect that it will receive quite a bit of publicity.

richardw
August 15, 2018 11:38 am

I am so looking forward to UEA’s response! Thank you for the insight, focus, dedication and fearlessness you have shown in putting this together.

August 15, 2018 2:45 pm

Many thanks to RichardW for his generous comment. We shall plug away at this until either we are given a credible explanation showing that we are wrong or we are published.

At an earlier stage in our research, a reviewer broke the scientific code of conduct by sending a copy of our paper to UEA with a request that they should assist him in refuting it. The paper found its way to the vice-chancellor, Professor David Richardson, who, in August 2017, called all 65 members of the environmental sciences faculty together and yelled at them: “This is a catastrophe. If Monckton’s paper is ever published, there will be hell to pay.” He ordered everyone to drop everything they were doing and work on refuting our paper. He has subsequently denied that any such meeting took place, but we received our information from one who was present.

Tom Halla
August 15, 2018 11:41 am

Good post. Assuming the same things that produced the 1850 temperature still apply today is a difficult assumption to challenge.

August 15, 2018 2:46 pm

Many thanks to Mr Halla for his kind comment. In fact, we did not assume that the system-gain factors in 1850 and in 2011 were the same: we demonstrated it.

DaveKeys
August 15, 2018 11:41 am

I have been looking at the scale of mother nature.

To fill all the rivers and lakes and to dump all the snow on the mountain ranges around the world requires roughly 1350 cubic kilometres of water to be evaporated daily: a 2-meter-deep swimming pool would cover an area the size of France. This evaporation, plus the rotation of the earth, creates vast ocean currents pulling vast amounts of heat from the tropics towards the poles – so much heat that it can warm entire continents such as Europe. The scale is enormous, which is what you would expect from a process that regulates the temperature of a planet. The effect of CO2 is like a fart in the wind. CO2 is insignificant in the grand scheme of things. The slightest change in energy from the sun would swamp any effect by CO2.

Another scam by the loony left. As our media push this bull shine, what other lies do they tell us? Do the media ever tell us the truth? The news is just some weird form of show business.

richard verney
August 15, 2018 11:51 am

Our result explains why the pause of 18 years 9 months in global warming occurred. Because the underlying anthropogenic warming rate is so small, when natural processes act to reduce warming it is possible for long periods without warming to occur.

This is a rather grandiose claim. So what were the natural processes that exactly offset the forcing caused by the rise in CO2 over the 18 years 9 months?

As I said in a previous post on this paper, we will never be able to assess Climate Sensitivity to CO2 (if there be any at all) until such time as we know and can fully explain natural variation: what it consists of, the upper and lower bounds of each of its constituent forcings, and which of these forcings are operating at any given time and in what amount.

There is a reason why there are over 50 different explanations for the pause, and that is that our present understanding of the system – how it is driven and how it operates – is insufficient.

The fact is that there is no correlation between CO2 and the temperature reconstructions, and we cannot explain the temperature reconstruction from 1850 to date. We cannot explain why it warmed, cooled, warmed, cooled, etc., nor why there is so much multidecadal variation. At the heart of this is that we do not sufficiently understand natural variation, and the data are not fit for scientific scrutiny, so we cannot eke out the signal from CO2, if there is any at all, from the noise of natural variation.

Latitude
August 15, 2018 12:54 pm

“we cannot explain the temperature reconstruction from 1850 to date.”

Saying the LIA ended in 1850…1850 is as arbitrary as anything else that’s made up, might as well be some random made up number…
..as far as anyone knows, almost all of this unmeasurable warming….could just be recovery

…and it would explain the pause

Dave Bidwell
August 15, 2018 1:05 pm

The statement might have been worded better. But what they are saying is that although there is a lot of variability, the trend upward is fairly predictable. And since the slope is so gradual, it should not be a surprise that longer periods with little apparent warming occur. There is no need to explain why the short-term variations occur, because of the “…far simpler and more reliable black-box method…”

richard verney
August 15, 2018 2:38 pm

In order to explain the post 1940 to 1975 cooling, and the ~19 year pause, what one can say is this:

1. IF Climate Sensitivity is high, it follows that the bounds of natural variability are equally high, if not higher; or

2. IF the bounds of natural variability are small, it follows that Climate Sensitivity must likewise be small, or smaller.

I think what Lord Monckton was seeking to point out is the obvious, namely that if Climate Sensitivity is low then it is not surprising that a pause might be observed. Obviously in this scenario, less negative natural-variability forcing is required to completely cancel out the (assumed) positive forcing brought about by additional CO2.

When Lord Monckton used to post on the pause I often pointed out the obvious, namely that the longer the pause continues the lower the Climate Sensitivity is likely to be.

Having a low Charney Sensitivity does not explain the thermometer reconstruction from 1850 simply because there is no correlation between temperature change and CO2 in that reconstruction, and there is quite some anti-correlation in it.

August 15, 2018 2:51 pm

In response to Mr Bidwell, it is not our method but our result that helps to explain why the Pause occurred. The weaker the signal from Man, the likelier it is that long periods without warming will occur.

RobR
August 15, 2018 2:26 pm

Richard,

I’m in agreement with your position on not knowing the vagaries of natural variability, but the proposed reduction in sensitivity is congruent with recent unremarkable climate change.

You cannot on one hand claim something is unknowable and on the other expect a definitive answer. Can’t both co-exist in nature?

August 15, 2018 2:50 pm

Mr Verney has not understood our method. We do not need to know the details he says we need to know. We have taken a black-box approach. All we need to know is the reference temperature and the equilibrium temperature at a given time. The system-gain factor is simply the ratio of the latter to the former. We obtained the same system-gain factor for 1850 and for 2011 and applied it to the published CO2 forcing to obtain 1.17 K warming per doubling of CO2.

It had long been thought that one could only attain to a respectable estimate of equilibrium sensitivity if one knew all the data mentioned by Mr Verney. However, this proves not to be the case. Provided that one uses the absolute-value form of the system-gain equation, rather than the delta form universally used hitherto in climatology, it is quite easy to obtain a reliable and quite well constrained estimate of equilibrium sensitivity.
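The black-box calculation described above can be sketched in a few lines, using the 1850 temperatures given in the head posting (254.8 K reference, 287.55 K equilibrium) and the reference sensitivity of about 1.04 K per doubling cited elsewhere in the thread:

```python
# Black-box system-gain calculation as described in the comment:
# all that is needed is reference temperature R and equilibrium temperature E.
R_1850 = 254.8    # K, reference temperature before feedback (head posting)
E_1850 = 287.55   # K, HadCRUT4 equilibrium temperature after feedback

system_gain = E_1850 / R_1850            # A = E / R ~ 1.1285
feedback_factor = 1.0 - R_1850 / E_1850  # f = 1 - R/E ~ 0.114

# Applying the same gain to a reference sensitivity of ~1.04 K per doubling
reference_sensitivity = 1.04
equilibrium_sensitivity = system_gain * reference_sensitivity

print(round(system_gain, 4))             # 1.1285
print(round(equilibrium_sensitivity, 2)) # 1.17
```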

looncraz
August 16, 2018 9:13 am

“Explain” in this case is probably a bit of a strong word in a scientific setting… probably more accurate to say that the absolute system-gain equation removes the need for an explanation, whereas AGW theory mandates an explanation that is not forthcoming.

AGW theory mandates that global warming would have been too great for a modest natural cooling event to be sufficient to counter anthropogenic forcing for such a long period of time. In the absence of an explanation as to how such a strong cooling effect arose (volcanic activity, light-blocking aerosols, sudden shift in solar output, etc.), AGW theory is left wanting by predicting far higher than observed temperatures – whereas the absolute system-gain equation remains, at first glance, entirely within the margins of error in relation to observation.

AGW is not Science
August 16, 2018 1:01 pm

In reply to this and your posts on the last thread, to which I couldn’t reply (too many replies there?), I would say the following.

I agree wholeheartedly with the concepts you have expressed regarding the “data” being unfit for a scientific purpose, because it has inconsistencies in numbers of measuring stations, locations of measuring stations, character of measuring stations, types of enclosures, types of temperature reading instruments, finishing materials used for the enclosures, differences in how readings were taken and rounded, etc. On top of that pile of manure we can add the endless “adjustments” by (in the main) those trying to propagate belief in human-induced catastrophe.

However, LMOB has performed an invaluable service: by accepting all the “climastrologist” nonsense as factual (which incorporates the “worst case” scenario in all of its assumptions), he has in essence set a “worst-case-scenario ceiling,” as it were, which completely implodes the case for catastrophe – and with it, the case for needed “action.”

Reply to  AGW is not Science
August 16, 2018 4:35 pm

I am most grateful to AGW Is Not Science for his kind comments. He has understood our approach perfectly: we accept ad argumentum all of official climatology except what we can disprove, compelling the usual suspects to focus solely on their unfortunate error of definition and its consequences.

John Harmsworth
August 15, 2018 11:59 am

I’m very sorry but I fail to comprehend how this changes much of anything. So far as I can see the issue is not with perturbations of the math. It is with missing and poorly understood feedbacks. Things like clouds, deep ocean currents and humidity variability and movement seem to be entire mysteries. And climate science spits out computer model outputs based on inputs that appear to deliberately ignore these factors.
It is a bogus pseudo-science which seeks to impoverish humanity and extend control over people’s lives excessively.
It is an horrific mixture of Socialism and Eco-fanaticism which leaves no role for human aspiration or endeavour. It is evil!

Robert W Turner
August 15, 2018 1:44 pm

And it’s a non-linear system.

RPT
August 15, 2018 2:15 pm

Chaotic non-linear!

August 15, 2018 2:59 pm

Chaotic, yes, but one should not equate a mathematically-chaotic system with a shambolic one. Even the pendulum of a well-regulated clock may behave chaotically in the mathematical sense (see the beautifully-written and magisterial paper by the late Sir James Lighthill on this subject in 1998). Yet the clock still keeps good time.

richard verney
August 15, 2018 2:40 pm

And one that is never in equilibrium.

August 15, 2018 3:20 pm

In response to Mr Verney, global temperature was in equilibrium in 1850: it showed no trend over the next 80 years (HadCRUT4). And one can infer the equilibrium that would have occurred in 2011 after allowing for the mid-range estimate of the radiative imbalance.

August 16, 2018 11:34 am

global temperature was in equilibrium in 1850

Prove it.

August 16, 2018 4:33 pm

Already proven. The least-squares linear-regression trend on the HadCRUT4 monthly global mean surface temperature dataset from 1850-1930, a period of eight decades, exhibits a trend vanishingly different from zero.
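A minimal sketch of this trend test, with a synthetic trendless series standing in for the HadCRUT4 monthly anomalies (which would in practice be loaded from the published dataset):

```python
# Sketch of the least-squares linear-regression trend test described above.
# A synthetic trendless series stands in for real HadCRUT4 monthly data,
# purely to illustrate the computation.
import random

def ols_slope(y):
    """Ordinary-least-squares slope of y against its index (per step)."""
    n = len(y)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(y) / n
    num = sum((x - mx) * (v - my) for x, v in zip(xs, y))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(0)
# 80 years of monthly anomalies: zero underlying trend plus noise
series = [random.gauss(0.0, 0.1) for _ in range(80 * 12)]
slope_per_century = ols_slope(series) * 12 * 100
print(abs(slope_per_century) < 0.5)  # trend indistinguishable from zero
```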

RACookPE1978
Editor
August 16, 2018 7:50 pm

Monckton of Brenchley

Already proven. The least-squares linear-regression trend on the HadCRUT4 monthly global mean surface temperature dataset from 1850-1930, a period of eight decades, exhibits a trend vanishingly different from zero.

??? All of the graphs I’ve seen for global average temperatures (and temperature proxies) from 1600–2000 show an irregular but gradual increase from the lows of the LIA around 1650, staggering upwards in 60-70-year increments towards the Modern Warming Period of today. Short-cycle peaks in 1830, 1890, 1940-45, 2000-2010-2018.

In fact, if I were to look for a man-made (CO2-induced) “signal” in all this noise, I’d look at the differences in the short-cycle cooling periods (1890-1915, 1945-1975, and 2010-2018, which has been flat – not decreasing as it did in the previous short cycles).

August 17, 2018 9:26 am

The fact remains, in answer to Mr Cook, that the trend on the HadCRUT4 dataset from 1850-1930 is as near zero as makes no difference. As far as I know, there is no similar period since 1930 with a zero trend. Furthermore, even quite large variations in the equilibrium temperature for 1850 would make little difference to the Charney sensitivity.

richard verney
August 17, 2018 4:45 pm

How about the 1940s to the early 1970s?

See:

I use HADCRUT3 since it is unadjusted and predates the politically motivated version 4.

August 18, 2018 7:23 am

Since we are concerned with the equilibrium temperature in 1850, looking at the 1940s to 1970s is not particularly helpful. By that stage, anthropogenic effects were beginning to make themselves felt in the temperature records.

August 15, 2018 2:58 pm

In response to Mr Turner, some feedback processes are indeed nonlinear, such as the water-vapor feedback. However, the reference temperature in 1850, before we had much to do with it, was 254.8 K, and that is some 375 times larger than the manmade reference warming from 1850-2011. Once one thus includes the sunshine in the calculation, one can ignore feedbacks altogether without much error, so the nonlinearities simply don’t matter.

See - owe to Rich
August 16, 2018 1:18 am

I am afraid that actual non-linearity, or non-constantness, is the major flaw in this paper. For consider the thought experiment in which the Earth slowly moves its orbit towards the Sun, or the Sun slowly increases in brightness, either way increasing the “forcing”. Early on, the Earth is very cold, all its water is frozen and the albedo is high. As the forcing increases, nothing much happens by way of feedback – the temperature increases but still it is too cold to melt ice. At a certain point though, ice starts to melt around the equator, and positive feedbacks (less ice, smaller albedo, water vapour GHG effect) compete with a negative feedback (clouds start to form). I think the positive feedback wins.

At some point, when the ice retreats to Lisbon say, the positive feedback is huge and the inhabitants worry about runaway global warming (incorrectly) and how many more hundreds of metres of sea level rise are going to occur (correctly). But later on, when the ice is restricted to the polar regions, as now, the feedback falls again, as the albedo is starting to approach its limit.

So when Monckton et al divide 288K by 255K or whatever to get a sensitivity amplification factor, this is incorrect, because almost no feedbacks occurred on temperatures below 200K or so, yet the equation assumes that they have. What matters for present global warming is present feedbacks, and my money is on them implying a sensitivity closer to 2K per CO2 doubling than to 1K. Which is still highly unalarming in my view.

Rich.

See - owe to Rich
Reply to  See - owe to Rich
August 16, 2018 4:32 am

Here are some mathematics on the radiative forcings which underlie temperature, to underpin my comment.

Let T(F) = (F/s)^0.25 be the standard conversion from forcing to temperature. Suppose that any feedback is applied to the forcing rather than the temperature. Suppose that the forcing without feedbacks in 2011 AD was F1, and that the feedback ratio for forcing was f1. Then the equilibrium temperature was E1 = T(F1(1+f1)). Now suppose that we add a forcing x (e.g. 3.7 W m^-2) due to a doubling of CO2, and that the feedback ratio changed to f2. Then E2 = T((F1+x)(1+f2)). E2 − E1 is then the sensitivity to doubled CO2. Now,

E2/E1 = [(F1+x)(1+f2) / (F1(1+f1))]^0.25
= [(1 + x/F1)(1 + (f2−f1)/(1+f1))]^0.25
~ 1 + x/(4F1) + (f2−f1)/(4(1+f1))

1 + x/(4F1) is the temperature multiplier if f2 = f1, and x·E1/(4F1) would be the standard 1.1 K for E1 = 288 K. But if albedo reduced or water-vapour infrared absorption increased, then E1(f2−f1)/(4(1+f1)) could be at least as big a player as that 1.1 K. So it is the fractional change in forcing feedback which matters.
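A quick numerical check of this expansion (the exact ratio (1+f2)/(1+f1) equals 1 + (f2−f1)/(1+f1), so the feedback term carries a 1/(4(1+f1)) factor); F1, f1 and f2 below are illustrative assumptions, not measured values:

```python
# Numerical check of the first-order expansion of E2/E1.
# F1, f1 and f2 are illustrative assumed values, not measured ones.
F1 = 240.0   # W/m^2, forcing without feedbacks (illustrative)
x = 3.7      # W/m^2, forcing from doubled CO2
f1 = 0.10    # feedback ratio before doubling (illustrative)
f2 = 0.11    # feedback ratio after doubling (illustrative)

exact = ((F1 + x) * (1 + f2) / (F1 * (1 + f1))) ** 0.25
approx = 1 + x / (4 * F1) + (f2 - f1) / (4 * (1 + f1))
print(abs(exact - approx) < 1e-3)  # the first-order approximation holds closely
```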

Reply to  See - owe to Rich
August 16, 2018 8:03 am

Rich attempts to convert forcing to temperature by taking the fourth root of the ratio of the forcing to the Stefan-Boltzmann constant. That will give a spectacularly wrong answer. The correct approach is to take the product of the forcing and the Planck sensitivity parameter (about 0.3 Kelvin per Watt per square meter at present).

Next, Rich supposes that feedback responds to forcing rather than to temperature. However, in climatology the feedbacks respond to temperature, which is why they are denominated in Watts per square meter per Kelvin of the reference temperature that triggered them.

Thirdly, based on the above two serious errors, Rich concludes that if albedo were to diminish or water vapor feedback were to increase the feedback response “could be” at least as great as the directly-forced reference sensitivity to doubled CO2. But our result shows it couldn’t, under anything like modern conditions.
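The disagreement over the fourth root can be illustrated numerically: the fourth root is the right operation on a *total* flux, while a small forcing increment should be converted with the Planck differential dT/dF = T/(4F), which follows from differentiating the Stefan-Boltzmann relation. A sketch with assumed round numbers (F ~ 240 Wm^-2, not a figure quoted in this thread):

```python
# Fourth root of a TOTAL forcing vs Planck-parameter conversion of an INCREMENT.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

F_total = 240.0   # assumed mean absorbed solar flux, W/m^2
dF = 3.7          # forcing increment from doubled CO2, W/m^2

T_abs = (F_total / SIGMA) ** 0.25   # fourth root of a total flux: ~255 K (sensible)
planck = T_abs / (4 * F_total)      # dT/dF = T/(4F): ~0.27 K per W/m^2
dT_ok = planck * dF                 # increment via the Planck parameter: ~1 K
dT_bad = (dF / SIGMA) ** 0.25       # fourth root applied to the increment: ~90 K
print(T_abs, planck, dT_ok, dT_bad)
```

The last line shows why applying the fourth root directly to a 3.7 Wm^-2 increment gives the "spectacularly wrong answer" complained of above.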

See - owe to Rich
August 16, 2018 3:03 pm

Milord, that is odd, because I thought that in a summary which you once posted you used the fourth root and SB constant to derive a figure of 255K, but I shall have to look back over your many postings to check that. I don’t see how it gives a spectacularly wrong answer – can you supply figures to support that?

I did indeed use a feedback related to forcing, which is slightly incorrect as you say, but only slightly because of the relationship between forcing and temperature. In any case in a comment further down thread I have reanalyzed it directly in terms of your E = R/(1-f) and find that a problem persists if f varies, but I’ll await your answer to that comment.

Harry Palmes
Reply to  See - owe to Rich
August 16, 2018 5:56 am

@See – owe to Rich

After spending many days looking at this, I agree with you. Lord Monckton’s conjecture can explain the equilibrium temperature of 288K from the Emission Temperature of 255K, using a low feedback factor (circa 0.1) acting on the whole 255K input temperature. This is evidently true.

But an equally satisfactory conjecture would be that no feedbacks act until 255K and that they then ‘turn on’ to act on input temperatures above 255K. The arithmetic for this works equally well, using a feedback factor of 0.75 to yield the same correct equilibrium temperature of 288K.

So both conjectures give the correct result. So which one is more likely to be correct?
The warmists can cite some sensible reasons for the latter. For example, water-vapour feedback obviously cannot act without water vapour in the atmosphere, and that begins at approximately 240K, not 0K.

Conversely, I don’t see good reasons to prefer 0K as the datum (Monckton) in place of 255K (IPCC et al).

August 16, 2018 8:10 am

Mr Palmes has done us the courtesy of thinking about our result for some time. However, it may be that he has not quite understood the data or the method.

He starts his calculation well before the industrial era, assuming that no feedbacks act until 255 K. However, Lacis+ (2010), using a general-circulation model, concluded that some 8.75 K of feedback response acted until 252 K, and that thereafter three-quarters of the warming to 1850 was attributable to feedback, implying a system-gain factor of 4.0.

However, from 1850-2011 the reference sensitivity was 0.68 K and the inferred equilibrium sensitivity (after allowing for the radiative imbalance) was 1.02 K, implying a system-gain factor of only 1.5, within shouting distance of our 1.13. So, what has happened? Well, once the great ice-sheets had melted and the specific humidity of the atmosphere had increased in response, the climate settled down to something like modern conditions. That is why the system-gain factors for 1850 and for 2011 are near-identical at 1.13 (or 1.50 if one uses the highly-uncertain delta values rather than the far better constrained absolute values of reference and equilibrium temperature).

Our argument may appear naive, but merely because it is simple one should not underestimate its subtleties. It leaves very little room indeed to imagine an equilibrium sensitivity of much more than 1.5 K at most to doubled CO2. And that is just not enough to constitute a problem.

Reply to  See - owe to Rich
August 16, 2018 7:56 am

“See-owe 2 Rich” perpetrates an error common among those who have little understanding of feedback theory. He assumes that the feedback processes must somehow “know” what they would have been at a different moment or how they might have responded to a different temperature. In fact, feedback processes respond simply to the temperature they find. The reference temperature in 1850 was 254.8 K. The equilibrium temperature was 287.55 K. Like it or not, the ratio of the latter to the former was 1.13, and that is the system-gain factor. What is more, that system-gain factor remained the same in 2011, suggesting that under modern conditions nonlinearities in individual feedbacks make little difference to equilibrium sensitivity.

But let us do it Rich’s way. In Lacis+ (2010), it is assumed that feedbacks in response to the emission temperature of 243.25 K were only 8.75 K, owing to the very small specific humidity they expected in the absence of non-condensing greenhouse gases. They then assumed that three-quarters of the warming in response to the non-condensing greenhouse gases was attributable to the feedback response, implying a system-gain factor of 4. However, that feedback factor would only be relevant on an Earth that began by being almost entirely ice-bound and ended up with little more ice in 1850 than there is today.

From 1850-2011, the reference sensitivity to net anthropogenic forcings is about 0.68 K, and the equilibrium sensitivity is about 1.02 K, using mid-range estimates and standard methodology to obtain both values. In that event, using the delta form of the system-gain equation, the system-gain factor is only 1.02 / 0.68, or 1.50, within shouting distance of our result obtained using the far more reliably estimated absolute reference and equilibrium temperatures. Note that using the delta equation the system-gain factor has fallen from 4 to 1.50, reflecting the fact that there is no longer a vast ice-sheet in temperate latitudes. Yet the models’ implicit mid-range system-gain factor is 3.25, well over double the true value and far too close to the value that might have obtained while the great ice-sheets were melting.

It should be as plain as a pikestaff that the very high equilibrium sensitivities predicted by IPCC and the models are simply unjustifiable, whether one uses the delta or the absolute-value system-gain equation.
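The two system-gain calculations contrasted in this comment reduce to two divisions, using the figures quoted in the thread:

```python
# Absolute-value vs delta-value system-gain factors, with the thread's figures:
# 1850 reference 254.8 K, equilibrium 287.55 K; 1850-2011 deltas 0.68 K / 1.02 K.
R1, E1 = 254.8, 287.55   # 1850 reference and equilibrium temperatures, K
dR, dE = 0.68, 1.02      # 1850-2011 reference and equilibrium sensitivities, K

A_abs = E1 / R1          # absolute-value system-gain factor: ~1.13
A_delta = dE / dR        # delta-value system-gain factor: ~1.50
print(round(A_abs, 2), round(A_delta, 2))
```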

See - owe to Rich
August 16, 2018 1:32 pm

Milord, I don’t think I perpetrate the error you claim, and perhaps you would be so kind as to show where I did.

But let’s do it Monckton’s way, quid pro quo. So I won’t use forcings but your E = R/(1-f) formula, where f is obviously different from (though related to) my old f, and f is allowed to vary.

Then at one time we have E1 = R1/(1-f1) and later after a doubling of CO2 we have E2 = R2/(1-f2). Then E2/E1 = R2(1-f1)/R1/(1-f2), whence sensitivity is

S = E2-E1 = (R2-R1)/R1 + (f2-f1)/(1-f2) + [(R2-R1)(f2-f1)]/[R1(1-f2)]

Now it is evident that if the feedback ratio f is increasing as temperature rises then there is more to S than the (R2-R1)/R1 term, and f2-f1 comes into play.

Perhaps you can show that f2-f1 is tiny? This would imply that the effect of smaller ice sheets and more water vapour is tiny, which may possibly be the case.

Reply to  See - owe to Rich
August 16, 2018 4:31 pm

Rich should read the head posting. We derived the system-gain factors for 1850 and again for 2011 and they were near-identical at 1.13. Even if one used the delta version of the system-gain equation the value for 2011 was only 1.50. Under anything like modern conditions, therefore, the system-gain factor is in reality near-invariant. We are not concerned with what it might be if it were not near-invariant, for it is in fact near-invariant.

Our professor of statistics allowed for quite wide variations in the underlying quantities, and concluded that, to 2-sigma uncertainty, the bounds of the interval of Charney sensitivity are only 1/12 K either side of our mid-range estimate of 1.17 K. Why, then, should I consider some purely abstract scenario that is not on all fours with observed reality, when we have already carried out a 30,000-trial Monte Carlo simulation?
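A minimal sketch of such a Monte Carlo on the ratio E/R follows. This is NOT the authors' actual simulation: the spreads assumed below are purely illustrative, and only the central values come from the thread.

```python
# Monte Carlo sketch of the system-gain factor E/R under assumed uncertainties.
# The standard deviations (0.5 K, 0.3 K) are illustrative assumptions only.
import random

random.seed(0)
N = 30_000
samples = []
for _ in range(N):
    R = random.gauss(254.8, 0.5)    # reference temperature, assumed spread
    E = random.gauss(287.55, 0.3)   # equilibrium temperature, assumed spread
    samples.append(E / R)

samples.sort()
lo = samples[int(0.025 * N)]        # 2.5th percentile
mid = samples[N // 2]               # median, close to 287.55/254.8 ~ 1.13
hi = samples[int(0.975 * N)]        # 97.5th percentile
print(lo, mid, hi)
```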

See - owe to Rich
August 17, 2018 12:57 am

Milord, thank you, I believe I may paraphrase your reply to say that you are answering my last paragraph about f2-f1 in the affirmative. I shall study the details of the 2011 calculation over the weekend.

See - owe to Rich
Reply to  See - owe to Rich
August 17, 2018 1:09 am

No-one yet spotted the algebraic mistake I made in the above equation – the whole of the RHS should be multiplied by E1 (or the LHS divided by E1). I leave it to the reader to draw the inferences of the extra factor.

See - owe to Rich
Reply to  See - owe to Rich
August 18, 2018 6:17 am

Milord, I have now studied your 2011 calculation, and that of doubled CO2, in error-summ.pdf linked from https://wattsupwiththat.com/2018/07/30/climatologys-startling-error-an-update .

(In parenthesis, as I explained back in May I disagree with your and IPCC way of doing feedbacks on temperatures because they cannot be physically added together, whereas forcings can, but I leave that aside for now.)

Let us take my previous equation but drop the last product term because it is of second order in small quantities:

E2-E1 ~ E1[(R2-R1)/R1 + (f2-f1)/(1-f2)]

First note that this equation can be used to reproduce your “Equilibrium 2” result. Translating the notation in that PDF to that in this blog, we have:

E1 = 287.55, E2-E1 = 1.02, R1 = 254.8, R2-R1 = 0.68
f1 = 1-R1/E1 = 0.1139, f2 = 1-R2/E2 = 0.1147
E1(R2-R1)/R1 = 0.77K, E1(f2-f1)/(1-f2) = 0.26K (1)

The sum of these two is 1.03K, which agrees to within rounding error with the value 1.02K in the PDF, thereby verifying my mathematics.
Now you will observe that ¾ of that value arises from the ‘R’ term, and ¼ from the ‘f’ term, showing that the change from f1 to f2, though small, had a noticeable effect, so “near invariance” is not the same as “invariance”.
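The two-term split quoted above can be reproduced directly from the rounded feedback ratios (a sketch using the figures given in this comment):

```python
# Reproduce E2-E1 ~ E1[(R2-R1)/R1 + (f2-f1)/(1-f2)] with the thread's values.
R1, E1 = 254.8, 287.55
f1, f2 = 0.1139, 0.1147            # rounded feedback ratios, as quoted above
dR = 0.68                           # 1850-2011 reference sensitivity, K

r_term = E1 * dR / R1               # the 'R' part: ~0.77 K
f_term = E1 * (f2 - f1) / (1 - f2)  # the 'f' part: ~0.26 K
print(round(r_term, 2), round(f_term, 2), round(r_term + f_term, 2))
```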

Let us now consider “Equilibrium 3” in the PDF, for doubled CO2. The equations above may be used, except that now R2-R1 = 1.04, E2 is unknown and so f2 is unknown. So

E1(R2-R1)/R1 = 287.55(1.04)/254.8 = 1.17K (2)

and that is the figure you quote as the bottom line of that PDF for Charney sensitivity (for which S seems a reasonable variable name).

Note that in your 1.03K which agrees with (1) you effectively allowed for f2 > f1, but in (2) you assume f2 = f1! Here is a table of correction differences which must be added to (2) depending on the value of f2, the first row being for the f2 estimated in the 2011 “Equilibrium 2” case.

f2 correction
0.1147 0.26K
0.1150 0.36K
0.1160 0.68K
0.1170 1.01K

Now the argument about the value of S turns on the value of f2 at the end of a doubling of CO2. And for this we also need to specify whether the doubling is from pre-industrial 280ppm or from current 400ppm. Since we are currently half way through a doubling from 280ppm, it could be argued that f will change as much from 2011 until 560ppm (in say 2100) as it did from 1850 to 2011, and that would make the correction twice 0.26K, i.e. 0.52K, for a total of 1.69K. Oddly enough, that is quite close to the 1.76K one obtains by using the delta forcing approach with a system gain you quoted as 1.5.

But others may argue differently about the change in f. Further, regarding a doubling to 800ppm, if the correction is 0.26K for each half-doubling then it would be 0.78K after the 1.5 doublings from 280ppm to 800ppm, leading to a warming of 1.17+0.78 = 1.95K compared with today, which is still not too scary, but significantly higher than the value you are quoting.
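The correction table above can be recomputed directly (a sketch using f1 = 0.1139 and E1 = 287.55 K as given in this comment):

```python
# Recompute the correction E1*(f2-f1)/(1-f2) for each candidate f2 in the table.
E1, f1 = 287.55, 0.1139
corrections = {}
for f2 in (0.1147, 0.1150, 0.1160, 0.1170):
    corrections[f2] = round(E1 * (f2 - f1) / (1 - f2), 2)
    print(f2, corrections[f2])
```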

Reply to  See - owe to Rich
August 18, 2018 7:21 am

Rich has failed to appreciate one of the main points of our argument, which is that the values of the delta reference and equilibrium temperatures are subject to very large uncertainties, whereas the values of the absolute reference and equilibrium temperatures are preferable because they are subject to far smaller uncertainties.

For instance, the mid-range estimate of the net anthropogenic forcing to 2011, given in IPCC (2013) as 2.29 Wm^-2, is very considerably understated because IPCC had introduced a very strongly negative aerosol forcing, two-thirds of which is simply implausible. Professor Lindzen refers to this as the “aerosol fudge-factor”. IPCC is now beginning to concede that that fudge-factor was excessive. If one reduces it by two-thirds, then using the delta-value system-gain equation for 1850-2011 produces the same system-gain factor as the absolute-value equation – about 1.13.

See - owe to Rich
August 18, 2018 8:07 am

Monckton has replied fairly quickly and in his haste omitted to respond to any detail of what I wrote, especially the egregious exclusion of an f2-f1 term in Equilibrium 3 when it is clearly visible in Equilibrium 2. Monckton has assured us of his prowess in mathematics, and I am satisfied with those assurances. Therefore given a little more time, should he have the inclination, I am sure he will understand the import of what I write.

As to “large uncertainties”, well they cut both ways. Any peer reviewer is going to say there is uncertainty in f2-f1, and f2 may then be so high as to imply that S is 3K or more (which I don’t believe, but justification re f2 must be made if this paper is to have any value).

See - owe to Rich
Reply to  See - owe to Rich
August 19, 2018 3:35 am

Well, Lord Monckton has gone rather quiet on this. I suppose various reasons are possible.

a. He has no good answer and hopes that he can just ignore the issue which I raise.
b. Some of his co-authors have been following this, and being better scientists they realize there is an issue to address, but it will take them and Lord M some time to reach a resolution.
c. More extremely, they are all sitting in a university hall somewhere, probably not that of East Anglia :-), gnashing their teeth and wailing “Who will rid us of this turbulent mathematician who has so cruelly dissected our beautiful equation?”

I think “dissect” is an appropriate word, for I have split the Monckton-style equation for sensitivity into two parts, being an ‘R’ part which depends on the CO2 forcing and an ‘f’ part which depends on the change in the feedback ratio over time, and the ‘f’ part cannot be ignored.

Incidentally, I made a small mistake in my comment of 6:17am yesterday. It is to do with the effect of doubling from 400ppm to 800ppm. When we consider this, the R1 and f1 values should be updated to reflect 2011 rather than 1850, which won’t alter things hugely, and then the correction would only be twice 0.26K rather than three times it. That would leave S at close to 1.7K, which is even less scary.

I hope that the more refined and accurate analysis which I have laid out will lead to a revised Monckton et al paper which is more likely to pass peer review, which I certainly hope to see happen.

Rich

Reply to  See - owe to Rich
August 19, 2018 4:16 am

Rich has not understood my earlier answer. Using the absolute-value equation, the system-gain factors for 1850 and 2011 are so near-identical that one may safely use that value in deriving Charney sensitivity. We simply do not need to know what is going on inside the feedback black box.

Reply to  See - owe to Rich
August 19, 2018 4:34 am

See-owe to Rich has written a series of jumbled equations that are more than a little difficult to follow, not least because he has at least twice corrected them ex post facto. If he will get his act together and set out the equations on which he relies, this time in a form that he himself considers correct, and preferably using the same notation as we have used in the head posting, and putting quantities to the variables, and if he will give a brief and clear statement of why he thinks that his equations demonstrate an error in our approach, and if he will state in clear terms what he conceives that error on our part to be, I shall attempt to answer him.

At present, as best I can understand him, he is saying that he would rather continue relying on the error-prone delta-value system-gain equation exclusively used in climatology, which, however, suffers from the twin defects of great uncertainty in the reference and equilibrium sensitivities and unwarrantably extreme sensitivity to quite small changes in those quantities.

See - owe to Rich
August 19, 2018 12:50 pm

I have just read this, after an enjoyable day in the Cotswolds, and it will be my pleasure. Thank you for the invitation.

See - owe to Rich
Reply to  See - owe to Rich
August 19, 2018 2:45 pm

OK, Lord Monckton asks for clarification, which is to say, reiteration in a more measured form.

Regarding notation, I shall use the equation E = R/(1-f) at the head of this thread, rather than the notation in his document error-summ.pdf (“the PDF”), though I do take the values of these variables exclusively from that document. So R is the “reference temperature”, including GHG forcing but no feedbacks, f is the feedback ratio, and E is the equilibrium temperature after feedbacks have acted and settled down. The PDF also uses the variable A = 1/(1-f). I use the qualifier ‘1’ to correspond to Monckton’s “Equilibrium 1” date of 1850. Thus,

E1 = R1/(1-f1)

where R1 = 254.8K (called T_{r1} in the PDF), E1 = 287.55K (called T_{q1}), f1 = 1-R1/E1 = 0.1139.

I then use the qualifier ‘2’ to correspond to the “Equilibrium 2” date of 2011. Thus,

E2 = R2/(1-f2)

where R2 = 254.8+0.68 = 255.48K, E2 = 287.55+1.02 = 288.57K, f2 = 1-R2/E2 = 0.1147.

Now, my “dissection” to arrive at E2-E1 was:

E2/E1 = [R2/(1-f2)] / [R1/(1-f1)]

E2-E1 = E1[ (R2-R1)/R1 + (f2-f1)/(1-f2) + [(R2-R1)(f2-f1)]/[R1(1-f2)] ]

I can spell that out in easier steps if that is required. I then drop the last term because it is the product of two small first-order (but important) quantities:

E2-E1 = E1[ (R2-R1)/R1 + (f2-f1)/(1-f2) ] (*)

This equation (*), which is new as far as I can tell, establishes that a change in equilibrium temperature E arises from two sources, an ‘R’ part and an ‘f’ part. From the figures above, the two parts are

E1(R2-R1)/R1 = 0.77K
E1(f2-f1)/(1-f2) = 0.26K

Thus E2-E1 = 1.03K which agrees with the PDF’s 1.02K to within rounding error, and corroborates the algebraic manipulations which led to it. I then move on to consideration of “Equilibrium 3”, for a doubling of CO2 from 1850 values. In previous comments I reused the qualifier ‘2’ for this, following the lead of the PDF itself, but it will be clearer if I use ‘3’ instead here.

Thus R3 = R1 + 1.04 = 254.8+1.04 = 255.84K.

Now, the said doubling has not happened yet, so we don’t know what f3 and E3 will be. But if we accept the Monckton et al view of feedback, we know that E3 = R3/(1-f3), and therefore by (*) with ‘2’ replaced by ‘3’,

E3-E1 = E1[ (R3-R1)/R1 + (f3-f1)/(1-f3) ] (**)

In Monckton’s preceding comments at 4.16am and 4.34 am, he made the assertions “the system-gain factors for 1850 and 2011 are so near-identical that one may safely use that value in deriving Charney sensitivity” and “he would rather continue relying on the error-prone delta-value system-gain equation exclusively used in climatology” respectively.

The first assertion is an understandable error, because the values f1 = 0.1139 and f2 = 0.1147 do look very close. But a consequence of Monckton’s desire _not_ to use delta values is that when one multiplies the small difference f2-f1 (divided by 1-f2 in (*)) by the largeish number E1 = 287.55K, one gets a not insignificant 0.26K, which provides a quarter of the total E2-E1.

The second assertion is just a misunderstanding of my mathematics, which I hope is cleared up by the present elaboration.

Now, the nub of the matter is the value of S = E3-E1, the total equilibrium warming from a doubling of CO2. The PDF uses the value

1.17K = 287.55(255.84-254.8)/254.8 = E1(R3-R1)/R1

Thus, comparing with (**), it has been assumed that f3 = f1, precisely in line with Monckton’s first assertion above. Yet, if it were the case that f2 = f1, then E2-E1 would be only 0.77K, which is quite a large discrepancy from the PDF’s quoted 1.02K, and one which could not be overlooked. Since f2-f1 is not zero, by the small amount 0.0008 which has these significant consequences, it seems unwise to assume that f3-f1 is zero.

In my earlier comment I argued that f3-f1 should be at least f2-f1, and more likely twice as much because it applies to a whole doubling of CO2 rather than a half doubling. Hence I would add 0.52K to that 1.17K to get 1.69K.

And some climate scientists may find arguments for f3-f1 > 2(f2-f1), while others may find arguments for it being smaller. But, if one accepts this Monckton et al view of sensitivity (and I have expressed reservations), then the argument is all about the value

f3 – f1

Hope this helps.
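One can also invert E3 = R3/(1-f3) to ask what feedback ratio f3 a given Charney sensitivity would require. A sketch using the thread's figures; the candidate S values (including the 3K mentioned elsewhere in this discussion) are illustrative:

```python
# For each candidate sensitivity S = E3-E1, solve E3 = R3/(1-f3) for f3
# and report the implied change f3-f1. Figures are those quoted in the thread.
R1, E1 = 254.8, 287.55
R3 = R1 + 1.04               # reference temperature after doubled CO2, K
f1 = 1 - R1 / E1             # ~0.1139

implied = {}
for S in (1.17, 1.69, 3.0):
    E3 = E1 + S
    f3 = 1 - R3 / E3         # feedback ratio required to reach that sensitivity
    implied[S] = f3
    print(S, round(f3, 4), round(f3 - f1, 4))
```

For S = 1.17 K the implied f3 is essentially f1 (the PDF's assumption); larger sensitivities require progressively larger f3 - f1.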

See - owe to Rich
Reply to  See - owe to Rich
August 22, 2018 1:50 pm

Nigh on four score and seven hours ago our “dissect” comment brought forth on this blog a new idea dedicated to the proposition that all contributions to climate sensitivity are created equal. Now we are engaged in a great personal war testing whether this idea, or any idea so conceived and dedicated, can endure the ignorance, dismissiveness, and obscurantism of Lord Monckton. [With apologies to Abraham Lincoln.]

Fewer than three score minutes after that comment, Lord Monckton promised that if Rich does A, and Rich does B, and Rich does C, then Lord Monckton shall attempt to answer him. Well, in the comment at August 19 2.45pm Rich did do A, and he did B, and he did C, yet Lord Monckton has not answered.

Granted, no timescale for such reply was put forth. And I understand that this is a very difficult reply for LM to make, because he might actually have to admit that he is wrong, a thing which his many writings and comments attest that he is very loath to do (unlike Willis Eschenbach who has always been very gracious when he has made some error or other). But how many score hours must I await the mighty lord’s reply?

Has LM the Grace to reply courteously, accurately, and with due deference to the truth of my mathematics? Or would that require him to be a Marquess, rather than a Viscount?

Rich (mere commoner and Ph.D.)

Robert W Turner
Reply to  See - owe to Rich
August 16, 2018 8:59 am

I view this paper as (1) a mathematical attempt to simplify the problem as much as possible in order to estimate climate sensitivity, which is perfectly legitimate; and (2) exactly what the IPCC has done, except that Monckton et al. did it correctly.

I say simplified as much as possible because water complicates things and the vast majority of exchangeable heat on this planet is in the oceans and crust, not in the atmosphere. Not only is climate a chaotic nonlinear system, but it is hysteretic: sensitive not only to forcings and feedbacks happening this moment, but also to forcings and feedbacks of the past thousands of years. That’s why it bugs me when decadal trends in temperature are attributed only to forcings in the moment, when in reality they are the culmination of recent geological history.

August 16, 2018 4:27 pm

Mr Turner is quite right. Looking at periods as short as decades is imprudent. One should look at centuries at least.

August 15, 2018 2:55 pm

Mr Harmsworth is worried about “missing and poorly understood feedbacks”. It was precisely for that reason that we found a way of deriving equilibrium sensitivity that did not require any knowledge at all of the value of any individual feedback, or of the interactions between feedbacks. As long as one uses the absolute-value system-gain equation rather than the delta-value equation universally used in climatology, one does not need to know what goes on inside the feedback black box. All one needs is the reference temperature before feedback and the equilibrium temperature after feedback. The system-gain factor is simply the ratio of the latter to the former.

In our submission, the value of our method is that it exposes those elements of official climatology that are indeed “bogus pseudo-science”.

August 15, 2018 11:59 am

I like the report, but I have a question:
You used 1850 and 2011. Would using other years show the same thing? Thanks

richard verney
August 15, 2018 2:43 pm

It does make a difference, but not much of one, since the big adjuster is the water-vapour feedback, which was already operating and locked in as a forcing present as at 1850 and which contributed to the 1850 temperature.

August 15, 2018 5:09 pm

In response to Mr Verney, water vapor is (except at the margins) treated as a feedback and not as a direct forcing in climatology.

August 15, 2018 3:02 pm

Mr Shedlock raises a very proper question. We used 1850 as the start date because that was the first year of the HadCRUT4 global mean surface temperature database, and because we had had little if any impact on the climate that early on. We chose 2011 because that was the year to which IPCC brought all its predictions and data up to date for inclusion in the 2013 Fifth Assessment Report, from which we obtained them.

However, we also conducted an empirical campaign based on ten separate estimates of net anthropogenic forcing in the industrial era to various dates. In every instance, the Charney sensitivity was found to be 1.17 K. Mr Shedlock will find the table of results in our paper when it is published.

dodgy geezer
August 15, 2018 12:04 pm

It’s no good having an answer if people are going to ignore it.

I suggest that you take that calculation and try to formally embarrass some climate change scientists with it. Try the UK Met Office. Make it a point in Parliament…

richard verney
August 15, 2018 2:49 pm

It has to pass peer review, and this would appear to be extremely difficult.

As I understand matters even prominent skeptics such as Dr Spencer and Professor Curry do not support it! That being the case what is the prospect of getting a panel of peer reviewers to accept the paper for publication?

Despite the simplicity of the point made:

Erroneously, IPCC (2013, p. 1450) defines temperature feedback as responding only to changes in reference temperature. However, feedback also responds to the entire reference temperature.

it is going to be an uphill slog

August 15, 2018 3:21 pm

In response to Mr Verney, I do not suggest that it will be easy to persuade official climatology of its error. Trillions have been invested on the assumption that there is no major error. However, we shall persist until either we are given a sound reason why we are wrong or we are published. I am not afraid of uphill slogs.

August 15, 2018 3:05 pm

In response to Dodgy Geezer, first and foremost we must give the climatological community a fair chance to assess our result. To this end, we have invited the editors of the journal that is considering our paper to send it to some of the most prominent scientists who have expressly perpetrated in their papers the error we have found. In that way, they will have the fairest chance to respond and to point out any defect in our own argument.

Once that scientific exchange of ideas is complete, either the paper will be found unworthy of publication, in which event pestering scientists or parliamentarians would be wrong, or it will be published and then subjected to what I imagine will be some of the most intense scrutiny ever directed at a scientific result. If the paper survives that scrutiny more or less intact, then it will be open to all of us to draw our elected representatives’ attention to its conclusions.

John Endicott
August 16, 2018 8:35 am

Or, as I’m sure you are well aware, the paper will not be published for the slightest of reasons that don’t actually reflect on the validity of the paper (Pal review at its finest!), in which case you will have to endeavor to find another publisher and try to get it through their Pal review process, and so on until you find one that will assess the paper honestly.

August 16, 2018 4:25 pm

Mr Endicott is probably right that the reviewers will continue to give us a hard time: but we are expecting them to produce proper scientific arguments against our conclusions this time. The yah-boo we got first time around will not be tolerated again. If we are incorrect, then all the reviewers will have to do is point out, quietly and straightforwardly, that we have misunderstood or miscalculated something, and back up their assertions with rational argument.

Antero Ollila
August 15, 2018 12:11 pm

Just two simple questions for a start. In the text above: “Our calculation starts not with zero Kelvin but with the reference temperature of 254.8 K in 1850.” What is this reference temperature? Is it a real surface temperature in 1850? It does not look like it. And what is the disturbance which should result in a new equilibrium temperature? The CO2 forcing is a continuous change, not a step change at a certain year such as 1850.

August 15, 2018 3:10 pm

In response to Mr Ollila, the reference temperature in 1850 was derived thus. We relied upon the results in Lacis+ 2010, who found that in the absence of the non-condensing greenhouse gases the albedo of the Earth would be 0.42 against today’s 0.30. Using today’s insolation and Lacis’ albedo, we derived the emission temperature in the absence of those gases: it would be about 243.25 K.

Next, we considered three distinct methods of deriving the warming caused by the presence of the non-condensing greenhouse gases to 1850. The mid-range estimate was about 11.55 K. Adding this to the 243.25 K emission temperature gave a reference temperature of 254.8 K for 1850. The equilibrium temperature that year was 287.55 K (and there would be no trend for another 80 years). The ratio of the latter to the former is the system-gain factor: i.e., 1.13.

Other values for the emission temperature and the warming from the non-condensing greenhouse gases could of course be taken, but at any realistic values there would not be much change in the system-gain factor.
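The emission temperatures in this reply follow from the Stefan-Boltzmann relation T = [S0(1-albedo)/(4 sigma)]^0.25. A sketch assuming the standard solar constant S0 = 1368 Wm^-2 (the exact insolation figure used in the paper is not stated here):

```python
# Emission temperature as a function of planetary albedo,
# T = [S0 * (1 - albedo) / (4 * sigma)]**0.25.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1368.0       # assumed solar constant, W/m^2

def T_emit(albedo):
    """Effective emission temperature for a given Bond albedo."""
    return (S0 * (1 - albedo) / (4 * SIGMA)) ** 0.25

print(round(T_emit(0.42), 1))   # Lacis' no-GHG albedo: ~243 K
print(round(T_emit(0.30), 1))   # today's albedo: ~255 K
```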

Joe Born
August 15, 2018 12:14 pm

Suppose that at exactly 7:00:00 PM you want to estimate how far a car traveling down the highway will go in the next minute. If all you know is that it went exactly one mile in the last minute, a pretty reasonable estimate would be one mile.

But now say you know more. Suppose you additionally know that since it left home in city traffic at exactly 3:00:00 PM it has gone precisely 120 miles: its average speed up to now has been only 30 mph, not the 60 mph the last mile implies. Would you change your estimate?

I probably wouldn’t, either. The car is currently going 60 mph, so that’s what I’d base the estimate on.

According to the logic of Lord Monckton and his “team of professors, doctors and practitioners of climatology, control theory and statistics,” though, that would be a “grave error” because I haven’t taken the entire trip into account. Taking that into account would enable them to make a more-informed projection, based on the 30 mph speed for the entire trip, of only half a mile rather than my naive one mile.

Yes, climate isn’t road trips. But ask yourself whether Lord Monckton has really shown you how that distinction makes a difference to the math here. The reason you don’t quite understand how all the feedback and op-amp stuff supports his conclusion is that it doesn’t.

Latitude
August 15, 2018 12:34 pm

..you didn’t account for stopping for pee breaks

Joe Born
August 15, 2018 12:43 pm

Now I get it. That’s what the op amps simulate.

Jeff Mitchell
August 15, 2018 2:11 pm

You didn’t account for the possible use of adult diapers. 🙂

Dave Bidwell
August 15, 2018 1:18 pm

I don’t think Lord Monckton is trying to arrive at the speed in the next minute, or the temperature in the next year. It’s a long term projection/prediction that black boxes the myriad inputs and feedback. He clearly ties the methodology back to Hansen himself who botched it.

Joe Born
August 15, 2018 7:32 pm

I’m afraid you’ve completely misunderstood the analogy. If you watch Lord Monckton’s video (https://www.youtube.com/watch?v=kcxcZ8LEm2A), you’ll see that it’s based entirely on the (as it happens, false) premise set forth in the slide at 13:10. His premise is that the reason why “climatology,” as he expresses it, produces such high equilibrium climate sensitivity values is that it computes after-feedback equilibrium temperatures ($T_\mathrm{eq}$) from before-feedback, “reference” temperatures ($T_\mathrm{ref}$).

All rubbish, of course, as his own video would show him if he had the wit to recognize it: the approach he contends they use doesn’t, at least as applied to his numbers, yield the high values the GCMs profess to find. (He unwittingly admits as much in the comment nearby.) But he doesn’t recognize that his premise is false.

So, suspending disbelief, I set forth a road-trip analogy whose purpose was to follow him down that rabbit hole and consider just the math that his (invalid) premise implies. In that analogy the dependent variable, distance, is a function of the independent variable, time, just as his dependent variable, $T_\mathrm{eq}$, is according to that slide a function of his independent variable, $T_\mathrm{ref}$.

Now, the desideratum is the equilibrium-temperature change $\Delta T_\mathrm{eq}$ that results from a carbon-dioxide-concentration doubling, which is associated with just a 1.05 K reference-temperature change $\Delta T_\mathrm{ref}$ out of an “entire” $T_\mathrm{ref}$ value of 255.4 K. That small change is indeed analogous to only the next minute out of 4 hours x 60 minutes/hour = 240 minutes.

Yes, he’s looking for a long-term equilibrium-temperature projection, but that projection is tiny in terms of his independent variable, reference temperature. So in terms of that analogy he’s looking ahead only the next minute.
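
In proportional terms the correspondence is close; a quick check (a sketch using the figures quoted in this comment):

```python
# Proportions in the analogy: the CO2-doubling perturbation is about the same
# fraction of the entire reference temperature as one minute is of the trip.

delta_T_ref = 1.05     # K, reference-temperature change for doubled CO2
T_ref_entire = 255.4   # K, the "entire" reference temperature

next_minute = 1.0
trip_minutes = 240.0   # 4 hours x 60 minutes/hour

temp_fraction = delta_T_ref / T_ref_entire   # ~0.41 %
trip_fraction = next_minute / trip_minutes   # ~0.42 %

print(temp_fraction, trip_fraction)
```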

August 15, 2018 8:51 pm

Mr Born, who has never studied feedback theory, presumes to lecture our professor of applied control theory. If only he were less spiteful and more interested in the truth, his comments would have some merit. As it is, he is merely another concern troll, paid – for all I know – to be ignorant, malevolent and silly here: for why would he write such nonsense unless he were paid to confuse with his pathetically off-the-point analogy?

When Mr Born has learned a little elementary math, he will find that the effect of including the emission temperature and the warming from the pre-industrial greenhouse gases in the feedback calculation is to stabilize the system-gain factor, which turns out to be 1.13 both in 1850 and in 2011. Once that has been established, of course it can be applied to derive Charney sensitivity to doubled CO2 compared with 2011, for the small warming that has occurred since 1850 is simply insufficient to lead to a major bifurcation or, as the climate extremists put it, a “tipping point”.
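
The absolute-value arithmetic summarised in this reply reduces to a few lines; the sketch below uses the figures from the head post (the variable names are mine, and the final values are rounded):

```python
# System-gain factor A = E / R from 1850 absolute temperatures, then
# Charney sensitivity = A x reference sensitivity to doubled CO2.

emission_T = 243.3   # K, emission temperature without non-condensing GHGs
ghg_warming = 11.5   # K, warming from pre-industrial non-condensing GHGs
R = emission_T + ghg_warming   # 254.8 K, reference temperature before feedback
E = 287.55           # K, HadCRUT4 equilibrium temperature in 1850

A = E / R            # system-gain factor, ~1.13
f = 1.0 - R / E      # feedback factor, ~0.11

ref_sensitivity = 1.04          # K, reference sensitivity to 2xCO2 (Andrews 2012)
charney = A * ref_sensitivity   # ~1.17 K

print(round(A, 2), round(charney, 2))
```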

paul courtney
August 16, 2018 12:56 pm

Mr. Born: Your road trip analogy doesn’t consider how the car went from averaging 30 mph (or 0, for that matter) to 60 mph. Does pushing on the gas pedal count as a “feedback”? Not sure I can make your analogy work, but I can see what’s wrong with it: if the car is to continue at 60 you still push on the pedal, no? If you push on the “feedback” pedal and go faster, and you want to know how you got there, do you only consider the bit of extra push? I think he has shown that the IPCC is ignoring the push that got you to 60, and is therefore implying that the little extra push produces all of the speed. I’m not sure your analogy can be made to work, but your last comment indicates you’re not looking for understanding.

Tom Abbott
August 16, 2018 1:53 pm

Rick C PE
August 15, 2018 1:32 pm

Try your analogy with a different question – What will your average speed be after the next mile? 60 MPH or still very close to 30?

RACookPE1978
Editor
August 15, 2018 2:42 pm

Rick C PE

Try your analogy with a different question – What will your average speed be after the next mile? 60 MPH or still very close to 30?

Rather: Try your analogy with a different question – What will your instantaneous average speed be after each of the next ten miles? 0 , 10, 15, 20, 25, or 60 MPH? Will any be at the average speed of 30?

Joe Born
August 15, 2018 7:35 pm

You may not realize it, but you’re making my point.

Yes, if the expected happens and the car does indeed travel a full mile in the next minute, the average speed will remain very close to 30 mph, although the speed relevant to estimating how far the car would go in that minute is 60 mph. Lord Monckton wants to use 30 mph to determine it; as he describes it, “climatology” wants to use 60 mph. Obviously, the “climatology” approach is superior.

paul courtney
August 16, 2018 1:08 pm

Mr. Born: Maybe I’m missing something, but this “average speed” that you use is missing his point. He’s not trying to show the average of feedbacks like average mph. Instead, he’s showing that there were forces (feedbacks) that got you to 60 which are ignored by the IPCC when they try to show how it goes from 60 to 70.

eyesonu
August 15, 2018 1:37 pm

jhborn,

I can’t make any sense of what you have written or how it would be relevant to the discussion.

Gunga Din
August 15, 2018 2:42 pm

Well, if Mann was driving the car at the start and you believe what he said about where he started and how fast he was going and how far he had gone (all based on his left rear tire’s tread wear) ….

HotScot
August 15, 2018 5:00 pm

Gunga Din

A left rear tyre a few hundred years old.

eyesonu
August 15, 2018 5:53 pm

Don’t forget he inflated it with CO2.

Theo
August 15, 2018 6:39 pm

Joe Born
August 15, 2018 7:37 pm

It’s called an analogy. In my road-trip analogy the dependent variable, distance, is a function of the independent variable, time, just as Lord Monckton’s dependent variable, $T_\mathrm{eq}$, is a function of his independent variable, $T_\mathrm{ref}$.

I fleshed that out in a comment nearby.

Jeff Mitchell
August 15, 2018 2:09 pm

If the average speed for the whole trip is 30 mph it means that the current 60 mph isn’t going to be much of a predictor and it is unknown what will happen in the next minute. So if I drive surface streets at 25 mph (40 kmph), get on a freeway (very fast highway if it isn’t free) at 70 mph (113 kmph, our current speed limit in town) for a couple miles, then back to 30 mph (48 kmph), when I get off, then predictions of the last minute aren’t going to help us much. What we do know is that there is a bunch of variability involved if we look at the whole trip, and that we might want to account for it instead of just looking at the last minute.

Joe Born
August 15, 2018 7:38 pm

You don’t get the point of analogies, do you?

John Endicott
August 16, 2018 8:41 am

You don’t know how to make an applicable analogy, do you?

August 15, 2018 3:16 pm

Let us do things Mr Born’s way and use only the delta-value system-gain equation rather than the absolute-value equation that we used.

The reference sensitivity from 1850-2011 was 0.68 K. The equilibrium sensitivity, after due adjustment for the mid-range estimate of the radiative imbalance to 2010 given in Smith+ (2015), is 1.02 K. The system-gain factor is the ratio of the latter to the former: i.e., 1.50. And that is within shouting distance of our 1.13. Indeed, if one removed two-thirds of IPCC’s aerosol fudge-factor from the list of anthropogenic forcings, the ratio would be exactly 1.13. And that would give a Charney sensitivity of 1.17 K. Using 1.50 as the system-gain factor would give 1.55 K, somewhat above our best estimate but well below the bottom of the [2.1, 4.7] K interval of Charney sensitivities in the CMIP5 models (Andrews 2012).

Using the absolute-value equation, we find the system-gain factor to be 1.13 both in 1850 and in 2011. Unless Mr Born imagines that the very low rate of warming that such values imply is going to be sufficient to engender a major “tipping-point” in the climate, it is difficult to imagine how one can justify the absurdly high equilibrium sensitivities posited by the models and by the IPCC.
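
The delta-value comparison in this reply can likewise be checked numerically (a sketch; the figures are as quoted above, and differences of ~0.01 K against the quoted sensitivities are rounding):

```python
# Delta-value system-gain equation for 1850-2011, as set out above.

delta_ref = 0.68   # K, reference sensitivity 1850-2011
delta_eq = 1.02    # K, equilibrium sensitivity after the Smith+ (2015) adjustment

A_delta = delta_eq / delta_ref   # 1.50

ref_2xCO2 = 1.04                           # K, reference sensitivity to doubled CO2
charney_from_delta = A_delta * ref_2xCO2   # ~1.56 K (quoted above as 1.55 K)
charney_from_abs = 1.13 * ref_2xCO2        # ~1.18 K (quoted above as 1.17 K)

print(A_delta, charney_from_delta, charney_from_abs)
```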

Joe Born
August 15, 2018 7:41 pm

Lord Monckton replies as though I were arguing that equilibrium climate sensitivity (“ECS”) is high. I’m not. I think it’s low.

The point is whether, as he contends, the reason that what he calls climatology overestimates ECS is that it employs the equation set forth in his video and other various posts.

What his comment demonstrates is that the equation he contends climatology uses results in a value much lower than the GCMs find. So, unwittingly, he has just disproved his own premise.

However, if that premise were true and his numbers were right, then the “climatology” approach of using small-signal values (“perturbations”) would be superior to his of using large-signal values.

This is basic math.

August 15, 2018 8:44 pm

Mr Born continues to demonstrate his ignorance of elementary linear systems theory. Our “premise”, to use his word, is that the delta and absolute-value system-gain equations are both valid – a point that he now seems, at last, to accept – but that official climatology, in defining temperature feedback in such a way as to exclude the absolute-value equation, has denied itself the opportunity to derive the true system-gain factor, and hence equilibrium sensitivity, by a method simpler and a great deal less error-prone and fiddle-prone than its present bottom-up method of attempting to quantify individual temperature feedbacks and the interactions between them. That bottom-up method suffers from the defect of not being Popper-falsifiable, in that no feedback can be directly measured to ascertain its value.

Mr Born appears to imagine that using deltas is preferable to using absolute values, and he calls this “basic math”. No: it is basic error. At present, official climatology – e.g. Lacis+ 2010 – imagines that the system gain factor of 4 derivable by using deltas during the ice-melt of the past 9000 years continues to be applicable today. That is not the case, however.

Jim Gorman
August 16, 2018 12:04 pm

jhborn —> Your so-called analogy is totally inept. It does not include any reference to any feedbacks in your trip therefore the math ‘error’ you are trying to prove does not have any basis. You might try adding a speed control loop to your example and then reevaluate your results.

Inge
August 18, 2018 2:49 am

jhborn, I find your analogy and comments very confusing. You start in city traffic and then speed up, assuming your speed will increase. It is these assumptions where ‘climatologists’ go completely off the rails. That is why I like Lord Monckton’s explanation: he does not assume the values but calculates them. And he does not exclude the pre-industrial feedback that was already there.

jhborn: “Yes, if the expected happens and the car does indeed travel a full mile in the next minute, the average speed will remain very close to 30 mph…”

Why is that? Do you also assume an hour consists of 30 minutes?
Also, we do not measure distance in mph but rather in miles or kilometers.

August 15, 2018 12:16 pm

Take off the CO2 and what is left is the (inverted) LOD
http://www.vukcevic.co.uk/Lod-CrT4.gif
but neither of the above two makes any difference to the climate variability, CO2 is just one molecule in 2500 of the rest, and the LOD is measured in ms (milliseconds).
In my view it is the rising temperature that out-gases more of the oceans’ CO2, also the rising temperature inputs more energy into oceans, increasing evaporation and moving large volumes of atmospheric ‘water’ towards the poles, consequently speeding up earth’s rotation (reducing length of day -LOD).

HotScot
August 15, 2018 5:16 pm

vukcevic

My question is: isn’t it as important to understand what doesn’t cause AGW as what does?

Man made CO2 at ~0.0012% of all atmospheric gases is surely inconsequential even if it is doubled to ~0.0024%.

I really don’t understand why CO2 is considered such an overpowering influence on our climate, especially when it’s so unequally mixed.

Kristi would have me believe that energy is transferred between CO2, nitrogen and oxygen as well to ‘stabilise’ the planet’s temperature, which opens up another can of worms, it would seem.

Kristi Silber
August 17, 2018 12:34 am

HotScot: “Kristi would have me believe that energy is transferred between CO2, nitrogen and oxygen as well to ‘stabilise’ the planet’s temperature, which opens up another can of worms, it would seem.”

Well, yes, it is rather an important point, isn’t it? That’s why I’m surprised it’s not discussed here. Wondering myself how CO2 could have such an effect, I asked my uncle about it, who is an atmospheric physicist with NOAA (and former chief of the Mauna Loa Observatory). He explained it. After much looking, I lately stumbled on the article by Pieter Tans (whom I met while visiting my uncle’s Boulder lab – a fascinating, voluble talker!), which confirmed what my uncle said.

Does this not clarify some things for you?

HotScot
August 17, 2018 5:24 am

Kristi Silber

I entirely accept that there is energy transference between molecules. My problem is that, with the volume of all the other gases and their uneven distribution in the atmosphere, there can’t possibly be enough CO2 to ‘stabilise’ the entire atmosphere’s temperature to a single global number, measured down to a fraction of a degree, whilst man’s contribution at 0.0012% of all atmospheric gases somehow imposes complete control of the planet’s temperature.

That’s what all this is about isn’t it, and if so, it’s simply neither logical nor credible.

What atmospheric scientists like to present is that there is a universally stable atmosphere which never distorts and within which every molecule is evenly distributed. That’s not credible even to me as a layman. One can average everything but you may never observationally encounter the average figure wherever you go other than by coincidence.

We know historic land temperature measurements going back any further than the last 50 years or so are unreliable for a myriad of reasons; indeed, we know there are problems with UHIs and contamination of measurements by other means even now. Sea surface temperatures are probably even more unreliable, because they were for the most part derived from well-plied trade routes. Satellite measurements over the last 30 years have also been plagued with problems: calibration, drift, technology change etc. And as for palaeoclimatology, it’s just downright vague: it isn’t possible to determine a temperature down to a single degree with any confidence, never mind a fraction of a degree. Similarly, ice cores are constantly being re-evaluated as more problems with them emerge.

These are all well known issues yet studies accepting whatever version of these, or combination of them, are trotted out as scientific gospel with no acknowledgement of the variables and inaccuracies.

And amongst all these problems are egotistical, fame-seeking scientists who are determined to deliver the magic bullet; politicians who use global warming for their own purposes (at the extreme end, Al Gore, who whipped up hysteria with his book and movie, then went on to trade in carbon credits to great success on the back of it); our failing media, who must find the next daily sensation to report; activist organisations with their own agendas (Patrick Moore has condemned Greenpeace as a terrorist organisation); and lobbying groups with their agendas, all jumping on the AGW bandwagon because it’s passing.

Meanwhile, as I have pointed out to you before, the only scientifically observed phenomenon that can be directly attributed to increased atmospheric CO2 is global greening. It was something mainstream, alarmist science didn’t anticipate; even now it’s barely acknowledged by the MSM and pooh-poohed by alarmist scientists, who are now on a mission to demonstrate that additional CO2 is harmful for plant life, flying in the face of centuries of scientific acceptance.

If the scientific community didn’t consider this monumental event, what else aren’t they considering? Indeed, it raises the question: what might consensus scientists be suppressing?

Bizarrely, the author of the NASA study demonstrating the greening phenomenon publicly, Dr. Ranga Myneni, attacked Matt Ridley for reporting it as an entirely positive effect of increased atmospheric CO2 http://www.rationaloptimist.com/blog/global-greening-versus-global-warming/

Perhaps you can explain this extraordinary turn of events in light of 40 years of failed predictions.

Rob Dawg
August 15, 2018 12:17 pm

Imagine how much of the last three decades of crap could have been avoided if climate models used Kelvin. This is not a flippant comment. From the circa 3° absolute background to the hottest corona spheres, the minuscule increments around the 270°K range possibly being observed are lost well down in the noise.

Theo
August 15, 2018 12:26 pm

No need for the degree sign with K.

But yes, CACA would be a harder sell if people knew that its pushers were talking about a change from perhaps as high as 288.15 K in AD 1850 to at most 289.15 K today.

And of course the human contribution to a one K increase would be minor, much, if not most, of the gain having occurred before CO2 took off in the mid-20th century, with whatever natural “forcings” caused the warming before then still at work after the rise in vital trace gas began.

NorwegianSceptic
August 16, 2018 1:25 am

What?! And miss out all those scary graphs? 🙂

Antero Ollila
August 15, 2018 12:18 pm

A remark. IPCC has reported that ECS is applicable only for many-century-scale calculations, and TCC/TCR is applicable for the temperature changes during this century for the CO2 forcing. It looks like you calculate ECS for the year 2011. How do you explain this?

August 15, 2018 3:24 pm

Mr Ollila raises a fair question. The answer is that for 1850-2011 we make the standard adjustment to the measured period warming to yield the equilibrium warming that would have obtained had it not been for delays in the emergence of anthropogenic warming.

Alan Tomalty
August 15, 2018 12:31 pm

[Snipped per request of the author. -mod]

Alan Tomalty
August 15, 2018 1:03 pm

It is even worse than that. The downward emittance of photons is so small as to be laughable. The only time that can happen is when the sky is warmer than the surface, which is not often. Otherwise, since there is a photon energy-density gradient between the CO2 molecules and the surface, the cooler atmosphere cannot emit photons to a hotter surface because of this electromagnetic field gradient. The troposphere gets colder with elevation. To top it all off, the time span of collisions between N2 and O2 and the CO2 molecule leaves very little probability of the CO2 being able to eject a photon upon receiving it, because the CO2 molecules are being struck by N2 and O2 molecules at a rate of 6.9 billion collisions per second. They have created a fictitious back-radiation amount that defies the laws of physics. The IPCC figures are even worse than all that I have explained, in that evapotranspiration is ~50% of the total solar flux reaching the surface. That latent heat upon condensation is all released to outer space, because convection makes the water vapour (water vapour is lighter than air) and hot air rise. If any of that got back to the surface, we would have had runaway global warming already.
The true sensitivity to doubling of CO2 is so small as to be unmeasurable as the future temperature record of UAH will prove.

Antero Ollila
August 15, 2018 1:15 pm

The downward radiation by the atmosphere matches the radiation theory, the energy-balance calculations and the direct measurements.

Alan Tomalty
August 15, 2018 2:01 pm

Did you bother to look at the NASA diagram and actually read my above post?

Anthony Banton
August 15, 2018 1:32 pm

“It is even worse than that. The downward emittance of photons is so small as to be laughable. The only time that can happen is when the sky is warmer than the surface which is not often.”

No, that is not how the GHE works.

In fact the main (spurious) argument against the GHE by naysayers is that it violates the 2nd LoT – that a cooler body cannot heat a warmer body…..
http://hockeyschtick.blogspot.com/2010/07/why-greenhouse-theory-violates-2nd-law.html

Of course the answer is that it doesn’t. The warmer body just cools more slowly because of the LWIR-intercepting molecules between ground and space — and no, there are plenty in the way when considering a photon’s path-length to space – which is why Richard ought to read up on the Beer-Lambert Law, along with how the GHE works.
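
The “cools more slowly” point is just the Stefan–Boltzmann net-exchange relation; a minimal sketch (the temperatures below are illustrative round numbers, not anyone’s measurements):

```python
# Net radiative loss q = eps * sigma * (T_hot^4 - T_cold^4): the warm surface
# always loses energy to cooler surroundings, but loses it more slowly the
# warmer those surroundings are -- no second-law violation is involved.

SIGMA = 5.670374419e-8   # W m^-2 K^-4, Stefan-Boltzmann constant

def net_loss(t_hot, t_cold, emissivity=1.0):
    """Net radiative flux (W/m^2) from a body at t_hot to surroundings at t_cold."""
    return emissivity * SIGMA * (t_hot**4 - t_cold**4)

surface = 288.0  # K, roughly Earth's mean surface temperature

print(net_loss(surface, 2.7))    # surroundings ~ deep space: ~390 W/m^2
print(net_loss(surface, 255.0))  # surroundings ~ a cool radiating sky: ~150 W/m^2
```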

Alan Tomalty
August 15, 2018 2:32 pm

The warmer body doesn’t cool more slowly. It radiates at its temperature. Easy to test. Set up an inner medium completely surrounded by an outer medium, except that you leave a portal to the inner medium so that you can inject heat into it, and a different portal into the outer medium so that you can inject heat into that as well.

Then close the portals. Repeat the experiment a number of times, each time putting a larger amount of heat in through the inner portal. However, do it differently for the outer portal: put in a lesser amount of heat than through the inner portal, but the same lesser amount each time (as in the first experiment), so that the outer medium is always colder than the inner one. Measure the temperature of the inner medium at exactly the same time differential (pick a time before all temperatures are in equilibrium) each time you do the experiment. The inner medium and outer medium will both eventually equal the temperature of your room. However, when you measure the inner medium’s temperature each time at that same time differential, you will find that the inner medium has cooled at the same rate each time. It will of course take longer in each experiment, because you injected a larger amount of heat into the inner medium each time; however, the rate of cooling of the inner medium is the same. The result is that the inner medium is unaffected by the differences in temperature of the outer medium so long as the outer medium is always colder.

Alan Tomalty
August 15, 2018 2:50 pm

Or, if you want to do the experiment the other way, put the same amount of heat into the inner medium each time, and vary the (lower) amount of heat put into the outer medium, lowering it each time. The result will be that the system will take exactly the same time to reach equilibrium in all the experiments. This will prove that the outer medium had no effect on the radiation of the inner medium.

Alan Tomalty
August 15, 2018 3:03 pm

I should have said that the rate of cooling of the inner medium is unaffected by the outer medium. The actual rate of cooling is never a linear one.

Anthony Banton
August 15, 2018 3:15 pm

Ok – so you have reinvented radiative physics.
Good luck with that.
Here is skeptic (Roy Spencer) proving you wrong and empirical physics correct.

Must make life nice and easy to reinvent the universe to make it fit your ideology.

August 15, 2018 3:27 pm

Surprising though it may seem, I am with Mr Banton here. The greenhouse effect is not a violation of the second law of thermodynamics.

Gary Ashe
August 16, 2018 1:22 pm

Yes it is.

LWIR photons emitted from TOA CO2 have an average blackbody thermalising potential of –83 °C.

Back-welling LWIR is system-redundant spectra, inconvenienced a nano-second on its way to space.

A C Osborn
August 15, 2018 4:08 pm

It is very interesting that you took that experiment and the subsequent conclusions at face value.
I assume that you did not bother to read all the dissenting comments.
I have conducted similar experiments, and the one thing that you MUST do is to have the air between the objects stirred; otherwise the air is heated by both the lamp and the plate.
So you are NOT measuring “Back Radiation” but heat transfer by the air between the objects.
I had a fan blowing on the objects and there was absolutely no increase in the hot object’s temperature when a warmer object was introduced; turn the fan off and the hot object gets hotter.
Or you MUST have a vacuum to do the experiment correctly.
There was also no measurable change in the rate of cooling when the power is turned off with a warmer object in close proximity to the hotter object if a fan is mixing the air correctly.
With a fan blowing there is also no increase in temperature when aluminum foil is introduced either.

Bernard Lodge
August 15, 2018 10:16 pm

There is a simple question that intuitively proves that a cold object cannot raise the temperature of a warmer object:

How many ice cubes do you need to surround a pan of water with before the water boils?

Everybody knows the answer … even Roy Spencer

Editor
August 15, 2018 10:45 pm

Thanks, Bernard. The question is, which will be warmer: an object surrounded by ice at 0°C, or an object surrounded by outer space? Clearly, the object surrounded by ice will be warmer.

Or more to the point, which will be warmer, a planet surrounded by an atmosphere at say -30°C, or the same planet surrounded by outer space?

Heck, when you put on a cold jacket your temperature goes up … why?

Because the cold object is slowing the heat loss of the warm object.

See my post here for a discussion of this question.

Regards,

w.

Bernard Lodge
August 17, 2018 6:56 am

Willis, good questions as always. Answers as follows:

An object surrounded by ice will be warmer than an object surrounded by space. However, that is not the point.

My example intuitively proves that under no circumstances can a colder object make the temperature of a warmer object actually increase. I think you agree with that based on comments you have made in other threads. That means that the Roy Spencer experiment referred to above is wrong.

The real question, if you could answer it, is: Is an object surrounded by O2 and N2 warmer than an object surrounded by the same O2 and N2 but with one molecule of oxygen replaced by one molecule of CO2?

The reason that is the more important question is that we know the CO2 molecule absorbs energy from the surrounding O2 and N2 molecules and then emits LWIR in all directions. Some of that LWIR goes up to space and is lost to the earth … which therefore cools. This happens at all altitudes.

Your second statement … that putting on a jacket makes you warmer is also true … but also not the point. That is not the greenhouse effect. The reason it makes you warmer is that the human body has an internal source of heat.

A more appropriate example would be to ask what happens when you wrap a coat around a steel bar which has a temperature of 100 degrees. The answer is that the temperature of the steel bar does not increase … because it does not have an internal source of heat.

A molecule of air does not have an internal source of heat, so the CO2 ‘coat’ you are adding will not make it warmer either.

richard verney
August 17, 2018 5:21 pm

The reason that is the more important question is that we know the CO2 molecule absorbs energy from the surrounding O2 and N2 molecules and then emits LWIR in all directions.

But only at a few discrete wavelengths; and whilst photons do not have a temperature per se, their wavelength/frequency is related to a specific BB temperature.

So one has to ask the further question, where is the source of photons which have the appropriate wavelength that they are absorbed by the CO2 molecule?

The surface of this planet, save for a few extremes, e.g. Antarctica and Everest, does not radiate photons at the necessary wavelength to be absorbed by CO2. It is only well above the tropopause that one encounters photons of the requisite wavelength.

See the cross section of the atmosphere and see how CO2 is not illuminated below the tropopause (the dotted 14km line) but is lit up well above the tropopause where it transfers energy to TOA and thence radiated away to the void of space.

richard verney
August 17, 2018 5:08 pm

Willis

Whilst I am not disagreeing with the points that you make, how come a space blanket is only effective when it is wrapped around you?

From a radiative energy perspective, consider the scenario where you are surrounded by people in a circle holding a space blanket pointing towards you at a radius of say 2 metres.

In this scenario, the space blanket does not warm you because the radiative flux is overcome by convection. This is what happens in the lower atmosphere (say below the tropopause). In the lower atmosphere the effect of radiative energy transfers is simply overwhelmed by other energy transfers that take place and which have dominion, in particular conduction and convection.

It is only in the upper atmosphere (where there is little atmosphere) that radiative energy transfers rule.

Willis you have probably seen this, but if not, it is worth a read:

https://chiefio.wordpress.com/2012/12/12/tropopause-rules/

In short, the Troposphere is where convection and evaporation / condensation dominate. Driven by ground heating. Radiation simply does not matter here. Any ‘ground heat’ is rapidly taken up by convection and evaporation / precipitation, lofted to the height where radiation takes over, and dumped. We see that every day with the daily temperature cycling in response to 0 to 1400 (ish) W/m^2 solar flux variations.

Gary Ashe
August 16, 2018 1:54 pm

Or 4 x 1 square metre blocks of ice pumping out 1100 watts of LWIR.
And a small household heater pumping out 1100 watts of heat.

In a room.

Which photons thermalise in your skin and which do not.

All the photons are “heat” in an environment cooler than the ice.

At room temp, only the photons from the heater are heat.

Photons from the ice are redundant energy particles.
And simply deflect around the room.

Only the photons of IR are being absorbed by the mass in and of the room.

Alan Tomalty
August 16, 2018 12:57 am

The earth’s surface is not a blackbody, the atmosphere is not closed off, and the atmosphere is not a blackbody either, even though Dr. Spencer’s radiative measurement tool treats it like one.

If at one time there was just the right amount of CO2 in the atmosphere to the greenies’ liking, then that would have been what, the year 1800? Okay. There was even some CO2 in the atmosphere then; therefore, according to their theory, there must have been back radiation as well. Hell, since H2O also absorbs IR, there has been back radiation for 4 billion years on the alarmist theory of physics. So if temperatures in 1800 weren’t going up, there must have been an equilibrium. By that time we were already out of the Little Ice Age. In an equilibrium, if any of the surface radiation is caught by CO2 and H2O and sent back to the surface, you will have warming (the centre plank of alarmist theory). However, that means you have to have extra emission to space to keep the equilibrium. Even with vast amounts of CO2 being put into the atmosphere in 2018, we don’t even have extra outgoing radiation. And if there was 8000 ppm CO2 530 million years ago, how come the earth didn’t turn into a hothouse? So how can back radiation exist? It doesn’t. See below for a mathematical-physics view of this by Dr. Anderson.

One must be very careful in separating the three ways of heating. As Mr Osborn points out, it is very easy to think you are measuring radiative heat transfer when in fact you are measuring conduction/convection (I always get those terms confused when talking about gases). The NASA energy diagram doesn’t split up the land and ocean components when talking about emission fluxes, but never mind: the diagram has bigger problems. They don’t have a number for the CO2-emitted radiation upward, which has to be at least equal to the back radiation downward.
NASA does give a reasonable number for evapotranspiration, but I suspect their convection number from the surface is low. However, their total emitted (IR plus conduction) number from the surface exceeds the original solar input. They should split up the IR and conduction surface-emitted numbers.

Getting back to the back-radiation problem: if a lower-temperature body really could radiate all its IR to a hotter body, then surely the higher-temperature body would also be able to radiate all its energy to the lower one. If you say not all, then how does the lower-temperature body know how much to give away? In that case you wouldn't have equilibrium, just a switch in who was hotter or colder. Clearly that is ridiculous. The real reason IR does not go from the lower-temperature body to the hotter one is that there is a photon electromagnetic-field energy-density gradient between the CO2 molecules and the surface; i.e., the cooler atmosphere cannot emit photons to a hotter surface because of this electromagnetic-field gradient. Photons have to have some rules by which to act, or they would go anywhere, any time. The total energy of an electromagnetic photon/boson is the sum of its electrical energy and its magnetic energy, which are in fact equal to each other. The magnetic energy comes from the magnetic field that surrounds the photon. Photons can't travel against an energy gradient that increases starting from their source.

https://objectivistindividualist.blogspot.com/2018/03/

The above website is from Dr. Charles Anderson. He has singlehandedly destroyed the very foundation of AGW.

I will quote Dr. Anderson

“….the reason that radiant energy only flows from the warmer to the cooler body is because the flow is controlled by an electromagnetic field and an energy gradient in that field.”
“Electromagnetic energy flows from the high energy surface to the low energy surface, as is the case in energy flows generally.”
Also

“The actual radiation that each emits is highly influenced by the fact that the other black body emitter is nearby.
In reality, photons are a manifestation of an electromagnetic field. Thermal radiation is emitted from a material or a molecule due to dipole vibrations and the vibration effect of higher-order poles, though the higher-order poles have much shorter electromagnetic ranges than do the dipoles in vibration. The acceleration and deceleration of charges in dipoles is the primary source of the electromagnetic field that generates photons. An energy density e_H = aT_H⁴ in the vacuum immediately outside the surface of the inner sphere and an energy density e_C = aT_C⁴ immediately inside the surface of the outer spherical shell cause a gradient in the electromagnetic field (the energy density of an electromagnetic field in vacuum is proportional to the magnitude of the electric field squared) from the inner sphere surface to the surface of the outer spherical shell. The total energy gradient between the two surfaces is given by

ΔE = 4πR_H²e_H − 4πR_C²e_C

and

4πR_H²P_H = (σ/a)ΔE, where P_H is the power emitted per unit area from the inner sphere surface, so

P_H = σT_H⁴ − (R_C²/R_H²)σT_C⁴

and

4πR_H²P_H = 4πR_C²P_C,

where P_C is the power per unit area incident upon the inner wall of the spherical shell at the lower temperature, so

P_C = (R_H²/R_C²)σT_H⁴ − σT_C⁴

It is the energy gradient that is fundamental here and it determines the flow of energy and hence the incidence of photons upon the outer spherical shell. ”

The argument wasn't about what temperature the equilibrium ended up at, or even that the hot body got colder and the cold body got hotter; we all agree on those facts. The argument is whether a colder body can radiate IR to a hotter body. According to the "gradient electromagnetic field theory", this is not possible.
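As a numerical check of the algebra in the quoted two-sphere formulas (this is my own sketch, not Dr. Anderson's code, and the temperatures and radii are arbitrary illustrative values):

```python
# Plug illustrative numbers into the quoted two-sphere formulas and verify
# that the total-power identity 4*pi*R_H^2*P_H = 4*pi*R_C^2*P_C holds.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def two_sphere_fluxes(T_H, T_C, R_H, R_C):
    """Power per unit area leaving the inner sphere (P_H) and incident
    on the outer shell (P_C), per the quoted formulas."""
    P_H = SIGMA * T_H**4 - (R_C**2 / R_H**2) * SIGMA * T_C**4
    P_C = (R_H**2 / R_C**2) * SIGMA * T_H**4 - SIGMA * T_C**4
    return P_H, P_C

P_H, P_C = two_sphere_fluxes(T_H=300.0, T_C=250.0, R_H=1.0, R_C=1.05)

# Both formulas derive from the same dE, so the totals balance exactly:
assert abs(1.0**2 * P_H - 1.05**2 * P_C) < 1e-9
```

The identity holds for any choice of radii and temperatures, since both flux expressions are scalings of the same ΔE.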

Alan Tomalty
August 16, 2018 1:15 am

So how can back radiation exist? It doesn't. I should have added

“except for the few times when the atmosphere is hotter than the surface.”

Alan Tomalty
August 15, 2018 10:10 pm

And I should have said that it radiates at the fourth power of its temperature.
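The fourth-power dependence referred to here is the Stefan–Boltzmann law, P = εσT⁴. A minimal sketch (emissivity assumed to be 1, i.e. a blackbody, which real surfaces are not):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_flux(temp_k, emissivity=1.0):
    """Radiant exitance in W/m^2 of a surface at temp_k kelvin."""
    return emissivity * SIGMA * temp_k**4

# Because of the fourth power, doubling the absolute temperature
# multiplies the emitted flux by 2**4 = 16:
ratio = emitted_flux(600.0) / emitted_flux(300.0)  # ~16
```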

Jim Gorman
August 16, 2018 12:32 pm

Let me add my two cents. Assume the atmosphere was a perfect one-way mirror for radiation from the sun. The space between the mirror and the earth would quickly rise to unbearable temperatures. So what do you call an imperfect one-way mirror? I call it an insulator. What does an insulator do? It slows down radiation loss. What does that mean? The temperatures between the insulator and the earth will rise.

What we are trying to find is the R-value of that insulation so we can determine the temperature rise. It doesn't violate the second law; it simply makes the values more complicated to calculate.
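Mr Gorman's R-value analogy can be put in one line: for steady conduction through an insulating layer, the heat flux is q = ΔT / R, so at a fixed flux the temperature difference across the layer scales linearly with R. A minimal sketch with arbitrary illustrative numbers (not a climate calculation):

```python
def delta_t_across_layer(q_flux, r_value):
    """Steady-state temperature difference (K) across an insulating layer.
    q_flux : heat flux through the layer, W/m^2
    r_value: thermal resistance of the layer, K*m^2/W
    """
    return q_flux * r_value

# At a fixed flux, doubling the R-value doubles the temperature difference:
low = delta_t_across_layer(240.0, 0.25)   # 60 K
high = delta_t_across_layer(240.0, 0.5)   # 120 K
```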

August 16, 2018 4:22 pm

Mr Gorman is of course correct. The greenhouse effect is no violation of the second law.

jordan
August 15, 2018 1:45 pm

Alan – I think we only need to consider delta-T and delta-J (radiative power, kW/m²). If you can go with this, it should be possible to recognise the possibility of a sustained increase in radiative power from the atmosphere if there is a large enough and sustained rise in temperature within the body of the atmosphere.

Note, there is no need to suggest regions of the atmosphere are warmer than the surface in absolute T.

The enhanced greenhouse effect postulates such a rise in temperature, where the source of the additional energy is supposed to be increased absorption of OLR owing to increased opacity of the atmosphere to OLR. This is shown in full technicolour in IPCC AR4 Figure 9.1 and became known as the tropospheric hot spot, bearing in mind the diagram shows the predicted rate of change of temperature.

No tropospheric hot spot therefore should be refutation of the postulated enhanced greenhouse effect: no tropospheric hotspot means no source for the supposed additional downward LR.

So all we need to look for is a sustained rise in temperature aloft (greater than the surface rise) to make a case that ANY warming at the surface is attributable to change of CO2. Even better would be confirmation of the “scaling ratio” in the regions where the effect is expected to be most pronounced.

Christy et al. carried out this test some ten years ago and reported that the models were in error.

Anthony Banton
August 15, 2018 3:30 pm

“No tropospheric hot spot therefore should be refutation of the postulated enhanced greenhouse effect: no tropospheric hotspot means no source for the supposed additional downward LR.”

Yes there is – and it is not dependent on the GHE either.
It would occur with any surface warming, as it is the release of latent heat aloft from enhanced TROPICAL convection. Note it is not the troposphere in general.
Also, the "no source for the supposed additional downward LR" comment shows ignorance of the radiative thermodynamics in play: at the top of the troposphere, where the tropical "hotspot" is (300–200 mb), the atmosphere would cool to space, not back-radiate to the surface.

http://iopscience.iop.org/article/10.1088/1748-9326/10/5/054007/meta

Figure 1. Temperature trend 1960–2012 versus latitude and pressure. The value for each latitude and pressure is the medians of the trends at individual stations in that (10°) latitude bin. Units are °C per decade.

August 15, 2018 4:59 pm

Mr Banton has found one of the very few papers that purports to discover a tropical mid-troposphere hot spot. However, the paper concerned does not report a dataset: it reports a more than somewhat questionable meta-analysis using kriging, a technique that is of dubious applicability to the climate.

Nearly all of the datasets show no tropical mid-troposphere hot spot, though all the models predict it. What is more, as Dr Christy has shown, the rate of warming in the tropical mid-troposphere falls below the rates predicted in more than 100 models – another powerful indication that the putative tropical mid-troposphere “hot spot” does not in fact exist.

jordan
August 15, 2018 10:05 pm

Anthony Banton – you blew your credibility when you cited Sherwood et al. The last IPCC assessment assigned low confidence to his approach of using wind speed as a proxy for delta-T. Sherwood doesn't seem able to make up his mind whether wind speed or wind shear best gets the result he is looking for, as his papers flip-flop between the two. He doesn't report any robustness testing for his methods: how would Sherwood's results come out if he dropped the wind data and relied only on temperature? Not well, I suspect.

The IPCC figure I referred to above provides a clear prediction that CO2 will cause a hotspot pattern. You try to obfuscate by suggesting other agents can do the same. But the logic remains unaltered: no hotspot means CO2 is not an active agent in warming the climate.

No hotspot means no source for the imagined additional downwelling radiation. If you cannot grasp that basic point, then I suggest it is you who needs to learn more radiative physics.

Sunsettommy
Editor
August 19, 2018 9:07 am

No, Sherwood’s paper is horrid as shown in these TWO links below:

Desperation — who needs thermometers? Sherwood finds missing hot spot with homogenized “wind” data

http://joannenova.com.au/2015/05/desperation-who-needs-thermometers-sherwood-finds-missing-hot-spot-with-homogenized-wind-data/

and,

A rebuttal to Steven Sherwood and the solar forcing pundits of the IPCC AR5 draft leak

https://wattsupwiththat.com/2012/12/16/a-rebuttal-to-steven-sherwood-and-the-solar-forcing-pundits-of-the-ipcc-ar5-draft-leak/

Trying to create temperature data out of wind-speed data to prove the "hot spot" exists is hilarious.

August 15, 2018 3:26 pm

I have much sympathy with Mr Tomalty’s point about the rather small influence of Man on climate, but the merit of our method is that it provides a simple and formal proof that official climatology has erred. After correction of that error, global warming – at 1.2 K per CO2 doubling – is just not going to be sufficient to make any more detailed calculation of the strength of the CO2 forcing worthwhile.

That is why we have adopted the disciplined approach of accepting all of official climatology except what we can prove to be false.

Alan Tomalty
August 15, 2018 1:03 pm

Mod please delete my original post on this.

David L. Hagen
August 15, 2018 1:30 pm

Alan Tomalty Re “large back radiation flux”
Go back and study foundational radiative absorption-emission physics.
Then study line-by-line atmospheric radiative transfer models. For a review, see:
Clough, S.A., Shephard, M.W., Mlawer, E.J., Delamere, J.S., Iacono, M.J., Cady-Pereira, K., Boukabara, S. and Brown, P.D., 2005. Atmospheric radiative transfer modeling: a summary of the AER codes. Journal of Quantitative Spectroscopy and Radiative Transfer, 91(2), pp. 233–244.
https://pdfs.semanticscholar.org/b2a5/c8dd360d50cb900d39b28a5a68639df02edf.pdf
Saunders, R., Rayer, P., Brunel, P., Von Engeln, A., Bormann, N., Strow, L., Hannon, S., Heilliette, S., Liu, X., Miskolczi, F. and Han, Y., 2007. A comparison of radiative transfer models for simulating Atmospheric Infrared Sounder (AIRS) radiances. Journal of Geophysical Research: Atmospheres, 112(D1).
https://agupubs.onlinelibrary.wiley.com/doi/epdf/10.1029/2006JD007088

Anthony Banton
August 15, 2018 12:36 pm

Can I point out that the photo of "James Morrison" appears to be of this one…

https://en.wikipedia.org/wiki/James_Morrison_(singer)

And not the one who has a Bachelor of Science in Environmental Science from the University of East Anglia (UEA), School of Environmental Sciences:

https://www.researchgate.net/profile/James_Morrison16

I assume it should be the latter.