IPCC has at least doubled true climate sensitivity: a demonstration

Guest essay by Christopher Monckton of Brenchley

Roger Taguchi, who often circulates fascinating emails on climatological physics, has sent me a beautifully simple and elegant demonstration that IPCC has at least doubled true climate sensitivity, turning a non-problem into a wolf-criers’ crisis. To assist in grasping the beauty of his brief but devastating argument, Fig. 1 shows the official climate-sensitivity equation:

ΔTeq = ΔT0 × G,   where ΔT0 = λ0 ΔF and G = 1/(1 − λ0 Σ ci)

Fig. 1 The official climate-sensitivity equation. Equilibrium or post-feedback sensitivity ΔTeq is the product of pre-feedback sensitivity ΔT0 and the post-feedback gain factor G.

Global temperature rose by 0.83 K from 1850–2016 (HadCRUT4: Fig. 2), while CO2 concentration rose from 280 to 400 ppmv. Officially predicted pre-feedback sensitivity ΔT0 to this increase in CO2 concentration is thus 0.312 × 5.35 ln(400/280) = 0.60 K.

Even if CO2 were the sole cause of all the warming, the post-feedback gain factor G would be 0.83/0.60 = 1.38. Then, at doubled CO2 concentration and after all feedbacks had acted, equilibrium sensitivity ΔTeq would be only 0.312 × 5.35 ln(2) × 1.38 = 1.6 K.
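The arithmetic above is easy to verify in a few lines. This is my own sketch (not the author's code), assuming only the post's Planck parameter of 0.312 K per W/m² and the IPCC forcing formula ΔF = 5.35 ln(C1/C0):

```python
import math

# Head-post numbers (all values taken from the post itself):
lambda_0 = 0.312                  # pre-feedback sensitivity parameter, K/(W/m^2)
dF = 5.35 * math.log(400 / 280)   # forcing from 280 -> 400 ppmv, W/m^2
dT0 = lambda_0 * dF               # pre-feedback warming since 1850, K

dT_obs = 0.83                     # HadCRUT4 warming 1850-2016, K
G = dT_obs / dT0                  # implied post-feedback gain factor

dF_2x = 5.35 * math.log(2)        # forcing from doubled CO2, ~3.7 W/m^2
ECS = lambda_0 * dF_2x * G        # implied equilibrium sensitivity, K
print(round(dT0, 2), round(G, 2), round(ECS, 2))   # -> 0.6 1.39 1.61
```

The small differences from the post's 1.38 and 1.6 K arise only from rounding ΔT0 to 0.60 K before dividing.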

Yet the AR4, CMIP3 and CMIP5 central equilibrium-sensitivity predictions are of order 3.2 K.

Not all feedbacks have acted yet. On the other side of the ledger, much of the global warming since 1850 is attributable either to natural causes or to other anthropogenic forcings than CO2. Netting off these two considerations, it is virtually certain that IPCC and the general-circulation models are overestimating Man’s influence on climate by well over double.


Fig. 2 The curve and least-squares trend of global mean surface temperature since 1850 (HadCRUT4).

August 4, 2016 11:17 am

If the climate models (or a specific climate model) are/is adequately skillful then shouldn’t the sensitivity values be emergent values from these/it? So why is there a debate about the sensitivity values?
Which model has proven to be the best at fitting to global average temperature going forward since the first IPCC report? Which of these has been the best at retropredicting the previous global average temperature record prior to the same start time?
Given the best-fitting models, what sensitivities do they give, both forward and backward? What would be the largest range between the forward and backward sensitivities that would still indicate skill in the models?
Are the models skillful?

August 4, 2016 12:14 pm

I think we would all be better served by reading, comprehending, and agreeing with this:
https://wattsupwiththat.com/2016/06/20/greenpeace-co-founder-pens-treatise-on-the-positive-effects-of-co2-says-there-is-no-crisis/comment-page-1/#comment-2249104
We wouldn’t have to worry about putting CO2 in the atmosphere because we NEED it..
The politics and biases would be removed.
We can then concentrate on actual scientific issues surrounding more CO2 in the atmosphere and how to solve our energy needs in the future.

August 4, 2016 1:19 pm

lol, “The Team” and “white blood cells” spring to mind

Svend Ferdinandsen
August 4, 2016 3:09 pm

Would it give a better understanding if at least some of these feedback terms were instead considered as radiative loss/resistance? The feedback acts as if the extra radiation from a temperature rise yields less extra radiation to space, meaning that the temperature must rise further to shed a given amount of extra radiation.
It would even be possible to measure it as the extra radiation seen at TOA versus the extra radiation a higher temperature at ground level would give.
Since the ground temperature varies greatly over the globe, one could simply measure how the TOA radiation changes relative to the temperature variations from pole to pole, more or less.
Maybe Willis could dig into that.

Reply to  Svend Ferdinandsen
August 4, 2016 11:43 pm

Mr Ferdinandsen makes an interesting and constructive suggestion, but I am not sure how practicable it is. Satellites “see” near-infrared radiation emerging not from the top of the atmosphere but from an irregularly-shaped surface that is the locus of all points at or (nearly always) above the hard-deck surface at which incoming and outgoing radiation are equal.
That “emission surface”, one optical depth down into the atmosphere, is at a mean pressure altitude of about 300 hPa, safely above the region of the lower troposphere that is disturbed by non-radiative transports such as evaporation and convection up, advection across, and subsidence and precipitation down.
However, since the temperature lapse rate with altitude is near-invariant at sub-centennial timescales at about 6.5 K/km, for each degree of warming the emission surface rises by about 150 meters. It is not easy either for satellites or for radiosondes to identify such small differences in the altitude of the emission surface. The satellites will always “see” the same outgoing radiation and emission-surface temperature at the mean emission altitude, for it is the altitude, and not these quantities, that changes.
So it is not possible to “see” any additional outgoing radiation at or above the emission surface, for there is none unless the incoming radiation from the Sun changes.
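The 150 m figure in this reply follows directly from the quoted lapse rate; a one-line check (my own sketch):

```python
lapse_rate = 6.5                       # K per km, tropospheric lapse rate from the reply
warming = 1.0                          # surface warming, K
rise_m = warming / lapse_rate * 1000   # emission surface rises until its
                                       # temperature is restored
print(round(rise_m))                   # -> 154, i.e. "about 150 meters" per kelvin
```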

Svend Ferdinandsen
Reply to  Monckton of Brenchley
August 5, 2016 3:57 am

My reference to TOA as such was not so good. I meant all the LW radiation from Earth in total, as seen from space.
Not all the outgoing radiation comes from the air and clouds; some also passes directly through the atmospheric window.
I am not sure it helps, but treating the effect as feedback also has some drawbacks.
It is like treating the resistance in a wire as a superconducting wire with some feedback.

Reply to  Svend Ferdinandsen
August 5, 2016 12:14 am

I think it’s a bit off to expect to see a radiation balance over anything less than the length of full positive/negative cycles, which is why ocean heat content (as best as we know it) is probably our best indicator. Heat comes in, is churned around, and is expelled in the positive phases of ENSO, the PDO, the AMO and heat blobs.
The hysteria has coincided with a pretty big positive phase. A lot of the heat from the post-1930s warming went into the oceans as things cooled through the 50s, 60s and 70s; the 80s, 90s and 2000s have seen that heat come back out. Throw in solar cycles and the warming and cooling phases become non-uniform (colder or hotter depending on the solar cycle).
As it is, much as many don’t want to admit it, the data sets can only really be valid for determining regional trends, not actual temperatures; there is too much work being done with the heat to nail down actual, accurate temperatures.

Paul_B
August 4, 2016 3:53 pm

Sorry, but you can’t calculate climate sensitivity from recent historical data because there is insufficient data on aerosols. (Currently aerosol forcing is probably about -1/2 that of CO2’s forcing.)

Reply to  Paul_B
August 5, 2016 4:38 pm

IPCC has for decades used the “aerosol forcing” as a fudge-factor to make climate sensitivity seem greater than it truly is.

August 5, 2016 12:16 am

The way I see it, these equations of energy balance, and the focus on them, cause myopia.

August 5, 2016 12:42 am

I was thinking Southern Hemisphere ice is the canary; Arctic ice levels are a residue of changes in the SH.

Reply to  Mark - Helsinki
August 5, 2016 3:56 am

this is why they appear to do opposite, lag

Richard
August 5, 2016 12:55 am

Monckton wrote: ”Mr Born is Insufficiently educated in the response time curve. Equilibrium response to the initial forcing occurs quite quickly”.
The main positive feedbacks according to the IPCC (water vapour and clouds), which they claim should amplify the warming from CO2 by a factor of about 3, are defined as fast feedbacks which occur within a time-frame of years (under a decade), and so I think his equation is appropriate here and can be used as an approximation. On a side-note, I got a warming of 1.6 K as well, using a different method:
Quote:
”First, we need to estimate the radiative forcing from a doubling of CO2. Apparently the IPCC use the following equation to calculate this: ΔRF = 5.35*Ln(C1/C0). Where C1 is the final CO2 concentration, C0 is the reference concentration, Ln is the natural logarithm of, and ΔRF stands for increment of radiative forcing. If we accept it for argument’s sake we can calculate the amount of radiative forcing that the IPCC say CO2 would have. The pre-industrial CO2 level was 280ppmv and doubling it gives 560ppmv. Slotting those values into C1 and C0 we get: 5.35*Ln(560/280) = 3.7 W/m2. According to Trenberth the total greenhouse from all sources amounts to 333 W/m2. Therefore the anthropogenic contribution from a doubling of CO2 to the entire planetary greenhouse amounts to 1% (i.e. 3.7/333).
To estimate the resultant temperature increase we must calculate the temperature of the Earth without a greenhouse. The effective blackbody temperature of the Earth (i.e. the assumed temperature of the planet without a greenhouse) can be calculated with the equation: T = [(L)(1 – A)(R)^2/4σ(D)^2]^0.25. Where L is the luminosity of the Sun (6.32×10^7 W/m2), A is the albedo of Earth (0.3), R is the radius of the Sun (6.96×10^8m), σ is the Stefan-Boltzmann constant and D is the distance from Earth to the Sun (1.496×10^11m). Slotting the values into the equation gives us an effective temperature of: T = [(6.32×10^7)(1-0.3)(6.96×10^8)^2/4σ(1.496×10^11)^2]^0.25 = 254.9°K.
Because the Earth’s blackbody temperature is 255°K and its average surface temperature is 288°K it is suggested that the difference of 33°K is due to the atmospheric greenhouse. As the IPCC say: “For the Earth to radiate 235 W/m2 [or 239.4 W/m2] it should radiate at an effective temperature of -19°C [or -18°C] with typical wavelengths in the infrared part of the spectrum. This is 33°C lower than the average temperature of 14°C [or 15°C] at the Earth’s surface. To understand why this is so one must take into account the radiative properties of the atmosphere in the infrared part of the spectrum”.
This implies that the greenhouse back-radiation of 333 W/m2 from all sources (as calculated by lead IPCC author Kevin Trenberth) is sufficient to increase the global mean surface temperature of the Earth by 33°C above its blackbody temperature of -18°C. Hence this gives us a linear relationship between ΔT (at the surface) and ΔRF (by the atmospheric greenhouse) of 0.1°C per 1 W/m2 (i.e. 33/333).
Thus the radiative forcing of 3.7 W/m2 produced by a doubling of CO2 would be sufficient to increase the global mean surface temperature by 0.37°C. This calculation assumes that the relationship between ΔT and ΔRF is linearly proportional, which of course it isn’t. The Stefan-Boltzmann law governs the relationship between radiation and temperature and the law deems that the absolute temperature of a body will increase according to the 4th-root of radiation that is warming it. When the Stefan-Boltzmann law is taken into account the effect is to reduce the possible size of the human component to 0.31°C. However we do not need to bother with this small adjustment and can simply conclude that the total global warming on a doubling of CO2 must be no more than 0.37°C.
We must now take into account the hypothesised positive feedbacks inherent in the climate-system. The IPCC have a second feedback equation for this (as before, it may not be correct): ΔT = λ*ΔF. Where ΔT is the temperature increase, λ is the climate sensitivity parameter (a typical value is about 0.8, occasionally referred to as the ‘Hansen Factor’) and ΔF is the radiance from CO2. The IPCC’s second formula takes the radiance from CO2 and converts it into a corresponding temperature increase. The increase in surface temperature of 0.37°C from CO2 corresponds to a radiance of 2 W/m2 at the mean surface temperature of 288°K by the Stefan-Boltzmann law. The IPCC’s second formula tells us that the new temperature achieved after feedbacks have occurred should be as follows: ΔT = λ*ΔRF = 0.8×2 = 1.6°C. Hence the amplification-factor implied by the IPCC’s formula is about 4.
So, based on the IPCC’s own figures we should get a benign warming of just 1.6°C and CO2’s direct effect can be no more than 0.37°C. Clearly this figure is less than the IPCC’s 3°C and 1°C respectively. So to my mind, the IPCC’s claim that unchecked human emissions will cause 3°C of global warming is a gross exaggeration that contradicts the implications of its own “science”.”
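For what it is worth, the comment's three headline numbers can be reproduced as follows (my own sketch of the commenter's arithmetic, using the constants as given in the quote, with the Earth–Sun distance taken as 1.496×10^11 m throughout):

```python
import math

sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L = 6.32e7         # solar surface flux as given in the quote, W/m^2
A = 0.3            # Earth's albedo
R = 6.96e8         # solar radius, m
D = 1.496e11       # Earth-Sun distance, m

# Effective (no-greenhouse) temperature, ~255 K:
T_eff = (L * (1 - A) * R**2 / (4 * sigma * D**2)) ** 0.25

# Direct CO2 warming via the quote's linear 33/333 = 0.1 K per W/m^2 scaling:
dRF_2x = 5.35 * math.log(2)          # ~3.7 W/m^2 for doubled CO2
dT_direct = dRF_2x * (33 / 333)      # ~0.37 K

# Feedback-amplified warming via dT = lambda * dF with lambda = 0.8:
dT_feedback = 0.8 * 2.0              # ~1.6 K
print(round(T_eff, 1), round(dT_direct, 2), round(dT_feedback, 1))  # -> 254.9 0.37 1.6
```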

ulriclyons
Reply to  Richard
August 5, 2016 3:46 pm

“Because the Earth’s blackbody temperature is 255°K”
No, it’s 279 K; a blackbody doesn’t have 0.3 albedo. The net greenhouse effect is +8 K to +9 K.
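The 279 K figure is easy to check (my own sketch, assuming a total solar irradiance of about 1361 W/m²). With zero albedo the blackbody temperature comes out near 278–279 K, so the net greenhouse effect on this accounting is 288 K minus roughly 279 K:

```python
sigma = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0          # total solar irradiance, W/m^2 (assumed value)
# A true blackbody absorbs everything, so albedo A = 0:
T_no_albedo = (S / (4 * sigma)) ** 0.25
print(round(T_no_albedo, 1))   # -> 278.3, close to the 279 K quoted
```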

August 5, 2016 3:58 am

don’t H2O and CO2 block more energy than they trap? So increases will always be net negative, not positive, over time?

HenryP
Reply to  Mark - Helsinki
August 5, 2016 10:42 am

yes, they do, as they have absorptions in the solar spectrum (0–5 µm). Griff does not understand this. Most AGW testing stems from Tyndall and Arrhenius, i.e. the closed-box experiments [earth spectrum 5–15 µm].

Svend Ferdinandsen
August 5, 2016 4:10 am

Interesting calculation. In fact the effect you calculate could be even smaller, because the albedo of 0.3 includes clouds, which would not be there without water vapor, the main GHG.
http://www.geocraft.com/WVFossils/greenhouse_data.html

August 5, 2016 1:23 pm

That straight line through HadCRUT4 in Figure 2 does not belong there. We are looking at information, not random data, and you should make use of information when it is there. Even with its poorly articulated data I can eyeball several temperature regimes in that graph that do not belong together. For instance, the section from 1850 to 1910 has a distinctly negative slope and must not be subsumed into a “warming” curve. And the section from 1910 to 1940 is a rising curve with a slope higher than that of your straight line. All attempts to explain this temperature segment as greenhouse warming have failed. It starts abruptly in 1910 and is cut off abruptly by the WWII cold spell in 1940. You cannot start greenhouse warming without injecting additional carbon dioxide into the air, and this did not happen in 1910 according to the ice-core extension of the Keeling curve. And you cannot stop greenhouse warming without plucking out all those absorbing CO2 molecules from the air, an impossible task. The segment from 1910 to 1940 clearly is not greenhouse warming, and neither is the segment from 1850 to 1910 that shows cooling.
Lately, global-warming advocates have a new technique to overcome this problem. They now claim that greenhouse warming earlier than 1950 was too weak to be observable and that only warming after 1950 shows AGW. If I remember correctly, they have been pushing for ages the idea that AGW started with the industrial age. Admitting now that it cannot be observed before 1950 cuts that period down to less than half of their original claim. But despite that, the straight line starting in 1850 on your graph stands as the lasting impression their propaganda machine can leave on its victims.

Reply to  Arno Arrak (@ArnoArrak)
August 5, 2016 4:01 pm

The limitations of the least-squares linear-regression trend are well known. However, it has its uses too. It tells us roughly how much global warming has occurred over the period.
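For readers who want to check such figures themselves, the least-squares slope is simple to compute. Here is a sketch on a toy anomaly series constructed to contain exactly 0.83 K of trend warming over 1850–2016 (illustrative data, not the actual HadCRUT4 record):

```python
def ols_slope(xs, ys):
    """Least-squares linear-regression slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1850, 2017))
# Toy anomalies rising linearly at 0.005 K/yr (0.83 K over 166 years):
anoms = [-0.5 + 0.005 * (y - 1850) for y in years]

slope = ols_slope(years, anoms)
print(round(slope, 3), round(slope * 166, 2))   # -> 0.005 0.83
```

Multiplying the fitted slope by the period length gives the “roughly how much warming has occurred” figure the trend line is being used for.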

toncul
Reply to  Monckton of Brenchley
August 5, 2016 8:25 pm

You are still here?
So did you get the difference between equilibrium and transient warming?
Or still not?

Reply to  Monckton of Brenchley
August 6, 2016 3:38 pm

It seems that “toncul” is not aware that the least-squares linear-regression trend is a method of determining an approximate trend on stochastic data. It matters not to that method whether the warming is direct (which occurs within a few years of the forcing) or amplified by feedbacks (in which event the bulk of the feedbacks also operate over a period of years rather than decades or centuries).
Even if that were not the case, my follow-up posting will demonstrate how it is that one can be strongly confident, on the basis of the observed record, that IPCC has overestimated climate sensitivity by at least double.

toncul
Reply to  Monckton of Brenchley
August 7, 2016 12:15 am

No.
What you calculate is a transient response. Look at Otto et al. 2013, “Energy budget constraints on climate response”.
You use exactly the same equation they use for the transient response. The one for ECS has a term for ocean heat uptake. Of course, they are correct and you are wrong:
you assume the present warming is in equilibrium whereas it is not. This is not a matter related to the strength of the feedbacks, but to the heat capacity of the ocean. You don’t understand what a climate feedback is. You don’t understand the equation you use (note also that there is a better way to write this equation, and clearly you are not aware of it). Also, you put the aerosol forcing to 0. Also, there are a lot of other hypotheses. As a result we can’t get an accurate estimate of TCR or ECS with such a method, but the ones estimated are consistent with the Charney sensitivity and, even if lower, consistent with the sensitivity of climate models.
You demonstrated nothing, except that your understanding is extremely limited.

Reply to  Monckton of Brenchley
August 8, 2016 5:01 am

toncul does not seem to appreciate that the simple equation in the head posting is capable of determining both transient and equilibrium sensitivities. As the equation itself shows, the pre-feedback response is invariant, and, as Roe (2009, fig. 6) shows, it is also near-immediate, so that transient and equilibrium pre-feedback responses may safely be taken as identical. In fact, without introducing significant error, one could say that the transient system response is approximately equal to the pre-feedback system response.
Therefore, the difference between the magnitudes (as opposed to the times of occurrence) of transient and equilibrium sensitivity is entirely determinable from the feedback-sum shown in bright blue in the equation. Should you imagine, as IPCC does, that the feedback-sum is so strongly net-positive as to double or even to triple equilibrium sensitivity, then you can use the diagram in Roe (2009, fig. 6) as the basis for determining the fraction of the equilibrium feedback response that will have occurred over any chosen period, and set a new value for the feedback-sum accordingly, allowing immediate determination of transient sensitivity (or any sub-equilibrium sensitivity) over any desired timescale.

toncul
Reply to  Monckton of Brenchley
August 8, 2016 5:20 pm

Roe (2009) is not a great paper, but it is the only paper you know. So let’s use it to show how wrong you are.
The simple equation in the head posting is equation 1 in Roe (2009):
ΔTeq = ΔF/λ (Eq. 1, written as in the head posting),
writing the climate-sensitivity parameter as λ = λ0/(1 − λ0 Σci).
As said in Roe (2009), this equation governs “the new equilibrium response”. Did you read that? I write it again: “the new equilibrium response”. This equation is valid only for equilibrium sensitivities, and not for transient sensitivity, which is straightforward: you cannot use the same equation to calculate two things that are different!!! I could stop here. It’s over: you are wrong.
Fig. 6 in Roe (2009) shows the time evolution of the temperature change for a step forcing ΔF. As you can see, equilibrium is reached only after several thousand years. After a few decades, the warming is only half of the equilibrium warming. So, once again, transient warming and equilibrium warming are NOT the same thing.
So what is missing from Eq. 1 to represent the transient warming? Answer: heat uptake. This is why Roe (2009) shows equations 26 and 27. Eq. 27 can be written more simply: C dT/dt = ΔF − λ ΔT, where ΔT is the warming (not necessarily at equilibrium). When the climate system is in radiative equilibrium, C dT/dt = 0, and we recover Eq. 1; in this case ΔT is the equilibrium warming. We can see that the solution of this equation has a time scale τ = C/λ. So the time scale increases with decreasing λ, which means that it is larger for larger sensitivities. This is why Roe (2009) says (Fig. 6 caption): “the higher sensitivity climates have a larger response time and thus take longer to equilibrate”.
Let’s write equation 27 more generally (the LHS is not really good as written like that):
ΔQ = ΔF − λ ΔT, where ΔQ is heat uptake (mainly ocean heat uptake). Then λ = (ΔF − ΔQ)/ΔT (call this Eq. 2).
I turn back to your head posting.
What you tried to estimate is λ. Then, because we know the forcing for a doubling of CO2, you could get ECS from Eq. 1 (the equilibrium warming for a doubling of CO2). However, to get λ you assume that the present warming is in equilibrium, i.e. that ΔQ = 0 in Eq. 2. This is wrong. Neither observations nor common sense supports ΔQ being presently 0 (once again, Fig. 6 of Roe (2009) illustrates that you need several thousand years to reach equilibrium).
So, from Eq. 1 and Eq. 2, we have:
ECS = F_2x / λ
=> ECS = F_2x ΔT / (ΔF − ΔQ).
Oh my God!
This is equation 1 in Otto et al. 2013!!!
It appears that the calculation you do in the head posting is the one that yields an estimate of TCR (Eq. 2 in Otto et al., even if we don’t get that equation by using the equation of the head posting). You calculate TCR and call it ECS, whereas TCR is lower than ECS.
It’s wrong.
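The point can be illustrated with the simplest possible version of Eq. 27, C dT/dt = ΔF − λΔT. This is my own sketch with illustrative parameter values (a single mixed-layer heat capacity only; the deep-ocean uptake that stretches equilibration to centuries or millennia is deliberately omitted). It shows that treating transient warming as if ΔQ were zero biases the inferred feedback parameter, while the Otto et al.-style budget λ = (ΔF − ΔQ)/ΔT recovers the true value:

```python
import math

# Illustrative assumptions (not values from the thread):
F = 3.7        # step forcing for doubled CO2, W/m^2
lam = 1.2      # "true" feedback parameter, W m^-2 K^-1 (so ECS = F/lam ~ 3.1 K)
C = 4.2e8      # ~100 m mixed-layer heat capacity, J m^-2 K^-1

T_eq = F / lam                            # equilibrium warming, K
tau = C / lam                             # e-folding time, ~11 years
t = 30 * 3.156e7                          # 30 years, in seconds
T_t = T_eq * (1 - math.exp(-t / tau))     # analytic transient solution, T(0) = 0

Q = F - lam * T_t                         # residual heat uptake still flowing in
lam_naive = F / T_t                       # assumes equilibrium (Q = 0): biased high
lam_budget = (F - Q) / T_t                # energy-budget form: recovers lam
print(round(T_eq, 2), round(T_t, 2), round(lam_naive, 2), round(lam_budget, 2))
# -> 3.08 2.88 1.29 1.2
```

With these numbers the naive estimate gives ECS ≈ 3.7/1.29 ≈ 2.9 K instead of the true 3.1 K; with a deep ocean included, ΔQ stays larger for longer and the underestimate grows.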

Reply to  toncul
August 8, 2016 7:45 pm

Fig. 6 in Roe (2009) shows the time evolution of the temperature change for a step forcing ΔF. As you can see, equilibrium is reached only after several thousand years. After a few decades, the warming is only half of the equilibrium warming. So, once again, transient warming and equilibrium warming are NOT the same thing.

But the day-to-day response in temperature to length-of-day changes happens in days.

August 6, 2016 12:25 am

I’m a bit baffled by Greg’s contribution. If the “science is settled” then the IPCC must have written it down somewhere. All Greg need do is quote the real IPCC formula to refute the one he disagrees with. That should be easy for him, because the “science is settled”.

Reply to  mark4asp
August 6, 2016 3:43 pm

Greg is not interested in the truth, or he would not have whined, twice, that the official climate-sensitivity equation in the head posting had not been referenced, when of course it had.
The equation for pre-feedback direct warming will be found, for instance, in Chapter 6.1 of IPCC (2001), and the equation for the post-feedback or system gain factor will be found in what is admittedly a more than customarily Sibylline footnote on p. 631 of IPCC (2007). There are also plenty of descriptions and discussions of the equation in the reviewed journals, and one particularly useful pedagogical paper is Roe (2009), written by a former pupil of the great Dick Lindzen.
The equation is, of course, a simplification, but it is capable of replicating official predictions of global warming with reasonable precision.

August 6, 2016 12:31 am

IPCC uses sophisticated computer models to predict ECS = 3.2 K. How dare the good Lord question IPCC with mere pen and paper at his disposal?
CMIP3 and CMIP5 and Pokemon.

Reply to  Dr. Strangelove
August 9, 2016 11:49 am

Dr. Strangelove:
Using the disambiguation that has become standard in climatological discourse it can be said that today’s climate models do not “predict.” Instead they “project.” To maintain this distinction between “predict” and “project” is methodologically important as a model that “predicts” is susceptible to falsification by the evidence while a model that “projects” is insusceptible to falsification by the evidence. As they are insusceptible to falsification by the evidence, none of the models that you describe as “sophisticated” are scientific.

HenryP
August 6, 2016 4:18 am

Alan Macintyre says
With a 5% increase in the sun’s luminosity, we’d get 1.05 times 480 watts from the sun during the day, and 1.05 times 250 watts from the atmosphere both during the day and during the night,
and the RATIO of day to nighttime temps would stay the same; but with ABSOLUTE warming, the
DIFFERENCE between day and nighttime temps should INCREASE with increased solar luminosity.
Henry says
unfortunately, in the real world it does not work like this. The sun is brighter and hotter than ever before, yet the general global trend [over the past 16 years] is cooling. I find not only minima are dropping, but also maxima and means.
what [I think] happens is:
lower solar polar field strengths => more of the most energetic particles escape from the sun => more ozone, peroxides, N-oxides formed at TOA => more UV deflected back to space [back radiation] => less UV in the oceans
Hence, earth is cooling.

Reply to  HenryP
August 8, 2016 4:53 am

It is going a little too far to say that the trend this millennium is a cooling trend. It is in fact a slight warming trend, which is about what I’d expect to see. It is nothing like the rapid warming trend that would justify spending a single red cent of other people’s money on making global warming go away.

HenryP
Reply to  Monckton of Brenchley
August 8, 2016 11:03 am

Dear Monckton
[is that your first name and am I right in addressing you like that?]
you say
It is going a little too far to say that the trend this millennium is a cooling trend. It is in fact a slight warming trend,
I looked at RSS and it did look right to me until 2016 – flat from 1997 until 2016. Now it looks like it is going up, with the 2015–2016 El Niño even bigger than the 1998 El Niño. Does that fit your data?
http://www.woodfortrees.org/graph/rss/from:1970/plot/rss/from:1970/to:2017/trend/plot/rss/from:1997/to:2016/trend
If you go by the current data sets, it might look like it is getting warmer. Unfortunately, what with the bad sun out there, I don’t really trust any data sets anymore but my own. The sun is currently so bad that it destroys – or is busy destroying – any material exposed to it, unless it is protected by atmosphere.
That rules out RSS and UAH. [what version are we on now?]
The other sets have been fiddled with to fit a certain [political] narrative. This is no joke. For example, I could not reconcile the data from Gibraltar with three stations around it, so I had to dismiss that station from my [random] sample. BOM data has been fiddled with after I reported cooling in certain places….
All the data sets I have compiled show cooling, currently at a rate of −0.015 K/annum [for means] from 2000. That means we have already fallen 0.2 K, and I predict that in the next 10 years we will fall at least as much.
obviously my wife still laughs at me at such small changes, as the rooms in our house differ in temperature by much more….

Gabro
Reply to  Monckton of Brenchley
August 9, 2016 11:55 am

The trend for the past millennium is the same as for the prior two millennia (at least), i.e. cooling. One thousand years ago we were still in the Medieval Warm Period, which was warmer than the Modern Warm Period so far, and which stayed warm for hundreds of years. It was followed, for more hundreds of years, by the cold Little Ice Age. The Modern WP hasn’t been warm enough, or lasted long enough, for the millennial trend to have turned up.

m d mill
August 7, 2016 9:24 am

Lord Monckton… why not use data from 1965 to the present? This would provide a sensitivity more strictly dependent on the large CO2 increase during that time. It would greatly reduce one of the possible errors due to natural variation that you describe. [I have been meaning to do this myself.]
I agree with you that the TCR resistive–capacitive (“RC”) delay is on the order of years, and is not an issue or a problem with your equation (i.e. the deep-ocean capacitance, versus the mixed layer, is virtually irrelevant in the TCR response). As an EE, I consider the TCR (75 years!!) to be, in fact, a quasi-equilibrium sensitivity.
I would also say the TCR vs ECS difference is purely theoretical at this point, and will probably never be observationally verified apart from natural variation [and even if the higher latitudes do warm from −5°F average to +1°F average… is this significant to most of the world?… sub-freezing is sub-freezing!]. But even if your result is only applicable to TCR, it is equally relevant.

August 8, 2016 4:50 am

MD Mill makes the constructive suggestion that one should confine the analysis to the period since 1965. However, I think that as far as possible one should take time considerations out of the equation. I have found a way to do this and the results will be presented in a follow-up posting.
I agree with him that the magnitude of the difference between transient and equilibrium response is likely to be small, for the feedback sum – which is just about the only reason for any difference of more than a decade or two between transient and equilibrium response – is likely to be far smaller than the IPCC would have us believe.

August 9, 2016 7:40 am

The issue has been raised of the origin of the “official” climate-sensitivity equation. It was produced through an application of the reification fallacy that placed the term “the” in front of the phrase “climate sensitivity”. Through this act, climate “scientists” bilked dupes into thinking that the concrete Earth on which they live exhibits this sensitivity, when it is only an abstract Earth that exhibits it.