Sensitivity? Schmensitivity!

Even on business as usual, there will be less than 1 K of warming this century

By Christopher Monckton of Brenchley

Curiouser and curiouser. As one delves into the leaden, multi-thousand-page text of the IPCC’s 2013 Fifth Assessment Report, which reads like a conversation between modelers about the merits of their models rather than a serious climate assessment, it is evident that they have lost the thread of the calculation. There are some revealing inconsistencies. Let us expose a few of them.

The IPCC has slashed its central near-term prediction of global warming from 0.28 K/decade in 1990, via 0.23 K/decade in the first draft of IPCC (2013), to 0.17 K/decade in the published draft. The question any honest climate researcher reading the report will ask is why the long-term or equilibrium climate sensitivity has not been slashed as well.

In 1990, the IPCC said equilibrium climate sensitivity would be 3 [1.5, 4.5] K. In 2007, its estimate was 3.3 [2.0, 4.5] K. In 2013 it reverted to the 1990 interval [1.5, 4.5] K per CO2 doubling. However, in a curt, one-line footnote, it abandoned any attempt to provide a central estimate of climate sensitivity – the key quantity in the entire debate about the climate. The footnote says only that the models cannot agree.

Frankly, I was suspicious about what that footnote might be hiding. So, since my feet are not yet fit to walk on, I have spent a quiet weekend doing some research. The results were spectacular.

Climate sensitivity is the product of three quantities:

• The CO2 radiative forcing, generally taken as 5.35 times the natural logarithm of the proportionate change in concentration – thus 5.35 ln 2, or 3.71 Watts per square meter, for a doubling;

• The Planck (instantaneous, zero-feedback) sensitivity parameter, usually taken as 0.31 Kelvin per Watt per square meter; and

• The system gain, or overall feedback multiplier, which allows for the effect of temperature feedbacks. The system gain is 1 where there are no feedbacks or where they sum to zero.

In the 2007 Fourth Assessment Report, the implicit system gain was 2.81. The direct warming from a CO2 doubling is 3.71 times 0.31 – a little less than 1.2 K. Multiply that zero-feedback warming by the system gain and the harmless 1.2 K of direct CO2-driven warming becomes a more thrilling (but still probably harmless) 3.3 K.
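For readers who like to check such arithmetic, here is a minimal Python sketch of that multiplication. It is an illustration of the reasoning above, not IPCC code; the Planck parameter is taken as 1/3.2 K per Watt per square meter (the unrounded value behind the 0.31 quoted in the text), which is what reproduces the figures given here.

```python
import math

forcing_2x = 5.35 * math.log(2)   # ~3.71 W/m2 per CO2 doubling
planck = 1 / 3.2                  # ~0.3125 K per W/m2; rounded to 0.31 in the text
gain_2007 = 2.81                  # implicit system gain in the 2007 report

print(forcing_2x * planck)               # ~1.16 K zero-feedback warming
print(forcing_2x * planck * gain_2007)   # ~3.26 K equilibrium sensitivity
```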

That was then. However, buried in chapter 9, which is yet another meaningless expatiation on how well the useless models are working, there lies an interesting graph that quietly revises the feedback sum sharply downward.

In 2007, the feedback sum implicit in the IPCC’s central estimate of climate sensitivity was 2.06 Watts per square meter per Kelvin, close enough to the implicit sum f = 1.91 W m–2 K–1 (water vapor +1.8, lapse rate –0.84, surface albedo +0.26, cloud +0.69) given in Soden & Held (2006), and shown as a blue dot in the “TOTAL” column in the IPCC’s 2013 feedback graph (fig. 1):


Figure 1. Estimates of the principal positive (above the line) and negative (below it) temperature feedbacks. The total feedback sum, which excludes the Planck “feedback”, has been cut from 2 to 1.5 Watts per square meter per Kelvin since 2007.

Note in passing that the IPCC wrongly characterizes the Planck or zero-feedback climate-sensitivity parameter as itself being a feedback, when it is in truth part of the reference-frame within which the climate lives and moves and has its being. It is thus better and more clearly expressed as 0.31 Kelvin of warming per Watt per square meter of direct forcing than as a negative “feedback” of –3.2 Watts per square meter per Kelvin.

At least the IPCC has had the sense not to attempt to add the Planck “feedback” to the real feedbacks in the graph, which shows the 2013 central estimate of each feedback in red flanked by multi-colored outliers and, alongside it, the 2007 central estimate shown in blue.

Look at the TOTAL column on the right. The IPCC’s old feedback sum was 1.91 Watts per square meter per Kelvin (in practice, the value used in the CMIP3 model ensemble was 2.06). In 2013, however, the value of the feedback sum fell to 1.5 Watts per square meter per Kelvin.

That fall in value has a disproportionately large effect on final climate sensitivity. For the equation by which individual feedbacks are mutually amplified to give the system gain G is as follows:

G = 1/(1 − g),   g = λ0 f   (1)

where g, the closed-loop gain, is the product of the Planck sensitivity parameter λ0 = 0.31 (strictly 1/3.2) Kelvin per Watt per square meter and the feedback sum f in Watts per square meter per Kelvin. The unitless overall system gain G was thus 2.81 in 2007 (f = 2.06) but is just 1.88 now (f = 1.5).
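Equation (1) is a one-liner in code. This sketch (the function name is mine, for illustration only) reproduces the two gains just quoted, again using λ0 = 1/3.2:

```python
def system_gain(f, lambda0=1/3.2):
    """Overall system gain G = 1/(1 - g), where g = lambda0 * f is the loop gain."""
    g = lambda0 * f
    return 1.0 / (1.0 - g)

for year, f in [(2007, 2.06), (2013, 1.5)]:
    print(year, round(system_gain(f), 2))   # 2007 -> 2.81, 2013 -> 1.88
```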

And just look what effect that reduction in the temperature feedbacks has on final climate sensitivity. With f = 2.06 and consequently G = 2.81, as in 2007, equilibrium sensitivity after all feedbacks have acted was then thought to be 3.26 K. Now, however, it is just 2.2 K. As reality begins to dawn even in the halls of Marxist academe, the reduction of one-quarter in the feedback sum has dropped equilibrium climate sensitivity by fully one-third.

Now we can discern why that curious footnote dismissed the notion of determining a central estimate of climate sensitivity. For the new central estimate, if they had dared to admit it, would have been just 2.2 K per CO2 doubling. No ifs, no buts. All the other values that are used to determine climate sensitivity remain unaltered, so there is no wriggle-room for the usual suspects.

One should point out in passing that equation (1), the Bode equation, applies to dynamical systems in general: where nothing physically prevents the loop gain from exceeding unity, the system response at g > 1 becomes one of attenuation or sign reversal rather than amplification. The climate, however, is obviously not that kind of dynamical system. Its loop gain can exceed unity, yet there is no physical reality corresponding to the equation's requirement that feedbacks which had been amplifying the response should abruptly diminish or reverse it the moment the loop gain passed 1. The Bode equation, then, is the wrong equation. For this and other reasons, temperature feedbacks in the climate system are very likely to sum to net-zero.
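To see the misbehaviour concretely, evaluate G = 1/(1 − g) either side of g = 1 (a pure illustration of the equation, nothing more):

```python
# G = 1/(1 - g) diverges as g -> 1 from below, then flips sign above it.
for g in (0.5, 0.9, 0.99, 1.01, 1.1):
    print(g, 1.0 / (1.0 - g))
# 0.5 -> 2.0, 0.9 -> 10.0, 0.99 -> 100.0, 1.01 -> -100.0, 1.1 -> -10.0
```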

The cut the IPCC has now made in the feedback sum is attributable chiefly to Roy Spencer’s dazzling paper of 2011 showing the cloud feedback to be negative, not strongly positive as the IPCC had previously imagined.

But, as they say on the shopping channels, “There’s More!!!” The IPCC, to try to keep the funds flowing, has invented what it calls “Representative Concentration Pathway 8.5” as its business-as-usual case.

On that pathway (one is not allowed to call it a “scenario”, apparently), CO2 concentration is predicted to rise from 400 to 936 ppmv by 2100, or 1313 ppmv CO2-equivalent once projected increases in CH4 and N2O concentration are included. The resultant anthropogenic forcing of 7.3 Watts per square meter, combined with an implicit transient climate-sensitivity parameter of 0.5 Kelvin per Watt per square meter, is predicted to warm the world 3.7 K by 2100 (a mean rate equivalent to 0.44 K per decade, or more than twice the maximum supra-decadal rate of 0.2 K/decade in the instrumental record to date) and a swingeing 8 K by 2300 (fig. 2). Can They not see the howling implausibility of these absurdly fanciful predictions?
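As a quick sanity check on the quoted numbers (simple arithmetic from the figures stated above, not anything from the IPCC's own code):

```python
forcing = 7.3      # W/m2: RCP8.5 anthropogenic forcing by 2100, as quoted above
tcs_param = 0.5    # K per W/m2: the implicit transient sensitivity parameter

warming_2100 = forcing * tcs_param   # 3.65 K; the report rounds to 3.7 K
decades = 8.5                        # roughly 2015 to 2100
print(warming_2100, warming_2100 / decades)   # ~3.7 K in all, ~0.43 K/decade
```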

Let us examine the IPCC’s “funding-as-usual” case in a little more detail.


Figure 2. Projected global warming to 2300 on four “pathways”. The business-as-usual “pathway” is shown in red. Source: IPCC (2013), fig. 12.5.

First, the CO2 forcing. A rise from 400 ppmv today to 936 ppmv in 2100 is frankly implausible even if the world, as it should, abandons all CO2 targets altogether. There has been very little growth in the annual rate of CO2 increase: it is little more than 2 ppmv a year at present. Even if that rate rose linearly to 4 ppmv a year by 2100, there would be only about 655 ppmv CO2 in the air by then. So let us generously call it 700 ppmv. That gives us the CO2 radiative forcing by the IPCC’s own method: 5.35 ln(700/400) = 3.0 Watts per square meter.
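The integration behind that 655 ppmv figure is just the area under a linear ramp; here is a sketch of the back-of-envelope reasoning above (the 2-to-4 ppmv ramp is this article's assumption, not an emissions model):

```python
import math

years = 85                      # roughly 2015 to 2100
rate_now, rate_2100 = 2.0, 4.0  # ppmv/year, rising linearly (the assumption above)

added = years * (rate_now + rate_2100) / 2.0   # area under the ramp: 255 ppmv
print(400 + added)                             # ~655 ppmv; call it 700, generously

print(5.35 * math.log(700 / 400))              # IPCC-method forcing: ~3.0 W/m2
```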

We also need to allow for the non-CO2 greenhouse gases. For a decade, the IPCC has been trying to pretend that CO2 accounts for as little as 70% of total anthropogenic warming. However, it admits in its 2013 report that the true current fraction is 83%. One reason for this large discrepancy is that the rate of increase in methane concentration slowed dramatically around the year 2000, once Gazputin had repaired the methane pipeline from Siberia to Europe (fig. 3). So we shall use 83%, rather than 70%, as the CO2 fraction.


Figure 3. Observed methane concentration (black) compared with projections from the first four IPCC Assessment Reports. This graph, which appeared in the pre-final draft, was removed from the final draft lest it give ammunition to skeptics (as Germany and Hungary put it). Its removal, of course, gave ammunition to skeptics.

Now we can put together a business-as-usual warming case that is a realistic reflection of the IPCC’s own methods and data but without the naughty bits. The business-as-usual warming to be expected by 2100 is as follows:

3.0 Watts per square meter CO2 forcing

x 6/5 (approximately the reciprocal of 83%) to allow for non-CO2 anthropogenic forcings

x 0.31 Kelvin per Watt per square meter for the Planck parameter

x 1.88 for the system gain on the basis of the new, lower feedback sum.

The answer is not the IPCC’s 3.7 K (still less the 8 K it predicts by 2300). It is just 2.1 K. That is all.
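That four-line multiplication reduces to a one-line function; here is a minimal sketch (the function and its name are mine, for illustration only):

```python
def warming(co2_forcing, co2_fraction, lambda0=0.31, gain=1.0):
    """CO2 forcing x (1/CO2 fraction) x Planck parameter x system gain, in K."""
    return co2_forcing * (1.0 / co2_fraction) * lambda0 * gain

# Business-as-usual on the IPCC's own methods and data:
print(warming(3.0, 0.83, gain=1.88))   # ~2.1 K
```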

Even this is too high to be realistic. Here is my best estimate. There will be 600 ppmv CO2 in the air by 2100, giving a CO2 forcing of 2.2 Watts per square meter. CO2 will represent 90% of all anthropogenic influences. The feedback sum will be zero. So:

2.2 Watts per square meter CO2 forcing from now to 2100

x 10/9 to allow for non-CO2 anthropogenic forcings

x 0.31 for the Planck sensitivity parameter

x 1 for the system gain.

That gives my best estimate of expected anthropogenic global warming from now to 2100: three-quarters of a Celsius degree. The end of the world may be at hand, but if it is it won’t have anything much to do with our paltry influence on the climate.
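Re-using the warming() function from the earlier sketch with these lower inputs:

```python
# My best-estimate inputs: 2.2 W/m2 forcing, 90% CO2 fraction, unit gain.
print(warming(2.2, 0.90))   # 2.2 x (10/9) x 0.31 x 1 = ~0.76 K
```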

Your mission, gentle reader, should you choose to accept it, is to let me know in comments your own best estimate of global warming by 2100 compared with the present. The Lord Monckton Foundation will archive your predictions. Our descendants 85 years hence will be able to amuse themselves comparing them with what happened in the real world.


145 Comments
jim
June 9, 2014 3:20 am

Comparing the rate of temperature increase in the early 20th century with the rate of the late 20th century, one finds little difference. But there was a large difference in man’s CO2 emissions.
Therefore I predict zero degrees of warming from man’s CO2.
As to the final temperature, I predict cooling, in line with solar conditions. The amount of cooling is unknown, since it depends on what the sun does.

Popeye
June 9, 2014 3:25 am

I’ve got a really good feeling that it’s going to be way cooler by 2100.
Doesn’t really matter though – I won’t be around (but nor will any of the shysters either)
Or, maybe it will be a bit warmer – oh well – I’m sure there are going to be a whole lot of other scares around by then 🙂
Cheers,

thingadonta
June 9, 2014 3:40 am

Ignoring the sun – you did say the AGW effect by 2100.
Water vapour and associated cloud changes give negative feedback, as in the tropics (which is why they don’t get hotter than the deserts).
Best estimate: about 0.3-0.6 C (0.4 best guess) by 2100 from AGW alone, but offset by whatever the sun ends up doing. If the sun goes quiet it might even be cooler than now.

June 9, 2014 3:50 am

ppmv. The IPCC clearly states their ppm is molar. Then there is GtC (carbon), GtCO2 (3.67 times C), GtCO2eq (all of the GHGs in equivalent CO2 forcing), and Pt (petatonnes, E15) versus Gt (gigatonnes, E9). How ’bout tossing all this confusing unit-and-constituent nonsense overboard? CO2 is the only real topic, so CO2 is the only constituent of interest. Forget ppm; use GTonnes for the atmosphere’s CO2 amount, anthropogenic production, CO2 added, subtracted, etc.

June 9, 2014 3:57 am

Keep in mind also how the UK Met Office quietly revised its own climate forecasts downward at the beginning of 2013:
http://tallbloke.wordpress.com/2013/01/05/major-change-in-uk-met-office-global-warming-forecast/

Lawrie Ayres
June 9, 2014 3:58 am

Despite your skillful skewering of the IPCC, they are still being quoted by every believer as gospel. Locally, a council is still promoting a 0.9 metre sea level rise and by doing so preventing landowners from building in areas subject to such a rise. Current data support a sea level rise of less than 200 mm by 2100. The facts don’t matter to the faithful.
And Anthony: your incompetent President still wants to ban CO2 in power stations. Obviously his advisers are as dumb as their boss.

June 9, 2014 3:59 am

Since man-made CO2 has nothing at all to do with changes in the climate – or else so little as to be undetectable – I can only go by the ups and downs over the last 20,000 years and guess that the climate will continue to be about the same as now. I predict perhaps 1 degree cooler than 2014. However, since the government agencies are fudging the books via “adjustments” to keep their religion going, the climate may be one degree cooler in reality while at the same time the government data sets may be reporting that the climate is hotter than it has ever been on planet earth.
“All the leaders of groups tend to be frauds. If they were not, it would be impossible for them to retain the allegiance of their dupes…” ~ H. L. Mencken

Lawrie Ayres
June 9, 2014 4:01 am

Addendum: Obviously his advisers are as dumb and duplicitous as their boss.

June 9, 2014 4:02 am

Lord Monckton falls into the same trap as the IPCC, by completely ignoring The Scientific Method. It is perfectly legitimate to discuss hypothetical numbers, just so long as no-one takes any notice of them unless and until they have been confirmed by actual measurement. Since we cannot do controlled experiments on the earth’s atmosphere, it is currently impractical to measure climate sensitivity, however defined. So none of the numbers manipulated by your Lordship have any meaning in physics. They are all just meaningless guesses. “All sound and fury, signifying nothing” (Shakespeare).
When we look at what little measured data we have, we observe that no-one has measured a CO2 signal in any modern temperature/time graph. This gives a strong indication that the actual climate sensitivity for CO2 added to the atmosphere from recent levels is indistinguishable from zero.

June 9, 2014 4:07 am

The temperature forecasting record of the IPCC
I’m not sure if there is anything new in this post. I compare the temperature forecasts from the FAR, TAR, AR4 and AR5 with the HadCRUT4 temperature record. The best and most honest IPCC forecast is the FAR from 1990 and this shows clearly that the “Low” forecast is that which lies closest to reality. Thereafter, the IPCC approach has been to obfuscate and fudge data to try and create the image of pending climatic melt down.

Chris D.
June 9, 2014 4:08 am

“…by 2100”
I’m more concerned about the occasional asteroid.

Chuck L
June 9, 2014 4:12 am

I am not sure if this was taken into consideration, but as the oceans cool (-PDO, -AMO) their absorption of CO2 will increase, further reducing sensitivity.

June 9, 2014 4:12 am

(A) Based on the lack of any enhanced global-warming signal in either the oldest thermometer records (as a proxy for the global average) or in the global average itself, I predict a boring continuation of the same trend:
http://s6.postimg.org/uv8srv94h/id_AOo_E.gif
http://s16.postimg.org/54921k0at/image.jpg
The oceans, considered as a classic liquid-expansion thermometer, also show strict defiance of fossil-fuel emissions, in a way that falsifies claims of a sudden switch to deep-ocean heating; so no anthropogenic heat is being stored for later release after all:
http://postimg.org/image/uszt3eei5/
(B) However, a caveat is necessary: after the many such warming spikes found in the main Greenland ice core going back thousands of years, the current spike may collapse at any time in a fundamentally unpredictable manner, likely owing to the chaotic fluid-dynamic nature of ocean currents and the associated thermodynamic release versus uptake of heat:
http://s6.postimg.org/zatdndwq9/image.jpg
(C) Since peer review in contemporary climate “science” is now proven to be fully corrupt, no result can be trusted, not even the basic background plots that all studies rely upon, such as the global average temperature. The proof appeared in the latest 2013 “super hockey stick”, and this peer-review corruption alone makes all support for an enhanced, rather than moderated, greenhouse effect moot:
http://s6.postimg.org/jb6qe15rl/Marcott_2013_Eye_Candy.jpg

June 9, 2014 4:20 am

This year another paper appeared to support Spencer’s negative feedback measurement:
“Tiny warming of residual anthropogenic CO2”, which used satellite measurements of warming and of actual greenhouse radiative changes to estimate an additional warming by 2100 of 0.1 °C.
http://www.worldscientific.com/doi/abs/10.1142/S0217979214500957

Mike Flynn
June 9, 2014 4:30 am

According to real scientists such as geophysicists, the Earth is cooling, albeit slowly. If we are talking about the so called surface temperatures, it will still be impossible to arrive at any useful figure.
My estimate of global warming is minus three to three freckles – the freckle being an arbitrary Warmist measure, so that whatever happens I will be proven correct. Do not make the mistake of confusing the climatological freckle with the ordinary or common freckle, or the climatological surface with the actual surface.
Live well and prosper,
Mike Flynn.

Dr Burns
June 9, 2014 4:31 am

Over here I always feel cooler when a cloud passes in front of the sun. Perhaps it’s different elsewhere. How are clouds supposed to cause warming? Does the night effect of clouds swamp the daylight effect?

Sasha
June 9, 2014 4:31 am

Jim Cripwell says:
“…no-one has measured a CO2 signal in any modern temperature /time graph.”
Not true. You can find it on this site here
http://wattsupwiththat.com/2008/12/17/the-co2-temperature-link/
There is more in a PDF here
http://icecap.us/images/uploads/CO2,Temperaturesandiceages-f.pdf
I draw your attention to Page 4:
How come a CO2 level of 253 ppm in the B-situation does not lead to a rise in temperatures, even from very low levels, when 253 ppm in the A-situation manages to raise temperatures very fast even from a much higher level?
One thing is for sure: “Other factors than CO2 easily overrules any forcing from CO2. Only this way can the B-situations with high CO2 lead to falling temperatures.”
This is essential, because, the whole idea of placing CO2 in a central role for driving temperatures was: “We cannot explain the big changes in temperature with anything else than CO2”.
But the simple fact is: “No matter what rules temperature, CO2 is easily overruled by other effects, and this CO2-argument falls.”
So we are left with graphs showing that CO2 follows temperatures, and no arguments that CO2 even so could be the main driver of temperatures.

commieBob
June 9, 2014 4:36 am

I have no clue what the climate will be in 2100. The calculations based on CO2 should have the rider “ceteris paribus” (all other things being equal). Perhaps by then we will have the sophistication to actually be able to accurately tell what the CO2 contribution is. At this point though, the best we can achieve is little more than a lucky guess because, of course, all other things won’t be equal.

June 9, 2014 4:47 am

The science is settled – has been for three and a half years now – but not by any climate scientist (nor by any lukewarm sceptic still clinging to consensus theories that change a real, stable atmosphere into one balanced on the razor edge of made-up “forcings”):
CO2 Climate Sensitivity Vs. Reality

bobl
June 9, 2014 4:50 am

M, this is Q….
I do it this way,
There has been an all-cause warming for the radiative gases of 10 degrees, of the theoretical blackbody 33 degrees total. The CO2 absorption band is 85% opaque, so that gives an average warming rate of 10/85 = 0.117 degrees per percent of energy absorbed by CO2. Doubling CO2 would retain half the remaining 15% of energy in the absorption band, leading to a warming of 7.5 x 0.117, or about 0.88 degrees, for the next doubling. I don’t agree that the thickening in the tails of the CO2 stopband will be significant where the mass of the atmosphere is not being added to by CO2, since we are merely replacing O2 with CO2 and H2O; the burning of hydrocarbons in theory subtracts from the mass of the atmosphere, making it thinner and colder as some O2 is converted to H2O.
My estimate is 0.88 degrees at the absolute outside maximum, with no more than 1.7 degrees before energy saturation (the stop band being opaque) prevents further warming. CO2 won’t follow the log law at such large values of energy extraction; it will reach a limit. It has to, because the atmosphere is NOT being expanded, just substituted.

June 9, 2014 4:58 am

I predict that CO2 levels will stabilize within several years and drop sometime thereafter, the timing dependent upon two major factors: electric-car market penetration and nuclear plant construction.

June 9, 2014 5:00 am

Jim Cripwell asserted: “Since we cannot do controlled experiments on the earth’s atmosphere, it is impractical, currently, to actually measure climate sensitivity, however defined. So none of the numbers manipulated by your Lordship have any meaning in physics.”
But we *have* run a very simple single variable experiment on earth’s atmosphere, and theory works just fine in science so far, which is what science is about above and beyond pure empiricism and rules of thumb. When you release Christopher from his trap, where are you suggesting he escapes to if not the usual Science 101 world of physical theory coupled to measurements on the ground?

aaron
June 9, 2014 5:03 am

About 0.2-0.4 C warmer.

rgbatduke
June 9, 2014 5:04 am

The only problem I have with this sort of reasoning is that it presumes a linearization of a highly nonlinear process. It’s a mistake when “warmist” climate scientists do it. It is a mistake when “denier” climate scientists do it. One cannot reduce the climate to block diagrams of linearly projected average energy gain or loss, because climate variation is not linear in the atmospheric concentration of GHGs. If it were, the climate would have been nearly flat before 1950, and even though Michael Mann tried hard to make it so, a sober reconsideration of all of the data has fortunately restored the (more likely to be correct, although with large error bars) previous picture of a climate with large, named climate variations over even the last 3000 years, let alone over geological time. The Earth’s atmospheric chemistry did not change significantly over that stretch, yet the climate did.
To put it bluntly, we do not know the likely trajectory of the “feedbacks”. We do not know (and it is frankly doubtful) that the feedbacks are a linear multiplier of the forcing. We do not know how to assess the effect of random or uncontrollable events – variations in vulcanism, the Gulf Oil disaster, breaks in methane pipelines, forest fires. We cannot even predict the consequences of the gradual conversion from coal to methane-based energy and the increasing use of “fracking” to access methane. We do not know what will happen to the climate as solar cycle 24 starts to end its comparatively paltry peak activity over the next six to twelve months and begins a slow descent to solar minimum, although if one examines the neutron-count graphs on the WUWT solar activity page, it seems likely that it will be attended by a substantial neutron activity peak, quite likely the highest such peak in the record, and a low TOA insolation, quite likely the lowest mean insolation on the record.
We do not know how to deal with highly nonlinear phenomena such as ENSO – we cannot predict ENSO, we are barely starting to understand its effect on the climate, and ENSO is a drop in the ocean of the ill-understood effect of the global ocean on the climate. If Tisdale and (even) mainstream climate scientists like Trenberth are right that ENSO is a solar charge/discharge cycle in the visible part of the spectrum (where CO_2 is literally irrelevant), if all of the major warming events of the latter half of the 20th century (and possibly before) are attendant on ENSO, and if soot and other albedo-destroying pollution is (as it appears to be) the major factor in Arctic ice melt, then simply cleaning up smokestacks without altering CO_2 emissions at all could (a) improve quality of life in places like China and India, where particulate emissions are currently out of control, and (b) put a sudden stop to the summer melt problem in the Arctic, which may well owe more to Chinese soot than to carbon dioxide.
I know that block diagrams are fun and easy to build. Energy comes in, some goes through, some goes up, some goes back down, it bounces around like a pinball in a pinball machine before draining to space, and, as is the case with a pinball table, it is a capital mistake to presume that just because you’ve watched one or two balls bounce through in a certain way that this means all future balls will do the same thing, or that if you get two balls in play at the same time you will get twice as many points. Pinball, and the climate, are chaotic dynamical systems. They just don’t work like that. Not even nonlinear climate models work to predict the climate. So why should we expect linearized block diagram flow diagrams to do better?
By 2100 it could be warmer than it is at the present. It could be colder. That’s the thing about the future — full of possibilities, it is. And we simply do not understand the climate system well enough to assert more than a “blue-sky” prediction of that future — literally.
To put it bluntly, does anybody here think that they can solve two coupled sets of Navier-Stokes equations on a spinning, tipped, globe in an eccentric orbit around a variable star against a background of varying atmospheric chemistry and unpredictable volcanic events in their heads?
I didn’t think so.
We can’t do this at the currently attainable scale with anything vaguely approaching predictive accuracy using the world’s largest computer clusters, and probably will not be able to do so. Possibly ever. If one has to go down to the Kolmogorov scale to get predictive results, never — even Moore’s Law won’t do it, not with the practical limitations of physics. If one has to go down to (say) a horizontal kilometer scale, we might make it by 2050, might not, and there is still the problem of what to do about volcanoes and other random perturbations. And in the meantime, there is still considerable doubt about the physics that goes into those models, especially given that it is input as a kind of “mean field approximation” coarse-grain averaged over their 100+ kilometer square cells.
rgb

James Strom
June 9, 2014 5:38 am

Jim Cripwell says:
June 9, 2014 at 4:02 am
Lord Monckton falls into the same trap as the IPCC,
I believe you could add that he falls into this trap deliberately. That is, assuming the IPCC’s methods and its empirical results, one gets a very modest number for sensitivity. The method is hypothetical, and Monckton doesn’t have to endorse the IPCC’s science to point out this conclusion.
