Sensitivity? Schmensitivity!

Even on business as usual, there will be less than 1 K of warming this century

By Christopher Monckton of Brenchley

Curiouser and curiouser. As one delves into the leaden, multi-thousand-page text of the IPCC’s 2013 Fifth Assessment Report, which reads like a conversation between modelers about the merits of their models rather than a serious climate assessment, it is evident that they have lost the thread of the calculation. There are some revealing inconsistencies. Let us expose a few of them.

The IPCC has slashed its central near-term prediction of global warming from 0.28 K/decade in 1990, via 0.23 K/decade in the first draft of IPCC (2013), to 0.17 K/decade in the published draft. The biggest surprise to honest climate researchers reading the report, then, is that the long-term or equilibrium climate sensitivity has not been slashed as well.

In 1990, the IPCC said equilibrium climate sensitivity would be 3 [1.5, 4.5] K. In 2007, its estimate was 3.3 [2.0, 4.5] K. In 2013 it reverted to the 1990 interval [1.5, 4.5] K per CO2 doubling. However, in a curt, one-line footnote, it abandoned any attempt to provide a central estimate of climate sensitivity – the key quantity in the entire debate about the climate. The footnote says the models cannot agree.

Frankly, I was suspicious about what that footnote might be hiding. So, since my feet are not yet fit to walk on, I have spent a quiet weekend doing some research. The results were spectacular.

Climate sensitivity is the product of three quantities:

• The CO2 radiative forcing, generally taken to be 5.35 times the natural logarithm of the proportionate change in concentration, which for a doubling gives 5.35 ln 2 ≈ 3.71 Watts per square meter;

• The Planck or instantaneous or zero-feedback sensitivity parameter, usually taken as 0.31 Kelvin per Watt per square meter; and

• The system gain or overall feedback multiplier, which allows for the effect of temperature feedbacks. The system gain is 1 where there are no feedbacks or the feedbacks sum to zero.

In the 2007 Fourth Assessment Report, the implicit system gain was 2.81. The direct warming from a CO2 doubling is 3.71 times 0.31, or rather less than 1.2 K. Multiply this zero-feedback warming by the system gain and the harmless 1.2 K direct CO2-driven warming becomes a more thrilling (but still probably harmless) 3.3 K.
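
For readers who wish to check the arithmetic, here is a minimal sketch in Python (my own illustration of the figures quoted above, not code from the IPCC or from this article) that multiplies the three quantities together.

```python
import math

# Figures quoted in the text above, treated here as assumptions for illustration
FORCING_COEFF = 5.35       # W/m^2 per natural-log unit of concentration ratio
PLANCK_PARAM = 0.31        # K per W/m^2, zero-feedback sensitivity parameter
SYSTEM_GAIN_2007 = 2.81    # implicit system gain in the 2007 report

forcing_2xco2 = FORCING_COEFF * math.log(2.0)          # ~3.71 W/m^2
direct_warming = forcing_2xco2 * PLANCK_PARAM          # a little under 1.2 K
equilibrium_2007 = direct_warming * SYSTEM_GAIN_2007   # ~3.2-3.3 K

print(f"2xCO2 forcing:         {forcing_2xco2:.2f} W/m^2")
print(f"Zero-feedback warming: {direct_warming:.2f} K")
print(f"With 2007 gain:        {equilibrium_2007:.2f} K")
# Note: using the unrounded Planck parameter 1/3.2 instead of 0.31 brings the
# last figure closer to the 3.3 K quoted above.
```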

That was then. However, on rootling through chapter 9, which is yet another meaningless expatiation on how well the useless models are working, there lies buried an interesting graph that quietly revises the feedback sum sharply downward.

In 2007, the feedback sum implicit in the IPCC’s central estimate of climate sensitivity was 2.06 Watts per square meter per Kelvin, close enough to the implicit sum f = 1.91 W m⁻² K⁻¹ (water vapor +1.8, lapse rate –0.84, surface albedo +0.26, cloud +0.69) given in Soden & Held (2006), and shown as a blue dot in the “TOTAL” column in the IPCC’s 2013 feedback graph (fig. 1):

[Figure 1]

Figure 1. Estimates of the principal positive (above the line) and negative (below it) temperature feedbacks. The total feedback sum, which excludes the Planck “feedback”, has been cut from 2 to 1.5 Watts per square meter per Kelvin since 2007.

Note in passing that the IPCC wrongly characterizes the Planck or zero-feedback climate-sensitivity parameter as itself being a feedback, when it is in truth part of the reference-frame within which the climate lives and moves and has its being. It is thus better and more clearly expressed as 0.31 Kelvin of warming per Watt per square meter of direct forcing than as a negative “feedback” of –3.2 Watts per square meter per Kelvin.

At least the IPCC has had the sense not to attempt to add the Planck “feedback” to the real feedbacks in the graph, which shows the 2013 central estimate of each feedback in red flanked by multi-colored outliers and, alongside it, the 2007 central estimate shown in blue.

Look at the TOTAL column on the right. The IPCC’s old feedback sum was 1.91 Watts per square meter per Kelvin (in practice, the value used in the CMIP3 model ensemble was 2.06). In 2013, however, the value of the feedback sum fell to 1.5 Watts per square meter per Kelvin.

That fall in value has a disproportionately large effect on final climate sensitivity, for the equation by which individual feedbacks are mutually amplified to give the system gain G is as follows:

G = 1 / (1 − g) = 1 / (1 − λ₀ f)      (1)

where g, the closed-loop gain, is the product of the Planck sensitivity parameter λ₀ = 0.31 Kelvin per Watt per square meter (more precisely, the reciprocal of the 3.2 Watts per square meter per Kelvin mentioned above) and the feedback sum f = 1.5 Watts per square meter per Kelvin. The unitless overall system gain G was thus 2.81 in 2007 but is just 1.88 now.

And just look what effect that reduction in the temperature feedbacks has on final climate sensitivity. With f = 2.06 and consequently G = 2.81, as in 2007, equilibrium sensitivity after all feedbacks have acted was then thought to be 3.26 K. Now, however, it is just 2.2 K. As reality begins to dawn even in the halls of Marxist academe, the reduction of one-quarter in the feedback sum has dropped equilibrium climate sensitivity by fully one-third.
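
Purely as an illustration of equation (1) and of the figures above, here is a short Python sketch. I take the Planck parameter as the reciprocal of 3.2 (≈ 0.3125) Kelvin per Watt per square meter, which is what the quoted gains of 2.81 and 1.88 imply, so small rounding differences against the 3.26 K and 2.2 K figures are to be expected.

```python
import math

def system_gain(f, lambda0=1/3.2):
    """Equation (1): G = 1 / (1 - lambda0 * f)."""
    return 1.0 / (1.0 - lambda0 * f)

def equilibrium_sensitivity(f, lambda0=1/3.2):
    """Equilibrium warming per CO2 doubling: forcing x lambda0 x G."""
    forcing_2xco2 = 5.35 * math.log(2.0)   # ~3.71 W/m^2
    return forcing_2xco2 * lambda0 * system_gain(f, lambda0)

# 2007 and 2013 feedback sums quoted in the text
for label, f in (("2007 (f = 2.06)", 2.06), ("2013 (f = 1.5)", 1.5)):
    print(f"{label}: G = {system_gain(f):.2f}, ECS = {equilibrium_sensitivity(f):.2f} K")
```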

Now we can discern why that curious footnote dismissed the notion of determining a central estimate of climate sensitivity. For the new central estimate, if they had dared to admit it, would have been just 2.2 K per CO2 doubling. No ifs, no buts. All the other values that are used to determine climate sensitivity remain unaltered, so there is no wriggle-room for the usual suspects.

One should point out in passing that equation (1), the Bode equation, applies in general to dynamical systems in which nothing physically prevents the loop gain from exceeding unity, and in which the system response accordingly switches from amplification to attenuation or reversal at loop-gain values g > 1. The climate, however, is obviously not that kind of dynamical system: the loop gain could exceed unity, but there is no physical reality corresponding to the equation’s requirement that feedbacks which had been amplifying the system response would suddenly diminish it as soon as the loop gain exceeded 1. The Bode equation, then, is the wrong equation. For this and other reasons, temperature feedbacks in the climate system are very likely to sum to net-zero.

The cut the IPCC has now made in the feedback sum is attributable chiefly to Roy Spencer’s dazzling paper of 2011 showing the cloud feedback to be negative, not strongly positive as the IPCC had previously imagined.

But, as they say on the shopping channels, “There’s More!!!” The IPCC, to try to keep the funds flowing, has invented what it calls “Representative Concentration Pathway 8.5” as its business-as-usual case.

On that pathway (one is not allowed to call it a “scenario”, apparently), the prediction is that CO2 concentration will rise from 400 to 936 ppmv; that, including projected increases in CH4 and N2O concentration, this becomes 1313 ppmv CO2 equivalent; and that the resultant anthropogenic forcing of 7.3 Watts per square meter, combined with an implicit transient climate-sensitivity parameter of 0.5 Kelvin per Watt per square meter, will warm the world 3.7 K by 2100 (a mean rate equivalent to 0.44 K per decade, more than twice as fast on average as the maximum supra-decadal rate of 0.2 K/decade in the instrumental record to date) and a swingeing 8 K by 2300 (fig. 2). Can They not see the howling implausibility of these absurdly fanciful predictions?
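
A rough check of the RCP8.5 arithmetic quoted above (my sketch; the start year and the rounding are assumptions, not the IPCC’s):

```python
# Figures quoted above for RCP8.5, treated as assumptions for this check
forcing_2100 = 7.3        # W/m^2, projected anthropogenic forcing by 2100
transient_param = 0.5     # K per W/m^2, implicit transient sensitivity parameter

warming_2100 = forcing_2100 * transient_param   # ~3.7 K
decades = (2100 - 2015) / 10.0                  # ~8.5 decades, assuming "now" is 2015
rate = warming_2100 / decades                   # ~0.43-0.44 K/decade, depending on rounding

print(f"Warming by 2100: {warming_2100:.2f} K")
print(f"Mean rate:       {rate:.2f} K/decade")
```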

Let us examine the IPCC’s “funding-as-usual” case in a little more detail.

[Figure 2]

Figure 2. Projected global warming to 2300 on four “pathways”. The business-as-usual “pathway” is shown in red. Source: IPCC (2013), fig. 12.5.

First, the CO2 forcing. From 400 ppmv today to 936 ppmv in 2100 is frankly implausible even if the world, as it should, abandons all CO2 targets altogether. There has been very little growth in the annual rate of CO2 increase: it is little more than 2 ppmv a year at present. Even if we supposed this would rise linearly to 4 ppmv a year by 2100, there would be only 655 ppmv CO2 in the air by then. So let us generously call it 700 ppmv. That gives us our CO2 radiative forcing by the IPCC’s own method: it is 5.35 ln(700/400) = 3 Watts per square meter.
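
A minimal sketch of that concentration-and-forcing arithmetic, assuming the linear growth in the annual CO2 increment described above:

```python
import math

# Concentration arithmetic from the paragraph above (assumed growth rates as stated)
start_ppmv = 400.0
years = 2100 - 2015                       # ~85 years
rate_now, rate_2100 = 2.0, 4.0            # ppmv per year, rising linearly

added = 0.5 * (rate_now + rate_2100) * years   # average 3 ppmv/yr -> ~255 ppmv
co2_2100 = start_ppmv + added                  # ~655 ppmv

# Rounded generously to 700 ppmv, the forcing by the IPCC's own formula:
forcing = 5.35 * math.log(700.0 / start_ppmv)  # ~3.0 W/m^2

print(f"CO2 by 2100 on a linear-growth assumption: {co2_2100:.0f} ppmv")
print(f"Forcing at 700 ppmv: {forcing:.2f} W/m^2")
```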

We also need to allow for the non-CO2 greenhouse gases. For a decade, the IPCC has been trying to pretend that CO2 accounts for as little as 70% of total anthropogenic warming. However, it admits in its 2013 report that the true current fraction is 83%. One reason for this large discrepancy is that the rate of increase in methane concentration slowed dramatically in around the year 2000, once Gazputin had repaired the methane pipeline from Siberia to Europe (fig. 3). So we shall use 83%, rather than 70%, as the CO2 fraction.

[Figure 3]

Figure 3. Observed methane concentration (black) compared with projections from the first four IPCC Assessment Reports. This graph, which appeared in the pre-final draft, was removed from the final draft lest it give ammunition to skeptics (as Germany and Hungary put it). Its removal, of course, gave ammunition to skeptics.

Now we can put together a business-as-usual warming case that is a realistic reflection of the IPCC’s own methods and data but without the naughty bits. The business-as-usual warming to be expected by 2100 is as follows:

3.0 Watts per square meter CO2 forcing

x 6/5 (the reciprocal of 83%) to allow for non-CO2 anthropogenic forcings

x 0.31 Kelvin per Watt per square meter for the Planck parameter

x 1.88 for the system gain on the basis of the new, lower feedback sum.

The answer is not 8 K. It is just 2.1 K. That is all.
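
For the avoidance of doubt, here is the same business-as-usual chain of factors multiplied through in a short Python sketch (an illustration of the figures listed above, nothing more):

```python
# The business-as-usual chain of factors listed above, multiplied through
co2_forcing = 3.0          # W/m^2, from ~700 ppmv CO2
non_co2_scale = 6.0 / 5.0  # approximate reciprocal of the 83% CO2 fraction
planck_param = 0.31        # K per W/m^2
system_gain = 1.88         # from the new, lower feedback sum

warming_2100 = co2_forcing * non_co2_scale * planck_param * system_gain
print(f"Business-as-usual warming to 2100: {warming_2100:.1f} K")  # ~2.1 K
```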

Even this is too high to be realistic. Here is my best estimate. There will be 600 ppmv CO2 in the air by 2100, giving a CO2 forcing of 2.2 Watts per square meter. CO2 will represent 90% of all anthropogenic influences. The feedback sum will be zero. So:

2.2 Watts per square meter CO2 forcing from now to 2100

x 10/9 to allow for non-CO2 anthropogenic forcings

x 0.31 for the Planck sensitivity parameter

x 1 for the system gain.

That gives my best estimate of expected anthropogenic global warming from now to 2100: three-quarters of a Celsius degree. The end of the world may be at hand, but if it is it won’t have anything much to do with our paltry influence on the climate.
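
And the corresponding sketch for the best-estimate chain just listed (again, merely an illustration of the stated assumptions):

```python
import math

# The author's "best estimate" chain listed above, multiplied through
co2_forcing = 5.35 * math.log(600.0 / 400.0)   # ~2.2 W/m^2 for 600 ppmv by 2100
non_co2_scale = 10.0 / 9.0                     # CO2 taken as 90% of anthropogenic influence
planck_param = 0.31                            # K per W/m^2
system_gain = 1.0                              # feedback sum assumed to be zero

warming_2100 = co2_forcing * non_co2_scale * planck_param * system_gain
print(f"Best-estimate warming to 2100: {warming_2100:.2f} K")  # ~0.75 K
```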

Your mission, gentle reader, should you choose to accept it, is to let me know in comments your own best estimate of global warming by 2100 compared with the present. The Lord Monckton Foundation will archive your predictions. Our descendants 85 years hence will be able to amuse themselves comparing them with what happened in the real world.



145 Comments
June 9, 2014 9:05 am

“funding-as-usual” Hahahahahahahaha! Totally nails it! 😀

June 9, 2014 9:06 am

Lord Monckton, my comment about not being sure about anything new was directed at my own post on the IPCC forecasting record 😉 My own view is that the upper bound for climate sensitivity is in the region of 1.5 K (since we now use the absolute scale) and the research focus of the last 20 years should have been on determining how much of late 20th century warming was natural instead of pretending that none of it was.

June 9, 2014 9:10 am

“The only problem I have with this sort of reasoning is that it presumes a linearization of a highly nonlinear process. It’s a mistake when “warmist” climate scientists do it. It is a mistake when “denier” climate scientists do it. One cannot reduce the climate to block diagrams of linearly projected average energy gain or loss, because climate variation is not linear in the atmospheric concentration of GHGs. ”
Of course one CAN.
one can reduce the climate to block diagrams. just do it.
this comment reminds me of the engineers who would argue with me that one could not reduce
future wars to block diagrams of linearly projected average troop gains and losses.
The response was simple. Of COURSE one CAN. look at the white board. I just Did it.
The question is.. HOW USEFUL is that reduction of the system? and is there a better reduction?
Now in projecting futures wars, future battles, we have precious little historical data to work on.
And the physics are mostly by analogy. Here is an simple example
http://en.wikipedia.org/wiki/Lanchester's_laws
In any case when faced with an intractable problem we have these choices
A) throw up your hands and say.. it's too hard
B) Make assumptions and draw conclusions based on the information you have.
Any way, when the policy deciders ( say in country X ) would come to my team (circa 1985) and ask
What forces do we need to defend our country in 2000?
That discussion would start with a list of assumptions.
1. I assume China will still be the threat. I have no lab experiments that confirm this
2. I assume that China will continue to improve its technology. Here is the technology
growth curve I use. It assumes they advance at the same rate as the US.
3. I assume they build a new plane. Here is the performance of that plane. It's not
built yet. It will look something like this.
4. I assume they will build this many planes and base them here here and over here.
5. I assume the US will still be your ally. I assume our forces will be here, here and here
and I assume we will already be engaged in a war with russia. This means we will
not be able to respond for the following number of days.
Lots of assumptions. Lots of things you can't do.
The problem is then reduced to How long can you last before we show up under the given scenario?
Then comes the question of how I figure out how long they can last.
Here are the approaches I used
A) “rules of thumb” Here are some rules of thumb..
B) simple modeling. We reduce the problem to some block diagrams,
C) complex war gaming. We ran some war gaming exercises. this is what we found.
D) warfare simulation. We simulated both sides. We looked at optimal strategies
E) technology improvements. We made a bunch of guesses about technology. here
are our guesses.
Of course during the course of these discussions somebody would always say
“You can't do that!” You can't assume that. you can't prove that!
Well, of course one can do all the things we did. we just did them.
of course one can assume the things we assumed. we just assumed them.
And of course we could prove it. All proof depends upon assumption.
These guys were silenced by the following question: Ok, you tell me the right assumptions,
you tell me the better methods, you do the work.
Planning for future wars against the soviets took the same course of action.
Assumption upon assumption and models all the way down. Simple, complex,
rules of thumb.
The question was what's the best course of action. Saying “I don't know” is not an option.
Saying “I'm the critic” didn't work.
Saying “the chinese might not be a threat” is not an option.
Saying “the soviets will disintegrate” was never a consideration. The job was to work from what
we knew, using the tools at hand, making the assumptions we needed to make, to provide the
Best informed Opinion we could.
Climate science is not too different. It's different in degree, not in kind.
Is that science “good enough” to decide policy? Good question. Here is the deal.
If you don't have a better approach, if all you do is criticize, in the end you will lose the
debate. Obama has a pen and a phone. you don't.

rgbatduke
June 9, 2014 9:15 am

Rarely, Professor Brown errs. I fear he may have done so here. For the feedback sum may be attained either by linear or by non-linear processes (see e.g. a remarkably clear and detailed pedagogical paper by Dick Lindzen’s pupil Gerard Roe in 1999). So there is nothing inherent in the IPCC’s treatment of feedbacks that precludes non-linearity in the system response of which the feedbacks are at once an effect and a cause.

The point I was making is that the decomposition that they are attempting to make is impossible to extract from the nonlinear models they are solving. It’s the same separability problem that keeps them from being able to make any claim for the natural vs anthropogenic fractions of their results, or from correctly balancing aerosol or CO_2-linked contributions, or from being able to prove (or even provide positive evidence) that they are treating clouds and water vapor correctly.
One day I’ll have to post a graph or two of actual chaotic processes so that people can understand the problem. A teensy change in anything in a numerical solution can kick the system from a trajectory around one attractor to a completely distinct trajectory around a completely distinct attractor, with completely distinct “average” properties and completely distinct feedbacks. In a problem with a few well separated attractors, one cannot even compute an “average feedback” that has any meaning whatsoever — each fixed point has distinct local dynamics. And this is still for simple problems, problems that do indeed have only a few stable attractors, toy problems.
In the case of climate, it is like saying that the dynamical feedbacks that are important during a glacial era are the same as those that are important during an interglacial, or (since climate probably has a fractal decomposition/distribution of these attractors, in an absurdly high dimensional space) that the feedbacks of the LIA are the same as those of the Dalton minimum are the same as those of the first half of the 20th century are the same as those of the stretch from 1983 to 1998 are the same as those today. The phrase “the same as” has no meaning whatsoever in this sort of context — they aren’t the same as, period, and we don’t even know how to break them down and compare them in terms of the grossest of features because for nearly all of that record we have no useful measurements. Our proxy-derived knowledge of the LIA is next to useless in the present — we literally have no idea why it occurred, and so we cannot assign any reasonable probability to it recurring in the next decade or over the next century. We don’t know why it warmed rapidly (comparatively speaking) from 1983 to 1998. We don’t know why it stopped warming from (somewhere in there) to the present. That’s the entire point of Box 9.2 of AR5. We don’t know. They openly acknowledge not knowing, although they don’t openly acknowledge that the correct explanation is as likely not to be in their list of possibilities as on it, since there is no data to support ANY known possibility and NO GCM predicts a lack of warming (while paradoxically, ALL GCMs predict a lack of warming — if you tweak the butterfly’s wings a bit).
So all I was trying to do is point out that you (following the IPCC in the SPM, but not necessarily everywhere in AR5) are doing the moral equivalent of trying to claim that Schrodinger’s Cat is 50% dead after a certain amount of time in the box. No, it’s not. It’s either 100% dead or 100% alive — we just don’t know which not because the cat is or isn’t dead, but because we cannot measure the subtle changes in the Universe’s state that correspond to the instant the cat dies — or doesn’t — outside of the box, which is not adiabatically disconnected from the rest of the Universe. If you reason on this basis, you (and the IPCC) are all too likely to utter absurdities.
That’s again the rub. If the IPCC openly presented the full set of Perturbed Parameter Ensemble results from each model contributing to CMIP5 (as they did, for a few models, in the earlier draft and which might still be tucked away in the final draft — I haven’t looked) people would be shocked to see that, with exactly the same initial conditions and parameter perturbations well within our uncertainty, even models that predict a fair bit of warming on average predict actual extended cooling for a rather large fraction of runs.
The PPE average is not what the model is predicting. What it is doing is predicting the probability distribution of possible futures, but only one of these futures will be realized, just as the cat is either alive or dead, not an average of the two. Or more likely still, none of these futures, because the models are absurdly oversimplified and don’t even get the coarse grain averaged physics right for the hydrodynamics problem for the short term, let alone the medium term. The models don’t even generate solutions that satisfy e.g. mass conservation laws — they have to constantly be projected back to restore it. And what will you bet that the distribution of outcomes depends on how and when the projection is done, in a chaotic nonlinear PDE solution?
You will very likely never see the model results presented in this way, in part because one could then see how often they produce results that are as cool as the Earth has actually been, how accurately their pattern of fluctuations (which is directly sensitive to the feedbacks we are discussing via the Fluctuation-Dissipation Theorem) corresponds to the observed pattern of global temperature fluctuations on similar time scales, and so on. And if we saw that, we would simply conclude, one at a time, that the models were failing, for nearly all of the models contributing to the meaningless average.
rgb

rgbatduke
June 9, 2014 9:29 am

The question is.. HOW USEFUL is that reduction of the system? and is there a better reduction?

I humbly stand corrected. Pointlessly corrected, but corrected.
Or perhaps not so pointlessly. Utility depends on purpose. If your purpose is to predict the future, this reduction is pointless because (as I just pointed out) the cat isn’t half dead, it is all dead or all alive, half the time, and only one future is actually realized and the only way to see if your theory is correct is to compare its predictions to reality. Ay, mate, that’s indeed the rub.
If your purpose is to generate maximum alarm and divert the creative energies of an entire civilization to some personally desired end, the reduction is very useful indeed. It was a popular technique back in the 1930s, for example — create unverifiable generalizations also known as “big lies”, present them as fact, and generate support for what might well otherwise be an untenably immoral position.
If I asked you how ACCURATE the reduction above or presented in AR5’s SPM is, you would — if you were honest — have to respond with “as far as we can tell, not very accurate” in precisely the places the IPCC is asserting “high confidence”, or “medium confidence”. Their assertions of “confidence” are literally indefensible. If you think otherwise, I would cheerfully debate you on this issue — textbooks on statistics at twenty paces, may the best argument win.
rgb

Monckton of Brenchley
June 9, 2014 10:01 am

Professor Brown is, of course, right that the climate is unpredictable because it is a complex, non-linear, chaotic object (IPCC, 2001, para. 14.2.2.2; Lorenz, 1963; Lighthill, 1998; Giorgi, 2005). However, the game I play is to use the IPCC’s admittedly dopey methods, and to say to them that if it is by these methods that they determine climate sensitivity it follows that the central estimate of climate sensitivity to which they should adhere is 2.2 K per CO2 doubling.
One can tell them till one is blue in the face that the models are useless because the climate object is chaotic, but they will respond by sneering that they can predict the summer will be warmer than the winter (that dumb response was once given to me by the head of research at the University of East Anglia).
But if instead one says, “Right, you say the CO2 forcing is thus and thus, and the Planck parameter is so and so, and you allow for feedbacks and consequent non-linearities in the system response by saying that at present the feedback sum is this and that. Fair enough: in that event your conclusion should be that this century’s funding-as-usual warming should be more like 2 K than 8 K” – which was the conclusion of the head posting.
In truth, no prediction will be reliable, but I’d be prepared to bet quite a large sum, for the sake of my heirs, that this century’s global warming will indeed be closer to 2 K than to 8 K, even if there be no more nonsense about curbing “carbon emissions”.

catweazle666
June 9, 2014 10:08 am

Here is the sensitivity estimate taken from the paper:
Schneider S. & Rasool S., “Atmospheric Carbon Dioxide and Aerosols – Effects of Large Increases on Global Climate”, Science, vol.173, 9 July 1971, p.138-141
We report here on the first results of a calculation in which separate estimates were made of the effects on global temperature of large increases in the amount of CO2 and dust in the atmosphere. It is found that even an increase by a factor of 8 in the amount of CO2, which is highly unlikely in the next several thousand years, will produce an increase in the surface temperature of less than 2 deg. K.

Cheshirered
June 9, 2014 10:22 am

Lord Monckton, have you considered taking the services of a personal protection specialist? The way you’re dismantling the AGW scare The Team may be plotting a number on you. Next time you venture out, best check under your car…. 😉

June 9, 2014 10:31 am

Monckton and RGB: You are both saying that the IPCC models are useless for forecasting. Surely it is time to quit talking about models at all and make predictions using another approach. For forecasts of the possible coming cooling based on the 60 and 1000 year quasi-periodicities in the temperature data, and using the neutron count and 10Be as the most useful proxy for solar activity, see
http://climatesense-norpag.blogspot.com/2013/10/commonsense-climate-science-and.html

Beta Blocker
June 9, 2014 10:31 am

Monckton of Brenchley asks that WUWT readers submit their best estimate of global warming by 2100 compared with the present.
Eighty-five years from now, his foundation will review those estimates to see who was right and who was wrong about what actually transpired in Global Mean Temperature by the end of the century.
An approach you might find useful in making your estimate would be to use:
Beta Blocker’s CET Pattern Picker:
Here’s how it works:
1: Using the top half of the Beta Blocker form, study the pattern of trends in Central England Temperature (CET) between 1659 and 2007.
2: Using CET trends as proxies for GMT trends, make your best guess as to where you think GMT will go between 2007 and 2100.
3: Linearize your predicted series of rising/falling trend patterns into a single 2007-2100 trend line.
4: Using the bottom half of the Beta Blocker form, summarize the reasoning behind your guess.
5: Add additional pages containing more detailed reasoning and analysis, as little or as much as you see necessary.
6: Give your completed form and your supplementary documentation to your friends for peer review.
7: If your friends like your prediction, submit your analysis to your favorite climate science journal.
8: If your friends don’t like your prediction:
— Challenge them to write their own peer-reviewed climate science paper.
— Hand them a blank copy of the Beta Blocker CET Pattern Picker form.
Just follow these eight easy steps and you too can become a peer reviewed climate scientist right here in the Year 2014!
But, we must ask a question …. Will you be seen by future generations as not being a certified peer-reviewed climate scientist if your prediction turns out to be wrong? (On the other hand, will you be seen by future generations as not being a certified peer-reviewed climate scientist if your prediction turns out to be right?)
No matter, just fill out the form and take the chance. What do you have to lose?

June 9, 2014 10:37 am

Though I must admit at the outset that I am a natural pessimist, I predict the Earth will cool by 0.99999 degree C (approx.) by 2100. Now that’s a real scary story!

June 9, 2014 10:40 am

A prediction for 2100 is useless because you cannot check it in a reasonable time span. Real science has to be checked as soon as possible. Otherwise you have no progress.

rgbatduke
June 9, 2014 10:40 am

Monckton and RGB: You are both saying that the IPCC models are useless for forecasting. Surely it is time to quit talking about models at all and make predictions using another approach. For forecasts of the possible coming cooling based on the 60 and 1000 year quasi-periodicities in the temperature data, and using the neutron count and 10Be as the most useful proxy for solar activity, see
http://climatesense-norpag.blogspot.com/2013/10/commonsense-climate-science-and.html

A model by any other name still smells so — sweet — only this isn’t even a model. What is a “quasi”-periodicity, and how can one predict that it will be, um, periodic into the future? Numerology is numerology. At least the GCMs are trying to solve the actual physics problem, even if the problem is probably unsolvable with current (or any reasonably projected future) computational capacity for the human species.
rgb

milodonharlani
June 9, 2014 10:46 am

Pamela Gray says:
June 9, 2014 at 7:30 am
On which side of the Pacific the warm water is concentrated has IMO a large effect on climate. For starters, it helps determine how much Arctic sea ice there will be.

Alberta Slim
June 9, 2014 10:50 am

I agree with “rgbatduke”, Jim Cripwell, and harrydhuffman (@harrydhuffman)
Thanks for that…………….

milodonharlani
June 9, 2014 10:58 am

I’ve already predicted the period 2007-36 to be statistically significantly cooler than 1977 to 2006 by small fractions of a degree, based upon the observation that 1947-76 was cooler than 1917-46, & that 1887 to 1916 was cooler than the following thirty years, while 1857-86 had been warmer.
So IMO odds are that 2037-66 should be warmer than the prior three decade period & 2067-96. If the Modern Warm Period hasn’t peaked yet, then it’s possible that early in the 2097 to 2126 period, ie c. AD 2100, the earth might be a bit warmer than in 1998, the peak of the past warm phase. Add in maybe a fraction of a degree C from more CO2, & I’ll venture a guess of a degree warmer in 2100 than now, although present T has been overadjusted. I hope by 2100 the data will be improved, with less adjustment.

Pamela Gray
June 9, 2014 11:12 am

Milo, I agree. The location of the warm and cool pools of water in the North Pacific Ocean not only affects ocean biomes and coastal climate on both sides of the Pacific; the effect extends clear into the Rockies and beyond, forcing land-based and river-based plants and animals to also respond to this multi-decadal spatial swing. The Jet Stream brings weather that does something entirely different depending on where those pools are located. As for Arctic Ice, I would imagine that as well, though I don’t know if changes in Arctic Ice come before or after a PDO shift is in place.

Alan Robertson
June 9, 2014 11:17 am

rgbatduke says:
June 9, 2014 at 8:39 am
Much as I generally love xkcd, this just in:
IPCC Claims 4.5 C by 2100
http://xkcd.com/1379/
____________________________
I think the answer to the question, “how did the IPCC arrive at the estimate of 4.5C per doubling of CO2”, is really quite simple.
I’d speculate that they took a figure at the upper end of estimates of climate sensitivity, 1.5C/doubling of CO2 and added error bars of 100%, moving the upper limit to 3C/2xCO2 and used the new 3C figure as baseline, placing the upper error bar at 4.5C. This not only gave them a scary figure to feed to the gullible, but also gave them a (slight) measure of deniability if climate sensitivity turns out to be at the upper end of actual estimates of 1.5C/doubling.

Pamela Gray
June 9, 2014 11:18 am

To also extend my poor attempt at clarifying this PDO issue, I agree with Bob that the PDO is an after-effect of ENSO processes (which it must be, given how it is calculated). However, the PDO shift can then be the source of weather pattern variation shifts brought to land riding on the Jet Stream as it adjusts to the change in pool location.

Robertvd
June 9, 2014 11:23 am

After 27 days trekking across Greenland’s vast ice sheet, members of the Seven Continents Exploration Club (KE7B) have completed their journey and returned home.
When they arrived at Kuala Lumpur International Airport in Malaysia, friends and family members were waiting to welcome them back.
Team leader Yanizam Mohamad Supiah described the extreme cold, which plummeted to as low as -35C at times, as being among the toughest challenges they had to face. He thanked God that they managed to complete the journey earlier than the 35 days they had expected it to take.
http://www.icenews.is/2014/06/09/greenland-expedition-team-returns-home/

Rod Leman
June 9, 2014 11:24 am

The most repeated point Monckton makes above is that, basically, the entire climate science field and virtually all credible science organizations in the world are ethically corrupted by “funding”. Does that not sound like a convenient way to always dismiss climate science without having to deal with a lot of compelling evidence from over 10,000 active, publishing climate scientists?
Where is the extraordinary evidence to support the extraordinary claim of massive, international scientific corruption? Claimed by a person who has a degree in journalism and, to my knowledge, has never published a scientific paper in a top tier, peer-reviewed scientific journal. Has anyone asked for a list of all of HIS funding sources?
Does anyone question the credibility of such a claim from such a source coupled with such limited evidence? Is such a claim really that likely considering the breadth of such a suggested corruption?
As an engineer who has worked with real scientists, I have found competency to be the most valued characteristic of peers and a deep respect for facts, truth, logic. Corruption on the scale suggested by Monckton would utterly decimate virtually ALL science since the various disciplines rely on cross pollination of research to support and verify everything they do. It would affect disciplines from meteorology to biology to archaeology to chemistry………
Do you guys realize how incredible Monckton’s claim of science corruption is?

Alan Robertson
June 9, 2014 11:26 am

Robertvd says:
June 9, 2014 at 11:23 am
http://www.icenews.is/2014/06/09/greenland-expedition-team-returns-home/
_________________________
THANKS!

June 9, 2014 11:32 am

Random walk.

Stephen Richards
June 9, 2014 11:34 am

Sasha says:
June 9, 2014 at 4:31 am
Jim Cripwell says:
“…no-one has measured a CO2 signal in any modern temperature /time graph.”
Key word for you Sasha “measured”

NikFromNYC
June 9, 2014 11:37 am

Steve Mosher asserts: “If all you do is criticize, in the end you will lose the debate. Obama has a pen and a phone. you dont.”
…demonstrating the triumphalism of a classic egomaniacal sociopath who jumps on board a profitable trend without any concern for his own future downfall in disgrace. His use of the very word “debate” is actually a term for a culture war, not what the dictionary defines it as:
“A formal contest of argumentation in which two opposing teams defend and attack a given proposition.”
His background in French philosophy has gotten the better of him, as he deconstructs our lives for us, all the while our “mere criticism“ has over the last few years converted Canada, Australia, much of Britain and fully half of the US political machines over to serious and outspoken climate model skepticism, not to mention Russia and China and the decline and fall of all climate treaties.
Obama’s pen promises to create an even bigger backlash that will likely topple the entire left wing agenda for a generation or two, much more so than a soft landing would have created, for it exposes economy killing activism supported by junk science fraud that has become so obvious to insiders that the division now is merely between thoughtful critics and outright scammers.
His own Berkeley project US data plot shows over 1000% greater recent warming than Jim Hansen’s plot, all the while everybody can feel that it’s just not very hot out compared to the Dust Bowl era. But we can’t criticize it effectively unless we build our own black box? How about we just plot the oldest records to see if they falsify the hockey stick shape and post these plots far and wide on the Internet? Yup, we did that. And it helped us *win* the climate war so terribly effectively that Obama is now forced to go it alone, for after all he already had both houses of Congress with large majorities and no carbon tax resulted. But now we are to cower from his mere pen as he is exposed more and more as a sorry and unprepared hack, and the most brazen liar of all time?
