Can we deduce climate sensitivity from temperature?

By Christopher Monckton of Brenchley

Central to Professor Lovejoy’s paper attempting to determine climate sensitivity from recent temperature trends is the notion that in any 125-year period uninfluenced by anthropogenic forcings there is only a 10% probability of a global temperature trend greater than +0.25 K or less than –0.25 K.

Like most of the hypotheses that underpin climate panic, this one is calculatedly untestable. The oldest of the global temperature datasets – HadCRUT4 – begins only in 1850, so that the end of the earliest possible 125-year period in that dataset is 1974, well into the post-1950 period of potential anthropogenic influence.

However, the oldest regional instrumental dataset, the Central England Temperature Record, dates back to 1659. It may give us some pointers. 

The CET record has its drawbacks. It is regional rather than global, and its earliest temperature data have a resolution no better than 0.5-1.0 K. However, its area of coverage is in the right latitude band. Also, over the past 120 years, representing two full cycles of the Pacific Decadal Oscillation, its trend is within 0.01 K of the trend on the mean of the GISS, HadCRUT4 and NCDC global terrestrial datasets. It is not entirely without value.

I took trends on 166 successive 125-year periods from 1659-1784 to 1824-1949. Of these, 57, or 34%, exhibited absolute trends greater than 0.25 K (Table 1).


Table 1. Least-squares linear-regression trends (K) on the monthly mean regional surface temperature anomalies from the Central England Temperature dataset for 166 successive 125-year periods from 1659-1784 to 1824-1949. Of these periods, 57 (or 34%) show absolute temperature trends greater than 0.25 K.
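The counting exercise behind Table 1 can be sketched in a few lines of Python. This is a minimal reconstruction, not the original calculation: the CET values are not reproduced here, so the run below uses a synthetic monthly anomaly series, and "trend (K)" is taken, as in the table, to mean the least-squares slope scaled to the full 125-year window.

```python
# Rolling 125-year least-squares trends on a monthly anomaly series.
# Synthetic data stand in for the CET record, which is not reproduced here.
import numpy as np

def window_trends(anoms, start_year, window_years=125):
    """Return (start year, trend in K per window) for each 125-year window."""
    years = np.arange(len(anoms)) / 12.0            # time axis in years
    w = window_years * 12                           # window length in months
    out = []
    for i in range(0, len(anoms) - w + 1, 12):      # step one start year at a time
        slope = np.polyfit(years[i:i+w], anoms[i:i+w], 1)[0]   # K per year
        out.append((start_year + i // 12, slope * window_years))
    return out

# Illustrative run on white noise spanning 1659-1948 (290 years of months):
rng = np.random.default_rng(0)
fake = rng.normal(0.0, 0.5, 290 * 12)
trends = window_trends(fake, 1659)
print(len(trends))                                  # 166 windows
print(sum(abs(t) > 0.25 for _, t in trends))        # exceedance count
```

Run on the actual CET monthlies, the same loop would produce the 166 trends tabulated above.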

Most of the 125-year periods exhibiting a substantial absolute trend occur at the beginning or the end of the interval tested. The trends in the earlier periods capture the recovery from the Little Ice Age, which independent historical records show was rapid. In the later periods the trends capture the rapid warming from 1910-1945.

Subject to the cautions about the data that I have mentioned, the finding that more than a third of all 125-year periods terminating before the anthropogenic influence on global climate began in 1950 showed absolute trends greater than 0.25 K suggests that 125-year periods of substantial temperature change may be at least thrice as frequent as Professor Lovejoy had assumed.

Taken with the many other defects in the Professor’s recent paper – notably his assumption that the temperature datasets on which he relied had very small error intervals, when in fact their error intervals are large and grow larger the further back one goes – his assumption that rapid temperature change is rare casts more than a little doubt on his contention that climate sensitivity can be determined from the recent temperature record.

How, then, can we determine how much of the 20th-century warming was natural? The answer, like it or not, is that we can’t. But let us assume, ad argumentum and per impossibile, that the temperature datasets are accurate. Then one way to check the IPCC’s story-line is to study its values of the climate-sensitivity parameter over various periods (Table 2).


Table 2. IPCC’s values for the climate-sensitivity parameter

Broadly speaking, the value of the climate-sensitivity parameter is independent of the cause of the direct warming that triggers the feedbacks that change its value. Whatever the cause of the warming, little error arises by assuming the feedbacks in response to it will be about the same as they would be in response to forcings of equal magnitude from any other cause.

The IPCC says there has been 2.3 W m–2 of anthropogenic forcing since 1750, and little natural forcing. In that event, the climate-sensitivity parameter is simply the 0.9 K warming since 1750 divided by 2.3 W m–2, or 0.4 K W–1 m2. Since most of the forcing since 1750 has occurred in the past century, that value is in the right ballpark, roughly equal to the centennial sensitivity parameter shown in Table 2.

Next, we break the calculation down. Before 1950, according to the IPCC, the total anthropogenic forcing was 0.6 W m–2. Warming from 1750-1949 was 0.45 K. So the pre-1950 climate sensitivity parameter was 0.75 K W–1 m2, somewhat on the high side, suggesting that some of the pre-1950 warming was natural.

How much of it was natural? Dividing 0.45 K of pre-1950 warming by the 200-year sensitivity parameter 0.5 K W–1 m2 gives 0.9 W m–2. If IPCC (2013) is correct in saying 0.6 W m–2 was anthropogenic, then 0.3 W m–2 was natural.

From 1950 to 2011, there was 1.7 W m–2 of anthropogenic forcing, according to the IPCC. The linear temperature trend on the data from 1950-2011 is 0.7 K. Divide that by 1.7 W m–2 to give a plausible 0.4 K W–1 m2, again equivalent to the IPCC’s centennial sensitivity parameter, but this time under the assumption that none of the global warming since 1950 was natural.
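The arithmetic in the last three paragraphs can be checked directly. The figures below are simply those quoted in the text (warming in K, forcing in W m–2); nothing here is independent data.

```python
# Sensitivity-parameter arithmetic using the figures quoted in the text.
lam_1750_2011 = 0.9 / 2.3          # 0.9 K warming / 2.3 W m-2 forcing since 1750
lam_pre_1950  = 0.45 / 0.6         # 0.45 K / 0.6 W m-2 anthropogenic pre-1950
lam_1950_2011 = 0.7 / 1.7          # 0.7 K / 1.7 W m-2 since 1950

# Natural share of pre-1950 forcing, using the 200-year parameter 0.5 K W-1 m2:
total_pre_1950 = 0.45 / 0.5        # implied total pre-1950 forcing, W m-2
natural_pre_1950 = total_pre_1950 - 0.6

print(round(lam_1750_2011, 2))     # 0.39, i.e. ~0.4 K W-1 m2
print(round(lam_pre_1950, 2))      # 0.75 K W-1 m2, on the high side
print(round(natural_pre_1950, 1))  # 0.3 W m-2 natural
print(round(lam_1950_2011, 2))     # 0.41, again ~0.4 K W-1 m2
```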

This story-line, as far as it goes, seems plausible. But the plausibility is entirely specious. It was achieved by the simplest of methods. Since 1990, the IPCC has all but halved the anthropogenic radiative forcing to make it appear that its dead theory is still alive.

In 1990, the IPCC predicted that the anthropogenic forcing from greenhouse gases since 1765 would amount to 4 W m–2 on business as usual by 2014 (Fig. 1).


Figure 1. Projected anthropogenic greenhouse-gas forcings, 1990-2100 (IPCC, 1990).

However, with only 0.9 K global warming since the industrial revolution began, the implicit climate-sensitivity parameter would have been 0.9 / 4 = 0.23 K W–1 m2, or well below even the instantaneous value. That is only half the 0.4-0.5 K W–1 m2 that one would expect if the IPCC’s implicit centennial and bicentennial values for the parameter (Table 2) are correct.
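On the same basis, the implicit parameter under the 1990 business-as-usual projection follows in one line, again using only the figures quoted in the text:

```python
# Implicit sensitivity parameter under the 1990 business-as-usual forcing:
lam_1990 = 0.9 / 4.0               # 0.9 K warming / 4 W m-2 projected forcing
print(round(lam_1990, 2))          # 0.23 K W-1 m2, about half of 0.4-0.5
```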

In 1990 the IPCC still had moments of honesty. It admitted that the magnitude and even the sign of the forcing from anthropogenic particulate aerosol emissions (soot to you and me) was unknown.

Gradually, however, the IPCC found it expedient to offset not just some but all of the CO2 radiative forcing with a putative negative forcing from particulate aerosols. Only by this device could it continue to maintain that its very high centennial, bicentennial, and equilibrium values for the climate-sensitivity parameter were plausible.

Fig. 2 shows the extent of the tampering. The positive forcing from CO2 emissions and the negative forcing from anthropogenic aerosols are visibly near-identical.


Figure 2. Positive forcings (left panel) and negative forcings (right panel), 1950-2008 (Murphy et al., 2009).

As if that were not bad enough, the curve of global warming in the instrumental era exhibits 60-year cycles, following the ~30-year cooling and ~30-year warming phases of the Pacific Decadal Oscillation (Fig. 3). This oscillation appears to have a far greater influence on global temperature, at least in the short to medium term, than any anthropogenic forcing.

The “settled science” of the IPCC cannot yet explain what causes the ~60-year cycles of the PDO, but their influence on global temperature is plainly visible in Fig. 3.


Figure 3. Monthly global mean surface temperature anomalies and trend, January 1890 to February 2014, as the mean of the GISS, HadCRUT4 and NCDC global mean surface temperature anomalies, with sub-trends during the negative or cooling (green) and positive or warming (red) phases of the Pacific Decadal Oscillation. Phase dates are provided by the Joint Institute for the Study of the Atmosphere and Ocean at the University of Washington: http://jisao.washington.edu/pdo/. Anthropogenic radiative forcings are apportionments of the 2.3 W m–2 anthropogenic forcing from 1750-2011, based on IPCC (2013, Fig. SPM.5).

Startlingly, there have only been three periods of global warming in the instrumental record since 1659. They were the 40 years 1694-1733, before the industrial revolution had even begun, with a warming trend of +1.7 K as solar activity picked up after the Maunder Minimum; the 22 years 1925-1946, with a warming trend of +0.3 K, in phase with the PDO; and the 24 years 1977-2000, with a warming trend of +0.6 K, also in phase with the PDO.


Table 3. Periods of cooling (blue), warming (red), and no trend (green) since 1659. Subject to uncertainties in the Central England Temperature Record, there may have been more warming in the 91 years preceding 1750 than in the two and a half centuries thereafter.

There was a single period of cooling, –0.6 K, in the 35 years 1659-1693 during the Maunder Minimum. The 191 years 1734-1924, industrial revolution or no industrial revolution, showed no trend; nor was there any trend during the negative or cooling phases of the PDO in the 30 years 1947-1976 or in the 13 years since 2001.
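As a cross-check, the episodic trends listed above sum to the net change over the whole record, with the trendless intervals contributing nothing:

```python
# Net temperature change implied by the episodic trends quoted above (K).
episodes = {
    "1659-1693 cooling": -0.6,
    "1694-1733 warming": +1.7,
    "1925-1946 warming": +0.3,
    "1977-2000 warming": +0.6,
}   # the trendless intervals (1734-1924, 1947-1976, 2001-) contribute ~0
net = sum(episodes.values())
print(round(net, 1))   # 2.0 K net change since 1659
```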

Table 3 summarizes the position. All of the 2 K net warming since 1659 could be simply a slow and intermittent recovery of global temperatures following the Little Ice Age.

There is a discrepancy between the near-linear projected increase in anthropogenic radiative forcing (Fig. 1) and the three distinct periods of global warming since 1659, the greatest of which preceded the industrial revolution and was almost twice the total warming since 1750.

No satisfactory mechanism has been definitively demonstrated that explains why the PDO operates in phases, still less why all of the global warming since 1750 should have shown itself only during the PDO’s positive or warming phases.

A proper understanding of climate sensitivity depends heavily upon the magnitude of the anthropogenic radiative forcing, but since 1990 the IPCC has almost halved that magnitude, from 4 to 2.3 W m–2.

To determine climate sensitivity from temperature change, one would need to know the temperature change to a sufficient precision. However, just as the radiative forcing has been tampered with to fit the theory, so the temperature records have been tampered with to fit the theory.

Since just about every adjustment to the global temperature record over time has had the effect of making 20th-century warming seem steeper than it was, all may not be well, however superficially plausible the explanations for the individual adjustments may be.

In any event, since the published early-20th-century error interval is of the same order of magnitude as the entire global warming from all causes since 1750, it is self-evident that attempting to derive climate sensitivity from the global temperature trends is self-defeating. It cannot be done.

The bottom line is that the pattern of global warming, clustered in three distinct periods the first and greatest of which preceded any possible anthropogenic influence, fits more closely with stochastic natural variability than with the slow, inexorable increase in anthropogenic forcing predicted by the IPCC.

The IPCC has not only slashed its near-term temperature projections (which are probably still excessive: it is quite possible that we shall see no global warming for another 20 years): it has also cut its estimate of net business-as-usual anthropogenic radiative forcing by almost half. Inch by inch, hissing and spitting, it retreats and hopes in vain that no one will notice, while continuing to yell, “The sky is falling! The sky is falling!”.

A happy Easter to one and all.

Santa Baby
April 19, 2014 7:15 pm

There is too much UHI in, and there are too many policy-based adjustments of, the data to be able to make any scientific sense of it.

Niff
April 19, 2014 7:16 pm

Liars change their story to fit the evidence that is presented. Eventually they have to contradict themselves and paint themselves into corners. Unless seekers after truth such as the good lord focus attention on them, they will continue to get away with it.
Their supporters don’t ask any of these questions and assume that those who do must be insane deniers. Eventually the truth will out.
Happy Easter Lord Monckton!

Lew Skannen
April 19, 2014 7:22 pm

I cannot help thinking that it is crazy to even try to pretend that a single number can be used to model the climate. There are so many inter-dependent parameters governing the climate that any single parameter, (no matter how sacred and holy), can never really be separated out and measured. The IPCC seem to think that if they can just get the magic number for sensitivity then the whole climate will be able to be modeled as y=mx+c.
The lack of an exact sensitivity guess (which they are happy to adjust and fake as needed) is the least of the problems with modelling the climate. The hundred other parameters and chaotic mechanisms should at least be worth a mention … if only they all had their own cheerleaders as CO2 does…

Nick Stokes
April 19, 2014 7:48 pm

A global trend is the combined effect of a large number of regional trends, and it will always be less variable than the trends of which it is composed: averages are. It is not at all surprising that the CET trend is more variable than the global average.

April 19, 2014 7:48 pm

“I cannot help thinking that it is crazy to even try to pretend that a single number can be used to model the climate. ”
That is not the point of the sensitivity parameter.

Robert JM
April 19, 2014 7:49 pm

Cloud forcing is being ignored by the IPCC.
There was a 5% decrease in cloud cover in the 13 years from 1990 to 2003, amounting to a 0.9 W/m2 increase in shortwave forcing. Each 1% change in cloud results in a temperature change of about 0.06 °C.
Overall, cloud forcing was responsible for 75% of the warming in the satellite period, which has been falsely attributed to CO2.
You can also determine the climate sensitivity from this: 0.9 W/m2 causes 0.3 °C of warming,
Very close to neutral.
Check out Ole Humlum’s climate4you.com page under Climate and Clouds.
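For what it is worth, the figures claimed above can be combined in a couple of lines; the inputs are the commenter’s values, not verified data:

```python
# Arithmetic implied by the cloud-forcing figures claimed above.
cloud_change = 5        # % decrease in cloud cover, 1990-2003 (claimed)
forcing      = 0.9      # W m-2 shortwave increase (claimed)
k_per_pct    = 0.06     # K of warming per % cloud change (claimed)

warming = cloud_change * k_per_pct
print(round(warming, 1))              # 0.3 K
print(round(warming / forcing, 2))    # 0.33 K W-1 m2 implied sensitivity
```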

Oracle
April 19, 2014 7:52 pm

If they cannot reconcile the fact that we are essentially in a CO2 famine, and that much higher CO2 levels than now are more ‘normal’ for Earth, then they are living in a delusional fantasy land. This entire CO2 panic is insanely silly.
https://www.google.com/search?q=%22co2+famine%22&complete=0&num=99

April 19, 2014 8:03 pm

The Climate Inquisition, whose members include Prof Lovejoy, are determined to stamp out the self-evident fact of natural climate cycles.
Take away the highly manipulated temperature statistics of the pre-satellite era and today’s inaccurate and biased computer models and then the latest cycle of global warming can be seen to be mostly natural.
The IPCC owes its existence to being able to produce scary tales for the political establishment, but worse is the fact that these scary tales were initially dreamed up by environmental activist organisations with the practices and ethics of a pseudo-Christian cult.

SIGINT EX
April 19, 2014 8:50 pm

PV = nRT
“sensitivity” (the ‘n’ above, no ha ha) is a nonsensical word in this context.
The Earth’s atmosphere and even the atmosphere of Venus (Hansen’s sponge cake that almost cost him his phony baloney federal government career at GISS, but NASA Goddard kept him there to occupy the office and warm the seat, but after a while he started having delusions and phantasms of his supreme magnificence if that is what you can call it) have no “sensitivity.”
“Climate” is a nonsensical word from the mouths of “Geographers”, i.e. the well-heeled and well-to-do with nothing to do but cause trouble; in this context it has no merit or value regarding Physics, and no merit or value as a measurable physical variable.
“Climate” is a phantasmagoria of the deranged human psyche and does not and cannot exist outside the deranged human brain.
“Climate” is where psychopaths live, and “Climate Change” is their battle ground in the sky.
Get over it !
Ha ha

Patrick
April 19, 2014 8:50 pm

And we still have people like Nick Stokes prattling on about averages derived from datasets that we know to have been altered.

April 19, 2014 9:02 pm

Oracle says at 7:52 pm
If they cannot reconcile the fact that we are essentially in a CO2 famine, and that much higher CO2 levels than now are more ‘normal’ for Earth, then they are living in a delusional fantasy land. This entire CO2 panic is insanely silly.
https://www.google.com/search?q=%22co2+famine%22&complete=0&num=99
—————————————————————————————
Oracle, I agree that we have been running really low on CO2, and if it had gotten much lower, plants would have kicked the bucket, and we with them. But the problem when I click your Google search link is that the second result says, in big words, “Exxon paid scientist says the earth in CO2 famine.” I have heard that the oil industry has been supporting the leftist scaremongers, and we have to figure out a way to nullify that “oil-supported scientist” line. If I were somebody else – an independent thinker undecided on climate change – and the first thing I saw in the Google results was “Exxon paid scientist,” I could unfortunately dismiss within seconds the whole idea that we are in a CO2 famine, even though we are.

April 19, 2014 9:09 pm

Steven Mosher,
You have an opinion that local temperature flux is nonexistent; are you working with temperatures on a global scale?

April 19, 2014 9:13 pm

Steven Mosher,
Are you providing anyone anywhere on this planet with local temperature forecasts?

April 19, 2014 9:30 pm

Steven Mosher,
Are you providing anyone anywhere on this planet anything useful?

April 19, 2014 10:01 pm

Time is not on the side of the IPCC, the Climate Change Alarmists, or those who want carbon tax schemes to provide monies for other green initiatives.
At this point in time, it is obvious that the Global Warming paradigm is failing… badly. The Believers, not wishing to encourage a debate, vainly declare, “The debate is over.” That is a clear indication that they cannot defend what they want, and need to move on to mindless “action”.
But the hoped-for carbon-tax windfall once envisioned for redistribution is a difficult dream for the greens to part with. Tho’ mightily they struggle to maintain the myth, like the great Ozymandias (of Percy Bysshe Shelley), it too will crumble with time. The end of the Kingdom of the Anthropogenic CO2 Global Warming paradigm is near. The reality (data) is not a thing made of or controlled by man. But the models and myths that spring forth are surely as dead as King Oz.

John Colby
April 19, 2014 11:01 pm

A new paper arrives at an ECS of 1.99 °C: Loehle, C. (2014). A minimal model for estimating climate sensitivity. Ecological Modelling, 276, 80-84.
http://www.sciencedirect.com/science/article/pii/S0304380014000404

Editor
April 19, 2014 11:08 pm

Steven Mosher says:
April 19, 2014 at 7:48 pm

“I cannot help thinking that it is crazy to even try to pretend that a single number can be used to model the climate. ”

That is not the point of the sensitivity parameter.

Well, I can emulate the output of the models extremely well using nothing but that parameter plus a time constant … which strongly supports Lord Monckton’s claim that (according to the models) a single parameter can indeed be used to model the climate, whether or not that is the “point” of the sensitivity parameter.
However, you are nobody’s fool, and your science-fu is strong. So, since you say that it is not the point, please enlighten us: what IS the point of the sensitivity parameter?
Thanks,
w.
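The emulation described above can be sketched as a one-box model, in which temperature relaxes toward λ·F with a single time constant. The λ and τ values below are illustrative assumptions, not fitted to any model output:

```python
# One-box emulator: dT/dt = (lam * F - T) / tau, integrated with Euler steps.
import numpy as np

def one_box(forcing, lam=0.5, tau=8.0, dt=1.0):
    """lam: sensitivity parameter (K W-1 m2); tau: time constant (years)."""
    temp = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        temp[i] = temp[i-1] + dt * (lam * forcing[i-1] - temp[i-1]) / tau
    return temp

# Toy forcing ramp, 1900-2014, rising 0.02 W m-2 per year:
years = np.arange(1900, 2015)
temp = one_box(0.02 * (years - 1900))
print(round(temp[-1], 2))   # lags behind the equilibrium response lam * F
```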

ren
April 19, 2014 11:09 pm

Very concrete data. Solid job. Regards.

thingadonta
April 19, 2014 11:16 pm

“No satisfactory mechanism has been definitively demonstrated that explains why the PDO operates in phases,”
I’ve got an idea about this.
Ocean currents and deep circulation from ocean depths to near surface occur over the order of decades. My hypothesis is that ocean currents near the surface might stay ‘near the surface’, to be warmed or cooled by the prevailing solar output of the time, for say about 30 years. So any significant change in solar output would change the temperature of a large chunk of the ocean for that 30 years, before being replaced by deeper ocean water of a different temperature, warmed or cooled by previous solar cycles. The idea is that the transfer of heat in the oceans occurs in cycles related to changes in solar output, together with the time it takes to complete one full oceanic current circuit (for the PDO, about 60 years).
This oceanic conveyor cycle, operating within changing solar activity, would be dragged up or down by any longer-term trend of solar activity, producing both a lag in temperature with respect to solar output and a magnification of the cycle amplitudes. In other words, the strong positive PDO of the late 20th century might have occurred as a result of the longer-term increase in solar output since ~1700, dragging the 30-year cycle upwards, which then continued under its own momentum well past the peak of solar activity in the mid-to-late 20th century. (Kind of like a bathtub backwash getting stronger as long as you force the water in the same direction each time.) The 30-year oscillations might be expected to continue, but to decline in amplitude over the next century. In this model, the amplitude of each cycle is partly a result of longer-term changes in solar activity (either up or down), while the period is a function of oceanic circulation over roughly 60 years.
(It is in some ways similar to the convection currents in the mantle and upper crust that drive plate tectonics, but in this case the currents are within the ocean. Because there are very few examples in the familiar world similar to the plate-tectonic model, it took a long time for anyone to come up with a convective, mantle-derived circulation model in which the plates float on top of the currents (even Einstein had a small go at it but could not think of it; we know he certainly rejected the idea of moving continents), and it might take a long time to figure out the PDO as well.)
Just some ideas anyway.
Incidentally I think that Lovejoy first assumes that natural variation is minor in the past (from the hockeystick data), then sets out to prove the assumption using the same hockeystick results; he is simply self-confirming what he has already cherry picked to fit that way.

ren
April 19, 2014 11:34 pm

Why will spring be chilly in North America? Polar vortex at 17 km.
http://earth.nullschool.net/#2014/04/23/0300Z/wind/isobaric/70hPa/orthographic=-117.89,62.68,482

Santa Baby
April 19, 2014 11:46 pm

They have spent xx billions of USD so far to get “action”: a climate treaty, über-national global government, and policies that in effect will give us international socialism. The Norwegian Labour Party admits this in its latest election “program”.

April 19, 2014 11:52 pm

Who the F is ren? And why does ren post such off-thread nonsense?

Greg Goodman
April 20, 2014 12:22 am

There is an underlying and undeclared assumption in all this modelling game: that there is nothing causing a secular centennial-scale upward trend. Hence, any upward trend must be attributable to one of the inputs or to the modelled reactions to the inputs.
On that set of assumptions it is a foregone conclusion that the only input that rises on a centennial scale will be found to be the “cause” of long-term warming.
All the rest is a red scarf decoy.
Lacis et al. 1992 estimated volcanic forcing to be about 30 W/m2 * optical density. This was later reduced to make model output better fit the temperature record.
Why? Because if you don’t, you have to conclude that there are strong feedbacks that cancel the real volcanic effect (and by implication the CO2 forcing), which leaves a centennial rise that is not accounted for, and the models don’t work.
Instead of correcting the models, they decided to correct the data, and reduced the factor to 21 W/m2 in Hansen et al. 2002. The declared reason for doing this was to reconcile model output with the temperature record.
The trouble with this value is that the tropical TOA net flux anomaly changes _before_ the change in atmospheric optical density which is supposed to be causing it.
http://climategrog.wordpress.com/?attachment_id=925
It can be seen that the flux anomaly returns to zero barely a year after the eruption, when the AOD was still at 50% of its peak value. What the Hansen 2002 value is effectively trying to do is to fit the AOD-estimated forcing DIRECTLY to the response, with no lags and near-neutral feedbacks.
Note that I say “effectively”: this is not the way they are going about it, but arbitrarily fiddling with scaling factors until the residual deviations of the model output are minimised is really the same as doing a non-lagged multivariate regression.
It is perfectly clear, from this graph alone, that there is an active climate response to Mt Pinatubo that cancels the AOD within a year. Once that response is recognised, the physically based Lacis et al. value is not problematic. Indeed the work I’m doing suggests their central value of 30 may be somewhat low.
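The equivalence asserted above (tuning a scale factor to minimise residuals amounts to a zero-lag regression) can be illustrated with synthetic data; the series below are toy values, not real AOD or flux:

```python
# Minimising ||response - s*AOD||^2 over s IS a zero-lag least-squares fit.
import numpy as np

def best_scale(aod, response):
    """Closed-form least-squares scale factor s for response ~ s * aod."""
    return np.dot(aod, response) / np.dot(aod, aod)

rng = np.random.default_rng(1)
aod = rng.random(200)                              # toy optical-depth series
response = -21.0 * aod + rng.normal(0, 0.5, 200)   # toy flux anomaly + noise
print(round(best_scale(aod, response), 1))         # recovers roughly -21
```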


Greg Goodman
April 20, 2014 12:34 am

It is also interesting, in that graph, that once the flux anomaly goes positive around 1992.6 it stays +ve until the 1998 El Niño. After the initial hit of the first 12 months, the volcano provokes a climate counter-reaction of about 1 W/m2 that is still present at the end of the data.
Now if you ignore the actual shape of the response and draw a straight line through 15 y of data, you could pretend that was CO2 “countering the cooling effects of volcanoes”.
