Can we deduce climate sensitivity from temperature?

By Christopher Monckton of Brenchley

Central to Professor Lovejoy’s paper attempting to determine climate sensitivity from recent temperature trends is the notion that in any 125-year period uninfluenced by anthropogenic forcings there is only a 10% probability of a global temperature trend greater than +0.25 K or less than –0.25 K.

Like most of the hypotheses that underpin climate panic, this one is calculatedly untestable. The oldest of the global temperature datasets – HadCRUT4 – starts only in 1850, so that the end of the earliest possible 125-year period in that dataset is 1974, well into the post-1950 period of potential anthropogenic influence.

However, the oldest regional instrumental dataset, the Central England Temperature Record, dates back to 1659. It may give us some pointers. 

The CET record has its drawbacks. It is regional rather than global, and its earliest temperature data have a resolution no better than 0.5-1.0 K. However, its area of coverage is on the right latitude. Also, over the past 120 years, representing two full cycles of the Pacific Decadal Oscillation, its trend is within 0.01 K of the trend on the mean of the GISS, HadCRUT4 and NCDC global terrestrial datasets. It is not entirely without value.

I took trends on 166 successive 125-year periods from 1659-1784 to 1824-1949. Of these, 57, or 34%, exhibited absolute trends greater than |0.25| K (Table 1).


Table 1. Least-squares linear-regression trends (K) on the monthly mean regional surface temperature anomalies from the Central England Temperature dataset for 166 successive 125-year periods from 1659-1784 to 1824-1949. Of these periods, 57 (or 34%) show absolute temperature trends greater than |0.25| K.
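As a sketch of the exercise behind Table 1, the windowed-trend calculation is straightforward to reproduce. The series below is synthetic (the real exercise uses the monthly CET record from 1659); the helper functions and the toy numbers are mine, for illustration only.

```python
# Illustrative sketch of the windowed-trend calculation behind Table 1.
# The series is synthetic; the real calculation uses the CET record.
import math

def ols_slope(y):
    """Least-squares slope of y against 0, 1, ..., n-1 (y-units per step)."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def window_trends(annual, window=125):
    """Total trend (K over the whole window) for each successive window."""
    return [ols_slope(annual[s:s + window]) * (window - 1)
            for s in range(len(annual) - window + 1)]

# 290 synthetic 'years': a gentle recovery plus a 60-year oscillation
toy = [0.002 * t + 0.3 * math.sin(2 * math.pi * t / 60) for t in range(290)]
trends = window_trends(toy)                    # 166 windows, as in Table 1
exceed = sum(abs(tr) > 0.25 for tr in trends)  # windows with |trend| > 0.25 K
```

Run against the actual CET monthly means, the same two functions reproduce the 166 trends summarized in Table 1.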

Most of the 125-year periods exhibiting a substantial absolute trend occur at the beginning or the end of the interval tested. The trends in the earlier periods capture the recovery from the Little Ice Age, which independent historical records show was rapid. In the later periods the trends capture the rapid warming from 1910-1945.

Subject to the cautions about the data that I have mentioned, the finding that more than a third of all 125-year periods terminating before the anthropogenic influence on global climate began in 1950 exhibited substantial absolute trends suggests that 125-year periods showing substantial temperature change may be at least thrice as frequent as Professor Lovejoy had assumed.

Taken with the many other defects in the Professor’s recent paper – notably his treatment of the temperature datasets on which he relied as having very small error intervals, when in fact their error intervals are large and grow larger the further back one goes – his assumption that rapid temperature change is rare casts more than a little doubt on his contention that one can determine climate sensitivity from the recent temperature record.

How, then, can we determine how much of the 20th-century warming was natural? The answer, like it or not, is that we can’t. But let us assume, ad argumentum and per impossibile, that the temperature datasets are accurate. Then one way to check the IPCC’s story-line is to study its values of the climate-sensitivity parameter over various periods (Table 2).


Table 2. IPCC’s values for the climate-sensitivity parameter

Broadly speaking, the value of the climate-sensitivity parameter is independent of the cause of the direct warming that triggers the feedbacks that change its value. Whatever the cause of the warming, little error arises by assuming the feedbacks in response to it will be about the same as they would be in response to forcings of equal magnitude from any other cause.

The IPCC says there has been 2.3 W m–2 of anthropogenic forcing since 1750, and little natural forcing. In that event, the climate-sensitivity parameter is simply the 0.9 K warming since 1750 divided by 2.3 W m–2, or 0.4 K W–1 m2. Since most of the forcing since 1750 has occurred in the past century, that value is in the right ballpark, roughly equal to the centennial sensitivity parameter shown in Table 2.

Next, we break the calculation down. Before 1950, according to the IPCC, the total anthropogenic forcing was 0.6 W m–2. Warming from 1750-1949 was 0.45 K. So the pre-1950 climate sensitivity parameter was 0.75 K W–1 m2, somewhat on the high side, suggesting that some of the pre-1950 warming was natural.

How much of it was natural? Dividing 0.45 K of pre-1950 warming by the 200-year sensitivity parameter 0.5 K W–1 m2 gives 0.9 W m–2. If IPCC (2013) is correct in saying 0.6 W m–2 was anthropogenic, then 0.3 W m–2 was natural.

From 1950 to 2011, there was 1.7 W m–2 of anthropogenic forcing, according to the IPCC. The linear temperature trend on the data from 1950-2011 is 0.7 K. Divide that by 1.7 W m–2 to give a plausible 0.4 K W–1 m2, again equivalent to the IPCC’s centennial sensitivity parameter, but this time under the assumption that none of the global warming since 1950 was natural.
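The divisions in the last few paragraphs are simple enough to check mechanically. A sketch in Python, using the IPCC figures quoted in the text (the function name is mine):

```python
# Arithmetic check of the sensitivity-parameter figures above. The forcing
# and warming values are the IPCC numbers quoted in the text.
def sensitivity_parameter(warming_k, forcing_w_m2):
    """lambda = dT / dF, in K W^-1 m^2."""
    return warming_k / forcing_w_m2

full_period = sensitivity_parameter(0.9, 2.3)    # 1750-2011: ~0.4
pre_1950 = sensitivity_parameter(0.45, 0.6)      # 1750-1949: 0.75
post_1950 = sensitivity_parameter(0.7, 1.7)      # 1950-2011: ~0.4

# Natural share of pre-1950 forcing, assuming the bicentennial parameter
# of 0.5 K W^-1 m^2 from Table 2: (0.45 / 0.5) - 0.6 anthropogenic
natural_forcing = 0.45 / 0.5 - 0.6               # ~0.3 W m^-2
```

The same one-line division with the 1990 business-as-usual forcing of 4 W m–2 gives the 0.23 K W–1 m2 figure discussed below.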

This story-line, as far as it goes, seems plausible. But the plausibility is entirely specious. It was achieved by the simplest of methods. Since 1990, the IPCC has all but halved the anthropogenic radiative forcing to make it appear that its dead theory is still alive.

In 1990, the IPCC predicted that the anthropogenic forcing from greenhouse gases since 1765 would amount to 4 W m–2 on business as usual by 2014 (Fig. 1).


Figure 1. Projected anthropogenic greenhouse-gas forcings, 1990-2100 (IPCC, 1990).

However, with only 0.9 K global warming since the industrial revolution began, the implicit climate-sensitivity parameter would have been 0.9 / 4 = 0.23 K W–1 m2, or well below even the instantaneous value. That is only half the 0.4-0.5 K W–1 m2 that one would expect if the IPCC’s implicit centennial and bicentennial values for the parameter (Table 2) are correct.

In 1990 the IPCC still had moments of honesty. It admitted that the magnitude and even the sign of the forcing from anthropogenic particulate aerosol emissions (soot to you and me) was unknown.

Gradually, however, the IPCC found it expedient to offset not just some but all of the CO2 radiative forcing with a putative negative forcing from particulate aerosols. Only by this device could it continue to maintain that its very high centennial, bicentennial, and equilibrium values for the climate-sensitivity parameter were plausible.

Fig. 2 shows the extent of the tampering. The positive forcing from CO2 emissions and the negative forcing from anthropogenic aerosols are visibly near-identical.


Figure 2. Positive forcings (left panel) and negative forcings 1950-2008 (Murphy et al., 2009).

As if that were not bad enough, the curve of global warming in the instrumental era exhibits 60-year cycles, following the ~30-year cooling and ~30-year warming phases of the Pacific Decadal Oscillation (Fig. 3). This oscillation appears to have a far greater influence on global temperature, at least in the short to medium term, than any anthropogenic forcing.

The “settled science” of the IPCC cannot yet explain what causes the ~60-year cycles of the PDO, but their influence on global temperature is plainly visible in Fig. 3.


Figure 3. Monthly global mean surface temperature anomalies and trend, January 1890 to February 2014, as the mean of the GISS, HadCRUT4 and NCDC global mean surface temperature anomalies, with sub-trends during the negative or cooling (green) and positive or warming (red) phases of the Pacific Decadal Oscillation. Phase dates are provided by the Joint Institute for the Study of the Atmosphere and Ocean at the University of Washington. Anthropogenic radiative forcings are apportionments of the 2.3 W m–2 anthropogenic forcing from 1750-2011, based on IPCC (2013, Fig. SPM.5).

Startlingly, there have only been three periods of global warming in the instrumental record since 1659. They were the 40 years 1694-1733, before the industrial revolution had even begun, with a warming trend of +1.7 K as solar activity picked up after the Maunder Minimum; the 22 years 1925-1946, with a warming trend of +0.3 K, in phase with the PDO; and the 24 years 1977-2000, with a warming trend of +0.6 K, also in phase with the PDO.


Table 3. Periods of cooling (blue), warming (red), and no trend (green) since 1659. Subject to uncertainties in the Central England Temperature Record, there may have been more warming in the 91 years preceding 1750 than in the two and a half centuries thereafter.

There was a single period of cooling, –0.6 K, in the 35 years 1659-1693 during the Maunder Minimum. The 191 years 1734-1924, industrial revolution or no industrial revolution, showed no trend; nor was there any trend during the negative or cooling phases of the PDO in the 30 years 1947-1976 or in the 13 years since 2001.

Table 3 summarizes the position. All of the 0.9 K global warming since 1750 could be simply a slow and intermittent recovery of global temperatures following the Little Ice Age.

There is a discrepancy between the near-linear projected increase in anthropogenic radiative forcing (Fig. 1) and the three distinct periods of global warming since 1659, the greatest of which preceded the industrial revolution and was almost twice the total warming since 1750.

No satisfactory mechanism has been definitively demonstrated that explains why the PDO operates in phases, still less why all of the global warming since 1750 should have shown itself only during the PDO’s positive or warming phases.

A proper understanding of climate sensitivity depends heavily upon the magnitude of the anthropogenic radiative forcing, but since 1990 the IPCC has almost halved that magnitude, from 4 to 2.3 W m–2.

To determine climate sensitivity from temperature change, one would need to know the temperature change to a sufficient precision. However, just as the radiative forcing has been tampered with to fit the theory, so the temperature records have been tampered with to fit the theory.

Since just about every adjustment in global temperature over time has had the effect of making 20th-century warming seem steeper than it was, however superficially plausible the explanations for the adjustments may be, all may not be well.

In any event, since the published early-20th-century error interval is of the same order of magnitude as the entire global warming from all causes since 1750, it is self-evident that attempting to derive climate sensitivity from the global temperature trends is self-defeating. It cannot be done.

The bottom line is that the pattern of global warming, clustered in three distinct periods the first and greatest of which preceded any possible anthropogenic influence, fits more closely with stochastic natural variability than with the slow, inexorable increase in anthropogenic forcing predicted by the IPCC.

The IPCC has not only slashed its near-term temperature projections (which are probably still excessive: it is quite possible that we shall see no global warming for another 20 years); it has also cut its estimate of net business-as-usual anthropogenic radiative forcing by almost half. Inch by inch, hissing and spitting, it retreats and hopes in vain that no one will notice, while continuing to yell, “The sky is falling! The sky is falling!”.

A happy Easter to one and all.


Santa Baby

There is too much UHI in, and too much policy-based adjustment of, the data to be able to make any scientific sense of it.


Liars change their story to fit the evidence that is presented. Eventually they have to contradict themselves and paint themselves into corners. Unless seekers after truth such as the good lord focus attention on them, they will continue to get away with it.
Their supporters don’t ask any of these questions and assume that those who do must be insane deniers. Eventually the truth will out.
Happy Easter Lord Monckton!

Lew Skannen

I cannot help thinking that it is crazy to even try to pretend that a single number can be used to model the climate. There are so many interdependent parameters governing the climate that no single parameter (no matter how sacred and holy) can ever really be separated out and measured. The IPCC seem to think that if they can just get the magic number for sensitivity, the whole climate can be modeled as y=mx+c.
The lack of an exact sensitivity guess (which they are happy to adjust and fake as needed) is the least of the problems with modelling the climate. The hundred other parameters and chaotic mechanisms should at least be worth a mention … if only they all had their own cheerleaders as CO2 does…

A global trend is the combined effect of a large number of regional trends, and it will always be less variable than the trends of which it is composed; averages are like that. It is not at all surprising that the CET trend is more variable than the global average.

“I cannot help thinking that it is crazy to even try to pretend that a single number can be used to model the climate. ”
That is not the point of the sensitivity parameter.

Robert JM

Cloud forcing is being ignored by the IPCC.
There was a 5% decrease over the 13 years from 1990 to 2003, amounting to a 0.9 W/m2 increase in shortwave forcing. Each 1% cloud change results in a temperature change of about 0.06 °C.
Overall, cloud forcing was responsible for 75% of the warming in the satellite period, which has been falsely attributed to CO2.
You can also determine the climate sensitivity from this: 0.9 W/m2 causes 0.3 °C of warming,
Very close to neutral.
Check out Ole Humlum’s page under climate and clouds.
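For what it is worth, the commenter’s figures, taken at face value, do imply the stated near-neutral sensitivity. This sketch simply repeats the arithmetic; the input numbers are the commenter’s claims, not independently verified:

```python
# The arithmetic implied by the comment above. The input figures are the
# commenter's claims, taken at face value and not verified here.
cloud_decrease_pct = 5.0   # claimed cloud-cover decrease, 1990-2003
sw_forcing_w_m2 = 0.9      # claimed increase in shortwave forcing
k_per_pct = 0.06           # claimed warming per 1% cloud change, K

warming_k = cloud_decrease_pct * k_per_pct   # ~0.3 K
lam = warming_k / sw_forcing_w_m2            # ~0.33 K W^-1 m^2
```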


If they cannot reconcile the fact that we are essentially in a co2 famine, and that much higher co2 levels than now are more ‘normal’ for earth, then they are living in delusion in fantasy land. This entire co2 panic is insanely silly.

Peter Miller

The Climate Inquisition, whose members include Prof Lovejoy, are determined to stamp out the self-evident fact of natural climate cycles.
Take away the highly manipulated temperature statistics of the pre-satellite era and today’s inaccurate and biased computer models and then the latest cycle of global warming can be seen to be mostly natural.
The IPCC owes its existence to being able to produce scary tales for the political establishment, but worse is the fact that these scary tales were initially dreamed up by environmental activist organisations with the practices and ethics of a pseudo-Christian cult.


PV = nRT
“sensitivity” (the ‘n’ above, no ha ha) is a nonsensical word in this context.
The Earth’s atmosphere, and even the atmosphere of Venus (Hansen’s sponge cake that almost cost him his phony-baloney federal government career at GISS; NASA Goddard kept him there to occupy the office and warm the seat, but after a while he started having delusions and phantasms of his supreme magnificence, if that is what you can call it), have no “sensitivity.”
“Climate” is a nonsensical word from the mouths of “Geographers”, i.e. the well-heeled and well-to-do with nothing to do but cause trouble; in this context it has no merit and no value regarding Physics, and no value nor merit as a measurable physical variable.
“Climate” is a phantasmagoria of the deranged human psyche and does not and cannot exist outside the deranged human brain.
“Climate” is where psychopaths live, and “Climate Change” is their battle ground in the sky.
Get over it !
Ha ha


And we still have people like Nick Stokes prattling on about averages derived from datasets that we know to have been altered.

Oracle says at 7:52 pm
If they cannot reconcile the fact that we are essentially in a co2 famine, and that much higher co2 levels than now are more ‘normal’ for earth, then they are living in delusion in fantasy land. This entire co2 panic is insanely silly.
Oracle, I agree that we have been running really low on CO2, and if it had gotten much lower, then plants would have kicked the bucket, and thus we as well. But the problem is that when I click your Google Search link, the second result says in big words, “Exxon paid scientist says the earth in CO2 famine.” The thing is that I have heard that the oil industry has been supporting the leftist scaremongers, and we just have to figure out a way to nullify their bs “oil-supported scientist” line. If I were somebody else – an independent thinker undecided on climate change – and the first thing I saw in the Google results was “Exxon paid scientist,” I could unfortunately dismiss within seconds the whole idea that we are in a CO2 famine, even though we are.

Steven Mosher,
You have an opinion that local temperature flux is non-existent; are you working with temperatures on a global scale?

Steven Mosher,
Are you providing anyone anywhere on this planet with local temperature forecasts?

Steven Mosher,
Are you providing anyone anywhere on this planet anything useful?


Time is not on the side of the IPCC, the Climate Change Alarmists, or those who want carbon tax schemes to provide monies for other green initiatives.
At this point in time, it is obvious that the Global Warming paradigm is failing… badly. The Believers, not wanting to encourage a debate, vainly declare, “The debate is over.” That is a clear indication that they cannot defend what they want, and need to move on to mindless “action.”
But the carbon-tax windfall that was once envisioned for redistribution is a difficult dream for the greens to part with. Tho’ mightily they struggle to maintain the myth, like the great Ozymandias (of Percy Bysshe Shelley) it too will crumble with time. The end of the Kingdom of Anthropogenic CO2 Global Warming is near. The reality (the data) is not a thing made of or controlled by man. But the models and myths that spring forth are surely as dead as King Oz.

John Colby

A new paper arrives at an ECS of 1.99 °C: Loehle, C. (2014). A minimal model for estimating climate sensitivity. Ecological Modelling, 276, 80-84.

Willis Eschenbach

Steven Mosher says:
April 19, 2014 at 7:48 pm

“I cannot help thinking that it is crazy to even try to pretend that a single number can be used to model the climate. ”

That is not the point of the sensitivity parameter.

Well, I can emulate the output of the models extremely well using nothing but that parameter plus a time constant … which strongly supports Lord Monckton’s claim that (according to the models) a single parameter can indeed be used to model the climate, whether or not that is the “point” of the sensitivity parameter.
However, you are nobody’s fool, and your science-fu is strong. So, since you say that is not the point, then please enlighten us—what IS the point of the sensitivity parameter?


Very concrete data. Solid job. Regards.


“No satisfactory mechanism has been definitively demonstrated that explains why the PDO operates in phases,”
I’ve got an idea about this.
Ocean currents and deep circulation from the ocean depths to near the surface occur on the order of decades. My hypothesis is that ocean currents might stay near the surface, to be warmed or cooled by the prevailing solar output of the time, for say about 30 years. So any significant change in solar output would change the temperature of a large chunk of the ocean for that 30 years, before it is replaced by deeper ocean water of a different temperature, warmed or cooled by previous solar cycles. The idea is that the transfer of heat in the oceans occurs in cycles related to changes in solar output, together with the time it takes to complete one full oceanic current circuit (for the PDO, about 60 years).
This oceanic conveyor cycle, operating within changing solar activity, would be dragged up or down by any longer-term trend in solar activity. It would produce a lag in temperature with respect to solar output, and the cycle amplitudes might also become magnified by being dragged up or down by longer-term changes in solar output. In other words, the strong positive PDO of the late 20th century might have occurred as a result of the longer-term increase in solar output since ~1700, dragging the 30-year cycle upwards, which then continued with its own momentum well past the peak of solar activity in the mid-to-late 20th century. (Kind of like a bathtub backwash getting stronger as long as you force the water in the same direction each time.) The 30-year oscillations might be expected to continue, but decline in amplitude over the next century. In this model, the amplitude of each cycle is partly a result of longer-term changes in solar activity (either up or down), but the period is a function of oceanic circulation over roughly 60 years.
(It is in some ways similar to the convection currents in the mantle and upper crust that drive plate tectonics, but in this case the currents are within the ocean. Because there are very few examples in the familiar world similar to the plate-tectonic model, it took a long time for someone to come up with a convective, mantle-derived circulation model that drives the plates floating on top of these currents – even Einstein had a small go at it but couldn’t think of it; we know he certainly rejected the idea of moving continents – and it might take a long time to figure out the PDO as well.)
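A toy numerical sketch of the commenter’s idea, purely illustrative: a slow ‘solar’ ramp reaching the surface with a ~30-year lag, plus a 60-year cycle. None of the numbers are physical, and `solar` and `surface_temp` are invented for the illustration.

```python
# Purely illustrative toy model of the lagged-ocean hypothesis above.
import math

def solar(t):
    """Hypothetical slowly rising solar input (arbitrary units per 'year')."""
    return 0.005 * t

def surface_temp(t, lag=30, period=60, amp=0.2):
    """Surface reflects solar input from `lag` years earlier, plus a cycle."""
    return solar(max(t - lag, 0)) + amp * math.sin(2 * math.pi * t / period)

series = [surface_temp(t) for t in range(300)]
# The cycle rides on a rising baseline: warming phases of successive
# cycles reach higher peaks, as in the 'dragged upward' picture.
```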
Just some ideas anyway.
Incidentally I think that Lovejoy first assumes that natural variation is minor in the past (from the hockeystick data), then sets out to prove the assumption using the same hockeystick results; he is simply self-confirming what he has already cherry picked to fit that way.


Why will spring in North America be chilly? The polar vortex at 17 km.

Santa Baby

They have mostly used xx billions of USD so far to get “action”, climate treaty, über national global government and policies that in effect will give us international socialism. The Norwegian Labour Party admits this in their last election “program”.


who the F is ren? and why does ren post such off-thread nonsense?

Greg Goodman

There is an underlying and undeclared assumption in all this modelling game: that there is nothing causing a secular centennial-scale upward trend. Hence, any upward trend must be attributable to one of the inputs or to the modelled reactions to the inputs.
Based on that set of assumptions, it is a foregone conclusion that the only input that rises on a centennial scale will be found to be the “cause” of long-term warming.
All the rest is a red scarf decoy.
Lacis et al. 1992 estimated volcanic forcing to be about 30 W/m2 times optical density. This was later reduced to make model output better fit the temperature record.
Why? Because if you don’t, you have to conclude that there are strong feedbacks that cancel the real volcanic effect (and by implication the CO2 forcing), which leaves a centennial rise that is not accounted for, and the models don’t work.
Instead of correcting the models, they decided to correct the data and reduced the factor to 21 W/m2 in Hansen et al. 2002. The declared reason for doing this was to reconcile model output with the temperature record.
The trouble with this value is that the tropical TOA net flux anomaly changes _before_ the change in atmospheric optical density which is supposed to be causing it.
It can be seen that the flux anomaly returns to zero barely a year after the eruption, when the AOD was still at 50% of its peak value. What the Hansen 2002 value is effectively trying to do is fit the AOD-estimated forcing DIRECTLY to the response, without any lags and with near-neutral feedbacks.
Note that I say “effectively”: this is not the way they are going about it, but arbitrarily fiddling with scaling factors until the residual deviations of the model output are minimised is really the same as doing a non-lagged multivariate regression.
It is perfectly clear from this graph alone that there is an active climate response to Mt Pinatubo that cancels the AOD within a year. Once that response is recognised, the physically based Lacis et al. value is not problematic. Indeed the work I’m doing suggests there central value of 30 may be somewhat low.

Greg Goodman

“their central value of 30 may be somewhat low.”

Greg Goodman

It is also interesting in that graph that once the flux anomaly goes positive around 1992.6 it stays +ve until the 1998 El Nino. After the initial hit of the first 12 months, the volcano provokes a climate counter-reaction of about 1 W/m2 that is still present at the end of the data.
Now if you ignore the actual shape of the response and draw a straight line through 15 years of data, you could pretend that was CO2 “countering the cooling effects of volcanoes”.


The IPCC’s concept of ‘Forcing’ is unscientific, based on the fallacy that the atmospheric Thermal Radiation Field, expressed in W/m^2, is a real energy flux. Far too many imagine it’s a heat flow, when that is completely to misinterpret radiative physics.
Net radiative heat flux to the surface is the negative of the difference between the surface and atmospheric TRFs: the net energy in the electromagnetic domain emitted from the surface.
As the atmospheric TRF (aka ‘Forcing’, ‘back radiation’) increases, that energy flux decreases. Surface temperature rises because less net IR means more convection and evapo-transpiration is needed to ensure constant total energy flux to atmosphere and space.
Only when we slay the two dragons of this fundamental mistake in IR physics and the consequent tripling of the real GHE can Climate Alchemy become Climate Science… :o)


Umm, why did he select 125 years?

Peter Kirby

I am just a simple lad who had his last physics lesson in 1945. Looking at Figure 3, I notice that over the period covered, during which CO2 has been increasing, temperature was falling for 78 of the years and rising for 46 of them. Is CO2 a cooling gas? Or am I just a cherry-picker?

Last year I wrote an article ‘noticeable climate change’ which took my extended CET record to 1538 and looked at annual, decadal and 50 year changes.
Noticeable climate change is the norm, especially in the annual and decadal record. I am currently researching the period 1200 to 1400 AD, and this transitional period between the MWP and the LIA (it oscillates between warm and cold) shows some extraordinary temperature changes and notable periods of extreme weather.

“I cannot help thinking that it is crazy to even try to pretend that a single number can be used to model the climate. ”
That is not the point of the sensitivity parameter. There is no point to the sensitivity parameter, There is certainly no scientific point to the sensitivity parameter. There is no point to the sensitivity parameter other than propaganda.

Typo at figure 3: it says (period) 1922-1956, where it should be 1922-1947.
Could you please fix it? I’d like to save that graph.

Henry Clark

An investigation of climate sensitivities can look at a broader range of timeframes, without being so heavily based on data from too-local or too-untrustworthy sources:
“The 11-year solar cycle (averaged over the past 300 years).
Warming over the 20th century
Warming since the last glacial maximum (i.e, 20,000 years ago)
Cooling from the Eocene to the present epoch
Cooling from the mid-Cretaceous
Comparison between the Phanerozoic temperature variations (over the past 550 million years) and the different CO2 reconstructions
Comparison between the Phanerozoic temperature variations and the cosmic ray flux reaching the Earth”

And the result is
Regarding this:
“All of the 2 K global warming since 1750 could be simply a slow and intermittent recovery of global temperatures following the Little Ice Age.”
There are important features within the climate record since then. Why the “downtrend of temperature since 1938 [which] has come nearly halfway back to the chill of the Little Ice Age 300 years ago,” as remarked on and plotted in a 1976 National Geographic issue during the global cooling scare? Why is there quite a large spike of glacial advance in the very early 19th century (the PDO doesn’t come close to explaining it, as can be seen when the Andes glacial history is plotted)? Those, plus much more, are illustrated and explainable in the context of the plots in my usual
As Usoskin noted in a study of Ti-44 from space meteorites (produced by cosmic ray bombardment and originating far above Earth’s weather), “there was indeed an increase in solar activity over the last 100 years”.

Greg Goodman

Heber Rizzo says:
April 20, 2014 at 2:03 am
Typo at figure 3: it says (period) 1922-1956, where it should be 1922-1947.
Could you please fix it? I’d like to save that graph.
Well spotted. A quick cut and paste with Gimp fixes it 😉

Thanks, Greg. Saved


The IPCC, in their wisdom, do not look for any natural variations in climate. All variations, as far as they are concerned, are human caused.
I do not know what planet they live on but it is NOT earth.


Also, over the past 120 years, representing two full cycles of the Pacific Decadal Oscillation,…

What is the PDO? When was it discovered? It was discovered after the setting up of the IPCC. If the oceans can eat global warming why can’t they burp it out? Buuuurrrp.

Abstract – 2009
Compo G. P. and P. D. Sardeshmukh
Oceanic influences on recent continental warming.
“Evidence is presented that the recent worldwide land warming has occurred largely in response to a worldwide warming of the oceans rather than as a direct response to increasing greenhouse gases (GHGs) over land. Atmospheric model simulations of the last half-century with prescribed observed ocean temperature changes, but without prescribed GHG changes, account for most of the land warming. The oceanic influence has occurred through hydrodynamic-radiative teleconnections, primarily by moistening and warming the air over land and increasing the downward longwave radiation at the surface. The oceans may themselves have warmed from a combination of natural and anthropogenic influences.”
Clim. Dyn., 32 (2-3), 333-342. doi:10.1007/s00382-008-0448-9
Abstract – 1997
A Pacific Interdecadal Climate Oscillation with Impacts on Salmon Production.
Evidence gleaned from the instrumental record of climate data identifies a robust, recurring pattern of ocean-atmosphere climate variability centered over the midlatitude North Pacific basin. Over the past century, the amplitude of this climate pattern has varied irregularly at interannual-to-interdecadal timescales. There is evidence of reversals in the prevailing polarity of the oscillation occurring around 1925, 1947, and 1977; the last two reversals correspond to dramatic shifts in salmon production regimes in the North Pacific Ocean. This climate pattern also affects coastal sea and continental surface air temperatures, as well as streamflow in major west coast river systems, from Alaska to California.

“Global Warming as a Natural Response to Cloud Changes Associated with the Pacific Decadal Oscillation (PDO)”
… The main arguments for global warming being manmade go something like this: “What else COULD it be? After all, we know that increasing carbon dioxide concentrations are sufficient to explain recent warming, so what’s the point of looking for any other cause?”
But for those who have followed my writings and publications in the last 18 months (e.g. Spencer et al., 2007; Spencer, 2008), you know that we are finding satellite evidence that the climate system is much less sensitive to greenhouse gas emissions than the U.N.’s Intergovernmental Panel on Climate Change (IPCC, 2007) climate models suggest. And if that is true, then mankind’s CO2 emissions are not strong enough to have caused the global warming we’ve seen over the last 100 years…

Richard M

Once again we have an article based on the questionable surface data. When is someone going to produce a “pure” dataset? Yes, there are reasons for adjustments; however, it is quite likely that over time the various errors cancel. I would love for someone to produce a simple, raw dataset that could be used in discussions. I have no idea where to find the data, and maybe that is the problem, but I can dream.

Bill Illis

HadCET, starting in 1659, provides us with a nice example to reflect on in terms of what natural variability looks like in an era with NO GHG forcing.
The Industrial Revolution did not start until 1765, and CO2 did not start increasing until afterward (it was only 278 ppm in 1765 – roughly the same number it had been for the previous 4,000 years).
How do the annual temps in HadCET from 1659 to 1765 (107 years) compare to HadCET from 1907 to 2013 (the same 107 years)?
I see the same type of variability. I see temps in 2013 being no different than temps in any period from 1659 to 1765.
[There may be a large response to a volcano in Japan in 1739 which is much larger than the imprint of any other volcano (including Tambora) in the HadCET record – noted as the great Irish frosts].
For those who like high resolution data, here is the monthly temp comparison.
A point being, there is no change in the extent of extremes. Monthly temps can vary by +4.0C/-6.0C and that has not changed at all in the two periods.
Pretty hard to pick out a global warming signal when there is no real difference between 2.5 W/m2 of GHG forcing and 0.0 W/m2 of GHG forcing.
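The comparison Bill Illis describes – trend and monthly extremes over two equal-length windows – can be sketched in a few lines. This is a minimal illustration on synthetic stand-in data; the real exercise would load the Met Office CET monthly files.

```python
import numpy as np

rng = np.random.default_rng(0)

def trend_and_range(monthly_temps):
    """Least-squares linear trend (K/century) plus the min and max
    monthly anomaly for one window of monthly temperatures."""
    months = np.arange(len(monthly_temps))
    slope, _ = np.polyfit(months, monthly_temps, 1)  # K per month
    return slope * 12 * 100, monthly_temps.min(), monthly_temps.max()

# Hypothetical stand-ins for HadCET 1659-1765 and 1907-2013
# (107 years of monthly anomalies each).
early = rng.normal(0.0, 1.5, 107 * 12)
late = rng.normal(0.3, 1.5, 107 * 12)

for name, window in (("1659-1765", early), ("1907-2013", late)):
    t, lo, hi = trend_and_range(window)
    print(f"{name}: trend {t:+.2f} K/century, "
          f"monthly range {lo:+.1f} to {hi:+.1f} K")
```

With the actual data substituted, the claim in the comment is simply that the two printed ranges are about the same (+4/-6 K) in both windows.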

Ron Clutz

@Eric Simpson
Yes, alarmists will dismiss Dr. Happer as being in the pay of Big Oil. Then they will turn around and appeal to those same companies as supporting man-made global warming. Here’s an appeal for indoctrinating school children in Wyoming:
Editorial board: Join energy industries and admit climate change exists
A better source about the biosphere’s need for CO2 is here:
“Atmospheric CO2 concentration is just barely above the life-sustaining levels of 150 ppm. For life to have real buffer against mass extinction, CO2 needs to be closer to 1000 ppm.”

Warren in Minnesota

Note, Heber Rizzo: the range in Figure 3 is 1922-1946, not 1922-1947.


Richard M says:
April 20, 2014 at 5:53 am
“I would love for someone to produce a simple, raw dataset that could be used in discussions.”
And here is where Steven Mosher and his BEST collaborators could do the world a real service. To go with their gridded, homogenized, adjusted BEST data set, they could produce an even more valuable WORST data set: ungridded, unadjusted, unhomogenized – a simple monthly average of the available data. Later would come intermediate sets with increasing “betterments,” the most desirable of which would be a simple geographic gridding of the WORST data. It would be interesting to see which data sets were most in demand.
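The proposed WORST aggregate is conceptually trivial – the plain mean of whatever stations reported each month, with no gridding or adjustment. A minimal sketch, using hypothetical station names and values:

```python
import pandas as pd

# "WORST"-style aggregate: no gridding, no adjustment, no
# homogenization - just the mean of whatever stations reported
# each month. Stations and values here are hypothetical.
raw = pd.DataFrame({
    "station": ["A", "A", "B", "B", "C"],
    "month":   ["1950-01", "1950-02", "1950-01", "1950-02", "1950-01"],
    "temp_c":  [3.1, 4.0, 2.5, 3.6, 1.9],
})

worst = raw.groupby("month")["temp_c"].mean()
print(worst)
```

The first “betterment” – geographic gridding – would average stations within each grid cell first and then average the cells, so that regions dense with stations no longer dominate the mean.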

Mark Bofill

Goldie says:
April 20, 2014 at 1:14 am

Umm, why did he select 125 years?

Good question. He published some material, I think a year ago, about his ideas on the difference between weather, macroweather, and climate – at least I think those are what he called them. There was a discussion about it at Judith Curry’s; Willis had some interesting comments. I’d link, except I gotta run and do the Easter thing. But anyway, I think the 125 years is important to his arguments because of this, somehow. I’ve been looking at Dr. Lovejoy’s arguments in my minuscule spare time, and I don’t claim expertise or even familiarity, but I think this has something to do with it anyway.

Evan Jones

Can we deduce climate sensitivity from temperature?
What we can do, melord, is what you and others (Including the IPCC) have already done:
Take HadCRUT4 back to 1950 and look at the slope from that point. Now, yes, the work by Anthony et al. (of which I’m a co-author – and let’s not leave out the surfacestations volunteers) demonstrates how seriously poor microsite significantly exaggerates trends, a factor quite unaccounted for. And yes, how and why homogenization essentially eliminates the low-trend “outliers” (i.e., the well sited stations). And, yes, extraordinary claims require extraordinary proof – and that, we got.
But we can stipulate that HadCRUT4’s 0.7C rise since 1950 is, for purposes of argument, correct. And we can then stipulate that 100% of that warming is anthropogenic, and furthermore that anthropogenic warming is 100% due to CO2 increase (neither of which even the IPCC claims). But let’s stipulate it. That gives us a high-end bound for CO2 forcing.
1950 is a proper starting point because it begins right at the point from when CO2 became a significant factor and (very important) neatly encapsulates both positive and negative PDO.
As you and others have indicated, that is a warming of 1.1C per century (after a 30% increase in CO2 which, as is not controversial, has a continually diminishing effect).
Unless there are unknown, unaccounted for factors (quien sabe?), this leaves us, pretty much, with the mild warming of Arrhenius (1906). We’ll leave solar effect (unknown) aside for our current purposes.
That is the upper bound of what is currently on record. And after Anthony et al. are through, we’ll likely knock that down by at least, as Krako put it to Captain Kirk, “A thoid. Skimmed right off the top.”
Having established an upper bound, we do not have all the answers (melord knows). Yet it appears to me we have certainly established a basis for policymaking (i.e., NOT) — at least until next Tuesday when the next (as yet unknown) factor crops up, at which point, we shall reassess.
One recalls Reagan’s comment regarding negotiations with the Soviets: “Don’t just do something. Stand there!”
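Taking the comment’s stipulations at face value, the arithmetic behind the upper bound can be sketched in a few lines. The numbers (0.7 K since 1950, a 30% CO2 rise, the 2014 end year) are the round figures stipulated above, not a sensitivity calculation from first principles:

```python
import math

rise_k = 0.7            # stipulated HadCRUT4 rise since 1950
years = 2014 - 1950     # 64 years
co2_gain = 1.30         # stipulated 30% CO2 increase

# Per-century rate of the stipulated rise.
per_century = rise_k / years * 100

# CO2 forcing is logarithmic, so a 30% rise is log2(1.3) doublings;
# attributing ALL the warming to CO2 gives the high-end bound.
doublings = math.log(co2_gain, 2)
per_doubling = rise_k / doublings

print(f"{per_century:.1f} K/century, {per_doubling:.1f} K per CO2 doubling")
```

Under those stipulations the result is roughly 1.1 K/century – consistent with the figure quoted in the comment – and a transient bound on the order of the mild Arrhenius (1906) warming.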

Evan Jones

they could produce an even more valuable WORST data set, ungridded, unadjusted, unhomogenized, a simple monthly average of the available data.
Even that will show more warming than has actually occurred: TOBS-bias will spuriously knock the Tmean trend numbers down, yes, as will equipment conversion by a bit. But failure to account for microsite spuriously increases those numbers by almost twice that amount.
It is the well sited, unmoved stations without TOBS bias (raw, plus a bump up for MMTS conversion) that will give us the correct result, to the extent of our current knowledge.
That result will be considerably lower than even the WORST of it.

You want the simple answer? Then it is NO!
The alarmists will not even address the actual chemical/gas makeup of the entire global atmosphere, with percentages of each gas or chemical. Why? Could it be they do not know?


Monckton of Brenchley: “Also, over the past 120 years, representing two full cycles of the Pacific Decadal Oscillation, its trend is within 0.01 K of the trend on the mean of the GISS, HadCRUT4 and NCDC global terrestrial datasets. It is not entirely without value.”
I agree that Lovejoy et al. is execrable. And I agree that in the present context the Central England Temperature Index “is not entirely without value.” But some months ago when I went through an exercise similar to Lord M.’s above, for a similar reason, I found the difference between the 125-year trend of CET and that of (I believe it was) HadCrut3 to be nearly 0.2 K/century for the period that ended in August of 2009: the CET trend was nearly half again as high as the HadCrut3 trend for that period.
So, unless my math is faulty (always a possibility), it seems to me that the above excerpt gives the impression that CET is a better proxy for global trends than it really is. No doubt it’s better than what Lovejoy used, and I still find its variations instructive, but not to the extent that readers might have inferred from that excerpt.
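The exercise described here – comparing the 125-year least-squares trends of a regional and a global series over the same window – is easy to reproduce. A minimal sketch on synthetic stand-in series (the slopes chosen here are hypothetical; the real CET and HadCRUT monthly anomalies would be substituted):

```python
import numpy as np

def trend_k_per_century(series):
    """Least-squares trend of a monthly series, in K/century."""
    x = np.arange(len(series))
    slope = np.polyfit(x, series, 1)[0]  # K per month
    return slope * 1200

# Hypothetical monthly anomalies for a 125-year (1500-month) window:
# a noisier regional series with a steeper underlying trend, and a
# smoother global series with a shallower one.
rng = np.random.default_rng(1)
months = np.arange(1500)
cet = 0.6 / 1200 * months + rng.normal(0, 1.0, 1500)
hadcrut = 0.4 / 1200 * months + rng.normal(0, 0.3, 1500)

diff = trend_k_per_century(cet) - trend_k_per_century(hadcrut)
print(f"CET minus global trend: {diff:+.2f} K/century")
```

A difference of ~0.2 K/century, as the commenter reports finding for the window ending August 2009, would indeed make the regional trend “nearly half again as high” when the global trend is around 0.4 K/century.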

I am a hard-science guy – manufactured chemicals and semiconductors. I have a question for all you researchers: it appears that temperature records from many sources are being used. What are the inaccuracies of the measurement equipment used over all those decades and centuries?
A very small delta factor can disrupt the entire theory, can it not?
0.07% is a small number. My electronic test gear was +- 99% accurate; now, if I tested enough times, would that number become more or less valid?

Evan Jones

what are the inaccuracies of the measurement equipment used in all the decades and centuries?
CRS, the traditional common equipment, in and of itself produces considerably higher trends than MMTS. MMTS has been determined to be the more accurate of the two. There is also a step change upon conversion, which obviously affects trend. We account for this in our study, and also examine MMTS and CRS in isolation.


The only interesting period concerning any potential influence on global temperatures of our CO2 emissions is the one starting in the mid 1970s. There is no point looking for any signals of such influence earlier than that, seeing that even the IPCC concurs that there would be no such signals before about 1950, and there was no global warming between 1950 and 1977.
In other words, all that really needs explanation here is the global temperature rise between 1976/77 and 2001/02.
This period has been the focus of Bob Tisdale for the last 5 years or so. And he has shown pretty clearly through the available observational data from the real earth system that there is no need at all to look outside the processes behind the natural warming of the global oceans to explain the modern global warming period.
Between 1970 and 2014, global temperatures have shifted up permanently relative to the NINO3.4 SSTa on only three abrupt occasions: in 1978/79, in 1988 and in 1998.
During the remaining 40+ years? Nothing, except two major volcanic eruptions and some noise here and there globally.
Tisdale has thoroughly shown how the two latest upward shifts (in 1988 and 1998) originated in the western Pacific, that is, well outside the NINO3.4 region, but still most definitely inside the greater ENSO region.
The first shift comes at the heels of the Great Pacific Climate Shift of 1976/77, when the East Pacific outside the tropical regions all of a sudden experienced a major relative warming, as a result of the preceding downward shift in the SO index, that is, the abrupt and significant fall in the pressure gradient across the tropical Pacific basin.
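The “abrupt upward shift” Kristian describes – a step change in the global-minus-NINO3.4 residual – can be estimated crudely as the difference in mean level on either side of a candidate breakpoint. A minimal sketch on synthetic data with a built-in +0.2 K step (the breakpoint index, window length, and step size are all hypothetical):

```python
import numpy as np

def step_size(series, break_idx, window=60):
    """Mean level over `window` months after a candidate breakpoint,
    minus the mean level over `window` months before it."""
    before = series[break_idx - window:break_idx].mean()
    after = series[break_idx:break_idx + window].mean()
    return after - before

# Hypothetical global-minus-NINO3.4 residual: noise plus an abrupt
# +0.2 K shift at month 300 (standing in for, say, the 1988 shift).
rng = np.random.default_rng(2)
resid = rng.normal(0, 0.1, 600)
resid[300:] += 0.2

print(f"estimated shift: {step_size(resid, 300):+.2f} K")
```

Applied to the actual residual series, the three shifts claimed above would show up as step estimates well clear of the noise, with near-zero estimates everywhere else.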

Evan Jones

Kristian says:
April 20, 2014 at 12:20 pm (Edit)
The only interesting period concerning any potential influence on global temperatures of our CO2 emissions is the one starting in the mid 1970s. There is no point looking for any signals of such influence earlier than that, seeing that even the IPCC concurs that there would be no such signals before about 1950, and there was no global warming between 1950 and 1977:

You are missing an essential point. 1950 – 1977 was a negative PDO period. But instead of sharp cooling, it was flatline-to-mild. That’s because mild CO2 forcing was counteracting most of the cooling. But from 1977 – 2007, there was a positive PDO and only around half that warming at most could have been from any CO2 forcing.
Add it all up and you get a warming of 0.11C (adjusted data) per decade since 1950. Not all of that is anthropogenic (though I suppose half or more is), and not all anthropogenic warming is CO2-induced (think Arctic soot).
Anthropogenic CO2 increase has been relatively constant from 1950 on – and 1950 is right when it really took off.
So 1950 is the perfect start date from that perspective: one full cycle (plus and minus) of PDO. (Except for the metadata record issues like TOBS and moves, etc., which get spottier the further back you go.)
That’s the top-down approach. NOT including microsite . . .