By Christopher Monckton of Brenchley
Central to Professor Lovejoy’s paper attempting to determine climate sensitivity from recent temperature trends is the notion that in any 125-year period uninfluenced by anthropogenic forcings there is only a 10% probability of a global temperature trend greater than +0.25 K or less than –0.25 K.
Like most of the hypotheses that underpin climate panic, this one is calculatedly untestable. The oldest of the global temperature datasets – HadCRUT4 – starts only in 1850, so that the end of the earliest possible 125-year period in that dataset is 1974, well into the post-1950 era of potential anthropogenic influence.
However, the oldest regional instrumental dataset, the Central England Temperature Record, dates back to 1659. It may give us some pointers.
The CET record has its drawbacks. It is regional rather than global, and its earliest temperature data have a resolution no better than 0.5-1.0 K. However, its area of coverage lies in the northern mid-latitudes. Also, over the past 120 years, representing two full cycles of the Pacific Decadal Oscillation, its trend is within 0.01 K of the trend on the mean of the GISS, HadCRUT4 and NCDC global terrestrial datasets. It is not entirely without value.
I took trends on 166 successive 125-year periods from 1659-1784 to 1824-1949. Of these, 57, or 34%, exhibited trends greater than 0.25 K in absolute value (Table 1).
Table 1. Least-squares linear-regression trends (K) on the monthly mean regional surface temperature anomalies from the Central England Temperature dataset for 166 successive 125-year periods from 1659-1784 to 1824-1949. Of these periods, 57 (or 34%) show temperature trends greater than 0.25 K in absolute value.
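The counting exercise behind Table 1 can be sketched in a few lines. The series below is synthetic red noise, not the real CET data, so the printed percentage is illustrative only; the window arithmetic, however, mirrors the 166 overlapping 125-year periods described above:

```python
import numpy as np

def window_trends(temps, years, window=125):
    """Least-squares trend over every overlapping `window`-year period,
    expressed as the total change (K) implied over the period."""
    out = []
    for i in range(len(years) - window + 1):
        slope = np.polyfit(years[i:i + window],
                           temps[i:i + window], 1)[0]  # K per year
        out.append(slope * (window - 1))               # total K over window
    return np.array(out)

# Synthetic stand-in for annual CET means, 1659-1948 (290 values)
rng = np.random.default_rng(0)
years = np.arange(1659, 1949)
temps = np.cumsum(rng.normal(0.0, 0.08, years.size))  # red-noise walk
trends = window_trends(temps, years)
share = np.mean(np.abs(trends) > 0.25)
print(f"{trends.size} windows, {100 * share:.0f}% exceed 0.25 K")
```

With 290 annual values and a 125-year window there are exactly 166 overlapping periods, matching the count in the table.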
Most of the 125-year periods exhibiting a substantial absolute trend occur at the beginning or the end of the interval tested. The trends in the earlier periods capture the recovery from the Little Ice Age, which independent historical records show was rapid. In the later periods the trends capture the rapid warming from 1910-1945.
Subject to the cautions about the data that I have mentioned, the finding that more than a third of all 125-year periods terminating before the anthropogenic influence on global climate began in 1950 showed substantial trends suggests that 125-year periods of substantial temperature change may be at least thrice as frequent as Professor Lovejoy assumed.
Taken with the many other defects in the Professor’s recent paper – notably his assumption that the temperature datasets on which he relied had very small error intervals when in fact they have large error intervals that increase the further back one goes – his assumption that rapid temperature change is rare casts more than a little doubt on his contention that one can determine climate sensitivity from the recent temperature record.
How, then, can we determine how much of the 20th-century warming was natural? The answer, like it or not, is that we can’t. But let us assume, ad argumentum and per impossibile, that the temperature datasets are accurate. Then one way to check the IPCC’s story-line is to study its values of the climate-sensitivity parameter over various periods (Table 2).
Table 2. IPCC’s values for the climate-sensitivity parameter
Broadly speaking, the value of the climate-sensitivity parameter is independent of the cause of the direct warming that triggers the feedbacks that change its value. Whatever the cause of the warming, little error arises by assuming the feedbacks in response to it will be about the same as they would be in response to forcings of equal magnitude from any other cause.
The IPCC says there has been 2.3 W m⁻² of anthropogenic forcing since 1750, and little natural forcing. In that event, the climate-sensitivity parameter is simply the 0.9 K warming since 1750 divided by 2.3 W m⁻², or 0.4 K W⁻¹ m². Since most of the forcing since 1750 has occurred in the past century, that value is in the right ballpark, roughly equal to the centennial sensitivity parameter shown in Table 2.
Next, we break the calculation down. Before 1950, according to the IPCC, the total anthropogenic forcing was 0.6 W m⁻². Warming from 1750-1949 was 0.45 K. So the pre-1950 climate-sensitivity parameter was 0.75 K W⁻¹ m², somewhat on the high side, suggesting that some of the pre-1950 warming was natural.
How much of it was natural? Dividing 0.45 K of pre-1950 warming by the 200-year sensitivity parameter 0.5 K W⁻¹ m² gives 0.9 W m⁻². If IPCC (2013) is correct in saying 0.6 W m⁻² was anthropogenic, then 0.3 W m⁻² was natural.
From 1950 to 2011, there was 1.7 W m⁻² of anthropogenic forcing, according to the IPCC. The linear temperature trend on the data from 1950-2011 is 0.7 K. Divide that by 1.7 W m⁻² to give a plausible 0.4 K W⁻¹ m², again equivalent to the IPCC’s centennial sensitivity parameter, but this time under the assumption that none of the global warming since 1950 was natural.
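The three divisions above reduce to one-line arithmetic. A quick check of the quoted ratios, using the article's own figures:

```python
# Climate-sensitivity parameter = warming (K) / forcing (W m^-2),
# using the IPCC figures quoted in the text above.
def sensitivity(delta_t_K, forcing_Wm2):
    return delta_t_K / forcing_Wm2

since_1750 = sensitivity(0.9, 2.3)   # full period since 1750
pre_1950 = sensitivity(0.45, 0.6)    # 1750-1949
post_1950 = sensitivity(0.7, 1.7)    # 1950-2011

# Implied natural forcing before 1950: total forcing from the
# 200-year parameter (0.5 K W^-1 m^2) minus the anthropogenic share.
natural = 0.45 / 0.5 - 0.6

print(round(since_1750, 2), round(pre_1950, 2),
      round(post_1950, 2), round(natural, 2))  # 0.39 0.75 0.41 0.3
```

The first and third ratios land at roughly 0.4 K W⁻¹ m², and the pre-1950 residual of 0.3 W m⁻² matches the figure in the paragraph above.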
This story-line, as far as it goes, seems plausible. But the plausibility is entirely specious. It was achieved by the simplest of methods. Since 1990, the IPCC has all but halved the anthropogenic radiative forcing to make it appear that its dead theory is still alive.
In 1990, the IPCC predicted that the anthropogenic forcing from greenhouse gases since 1765 would amount to 4 W m⁻² on business as usual by 2014 (Fig. 1).
Figure 1. Projected anthropogenic greenhouse-gas forcings, 1990-2100 (IPCC, 1990).
However, with only 0.9 K global warming since the industrial revolution began, the implicit climate-sensitivity parameter would have been 0.9 / 4 = 0.23 K W⁻¹ m², or well below even the instantaneous value. That is only half the 0.4-0.5 K W⁻¹ m² that one would expect if the IPCC’s implicit centennial and bicentennial values for the parameter (Table 2) are correct.
In 1990 the IPCC still had moments of honesty. It admitted that the magnitude and even the sign of the forcing from anthropogenic particulate aerosol emissions (soot to you and me) was unknown.
Gradually, however, the IPCC found it expedient to offset not just some but all of the CO2 radiative forcing with a putative negative forcing from particulate aerosols. Only by this device could it continue to maintain that its very high centennial, bicentennial, and equilibrium values for the climate-sensitivity parameter were plausible.
Fig. 2 shows the extent of the tampering. The positive forcing from CO2 emissions and the negative forcing from anthropogenic aerosols are visibly near-identical.
Figure 2. Positive forcings (left panel) and negative forcings 1950-2008 (Murphy et al., 2009).
As if that were not bad enough, the curve of global warming in the instrumental era exhibits 60-year cycles, following the ~30-year cooling and ~30-year warming phases of the Pacific Decadal Oscillation (Fig. 3). This oscillation appears to have a far greater influence on global temperature, at least in the short to medium term, than any anthropogenic forcing.
The “settled science” of the IPCC cannot yet explain what causes the ~60-year cycles of the PDO, but their influence on global temperature is plainly visible in Fig. 3.
Figure 3. Monthly global mean surface temperature anomalies and trend, January 1890 to February 2014, as the mean of the GISS, HadCRUT4 and NCDC global mean surface temperature anomalies, with sub-trends during the negative or cooling (green) and positive or warming (red) phases of the Pacific Decadal Oscillation. Phase dates are provided by the Joint Institute for the Study of the Atmosphere and Ocean at the University of Washington: http://jisao.washington.edu/pdo/. Anthropogenic radiative forcings are apportionments of the 2.3 W m⁻² anthropogenic forcing from 1750-2011, based on IPCC (2013, Fig. SPM.5).
Startlingly, there have only been three periods of global warming in the instrumental record since 1659. They were the 40 years 1694-1733, before the industrial revolution had even begun, with a warming trend of +1.7 K as solar activity picked up after the Maunder Minimum; the 22 years 1925-1946, with a warming trend of +0.3 K, in phase with the PDO; and the 24 years 1977-2000, with a warming trend of +0.6 K, also in phase with the PDO.
Table 3. Periods of cooling (blue), warming (red), and no trend (green) since 1659. Subject to uncertainties in the Central England Temperature Record, there may have been more warming in the 91 years preceding 1750 than in the two and a half centuries thereafter.
There was a single period of cooling, –0.6 K, in the 35 years 1659-1693 during the Maunder Minimum. The 191 years 1734-1924, industrial revolution or no industrial revolution, showed no trend; nor was there any trend during the negative or cooling phases of the PDO in the 30 years 1947-1976 or in the 13 years since 2001.
Table 3 summarizes the position. All of the ~0.9 K global warming since 1750 could be simply a slow and intermittent recovery of global temperatures following the Little Ice Age.
There is a discrepancy between the near-linear projected increase in anthropogenic radiative forcing (Fig. 1) and the three distinct periods of global warming since 1659, the greatest of which preceded the industrial revolution and was almost twice the total warming since 1750.
No satisfactory mechanism has been definitively demonstrated that explains why the PDO operates in phases, still less why all of the global warming since 1750 should have shown itself only during the PDO’s positive or warming phases.
A proper understanding of climate sensitivity depends heavily upon the magnitude of the anthropogenic radiative forcing, but since 1990 the IPCC has almost halved that magnitude, from 4 to 2.3 W m⁻².
To determine climate sensitivity from temperature change, one would need to know the temperature change to a sufficient precision. However, just as the radiative forcing has been tampered with to fit the theory, so the temperature records have been tampered with to fit the theory.
Since just about every adjustment in global temperature over time has had the effect of making 20th-century warming seem steeper than it was, however superficially plausible the explanations for the adjustments may be, all may not be well.
In any event, since the published early-20th-century error interval is of the same order of magnitude as the entire global warming from all causes since 1750, it is self-evident that attempting to derive climate sensitivity from the global temperature trends is self-defeating. It cannot be done.
The bottom line is that the pattern of global warming, clustered in three distinct periods the first and greatest of which preceded any possible anthropogenic influence, fits more closely with stochastic natural variability than with the slow, inexorable increase in anthropogenic forcing predicted by the IPCC.
The IPCC has not only slashed its near-term temperature projections (which are probably still excessive: it is quite possible that we shall see no global warming for another 20 years): it has also cut its estimate of net business-as-usual anthropogenic radiative forcing by almost half. Inch by inch, hissing and spitting, it retreats and hopes in vain that no one will notice, while continuing to yell, “The sky is falling! The sky is falling!”.
A happy Easter to one and all.
The IPCC’s concept of ‘Forcing’ is unscientific, based on the fallacy that the atmospheric Thermal Radiation Field, expressed in W/m^2, is a real energy flux. Far too many imagine it is a heat flow, which is to misinterpret radiative physics completely.
Net radiative heat flux to the surface is the negative of the difference between the surface and atmospheric TRFs – the net energy in the electromagnetic domain emitted from the surface.
As the atmospheric TRF (aka ‘Forcing’, ‘back radiation’) increases, that energy flux decreases. Surface temperature rises because less net IR means more convection and evapo-transpiration is needed to ensure constant total energy flux to atmosphere and space.
Only when we slay the two dragons of this fundamental mistake in IR physics and the consequential tripling of real GHE can Climate Alchemy become Climate Science…….:o)
Umm, why did he select 125 years?
I am just a simple lad who had his last physics lesson in 1945. Looking at figure 3 I notice that, in the period covered during which CO2 has been increasing, for 78 of the years temperature was falling and for 46 of the years it was rising. Is CO2 a cooling gas? Or am I just a cherry-picker?
Last year I wrote an article ‘noticeable climate change’ which took my extended CET record to 1538 and looked at annual, decadal and 50 year changes.
http://judithcurry.com/2013/06/26/noticeable-climate-change/
Noticeable climate change is the norm, especially in the annual and decadal record. I am currently researching the period 1200 to 1400 AD, and this transitional period between the MWP and LIA (it oscillates between warm and cold) shows some extraordinary temperature changes and notable periods of extreme weather.
tonyb
“I cannot help thinking that it is crazy to even try to pretend that a single number can be used to model the climate. ”
That is not the point of the sensitivity parameter. There is no point to the sensitivity parameter; there is certainly no scientific point to the sensitivity parameter. There is no point to the sensitivity parameter other than propaganda.
Typo at figure 3: it says (period) 1922-1956, where it should be 1922-1947.
Could you please fix it? I’d like to save that graph
An investigation of climate sensitivities can look at a broader range of timeframes, while being not as heavily based on a combination of data from too-local sources and from too-untrustworthy sources:
“The 11-year solar cycle (averaged over the past 300 years).
Warming over the 20th century
Warming since the last glacial maximum (i.e., 20,000 years ago)
Cooling from the Eocene to the present epoch
Cooling from the mid-Cretaceous
Comparison between the Phanerozoic temperature variations (over the past 550 million years) and the different CO2 reconstructions
Comparison between the Phanerozoic temperature variations and the cosmic ray flux reaching the Earth”
And the result is http://www.sciencebits.com/OnClimateSensitivity
Regarding this:
“All of the 2 K global warming since 1750 could be simply a slow and intermittent recovery of global temperatures following the Little Ice Age.”
There are important features within the climate record since then. Why a “downtrend of temperature since 1938 [which] has come nearly halfway back to the chill of the Little Ice Age 300 years ago,” as remarked on and plotted in a 1976 National Geographic issue during the global cooling scare? Why is there quite a large spike of glacial advance in the very early 19th century (which the PDO does not come close to explaining, as can be seen when the Andes glacial history is plotted)? Those plus much more are illustrated and explainable in the context of the plots in my usual http://tinyurl.com/nbnh7hq
As Usoskin noted in a study of Ti-44 from space meteorites (produced by cosmic ray bombardment and originating far above Earth weather), “there was indeed an increase in solar activity over the last 100 years” ( http://www.space.com/2942-sun-activity-increased-century-study-confirms.html ).
Heber Rizzo says:
April 20, 2014 at 2:03 am
Typo at figure 3: it says (period) 1922-1956, where it should be 1922-1947.
Could you please fix it? I’d like to save that graph
====
Well spotted. A quick cut and paste with Gimp fixes it 😉
http://i59.tinypic.com/i5anmh.png
Thanks, Greg. Saved
The IPCC, in their wisdom, do not look for any natural variations in climate. All variations, as far as they are concerned, are human caused.
I do not know what planet they live on but it is NOT earth.
What is the PDO? When was it discovered? It was discovered after the setting up of the IPCC. If the oceans can eat global warming why can’t they burp it out? Buuuurrrp.
DR. ROY SPENCER – 2008
“Global Warming as a Natural Response to Cloud Changes Associated with the Pacific Decadal Oscillation (PDO)”
…….The main arguments for global warming being manmade go something like this: “What else COULD it be? After all, we know that increasing carbon dioxide concentrations are sufficient to explain recent warming, so what’s the point of looking for any other cause?”
But for those who have followed my writings and publications in the last 18 months (e.g. Spencer et al., 2007; Spencer, 2008), you know that we are finding satellite evidence that the climate system is much less sensitive to greenhouse gas emissions than the U.N.’s Intergovernmental Panel on Climate Change (IPCC, 2007) climate models suggest that it is. And if that is true, then mankind’s CO2 emissions are not strong enough to have caused the global warming we’ve seen over the last 100 years…….
http://www.drroyspencer.com/research-articles/global-warming-as-a-natural-response/
Once again we have an article based on the questionable surface data. When is someone going to produce a “pure” dataset. Yes, there are reasons for adjustments, however, it is quite likely that over time the various errors cancel. I would love for someone to produce a simple, raw dataset that could be used in discussions. I have no idea where to find the data, and maybe that is the problem, but I can dream.
HadCET in 1659 does provide us with a nice example to reflect on in terms of what is natural variability in an era with NO GHG forcing and what about temperature change.
The Industrial Revolution did not start in 1765 and CO2 did not start increasing until afterward (and it was only 278 ppm in 1765 – roughly the same number it was for the previous 4000 years).
How do the annual temps in HadCET from 1659 to 1765 (107 years) compare to HadCET from 1907 to 2013 (the same 107 years)?
http://s10.postimg.org/rts9530qx/Had_CET_1659_vs_2013_Annual.png
I see the same type of variability. I see temps in 2013 being no different than temps in any period from 1659 to 1765.
[There may be a large response to a volcano in Japan in 1739 which is much larger than the imprint of any other volcano (including Tambora) in the HadCET record – noted as the great Irish frosts].
For those who like high resolution data, here is the monthly temp comparison.
http://s4.postimg.org/4d50ji259/Had_CET_1659_vs_2013_Monthly.png
A point being, there is no change in the extent of extremes. Monthly temps can vary by +4.0C/-6.0C and that has not changed at all in the two periods.
Pretty hard to pick out a global warming signal when there is no real difference between 2.5 W/m2 of GHG forcing and 0.0 W/m2 of GHG forcing.
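The extremes comparison described above can be checked mechanically. Here is a sketch with synthetic monthly anomalies standing in for the two HadCET slices; the real comparison would of course load the actual series:

```python
import numpy as np

def extremes(anoms):
    """Largest positive and negative monthly anomalies (K) in a slice."""
    return round(float(np.max(anoms)), 1), round(float(np.min(anoms)), 1)

# Synthetic stand-ins for two 107-year slices of monthly anomalies
rng = np.random.default_rng(42)
early = rng.normal(0.0, 1.8, 107 * 12)  # hypothetical 1659-1765 slice
late = rng.normal(0.0, 1.8, 107 * 12)   # hypothetical 1907-2013 slice

print("early:", extremes(early), "late:", extremes(late))
```

If the claim holds for the real data, the two printed pairs should be of similar magnitude, roughly the +4.0/−6.0 range quoted above.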
@Eric Simpson
Yes, alarmists will dismiss Dr. Happer as being in the pay of Big Oil. Then they will turn around and appeal to those same companies as supporting man-made global warming. Here’s an appeal for indoctrinating school children in Wyoming:
Editorial board: Join energy industries and admit climate change exists
http://trib.com/opinion/editorial/editorial-board-join-energy-industries-and-admit-climate-change-exists/article_ca4a1bd6-e7d4-5dde-acad-140c21c8067e.html
A better source about the biosphere’s need for CO2 is here:
“Atmospheric CO2 concentration is just barely above the life-sustaining levels of 150 ppm. For life to have real buffer against mass extinction, CO2 needs to be closer to 1000 ppm.”
http://www.nzcpr.com/more-united-nations-carbon-regulations-on-the-way/
Note Heber Rizzo, the range in Figure 3 is 1922-1946 and not 1922-1947.
Warren
Richard M says:
April 20, 2014 at 5:53 am
“I would love for someone to produce a simple, raw dataset that could be used in discussions.”
And here is where Steven Mosher and his BEST collaborators could do the world a real service. To go with their gridded, homogenized, adjusted BEST data set they could produce an even more valuable WORST data set, ungridded, unadjusted, unhomogenized, a simple monthly average of the available data. Later would come, intermediate sets with the increasing “betterments”, the most desirable of which would be simple geographic gridding of the WORST data. It would be interesting to see which data sets were most in demand.
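A “WORST” aggregation of the kind described would be almost trivially simple. Here is a minimal sketch, assuming a hypothetical list of raw readings; the real exercise would read the actual station files:

```python
from collections import defaultdict

def worst_monthly_means(readings):
    """Simple average of all available raw readings per (year, month):
    no gridding, no adjustment, no homogenization.
    `readings` is an iterable of (station_id, year, month, temp_C)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for _, year, month, temp in readings:
        sums[(year, month)] += temp
        counts[(year, month)] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Hypothetical raw readings from three stations
raw = [("A", 1950, 1, 2.0), ("B", 1950, 1, 4.0),
       ("C", 1950, 1, 6.0), ("A", 1950, 2, 3.0)]
means = worst_monthly_means(raw)
print(means[(1950, 1)])  # 4.0
```

The intermediate “betterments” mentioned above (gridding first, then homogenization) would each wrap a further step around this baseline.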
Goldie says:
April 20, 2014 at 1:14 am
Good question. He published stuff I think a year ago about his ideas about the difference between weather, macroweather, and climate. At least I think those are what he called them. There was a discussion about it at Judith Curry’s, Willis had some interesting comments. I’d link except I gotta run and do the Easter thing. But anyway, I think the 125 is important to his arguments because of this. Somehow. I’ve been looking at Dr. Lovejoy’s arguments in my miniscule spare time and I don’t claim expertise or even familiarity, but I think this has something to do with it anyway.
Can we deduce climate sensitivity from temperature?
What we can do, melord, is what you and others (Including the IPCC) have already done:
Take HadCRUt4 back to 1950 and look at the slope from that point. Now, yes, the work by Anthony et al. (of which I’m a co-author, and let’s don’t leave out the surfacestations volunteers) demonstrates how seriously poor microsite — significantly — exaggerates trends, a factor quite unaccounted for. And yes, how and why homogenization essentially eliminates the low-trend “outliers” (i.e., the well sited stations). And, yes, extraordinary claims require extraordinary proof — and that, we got.
But we can stipulate that HadCRUT4’s 0.7C rise since 1950, for purposes of argument, is correct. And we can then stipulate that 100% of that warming is anthropogenic, and furthermore that anthropogenic warming is 100% due to CO2 increase (neither of which even the IPCC claims). But let’s stipulate it. That gives us a high-end bound for CO2 forcing.
1950 is a proper starting point because it begins right at the point from when CO2 became a significant factor and (very important) neatly encapsulates both positive and negative PDO.
As you and others have indicated, that is a warming of 1.1C per century (after a 30% increase in CO2 which, as is not controversial, has a continually diminishing effect).
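The per-century figure follows directly from the stipulated numbers:

```python
# Per-century warming rate from the stipulated figures above.
warming_K = 0.7           # stipulated HadCRUT4 rise, 1950-2011
span_years = 2011 - 1950  # 61 years
rate_per_century = warming_K / span_years * 100
print(round(rate_per_century, 1))  # 1.1
```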
Unless there are unknown, unaccounted for factors (quien sabe?), this leaves us, pretty much, with the mild warming of Arrhenius (1906). We’ll leave solar effect (unknown) aside for our current purposes.
That is the upper bound of what is currently on record. And after Anthony et al. are through, we’ll likely knock that down by at least, as Cracko put it to Captain Kirk, “A thoid. Skimmed right off the top.”
Having established an upper bound, we do not have all the answers (melord knows). Yet it appears to me we have certainly established a basis for policymaking (i.e., NOT) — at least until next Tuesday when the next (as yet unknown) factor crops up, at which point, we shall reassess.
One recalls Reagan’s comment regarding negotiations with the Soviets: “Don’t just do something. Stand there!”
they could produce an even more valuable WORST data set, ungridded, unadjusted, unhomogenized, a simple monthly average of the available data.
Even that will show more warming than has actually occurred: TOBS-bias will spuriously knock the Tmean trend numbers down, yes, as will equipment conversion by a bit. But failure to account for microsite spuriously increases those numbers by almost twice that amount.
It is the well sited, unmoved stations without TOBS bias (raw, plus a bump up for MMTS conversion) that will give us the correct result, to the extent of our current knowledge.
That result will be considerably lower than even the WORST of it.
You want the simple answer? then it is NO!
The alarmist will not even go to actual chemical/gas make up of the entire global atmosphere with percentages of each gas or chemical. WHY? could it be they do not know?
Monckton of Brenchley: “Also, over the past 120 years, representing two full cycles of the Pacific Decadal Oscillation, its trend is within 0.01 K of the trend on the mean of the GISS, HadCRUT4 and NCDC global terrestrial datasets. It is not entirely without value.”
I agree that Lovejoy et al. is execrable. And I agree that in the present context the Central England Temperature Index “is not entirely without value.” But some months ago when I went through an exercise similar to Lord M.’s above, for a similar reason, I found the difference between the 125-year trend of CET and that of (I believe it was) HadCrut3 to be nearly 0.2 K/century for the period that ended in August of 2009: the CET trend was nearly half again as high as the HadCrut3 trend for that period.
So, unless my math is faulty (always a possibility), it seems to me that the above excerpt gives the impression that CET is a better proxy for global trends than it really is. No doubt it’s better than what Lovejoy used, and I still find its variations instructive, but not to the extent that readers might have inferred from that excerpt.
I am a hard science guy – manufactured chemicals and semiconductors. I have a question for all you researchers: temperature records are being used from many sources, so what are the inaccuracies of the measurement equipment used over all those decades and centuries?
A very small delta factor can disrupt the entire theory, can it not?
0.07% is a small number. My electronic test gear was ±99% accurate; now, if I tested enough times, would that number become more or less valid?
what are the inaccuracies of the measurement equipment used in all the decades and centuries?
CRS, the traditional common unit of measure, in and of itself produces considerably higher trends than MMTS. MMTS has been determined to be the more accurate of the two. There’s also a step-change upon conversion, which obviously affects trend. We account for this in our study, and also examine MMTS and CRS in isolation.
The only interesting period concerning any potential influence on global temperatures of our CO2 emissions is the one starting in the mid 1970s. There is no point looking for any signals of such influence earlier than that, seeing that even the IPCC concurs that there would be no such signals before about 1950, and there was no global warming between 1950 and 1977:
http://woodfortrees.org/plot/hadcrut4gl/from:1950/to:1977
In other words, all that really needs explanation here is the global temperature rise between 1976/77 and 2001/02.
This period has been the focus of Bob Tisdale for the last 5 years or so. And he has shown pretty clearly through the available observational data from the real earth system that there is no need at all to look outside the processes behind the natural warming of the global oceans to explain the modern global warming period.
Between 1970 and 2014, global temperatures have shifted up permanently relative to the NINO3.4 SSTa on only three abrupt occasions, in 1978/79, in 1988 and in 1998:
http://i1172.photobucket.com/albums/r565/Keyell/GWexplained_zps566ab681.png
During the remaining 40+ years? Nothing, except two major volcanic eruptions and some noise here and there globally.
Tisdale has thoroughly shown how the two latest upward shifts (in 1988 and 1998) originated in the western Pacific, that is, well outside the NINO3.4 region, but still most definitely inside the greater ENSO region.
The first shift comes on the heels of the Great Pacific Climate Shift of 1976/77, when the East Pacific outside the tropical regions all of a sudden experienced a major relative warming, as a result of the preceding downward shift in the SO index, that is, the abrupt and significant fall in the pressure gradient across the tropical Pacific basin.
Kristian says:
April 20, 2014 at 12:20 pm (Edit)
The only interesting period concerning any potential influence on global temperatures of our CO2 emissions is the one starting in the mid 1970s. There is no point looking for any signals of such influence earlier than that, seeing that even the IPCC concurs that there would be no such signals before about 1950, and there was no global warming between 1950 and 1977:
You are missing an essential point. 1950 – 1977 was a negative PDO period. But instead of sharp cooling, it was flatline-to-mild. That’s because mild CO2 forcing was counteracting most of the cooling. But from 1977 – 2007, there was a positive PDO and only around half that warming at most could have been from any CO2 forcing.
Add it all up and you get a warming of 0.11C (adjusted data) per decade since 1950. Not all of that is anthropogenic (though I suppose half or more is), and not all anthropogenic warming is CO2-induced (think Arctic soot).
Anthropogenic CO2 increase has been relatively constant from 1950 on. And that’s right when it really took off.
So 1950 is the perfect start date from that perspective: one full cycle (plus and minus) of PDO. (Except for the metadata record issues like TOBS and moves, etc., which get spottier the further back you go.)
That’s the top-down approach. NOT including microsite . . .