By Christopher Monckton of Brenchley
According to the RSS satellite data, whose value for May 2014 has just been published, the global warming trend in the 17 years 9 months since September 1996 is zero (Fig. 1). The 213 months without global warming represent more than half the 425-month satellite data record since January 1979. No one now in high school has lived through global warming.
Figure 1. RSS monthly global mean lower-troposphere temperature anomalies (dark blue) and trend (thick bright blue line), September 1996 to May 2014, showing no trend for 17 years 9 months.
The hiatus period of 17 years 9 months is the farthest back one can go in the RSS satellite temperature record and still show a zero trend. But the length of the pause in global warming, significant though it now is, is of less importance than the ever-growing discrepancy between the temperature trends predicted by models and the less exciting real-world temperature change that has been observed.
The First Assessment Report predicted that global temperature would rise by 1.0 [0.7, 1.5] Cº to 2025, equivalent to 2.8 [1.9, 4.2] Cº/century. The executive summary asked, “How much confidence do we have in our predictions?” IPCC pointed out some uncertainties (clouds, oceans, etc.), but concluded:
“Nevertheless, … we have substantial confidence that models can predict at least the broad-scale features of climate change. … There are similarities between results from the coupled models using simple representations of the ocean and those using more sophisticated descriptions, and our understanding of such differences as do occur gives us some confidence in the results.”
That “substantial confidence” was substantial over-confidence. A quarter of a century after 1990, the outturn to date – expressed as the least-squares linear-regression trend on the mean of the GISS, HadCRUT4 and NCDC monthly global mean surface temperature anomalies – is 0.34 Cº, equivalent to just 1.4 Cº/century, or exactly half of the central estimate in IPCC (1990) and well below even the least estimate (Fig. 2).
Figure 2. Medium-term global temperature projections from IPCC (1990), January 1990 to April 2014 (orange region and red trend line), vs. observed anomalies (dark blue) and trend (bright blue) as the mean of the RSS and UAH monthly satellite lower-troposphere temperature anomalies.
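The conversion from a monthly least-squares trend to a centennial rate such as the 1.4 Cº/century quoted above is simple arithmetic: the monthly slope is multiplied by the 1,200 months in a century. A minimal sketch, using an invented anomaly series rather than the real GISS, HadCRUT4 and NCDC data:

```python
import numpy as np

def centennial_trend(anomalies):
    """Least-squares linear trend of monthly anomalies, expressed per century."""
    months = np.arange(len(anomalies))
    slope_per_month = np.polyfit(months, anomalies, 1)[0]
    return slope_per_month * 1200.0  # 1200 months in a century

# Invented, noise-free series warming at exactly 0.14 C/century,
# over the 292 months from January 1990 to April 2014.
t = np.arange(292)
synthetic = 0.14 / 1200.0 * t
print(round(centennial_trend(synthetic), 3))  # -> 0.14
```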
Figure 3. Predicted temperature change since 2005 at a rate equivalent to 1.7 [1.0, 2.3] Cº/century (orange zone with thick red best-estimate trend line), compared with the observed anomalies (dark blue) and trend (bright blue).
Remarkably, even the IPCC’s latest and much reduced near-term global-warming projections are also excessive (Fig. 3).
In 1990, the IPCC’s central estimate of near-term warming was higher by two-thirds than it is today. Then it was 2.8 Cº/century equivalent. Now it is just 1.7 Cº/century – and, as Fig. 3 shows, even that is proving to be a substantial exaggeration.
On the RSS satellite data, there has been no statistically-significant global warming for more than 26 years. None of the models predicted that, in effect, there would be no global warming for a quarter of a century.
New attempts to explain away the severe and growing discrepancy between prediction and reality emerge almost every day. Far too few of the scientists behind the climate scare have yet been willing to admit the obvious explanation – that the models have been programmed to predict far more warming than is now likely.
The long Pause may well come to an end by this winter. An El Niño event has begun. The usual suspects have said it will be a record-breaker, but, as yet, there is too little information to say how much temporary warming it will cause. The temperature spikes caused by the El Niños of 1998, 2007, and 2010 are clearly visible in Figs. 1-3.
El Niños occur about every three or four years, though no one is entirely sure what triggers them. They cause a temporary spike in temperature, often followed by a sharp drop during the La Niña phase, as can be seen in 1999, 2008, and 2011-2012, where there was a “double-dip” La Niña.
The ratio of El Niños to La Niñas tends to fall during the 30-year negative or cooling phases of the Pacific Decadal Oscillation, the latest of which began in late 2001. So, though the Pause may pause for a few months at the turn of the year, it may well resume late in 2015.
Either way, it is ever clearer that global warming has not been happening at anything like the rate predicted by the climate models, and is not at all likely to occur even at the much-reduced rate now predicted. There could be as little as 1 Cº global warming this century, not the 3-4 Cº predicted by the IPCC.
Key facts about global temperature
Ø The RSS satellite dataset shows no global warming at all for 213 months from September 1996 to May 2014. That is more than half the entire 425-month satellite record.
Ø The fastest measured centennial warming rate was in Central England from 1663-1762, at 0.9 Cº/century – before the industrial revolution. It was not our fault.
Ø The global warming trend since 1900 is equivalent to 0.8 Cº per century. This is well within natural variability and may not have much to do with us.
Ø The fastest warming trend lasting ten years or more occurred over the 40 years from 1694-1733 in Central England. It was equivalent to 4.3 Cº per century.
Ø Since 1950, when a human influence on global temperature first became theoretically possible, the global warming trend has been equivalent to 1.2 Cº per century.
Ø The fastest warming rate lasting ten years or more since 1950 occurred over the 33 years from 1974 to 2006. It was equivalent to 2.0 Cº per century.
Ø In 1990, the IPCC’s mid-range prediction of the near-term warming trend was equivalent to 2.8 Cº per century, higher by two-thirds than its current prediction.
Ø The global warming trend since 1990, when the IPCC wrote its first report, is equivalent to 1.4 Cº per century – half of what the IPCC had then predicted.
Ø In 2013 the IPCC’s new mid-range prediction of the near-term warming trend was for warming at a rate equivalent to only 1.7 Cº per century. Even that is exaggerated.
Ø Though the IPCC has cut its near-term warming prediction, it has not cut its centennial prediction of 4.7 Cº warming to 2100 on business as usual.
Ø The IPCC’s prediction of 4.7 Cº warming by 2100 is more than twice the greatest rate of warming lasting more than ten years that has been measured since 1950.
Ø The IPCC’s 4.7 Cº-by-2100 prediction is almost four times the observed real-world warming trend since we might in theory have begun influencing it in 1950.
Ø Since 1 January 2001, the dawn of the new millennium, the warming trend on the mean of 5 datasets is nil. No warming for 13 years 4 months.
Ø Recent extreme weather cannot be blamed on global warming, because there has not been any global warming. It is as simple as that.
Technical note
Our latest topical graph shows the RSS dataset for the 213 months September 1996 to May 2014 – more than half the 425-month satellite record.
Terrestrial temperatures are measured by thermometers. Thermometers correctly sited in rural areas away from manmade heat sources show warming rates appreciably below those that are published. The satellite datasets are based on measurements made by the most accurate thermometers available – platinum resistance thermometers. These not only measure temperature at various altitudes above the Earth’s surface via microwave sounding units but also constantly calibrate themselves by measuring, via spaceward mirrors, the known temperature of the cosmic background radiation – 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that the NASA anisotropy probe determined the age of the Universe: 13.82 billion years.
The graph is accurate. The data are lifted monthly straight from the RSS website. A computer algorithm reads them from the text file, takes their mean and plots them automatically, using an advanced routine that automatically adjusts the aspect ratio of the data window on both axes so as to show the data at maximum scale, for clarity.
The latest monthly data point is visually inspected to ensure that it has been correctly positioned. The light blue trend line plotted across the dark blue spline-curve that shows the actual data is determined by the method of least-squares linear regression, which calculates the y-intercept and slope of the line via two well-established and functionally identical equations that are compared with one another to ensure no discrepancy between them. The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. The method is appropriate because global temperature records exhibit little auto-regression.
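The “two well-established and functionally identical equations” are not named in the text; a plausible reading (an assumption on my part) is the covariance/variance form of the least-squares slope checked against the normal-equations form as implemented in a standard polynomial fit. A minimal sketch of such a cross-check, on invented data:

```python
import numpy as np

def slope_covariance(x, y):
    """Least-squares slope via sum((x - xbar)(y - ybar)) / sum((x - xbar)^2)."""
    xbar, ybar = x.mean(), y.mean()
    return ((x - xbar) * (y - ybar)).sum() / ((x - xbar) ** 2).sum()

def slope_normal_equations(x, y):
    """Least-squares slope via the normal equations (numpy's polynomial fit)."""
    return np.polyfit(x, y, 1)[0]

rng = np.random.default_rng(42)
x = np.arange(213, dtype=float)           # 213 months, as in the RSS pause window
y = 0.25 + rng.normal(0.0, 0.1, 213)      # flat series plus noise (invented data)

s1, s2 = slope_covariance(x, y), slope_normal_equations(x, y)
assert abs(s1 - s2) < 1e-10               # the two formulas agree to rounding error
```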
Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat.
This is good news. Truly.
Everyone should be happy. But they won’t be. They will make up something that shows how true it is that oil companies are the anti-Christ. Or racist. Or maybe the greatest evil is now homophobia.
I can’t keep up.
The headline is tautological. It should be either ‘Still no Global Warming’ or ‘No Global Warming for 17 years 9 months.’
It can’t be ‘Still no Global Warming for 17 years 9 months’ because it has never before been ‘No Global warming for 17 years 9 months’ – last month the situation was ‘No Global Warming for 17 years 8 months’. See?
If we’re insisting on meticulous accuracy from the Warmists…..
Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat.
Analysis of the significance of a regression in this way has a serious deficiency in the context in which it is being applied here. The “significance” of a regression is strongly weighted toward the gradient of the regression, owing to the nature of the null hypothesis. The null hypothesis is flatness – no gradient. So when you have a flat regression, you are testing it against itself, and not surprisingly there is no significant difference. The question being asked is: “how significantly different from a zero, flat gradient is this regression gradient that you find?”
Thus as the gradient gets steeper, the significance of the regression will increase even if the amount of noise variation stays the same. This is correct of course if the objective is just to test difference of slope from zero.
However in order to assess the “significance” of a flat, near zero gradient, a different question needs to be asked with a different null hypothesis. We should be asking “how significantly different is this regression slope from any other regression slopes?” i.e. an indication of the likelihood that the obtained regression truly represents the underlying trend, in which flatness is regarded as a “trend” of equal validity to any other trend with a gradient.
Does anyone know what statistical tests would address this question?
Christopher Monckton writes, “El Niños occur about every three or four years, though no one is entirely sure what triggers them.”
Reason No. 1 why models are worthless.
RoHa says:
But maybe it takes twenty years for the effects of global warming to work its way through the system, so that the extreme weather now is the result of global warming that happened twenty years ago. …..
—————————————————————————————————————-
Please don’t give them any ideas !
They could then play with their computer “models” indefinitely, looking for the pattern -:)
Imagine a world with no 1998 peak and no Climategate ever to have afforded skeptics a voice in the media, nor half of politics converted to skepticism. They would have gotten away with it all, wholesale.
Allistair inquired: “Can anyone point me to a sensible but not too technical critique of the arguments that the oceans have continued to warm during the pause and that the top of the atmosphere satellite measured energy imbalance shows that the total energy of the entire climate system continues to rise?”
(A) The simple average of world tide gauges shows a slight deceleration corresponding to the pause, and overall utterly no extra Global Warming signal in our postwar emissions-burst era. Liquid expansion is how thermometers themselves work, so the ocean acts as a thermometer: any extra warming should appear as a rise in sea level above the natural trend, in which ice keeps melting away during our interglacial period. The reference for tide gauges is Church & White 2011, the tide gauge plot being extracted here:
http://postimg.org/image/uszt3eei5/
Also, the actual sea surface temperature also shows the pause:
http://www.woodfortrees.org/plot/hadsst3gl
Which is why it’s hiding in the deep ocean, you see, where there’s no data – except remember that thermal expansion would register even deep-ocean warming.
(B) A new paper in a physics journal checked satellite data to measure the overall extra warming over time as CO2 continues to rise – a physical experiment with a known variation in CO2 concentration. It found very little impact, suggesting that the feedbacks on which all the alarm is based are not positive, and are in fact likely negative:
http://www.worldscientific.com/doi/abs/10.1142/S0217979214500957
The “pause” and its possible extension to the oceans is less effective as an argument than the lack of any change in overall trend that the pause represents – more so than it represents an actual pause, since anybody can see how noisy such plots are on various time scales. It is also more important as a further falsification of the climate models, even if it is not considered a pause at all but just boring noise as warming still refuses to accelerate. That recent warming is perfectly precedented is seen by merely looking at a fair plot of the global average temperature:
http://s16.postimg.org/54921k0at/image.jpg
That recent warming itself falsifies climate alarm is seen in how nearly all of the world’s very oldest single-site thermometer stations show it to be an exact continuation of the multi-century natural warming trend:
http://s6.postimg.org/uv8srv94h/id_AOo_E.gif
Central England is a near-perfect match to the global average temperature, variation-wise, so it represents a wonderful proxy for it, showing that the song remains the same and that recent warming is most likely mostly normal, not mostly enhanced.
The temperature fluctuations tend to follow the sunspots and solar flares. As long as this Landscheidt minimum continues, it will eventually lead to a distinct downward trend in global temperatures.
Even the given 1.2 Cº per century since 1950 is way below what happened at the start of the Roman and Medieval warmings.
Beyond that, for the first time in history, we are capable of geoengineering our planet (either down here or from orbit) at any time. For the first time humanity has real control over our environment which makes scares about minor natural variations even sillier (or more corrupt for those making a living pushing the fraud).
change/decrease in maximum temperatures (henry’s global average)
last 40 years (from 1974) +0.034 degree C/yr
last 34 years (from 1980) +0.026 degree C/yr
last 24 years (from 1990) +0.014 degree C/yr
last 14 years (from 2000) -0.010 degree C/yr
would any of you have an idea of what curve this is?
(I am beginning to doubt it is in fact from a sine wave)
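One quick check on HenryP’s question is whether the four quoted rates even lie on a straight line, treating the window start-years as the x-axis (an assumption about which “curve” is meant). A sketch: if the pairwise slopes between consecutive points were all equal, the points would be collinear; they are not.

```python
# HenryP's four (start-year, rate) points, as quoted above.
points = [(1974, 0.034), (1980, 0.026), (1990, 0.014), (2000, -0.010)]

# Pairwise slopes between consecutive points: equal slopes would mean
# the four rates fall on a straight line.
slopes = [
    (r2 - r1) / (y2 - y1)
    for (y1, r1), (y2, r2) in zip(points, points[1:])
]
print(slopes)  # the slopes differ, so the points are not collinear
```

The last interval’s decline (about -0.0024/yr) is roughly twice the middle one (-0.0012/yr), so whatever the underlying curve is, the rate of decline is not constant.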
Leaving out ocean temps, to be deliberately misleading. It is a crime deserving a harsh punishment.
Blasphemers! Average temperatures/no significant warming is obviously proof of global dramatic climate change/disruption/warming/whatever the catechism says this week. The Goreacle will not be pleased.
Phlogiston asked whether the regression slope (trend) calculated for a climate series should be compared with a value other than zero, on the basis that a null hypothesis of zero slope is not appropriate for the situation.
The first question to be addressed is whether some other hypothesised reference slope has a “known” numerical value together with a “known” standard error and degrees of freedom. My guess is that such a reference slope (or alternative null hypothesis) would be difficult to establish to the satisfaction of everyone who might be interested in the underlying arithmetic. If such reference statistics were available, the comparison is simple enough: compare the square of the ratio of the two slopes with the F table, for the appropriate degrees of freedom for numerator and denominator.
My belief is that it would do little or nothing to address the obvious question which is normally posed when a trend line is computed. This is of course “Is the estimated slope significantly different from zero?”, which is easily answered using the elementary statistical calculations I outlined here on June 4th at 1.50pm and which are found in every stats textbook. You can obviously compare the least-squares slope with any other of your choice, provided you have available the same sort of information for that slope. But there’s the rub: how do you arrive at those statistics? The choice of “zero slope” as your null hypothesis is really very sensible. You do not need any stats for it, because zero is an absolutely fixed value. There’s no error. It is your stated gold standard and cannot be questioned.
In order to inform fully the readership of climate-related postings/papers I strongly advocate presenting regression outcomes complete with the confidence intervals for the computed slope (which inherently include the amount of data used in the calculation) and the probability that the slope is different from zero. These statistics are all facets of the same basic calculations, and if published would give an immediate and non-disputable overview of the situation. The regularly reported r-squared does not accomplish this objective. Of course, being time series, a correction for autocorrelation, such as Quenouille’s, should be incorporated, or, if omitted, this should be clearly stated. This or any other correction serves to decrease the probability level associated with the slope, but the effect is not necessarily serious.
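The recommendation above can be sketched in a few lines. The correction used below is the common lag-1 effective-sample-size adjustment, n_eff = n(1 − r₁)/(1 + r₁); whether this matches Quenouille’s correction in every detail is an assumption, and the input is an invented smooth (hence strongly autocorrelated) series rather than a real temperature record:

```python
import numpy as np

def slope_with_se(y):
    """OLS slope of y on its time index, with standard error and lag-1 autocorrelation."""
    n = len(y)
    x = np.arange(n, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    sxx = ((x - xbar) ** 2).sum()
    slope = ((x - xbar) * (y - ybar)).sum() / sxx
    resid = y - (ybar + slope * (x - xbar))
    se = np.sqrt((resid ** 2).sum() / (n - 2) / sxx)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation of residuals
    return slope, se, r1

# Smooth deterministic series: residuals are strongly positively autocorrelated.
y = np.sin(np.linspace(0.0, 3.0, 200))
slope, se, r1 = slope_with_se(y)

# Effective-sample-size correction: positive r1 shrinks the effective n,
# which widens the standard error (and hence the confidence interval).
n_eff = 200 * (1.0 - r1) / (1.0 + r1)
se_adj = se * np.sqrt(200 / n_eff)
assert r1 > 0 and se_adj > se
```

A confidence interval would then be slope ± t × se_adj for the appropriate t value at the reduced degrees of freedom.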
Have just realised that I wrote nothing about whether it is really sensible to fit a straight line to data that are grossly and obviously (by simple plotting methods) not linear in character. Think of things like the ENSO related data, PDO, AMO and countless temperature series. My studies of innumerable climate series have persuaded me that most of them have spells of remarkably stable (constant) data that are disturbed only by what is generally thought to be “noise”, but are punctuated at indeterminate times by a sudden change in underlying value, which is then frequently followed by a further stable period. Gradual change also occurs, but to me seems to be less common. I’m a firm believer in step changes being “normal” when it comes to climate observations.
Has anyone else noticed something similar?
Not logical, really. To make a counter-argument against AGW, the natural variations must be cleaned from the temperature record, and they should be analyzed separately. It’s a bit funny when you think about it: in this article, nature is being summoned against humanity.
If the next El Niño releases higher temperatures to the atmosphere from the oceans and the trend line ticks upwards, do you think that nature is then at fault?
steve says:
June 4, 2014 at 9:34 am
The RSS data site I found (remss.com) shows that while troposphere measurements have not been changing much the stratosphere measurements, channels 10, 11, 12, 13, 14, and 25 that are part of RSS have fairly significant dropping trends, all are between -0.2 and -0.8 C per decade. Granted they started measuring in 1998 and that year was a very warm year in the lower atmosphere, so that drop may be misleading. But that general dropping rate over 17 years is around -0.5 K per decade, almost -1 F per decade, faster than the rises predicted for the lower atmosphere that have gotten many people alarmed. Is anyone alarmed about the drop rate going on in the stratosphere temperature? And is there any explanation for the dropping?
Yes – in a post on Le Chatelier’s Principle, here – http://chiefio.wordpress.com/2014/06/01/le-chatelier-and-his-principle-vs-the-trouble-with-trenberth/ – E.M. Smith shows the cooling action of CO2 in the stratosphere, with cooling flows toward both poles: not what the CAGW gullibles would want you to know.
IanW says
And is there any explanation for the dropping?
henry says
actually
it can be explained naturally
http://blogs.24.com/henryp/2013/04/29/the-climate-is-changing/
don’t worry about the carbon dioxide
are there no mathematicians here?
http://wattsupwiththat.com/2014/06/04/the-pause-continues-still-no-global-warming-for-17-years-9-months/#comment-1654802
Probably the highest case scenario is that temps warm 0.25 deg in the period 2000-2025. That assumes immediate resumption of strong warming.
So, in 2025, even if the temp is up just 0.25 deg for the first quarter of the century, are we still going to be arguing about global warming? Don’t you think it will be a dead issue scientifically and publicly by then? And that assumes resumption of strong warming.
If we make it to 2025 with no warming, will it be a dead issue ?
” ºC ” for heaven’s sake.
Has anyone noticed that Christopher Monckton’s post is comparing global surface temperatures measurements and models against troposphere (not surface!) temperatures?
The proper comparison is models of troposphere temperatures against measurements of troposphere temperatures. This is on the RSS web site: http://www.remss.com/research/climate
They point out that “The troposphere has not warmed as fast as almost all climate models predict.” The measurements are at the lower end of, or below, the CMIP-5 model simulations since roughly 1998, and
“The reasons for the discrepancy between the predicted and observed warming rate are currently under investigation by a number of research groups. Possible reasons include increased oceanic circulation leading to increased subduction of heat into the ocean, higher than normal levels of stratospheric aerosols due to volcanoes during the past decade, incorrect ozone levels used as input to the models, lower than expected solar output during the last few years, or poorly modeled cloud feedback effects. It is possible (or even likely) that a combination of these candidate causes is responsible.”
My guess is cloud feedback effects will be the primary factor since evidence is coming in that it is slightly positive feedback rather than the previously expected negative feedback.
With all due respect, I have to note that I believe Mr. Cripwell is correct about the way in which orbiting satellites measure earthly temperatures.
I suspect that you’re referring to Dr. Spencer’s article here: http://www.drroyspencer.com/2010/01/how-the-uah-global-temperatures-are-produced/, where he indicates that the satellite uses a platinum resistance thermometer measuring an internal target as one of two data points needed to calibrate the microwave radiometer (the other data point is 2.7K by looking at deep space).
The actual detection of the radiance is done by the microwave radiometer, not the PRT.
After all, PRTs require the platinum filament to be immersed or in contact with the medium to be measured. Not an easy task for a satellite orbiting at 850 km (for polar orbits) much less for the geostationary ones at 35,880 km.
Otherwise, enjoyed your posting.
Arno, If I remember portions of Bob Tisdale’s very informative posts, I think he would disagree with you about your (bolded) statement above.
[+ emphasis]
Charlie, please see the replies to me from Werner Brozek. I needed it explained too.
It’d be great if Bill Nye and, say, another lightweight, like Neil deGrasse Tyson, debated Monckton on live TV. Might as well include Michio Kaku and Stephen Hawking. Call it Cosmotologists versus Reality. It’d be great.
“Ian” says my graphs compare surface temperatures predicted by models with measured tropospheric temperatures. In fact, I compare not temperatures but temperature anomalies. That is why, to answer Max Beran’s point, I use the notation “Cº”, indicating temperature anomalies, rather than “ºC”, indicating actual temperatures.
Since lower-troposphere and surface temperatures move more or less in lock-step thanks to the uniformity of the temperature lapse rate, lower-troposphere and surface temperature trends, which are determined from the anomalies, will be near-identical. Indeed, my own comparison of, say, the warming rate since January 2001, the beginning of the millennium, shows near-identical trends whether one takes the mean of the three key surface datasets or the mean of the two satellite datasets.
“Ian” also says the cloud feedback was originally thought to be negative and is now thought to be positive. In fact, it is the other way about. The cloud feedback was thought to be quite strongly positive, but work by Roy Spencer and others has demonstrated it to be negative.
The primary reason why the world is not warming as fast as had been predicted is that the predictions were exaggerated because the models had been tuned to assume too large a warming effect from CO2. A secondary reason is that the models are insufficiently capable of taking natural variability into account.
All other things being equal, one would expect some global warming to occur, but, on balance, not very much. For this reason, I do not expect the long Pause to continue indefinitely. However, I do expect the growing discrepancy between predicted and observed temperatures to continue to widen for the foreseeable future.
And I’d be delighted to take up “cedarhill’s” suggestion of a debate against Bill Nye the Pseudo-Science Guy, but I’m reasonably sure his minders would see to it that he did not expose his absence of climatological knowledge on the air. In general, the Gorons of this world will not debate me. The last time I managed to take part in such a debate was earlier this year on Irish radio, where the host and the three other guests were all fervent true-believers. I was in the studio from the beginning of the program, but was not allowed to sit at the mike and participate until halfway through.
Afterwards, the president of the Dublin Historical Society, who had invited me to Ireland, said, “Four against one – they didn’t stand a chance.”
The growing reluctance of the usual suspects to take part in debate is a tacit but telling acknowledgement on their part that they are wrong.
“””””…..Addolff says:
June 4, 2014 at 9:31 am
Please forgive my ignorance here, but last month’s graph was from August 1996 to April 2014, and was headlined as 213 months “No global warming for 17 years, 9 months”.
Why is this graph from September 1996?
Can someone please tell me what I’m missing?…..”””””
Well Addolff, what you are missing is the set of rules for the game.
Rule # 1 Get the (RSS) data for the current (most recent) month.
Rule # 2 Get the data for the previous month.
Rule # 3 Check whether the temperature trend is statistically different from zero, using well understood statistical mathematics.
If the answer is yes, then stop, and report the interval.
If the answer is no, extend the interval back by one more month, and go to Rule # 3.
So the reason the start month advanced by one month is that Rule # 3 gave a yes for the next earlier month.
The game requires NO statistically-different-from-zero trend, up to the most recent reported data. The algorithm simply determines the LONGEST CONTINUOUS ZERO-TREND record till now.
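The “rules of the game” described above can be sketched as a small routine. This is a simplified version: the stopping test below is a zero-or-negative least-squares trend rather than a full statistical-significance test, and the input series is invented for illustration.

```python
def ols_slope(y):
    """Least-squares slope of y against its index (covariance form; exactly 0 for flat data)."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def pause_length(anomalies):
    """Longest run of months, ending at the latest month, with a zero-or-negative trend."""
    n = len(anomalies)
    longest = 0
    for start in range(n - 2, -1, -1):          # step the start month back...
        if ols_slope(anomalies[start:]) > 0:
            break                                # ...until the trend turns positive
        longest = n - start
    return longest

# Invented series: 100 cool months, then 60 months at a constant (flat) anomaly.
series = [0.0] * 100 + [0.5] * 60
print(pause_length(series))  # -> 60
```

Extending the window one month further back sweeps in a cooler month, the trend turns positive, and the search stops – which is exactly why the reported start month can advance from one month’s headline to the next.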