No significant warming for 17 years 4 months

By Christopher Monckton of Brenchley

As Anthony and others have pointed out, even the New York Times has at last been constrained to admit what Dr. Pachauri of the IPCC conceded some months ago: there has been no global warming statistically distinguishable from zero for getting on for two decades.

The NYT says the absence of warming arises because skeptics cherry-pick 1998, the year of the Great El Niño, as their starting point. However, as Anthony explained yesterday, the stasis goes back farther than that. He says we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.

Usefully, the latest version of the Hadley Centre/Climatic Research Unit monthly global mean surface temperature anomaly series provides not only the anomalies themselves but also the 2σ uncertainties.

Superimposing the temperature curve and its least-squares linear-regression trend on the statistical-insignificance region bounded by the means of the trends on these published uncertainties since January 1996 demonstrates that there has been no statistically significant warming in 17 years 4 months:

[Figure 1: HadCRUT4 monthly global mean surface temperature anomalies, January 1996 to April 2013, with the least-squares linear trend and the 2σ statistical-insignificance region.]
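For readers who wish to attempt the same kind of test themselves, a minimal sketch in Python follows. The file and column names are hypothetical placeholders, and the sketch illustrates the general approach only, not the Met Office's own procedure:

```python
# Minimal sketch: least-squares trend vs. the published 2-sigma band.
# Assumes a hypothetical CSV "hadcrut4_monthly.csv" with columns:
#   year_frac, anomaly, lower_2s, upper_2s   (names are illustrative)
import numpy as np

data = np.genfromtxt("hadcrut4_monthly.csv", delimiter=",", names=True)
mask = data["year_frac"] >= 1996.0
t, y = data["year_frac"][mask], data["anomaly"][mask]

# Ordinary least-squares linear trend, expressed per century
slope, intercept = np.polyfit(t, y, 1)
print(f"Trend: {slope * 100:+.2f} C/century")

# Mean half-width of the published 2-sigma uncertainty band
half_width = np.mean((data["upper_2s"][mask] - data["lower_2s"][mask]) / 2)

# Compare the total rise over the period with the uncertainty band
rise = slope * (t[-1] - t[0])
print("Rise exceeds the 2-sigma band:", abs(rise) > half_width)
```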

On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.

The fact that an apparent warming rate equivalent to almost 0.9 Cº/century is statistically insignificant may seem surprising at first sight, but there are two reasons for it. First, the published uncertainties are substantial: approximately 0.15 Cº either side of the central estimate.

Second, one weakness of linear regression is that it is unduly influenced by outliers. Visibly, the Great El Niño of 1998 is one such outlier.

If 1998 were the only outlier, and particularly if it were the largest, going back to 1996 would be much the same as cherry-picking 1998 itself as the start date.

However, the magnitude of the 1998 positive outlier is countervailed by that of the 1996/7 La Niña. Also, there is a still more substantial positive outlier in the shape of the 2007 El Niño, against which the La Niña of 2008 countervails.
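The pull a single outlier exerts on a least-squares slope is easily demonstrated on synthetic data. This is a toy sketch, not the HadCRUT4 series itself:

```python
# Toy demonstration: a single early spike tilts an OLS trend downward,
# just as a late spike would tilt it upward. Synthetic data, not HadCRUT4.
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(1996, 2013.5, 1 / 12)              # monthly time axis
y = rng.normal(0, 0.1, t.size)                   # flat series plus noise

slope_flat = np.polyfit(t, y, 1)[0]

y_spiked = y.copy()
y_spiked[(t >= 1998.0) & (t < 1999.0)] += 0.5    # a 1998-style spike

slope_spiked = np.polyfit(t, y_spiked, 1)[0]
print(f"slope without spike: {slope_flat * 100:+.3f} C/century")
print(f"slope with spike:    {slope_spiked * 100:+.3f} C/century")
```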

In passing, note that the cooling from January 2007 to January 2008 is the fastest January-to-January cooling in the HadCRUT4 record going back to 1850.

Bearing these considerations in mind, going back to January 1996 is a fair test for statistical significance. And, as the graph shows, there has been no warming that we can statistically distinguish from zero throughout that period, for even the rightmost endpoint of the regression trend-line falls (albeit barely) within the region of statistical insignificance.

Be that as it may, one should beware of focusing the debate solely on how many years and months have passed without significant global warming. Another strong El Niño could – at least temporarily – bring the long period without warming to an end. If so, the cry-babies will screech that catastrophic global warming has resumed, the models were right all along, etc., etc.

It is better to focus on the ever-widening discrepancy between predicted and observed warming rates. The IPCC’s forthcoming Fifth Assessment Report backcasts the interval of 34 models’ global warming projections to 2005, since when the world should have been warming at a rate equivalent to 2.33 Cº/century. Instead, it has been cooling at a rate equivalent to a statistically insignificant 0.87 Cº/century:

[Figure 2: Observed HadCRUT4 anomalies, January 2005 to April 2013, compared with the interval of 34 models’ projections backcast to 2005 (central projection 2.33 Cº/century).]

The variance between prediction and observation over the 100 months from January 2005 to April 2013 is thus equivalent to 3.2 Cº/century.
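To make the arithmetic explicit: the variance is simply the gap between the projected and the observed trend, 2.33 − (−0.87) = 3.20 Cº/century, which rounds to the 3.2 Cº/century stated above.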

The correlation coefficient is low, the period of record is short, and I have not yet obtained the monthly projected-anomaly data from the modelers to allow a proper p-value comparison.

Yet it is becoming difficult to suggest with a straight face that the models’ projections are healthily on track.

From now on, I propose to publish a monthly index of the variance between the IPCC’s predicted global warming and the thermometers’ measurements. That variance may well inexorably widen over time.

In any event, the index will limit the scope for false claims that the world continues to warm at an unprecedented and dangerous rate.

UPDATE: Lucia’s Blackboard has a detailed essay analyzing the recent trend, written by SteveF, using an improved index that accounts for ENSO, volcanic aerosols, and solar cycles. He concludes that the best-estimate rate of warming from 1997 to 2012 is less than one-third of the rate of warming from 1979 to 1996. Also, the original version of this story incorrectly referred to the Washington Post, when it was actually the New York Times article by Justin Gillis. That reference has been corrected. – Anthony



429 Comments
cRR Kampen
June 14, 2013 4:50 am

How dead can a turkey be 🙂

Hoi Polloi
June 14, 2013 4:56 am

“Lets be clear on a couple things. Feynman is no authority on how science works. read his opinion on renormalization and you will understand that he did not practice what he preached.
Popper was likewise wrong about science. This isnt a matter of philosophical debate, its a matter of historical fact.”

Wow, Mosher bashes Feynman and Popper. So what’s your achievement in science compared with Feynman and Popper, Mosher? Already received a Nobel prize? Your arrogance is toe-curling…

David L.
June 14, 2013 4:58 am

Steven says: June 13, 2013 at 4:36 am
“I keep seeing these graphs with linear progressions. Seriously. I mean seriously. Since when is weather/climate a linear behavorist? The equations that attempt to map/predict magnetic fields of the earth are complex Fourier series. Is someone, somewhere suggesting that the magnetic field is more complex than the climate envelope about the earth? I realize this is a short timescale and things may look linear but they are not. Not even close. Like I said in the beginning, the great climate hoax is nothing more than what I just called it. I am glad someone has the tolerance to deal with these idiots. I certainly don’t.”
————————————–
YES, YES, and YES!
I can’t see how any legitimate scientist would entertain these climate hacks beyond the first mention of a linear projection in their papers. At that statement they prove they don’t know what they are talking about. I agree you can use a line to interpolate data between two actual data points, but to fit a line and then project that into the distant future? Give me a giant break.
If you don’t know the real function it is wrong to assume a line will work. You might as well assume a Taylor expansion out to twelfth order for that matter. Assume anything; you’ll most likely be wrong. Assuming a line doesn’t get you any closer to being right.
The most amazing thing to me is that the line doesn’t even fit the data displayed! If they analyzed the residuals, they’d see they aren’t normally distributed. The line isn’t even appropriate over the short timescale they plot (a quick sketch of this residual check appears below).
Dr. Santer’s 17 year plot clearly shows the temperatures have gone up and are now coming back down. It’s not even leveling off, no more than the peak of the voltage on an AC circuit. It smoothly goes up and comes back down.
Can you imagine these guys as an artillery battery? They’d plot the first few points of the shell as it comes out of the barrel and project it linearly to their target.
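A residual check of the kind described above takes only a few lines. This sketch fits a line to synthetic curved data and applies the Shapiro-Wilk normality test from SciPy, purely for illustration:

```python
# Sketch: fit a straight line, then test whether the residuals are normal.
# Synthetic curved data, purely to illustrate the residual check.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
t = np.linspace(0, 17, 208)                      # ~17 years of monthly points
y = 0.2 * np.sin(0.8 * t) + rng.normal(0, 0.05, t.size)

slope, intercept = np.polyfit(t, y, 1)
residuals = y - (slope * t + intercept)

stat, p = shapiro(residuals)
print(f"Shapiro-Wilk p-value: {p:.4f}")          # tiny p-value: the residuals
                                                 # are not normal, so a straight
                                                 # line misdescribes the data
```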

jim2
June 14, 2013 5:06 am

Those who can do science, do it. Those who can’t become philosophers.

Duncan
June 14, 2013 5:11 am

Blimey – some incredible minds here. Genuinely impressive stuff!
I shall now sum up my research in this matter using my limited intellect.
It’s June.
I’m cold.

el gordo
June 14, 2013 5:31 am

‘After summer floods and droughts, freezing winters and even widespread snow in May this year, something is clearly wrong with Britain’s weather.
‘Concerns about the extreme conditions the UK consistently suffers have increased to such an extent that the Met Office has called a meeting next week to talk about it.
‘Leading meteorologists and scientists will discuss one key issue: is Britain’s often terrible weather down to climate change, or just typical?’
Read more: http://www.dailymail.co.uk/news/article-2341484/Floods-droughts-snow-May-Britains-weather-got-bad-Met-Office-worried.html#ixzz2WBzcNZIc

June 14, 2013 5:35 am

Can I edit Dr. Stokes’s comment to make it clearer?
If there is a common signal programmed into the code of multiple models, averaging across model runs is the way to get it to show up in the output.

ferdberple
June 14, 2013 6:01 am

Thomas says:
June 13, 2013 at 11:50 pm
I’m wrong.
==========
I can’t disagree.

Richard M
June 14, 2013 6:01 am

barry says:
It is not enough to cite a quote out of context. Data, too must be analysed carefully, and not simply stamped with pass/fail based on a quote. Other attempts at finding a benchmark (a sound principle) are similar to Santer’s general conclusions that you need multi-decadal records to get a good grasp of signal (20, 30, 40 years).

I actually agree with this statement. The amount of time is not the biggest factor. The question is one of finding the factors that could come into play (the “principle”). That is why the almost perfect fit of global temperatures with the PDO is so significant.
The current 16.5 years of no warming is actually around 8 years of warming followed by 8+ years of cooling that peaks right at the PDO switch. That is the “sound principle” that demonstrates we don’t even need to wait 17 years: we can say with high certainty that the PDO has a stronger influence on temperatures than CO2. And, if that is true, then CO2’s effect is very small.

garymount
June 14, 2013 6:04 am

You can calculate the circumference of a circle as accurately as you like with straight (linear) lines:
http://www.colorado.edu/engineering/CAS/courses.d/IFEM.d/IFEM.Ch01.d/IFEM.Ch01.pdf
see page 8.
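A quick numerical check of that convergence (a minimal sketch; see the linked PDF for the formal treatment):

```python
# The perimeter of a regular n-gon inscribed in a unit circle tends to 2*pi.
import math

for n in (6, 24, 96, 384, 1536):
    perimeter = n * 2 * math.sin(math.pi / n)    # n chords, each 2*sin(pi/n)
    print(f"n = {n:4d}: perimeter = {perimeter:.10f}")

print(f"2*pi      : {2 * math.pi:.10f}")
```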

Richard M
June 14, 2013 6:08 am

Nick Stokes says:
As to averaging models, no, I don’t condemn it. It has been the practice since the beginning, and for good reason. As I said above, models generate weather, from which we try to discern climate. In reality, we just have to wait for long-term averages and patterns to emerge. In model world, we can rerun simultaneously to try to get a common signal. It’s true that models form an imperfect population, and fancy population statistics may be hard to justify. But I repeat, the fancy statistics here seem to be Monckton’s. If there is a common signal, averaging across model runs is the way to get it.

Nick, the reason averaging A MODEL makes sense is that you are trying to eliminate the effect of noise. When you average multiple models, what are you doing? In essence you are averaging differing implementations of physics. Please inform me: what does a normal distribution of different physics provide? And what is the meaning of the mean of a normal distribution of different physics? Dr. Brown made this clear. It is so idiotic I can’t even imagine you supporting this nonsense. You are smarter than that.
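The distinction can be made concrete with a toy calculation: averaging repeated runs of one model cancels run-to-run noise around that model’s own signal, whereas averaging structurally different models merely averages their disagreements. A hedged sketch on synthetic data:

```python
# Toy contrast: averaging runs of ONE model vs. averaging DIFFERENT "models".
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 120)

# Ten runs of one model: same signal, different noise. Averaging the runs
# recovers that model's own trend.
signal = 0.02 * t
runs = signal + rng.normal(0, 0.1, (10, t.size))
print("trend of run average:  ", np.polyfit(t, runs.mean(axis=0), 1)[0])

# Ten structurally different "models": different underlying trends. The
# ensemble mean is just the average of their disagreements.
trends = rng.uniform(0.01, 0.05, 10)
models = trends[:, None] * t + rng.normal(0, 0.1, (10, t.size))
print("trend of model average:", np.polyfit(t, models.mean(axis=0), 1)[0])
print("spread of model trends:", trends.min(), "to", trends.max())
```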

ferdberple
June 14, 2013 6:19 am

Nick Stokes says:
June 14, 2013 at 1:05 am
As to averaging models, no, I don’t condemn it. It has been the practice since the beginning, and for good reason. As I said above, models generate weather, from which we try to discern climate. In reality, we just have to wait for long-term averages and patterns to emerge.
============
There is no good reason to average chaos. It is a mathematical nonsense to do so because the law of large numbers does not apply to chaotic time series. There is no mean around which the data can be expected to converge.
The reason averaging works for some problems is that there is a mean to be discovered. Your sample contains noise, and over time the noise will be random, some positive and some negative. Over time the law of large numbers operates to even out the positive and negative noise, and the signal will emerge.
However, as rgbatduke has posted, all this goes out the window when you are dealing with chaos. Chaotic systems lack a constant mean and a constant deviation. There is no convergence, only spurious convergence: false, misleading convergence that is not what it appears.
In chaotic systems you have attractors, which might be considered local means. When you use standard statistics to analyze them, you appear to get good results while the system is orbiting an attractor, but then it shoots off towards another attractor and makes a nonsense of your results.
So the idea that you can improve your results by taking longer samples of chaotic systems is a nonsense. The longer a chaotic system is sampled, the more likely it will diverge towards another attractor, making your results less certain, not more certain.
This is the fundamental mistake in the mathematics of climate. The assumption that you can average a chaotic system (weather) over time and the chaos can be evened out as noise. That is mathematical wishful thinking, nothing more. Chaos is not noise. It looks like noise, but it is not noise and cannot be treated as noise if you want to arrive at a meaningful result.
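The sensitivity underlying this argument is easy to exhibit; the following sketch iterates the chaotic logistic map from two nearly identical starting points. (Whether long-run averages of such a system still converge, i.e. ergodicity, is precisely the point in dispute, and this sketch does not settle it.)

```python
# Sensitive dependence in the chaotic logistic map x -> 4x(1 - x).
x_a, x_b = 0.400000, 0.400001        # nearly identical starting points

for step in range(1, 51):
    x_a = 4 * x_a * (1 - x_a)
    x_b = 4 * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.6f}")
# Within a few dozen iterations the trajectories bear no relation to each
# other, although both follow the same deterministic rule exactly.
```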

Richard M
June 14, 2013 6:20 am

Why are there so many models? In engineering we have standards, and committees to make changes to the standards. Assuming there is only one physics, there should be only one model, where every change must be approved by a standards committee. Sure, that takes a little extra effort, but the result is one, arguably better, model. Instead we have dozens, none of which is of much value (other than to the paychecks of the modelers).
Of course, this is the difference between researchers and engineers. The former are not too concerned with accuracy.

RichardLH
June 14, 2013 6:24 am

Richard M says:
June 14, 2013 at 6:20 am
“Of course, this is the difference between researchers and engineers. The former is not too concerned with accuracy.”
Also the difference between discovery and manufacture.

Patrick
June 14, 2013 6:33 am

“Richard M says:
June 14, 2013 at 6:20 am”
In engineering we (I do) know what +/- 2 microns are (+/- 3 microns: bin the job and start again). It is measurable, it is finite. On the other hand, computer-based climate cartoon-ography, sorry, I mean climate modelling, is, in its basic form, just a WAG where nothing is finite nor even measured (other than the monthly pay check).

Nick Stokes
June 14, 2013 6:33 am

Richard M says: June 14, 2013 at 6:08 am
“When you average multiple models what are you doing? In essence you are averaging differing implementations of physics. Please inform me what a normal distribution of different physics provides?”

There is no expectation of a normal distribution involved in averaging.
But why do you think different models use different physics?
ferdberple says: June 14, 2013 at 6:19 am
“There is no good reason to average chaos.”

This would mean that you could never speak of any weather average. But we do that all the time, and find it useful.
Some folks are overly dogmatic about chaos.

Monckton of Brenchley
June 14, 2013 6:36 am

I am most grateful to Professor Brown for having pointed out that taking an ensemble of models that use different code, as the Climate Model Intercomparison Project does, is questionable, and that it is interesting to note the breadth of the interval of projections from models each of which claims to be rooted in physics.
In answer to Mr. Stokes, the orange region representing the interval of models’ outputs will be found to correspond with the region shown in the spaghetti-graph of models’ projections from 2005-2050 at Fig. 11.33a of the Fifth Assessment Report. The correspondence between my region and that in Fig. 11.33a was explained in detail in an earlier posting. The central projection of 2.33 K/century equivalent that I derived from Fig. 11.33a seems fairly to reflect the models’ output. If Mr. Stokes thinks the models are projecting some warming rate other than that for the 45 years 2005-2050, perhaps he would like to state what he thinks their central projection is.
Several commenters object to applying linear regression to the temperature data. Yet this standard technique helpfully indicates whether and at what rate stochastic data are trending upward or downward, and allows comparison of temperature trends with projections such as those in the Fifth Assessment Report. A simple linear regression is preferable to higher-order polynomial fits where – as here – the data uncertainties are substantial.
Some commenters object to making any comparison at all between what the models predict and what is happening in the real world. However, it is time the models’ projections were regularly benchmarked against reality, and I shall be doing that benchmarking every month from now on. If anyone prefers benchmarking methods other than mine, feel free to do your own thing. One understands that the cry-babies and bed-wetters will not be at all keen to have the variance between prediction and observation regularly and clearly demonstrated: but the monthly Global Warming Prediction Index and comparison graph are already being circulated so widely that it will soon be impossible for anyone to get away with lying to the effect that global warming is occurring at an unprecedented rate, or that it is worse than we ever thought possible, or that the models are doing a splendid job, or that we must defer to the consensus because consensus must be right.
Finally, Mr. Mansion says that, just as correlation does not imply causation, absence of correlation does not imply absence of causation. In logic he is incorrect. Though correlation indeed does not imply causation, absence of correlation necessarily implies absence of causation. CO2 concentration continues to increase, but temperature is not following it. So, at least at present, the influence of CO2 concentration change on temperature change is not discernible.
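The preference for a line over higher-order fits can be illustrated on synthetic data: fitted to pure noise, a high-order polynomial manufactures structure at the endpoints where a line does not. A minimal sketch:

```python
# Why a line rather than a high-order polynomial when uncertainty is large:
# the polynomial chases noise, especially at the endpoints. Synthetic data.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(1996, 2013, 208)
tc = t - t.mean()                      # centre the axis for numerical stability
y = rng.normal(0, 0.15, t.size)        # pure noise: no real trend at all

for degree in (1, 8):
    coeffs = np.polyfit(tc, y, degree)
    endpoint = np.polyval(coeffs, tc[-1])
    print(f"degree {degree}: fitted value at the endpoint = {endpoint:+.3f}")
# The straight line stays near zero; the 8th-order fit swings at the edges,
# manufacturing apparent structure out of noise.
```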

Patrick
June 14, 2013 6:36 am

“RichardLH says:
June 14, 2013 at 6:24 am
Also the difference between discovery and manufacture.”
Based on my previous post, the discovery is that you haven’t read (understood) the science (drawing)! I agree!

ferdberple
June 14, 2013 6:43 am

barry says:
June 13, 2013 at 8:22 pm
similar to Santer’s general conclusions that you need multi-decadal records to get a good grasp of signal (20, 30, 40 years).
================
The problem is that we are likely dealing with a strange attractor, a fractal distribution, which implies that, regardless of the scale, the variability will appear the same. What this means mathematically is that there is no time scale that will prove satisfactory. There is no time scale at which you can expect the signal to emerge from the noise, because the noise is not noise. It is chaos. The system will continue to diverge whether you collect data for 100, 1,000, a million, or a billion years.
The best that can be hoped for in our current understanding is to look for patterns in how the system orbits its attractors. This behavior may give some degree of cyclical predictability, or not, depending on the motion of the attractors. We use this approach to calculate the ocean tides with a high degree of precision, even though the underlying physics is chaotic.
Climate science, on the other hand, has ignored the cyclical behavior of climate and instead attempted to use a linear approximation of a non-linear system, and is now confused because the linear projections are diverging from observation. Yet this divergence is guaranteed as a result of the underlying chaotic time series.

Patrick
June 14, 2013 6:46 am

“Nick Stokes says:
June 14, 2013 at 6:33 am
This would mean that you could never speak of any weather average. But we do that all the time, and find it useful.
Some folks are overly dogmatic about chaos.”
And some folks are overly accepting of “averages”. It’s meaningless to compare an absolute, as is ALWAYS the case in weathercasts, with an average. But it is done every day, in every weathercast.

Nick Stokes
June 14, 2013 6:46 am

Monckton of Brenchley says: June 14, 2013 at 6:36 am
“The central projection of 2.33 K/century equivalent that I derived from Fig. 11.33a seems fairly to reflect the models’ output. If Mr. Stokes thinks the models are projecting some warming rate other than that for the 45 years 2005-2050…”

I was far less critical of your graphs than Prof Brown, and I don’t particularly want to argue projections here. I was merely pointing out that they are indeed your estimates and statistics, and the graphs are not IPCC graphs, as they are clearly marked.

June 14, 2013 6:54 am

Nick Stokes says at June 14, 2013 at 6:33 am

But why do you think different models use different physics?

Because they all give different results. Sure they must have some bits in common (I hope they use a round planet) but they don’t all model everything in the same way.
So what are you bundling?
Not variations in inputs, to see which input the model predicts is the most significant component.
Not variations in a single parameter, to see if that parameter is modelled correctly.
You are averaging a load of different concepts about how the climate works. That is the error that rgbatduke skewered at June 13, 2013 at 7:20 am…

there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!

BTW, Nick Stokes: please don’t think I am criticising you personally. I greatly respect your coming here into the lion’s den. I just have nowhere else to go now that I can’t engage at the Guardian (sigh).

ferdberple
June 14, 2013 6:56 am

The faulty mathematics of the hockey stick and tree-ring calibration could well be what led climate science down a dead end. The hockey stick made climate appear linear over large enough time scales to give some assurance of predictability. By minimizing the signal and amplifying the noise, tree-ring calibration made temperatures appear stable over very long time periods, leading climate scientists to believe that linear models would prove well behaved. However, they were built on faulty mathematics. The fault is called “selection by the dependent variable”, and it results in a circular argument. It is a reasonably well-known statistical error, and it is hard to believe the scientists involved were not aware of it, because some of them were formally trained in mathematics.

mogamboguru
June 14, 2013 6:57 am

Duncan says:
June 14, 2013 at 5:11 am
Blimey – some incredible minds here. Genuinely impressive stuff!
I shall now sum up my research in this matter using my limited intellect.
It’s June. I’m cold.
——————————————————————————————————-
And I am out of funds, too, because this year’s unnervingly long, cold winter cost me 1000 Euros extra just for heating my home. Out of the window go my summer holidays…
Global warming? I am all for it! But where is it?

Monckton of Brenchley
June 14, 2013 7:04 am

Mr. Stokes vexatiously persists in maintaining that Professor Brown had criticized my graphs, long after the Professor himself has plainly stated he had criticized not my graphs but the IPCC’s graphs, from one of which I had derived the interval of models’ projections displayed in orange and correctly attributed in the second of the two graphs in the head posting.
Of course it is embarrassing to Mr. Stokes that global warming is not occurring at anything like the predicted rate; and it is still more embarrassing to him that the variance between prediction and reality is now going to be visibly displayed every month. But continuing to lie to the effect that Professor Brown was criticizing my graphs when the Professor has said he was doing no such thing does not impress. Intellectual dishonesty of this kind has become the hallmark of the climate extremists.
