By Christopher Monckton of Brenchley
As Anthony and others have pointed out, even the New York Times has at last been constrained to admit what Dr. Pachauri of the IPCC conceded some months ago: there has been no global warming statistically distinguishable from zero for getting on for two decades.
The NYT says the absence of warming arises because skeptics cherry-pick 1998, the year of the Great El Niño, as their starting point. However, as Anthony explained yesterday, the stasis goes back farther than that. He says we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.
Usefully, the latest version of the Hadley Centre/Climatic Research Unit monthly global mean surface temperature anomaly series provides not only the anomalies themselves but also the 2σ uncertainties.
Superimposing the temperature curve and its least-squares linear-regression trend on the statistical insignificance region bounded by the means of the trends on these published uncertainties since January 1996 demonstrates that there has been no statistically significant warming in 17 years 4 months:
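The kind of test described here can be sketched in a few lines of Python. This is a minimal illustration only, not the Hadley Centre's or Lord Monckton's actual method: it estimates a 2σ bound on the trend from the regression residuals, whereas the published HadCRUT4 uncertainties come from the measurement ensemble itself. The function name and the synthetic demonstration series are hypothetical.

```python
import numpy as np

def trend_per_century(anomalies):
    """Least-squares linear trend of a monthly anomaly series,
    returned as an equivalent C-per-century rate together with a
    crude 2-sigma bound derived from the regression residuals."""
    n = len(anomalies)
    t = np.arange(n) / 12.0                  # time in years
    slope, intercept = np.polyfit(t, anomalies, 1)
    resid = anomalies - (slope * t + intercept)
    # Standard error of the OLS slope
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
    rate = slope * 100.0                     # C/century equivalent
    bound = 2.0 * se * 100.0                 # 2-sigma bound, C/century
    return rate, bound, abs(rate) > bound    # True = distinguishable from zero

# A noiseless synthetic series of 208 months (17 years 4 months)
# warming at exactly 0.2 C/century:
demo = 0.002 * (np.arange(208) / 12.0)
rate, bound, significant = trend_per_century(demo)
```

A trend counts as statistically insignificant on this criterion whenever the fitted rate lies inside its own 2σ band, which is why a small apparent warming rate can still be indistinguishable from zero.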
On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.
The fact that an apparent warming rate equivalent to almost 0.9 Cº/century is statistically insignificant may seem surprising at first sight, but there are two reasons for it. First, the published uncertainties are substantial: approximately 0.15 Cº either side of the central estimate.
Second, one weakness of linear regression is that it is unduly influenced by outliers. Visibly, the Great El Niño of 1998 is one such outlier.
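The outlier effect is easy to demonstrate. In the hypothetical sketch below, a single large positive spike early in an otherwise flat synthetic series is enough to tilt the whole least-squares trend downward; the same spike late in the record would tilt it upward instead.

```python
import numpy as np

months = np.arange(120)          # ten years of monthly data
flat = np.zeros(120)             # a perfectly flat series
spiked = flat.copy()
spiked[24] = 0.6                 # one 0.6-degree spike in year 3

slope_flat = np.polyfit(months, flat, 1)[0]
slope_spiked = np.polyfit(months, spiked, 1)[0]

# The flat series has zero trend; the single early outlier drags the
# fitted slope negative even though 119 of 120 values are unchanged.
```

This is why the choice of start date matters so much, and why a start point that balances positive and negative outliers gives a fairer trend.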
If 1998 were the only outlier, and particularly if it were the largest, going back to 1996 would be much the same as cherry-picking 1998 itself as the start date.
However, the magnitude of the 1998 positive outlier is countervailed by that of the 1996/7 La Niña. Also, there is a still more substantial positive outlier in the shape of the 2007 El Niño, against which the La Niña of 2008 countervails.
In passing, note that the cooling from January 2007 to January 2008 is the fastest January-to-January cooling in the HadCRUT4 record going back to 1850.
Bearing these considerations in mind, going back to January 1996 is a fair test for statistical significance. And, as the graph shows, there has been no warming that we can statistically distinguish from zero throughout that period, for even the rightmost endpoint of the regression trend-line falls (albeit barely) within the region of statistical insignificance.
Be that as it may, one should beware of focusing the debate solely on how many years and months have passed without significant global warming. Another strong el Niño could – at least temporarily – bring the long period without warming to an end. If so, the cry-babies will screech that catastrophic global warming has resumed, the models were right all along, etc., etc.
It is better to focus on the ever-widening discrepancy between predicted and observed warming rates. The IPCC’s forthcoming Fifth Assessment Report backcasts the interval of 34 models’ global warming projections to 2005, since when the world should have been warming at a rate equivalent to 2.33 Cº/century. Instead, it has been cooling at a rate equivalent to a statistically-insignificant 0.87 Cº/century:
The variance between prediction and observation over the 100 months from January 2005 to April 2013 is thus equivalent to 3.2 Cº/century.
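The arithmetic behind that figure is simply the difference between the two century-equivalent rates quoted in the text:

```python
# A trivial check of the stated discrepancy, using the rates quoted above.
predicted = 2.33     # models' central projection, C/century equivalent
observed = -0.87     # observed trend, Jan 2005 - Apr 2013, C/century
discrepancy = predicted - observed
# 2.33 - (-0.87) gives 3.20 C/century
```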
The correlation coefficient is low, the period of record is short, and I have not yet obtained the monthly projected-anomaly data from the modelers to allow a proper p-value comparison.
Yet it is becoming difficult to suggest with a straight face that the models’ projections are healthily on track.
From now on, I propose to publish a monthly index of the variance between the IPCC’s predicted global warming and the thermometers’ measurements. That variance may well inexorably widen over time.
In any event, the index will limit the scope for false claims that the world continues to warm at an unprecedented and dangerous rate.
UPDATE: Lucia’s Blackboard has a detailed essay analyzing the recent trend, written by SteveF, using an improved index accounting for ENSO, volcanic aerosols, and solar cycles. He concludes that the best-estimate rate of warming from 1997 to 2012 is less than one-third the rate of warming from 1979 to 1996. Also, the original version of this story incorrectly referred to the Washington Post, when it was actually the New York Times article by Justin Gillis. That reference has been corrected. – Anthony
“Gary Hladik says:
June 14, 2013 at 12:55 pm
Different from what? AFAIK we only have one Earth.”
Certainly only one Earth in my world; however, some may live in multiple realities.
If we have no significant warming in 17 years 4 months, we nevertheless have 0.28 C average warming in both UAH and HadCRUT4 over 20 years (equaling a 1.4 C increase in 100 years, and a 2.1 C increase by 2100 from 1940 levels).
http://www.woodfortrees.org/plot/uah/from:1993/mean:12/plot/uah/from:1993/trend/plot/hadcrut4gl/from:1993/mean:12/plot/hadcrut4gl/from:1993/trend
A 0.28 C average AFTER ADJUSTMENTS!
Patrick,
True. And the ‘lower tropo’ is cherry-picked. Global surface temps are the relevant metric. See here.
rgbatduke says:
“snarl of models”
Excellent collective noun!
ferdberple said quite a lot (June 14, 2013 at 6:19 am) about chaos that meant that models couldn’t be aggregated.
Nick Stokes replied (June 14, 2013 at 6:33 am).
Which sounded reasonable. I took Nick Stokes at his word, but then he says…
So which is it?
Nick Stokes, you are not sounding consistent.
Consistency
Nick Stokes says: at June 14, 2013 at 12:09 pm in reply to Monckton of Brenchley, June 14, 2013 at 7:04 am:
But Monckton is still responding in kind to the leaked AR5 graphs. He has to compare apples with apples even if we don’t like them apples.
I agree the leaked AR5 graphs are rubbish.
But for consistency, will you (Nick Stokes) condemn the IPCC if the published AR5 includes such an average?
The theory that there is any such thing as a “global temperature” upon which you can calculate anomalies is terribly flawed in and of itself. Simply beginning with such a poorly defined concept will ultimately lead to all kinds of logical errors – the state of climate science today.
There are many good discussions of this. Below is just one of them:
http://www.researchgate.net/post/It_it_misleading_to_report_the_average_temperature_of_the_Earth
jai mitchell says:
June 14, 2013 at 11:22 am
No, what it says is gain is distributed over frequency, and you cannot deduce sensitivity to long term excitations based on short term ones without thorough knowledge of the frequency response of the system.
Please. You are out of your depth.
Bart,
The sinusoidal solar cycle, held at a constant average incidence for 50 years, will not produce a long-term warming.
There is a point when extremist contrarianism becomes disinformation–you have completely crossed that line.
The solar function has a period of 12 years. On average, it has been relatively constant for over 50 years. You cannot infer warming from a relatively constant average solar irradiance, even if it does operate on a sinusoidal function… that is just voodoo science.
Unless you can prove to me that the earth’s response to increased solar activity isn’t felt for over 40 years… I suppose you have a peer-reviewed document that states something to that effect?
@ur momisugly Patrick
No, Patrick: RSS is without adjustments and HadCRUT is with adjustments, but they fit together almost completely.
DbStealey
you said,
True. And the ‘lower tropo’ is cherry-picked. Global surface temps are the relevant metric. See here.
but your link shows RSS lower-troposphere values… not global surface. If you wanted to use global surface, then you should have looked here:
http://www.woodfortrees.org/plot/gistemp/from:1993/plot/gistemp/from:1993/trend/plot/esrl-co2/from:1993/normalise/offset:0.68/plot/esrl-co2/from:1993/normalise/offset:0.68/trend
(Note: the original plot was from 1993, not from 1997.9, just before the largest El Niño in recorded history, which you decided to cherry-pick.)
jai mitchell says:
June 14, 2013 at 2:55 pm
————————————–
As has been commented upon in this blog many times, the UV component of TSI fluctuates by a factor of two, on about the time scale of the observed sine wave above and below the trend line of recovery from the LIA in average temperature, with the appropriate lag to produce the observed PDO and AMO oscillations.
Nick Stokes says:
June 14, 2013 at 3:04 pm
M Courtney says: June 14, 2013 at 2:07 pm
“So which is it?”
There is no inconsistency there. Weather outcomes are very sensitive to perturbations; this is reflected in model performance. But long term climate averages make sense and are universally used in the everyday world.
Fluid mechanics have dealt with this for many years. Turbulence is classic chaotic flow. For over a century it has been dealt with by Reynolds averaging.
“But Monckton is still responding in kind to the leaked AR5 graphs.”
That makes no sense, and he didn’t even talk about model averaging in his post. I’m simply dealing with his ridiculous attempt to pretend that RGB was not talking about the graphs published in this post, when he clearly said that he was.
Mr. Stokes continues to lie in his habitual fashion. Professor Brown states quite plainly that it was the IPCC’s graph, reproduced as part of one of my graphs, that he was criticizing.
After you had speculated on who had compiled my graph, which has the words “lordmoncktonfoundation.com” plainly written on it, Professor Brown writes: “Aw, c’mon Nick, you can do better than that. Clearly I was referring to the AR5 ensemble average over climate models, which is pulled from the actual publication IIRC.”
Professor Brown’s criticism is directed at the compilation of an ensemble from models using different code. That is what the IPCC reproduced in its draft of AR5, and that is what I reproduced from AR5, and labelled it as such.
I note that Mr. Stokes is entirely unable to refute what my graph demonstrates: that the models are over-predicting global temperatures. It does not matter whether one takes the upper bound or lower bound of the models’ temperature projections or anywhere in between: the models are predicting that global warming should by now be occurring at a rate that is not evident in observed reality. Get used to it.
The moderators may like to consider whether outright lying on Mr. Stokes’ part is a useful contribution here. It illustrates the intellectual bankruptcy of the paid and unpaid trolls who cling to climate extremism notwithstanding the evidence, but otherwise it is merely vexatious.
jai mitchell says:
“…a 2.1C increase in 2100 from 1940 levels.”
And you accuse me of cherry-picking!
Bart is right, you are way out of your depth. Even the über-alarmist NY Times now admits that global warming has stopped. Go argue with them if you don’t like it.
Nick, you know perfectly well that rgb was addressing the divergence between the models and reality, and between the models themselves. He makes it pretty clear in his post that far too many models which model reality so badly are still being used, models which contradict each other. This is not weather perturbation reflected in model performance: the divergence is growing and growing.
Yes, long-term climate averages are universally used; however, this is exactly what rgb shows to be wrong. Averaging dirt does not give good results.
You know perfectly well that the outputs from the IPCC models are exactly as rgb describes them.
And you know perfectly well that in addressing the divergence you are merely inventing excuses.
You also know that it is not climate variances and turbulence that make the models stray so far from reality. The issue is simply that they do not model the current processes correctly, or that they miss something. rgb’s post makes perfect sense, and he does not criticise Christopher Monckton’s chart but the majority of models used by current climate science to achieve those averages. You know that what he says makes perfect sense; that is why you try to steer the discussion into a collateral diversion.
Monckton of Brenchley says: June 14, 2013 at 3:48 pm
Professor Brown states quite plainly that it was the IPCC’s graph, reproduced as part of one of my graphs, that he was criticizing.
“Reproduced as part of”? Here’s how it is described above
“In answer to Mr. Stokes, the orange region representing the interval of models’ outputs will be found to correspond with the region shown in the spaghetti-graph of models’ projections from 2005-2050 at Fig. 11.33a of the Fifth Assessment Report. The correspondence between my region and that in Fig. 11.33a was explained in detail in an earlier posting. The central projection of 2.33 K/century equivalent that I derived from Fig. 11.33a seems fairly to reflect the models’ output.”
And here is Fig 11.33a. Reproduced? “Seems fairly to reflect”?
But RGB’s criticism was directed at the statistics in Lord Monckton’s graph. Let me quote:
“Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!”
Nowhere in the AR5 Fig 11.33 is a mean and standard deviation created, with variance, treating the difference as if they are uncorrelated random variates etc. Those are Lord M’s statistics.
dbstealey
The fact is, if you all want to cherry-pick a 1998 plot and say there is no warming,
http://www.woodfortrees.org/plot/uah/from:1998/mean:12/plot/uah/from:1998/trend
then I can pick 1993 and plot it and show significant warming.
http://www.woodfortrees.org/plot/uah/from:1993/mean:12/plot/uah/from:1993/trend
even if I use the faulty UAH data (IMO)
chip, chip…chipping away…
14 June: Bloomberg: Stefan Nicola & Alessandro Vitelli: Forest Carbon Won’t Be Tradable Commodity, Climate Expert Says
Emissions reductions created through forest protection never will become a tradable commodity, and private investors are beginning to realize that, a consultant for the Third World Network said.
Forest carbon can’t be measured as accurately as CO2 discharges from industrial projects, Kate Dooley, who advises the environmental group on climate change issues, said today in Bonn. Under the United Nations’ Reduced Emissions from Deforestation and Forest Degradation program, or REDD, developing nations protect and manage their forests in exchange for funding from developed states to support their efforts.
“If you think that REDD can be established as a carbon market, if you think that forest carbon can be measured to the level of accuracy to satisfy investors to invest in it as a carbon market, I think that there’s a lot of disappointment in that,” she said in an interview at the UN talks in the German city. “Governments will drive this and the private-sector interest in forest carbon is really falling away.” …
http://www.bloomberg.com/news/2013-06-14/forest-carbon-won-t-be-tradable-commodity-climate-expert-says.html
Jai, please explain why you consider UAH data “faulty” but are OK with the demonstrably faked data from GISS & UEA. Thanks.
[snip – Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]
@jai mitchell, Bart is right: you *ARE* out of your depth. You stepped off into the deep end the moment you posted the statement a while back in another thread that CO2 stores heat.
Popper and Feynman both had a wonderful ability to clearly articulate difficult and complex things, yet our dear Mosher has managed to obfuscate them both in one comment. Bravo! Well played, sir.
jai mitchell says:
“…unless you can prove to me…”
Earth to jai mitchell: scientific skeptics have nothing to prove. The onus is entirely on the purveyors of the global warming conjecture to show that it is valid. But they have failed.
The failed conjecture that global warming is continuing — and even accelerating [!?] — is owned by the alarmist cult. But real world evidence and empirical observations have falsified the alarmist belief that global warming is continuing.
Empirical evidence shows that global warming has stopped for the past seventeen+ years, no matter what you mistakenly believe. The über-alarmist NY Times even admits that fact now. And climate alarmist Phil Jones also admits that global warming has stopped. In fact, anyone who pays attention to reality knows that global warming has stopped.
But if your religion requires you to believe that global warming is continuing, and even accelerating, then who are skeptics to disagree? All we have are facts, which cannot stand up against anyone’s emotion-based True Belief.