By Christopher Monckton of Brenchley
This time last year, as the honorary delegate from Burma, I had the privilege of speaking truth to power at the Doha climate conference by drawing the attention of 193 nations to the then almost unknown fact that global warming had not happened for 16 years.
The UN edited the tape of my polite 45-second intervention by cutting out the furious howls and hisses of my supposedly grown-up fellow delegates. They were less than pleased that their carbon-spewing gravy-train had just tipped into the gulch.
The climate-extremist news media were incandescent. How could I have Interrupted The Sermon In Church? They only reported what I said because they had become so uncritical in swallowing the official story-line that they did not know there had really been no global warming at all for 16 years. They sneered that I was talking nonsense – and unwittingly played into our hands by spreading the truth they had for so long denied and concealed.
Several delegations decided to check with the IPCC. Had the Burmese delegate been correct? He had sounded as though he knew what he was talking about. Two months later, Railroad Engineer Pachauri, climate-science chairman of the IPCC, was compelled to announce in Melbourne that there had indeed been no global warming for 17 years. He even hinted that perhaps the skeptics ought to be listened to after all.
At this year’s UN Warsaw climate gagfest, Marc Morano of Climate Depot told the CFACT press conference that the usual suspects had successively tried to attribute The Pause to the alleged success of the Montreal Protocol in mending the ozone layer; to China burning coal (a nice irony there: Burn Coal And Save The Planet From – er – Burning Coal); and now, just in time for the conference, by trying to pretend that The Pause has not happened after all.
As David Whitehouse recently revealed, the paper by Cowtan & Way in the Quarterly Journal of the Royal Meteorological Society used statistical prestidigitation to vanish The Pause.
Dr. Whitehouse’s elegant argument used a technique in which Socrates delighted. He stood on the authors’ own ground, accepted for the sake of argument that they had used various techniques to fill in missing data from the Arctic, where few temperature measurements are taken, and still demonstrated that their premises did not validly entail their conclusion.
However, the central error in Cowtan & Way’s paper is a fundamental one and, as far as I know, it has not yet been pointed out. So here goes.
As Dr. Whitehouse said, HadCRUT4 already takes into account the missing data in its monthly estimates of coverage uncertainty. For good measure and good measurement, it also includes estimates for measurement uncertainty and bias uncertainty.
Taking into account these three sources of uncertainty in measuring global mean surface temperature, the error bars are an impressive 0.15 Cº – almost a sixth of a Celsius degree – either side of the central estimate.
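For readers who want to see roughly how a combined error bar of that kind arises, here is a minimal sketch in which three independent uncertainty components are combined in quadrature. The component values are invented for illustration only; the actual HadCRUT4 components vary from month to month, and the Met Office's own procedure is more elaborate than this.

```python
import math

# Illustrative (invented) one-sigma uncertainty components for a single monthly
# global-mean temperature anomaly, in Celsius degrees.
measurement_uncertainty = 0.05
coverage_uncertainty    = 0.06
bias_uncertainty        = 0.03

# Assuming the three components are independent, they combine in quadrature.
combined = math.sqrt(measurement_uncertainty**2
                     + coverage_uncertainty**2
                     + bias_uncertainty**2)

print(f"combined 1-sigma uncertainty: {combined:.3f} C")
print(f"combined 2-sigma half-width:  {2 * combined:.3f} C")
```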
The fundamental conceptual error that Cowtan & Way had made lay in their failure to realize that large uncertainties do not reduce the length of The Pause: they actually increase it.
Cowtan & Way’s proposed changes to the HadCRUT4 dataset, intended to trounce the skeptics by eliminating The Pause, were so small that the trend calculated on the basis of their amendments still fell within the combined uncertainties.
In short, even if their imaginative data reconstructions were justifiable (which, as Dr. Whitehouse indicated, they were not), they made nothing like enough difference to allow us to be 95% confident that any global warming at all had occurred during The Pause.
If one takes no account of the error bars and confines the analysis to the central estimates of the temperature anomalies, the HadCRUT4 dataset shows no global warming at all for nigh on 13 years (above).
However, if one displays the 2σ uncertainty region, the least-squares linear-regression trend falls wholly within that region for 17 years 9 months (below).
The true duration of The Pause, based on the HadCRUT4 dataset, thus approaches 18 years. Therefore, the question Cowtan & Way should have addressed, but did not address, is whether the patchwork of infills and extrapolations and krigings they used in their attempt to deny The Pause was at all likely to constrain the wide uncertainties in the dataset, rather than adding to them.
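The sketch below illustrates, on synthetic numbers only, the kind of comparison being made: fit an ordinary least-squares trend to a monthly anomaly series and ask whether the total change implied by that trend is smaller than the stated half-width of the error bars. It is not a reconstruction of the actual HadCRUT4 analysis, and the 0.15 Cº half-width is simply the figure quoted above.

```python
import numpy as np

# Synthetic monthly anomaly series standing in for HadCRUT4 data (17 years 9 months).
rng = np.random.default_rng(0)
months = np.arange(17 * 12 + 9)
anomalies = 0.0001 * months + rng.normal(0.0, 0.08, months.size)

# Ordinary least-squares linear trend, in Celsius degrees per month.
slope, intercept = np.polyfit(months, anomalies, 1)
implied_change = slope * (months[-1] - months[0])

error_bar_half_width = 0.15   # the HadCRUT4 figure quoted above, in Celsius degrees

print(f"trend-implied change over the period: {implied_change:+.3f} C")
if abs(implied_change) < error_bar_half_width:
    print("the implied change lies within the stated error bars")
else:
    print("the implied change exceeds the stated error bars")
```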
Publication of papers such as Cowtan & Way, which really ought not to have passed peer review, does indicate the growing desperation of institutions such as the Royal Meteorological Society, which, like every institution that has profiteered by global warming, does not want the flood of taxpayer dollars to become a drought.
Those driving the scare have by now so utterly abandoned the search for truth that is the end and object of science that they are incapable of thinking straight. They have lost the knack.
Had they but realized it, they did not need to deploy ingenious statistical dodges to make The Pause go away. All they had to do was wait for the next El Niño.
These sudden warmings of the equatorial eastern Pacific, for which the vaunted models are still unable to account, occur on average every three or four years. Before long, therefore, another El Niño will arrive, the wind and the thermohaline circulation will carry the warmth around the world, and The Pause – at least for a time – will be over.
It is understandable that skeptics should draw attention to The Pause, for its existence stands as a simple, powerful, and instantly comprehensible refutation of much of the nonsense talked in Warsaw this week.
For instance, the most straightforward and unassailable argument against those at the U.N. who directly contradict the IPCC’s own science by trying to blame Typhoon Haiyan on global warming is that there has not been any for just about 18 years.
In logic, that which has occurred cannot legitimately be attributed to that which has not.
However, the world continues to add CO2 to the atmosphere and, all other things being equal, some warming can be expected to resume one day.
It is vital, therefore, to lay stress not so much on The Pause itself, useful though it is, as on the steadily growing discrepancy between the rate of global warming predicted by the models and the rate that actually occurs.
The IPCC, in its 2013 Assessment Report, runs its global warming predictions from January 2005. It seems not to have noticed that January 2005 happened more than eight and a half years before the Fifth Assessment Report was published.
Startlingly, its predictions of what has already happened are wrong. And not just a bit wrong. Very wrong. No prizes for guessing in which direction the discrepancy between modeled “prediction” and observed reality runs. Yup, you guessed it. They exaggerated.
The left panel shows the models’ predictions to 2050. The right panel shows the discrepancy of half a Celsius degree between “prediction” and reality since 2005.
On top of this discrepancy, the trends in observed temperature compared with the models’ predictions since January 2005 continue inexorably to diverge:
Here, 34 models’ projections of global warming since January 2005 in the IPCC’s Fifth Assessment Report are shown as an orange region. The IPCC’s central projection, the thick red line, shows the world should have warmed by 0.20 Cº over the period (equivalent to 2.33 Cº/century). The 18 ppmv (201 ppmv/century) rise in the trend on the gray dogtooth CO2 concentration curve, plus other greenhouse-gas increases, should have caused 0.1 Cº of warming, with the remaining 0.1 Cº coming from previous CO2 increases.
Yet the mean of the RSS and UAH satellite measurements, in dark blue over the bright blue trend-line, shows global cooling of 0.01 Cº (–0.15 Cº/century). The models have thus already over-predicted warming by 0.22 Cº (2.48 Cº/century).
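As a purely arithmetical check on the figures quoted above, a trend expressed per century scales linearly with the length of the window. The sketch below reproduces the quoted numbers from the per-century rates and an assumed window of about eight and three-quarter years (January 2005 to roughly the time of writing); the window length is an approximation.

```python
# Arithmetical check of the rates quoted above.
window_years = 8.75   # approximate length of the window since January 2005

predicted_rate = 2.33    # Celsius degrees per century (IPCC central projection, as quoted)
observed_rate  = -0.15   # Celsius degrees per century (mean of RSS and UAH trends, as quoted)

predicted_change = predicted_rate * window_years / 100.0   # about +0.20 C
observed_change  = observed_rate  * window_years / 100.0   # about -0.01 C
gap = predicted_change - observed_change                   # about +0.22 C

print(f"predicted {predicted_change:+.2f} C, observed {observed_change:+.2f} C, "
      f"gap {gap:+.2f} C")
```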
This continuing credibility gap between prediction and observation is the real canary in the coal-mine. It is not just The Pause that matters: it is the Gap that matters, and the Gap that will continue to matter, and to widen, long after The Pause has gone. The Pause deniers will eventually have their day: but the Gap deniers will look ever stupider as the century unfolds.
Nick Stokes says:
November 20, 2013 at 2:10 am
———————————————-
“You can adopt the null hypothesis that the models are right, and if you can reject that, you’ve proved something.”
Nice try, Mr Stokes, but it won’t wash. Despite “travesty” Trenberth’s crazed proposal to the AMS that the null hypothesis should be reversed in the case of AGW, the null hypothesis “AGW is utter tripe” remains in place. And the null hypothesis still stands not just for AGW but for the radiative greenhouse hypothesis underlying it.
“But AGW isn’t deduced from the temperature record, so isn’t dependent on rejecting a null hypothesis of zero warming.”
No, that won’t work either. AGW has not been “deduced”. It has been proposed, rejected, reanimated, hyped and used for blatantly political purposes. Anyone with any reasoning ability should be able to deduce that adding radiative gases to the atmosphere will not reduce the atmosphere’s radiative cooling ability.
RicharLH: I believe that excess warming, including the rise in OHC, in the 1980s and 1990s was because of Asian industrialisation and forest burning. The extra aerosols reduced cloud albedo.
The effect saturated about 1999, when the ‘Asian Brown Cloud’ appeared. This seems to have been the ‘false positive’ which encouraged ‘the team’ to continue its serious dissembling.
PS the physics behind this is the correction to Sagan’s incorrect aerosol optical physics. He misinterpreted the work of van der Hulst.
Lewis P Buckingham says:
November 20, 2013 at 1:18 am
“When Cowtan and Way infilled the Arctic temperature data, did they also calculate error bars inherent in that infilling?”
Did Cowtan and Way “infill” the Arctic temperature data? To me it looks as if Cowtan and Way made retrospective predictions about what Arctic temperatures would have been, rather than providing “actual data”. There is a lamentable tendency to treat predictions as “actual data”. By “actual data” I mean real temperature measurements made, and recorded, by flesh-and-blood people. Cowtan and Way have not provided any new “actual data.” There is another point. How does anyone know whether or not temperatures in the Arctic are rising rapidly if, as is generally admitted, there is a scarcity of “actual data” for the Arctic?
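The distinction between measured and infilled values can be illustrated with any interpolation method that reports its own uncertainty. The toy sketch below is not Cowtan & Way’s kriging procedure; it is a one-dimensional Gaussian-process interpolation over invented “station” values, shown only to make the point that an infilled estimate comes with a prediction uncertainty that ought to be carried forward into any trend calculation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Invented "station" latitudes and anomalies (Celsius degrees); not real Arctic data.
lat_obs = np.array([[55.0], [60.0], [65.0], [70.0]])
anom_obs = np.array([0.4, 0.6, 0.5, 0.8])

# Latitudes with no observations, where values must be infilled.
lat_missing = np.array([[75.0], [80.0], [85.0]])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0), alpha=0.01)
gp.fit(lat_obs, anom_obs)

# The interpolator returns both an estimate and its prediction uncertainty.
infill, infill_std = gp.predict(lat_missing, return_std=True)
for lat, est, sd in zip(lat_missing.ravel(), infill, infill_std):
    print(f"lat {lat:4.1f}: infilled anomaly {est:+.2f} C, 1-sigma {sd:.2f} C")
```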
Nick Stokes, I have heard of “moving the goal posts” to win an argument, but you have taken the goal posts off the pitch entirely.
Look up the scientific method, or re-take science 101!
The CAGW hypothesis, as demonstrated by models, has been entirely and completely falsified by true, unadjusted, untampered, real, empirical scientific measurements. The prediction of the accepted and established hypothesis (that a doubling of CO2 will result in warming with a central estimate of 3 degrees) has NOT happened. The prediction is false; the hypothesis is falsified. Go back to the drawing board and find a hypothesis which is validated by empirical evidence. Stop adjusting the evidence to fit the hypothesis!
Bottom line –
CO2 has risen, is still rising, and temperatures are not.
Nick Stokes, I cannot believe you are even trying to defend your laughable position, particularly with this classic:
OK right, so if a theory is accepted by a scientific consensus as “proved” then it stays proven regardless of all subsequent evidence to the contrary? Galileo and Einstein might have something to say on that idea.
Well, Nick is right in that AGW is not deduced from the temperature record; it’s a failed hypothesis that persists despite solid evidence to the contrary, i.e. models designed with AGW in mind invariably overestimate warming by large amounts. Natural variability is the primary driver of climate, and if you can’t see that by now, you should really take a close look at why you believe in CAGW.
Nick Stokes says: “AGW has been around since 1896. Arrhenius then deduced that CO2 would impede the loss of heat through IR, and would cause temperatures to rise. There was no observed warming then.”
While the IPCC likes to show warming only from 1850 (46 years before 1896), longer time series show warming since the LIA. How much of that, if any, is AGW has yet to be demonstrated.
Nick Stokes:
AGW has been around since 1896. Arrhenius then deduced that CO2 would impede the loss of heat through IR, and would cause temperatures to rise. There was no observed warming then. AGW is a consequence of what we know about the radiative properties of gases.
One wee flaw, he completely reversed his opinion about 10 years later! AGW is still just a hypothesis, not even a theory, but once the all encompassing Precautionary Principle is invoked, anything is possible, even fairies at the bottom of your garden! 🙂
cd says: November 20, 2013 at 2:59 am
“If you claim to understand what causes climate change, then make predictions that there will be statistically significant warming with increasing CO2, and at a particular rate, and then it fails to materialise, then by all scientific standards the null hypothesis is accepted.”
No, statistical testing never leads to the null hypothesis being accepted. The outcomes are reject or fail to reject.
If you want to disprove something statistically, you have to adopt the null hypothesis that it is true, and then show that that has to be rejected.
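A minimal sketch of what “reject or fail to reject” looks like in practice for a trend, using an entirely synthetic series and ignoring the serial correlation that complicates real temperature data: the null hypothesis is that the underlying slope is zero, and the test either rejects it at the chosen significance level or does not.

```python
import numpy as np
from scipy import stats

# Synthetic monthly series with a small trend buried in noise (invented numbers).
rng = np.random.default_rng(1)
months = np.arange(17 * 12)
series = 0.0003 * months + rng.normal(0.0, 0.1, months.size)

result = stats.linregress(months, series)
alpha = 0.05   # conventional significance level

# Null hypothesis: the underlying trend is zero.
if result.pvalue < alpha:
    print(f"reject the null (p = {result.pvalue:.3f}): the trend differs from zero")
else:
    print(f"fail to reject the null (p = {result.pvalue:.3f}); "
          "note this is not the same as accepting a zero trend")
```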
Alan the Brit says: November 20, 2013 at 3:33 am
“One wee flaw, he completely reversed his opinion about 10 years later!”
I’d like to see a citation for that.
As Dr Roy Spencer has stated, the climate Null Hypothesis has never been falsified.
The Null Hypothesis is a corollary of the Scientific Method. Because it has never been falsified, it means that the current climate remains well within historical norms. There is nothing either unusual or unprecedented happening, therefore all the arm-waving to the contrary is simply Chicken Little-type alarmism.
In a way Nick Stokes is correct in saying “AGW isn’t deduced from the temperature record”. The debate is actually about whether and, if so, how, when and where climate becomes significantly more dangerous overall, having taken account of demographic changes in the broadest sense.
Global average temperature is used as a simplistic proxy by both sides of the debate to try and justify political actions.
Taking the focus to the real issue and looking at existing data, apart from sea-level behaviour there is as yet nothing to indicate how, when and where things will get more dangerous.
Nick Stokes says:
November 20, 2013 at 2:55 am
“But a model does not use as input any temperature record.”
I find this difficult to accept!
Do people write simulations with many variables and not seed the variables at the start of simulation?
Surely every simulation run ought to be preceded by an initialization step?
To Alan the Brit: the Arrhenius hypothesis is based on the assumption of ‘black body’ surface-emitted real energy being absorbed by GHGs in the atmosphere with that energy being thermalised in the atmosphere.
Only one of these assumptions is valid; if there were IR emission in the self-absorbed GHG IR bands, that energy would be absorbed. However, anyone with sufficient knowledge of statistical thermodynamics knows that this energy cannot be thermalised in the gas phase (assuming a surface of higher or equal temperature).
The bottom line is that it all comes down to Tyndall’s experiment having been seriously misunderstood. The GHGs absorb IR energy but the thermalisation has to be at optical heterogeneities, the interface with condensed matter for which the vibrationally activated density of states is much broader.
As for surface emission: the most basic radiative physics is that radiation fields are added vectorially, so there can be no net surface IR in most H2O or CO2 bands. That so many physicists accept the IPCC case proves that modern physics education is rubbish. I forgive the climate people because they are taught incorrect physics. No professional engineer with process engineering experience accepts this mistaken view, because we have to get the right answer.
“However, the world continues to add CO2 to the atmosphere and, all other things being equal, some warming can be expected to resume one day.”
Isn’t this conflation of the two, and the implied cause and effect, an example of a logical fallacy? I’m sure it will get warmer again and it will get colder again, but without very much reference to CO2.
steverichards1984 says: November 20, 2013 at 3:42 am
“Do people write simulations with many variables and not seed the variables at the start of simulation?
Surely every simulation run ought to be preceded by an initialization step?”
Yes, they do initialize. But typically with a climate model, the initial state is set way back (many decades), and the model left to “wind up”. That’s an acknowledgement that the initial state is not well known, and probably contains unrealistic things that have to be left to settle down. The initial state would be based on climate norms.
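A highly simplified sketch of that “wind up” (spin-up) pattern, with a single relaxing state variable standing in for the vastly more complex state of a real GCM: the model is started from an approximate initial state, integrated for a discard period so that the unrealistic transients settle, and only the subsequent output is analysed. This is an illustration of the idea, emphatically not a climate model.

```python
# Toy spin-up illustration: one state variable relaxing toward an equilibrium,
# standing in for the far richer state of a real climate model.
equilibrium = 14.0    # notional equilibrium global-mean surface temperature (C)
state = 10.0          # deliberately poor initial guess
relaxation = 0.02     # fraction of the gap closed per time step (arbitrary)

spinup_steps = 500    # discarded "wind up" period
run_steps = 120       # steps kept for analysis

for _ in range(spinup_steps):
    state += relaxation * (equilibrium - state)   # transients from the poor start decay

kept = []
for _ in range(run_steps):
    state += relaxation * (equilibrium - state)
    kept.append(state)

print(f"state at start of analysed run: {kept[0]:.2f} C; at end: {kept[-1]:.2f} C")
```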
dbstealey says: November 20, 2013 at 3:42 am
“As Dr Roy Spencer has stated, the climate Null Hypothesis has never been falsified.”
I presume that NH includes zero trend. And that just isn’t true. The fact that people are talking about 20 years or whatever without significant warming implies that the trend over longer times is significantly different from zero. Otherwise what does the number mean?
TLM says: November 20, 2013 at 3:25 am
“OK right, so if a theory is accepted by a scientific consensus as “proved” then it stays proven regardless of all subsequent evidence to the contrary?”
No, there’s a well established way of disproving it. Do it! If you want to do it statistically, posit some consequence of the theory as null hypothesis and try to reject it. Just saying that you have failed to disprove some alternative theory doesn’t work.
Surely whether there has been a pause in warming or not over the past 17 years is neither here nor there. The key point is that the GCM models predicted that temperatures would rise a lot faster than they have over the past 17 years due to increasing emissions/concentrations of CO2, and the fact that the temperature hasn’t risen as quickly as predicted kind of suggests that the sensitivity of the surface temperature to CO2 concentrations is low, and hence that future impacts will be less severe than currently predicted.
Surely that is a good thing? And thus climate change becomes something less to worry about? Or am I missing something?
Nick Stokes says:
November 20, 2013 at 2:55 am
“GISS forcings are often cited. But a model does not use as input any temperature record.”
The temperature record is used in the calculation of the most critical input of all in the models. And not the entire temperature record, but only a tiny fraction of the record that is extremely cherry-picked. The temperature record of late 20th Century warming is used in the calculation of climate sensitivity, which is the sole reason for any significant debate on AGW.
Now, the water vapor feedback hypothesis does not need a temperature record to become a hypothesis. One can hypothesize that the feedback is any number at all, from extremely negative to extremely positive. Yet it is absolutely critical that the feedback number be seen as potentially legitimate, and that it seems to equate with at least some actual temperature record. The late 20th century warming plus CO2 trend are the only time in history that we have any evidence that the current water vapor feedback hypothesis could be valid. Outside of this time, the hypothesis is falsified by evidence that is far more scientifically valid than Cowtan and Way’s Arctic temperatures.
Remove the cherry-picked temperature record from the assumption of a water vapor feedback and the AGW theory becomes a 1-degree, largely beneficial temperature rise in which the world can rejoice. So go ahead and remove the temperature record, Nick, and we can all go home and pretend the last 25 years of fear-mongering never happened. However, you cannot defend the temperature record as justification for the input assumptions and then deny that the temperature record is relevant to the output.
If the temperature record is not relevant, then the AGW theory is not relevant, from beginning to end.
Jim Clarke says: November 20, 2013 at 4:15 am
“The temperature record is used in the calculation of the most critical input of all in the models. And not the entire temperature record, but only a tiny fraction of the record that is extremely cherry-picked. The temperature record of late 20th Century warming is used in the calculation of climate sensitivity, which is the sole reason for any significant debate on AGW.”
Climate sensitivity is not an input to GCMs. You can use a GCM to estimate CS. People also try to estimate CS independently from the temperature record, but it isn’t easy.
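One of the simpler ways people attempt that independent estimate from the temperature record is an energy-budget calculation of the form ECS ≈ F_2x × ΔT / (ΔF − ΔQ). The sketch below uses invented placeholder inputs purely to show the shape of the calculation; in practice the difficulty lies in the large uncertainties on each input.

```python
# Crude energy-budget estimate of equilibrium climate sensitivity:
#   ECS ~ F_2x * dT / (dF - dQ)
# dT: observed warming between two periods, dF: change in radiative forcing,
# dQ: change in ocean heat uptake, F_2x: forcing from doubled CO2.
# All numbers below are placeholders for illustration only.
F_2x = 3.7    # W/m^2, commonly quoted value for doubled CO2
dT   = 0.75   # C, illustrative observed warming
dF   = 2.0    # W/m^2, illustrative forcing change
dQ   = 0.6    # W/m^2, illustrative ocean heat uptake change

ecs = F_2x * dT / (dF - dQ)
print(f"illustrative sensitivity estimate: {ecs:.1f} C per CO2 doubling")
```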
Great thread!
“AlecM says:
November 20, 2013 at 3:43 am”
Could not have said it better myself. Give this man a VB!
Hi Nick
Have you ever done the exercise whereby you remove the Arctic stations/data from the equation and then graphed the results?
That is to say that what is produced is:
A) A ‘global’ record excluding the Arctic
B) A NH record excluding the Arctic
C) Just the Arctic itself?
Tonyb
So, who are these individuals who have written this paper?
What is their history in climate science?
What else have they written?
What groups do they belong to?
What is their motivation for attempting to explain away the pause?
Who funds them?
Who reviews them?