By Christopher Monckton of Brenchley
This time last year, as the honorary delegate from Burma, I had the honor of speaking truth to power at the Doha climate conference by drawing the attention of 193 nations to the then almost unknown fact that global warming had not happened for 16 years.
The UN edited the tape of my polite 45-second intervention by cutting out the furious howls and hisses of my supposedly grown-up fellow delegates. They were less than pleased that their carbon-spewing gravy-train had just tipped into the gulch.
The climate-extremist news media were incandescent. How could I have Interrupted The Sermon In Church? They only reported what I said because they had become so uncritical in swallowing the official story-line that they did not know there had really been no global warming at all for 16 years. They sneered that I was talking nonsense – and unwittingly played into our hands by spreading the truth they had for so long denied and concealed.
Several delegations decided to check with the IPCC. Had the Burmese delegate been correct? He had sounded as though he knew what he was talking about. Two months later, Railroad Engineer Pachauri, climate-science chairman of the IPCC, was compelled to announce in Melbourne that there had indeed been no global warming for 17 years. He even hinted that perhaps the skeptics ought to be listened to after all.
At this year’s UN Warsaw climate gagfest, Marc Morano of Climate Depot told the CFACT press conference that the usual suspects had successively tried to attribute The Pause to the alleged success of the Montreal Protocol in mending the ozone layer; to China burning coal (a nice irony there: Burn Coal And Save The Planet From – er – Burning Coal); and now, just in time for the conference, by trying to pretend that The Pause has not happened after all.
As David Whitehouse recently revealed, the paper by Cowtan & Way in the Quarterly Journal of the Royal Meteorological Society used statistical prestidigitation to vanish The Pause.
Dr. Whitehouse’s elegant argument used a technique in which Socrates delighted. He stood on the authors’ own ground, accepted for the sake of argument that they had used various techniques to fill in missing data from the Arctic, where few temperature measurements are taken, and still demonstrated that their premises did not validly entail their conclusion.
However, the central error in Cowtan & Way’s paper is a fundamental one and, as far as I know, it has not yet been pointed out. So here goes.
As Dr. Whitehouse said, HadCRUT4 already takes into account the missing data in its monthly estimates of coverage uncertainty. For good measure and good measurement, it also includes estimates for measurement uncertainty and bias uncertainty.
Taking into account these three sources of uncertainty in measuring global mean surface temperature, the error bars are an impressive 0.15 Cº – almost a sixth of a Celsius degree – either side of the central estimate.
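For illustration, here is a short Python sketch of how independent uncertainty components of this kind combine in quadrature. The three component values are invented for the example, not the actual HadCRUT4 figures, which vary month by month:

```python
import math

# Illustrative (assumed) 1-sigma component uncertainties, in deg C, for a
# single monthly global anomaly; the real HadCRUT4 values vary month by month.
measurement = 0.05
coverage = 0.06
bias = 0.04

# Independent uncertainties combine in quadrature (root-sum-of-squares).
combined_1sigma = math.sqrt(measurement**2 + coverage**2 + bias**2)

# A 2-sigma (~95%) error bar either side of the central estimate:
combined_2sigma = 2 * combined_1sigma
print(round(combined_2sigma, 3))
```

Note that the combined bar is dominated by the largest component: shrinking the smaller components barely narrows it.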
The fundamental conceptual error that Cowtan & Way had made lay in their failure to realize that large uncertainties do not reduce the length of The Pause: they actually increase it.
Cowtan & Way’s proposed changes to the HadCRUT4 dataset, intended to trounce the skeptics by eliminating The Pause, were so small that the trend calculated on the basis of their amendments still fell within the combined uncertainties.
In short, even if their imaginative data reconstructions were justifiable (which, as Dr. Whitehouse indicated, they were not), they made nothing like enough difference to allow us to be 95% confident that any global warming at all had occurred during The Pause.
If one takes no account of the error bars and confines the analysis to the central estimates of the temperature anomalies, the HadCRUT4 dataset shows no global warming at all for nigh on 13 years (above).
However, if one displays the 2σ uncertainty region, the least-squares linear-regression trend falls wholly within that region for 17 years 9 months (below).
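The idea can be sketched in a few lines of Python. This is a toy illustration on synthetic data, using one simple reading of the criterion (the fitted total warming over a window is smaller than the error bar), not the actual HadCRUT4 series or the exact method used in the head post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly anomalies (deg C) standing in for the HadCRUT4
# central estimates: a small underlying trend plus noise. NOT real data.
months = np.arange(300, dtype=float)             # 25 years of months
anoms = 0.0005 * months + rng.normal(0.0, 0.1, months.size)
sigma = 0.15                                     # assumed 2-sigma error bar

def fitted_change(t, y):
    """Total change implied by the least-squares trend over the window."""
    slope = np.polyfit(t, y, 1)[0]
    return slope * (t[-1] - t[0])

# On this reading, the "pause" is the longest recent window whose fitted
# total warming stays inside the measurement error bar.
pause_months = 0
for start in range(months.size - 24, -1, -1):
    t, y = months[start:], anoms[start:]
    if abs(fitted_change(t, y)) <= sigma:
        pause_months = months.size - start

print(pause_months / 12, "years")
```

Widening `sigma` can only lengthen the window that satisfies the test, which is the arithmetical point at issue: larger uncertainties extend, rather than shorten, the period indistinguishable from no warming.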
The true duration of The Pause, based on the HadCRUT4 dataset, approaches 18 years. Therefore, the question Cowtan & Way should have addressed, but did not, is whether the patchwork of infills, extrapolations, and krigings they used in their attempt to deny The Pause was at all likely to constrain the wide uncertainties in the dataset, rather than adding to them.
Publication of papers such as Cowtan & Way, which really ought not to have passed peer review, does indicate the growing desperation of institutions such as the Royal Meteorological Society, which, like every institution that has profiteered by global warming, does not want the flood of taxpayer dollars to become a drought.
Those driving the scare have by now so utterly abandoned the search for truth that is the end and object of science that they are incapable of thinking straight. They have lost the knack.
Had they but realized it, they did not need to deploy ingenious statistical dodges to make The Pause go away. All they had to do was wait for the next El Niño.
These sudden warmings of the equatorial eastern Pacific, for which the vaunted models are still unable to account, occur on average every three or four years. Before long, therefore, another El Niño will arrive, the wind and the thermohaline circulation will carry the warmth around the world, and The Pause – at least for a time – will be over.
It is understandable that skeptics should draw attention to The Pause, for its existence stands as a simple, powerful, and instantly comprehensible refutation of much of the nonsense talked in Warsaw this week.
For instance, the most straightforward and unassailable argument against those at the U.N. who directly contradict the IPCC’s own science by trying to blame Typhoon Haiyan on global warming is that there has not been any for just about 18 years.
In logic, that which has occurred cannot legitimately be attributed to that which has not.
However, the world continues to add CO2 to the atmosphere and, all other things being equal, some warming can be expected to resume one day.
It is vital, therefore, to lay stress not so much on The Pause itself, useful though it is, as on the steadily growing discrepancy between the rate of global warming predicted by the models and the rate that actually occurs.
The IPCC, in its 2013 Assessment Report, runs its global warming predictions from January 2005. It seems not to have noticed that January 2005 happened more than eight and a half years before the Fifth Assessment Report was published.
Startlingly, its predictions of what has already happened are wrong. And not just a bit wrong. Very wrong. No prizes for guessing in which direction the discrepancy between modeled “prediction” and observed reality runs. Yup, you guessed it. They exaggerated.
The left panel shows the models’ predictions to 2050. The right panel shows the discrepancy of half a Celsius degree between “prediction” and reality since 2005.
On top of this discrepancy, the trends in observed temperature compared with the models’ predictions since January 2005 continue inexorably to diverge:
Here, 34 models’ projections of global warming since January 2005 in the IPCC’s Fifth Assessment Report are shown as an orange region. The IPCC’s central projection, the thick red line, shows the world should have warmed by 0.20 Cº over the period (equivalent to 2.33 Cº/century). The 18 ppmv (201 ppmv/century) rise in the trend on the gray dogtooth CO2 concentration curve, plus other greenhouse-gas increases, should have caused 0.1 Cº of warming, with the remaining 0.1 Cº from previous CO2 increases.
Yet the mean of the RSS and UAH satellite measurements, in dark blue over the bright blue trend-line, shows global cooling of 0.01 Cº (–0.15 Cº/century). The models have thus already over-predicted warming by 0.21 Cº (2.48 Cº/century).
This continuing credibility gap between prediction and observation is the real canary in the coal-mine. It is not just The Pause that matters: it is the Gap that matters, and the Gap that will continue to matter, and to widen, long after The Pause has gone. The Pause deniers will eventually have their day: but the Gap deniers will look ever stupider as the century unfolds.
@Nick
“But AGW isn’t deduced from the temperature record, so isn’t dependent on rejecting a null hypothesis of zero warming.”
I got a bigger and longer laugh from your squirming today than from the good Lord’s wise words, and that is saying something! Of all the desperation – I just can’t spend the time to address all of it so just one reminder about 1896 and all that LWIR radiation.
Arrhenius did make his observation of course, but later admitted he got it really wrong! How about citing that for a change! All the IPCC and their running mates are doing is repeating his first mistake, only to have to (inevitably) correct it later just as he did: CO2 warms, but not by very much.
Christopher M observes that the warming is so slight that even a lack of El Niños for a time cancels it entirely. You will recall, of course, outrageous prophecies from the likes of Hansen, who had the oceans boiling and a “Venus-like climate” in a few centuries, based on the continued redoubling of emissions from burning fossil fuels that the same crowd screams are going to run out soon. When that happens we will still be able to burn the piles of accumulated stupid over at the IPCC offices.
Monty Python never made up a sketch as dumb as the kneejerk defenses of CAGW. 1896….my a$$!
Wow, Nick Stokes has really jumped the shark in this comments thread. Needs to be preserved for posterity, as the moment he can look back on and realise what the AGW agenda had done to his scientific objectivity.
Been away, sorry for the delay.
It is a big step to go from:
A: The temperature record has diverged from the models and so the models are wrong.
to
B: The temperature record has diverged from the models and so CO2 is not a greenhouse gas.
Point A seems to be proven, by any statistical measure. But that does not lead to Point B. The climate is a complex system with many factors. How they all interact is not known… indeed, proving Point A shows they are not all well-estimated.
Conversely, Point B being a reasonable fact (some might say a self-evident fact from our knowledge of spectroscopy) does not necessarily lead back to Point A. Although it might be a justifiable leap if the models did have a proven track record of approximating the real world.
Failure of the models is a reason to not use the models in making expensive and poverty inducing policy decisions.
Failure of the models is a reason to question the impact of CO2 and other greenhouse gases on the whole climate system.
That’s two steps with different justifications required.
Yet it seems to me that many people get so carried away with their policy battles that they go so far as to reify the link from policy to the Navier-Stokes equations: firming it up both ways.
They are very tenuous links.
tonyb says: November 20, 2013 at 4:28 am
“Have you ever done the exercise whereby you remove the Arctic stations/data from the equation and then graphed the results?”
Well, not quite. What I’ve been doing lately is contrasting the normal practice of HADCRUT (and most recently, NOAA) of discarding cells with no data, with instead infilling with a latitude average.
Discarding means in arithmetical effect that the dataless cells are treated as having the value of the global average. This underweights the information we have about the region. Treating them as typical of what we know of their latitude, rather than what we know of the world, makes more sense.
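The arithmetic here can be checked on a toy grid in Python. The numbers below are invented for illustration; the point is that dropping empty cells is identical to filling them with the global mean of the observed cells, whereas latitude-band infilling weights poorly sampled regions by what we know of their latitude:

```python
import numpy as np

# Toy 3-latitude-band x 4-cell grid of temperature anomalies (deg C);
# np.nan marks cells with no observations. Illustrative numbers only.
grid = np.array([
    [2.0, 1.8, np.nan, np.nan],   # "Arctic" band: warm, poorly sampled
    [0.4, 0.5, 0.3, 0.6],         # mid-latitudes: well sampled
    [0.1, 0.2, 0.2, np.nan],      # tropics/south
])

# 1) Discarding empty cells: average over observed cells only.
discard_mean = np.nanmean(grid)

# 2) The same thing, made explicit: infill missing cells with the global
#    average of the observed cells; the overall mean is unchanged.
filled_global = np.where(np.isnan(grid), np.nanmean(grid), grid)
assert np.isclose(filled_global.mean(), discard_mean)

# 3) Infill each missing cell with its own latitude-band average instead.
row_means = np.nanmean(grid, axis=1, keepdims=True)
filled_lat = np.where(np.isnan(grid), row_means, grid)
lat_mean = filled_lat.mean()

print(round(discard_mean, 3), round(lat_mean, 3))
```

With a warm, sparsely sampled polar band, the latitude-average infill raises the global mean relative to discarding, which is exactly why the choice of infilling method matters.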
So it’s not exactly with/without, but nearly.
I have tried ways of estimating Arctic and Antarctic in isolation. Here’s Antarctica.
“Crispin in Waterloo but really in Ulaanbaatar says:
November 20, 2013 at 4:33 am”
Another “wrong” for Arrhenius was Eugenics! But most people keep quiet about that too!
OssQss says:
November 20, 2013 at 4:29 am
Reading through the related thread on Climate Audit it appears that the authors have form on SkS.
AlecM says:
November 20, 2013 at 3:00 am
The key issue is from when did ‘the team’ realise it was wrong?
AlecM, they always knew they were wrong, it was never, ever about real Science, it was about Control & Cash.
Nick
Thanks for that.
The amplification in the Arctic is, I suspect, artificially inflating the global and NH temperatures, albeit GISS and Hadley don’t really appear to account for it properly.
There is also the ‘UHI’ factor, which recognises that far more readings are taken in urban areas than used to be the case, but let’s ignore that for the moment.
I have done a lot of work on CET and was at the Met Office discussing it just a couple of weeks ago. It is a pretty reliable proxy for NH temperatures at least. Look at what it has been doing over the last decade.
http://www.metoffice.gov.uk/hadobs/hadcet/
There appear to be many other datasets showing cooling, but they are being lost in the general noise of the bigger record.
I suspect the Arctic is (or has been) warming, just as it did in the 1920-1940 period (where 1930-1940 in Greenland remain the two warmest consecutive decades on record), and we also know of considerable warming in the 1820-1850 period.
Bearing in mind all the above I would have thought it a very useful and very fundamental exercise for someone with the appropriate skills (You) to produce the three graphs I suggest.
I suspect there is a Nobel prize in this for both of us 🙂
tonyb
The Arctic north of 70 deg North is less than 3% of the world. If it warmed 1 deg C only, the world would warm 0.03 deg C!!
For the Arctic alone to warm up the world by 1 deg C, it would have to warm up by more than 33 deg C.
Hyperthermania @ 01:30 says:
Here ya go:
prestidigitation (noun, formal): magic tricks performed as entertainment.
Nick Stokes says:
November 20, 2013 at 3:52 am
steverichards1984 says: November 20, 2013 at 3:42 am
“Do people write simulations with many variables and not seed the variables at the start of simulation?
Surely every simulation run ought to be preceded by an initialization step?”
Yes, they do initialize. But typically with a climate model, the initial state is set way back (many decades), and the model left to “wind up”. That’s an acknowledgement that the initial state is not well known, and probably contains unrealistic things that have to be left to settle down. The initial state would be based on climate norms.
—————————————————————–
That may be true, but you overlook the point that the models then use the existing temperature record to “calibrate” (i.e., modify – some say fudge – the model parameters) to make the models “fit” the temperature record before the prediction runs.
Nick Stokes says:
November 20, 2013 at 2:10 am
“But AGW isn’t deduced from the temperature record, so isn’t dependent on rejecting a null hypothesis of zero warming.”
In other words, the hypothesis of Anthropogenic Global Warming stays unfalsified even when it’s not warming?
In other words, rising temperatures are not a prediction of the theory?
Ok. Let’s just accept that.
You have just said that the AGW theory does not predict rising temperatures.
If that is the new official position of IPCC climate science, we can stop talking about spending hundreds of billions to protect us from warming.
Nick Stokes says:
November 20, 2013 at 4:40 am
“Discarding means in arithmetical effect that the dataless cells are treated as having the value of the global average.”
Infilling by any method has the logical effect that you are then estimating rather than measuring. The assumption is that the infilling method provides a “correct” value to substitute for a truly measured one.
To be sure you should only compare “like with like” thus not increasing the error potential/margin.
That is the main problem with “Cowtan and Way”, they create data by estimation then treat it as “measured” for the conclusion they derive.
Many thanks to all who have contributed here. Mr. Stokes is perhaps on shaky ground when he suggests that observed temperature change is not an input to the models. Of course it is, and in many places. For instance, it is one of the inputs that they use in their attempts to quantify the water vapor and other temperature feedbacks.
He is also on shaky ground in suggesting that the fact of little or no warming over the past couple of decades does not show the theory to have been wrong. Of course it does. Everyone who is rational accepts that adding greenhouse gases to the atmosphere will cause some warming, all other things being equal: but Arrhenius, whom Mr. Stokes cites with approval, did indeed change his mind about the central question in the climate debate. That question is not, as Mr. Stokes tries to imply, the question whether CO2 is a greenhouse gas and can cause warming. That question has long been settled in the affirmative.
Mr. Stokes is incorrect to say that Arrhenius was the first to posit the warming influence of CO2. It was in fact Joseph Fourier who did so, for he had deduced that it might influence the escape of “chaleur obscure” (i.e. infrared radiation) to space. Tyndall’s experiment of 1859 demonstrated that CO2 does indeed inhibit the passage of long-wave radiation. Arrhenius, during the long Arctic winter of 1895/6, after the loss of his wife, consoled himself by carrying out 10,000 individual spectral-line calculations, and he had not even brought a pocket calculator with him, still less a computer.
Unfortunately, his calculations were wrong. They were based on defective lunar spectra, and he had not at that time come across the fundamental equation of radiative transfer, which had been demonstrated a quarter of a century previously and would have saved him much computation. In 1906 he realized he had gotten his sums wrong, and, in a paper published in Vol. 1, no. 2 of the Journal of the Royal Nobel Institute, he published a new estimate about one-third of the original estimate, though he also added a water-vapor feedback.
As the head posting demonstrates, there is a growing discrepancy between even the most recent predictions of the IPCC about the rate of global warming and the observed rate. That discrepancy is now serious. The discrepancy between the First Assessment Report’s predictions in 1990 and what has happened since is still more serious. Then, the IPCC predicted that global warming would occur at 0.35 [0.2, 0.5] K/decade. However, the actual warming since then has been 0.14 K/decade, or only 40% of the predicted rate.
Furthermore, much of the warming since 1990 occurred during the positive phase of the Pacific Decadal Oscillation that endured from the sharp cooling-to-warming phase transition in 1976 to the warming-to-cooling transition late in 2001. As Pinker et al. pointed out in 2005, the positive phase of the PDO was coincident with – and perhaps causatively correlated with – a naturally-occurring reduction in cloud cover that greatly reduced the planetary albedo and exerted a very large forcing (approaching 3 Watts per square meter).
Analysis by Monckton of Brenchley and Boston (2010), in the 42nd Annual Proceedings of the World Federation of Scientists, suggests that between one-third and one-half of the warming since 1983 had been anthropogenic, and the rest was caused by the reduction in cloud cover.
Like it or not, the continuing failure of global mean surface temperature to change at anything like the predicted rate (or, in the past couple of decades, at all) is a serious challenge to the official theory, raising questions about the magnitude of the feedbacks the IPCC uses as a sort of deus ex machina to triple the small direct warming from CO2.
Mr. Stokes, in trying to suggest that the debate between skeptics and extremists centers on whether or not there is a greenhouse effect, is being disingenuous. The true debate is about how big the direct warming effect of CO2 is (for there are many non-radiative transports that act homeostatically and are undervalued by the models: evaporation, for instance), and how big the feedback factor should be (several papers find feedbacks appreciably net-negative, dividing climate sensitivity by up to 5).
Mr. Stokes also gives the impression that the uncertainties not only in the data but also in the theory are far smaller than they are. It is perhaps time for him to accept, in the light of the now-manifest failure of global temperatures to respond as predicted, that those of us who have raised legitimate and serious questions about those many aspects of the theory that are not settled science may have been right to do so.
Intellectual honesty is essential to true science. Mr. Stokes would earn more respect if he conceded that the discrepancy between what was predicted and what is observed is material, and that, if it persists, the skeptics he so excoriates will have been proven right.
I believe that Nick Stokes deserves an award for agile and persistent hand-waving around the fact that the warming has stopped for at least 17 years. Well done. It takes a very special amount of diligence and effort to ignore the truth, which has his cohorts in such a panic that they can’t even decide how to respond to it, resorting to amazing feats (and quite amusing) of straw-grasping.
BREAKING NEWS ALERT!!!
20 Nov: Washington Post: AP: Turmoil at UN climate talks as question of who’s to blame for global warming heats up
An old rift between rich and poor has reopened in U.N. climate talks as developing countries look for ways to make developed countries accept responsibility for global warming — and pay for it.
With two days left, there was commotion in the Warsaw talks Wednesday after the conference president — Poland’s environment minister — was fired in a government reshuffle and developing country negotiators walked out of a meeting on compensation for climate impacts….
The question of who’s to blame for climate change is central to developing countries who say they should receive financial support from rich nations to green their economies, adapt to shifts in the climate and cover costs of unavoidable damage caused by warming temperatures.
http://www.washingtonpost.com/world/europe/turmoil-at-un-climate-talks-as-question-of-whos-to-blame-for-global-warming-heats-up/2013/11/20/17a34bf6-51e5-11e3-9ee6-2580086d8254_story.html
You must check the pic in the above, whose caption is:
(PRECIOUS) Photo Caption: United Nations Secretary General Ban Ki-moon, right, and Executive Secretary of the UN Framework Convention on Climate Change Christiana Figueres, left, talk during a meeting with the Ghana Bamboo Bike initiative, at the UN Climate Conference in Warsaw, Poland, Wednesday, Nov. 20, 2013.
Hopefully the developing countries will now get out of the process altogether, & chase away the solar/wind salespeople pushing technology on them that we in the developed world can’t even afford in their present stage of development.
@DirkH
“That may be true but you overlook the point that the models then use the existing temperature record to “calibrate ” (i.e. modify -some say fudge – the model parameters) to make the models “fit” the temperature record before the prediction runs”
Unless we have a different understanding of ‘model parameter’ what you are saying is not correct.
Could you clarify what you mean by ‘model parameter’ and how you think they are ‘calibrated’?
@Nick Stokes
Serious question for information. Do the GCMs have any adjustable parameters? If so are these parameters fit to the prior history? By contrast, are the GCMs first principle models with well established inputs from known physical measurements?
Re hurricanes. I said it before, but I will reiterate.
Hurricanes cannot be dependent on absolute surface temperature, as has been claimed by some politicians and media outlets, otherwise Venus would be raging with them – and it is not. Yet Mars manages some impressive storms with little in the way of surface temperature.
In reality, large depressions and smaller tornadoes depend on differential temperatures in the airmass, not absolute temperatures. This was nicely demonstrated in the recent US tornado swarm, which raged along a large and vigorous cold front.
R
“prestidigitation” – sleight of hand. Magic.
Nick
I see you’ve resorted to being pedantic.
“If you want to disprove something statistically, you have to adopt the null hypothesis that it is true, and then show that that has to be rejected.”
Sorry but if I were being pedantic I would have to correct you here; you don’t have to adopt anything, the null hypothesis is the default position. That default position is either accepted or rejected after experimentation via statistical inference (as stated). The converse must therefore also be true (relating to the hypothesis or alternative hypothesis).
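To make the inference concrete, here is a small Python sketch of the standard t-test on a least-squares slope against the zero-trend null. The series is synthetic, the 1.97 threshold is the usual large-sample 95% critical value, and failing to reject the null is not the same as proving the trend is zero:

```python
import numpy as np

rng = np.random.default_rng(42)

def slope_t_statistic(t, y):
    """OLS slope and its t-statistic against the null of zero trend."""
    n = t.size
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    s2 = resid @ resid / (n - 2)                    # residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))  # std. error of the slope
    return slope, slope / se

# Synthetic 17-year monthly series: a tiny trend buried in noise.
t = np.arange(17 * 12, dtype=float)
y = 0.0001 * t + rng.normal(0.0, 0.1, t.size)

slope, tstat = slope_t_statistic(t, y)
# If |tstat| < ~1.97, the zero-trend null cannot be rejected at 95%.
print(round(tstat, 2))
```

A strong trend relative to the noise drives the t-statistic far past the threshold; a weak one leaves the null standing by default, which is the "default position" point above.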
@Patrick
>Another “wrong” for Arrhenius was Eugenics! But most people keep quiet about that too!
Yup, he was quite a guy. What I do like about him is that he admitted the first go-round was in error, and he raised the (unproven) idea that there are multipliers that might kick in if CO2 warmed things first.
I spent the greater part of today calibrating, to several significant digits, a machine that uses infrared radiation to stimulate CO2 and CO molecules so they can be counted. The reason I was successful was that CO2 absorbs IR and lights up nicely, so we can count a ‘show of little carbon hands’.
Anyone who claims CO2 does not cause a slight insulating effect is denying reality – a reality I use to determine CO and CO2 levels in gases. Many machines used in industry (all the good ones) operate on the same principle.
But, that does not a climate model make. How the atmosphere deals with any additional heat is very different from how a few thousand parts per million react to a laser beam. As Willis has ably demonstrated, when heated, the atmosphere dumps a lot more heat higher up by creating thunderstorms, and/or condenses additional cloud cover to cool the planet. It is a governed system, unlike the simplistic model I use each day to make measurements, with a delay of about 8 months, right Willis?
But I digress. The topic of the hour is the fact that if you smear the data to increase the width of the error bars, you allow for an interpretation that includes possibly greater cooling, not just possibly greater warming. Given the additional uncertainty, the period for which there may have been no warming at all is extended further back in time. That is the unavoidable consequence.
Nick Stokes says:
November 20, 2013 at 3:59 am
dbstealey says: November 20, 2013 at 3:42 am
“As Dr Roy Spencer has stated, the climate Null Hypothesis has never been falsified.”
I presume that NH includes zero trend. And that just isn’t true. The fact that people are talking about 20 years or whatever without significant warming implies that the trend over longer times is significantly different from zero. Otherwise what does the number mean?
To see whether anything abnormal is going on since the end of the Little Ice Age, approximately 1850. The trend line should approximate the average of the ones at the start of the Minoan, Roman, and Medieval warm periods.
M Courtney says:
November 20, 2013 at 4:40 am
Failure of the models is a reason to not use the models in making expensive and poverty inducing policy decisions.
Failure of the models is a reason to question the impact of CO2 and other greenhouse gases on the whole climate system.
That’s two steps with different justifications required.
Yet it seems to me that many people get so carried away with their policy battles that they go so far as to reify the link from policy to the Navier-Stokes equations: firming it up both ways.
They are very tenuous links.
You are correct that CO2 is a greenhouse gas. I do not think that the position of a reasonable sceptic is that it is not a greenhouse gas. I do think what we are saying is that it is not the significant factor the warmists think it is. I do not think at this time anyone knows what causes El Niño and La Niña, or what effects sunspot activity has on climate, both of which seem to be much more important to climate than CO2.
I am highly impressed, as I so often am, by Lord Monckton’s bandwidth, his command of so many relevant facts, and his seeming ability to summon them at a moment’s notice.
So it was probably salutary for me, as one who, not being so blessed, am among those most likely to be enthralled by such virtuoso performances, to encounter here: http://joannenova.com.au/2013/11/monckton-bada/#comment-1342330 an instance in which one needs little more than high-school algebra to recognize that on occasion Lord M. can be intransigently wrong.
It reminded me once again to reserve judgment about things I have not analyzed completely for myself.
Bruce Cobb says:”I believe that Nick Stokes deserves an award for agile and persistent hand-waving around the fact that the warming has stopped for at least 17 years. Well done. It takes a very special amount of diligence and effort to ignore the truth, which has his cohorts in such a panic that they can’t even decide how to respond to it, resorting to amazing feats (and quite amusing) of straw-grasping.”
Ditto. Carry on, Nick! This is entertaining.
It is important for each of us to realize that intelligent people can get an idea in their mind, adopt it, and carry on in the face of disproving evidence and counter-argument.
This is human nature.
“We” (by “we,” I mean Descartes, Popper and so on, not me specifically) have developed science not because we think scientifically, but because we humans do not think scientifically.
Perfectly rational, enlightened, intelligent, church-going, tax-paying, well-meaning citizens defended slavery for quite a long time.
I am not calling Stokes a supporter of slavery; odds are he or she is against it. I am just using this well-recognized point of consensus to illustrate how any of us, despite having a college education and the use of an intellect, can hold fast to ideas in the face of great contrary evidence. If we can appreciate this, we can appreciate two precious things: one is active, respectful debate, and the other is science itself.