Time to sweep away the flawed, failed IPCC
By Christopher Monckton of Brenchley
HadCRUT4, always the tardiest of the five global-temperature datasets, has at last coughed up its monthly global mean surface temperature anomaly value for June. So here is a six-monthly update on changes in global temperature since 1950, the year when the IPCC says we might first have begun to affect the climate by increases in atmospheric CO2 concentration.
The three established terrestrial temperature datasets that publish global monthly anomalies are GISS, HadCRUT4, and NCDC. Graphs for each are below.
GISS, as usual, shows more global warming than the others – but not by much. At worst, then, global warming since 1950 has occurred at a rate equivalent to 1.25 [1.1, 1.4] Cº/century. The interval occurs because the combined measurement, coverage and bias uncertainties in the data are around 0.15 Cº.
The IPCC says it is near certain that we caused at least half of that warming – say, 0.65 [0.5, 0.8] Cº/century equivalent. If the IPCC and the much-tampered temperature records are right, and if there has been no significant downward pressure on global temperatures from natural forcings, we have been causing global warming at an unremarkable central rate of less than two-thirds of a Celsius degree per century.
Roughly speaking, the business-as-usual warming from all greenhouse gases in a century is the same as the warming to be expected from a doubling of CO2 concentration. Yet at present the entire interval of warming rates that might have been caused by us falls well below the least value in the predicted climate-sensitivity interval [1.5, 4.5] Cº.
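For readers who want to check the comparison, here is a rough sketch of the arithmetic (my illustration, not Monckton's own calculation). It assumes the standard simplified CO2 forcing expression ΔF = 5.35 ln(C/C0) W/m² and uses the interval values quoted above:

```python
import math

# Forcing from one doubling of CO2, using the standard simplified
# expression dF = 5.35 * ln(C/C0) W/m^2:
dF_doubling = 5.35 * math.log(2.0)
print(f"Forcing per CO2 doubling: {dF_doubling:.2f} W/m^2")  # ~3.71

# Anthropogenic warming interval quoted above, in C/century:
low, central, high = 0.5, 0.65, 0.8
# IPCC climate-sensitivity interval, in C per CO2 doubling:
s_low, s_high = 1.5, 4.5

# If a century of business-as-usual GHG forcing is roughly one
# doubling's worth, the two intervals are directly comparable:
print(high < s_low)  # True: even the upper observed bound is below 1.5
```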
The literature, however, does not provide much in the way of explicit backing for the IPCC’s near-certainty that we caused at least half of the global warming since 1950. Legates et al. (2013) showed that only 0.5% of 11,944 abstracts of papers on climate science and related matters published in the 21 years 1991-2011 had explicitly stated that global warming in recent decades was mostly manmade. Not 97%: just 0.5%.
As I found when I conducted a straw poll of 650 of the most skeptical skeptics on Earth, at the recent Heartland climate conference in Las Vegas, the consensus that Man may have caused some global warming since 1950 is in the region of 100%.
The publication of that result provoked an extraordinary outbreak of fury among climate extremists (as well as one or two grouchy skeptics). For years the true-believers had gotten away with pretending that “climate deniers” – their hate-speech term for anyone who applies the scientific method to the climate question – do not accept the basic science behind the greenhouse theory.
Now that that pretense is shown to have been false, they are gradually being compelled to accept that, as Alec Rawls has demonstrated in his distinguished series of articles on Keating’s fatuous $30,000 challenge to skeptics to “disprove” the official hypothesis, the true divide between skeptics and extremists is not, repeat not, on the question whether human emissions may cause some warming. It is on the question how much warming we may cause.
On that question, there is little consensus in the reviewed literature. But opinion among the tiny handful of authors who research the “how-much-warming” question is moving rapidly in the direction of little more than 1 Cº warming per CO2 doubling. From the point of view of the profiteers of doom (profiteers indeed: half a dozen enviro-freako lobby groups collected $150 million from the EU alone in eight years), the problem is that 1 Cº is no problem.
Just 1 Cº per doubling of CO2 concentration is simply not enough to require any “climate policy” or “climate action” at all. It requires neither mitigation nor even adaptation: for the eventual global temperature change in response to a quadrupling of CO2 concentration compared with today, after which fossil fuels would run out, would be little more than 2 Cº – well within the natural variability of the climate.
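As a quick check of that arithmetic (a sketch of mine, assuming the usual logarithmic dependence of warming on concentration): a quadrupling is two doublings, so at 1 Cº per doubling the eventual response is 2 Cº.

```python
import math

def equilibrium_warming(sensitivity_per_doubling, concentration_ratio):
    # Usual logarithmic dependence: dT = S * log2(C / C0)
    return sensitivity_per_doubling * math.log2(concentration_ratio)

print(equilibrium_warming(1.0, 4.0))  # 2.0 C for a quadrupling at 1 C/doubling
```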
It is also worth comparing the three terrestrial and two satellite datasets from January 1979 to June 2014, the longest period for which all five provide data.
We can now rank the results since 1950 (left) and since 1979 (right):
Next, let us look at the Great Pause – the astonishing absence of any global warming at all for the past decade or two, notwithstanding ever-more-rapid rises in atmospheric CO2 concentration. Taken as the mean of all five datasets, the Great Pause has endured for 160 months – i.e., 13 years 4 months:
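For those who wish to reproduce such figures, the usual method (this is a sketch of mine, not Monckton's own code) is to find the earliest month from which the least-squares trend on the monthly anomalies, taken through to the latest month, is still zero or negative:

```python
import numpy as np

def pause_length_months(anomalies):
    """anomalies: 1-D array of monthly global mean anomalies, oldest first.
    Returns the length in months of the longest zero-or-negative-trend
    period ending at the most recent month."""
    n = len(anomalies)
    longest = 0
    for start in range(n - 2):               # need at least 3 points
        window = anomalies[start:]
        t = np.arange(len(window))
        slope = np.polyfit(t, window, 1)[0]  # OLS slope, degrees/month
        if slope <= 0:
            longest = max(longest, len(window))
    return longest

# e.g. pause_length_months(mean_of_five_datasets)  # -> 160 on the figures above
```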
The knockout blow to the models is delivered by a comparison between the rates of near-term global warming predicted by the IPCC and those that have been observed since.
The IPCC’s most recent Assessment Report, published in 2013, backcast its near-term predictions to 2005 so that they continued from the predictions of the previous Assessment Report published in 2007. One-sixth of a Celsius degree of warming should have happened since 2005, but, on the mean of all five datasets, none has actually occurred:
The divergence between fanciful prediction and measured reality is still more startling if one goes back to the predictions made by the IPCC in its First Assessment Report of 1990:
In 1990 the IPCC said with “substantial confidence” that its medium-term prediction (the orange region on the graph) was correct. It was wrong.
The rate of global warming since 1990, taken as the mean of the three terrestrial datasets, is half what the IPCC had then projected. The trend line of real-world temperature, in bright blue, falls well below the entire orange region representing the interval of near-term global warming predicted by the IPCC in 1990.
The IPCC’s “substantial confidence” had no justification. Events have confirmed that it was misplaced.
These errors in prediction are by no means trivial. The central purpose for which the IPCC was founded was to tell the world how much global warming we might expect. The predictions have repeatedly turned out to have been grievous exaggerations.
It is baffling that each successive IPCC report states with ever-greater “statistical” certainty that most of the global warming since 1950 was attributable to us when only 0.5% of papers in the reviewed literature explicitly attribute most of that warming to us, and when all IPCC temperature predictions have overshot reality by so wide – and so widening – a margin.
Not one of the models relied upon by the IPCC predicted as its central estimate in 1990 that by today there would be only half the warming the IPCC had then predicted. Not one predicted as its central estimate a “pause” in global warming that has now endured for approaching a decade and a half on the average of all five major datasets.
There are now at least two dozen mutually incompatible explanations for these grave and growing discrepancies between prediction and observation. The most likely explanation, however, is very seldom put forward in the reviewed literature, and never in the mainstream news media, most of which have been very careful never to tell their audiences how poorly the models have been performing.
By Occam’s razor, the simplest of all the explanations is the most likely to be true: namely, that the models are programmed to run far hotter than they should. They have been trained to yield a result profitable to those who operate them.
There is a simple cure for that. Pay the modelers only by results. If global temperature failed to fall anywhere within the projected 5%-95% uncertainty interval, the model in question would cease to be funded.
Likewise, the bastardization of science by the IPCC process, where open frauds are encouraged so long as they further the cause of more funding, and where governments anxious to raise more tax decide the final form of reports that advocate measures to do just that, must be brought at once to an end.
The IPCC never had a useful or legitimate scientific purpose. It was founded for purely political and not scientific reasons. It was flawed. It has failed. Time to sweep it away. It does not even deserve a place in the history books, except as a warning against the globalization of groupthink, and of government.
leftturnandre says:
July 29, 2014 at 2:44 pm
Really, milord, English??
“…For years the true-believers had gotten away with pretending that …”
gotten?
It’s perfectly acceptable English, though a little archaic. Strangely enough, there is some useful background information in a letter to the Scunthorpe Evening Telegraph:
VINCE Withers is wrong to criticise “Grimsby, its council and our nation” for allowing the use of the word GOTTEN in Grimsby’s Freshney Place.
He is wrong if he believes that the origin of the word gotten is American.
The legitimate use of the word gotten dates back to Middle English as a past participle of “get”. Gotten was used by such great English writers as William Shakespeare, Francis Bacon and Alexander Pope in the 16th to 18th centuries.
Our English language has a long, interesting, ever-changing and continuously developing history. The words used in Vince’s letter beautifully illustrate the way in which words move from one language to another and are then subtly changed.
Vince tells us about his wife choosing bras, looking at caricatured images and his attempt to educate others about the bastardisation of true words.
“Bra” was introduced into English in the 1930s as an abbreviation for “brassiere” which had found its way into English from the French language earlier in the 20th Century.
“Bastardisation” is an extension of “bastard”, which came into Middle English via Old French from the Latin word “bastardus”.
“Caption” and “Educate” also started as Latin words (captio-, capere, educare) and became part of late Middle English.
“Caricature” came into English in the mid-18th Century via French from Italian.
Image – from Old French from Latin “imago”.
Let’s look at the letter’s title: “A tragic Misuse Of Our Language”.
“Tragic” – mid-16th century from French “tragique”, via Latin, from Greek “tragikos”.
Misuse – Old French from Latin “usus”.
Our – Old English of Germanic origin.
Language – Middle English from Old French “langage” based upon Latin “lingua”.
The key words in Vince’s letter clearly demonstrate our reliance upon words introduced into English from other languages and the “bastardisations” which may occur on the way.
Geoff Bartholomew (and Oxford Dictionaries), Helene Grove, Grimsby.
http://www.scunthorpetelegraph.co.uk/Gotten-dates-Middle-English/story-11178748-detail/story.html
Similarly “Fall” is often decried as an Americanism but was used in this sense in Britain in the 1660s and is said to derive from “fall of leaf” from the 1540s.
My own favourite archaic word is “sennight”. People commonly use “fortnight” to mean “in two weeks’ time” without realising that it is a contraction of “fourteen nights”. In the same way, “sennight” is a contraction of “seven nights”, meaning “in a week’s time”. Try using it; it’s a very useful word.
cesium62 says:
July 30, 2014 at 2:31 am
My, just a few days ago we were told that global warming had stopped for the past 17 years. Now we learn it’s only 13 years.
We could use either 13 years or 17 years; the two periods are not mutually exclusive. Both require the same proof: that statistics show there has been no significant warming. If both periods pass this test, then either can be used.
cesium62:
At July 30, 2014 at 2:31 am you assert
Please desist from making untrue and fatuous assertions. Every IPCC prediction has been plain wrong.
If you are interested in comparing “actual temperatures” I suggest you consider the IPCC forecast of “committed warming”.
The explanation for this is in IPCC AR4 (2007) Chapter 10.7 which can be read at
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-7.html
It says there
In other words, it was expected that global temperature would rise at an average rate of “0.2°C per decade” over the first two decades of this century with half of this rise being due to atmospheric GHG emissions which were already in the system.
This assertion of “committed warming” should have had large uncertainty because the Report was published in 2007 and there was then no indication of any global temperature rise over the previous 7 years. There has still not been any rise and we are now way past the half-way mark of the “first two decades of the 21st century”.
So, if this “committed warming” is to occur such as to provide a rise of 0.2°C per decade by 2020 then global temperature would need to rise over the next 6 years by about 0.4°C. And this assumes the “average” rise over the two decades is the difference between the temperatures at 2000 and 2020. If the average rise of each of the two decades is assumed to be the “average” (i.e. linear trend) over those two decades then global temperature now needs to rise before 2020 by more than it rose over the entire twentieth century. It only rose ~0.8°C over the entire twentieth century.
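The arithmetic in that last paragraph is easy to verify (a sketch of mine, following richardscourtney’s reasoning and using his round numbers):

```python
predicted_rise_by_2020 = 0.2 * 2   # 0.2 C/decade over two decades = 0.4 C
observed_rise_so_far = 0.0         # roughly, on the datasets discussed above
years_remaining = 2020 - 2014      # six years left

required = predicted_rise_by_2020 - observed_rise_so_far
print(f"Rise needed in {years_remaining} years: {required:.1f} C")       # 0.4 C
print(f"i.e. {required / 0.8:.0%} of the ~0.8 C twentieth-century rise")  # 50%
```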
Simply, the “committed warming” has disappeared (perhaps it has eloped with Trenberth’s ‘missing heat’?).
This disappearance of the “committed warming” is – of itself – sufficient to falsify the AGW hypothesis as emulated by climate models. If we reach 2020 without any detection of the “committed warming” then it will be 100% certain that all projections of global warming are complete bunkum.
Richard
Whether “gotten” is correct or not, they’re still “getting” away with it – that’s the problem.
Whether it’s a “pause” or a “peak” remains to be seen…
One cannot falsify a projection, you know that. There isn’t even a legitimate statistical basis for the multimodel mean “projections” in AR4 or AR5, so one cannot perform a hypothesis test on it. AR5, in chapter 9, openly acknowledges this. That does not stop it from making statements with various levels of “confidence” in the summary for policy makers, even though they could not possibly offer a justification for their assertions of confidence other than “I pulled this level out of my ass” because there is no defensible statistical derivation of the numbers, or rather, the assignment of phrases in English that anywhere else in science would have to be backed up by hard, defensible, numbers.
We can never be certain that all projections of global warming are complete bunkum until each individual projection fails. We cannot falsify a projection in the meantime because the projections do not offer us any way to compute a p-value for the present state, so we cannot say how unlikely it is (given the results of the models assuming that the models are correct).
As I’ve pointed out, we could perform a hypothesis test for each of the models in CMIP5 — simply form the envelope of their perturbed parameter ensemble runs, split it up by percentiles, and look at the percentile that is the best match for the current climate. If the current climate falls at the extreme left of the distribution in the first few percent (as it does for most of the models), we can reject the null hypothesis that “this model is a correct climate simulation” for that model with some defensible probability of being correct.
If this were done collectively, one model at a time, one could actually think about making a defensible statement about the probability of the MME mean being correct – if (say) 30 out of 36 models fail and the remaining 6 don’t fail but are systematically off, all in the same (too warm) direction, then we could say with a great deal of confidence indeed that the MME prediction is useless. But we could do that analysis right now; we don’t need to wait six more years.
rgb
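For concreteness, here is a minimal sketch of the per-model test rgb describes (the framing is mine, not his code; “ensemble_trends” stands for whatever observable one extracts from a single model’s perturbed-parameter ensemble runs, e.g. the 1979–2014 trend):

```python
import numpy as np

def model_percentile(ensemble_trends, observed_trend):
    """Fraction of one model's ensemble runs at or below the observed
    value: the percentile at which reality falls in that model's own
    distribution."""
    return np.mean(np.asarray(ensemble_trends) <= observed_trend)

def reject_model(ensemble_trends, observed_trend, alpha=0.05):
    """One-sided test: reject the null hypothesis 'this model is a
    correct climate simulation' if observations fall in the bottom
    alpha tail of the model's distribution."""
    return model_percentile(ensemble_trends, observed_trend) < alpha

# Applied model by model across CMIP5, a tally of rejections (and the
# sign of the survivors' errors) is what would let one judge the
# multi-model-ensemble mean.
```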
Richard Courtney, you write “then it will be 100% certain that all projections of global warming are complete bunkum.”
But it is all the tea in China to a bad egg, that none of the warmists, led by The Royal Society and the APS will EVER agree that this is true.
@davidmhoffer
And, don’t forget, whether the warming resumes or not does not prove that atmospheric CO2 levels are the cause or that man’s contribution to the atmospheric CO2 level is the cause.
All it would prove is that the warming since the LIA is continuing.
I’ll always remember the shock I felt when I read that in the IPCC process, they adjusted the reports so that they would match the executive summary.
This is truly Red Queen territory.
Averaging surface data and satellite data is questionable. If you must do it then you should give equal strength to both types of data. First average the surface data and satellite data separately and then average the two results. You can see the problem by considering what you would get if you averaged 100 surface data sets with the 2 satellite data sets. The satellite data would be completely overwhelmed.
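In code, the weighting Richard M describes looks something like this (a sketch of mine; each series is assumed to be a monthly anomaly array on a common period):

```python
import numpy as np

def balanced_mean(surface_series, satellite_series):
    # Average within each family first, then average the two family
    # means, so the satellite record is not swamped by sheer numbers.
    surface_mean = np.mean(np.vstack(surface_series), axis=0)
    satellite_mean = np.mean(np.vstack(satellite_series), axis=0)
    return 0.5 * (surface_mean + satellite_mean)

# A flat mean of 100 surface sets and 2 satellite sets would weight the
# satellites at ~2%; this gives each family 50%.
```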
Personally, I would ignore the surface data. It is beyond hope. Giving it any credence at all destroys one of the principal skeptic points: that the surface data is not fit for purpose.
rgbatduke: “As I’ve pointed out, we could perform a hypothesis test for each of the models in CMIP5 — simply form the envelope of their perturbed parameter ensemble runs, split it up by percentiles, and look at the percentile that is the best match for the current climate. If the current climate falls at the extreme left of the distribution in the first few percent (as it does for most of the models), we can reject the null hypothesis that “this model is a correct climate simulation” for that model with some defensible probability of being correct.”
You’ve said stuff like this before, of course, and I’ve always sensed that there was something to what you were trying to communicate, but I could never understand precisely what it was. Perhaps I’ll get my mind around it better if you explain just what you mean by “percentile.” Percentile of what? Trend over some period? Temperature at some date? Squared differences from actual temperatures?
Maybe you’re saying the following. For each member of an ensemble of initial-value (boundary-value, forcing value, whatever) sets, all of which we consider equally likely, we run a given model and take a histogram of the, say, temperature trends they produce. We don’t know what initial-value set actually applied in real life, but, if the histogram is any guide, it would be unlikely for it to be approximated by any ensemble member whose trend is within x degrees/century of the actually observed trend if the model is accurate to within x degrees/century.
For the sake of us who have trouble with statistics, in other words, could you put into English exactly what test you’re proposing?
Several commenters accused Lord Monckton of having used “gotten”, and they say that this isn’t “English”. Is this the only criticism they are able to level at him? Pretty feeble, eh?
Yet Lord Monckton is an alumnus of Churchill College, Cambridge, so let’s see what the official Cambridge English Dictionary says about “gotten”:
http://dictionary.cambridge.org/dictionary/british/gotten?q=gotten
Well, there’s a surprise. Lord Monckton is correct. Again!
😀
Where is the hurricane in the Atlantic?
http://sirocco.accuweather.com/sat_mosaic_640x480_public/ei/isaehatl.gif
http://weather.unisys.com/surface/sst_anom.gif
Here’s what Fowler’s Modern English Usage and The King’s English say:
M Courtney says:
Yes, MC, you’re right to call out the PP (the precautionary principle), which is generally a load of rubbish. The PP is designed around the propositions that “we must do something” and that “we must never let this happen again”. But sometimes it really is best to do NOTHING.
Meanwhile, given the way AGW is reported during the pause, we are expected to get all fired up over a rise in global temperature of around 1.25 Cº per century. That means nothing to the man in the street, who has been through temperature swings of several degrees this week alone (in the UK). Why would he be scared of a measly one-and-a-bit-degree rise in a hundred years? It really is all about scaremongering.
To those who question whether there has been a standstill in global warming for 13 years 4 months or 17 years 10 months, I reply that the HadCRUT4 dataset usefully provides measurement, coverage and bias uncertainties for each monthly data point. The combined uncertainties amount to 0.15 K. The differences between the different datasets are less than 0.15 K. Therefore, though the mean of all the datasets shows no global warming at all for 13 years 4 months, and the RSS dataset shows none for 17 years 10 months, the two values are within each other’s error margins, and they are saying broadly the same thing – that there has been no global warming for around a decade and a half. None of the models predicted, as its central estimate, any such outcome.
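The consistency check Monckton describes can be put concretely (my framing, using the 0.15 K combined uncertainty he cites): two datasets “say broadly the same thing” if the total change implied by their trends over the common period differs by less than that uncertainty.

```python
def trends_consistent(slope_a, slope_b, years, uncertainty=0.15):
    # Slopes in K/year; True if the implied total changes agree to
    # within the combined measurement/coverage/bias uncertainty (~0.15 K).
    return abs(slope_a - slope_b) * years < uncertainty

# Two near-zero trends over ~15 years trivially agree:
print(trends_consistent(0.000, 0.005, 15))  # |0.075 K| < 0.15 K -> True
```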
I very much hope that Professor Brown will carry out the statistical analysis of the CMIP5 models that he has suggested. That would be a great service to the truth, if he can find the time to do it.
To those who question my courteous use of the American usage “gotten”: it is a strong-verb past participle akin to “wrought” (the old past participle of “work”, as in “wrought iron”) and “dove” (a past form of “dive”). These forms were carried across the pond on the Mayflower and are all commoner in the U.S. than here. But “gotten”, in particular, still survives in the phrase “ill-gotten gains” (I have never heard anyone say “ill-got gains”). And it is used not only in Cranmer’s Godly Order but also frequently in the King James version of the Bible, with which I was brought up.
For instance, Genesis IV:1, “And Adam knew Eve his wife, and she conceived, and bare Cain, and said, I have gotten a man from the LORD.” Genesis XII:5, “And Abram took Sarai his wife, and Lot his brother’s son, and all their substance that they had gathered, and the souls that they had gotten in Haran; …”. Exodus XIV:18, “And the Egyptians shall know that I [am] the LORD, when I have gotten me honour upon Pharaoh, upon his chariots, and upon his horsemen.” Leviticus VI:4, “… he shall restore that which he took violently away or the thing which he hath deceitfully gotten, …”. Numbers XXXI:50, “… what every man hath gotten, of jewels of gold, chains and bracelets, rings, earrings and tablets, …”. Deuteronomy VIII:17, “And thou say in thine heart, My power and the might of [mine] hand hath gotten me this wealth.” And that’s just a few from the Pentateuch.
If “gotten” was good enough to be used frequently by the great committee that translated the Hebrew and Greek of the Bible into one of the finest works of literature in our language, then it’s good enough for me. And it satisfies the first obligation of the written word, that it should be comprehensible to its audience.
@Monckton of Brenchley
Are you sure it was not the Godspeed? Down “he-yea” we are kind of particular to the first permanent English Settlement. 😉
Exactly! I too have noticed this odd behaviour; would this be possible in any other science?
Below is a nice graphic showing their INCREASING confidence levels at each new report, set against their projections and the actual observations. They say a picture speaks a thousand words.
http://www.energyadvocate.com/gc1.jpg
There is a simple cure for that. Pay the modelers only by results.
===============================================
If we could only apply this to politicians and bureaucrats.
This has gotten out of hand. ☺
Richard M says:
July 30, 2014 at 5:57 am
If you must do it then you should give equal strength to both types of data.
WTI is a combination of HadCRUT3, UAH version 5.5, GISS and RSS. HadCRUT3 is not out yet for June; however, my best estimate is that, when it does come out, the WTI pause will be 13 years 6 months to the end of June.
davidmhoffer says:
July 29, 2014 at 10:21 pm
Joel O’Bryan;
The LIA ended about 1850 AD, that’s 164 ya.
>>>>>>>>>>>>>>>>>>>
Yup. And when was the beginning? When was it at the “bottom”?
http://wattsupwiththat.files.wordpress.com/2009/12/noaa_gisp2_icecore_anim_hi-def3.gif
The first slide shows a warming trend starting in the 1600s – about 400 years ago.
====================================================
The graphs I see of the Holocene say we are not “warming for thousands of years” at all, as you claimed in a previous post, and this link shows that as well. We have been cooling for most of the Holocene. I’m not sure why you contradict yourself.
Jim Cripwell:
At July 30, 2014 at 5:24 am you say
Yes, but the truth is what it is, and it is not affected by any group refusing to acknowledge it.
Richard
I very much hope that Professor Brown will carry out the statistical analysis of the CMIP5 models
==================
Just the sort of thing grad students were invented for. One would think the climate-modelling community would also welcome such a study, and support the funding application, if in fact they have faith in their models.
I would very much like to see the variance for individual models before and after anthropogenic forcings are added. Does the addition of anthropogenic forcings in fact increase variability? And, secondly, what is the natural variability without anthropogenic forcings? Do the individual model runs show very little variability, or do they in fact show that natural variability is high?
From a practical standpoint, it seems a much simpler problem to model the future as a probability than to predict which future we will arrive at. Like throwing a pair of dice: it is much simpler to predict that the values of the next 100 rolls will lie between 2 and 12, with an expected variance, than to predict the actual value of each roll. Thus, studying the statistics of the computer models is likely to tell us much more about the future climate than will simple projections of averages.
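The dice point is easy to demonstrate (a toy of mine, not ferd berple’s code): the distribution of outcomes can be stated in advance with confidence, while the sequence itself cannot.

```python
import random

rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(100)]

# The distributional "projection" holds trivially...
print(all(2 <= r <= 12 for r in rolls))  # True, every time
# ...while a point prediction of the rolls themselves would almost
# surely fail:
print(rolls[:10])
```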
The preceding “had” actually places it properly in past tense.
Past perfect.
rgbatduke:
At July 30, 2014 at 5:04 am you quote my having concluded at July 30, 2014 at 3:10 am, which is here…
then respond saying to me
Yes, “one cannot falsify a projection”, but one can falsify a prediction, you know that.
And – as I quoted and explained – the “committed warming” is a prediction.
No hypothesis test is needed, because one compares a prediction’s forecast with its outcome, and I did.
The importance of the “committed warming” is that it did provide “hard, defensible, numbers” which passage of time has shown to be wrong.
And I stand by my correct statement that said
“This disappearance of the ‘committed warming’ is – of itself – sufficient to falsify the AGW hypothesis as emulated by climate models.”
The models made a prediction which is wrong. Therefore,
(a) the hypothesis emulated by the models is wrong,
or
(b) the models fail to emulate the hypothesis correctly
or
(c) both (a) and (b).
And those models provide the projections of future climate, and they each incorporate an attempted emulation of the AGW hypothesis. So, if the AGW hypothesis as emulated by those climate models is wrong, then the projections provided by those models must be wrong; i.e., they are complete bunkum.
Excepting those disagreements, I agree with the remainder of your posts.
Richard
Superb essay, and those graphics are killers. Thank you, Christopher!