The 200 months of 'the pause'

By Christopher Monckton of Brenchley

A commenter on my post mentioning that according to the RSS satellite monthly global mean surface temperature dataset there has been no global warming at all for 200 months complains that I have cherry-picked my dataset. So let’s pick all the cherries. Here are graphs for all five global datasets since December 1996.











The mean of the three terrestrial datasets:


The mean of the two satellite datasets:


The mean of all five datasets:


Since a trend of less than 0.15 K is within the combined 2σ data uncertainties arising from errors in measurement, bias, and coverage, global warming since December 1996 is only detectable on the UAH dataset, and then barely. On the RSS dataset, there has been no global warming at all. None of the datasets shows warming at a rate as high as 1 Cº/century. Their mean is just 0.5 Cº/century.

The bright blue lines are least-squares linear-regression trends. One might use other methods, such as order-n auto-regressive models, but in a vigorously stochastic dataset with no detectable seasonality the result will differ little from the least-squares trend, which even the IPCC uses for temperature trend analysis.
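A least-squares trend of the kind described can be sketched in a few lines. This is a hedged illustration on synthetic data (the 0.5 Cº/century rate and 0.1 K noise level are assumptions for the sketch, not any of the real datasets):

```python
import numpy as np

# Hypothetical monthly anomaly series: a small trend buried in noise.
rng = np.random.default_rng(0)
months = np.arange(200)                 # 200 months since December 1996
true_rate = 0.5 / 1200                  # 0.5 C/century expressed in C per month
anoms = true_rate * months + rng.normal(0.0, 0.1, months.size)

# Least-squares linear regression, as the bright blue trend lines use.
slope, intercept = np.polyfit(months, anoms, 1)
rate_per_century = slope * 1200         # convert C/month back to C/century
print(round(rate_per_century, 2))
```

With noise this large relative to the signal, the fitted rate scatters widely around the true one, which is why short, noisy windows support only weak trend claims.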

The central question is not how long there has been no warming, but how wide is the gap between what the models predict and what the real-world weather brings. The IPCC’s Fifth Assessment Report, to be published in Stockholm on September 27, combines the outputs of 34 climate models to generate a computer consensus to the effect that from 2005-2050 the world should warm at a rate equivalent to 2.33 Cº per century. Yeah, right. So, forget the Pause, and welcome to the Gap:
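The arithmetic of the Gap can be sketched as back-of-envelope Python, using only the two rates quoted above (applying the model rate across the 200-month window is a simplification for illustration):

```python
# Model consensus vs. the five-dataset mean, both converted to C per month.
model_rate = 2.33 / 1200      # 2.33 C/century, the 34-model consensus quoted above
observed_rate = 0.5 / 1200    # 0.5 C/century, the mean of the five datasets

# Over 200 months the implied divergence accumulates to:
gap_degrees = (model_rate - observed_rate) * 200
print(round(gap_degrees, 3))  # roughly 0.3 C of predicted warming not observed
```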











Mean of all three terrestrial datasets:


Mean of the two satellite datasets (monthly Global Warming Prediction Index):


Mean of all five datasets:


So let us have no more wriggling and squirming, squeaking and shrieking from the paid trolls. The world is not warming anything like as fast as the models and the IPCC have predicted. The predictions have failed. They are wrong. Get over it.

Does this growing gap between prediction and reality mean global warming will never resume? Not necessarily. But it is rightly leading many of those who had previously demanded obeisance to the models to think again.

Does the Great Gap prove the basic greenhouse-gas theory wrong? No. That has been demonstrated by oft-repeated experiments. Also, the fundamental equation of radiative transfer, though it was discovered empirically by Stefan (the only Slovene after whom an equation has been named), was demonstrated theoretically by his Austrian pupil Ludwig Boltzmann. It is a proven result.

The Gap is large and the models are wrong because in their obsession with radiative change they undervalue natural influences on the climate (which might have caused a little cooling recently if it had not been for greenhouse gases); they fancifully imagine that the harmless direct warming from a doubling of atmospheric CO2 concentration – just 1.16 Cº – ought to be tripled by imagined net-positive temperature feedbacks (not one of which can be measured, and which in combination may well be net-negative); they falsely triple the 1.16 Cº direct warming on the basis of a feedback-amplification equation that in its present form has no physical meaning in the real climate (though it nicely explains feedbacks in electronic circuits, for which it was originally devised); they do not model non-radiative transports such as evaporation and convection correctly (for instance, they underestimate the cooling effect of evaporation threefold); they do not take anything like enough account of the measured homeostasis of global temperatures over the past 420,000 years (variation of little more than ±3 Cº, or ±1%, in all that time); they daftly attempt to overcome the Lorenz unpredictability inherent in the mathematically-chaotic climate by using probability distributions (which, however, require more data than straightforward central estimates flanked by error-bars, and are thus even less predictable than simple estimates); they are aligned to one another by “inter-comparison” (which takes them further and further from reality); and they are run by people who fear, rightly, that politicians would lose interest and stop funding them unless they predict catastrophes (and fear that funding will dry up is scarcely a guarantee of high-minded, objective scientific inquiry).

That, in a single hefty paragraph, is why the models are doing such a spectacularly awful job of predicting global temperature – which is surely their key objective. They are not fit for their purpose. They are mere digital masturbation, and have made their operators blind to the truth. The modelers should be de-funded. Or perhaps paid in accordance with the accuracy of their predictions. Sum due to date: $0.00.

In the face of mounting evidence that global temperature is not responding at ordered, the paid trolls – one by one – are falling away from threads like this, and not before time. Their funding, too, is drying up. A few still quibble futilely about whether a zero trend is a negative trend or a statistically-insignificant trend, or even about whether I am a member of the House of Lords (I am – get over it). But their heart is not in it. Not any more.

Meanwhile, enjoy what warmth you can get. A math geek with a track-record of getting stuff right tells me we are in for 0.5 Cº of global cooling. It could happen in two years, but is very likely by 2020. His prediction is based on the behavior of the most obvious culprit in temperature change here on Earth – the Sun.


David L.

Who can say the models aren’t wrong? The evidence cannot possibly be more clear. So warmists, scrap the models and go back to the drawing boards! You ain’t got nuthin’.

The “3 degrees” is a strong element of every one of the Gap graphs (as in the oft-quoted 3 degrees of increase for a simple doubling of CO2 concentration). All the graphs are over-predicting by this same 3 degrees, with very minor variations.
This ought to lead to at least a question about the reliability of the sensitivity assumptions behind the models, I’d have thought …

Oh, as for “sum due to date” – I’d argue that they owe us all a few billions already …

Kerry McCauley

So, now the ancient fear is graphically clarified…what “they” used to tell the boys, the caution: wanton digital manipulation leads to blindness. QED. Bravo, Lord Christopher!

Sheffield Chris

If the models are as wrong as they appear to be, and the areas of error are as clear as outlined by Lord Monckton (Title not in doubt by me), has anybody published a more realistic model output that matches reality?

They should surely have the message by now. Excellent post.


To be fair, and as with both sides, ‘paid trolls’ is not really an issue. As with all religions, the most fanatical are volunteers whose motivations are manifold but not financial. And we should be careful not to indulge in the ‘conspiracy’ claims the alarmists are so fond of.
What this article does seem to miss is the way the area of climate ‘science’ has seen massive growth on the back of ‘the cause’. From a poor relation to the physical sciences, little heard of and less cared about, it has become an academic ‘star’ with lots of research cash and positions aplenty. For some it is that which has to be defended to the death, for they know that once the academic ‘trend’ slips away from them, they have nothing to go back to but obscurity, defunding and lack of jobs.
Can anyone see people like Mann getting any role in academia without ‘the cause’? So all they can do is keep doubling down in the hope of keeping the gravy train on track, and facts be damned.

David L.

This is plain to see by everyone other than the unconvertible zealots. For them, a few other standard techniques might help: plotting the residuals as a histogram or probability plot, residuals versus predicted value, residuals versus time, and predicted values versus actual values.
But why bother? Anyone can see the model doesn’t fit the data. The residuals would all be positive and diverge over time. I guess the hope of those profiting from the public trough is that the real data will eventually catch up with the model.
If I tried to present a model this bad at describing drug stability when filing a new drug with any agency, it would result in non-approval.
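The residual checks suggested above can be sketched like this. The rates are the illustrative figures from the head post, not real model output, so this only shows the shape of the diagnostic:

```python
import numpy as np

months = np.arange(1, 201)
predicted = (2.33 / 1200) * months    # model-consensus trend, C
observed = (0.5 / 1200) * months      # five-dataset mean trend, C
residuals = predicted - observed      # model minus reality

# The two failure signatures the comment describes:
one_sided = bool(np.all(residuals > 0))          # all residuals the same sign
diverging = bool(residuals[-1] > residuals[0])   # and growing over time
print(one_sided, diverging)                      # prints: True True
```

One-sided, growing residuals are the classic signature of a biased model rather than random misfit, which is the commenter's point.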

gopal panicker

there is a model that accurately predicts temperatures…no computer needed…just paper and pencil…cheap

John V. Wright

“They are mere digital masturbation, and have made their operators blind to the truth”. Perfect. Where else in the world can you get this stuff?!?
Thank you Anthony and thank YOU, my noble lord.

High Treason

It is so clear that the IPCC are in denial about their models being totally wrong. You would think that normal scientists would be celebrating and drinking cocktails (banana daiquiris, perhaps?) in the streets from their test tubes with the news that the “catastrophic” warming had stalled. The continuance of arguing with increasingly insane “evidence” shows they have something to hide. They will NEVER admit the whole thing is a fake to create their Fabian Utopian One World Government (in reality, North Korea without the backing of China) because they know what will happen to them when the s%^t hits the fan and the People learn of their treachery.

Bloke down the pub

Typo? In the face of mounting evidence that global temperature is not responding at (as) ordered,

Rick Bradford

**That … is why the models are doing such a spectacularly awful job of predicting global temperature – which is surely their key objective. **
Yes, doing an awful job of predicting global temperature is indeed their key objective, so that they can pursue their anti-development and anti-human agenda.


Lord Monckton
I enjoyed reading your piece. There was a recent post which featured a talk given by Dr Essex on this very subject. Your post here affirms the points he was making.
I think you’re a bit hard on the poor modelers. Most of the people building and writing them are just doing a job. Trying, perhaps, to do the impossible, but I don’t think they should be de-funded for it. I think the real problem is that many of the scientists who use them don’t understand them and therefore attach too much confidence to their projections. Somewhere along the line there are scientists who are promoting what they know – at least now – to be inherently flawed methodologies. Perhaps they should be isolated and exposed for misrepresenting the models.


The models really do grossly overstate warming. Still, it is reasonable to consider all factors which influence average temperatures, the most obvious of which is ENSO. If a modest adjustment is made to account for ENSO, then the rate of warming since 1998 increases to about 0.75C per century. If you believe there are longer term cyclical influences (like the AMO/Atlantic thermohaline circulation rate) then a reasonable conclusion is that some of the rapid warming between 1975 and 1997 was the result of longer term cyclical factors; the flip side of which is that some of the recent slowing in warming was due to the downward side of those same factors.
A plausible “underlying rate” is in the range of 0.11C per decade, which is a bit under half the model projection, and completely consistent with a sensitivity to GHG forcing near half of the model-diagnosed 3.2C per doubling. I think it is no coincidence that empirical estimates fall in the range of 1.6 to 1.8C per doubling.


BTW… Lord Monckton
A friend of mine works for a hedge fund company. He told me that they employ Oxford and Cambridge math graduates to build financial and economic models. He’s a physics graduate and helps run and maintain them. After years of building very complex models in order to predict the market, these incredibly bright people decided that extrapolating a running average a few days ahead was better at predicting the future than the highly complex models they had built.

John Judge

I do not really understand most of the technical terms in his explanation for the failure of the models. However, I would submit that if the warmists wish to refute Lord Monckton’s arguments, they had better come up with something better than ad hominem attacks and rages against “Big Oil”.

Good post and nice, explanatory charts. The linear regressions have very low “r-squared” values, 0.000-0.035. If, in my work, I had data that gave r²’s like that, I’d re-examine the data set and experimental technique, because I would consider it no correlation at all. Climate science discussions, on the other hand, seem to ignore this lack of correlation, and both sides of the issue divine significance from data that seem too noisy to support any good confidence level or much of a decent correlation.
I’m currently involved in an effort to optimize operations of a landfill-gas-to-pipeline-gas plant. We are collecting data and doing statistical analyses on the data. R²’s of 0.5 mean we can’t attach much significance to the data. One of the things I don’t understand about climate science is making much of very poor correlations. Perhaps other curve-fitting should be used; y=mx+b doesn’t seem overly convincing.
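A sketch of why r² collapses on data like this: with synthetic numbers (the trend and noise level here are assumptions, and the 0.000-0.035 figures come from the charts, not this code), a tiny trend under large noise leaves almost nothing for the straight line to explain.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(200)
y = (0.5 / 1200) * x + rng.normal(0.0, 0.1, x.size)  # tiny trend, large noise

# Fit y = m*x + b, then compute r-squared from the residual variance.
m, b = np.polyfit(x, y, 1)
residuals = y - (m * x + b)
r_squared = 1.0 - residuals.var() / y.var()
print(round(r_squared, 3))   # small: the line explains little of the variance
```

A near-zero r² does not prove the trend estimate is wrong, but it does mean the straight line accounts for only a few percent of the month-to-month variation, which is the commenter's complaint.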

Kurt Myrhagen

Every kind of modelling or forecast-making is an attempt to predict the future. It always was, still is and always will be impossible. I just cannot understand how anyone could believe that these people could predict future climate. To me, this is quite possibly the most intriguing and entertaining part of the great climate swindle. Just imagine, we have politicians sucking up to the IPCC’s sorcerers. Even Rasputin couldn’t have done a better job!


“predicting global temperature – which is surely their key objective”
I know some people who have been trying to use the same climate models for trying to predict rainfall.
One guy was telling me they were doing very well (tongue in cheek, I might add), because the model mean was tracking really well with reality.
Thing is, about half the models predict more rain, and half predict less rain. 😉

Chris Schoneveld

So to summarize: Lord Monckton did pick the dataset with the lowest (even negative) trend, RSS at -0.2 ºC/century, since all the other datasets show positive trends between +0.44 ºC/century and +0.93 ºC/century. So, yes, RSS was a cherry, because it was the only one that showed (albeit statistically insignificant) cooling for 200 months (I know, the warming trends of the others are equally statistically insignificant). It is a pity that he chose RSS, since it gave his opponents ammunition to attack his credibility.


As good as this is, what we really need is some cooling that no one can deny/spin instead of non-warming.

Pete Brown

Well, I think you’re forgetting the fact that all of the 163 hottest years on record have occurred in the last 163 years! So there…

Sheffield Chris asked:
“has anybody published a more realistic model output that matches reality?”
Yes, as long as one realises that the best we can do so far is determine the direction of trend rather than the rate of that trend. Internal system variation, especially from the oceans, currently confounds quantification of the rate of any underlying longer term trends.
The fact is that zonal jets with reducing cloudiness result in system warming and meridional jets with greater cloudiness result in system cooling by regulating the proportion of ToA insolation that gets into the oceans to drive the climate system.
That fits all the observations that I am aware of including LIA, MWP, Roman Warm Period et al.
So we need to start over from that point.

Village Idiot

Good of you to take time out of your busy jousting schedule and swing by the Village once again, Sir Christopher, encouraging us and lifting our spirits. Hurrah, I say!
Just one little thing, Your Grace. It would help us exceedingly (especially those of more feeble faith than your good self) if you would deign to pencil in on your beautiful graphs just where you think the global temperatures will go in the next 10, 20, 50 years. This would really help – reinforcing our already firm confidence in you that you actually know what you are talking about. You see, I don’t want to be a sneak, but I’ve heard some disquieting murmurs down on the Village Green along the lines of: ‘mud-slinging, yelling ‘yah boo’ and flicking your fingers at falsehood can only buy you cheap credibility; to really sort out the Lords from the serfs, and slay the Serpent good n’ proper, you have to show that you can predict future global temperature better than them.’
So, please: Show us of what stuff you’re made, and that you’re not afraid to go head to head with these amateurs.


Congratulations at your attempted damage limitation in your post at August 27, 2013 at 3:41 am
Unfortunately (for you and the modellers), your attempt fails.
You say in total

The models really do grossly overstate warming. Still, it is reasonable to consider all factors which influence average temperatures, the most obvious of which is ENSO. If a modest adjustment is made to account for ENSO, then the rate of warming since 1998 increases to about 0.75C per century. If you believe there are longer term cyclical influences (like the AMO/Atlantic thermohaline circulation rate) then a reasonable conclusion is that some of the rapid warming between 1975 and 1997 was the result of longer term cyclical factors; the flip side of which is that some of the recent slowing in warming was due to the downward side of those same factors.
A plausible “underlying rate” is in the range of 0.11C per decade, which is a bit under half the model projection, and completely consistent with a sensitivity to GHG forcing near half of the model-diagnosed 3.2C per doubling. I think it is no coincidence that empirical estimates fall in the range of 1.6 to 1.8C per doubling.

No, it is NOT “reasonable to consider all factors which influence average temperatures”.
It is reasonable to consider all the KNOWN factors which influence average temperatures and to admit we don’t know the unknown unknowns.
For example, nobody knows what has caused – and probably still is causing – the temperature rise from the Little Ice Age (LIA) which has been happening for centuries. This natural temperature rise is certainly not a response to anthropogenic (i.e. human-released) CO2. And it has been providing an observed – n.b. observed and not merely plausible – rise of about 0.8°C per century.
Add in the known effects (such as ENSO which you mention) and there is no need to introduce any hypothesis of an anthropogenic effect of magnitude sufficient for it to be discernible.
This fits with empirical – n.b. not model-derived – determinations which indicate climate sensitivity is less than 1.0°C for a doubling of atmospheric CO2 equivalent. This is indicated by the studies of
Idso from surface measurements
and Lindzen & Choi from ERBE satellite data
and Gregory from balloon radiosonde data
These findings are consistent with – and indicative of – the feedbacks in the climate system being negative (i.e. not positive as is required for climate sensitivity to be higher than 1.1°C for a doubling of atmospheric CO2 concentration).
These empirical indications are that climate sensitivity is less than 1.0°C for a doubling of atmospheric CO2 concentration and, therefore, any effect on global temperature of an increase in atmospheric CO2 concentration has only an abstract existence; it is too small to have discernible, observable effects.
Please note that the post I here provide presents:
An explanation of why the climate models don’t work: the models use high and untrue values of climate sensitivity.
A reply to Sheffield Chris, who asks at August 27, 2013 at 3:00 am whether anybody has “published a more realistic model output that matches reality”: the references I here cite are to published values of climate sensitivity, and the ‘model’ they represent is of no discernible effect of anthropogenic CO2 on natural global temperature variations.
Agreement with the post of gopal panicker who, at August 27, 2013 at 3:14 am, says

there is a model that accurately predicts temperatures…no computer needed…just paper and pencil…cheap



Village Idiot:
Thank you for your post at August 27, 2013 at 4:15 am
which again demonstrates your idiocy.
Lord Monckton is being a scientist so he has no need to provide an alternative model: as a scientist he is only required to falsify the existing model(s) as he does in the above article.
Of course, as you say, his understanding and application of the scientific method is much superior to those whom you call “amateurs”. However, I think calling them “amateurs” is being too kind to them when they so flagrantly flout the scientific method.


Quick, Mr. Obama, tweet it.

Gail Combs

Sheffield Chris says: @ August 27, 2013 at 3:00 am
…. has anybody published a more realistic model output that matches reality?
Try Joe Bastardi and Joseph D’Aleo of Weatherbell
(It is on the right side of WUWT about 6 page downs)

Bill Illis

The climate models have clearly failed in the last 20 years.
Basically, since they were built. Most of them were developed less than 20 years ago.
If we go back to the first of them by Hansen and Manabe in the early 1980s, the predictions missed the last 20 years as well.
There is no global warming climate model that has a long pause in it because they are programmed to increase with rising CO2.
Why are we wasting so many resources on models which don’t work? Tens of millions of dollars per year and thousands of people are involved. One might be tempted to say “why not reprogram them so they do a better job?” Well, whoever tried to do this would get blacklisted by the other climate scientists.
Hence, they go on and on looking for flimsy excuse after flimsy excuse when the obvious answer is right there staring them right in the face.

mike g

Ah, but Chris Schoneveld, it is okay if they attack his credibility. That is the only weapon they have at their disposal since the truth has deserted them. What really counts is that the credibility of the alarmists has been so thoroughly destroyed. In your blindness, you fail to see that the two most reliable datasets are the two which record the least warming (all at a time of peaking of ocean oscillations). Chalk up the vast majority of the warming that has occurred since the ’70s to ocean oscillations and the Sun’s above-average activity in the last half of the 20th century (and don’t forget to factor in the adjustments to the data in the other datasets, which account for quite a bit of the slope). You’re left with a falsified theory and falsified models. Get over it.

Jean Parisot

Additionally, the models all have subset characteristics which are testable, and failing. Tropical hotspots, water vapor content, sea level, etc.

Steve Jones

Village Idiot,
A real scientist, aware of the shortcomings of the current state of climate science, would not try to predict what temperatures will be based on knowledge known to be incomplete. On the other hand, the IPCC and the world’s policy makers are prepared to. That is both ingenuous and irresponsible.
Another thing: the root of the word amateur is the Latin word amare (to love). It was originally used to describe someone who did something for the love of it and who was considered more noble than someone doing it professionally. Sadly, amateur has come to mean second rate. His Lordship is definitely an amateur in the original sense of the word, whereas many modern climate scientists are definitely professional.

Tom in Florida

If you had based your career and economic security on being a leader in your field, and then came to find that not only have you been wrong but the methods that led to your conclusions were also wrong, would you risk being sent away to the poor house with little more than a footnote in history? Most likely not, so you would defend yourself to the bitter end; after all, what have you got to lose by hanging on as long as you can? (Not a defense, just an understanding of why.)

Jean Parisot

Village Idiot, given a 420,000-year history of ±1-degree change within a ±3-degree range, I suspect he can get a bit closer than the models’ mean of greater than two degrees of error using nothing more than a bit of chalk.

Bruce Cobb

I’m not sure why ghg theory not being proved wrong is mentioned, as it’s a straw man. The real argument is over what the climate sensitivity to man’s CO2 actually is. The “human fingerprint” has not been shown. That doesn’t mean there is none; it just means that it is very small. It appears that climate just doesn’t respond like a laboratory. Fancy that.

Chris Schoneveld

mike g says:
August 27, 2013 at 4:39 am
“You’re left with a falsified theory and falsified models. Get over it.”
“You are barking up the wrong tree.” I have been a skeptic from day one; you obviously didn’t get the gist of my comment.


A comment I’ve made before and will make again. It is pointless and misleading to superimpose the CO_2 curve, with an arbitrary y-axis scaling, on top of the temperature curve, with equally arbitrary y-axis scaling. This is screamingly obvious when one is plotting not absolute magnitudes (which have some meaning) but the cursed “anomalies” that climate scientists seem obsessed with, largely because of their belief that they can subtract away some sort of reliably known “time varying baseline” and focus only on explaining deviations from this baseline, in a system described in even its simplest (almost trivial) forms by a nonlinear stochastic differential equation. Sadly, plotting anomalies is the subject of almost an entire chapter in the lovely book “How to Lie with Statistics” (which could also be read as “How to make terrible errors using statistics naively” as an alternative title).
So please, please — remove the grey CO_2 curve. That isn’t science. It isn’t even good argumentation — since the two curves have completely different units you can easily scale the y-axis units for the CO_2 so that it falls nicely into the range of the temperature fluctuations and lines up with the temperature trends in perfect agreement with any positive temperature slope — and it still won’t mean anything.
If you want to see something really instructive, take a look at this:
This is what the actual global average temperature looks like in degrees Kelvin. Some “hockey stick”, huh? Only it doesn’t, because one cannot add a constant back — the Earth’s temperature varies seasonally just like the CO_2, and the anomaly was computed by subtracting something like a constant plus an annual sinusoid from the original data, and I cannot add back the annual sinusoid function because I don’t know it. I can’t even find it on the internet. I could probably figure it out if I looked deep in some computer code somewhere, and it probably is in the literature, but at the moment I don’t even know the purported range of monthly variation of the supposed mean global temperature relative to which the anomaly is computed (and it may be computed locally and subtracted before forming the anomaly mean!) So this entire figure could have monthly ripple that is as large or larger than the entire “anomaly” variation over the entire range. How would I know? How would anyone (but one of the deep climate cognoscenti) know?
It is also what the actual CO_2 concentration looks like in parts per million, which conveniently scales to fit on the same graph and (because pre-industrial baseline CO_2 was ballpark 287 ppm) by pure chance one can quite accurately extrapolate the near-exponential rise back to the left so that it appears to be rising from the temperature line. This does, actually, correctly illustrate the relative increase — CO_2 has increased by around a third of its original absolute pre-industrial concentration, and the bulk of that increase has occurred within the last 50 or 60 years.
Entertaining as it is to look at anomalies, sometimes it is very useful to look at the actual quantities involved. The skeptics who assert that a “trace gas” like CO_2 cannot provide much warming as it increases — well, look at these curves. It doesn’t, not in any absolute sense. Not much, of course, is not the same as zero.
A second sorry aspect of WFT is that it is (as far as I can tell, correct me if I’m wrong somebody) quite impossible to add simple little things like error bars to the curves, or to do a proper chisq fit USING the data uncertainties. In fact, there are a ton of things one cannot do within WFT, either because it is missing the functions needed to do it or because it is missing the DATA needed to do it. In particular any sort of reasonable error estimate. It would be infinitely more instructive to put all of the data into R (for example) where one could actually do statistics with it instead of thinking up fifty different ways to commit the sin of post hoc ergo propter hoc on the susceptibility of a quantity that cannot even be properly defined by the very people that compute it.
One day I’m going to write an entire article on the “anomalous” sins of the climate community. For example, GISS and HADCRUT and all of the rest of the datasets that purport to reach back to the mid-1800s are presented as anomalies across the entire range. At the same time, it is openly confessed that to transform the anomaly into an absolute temperature one has to add to the quantity an estimate of some baseline temperature, say, 14 C or 287 K. Only, there is no general agreement as to just what that baseline temperature ought to be — it might be as low as 286-something K or as high as 287-something K, where the range enabled by the “somethings” is order unity either way. What value you get depends — wait for it — on what model you use. Strangely enough, what value you get for the anomalies themselves also depends on what model you use! The error for the anomalies, surely, increases as one goes back in time. The error for the baseline similarly increases as one goes back in time! In fact, we have precisely zero thermometric measurements for entire continents — Antarctica, for example — from the mid-1800’s.
You can then see why it is essential not to present any sort of graphical treatment of the uncertainties in global temperature — this is never done even in the modern thermometric data. Each “anomaly” dataset is presented as a fait accompli, without the slightest hint of uncertainty, and (committing a sin that would cost them points on any physics exam!) to an absolutely absurd number of significant figures! The anomaly is never 0.1, it is 0.1327… (who knows how many digits of garbage they actually keep in their published computation — WFT is happy plotting at least 2). Thus we are presented with the illusion that we know the global temperature anomaly within an experimental resolution of at least 0.01 K, perhaps 0.001 K or even more! We are further led to believe that “smoothing” this data in some way leaves us with a real trend, and not just smoothed noise! Lying, lying, lying.
Let’s realistically assume that even in the modern era, it is most unlikely that we know the absolute global average temperature to an experimental resolution of 0.1 K. By this I mean that there is that much variation (easily) just between purported estimates of the anomaly alone, and since those estimates surely rely on substantially overlapping data, this variation almost certainly significantly underestimates the error. One could argue that in the modern era we probably don’t know the anomaly within 0.3 K, and of course this grows substantially as one goes into the past, and plotting the “anomaly” in the first place conceals the simple fact that we don’t know the baseline to which the anomaly is added to within more than about a degree.
One cannot assume that this error is unbiased normal error — pure statistical error resulting from some process with zero mean. For one thing, the datasets that compute anomalies have systematic differences — some are consistently higher than others (again, in spite of the fact that they have enormous data overlap and indeed are probably adjusted to remain in approximate agreement). For another, the anomaly computations include systematic corrections to the raw data — which begs so very many questions it is difficult to count them — as well as infilling and extrapolatory black magic that literally cannot be validated outside of AT MOST a narrow window of time. Indeed, the strangest thing of all is that even the anomalies fluctuate by several tenths of a degree month to month, all or part of which could be pure statistical error. After all, what is subtracted to form the anomaly isn’t even a constant average baseline temperature; it is an average baseline temperature plus an assumed-known seasonal correction, which is a second-order correction compared to the baseline.
With all that said, I do agree with you that the IPCC is getting ready to repeat the sins of AR4’s summary for policy makers and present the mean and standard deviation of many different model results as if it is a statistically meaningful quantity. Which is why your presentation above — especially when presented with the AR4 and/or AR5 predicted trend — is NOT cherrypicking, at least not when applied to the entire time after those (e.g. AR4) predictions were made. That is simply looking to see how the models did, which is terribly.
I’m not certain I agree that we are due for 0.5 C of cooling — perhaps we are, perhaps not — because I don’t think uncertain science suddenly becomes certain for you, for me, for your friend who is sometimes right, for the IPCC, for the GCMs, or for your favorite psychic medium. Given the uncertainties in the data and the corrections, I’m not even sure we’ve had the claimed 1 C of global warming since the mid-1800s. I think we have actually had some warming, but it could be half a degree, it could be a degree and a half. Who knows what Australia, Antarctica, the western half of the United States, most of South America, half of Canada, most of China, and the bulk of the Pacific and Atlantic oceans were doing (temperature-wise) in the mid-1800s? Our thermometric data is spotty to sparse and inaccurate, and a lot of this was terra incognita to the point where we don’t even have good ANECDOTAL evidence of climate.
We are left trying to make sense of equally sparse proxies, where the proxy errors BEGIN with the residual errors of the modern era (which typically normalizes the proxy) and get strictly larger as one computes the proxy results further in the past, where the normalization period is almost certainly corrupted by the incorrect inclusion of UHI-contaminated data that is almost impossible to correct without doing a case by case study of EACH contributing station, if then.
I say “if then” because if one looks at the range of temperatures visible on the area weather stations just in the immediate vicinity of my house in Durham, while there are clearly visible UHI systematic errors in the local airports that contribute to e.g. GISS, it isn’t particularly easy to see how to correct them in a time-dependent way. Such a correction would have to allow for things like the gradual urbanization of the area; the fact that it is piedmont (hilly, with significant vertical variation of temperature that differs at different times of the year); a particular kind of soil that favors certain kinds of convective updrafts and thunderstorm formation (at least in the nearby sandhills that influence our weather); two large impoundments, both very near the airport, that have been built and filled over the last three decades; and the airport itself, which has gone from a single small terminal and a runway to three terminals and two large runways, is on the THIRD REBUILD of two of those terminals, and has relocated its weather station right next to the tarmac in the middle of nowhere, directly exposed to the sun, awash with jet exhaust, and right next to what amounts to solar-heated rock, gravel and grass (no trees need apply, even though the entire region is heavily wooded EXCEPT for the cities proper and the airport). I shudder to think of doing this sort of thing, correctly, for every contributing weather station, or of pretending that a one-size-fits-all correction can be applied across the board on the basis of some simple functional form.
IMO we have at most 33 years of pretty good measurements of global average temperature(s) — by “pretty good” I mean arguably within a few tenths of a degree C combined systematic and statistical error. We have perhaps another 20-30 years of decent measurements (post-World-War-II, say) where our knowledge is probably within order of half a degree. Before that, I suspect that the error quickly broadens out to a degree or more, with an unknown fraction that could be systematic and not zero-trend statistical. It is a daunting proposition to try to measure the Earth’s temperature now with anything like real precision. It isn’t even possible to measure the temperature within a tenth of a degree in my own back yard. Yet we purport to know what the temperature in my own back yard was in the year 1870 to well within a degree? I don’t think so.


Pete Brown says:
August 27, 2013 at 4:09 am
Well, I think you’re forgetting the fact that all of the 163 hottest years on record have occurred in the last 163 years! So there…

You might also add that 163 years out of the last 163 years have occurred after the end of the Little Ice Age.


Will people start calling this the Mann-o-pause?

Gail Combs

John Judge says: @ August 27, 2013 at 3:49 am
….they had better come up with something better than ad hominem attacks and rages against “Big Oil”.
The ‘rages against “Big Oil” ’ is projection. Al Gore is tied to Occidental Petroleum. Maurice Strong, Chair of the UN’s first Earth Summit, served from 1973-1975 as the founding director of the U.N. Environment Program and later chaired (thanks to help from President Bush) Kyoto. He got his start in Saudi Arabia with Rockefeller’s oil company; served as President of the Power Corporation of Canada; was CEO of Canada’s national oil company, Petro-Canada (which he also helped to found); and headed Ontario Hydro, North America’s largest utility company. He was a senior consultant to the World Bank and the UN and a trustee of the Rockefeller Foundation (Standard Oil money). SEE: Cloak of Green
Ged Davis, a VP of Shell Oil, is shown in one of the Climategate e-mails as the person who wrote the ‘scenarios’ for the IPCC: Golden Economic Age (A1), Sustainable Development (B1), Divided World (A2), Regional Stewardship (B2). SEE: e-mail
Shell and BP provided initial funding for the Climate Research Unit of East Anglia.
And then there is Chris Horner’s eye witness story: Enron, joined by BP, invented the global warming industry. I know because I was in the room.
Their only hope is to tar skeptics with the same brush while downplaying their own involvement with the energy industries, and they have been quite successful… if you are a brain-dead sheeple.


Bruce Cobb —
Maybe it was mentioned for people like me. The only explanation I have for how a “greenhouse gas” could possibly cause warming works equally well for preventing warming, i.e., for moderating temperature, damping both warming and cooling for a net change of zero. I am willing to be proven wrong, because then I’d understand the matter better, but the fact remains that I honestly don’t see it. I’d welcome a pointer to a high-school-level explanation (one not written by believers in AGW). Hopefully it will include why Venus, when pressure is accounted for, is no hotter than it “should” be just from its closer proximity to the sun.


It was Prof. Phil Jones in the Climategate emails (‘Bottom line: the “no upward trend” has to continue for a total of 15 years before we get worried.’) and Ben Santer in a 2011 paper in JGR (“Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.”) who picked the time period. Complaints of cherry-picking go to them. If perhaps they were wrong about validation periods, what else might they be wrong about?
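For concreteness, the trend that Jones’s 15-year remark and Santer’s 17-year criterion refer to — and that the head post plots as the bright blue lines — is just the ordinary least-squares slope of the series against time. A minimal pure-Python sketch of that fit (the 0.02 K/yr test series below is an invented example, not real data):

```python
def ols_trend(values):
    """Least-squares slope per time step for an evenly spaced series:
    slope = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2)."""
    n = len(values)
    xbar = (n - 1) / 2
    ybar = sum(values) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    sxy = sum((i - xbar) * (v - ybar) for i, v in enumerate(values))
    return sxy / sxx

# A perfectly linear 17-year annual series warming at 0.02 K/yr:
series = [0.02 * year for year in range(17)]
print(round(ols_trend(series), 6))  # 0.02
```

The fit itself is uncontroversial; the 15- versus 17-year argument is purely about how long a window must be before the slope estimate rises above the noise.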

Gail Combs

Chris Schoneveld says: @ August 27, 2013 at 3:57 am
…..It is a pity that he chose RSS, since it gave his opponents ammunition to attack his credibility.
I would also pick RSS because it is the ‘CLEANEST’ of the datasets. The three terrestrial datasets (GISS, HadCRUT4, NCDC) have been sliced, diced, and mutilated, as has been shown repeatedly here at WUWT. An example from Jo Nova’s site.
The satellite sets also have better, more uniform coverage, without all the problems of very spotty coverage, siting, multiple instruments, calibrations, and observers that give the terrestrial sets a much larger possible error.
Of the two satellite sets, UAH has the ‘stigma’ of Dr. Roy Spencer (in the minds of the warmists), so that leaves RSS as the only ‘neutral’ set.


Village Idiot, given a 420,000-year history of ±1-degree change within a ±3-degree range, I suspect he can get a bit closer than the models’ mean error of greater than two degrees using nothing more than a bit of chalk.
By the way, I think this particular assertion is just plain wrong. The glacial/interglacial variation in global temperature is a lot larger than that according to e.g. this graph:
or this one:
The actual range of variation appears to be roughly 10 K, which is quite substantial. Note well that almost all of that variation is negative compared to the present — we are currently in the Holocene interglacial, with a temperature that appears to coincide with both the warmer third of the Holocene and with the warm interglacials past. That is, if glaciation returns (so that glaciers descend across the Americas several kilometers thick all the way south into, say, Pennsylvania, with similar effects in what is now the temperate zone worldwide), global average temperatures could be as much as 8 to 10 C cooler than they are at present. In the last glacial era (the Wisconsin) CO_2 levels dropped to where the partial pressure was barely large enough to sustain at least some species of plants — close to mass-extinction levels, in other words. If one goes back further still, out of the ice age we are currently in, the Eocene optimum was perhaps 5 to 10 C warmer than the present as well. But at that point the continents themselves had a different shape.
The five-million-year curve is actually rather disturbing, especially when compared and contrasted with the last half-billion years. We are actually remarkably, dangerously cold. It has been as globally cold as it currently is in only a single era out of the last 500 million years (on a timescale where the entire glacial/interglacial fluctuation vanishes). We spend 90,000 out of every 100,000 years in glacial mode, with that borderline extinction event hovering on the low side of the temperature/CO_2 curve. So I think Monckton got all of these numbers wrong (or at the very least, I’d like to know the source for his claims, as they disagree with the published results that are the basis of these figures, which I think are really rather reasonable, since we have NON-anecdotal evidence for glaciation that would have kept e.g. all of New York State and parts north at a temperature well below freezing year round). That’s the kind of proxy I trust, although perhaps not to terribly high resolution, and of course it merely corroborates the isotope-derived evidence.

Jim from Maine

Someone please send this to CNN, NYT, ABC, NBC, CBS, AND The President.
And add in every local college and high school.


stevefitzpatrick says:
August 27, 2013 at 3:41 am
The models really do grossly overstate warming.
Of course they do….models are tuned to past temperatures that have been adjusted to show more rapid warming than is real…
…even if you had the most accurate model in this world, it would never be right because of those past temp adjustments
All models are going to show more rapid warming because of that one fact
If they get the model right….they would have to admit they lied about past temps
a Catch-22……..


Hugh Eavens

These IPCC models predict the trend per century. The data uncertainties of measurement and climate noise work both ways in the argument. So after how many years of decline can the IPCC models be challenged? According to some of their experts it was around 15 years, but each year since then it has of course become more of a problem statistically.
But that still does not make the graphs created above as meaningful or scientific as often portrayed. The prediction is multi-decadal, and the measured cooling trends are not there yet, especially considering the admitted inaccuracies (again, they work BOTH ways).
This is in my view the fundamental flaw in the reasoning in this article: comparing current trends of one or two decades with predictions made for the coming 50-100 years. They just do not compare well, although they can be interesting to analyze, of course. It is just that the exercise does not warrant the conclusion drawn: “the world is not warming anything like as fast as the models”. Simply because the models do not inform you that well about the trend in this decade. They are simply not designed to do so.
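The point about short trends versus multi-decadal predictions is easy to demonstrate on synthetic data. In this sketch all the numbers are invented for illustration (a steady 2 Cº/century trend plus modest annual noise): the century-long fit recovers the underlying trend closely, while ten-year windows of the very same series scatter widely around it.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic century: a steady 0.02 K/yr trend plus 0.15 K annual noise.
# All numbers here are made up for illustration, not real data.
years = np.arange(100)
true_slope = 0.02  # K/yr, i.e. 2 K/century
temps = true_slope * years + rng.normal(0.0, 0.15, size=100)

def slope(y):
    """Least-squares slope per year of a series."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

century = slope(temps)                                   # close to 0.02
decades = [slope(temps[i:i + 10]) for i in range(0, 91)] # all 10-yr windows

print(f"century trend: {century:.4f} K/yr")
print(f"decadal trends span {min(decades):.3f} to {max(decades):.3f} K/yr")
```

With these assumed noise levels, the spread of the decadal slopes is many times the trend’s own size, which is exactly why a one- or two-decade trend, by itself, neither confirms nor refutes a century-scale prediction.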

If the objection is cherry-picking the years involved, then let’s REALLY cherry-pick and start with the MWP.