Despite IPCC doom report, this dataset of datasets shows no warming this millennium

By Christopher Monckton of Brenchley

HadCRUT4, the last of the five monthly global datasets to report its February value, shows the same sharp drop in global temperature over the month as the other datasets.

[Figure: dataset-of-datasets graph of monthly global temperature anomalies, showing a zero trend]

Our dataset-of-datasets graph averages the monthly anomalies for the three terrestrial and two satellite temperature records. It shows there has still been no global warming this millennium. Over 13 years 2 months, the trend is zero.


Start any further back and the trend becomes one of warming – but not of rapid warming. The Archdruids of Thermageddon, therefore, can get away with declaring that there is no such thing as a Pause – but only just. Pause denial is now endemic among the acutely embarrassed governing class.

This month Railroad Engineer Pachauri denied the Pause: yet it was he who had proclaimed its existence only a year ago in Australia.

However, it is no longer plausible to suggest, as the preposterous Sir David King did in front of the House of Commons Environment Committee earlier this month, that there will be as much as 4.5 Cº global warming this century unless CO2 emissions are drastically reduced.

More than an eighth of the century has passed with no global warming at all. Therefore, from now to 2100 warming would have to occur at a rate equivalent to 5.2 Cº/century to bring global temperature up by 4.5 Cº in 2100.

How likely is that? Well, for comparison, HadCRUT4 shows that the fastest global warming rate sustained for more than a decade, over the 33 years 1974-2006, was equivalent to just 2 Cº/century.

Even if that record rate were now to commence, and were to continue for the rest of the century, the world would be only 1.75 Cº warmer in 2100 than it was in 2000.

The fastest supra-decadal warming rate ever recorded was during the 40 years 1694-1733, before the industrial revolution began. Then the Central England record, the world’s oldest and a demonstrably respectable proxy for global temperature change, showed warming at a rate equivalent to 4.3 K/century. Nothing like it has been seen since.

Even if that rapid post-Little-Ice-Age naturally-driven rate of naturally-occurring warming were to commence at once and persist till 2100, there would only be 3.75 Cº global warming this century.
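The rate arithmetic in the preceding paragraphs is easy to check. A rough sketch, assuming the remainder of the century runs from early 2014 (about 86 years; the article's own figures round slightly differently):

```python
# Rough check of the warming-rate arithmetic above.
# Assumes ~86 years remain between early 2014 and 2100.
years_remaining = 2100 - 2014  # 86

# Rate needed from now on to reach 4.5 C of warming by 2100
required = 4.5 / years_remaining * 100       # C per century
print(round(required, 1))                    # 5.2

# Warming by 2100 if the 1974-2006 rate (2 C/century) resumed now
print(round(2.0 * years_remaining / 100, 2))  # 1.72, i.e. under 1.75

# Warming by 2100 at the 1694-1733 Central England rate (4.3 C/century)
print(round(4.3 * years_remaining / 100, 1))  # 3.7
```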

Yet the ridiculous Sir David King said he expected 4.5 Cº global warming this century. Even the excitable IPCC, on its most extreme scenario, gives a central estimate of only 3.7 Cº warming this century. Not one of the puddings on the committee challenged him.

Meanwhile, the discrepancy between prediction and observation continues to grow. Here is the IPCC’s predicted global warming trend since January 2005, taken from Fig. 11.25 of the Fifth Assessment Report, compared with the trend on the dataset of datasets since then. At present, the overshoot is equivalent to 2 Cº/century.

[Figure: IPCC predicted warming trend since January 2005 (AR5 Fig. 11.25) compared with the observed dataset-of-datasets trend]

It is this graph of the widening gap between the predicted and observed trends that will continue to demonstrate the absence of skill in the models that, until recently, the IPCC had relied upon.

Finally, it is noteworthy that the IPCC’s mid-range estimate of global warming from 1990 onward was 0.35 Cº/decade. The IPCC now predicts less than half that, at 0.17 Cº/decade. At that time, it was advocating a 50% reduction in CO2 emissions. It is now transparent that no such reduction is necessary: for the warming rate is already below what it would have been if any such reduction had been achieved or achievable, desired or desirable.

Within a few days, the RSS satellite record for March will be available. I shall report again then. So far, that record shows no global warming for 17 years 6 months.

JDT
April 1, 2014 10:11 am

E.Smith
You seem to misunderstand the nature of EM radiation. EM radiation carries energy. EM radiation at higher energy (shorter wavelength) can be converted into heat or into lower-energy (infrared) EM radiation. So the earth is most definitely NOT a closed system!
PS just for fun:
http://globalclimate.ucr.edu/resources.html#q3

dikranmarsupial
April 1, 2014 11:17 am

The standard test for the biasedness of a coin says that if you flip a coin five times and get five heads in a row, the p-value (the probability of observing a result at least as extreme under the null hypothesis that the coin is unbiased) is 0.03125, which is less than the usual threshold of 0.05, so we say that the null hypothesis is rejected and can claim we have reasonable evidence that the coin is biased.
Now if I take a fair coin and keep flipping it until I get five heads in a row, do I have statistical justification for claiming the coin is biased? No, because instead of the flips being a random sample, I have cherry picked the five flips that suited my argument.
Does it make a difference whether I use a computer algorithm to do the cherry picking for me? No, of course not.
Occasional periods of little or no warming can be found in both the observations and in GCM output. To show how unsurprising this result is: there was a flat trend in the RSS dataset from 1980 to 1994:
http://woodfortrees.org/plot/rss/plot/rss/from:1980/to:1994/trend
So if there was no warming from 1980-1994 and none from 2000 to present, there was a rather busy 6 years in the middle! ;o)
The difference is that *I* know this is cherry picking and wouldn’t even consider making a serious argument based on a cherry picked trend.
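The coin-flip numbers in this comment are easy to verify, and the cherry-picking effect can be demonstrated directly: a fair coin will always eventually produce five heads in a row, so "sampling until significance" guarantees a spuriously significant result. A minimal sketch (the simulation is illustrative, not part of the original comment):

```python
import random

# p-value for five heads in five flips of a fair coin
p_value = 0.5 ** 5
print(p_value)  # 0.03125, below the usual 0.05 threshold

def flips_until_five_heads(seed):
    """Flip a fair coin until five consecutive heads appear;
    return how many flips it took. This always terminates."""
    rng = random.Random(seed)
    streak = flips = 0
    while streak < 5:
        flips += 1
        streak = streak + 1 if rng.random() < 0.5 else 0
    return flips

# Every trial eventually "rejects" the null, even though the coin is
# fair: that is what cherry picking the sample buys you.
print(all(flips_until_five_heads(s) >= 5 for s in range(100)))  # True
```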

Mark Bofill
April 1, 2014 11:30 am

Dr Cawley,
How many flips of the coin will satisfy you? Can you offer a figure in years after which, if we have not seen the projected atmospheric temperature trends, you will accept that the coin is biased? If not, it would seem to me that there is ~no~ statistical argument that you’d find persuasive.

dikranmarsupial
April 1, 2014 11:38 am

Mark Bofill, it is not a matter of the number of coin flips, the point is that cherry picking the interval to maximise the strength of the evidence in favour of your argument is bad statistics. As the old quote goes “he uses statistics as a drunk uses a lamp-post – more for support than illumination”. In the coin flipping example it is simple enough to come up with an alternative test that takes into account the scope for cherry picking.
In the case of the “pause”, the statistical test is straightforward, you just need to show that the observed trend is statistically inconsistent with a continuation of the trend in the preceding decades (taking the autocorrelation of the montly observations properly into account). This is not a particularly difficult test to perform, but skeptics never seem to want to talk about that test, for some reason! ;o)
I am amenable to solid statistical arguments, but I am not too impressed by cherry picking.
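The test described here, checking whether a trend is significant once the autocorrelation of monthly data is taken into account, can be sketched with the standard effective-sample-size correction for AR(1) residuals. This is an illustrative sketch under that assumption, not the exact procedure the commenter has in mind:

```python
import numpy as np

def trend_with_ar1_se(y):
    """OLS trend of a series plus its standard error, both naive and
    adjusted for lag-1 autocorrelation via an effective sample size."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    # naive variance of the slope estimate
    se2 = resid.var(ddof=2) / ((t - t.mean()) ** 2).sum()
    # lag-1 autocorrelation of the residuals
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    r1 = max(min(r1, 0.99), 0.0)
    # persistent residuals shrink the effective sample size,
    # widening the uncertainty on the trend
    n_eff = n * (1 - r1) / (1 + r1)
    se_adj = np.sqrt(se2 * n / n_eff)
    return slope, np.sqrt(se2), se_adj
```

With autocorrelated noise the adjusted standard error is larger than the naive one, which is why short "pause" trends that look flat are rarely statistically distinguishable from a continuation of the earlier trend.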

dikranmarsupial
April 1, 2014 11:39 am

By the way, you can call me “Gavin” if you like, I am not that formal.

Jason Calley
April 1, 2014 11:42 am

dikranmarsupial says at April 1, 2014 at 11:17 am
You make a very good point about cherry picking part of a series of coin tosses. If you don’t mind, let me modify your example so that it more closely reflects the situation regarding AGW. Suppose an AGW model maker gives you a coin and tells you this: “This coin is a fair unbiased coin right now, but I have modified it so that with every passing throw, it becomes more and more biased towards landing heads up. How much more biased? An increasing bias so large that no matter how many times you throw it, the chance of getting a run of fifteen throws with equal heads and tails is extremely unlikely.”
The AGW theorists have told us that with each year that CO2 increases, the chances of a hotter world increase. How much do the chances increase? So much that the chance of getting 15 years in a row (even within a long series of years) without overall warming is vanishingly small.
Your analogy, while correct for unbiased coins, is NOT correct for biased coins.

dikranmarsupial
April 1, 2014 11:46 am

Jason Calley, GCMs are not that tunable, the fact that no skeptic has made a GCM that can explain the observed climate using only natural forcings demonstrates this is the case. The attempt to deflect the criticism of cherry picking, without addressing it, onto a discussion of models is noted however. ;o)

Mark Bofill
April 1, 2014 11:51 am

Gavin,

In the case of the “pause”, the statistical test is straightforward, you just need to show that the observed trend is statistically inconsistent with a continuation of the trend in the preceding decades (taking the autocorrelation of the montly observations properly into account). This is not a particularly difficult test to perform, but skeptics never seem to want to talk about that test, for some reason! ;o)

Thank you Gavin. My maths aren’t as strong as I’d like, so it’s entirely possible I’m making a mistake, but my impression was that the longer we go with observations failing to match a predicted trend, the less probable the idea that the trend is valid becomes, much like your coin example.
I’ve read on Lucia’s Blackboard that between 15 and 17 of the GCMs used for AR4 projections are outside of their 95% confidence intervals, depending on the temperature set used to evaluate them (here). Why shouldn’t I find her argument to reject the hypothesis compelling?

Jason Calley
April 1, 2014 11:51 am

@dikranmarsupial “The attempt to deflect the criticism of cherry picking, without addressing it, onto a discussion of models is noted however. ;o)”
Your failure to acknowledge the role of increasing bias in negating your charge of cherry picking, without addressing it, is noted however. :o)

Mark Bofill
April 1, 2014 11:57 am

The attempt to deflect the criticism of cherry picking, without addressing it, onto a discussion of models is noted however.

Oh. We’re considerably sloppier over here than it looks like the folks at SkS are, we wander off topic all the time. So my position is plain, I’ve got reservations about the cherry picking argument but the topic doesn’t interest me all that much. Recognizing the criteria by which we can statistically accept or reject the hypothesis that the GCM’s project realistic atmospheric temperature trends is what I care about.

dikranmarsupial
April 1, 2014 12:01 pm

Mark, whether there is a pause in warming or not is independent of the models: if there is a pause, it means that there is a statistically significant difference between the trend in the period before the start of the pause and the trend in the period after. If you account for the autocorrelation in monthly data, the difference isn’t even close to being statistically significant, even if you do cherry pick the cut-off date to maximise the strength of the argument.
Asking whether there is a model-data inconsistency is a separate question altogether. The evidence for a model-data inconsistency is stronger than that for the existence of a pause, but it isn’t (yet) statistically significant either. However, one of the problems with a test for model-observation inconsistency is that the inconsistency can be caused either by the models running too warm, or because they underestimate the effects of internal climate variability (which is used as part of the test) or a bit of both. For this reason the tests commonly discussed on blogs (e.g. Fyfe) are not nearly as conclusive as they are often made out to be.
If you want to look at something where the models are very clearly wrong, look at Arctic sea ice, the models have massively underestimated the rate of loss.
The pause and the model-data difference are both interesting and science is learning a great deal about internal variability by research that has been the result of efforts to understand the likely causes.

Mark Bofill
April 1, 2014 12:02 pm

Uhm oops, should have said AR5 models or I should have linked a different post. Regardless, question remains.
BTW, I appreciate the opportunity for a brief civilized exchange Gavin. I doubt it will last long here, but I’d be delighted to be mistaken about that.

Mark Bofill
April 1, 2014 12:08 pm

if there is a pause, it means that there is a statistically significant difference between the trend in the period before the start of the pause and the trend in the period after.

I understand what you’re saying. It’s not the way I’m used to thinking about it, but it makes sense. So you’re saying forget the models, look at the actual measurements as a random variable, and use statistics to figure out if there’s a pause or anything unusual going on.
Gotcha.
I’ll go kick that around for awhile. Obviously it’s OK to do that, but I’m not 100% sure what I think that demonstrates.
At any rate, appreciate your time sir, thanks.
Mark

dikranmarsupial
April 1, 2014 12:14 pm

Mark, yes, I suspect you are right, but I would also be delighted to be mistaken about that!
The key problem with the analysis at the BlackBoard is the bit about ” It possible to estimate the ±95% range spread of trend due to “weather” based on that model by computing the standard deviation and assuming trends are normally distributed.”, If the models underestimate the magnitude of the “weather”, it falsely inflates the significance of the difference between the model mean and the observations. My intuition was that the models would be more reliable in estimating the forced response (the ensemble mean) than they would be in estimating the variability die to the “weather” (the standatd deviation), however having spoken to some climatologists I now think it is more likely to be a bit of both being wrong. The important question to ask is “why is it that the discussion focuses on whether the models overestimate the trend rather than whether they underestimate climate variability”?
Essentially the Blackboard test is only unbiased if the models accurately estimate the effects of climate variability. Why should we expect them to be able to do that if they can’t get the forced response correct?
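The point about underestimated variability reduces to two lines of arithmetic: the significance of the model-observation gap scales inversely with the assumed ensemble spread. The numbers below are hypothetical, chosen only to illustrate the sensitivity:

```python
def z_score(obs_trend, ensemble_mean, ensemble_sd):
    """Standardised gap between an observed trend and the model
    ensemble mean, in units of the assumed ensemble spread."""
    return (obs_trend - ensemble_mean) / ensemble_sd

obs, mean = 0.05, 0.20  # hypothetical trends, C per decade

# With a realistic spread the gap is unremarkable...
print(round(z_score(obs, mean, 0.10), 2))  # -1.5 sigma: not significant

# ...but halving the assumed "weather" spread makes the very same
# observation look like a decisive 3-sigma rejection.
print(round(z_score(obs, mean, 0.05), 2))  # -3.0 sigma
```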

Mark Bofill
April 1, 2014 12:28 pm

Gavin,
As far as I understand it, that’s correct. It’s also possible that the ‘bounds’ are set too tight and underestimate variability. We can state more generally that what is demonstrated is that one of the assumptions is incorrect, where the set of assumptions includes {model trend is correct, model variability is correct} and possibly other elements. Any way we slice it, though, we’re making a mistake someplace.

Mark Bofill
April 1, 2014 12:40 pm

Oh. I never really answered your question.

The important question to ask is “why is it that the discussion focuses on whether the models overestimate the trend rather than whether they underestimate climate variability”?

‘Cause the trend is more relevant to the eventual state of the variable we care about, would be my guess. But point taken that model rejection doesn’t necessarily demonstrate that the trend was the problem.

dikranmarsupial
April 1, 2014 12:47 pm

Mark, the trouble is that they are both linked, and you can’t obtain a clear answer about one without getting a clear answer to the other. The point is that it is better to focus on understanding the science rather than simply trying to reject the models. As I pointed out, it is easy to find things that the models get wrong (e.g. sea ice extent), but that does not mean that the models are not the best method we currently have for reasoning about the effects of our (in)actions on future climate. Understanding the science will help us make better models in the future, simply trying to reject models is not in itself a highly productive activity.

Mark Bofill
April 1, 2014 1:04 pm

Certainly. But when you say ‘reject the models’, it’s important (in my view) to identify what use the models are being rejected for. This becomes a matter of personal opinion of course, but I don’t want policymakers making decisions on the basis of these models with the mistaken idea that we have confidence that the models accurately project the variables we’re interested in. Some disagree with me on that. I don’t think politicians understand the distinctions, though. I worry that they hear ‘computer model’ and think they’ve got an oracle.
Steven Mosher used to talk about this some now and then. The models are basically just our understanding of the physics programmed conveniently so we don’t have to do the math by hand. Obviously there’s value in automation; no advantage to doing the computations with pencil and paper. I just worry that people aren’t aware of the pitfalls. Also – maybe we’d disagree on this, but I suspect the fact that our GCM projections do not appear to match observations in statistical tests means that we don’t really understand the Earth’s climate half as well as some would like us to believe.

1sky1
April 1, 2014 1:24 pm

Fitting decade-long linear “trends” to climate data is a fool’s errand; the values obtained are demonstrably much too volatile to provide any indication of truly secular trend. Only records longer than a century can begin to distinguish between the latter and various phases of multidecadal cycles.

Ian
April 1, 2014 1:24 pm

dikranmarsupial
You say “Jason Calley, GCMs are not that tunable, the fact that no skeptic has made a GCM that can explain the observed climate using only natural forcings demonstrates this is the case”.
Does this not count?
http://www.drroyspencer.com/research-articles/satellite-and-climate-model-evidence/
You may not agree with it, but it does exist :).

TRM
April 1, 2014 1:57 pm

I for one love these updates. It is a constant stream of blows to the CO2 controls the climate crowd. If Dr Easterbrook and Dr Libby are correct we could be in for a lot more. I wonder at what point they finally give up? 20 years? 30? 50? Never? Sadly my guess is the last one.
Oh well I’m taking matters into my own hands. This summer I’m doing more insulating and other practical stuff to stay warm in the winters to come. Already got some done over the last few years but always lots more to do.
Cheers and stay warm everyone.

April 1, 2014 2:14 pm

dikranmarsupial
You say “Jason Calley, GCMs are not that tunable, the fact that no skeptic has made a GCM that can explain the observed climate using only natural forcings demonstrates this is the case”.

Well, your statement is overly restrictive: the fact is that no scientist has made a GCM that explains the observed climate. Period.
Only CAGW Alarmists use the existing GCMs to forecast the future and make recommendations regarding our actions.
Skeptics remain, uh, skeptical of their usefulness. (“their” meaning both the GCMs and the Alarmists). 🙂

April 1, 2014 5:23 pm

TomP says:
April 1, 2014 at 12:21 am
Quote ” Mario, The millennium started in 2001 because there was no year zero.”
The real reason Monckton starts at 2001 is that if you include 2000, the slope is positive.
+++++++++
Then why not start from 1998, from which point the slope through today is still negative? The convention is correct as Lord Monckton presented it.

Dr. Strangelove
April 1, 2014 6:52 pm

dikran
You cannot have it both ways. You say the “pause” is statistical noise due to chance. Then I say the warming in 1978-1998 is also statistical noise. But warmists say that’s due to humans. So the cool periods are statistical noise but the warm periods are due to humans? Earth has been warming since the Little Ice Age. Is that also man-made?
Atmospheric CO2 is steadily increasing but temperature is cyclical showing warming and cooling. Is the discrepancy due to chance? None of the models predicted the pause. Is that also due to chance? Even random guesses have a chance of being correct. But contrived predictions are likely wrong when they contradict observations.
BTW none of the global temperature anomalies from 1950-2013 exceeds 2-sigma deviation in warming and cooling. None is statistically significant by the standard of empirical sciences. They are all statistical noise.

Chris
April 1, 2014 7:36 pm

Query: are there larger size images of the two charts accompanying this post available somewhere? I’d like to use them as wallpaper on my computer monitors.