The 200 months of 'the pause'

By Christopher Monckton of Brenchley

A commenter on my post, which noted that according to the RSS satellite monthly global mean lower-troposphere temperature dataset there has been no global warming at all for 200 months, complains that I have cherry-picked my dataset. So let’s pick all the cherries. Here are graphs for all five global datasets since December 1996.

GISS:

[graph]

HadCRUt4:

[graph]

NCDC:

[graph]

RSS:

[graph]

UAH:

[graph]

The mean of the three terrestrial datasets:

[graph]

The mean of the two satellite datasets:

[graph]

The mean of all five datasets:

[graph]

Since a trend of less than 0.15 K is within the combined 2σ data uncertainties arising from errors in measurement, bias, and coverage, global warming since December 1996 is only detectable on the UAH dataset, and then barely. On the RSS dataset, there has been no global warming at all. None of the datasets shows warming at a rate as high as 1 Cº/century. Their mean is just 0.5 Cº/century.

The bright blue lines are least-squares linear-regression trends. One might use other methods, such as order-n auto-regressive models, but in a vigorously stochastic dataset with no detectable seasonality the result will differ little from the least-squares trend, which even the IPCC uses for temperature trend analysis.
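For readers who want to reproduce the bright blue lines, a least-squares trend is a one-liner in any numerical package. A minimal sketch in Python with NumPy, using an invented anomaly series as a stand-in for the real RSS/GISS/HadCRUt4/NCDC/UAH data (the slope and noise level below are illustrative only):

```python
import numpy as np

# Hypothetical stand-in for 200 months of anomalies (K); substitute the
# real dataset to reproduce the plotted trends.
rng = np.random.default_rng(0)
months = np.arange(200)
anomalies = 0.00004 * months + rng.normal(0.0, 0.1, size=200)

# Least-squares linear regression: slope in K/month.
slope, intercept = np.polyfit(months, anomalies, 1)
trend_per_century = slope * 12 * 100  # K/century

# The post's detectability criterion: total change over the period
# smaller than ~0.15 K sits within the combined 2-sigma uncertainty.
total_change = slope * 200
detectable = abs(total_change) > 0.15
print(f"{trend_per_century:+.2f} K/century, detectable: {detectable}")
```

On a vigorously noisy series like this the fitted slope is dominated by the noise, which is exactly why the detectability threshold matters.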

The central question is not how long there has been no warming, but how wide the gap is between what the models predict and what the real-world weather brings. The IPCC’s Fifth Assessment Report, to be published in Stockholm on September 27, combines the outputs of 34 climate models to generate a computer consensus to the effect that from 2005 to 2050 the world should warm at a rate equivalent to 2.33 Cº per century. Yeah, right. So, forget the Pause, and welcome to the Gap:

GISS:

[graph]

HadCRUt4:

[graph]

NCDC:

[graph]

RSS:

[graph]

UAH:

[graph]

Mean of all three terrestrial datasets:

[graph]

Mean of the two satellite datasets (monthly Global Warming Prediction Index):

[graph]

Mean of all five datasets:

[graph]

So let us have no more wriggling and squirming, squeaking and shrieking from the paid trolls. The world is not warming anything like as fast as the models and the IPCC have predicted. The predictions have failed. They are wrong. Get over it.

Does this growing gap between prediction and reality mean global warming will never resume? Not necessarily. But it is rightly leading many of those who had previously demanded obeisance to the models to think again.

Does the Great Gap prove the basic greenhouse-gas theory wrong? No. That has been demonstrated by oft-repeated experiments. Also, the fundamental equation of radiative transfer, though it was discovered empirically by Stefan (the only Slovene after whom an equation has been named), was demonstrated theoretically by his Austrian pupil Ludwig Boltzmann. It is a proven result.
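The Stefan-Boltzmann relation referred to here is j* = σT⁴: radiated flux grows as the fourth power of absolute temperature. A quick numerical check in Python (the two temperatures are the standard textbook figures for Earth's effective radiating temperature and mean surface temperature):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(temp_kelvin):
    """Radiated power per unit area (W/m^2) of a blackbody at temp_kelvin."""
    return SIGMA * temp_kelvin ** 4

# Earth's effective radiating temperature (~255 K) vs mean surface (~288 K):
print(blackbody_flux(255.0))  # ~240 W/m^2
print(blackbody_flux(288.0))  # ~390 W/m^2
```

The ~150 W/m² difference between those two figures is the quantity the whole greenhouse debate is about apportioning.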

The Gap is large and the models are wrong because in their obsession with radiative change they undervalue natural influences on the climate (which might have caused a little cooling recently if it had not been for greenhouse gases); they fancifully imagine that the harmless direct warming from a doubling of atmospheric CO2 concentration – just 1.16 Cº – ought to be tripled by imagined net-positive temperature feedbacks (not one of which can be measured, and which in combination may well be net-negative); they falsely triple the 1.16 Cº direct warming on the basis of a feedback-amplification equation that in its present form has no physical meaning in the real climate (though it nicely explains feedbacks in electronic circuits, for which it was originally devised); they do not model non-radiative transports such as evaporation and convection correctly (for instance, they underestimate the cooling effect of evaporation threefold); they do not take anything like enough account of the measured homeostasis of global temperatures over the past 420,000 years (variation of little more than ±3 Cº, or ±1%, in all that time); they daftly attempt to overcome the Lorenz unpredictability inherent in the mathematically-chaotic climate by using probability distributions (which, however, require more data than straightforward central estimates flanked by error-bars, and are thus even less predictable than simple estimates); they are aligned to one another by “inter-comparison” (which takes them further and further from reality); and they are run by people who fear, rightly, that politicians would lose interest and stop funding them unless they predict catastrophes (and fear that funding will dry up is scarcely a guarantee of high-minded, objective scientific inquiry).

That, in a single hefty paragraph, is why the models are doing such a spectacularly awful job of predicting global temperature – which is surely their key objective. They are not fit for their purpose. They are mere digital masturbation, and have made their operators blind to the truth. The modelers should be de-funded. Or perhaps paid in accordance with the accuracy of their predictions. Sum due to date: $0.00.
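The feedback-amplification arithmetic criticized above is easy to state. In the Bode form borrowed from circuit design, equilibrium warming is ΔT = ΔT₀ / (1 − f), where ΔT₀ is the direct warming and f the net feedback fraction. A sketch using the post's 1.16 Cº direct-warming figure (the f values are purely illustrative):

```python
def amplified_warming(direct_warming, feedback_fraction):
    """Bode feedback-amplification equation: dT = dT0 / (1 - f)."""
    if feedback_fraction >= 1.0:
        raise ValueError("f >= 1 implies a runaway response")
    return direct_warming / (1.0 - feedback_fraction)

direct = 1.16  # Cº per CO2 doubling, the post's no-feedback figure

# f = 2/3 exactly triples the direct warming, as the models in effect assume:
print(amplified_warming(direct, 2.0 / 3.0))  # 3.48 Cº

# A net-negative feedback (f < 0) would instead damp it:
print(amplified_warming(direct, -0.5))  # ~0.77 Cº
```

The post's complaint is not with this algebra but with applying it to the climate at all; the sketch just makes the claimed tripling explicit.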

In the face of mounting evidence that global temperature is not responding as ordered, the paid trolls – one by one – are falling away from threads like this, and not before time. Their funding, too, is drying up. A few still quibble futilely about whether a zero trend is a negative trend or a statistically-insignificant trend, or even about whether I am a member of the House of Lords (I am – get over it). But their heart is not in it. Not any more.

Meanwhile, enjoy what warmth you can get. A math geek with a track-record of getting stuff right tells me we are in for 0.5 Cº of global cooling. It could happen in two years, but is very likely by 2020. His prediction is based on the behavior of the most obvious culprit in temperature change here on Earth – the Sun.

JimF
August 27, 2013 7:38 pm

rgbatduke says:
August 27, 2013 at 5:29 am: You begin to sound like a geologist, rather than a physicist. We geo-types have been thinking this way a long time.

August 27, 2013 7:38 pm

Gail Combs says August 27, 2013 at 6:42 pm

So you want a more recent article?
How about this one: FORBES: 6/19/2013 Will Summer Blackouts Doom The Texas Boom?

Your article posits a ‘what if’ scenario in Texas; since we are WELL past June (note the date on your cited article!) I can now answer with certitude that we have had _no_ ‘posited’ blackouts OR brownouts. Zip, zero, nada. (You did notice one of the favored weasel words “will” was used in the title, a word frequently used by purported ‘news’ organizations to ‘hype’ stories?)
Next …

Janice Moore
August 27, 2013 8:04 pm

Hey, _Jim! LOL, still trying to help your favorite blogger, Gail, I see. Good for you. You might want to re-think that last post, however… . (ahem). While the article was dated 6/19/13 (how kind of you to point that out to Ms. Combs), it’s still “summer.” It will be summer until around September 21 (just FYI). Perhaps, there will be no brown-outs as feared, but, her cite was not the fool’s errand you make it out to be.
Oh, _Jim, I probably shouldn’t have written so sarcastically to you above. Please forgive me, but IT WAS SO MUCH FUN! Okay, now you can help ME with my writing!
#(:))

Peter
August 27, 2013 8:29 pm

I think I need a primer on temperature. Is the average temperature the average of the daily Tmax and Tmin, or is it the average of more data points through the day? My recollection is that it is the average Tmax and Tmin, and when compared with the hour or minute frequency data, there is quite a large difference in what the average is, much larger than the fractions of degrees for the one sigma stated earlier. Has the global average daily Tmax gone up with time, or the average Tmin, or both? If all of the warming is on the low end, is it less of a problem than on the high end?
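A quick numerical illustration of the point Peter raises: for an asymmetric diurnal cycle, the Tmax/Tmin midrange and the true mean of high-frequency readings can differ by a degree or more. The 24-hour profile below is invented for illustration (fast afternoon peak, slow overnight decay), not real station data:

```python
import numpy as np

# Invented asymmetric diurnal profile, hours 0-23, temperatures in deg C:
# a sharp peak at hour 15 followed by slow exponential cooling.
hours = np.arange(24)
temps = 15.0 + 10.0 * np.exp(-((hours - 15) % 24) / 6.0)

midrange = (temps.max() + temps.min()) / 2.0  # the Tmax/Tmin "average"
true_mean = temps.mean()                      # mean of all 24 readings

print(f"midrange {midrange:.2f} C, true mean {true_mean:.2f} C, "
      f"difference {midrange - true_mean:+.2f} C")
```

For this skewed profile the midrange overstates the true mean by roughly 2.4 Cº; for a pure sinusoid the two coincide, which is why the discrepancy depends on the shape of the diurnal cycle.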

RoHa
August 27, 2013 8:30 pm

” the most obvious culprit in temperature change here on Earth – the Sun.”
O.K. The science is in. Time for us to take action. Fix the Sun, or get rid of it entirely.

Janice Moore
August 27, 2013 8:41 pm

Peter, here’s an article that may help you understand max min temperature:
http://wattsupwiththat.com/2013/06/20/model-data-comparison-daily-maximum-and-minimum-temperatures-and-diurnal-temperature-range-dtr/
If it is unhelpful, try searching on WUWT using search term “max min temperature” or “diurnal temperature” or other such terms.
GOOD LUCK!
(I won’t try to explain it, I am too ignorant of the subject matter myself!)

Rob
August 27, 2013 9:24 pm

Whamo! Don’t you just love “real data”.

August 27, 2013 9:24 pm

_Jim;
You know, you have to let ppl make their own mistakes and learn;
>>>>>>>>>>>>>>
What a crude and ignorant remark. Is there some part of “people died” that you fail to comprehend? Do dead people learn from their mistakes? In this case the “mistake” being that they were unfortunate innocent victims of someone else’s mistake? If you spotted a child playing with a loaded machine gun, would you shrug and suggest the child should learn from their “mistakes”? Or just the people the child accidentally kills?
I suggest that you owe Gail an apology.

August 27, 2013 9:29 pm

Thanks, Christopher, Lord Monckton. Good work!
A long, significant pause for average temperature under ever-increasing CO2 concentration in the atmosphere.
Could it be that night temperatures are rising while day temperatures are falling, as Patrick J. Michaels wrote in 1992?
“Most of the warming is at night, when it produces benign effects such as longer growing seasons”
From “Sound and Fury – The Science and Politics of Global Warming”
See http://store.cato.org/free-ebooks/sound-fury-science-politics-global-warming

Janice Moore
August 27, 2013 9:55 pm

Good for you to stand up for Gail Combs, David M. Hoffer. Well said and I agree.
Perhaps…….. _Jim thinks Gail Combs is really his ex-wife (or SOMETHING like that — she is his number one target quantity-wise by FAR), lol. He makes a practice of following her posts and regaling us all with his harsh criticisms of what she writes. One wonders why… .

Henry Clark
August 28, 2013 12:04 am

Monckton of Brenchley says:
August 27, 2013 at 10:34 am
“At the 1.3 inches/century mean rate of sea-level rise shown by the late Envisat satellite during the eight years of its operation, that will be quite a long time.”
That must be a reference to how, indeed, Envisat in its original data was finding little sea level rise, including showing a fall during 2010.
Sea level change is plotted by most publicizing institutions in an extremely misleading manner via showing only total cumulative gain rather than variation in the rate of change. However, doing the latter leads to a striking pattern in sea level rise, cloud cover, humidity at appropriate altitude, and temperature:
http://s24.postimg.org/rbbws9o85/overview.gif
As can be seen, there is quite a reason that the *derivative* of sea level change is almost never, ever, ever plotted in graphs distributed by those favoring the CAGW movement.
Based on the preceding (and the sun seeming headed for a Grand Minimum), I’d wager outright fall in sea level to become the trend by / after later this decade (at least in reality, although publicized charts may unfortunately be largely a matter of adjustments).
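Henry Clark's point about plotting the rate of change rather than the cumulative total is straightforward to do numerically: differentiate the cumulative series. A sketch with an invented cumulative sea-level series (substitute real altimetry data to reproduce his plots):

```python
import numpy as np

# Invented cumulative sea-level series (mm), one value per year:
# a steady rise early on that flattens in the second decade.
years = np.arange(1993, 2013)
level = np.concatenate([3.2 * np.arange(10), 32.0 + 1.0 * np.arange(10)])

# Rate of change (mm/yr): the derivative that, as noted above, is
# rarely plotted; np.gradient uses central differences in the interior.
rate = np.gradient(level, years)
print(rate)
```

The cumulative plot of `level` looks like monotonic rise throughout; the `rate` plot immediately shows the slowdown.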

August 28, 2013 1:30 am

1sky1:
Your post at August 27, 2013 at 4:08 pm
http://wattsupwiththat.com/2013/08/27/the-200-months-of-the-pause/#comment-1401440
answers my post addressed to you at August 27, 2013 at 3:37 pm
http://wattsupwiththat.com/2013/08/27/the-200-months-of-the-pause/#comment-1401415
In my post I said

The climate models are claimed to emulate climate behaviour as represented by the existing data sets of global temperature. That claim is falsified by comparison of the models’ outputs with the existing data sets of global temperature. This thread is discussing that falsification.

THAT IS TRUE and, as I said, the validity of the “existing data sets of global temperature” is not relevant to “that falsification”.
But your reply says

Inasmuch as climate models don’t claim to emulate multidecadal variations, only the SECULAR trend should be at issue in any comparison with in situ observations. My point about the requirements to establish such a trend is entirely relevant to the issue of model falsification.

What you, I, or anybody else thinks “should be at issue” has no relevance to what IS at issue. And this thread is about what is at issue.
And it is not possible to define the “SECULAR trend” because there is no agreed definition of what average global temperature is or how it could be determined. This is explained in Appendix B of the item I linked for you. To save your needing to find it, I again provide it here
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm
So, I request that we return to the important subject of this thread.
Richard

August 28, 2013 1:38 am

Friends:
Ian W said at August 27, 2013 at 4:15 pm

the EPA will carry on regardless regulating the USA energy supplies out of existence. Similarly, the EC will continue imposing its ‘carbon taxes’ on as many areas as it can. The bureaucrats in both agencies have nothing else to do – that is their sole raison d’etre. So even as the US becomes embroiled in yet another war – the EPA will be cutting away at its industrial foundations. I am not sure they care about or even believe in CAGW but their jobs depend on it being true and on enforcing ever more stringent regulations – so that is what they will do as it is their source of power.

Repeated for emphasis because it is why the subject of this thread is important.
Those bureaucrats will be permitted to continue founding their activities on the ‘projections’ of climate models unless people are made aware that those ‘projections’ are shifting sand.
Richard

August 28, 2013 1:59 am

stevefitzpatrick:
re your silly post addressed to me at August 27, 2013 at 5:40 pm.
I consider ALL the science of climate change to be important because the pseudoscience of AGW is
(a) damaging the reputation of all science
and
(b) being used as an excuse to promote totalitarianism.
Your assertion that I don’t want to discuss any of the science of climate change is untrue and laughable.
But I object to trolls attempting to distract discussion of any part of the science of climate change.
If you want to discuss ENSO then do it on an appropriate thread – they often occur on WUWT – where I would be pleased to engage about it. ENSO is NOT the subject here. And if you wanted to deflect a thread about ENSO with irrelevances about “the 200 months of the pause” then I would object to that, too.
Your every contribution in this thread has been input of ‘red herrings’. And your false assertions of what I “don’t want” to do cannot hide your trolling.
Richard

rgbatduke
August 28, 2013 5:29 am

Your article posits a ‘what if’ scenario in Texas; since we are WELL past June (note the date on your cited article!) I can now answer with certitude that we have had _no_ ‘posited’ blackouts OR brownouts. Zip, zero, nada. (You did notice one of the favored weasel words “will” was used in the title, a word frequently used by purported ‘news’ organizations to ‘hype’ stories?)
I don’t know about Texas, but North Carolina is having its coolest summer in at least 30 years, maybe more. In August, for example, it’s been down in the mid-50’s at night repeatedly, and we’ve had daytime highs that are in the seventies and low 80s where “normal” is 88 to 90 and where the record highs are around 100 F. It is cloudy to partially cloudy almost all the time, not with the high haze of humidity that makes it killer hot, but with honest cumulus and a fair bit of rain. The weather right now feels more like mid to late September or even October, not August, which is usually scorching. It’s been nice outside, nice enough to do things outdoors in the middle of the day far inland, in August. Often. Basically, this never happens, trust me.
It is quite possibly the case that Texas had a similarly comparatively cool summer, but either way the fact that Texas did NOT have a blackout or brownout doesn’t falsify the reported shortage of capacity, does it? If it had had an unusually HOT summer with HIGH demand, or if any sort of incident had interrupted a part of its supply chain, it might have, but in a cool or even a “normal” summer without accident or incident, it would get by. It may also have more expensive alternatives that are not included in its primary capacity computation that saved it — buying power across state lines, for example — that buffer(ed) any shortfalls that might have occurred without making any waves but higher costs that were absorbed by the fact that the overall summer was cooler than normal.
I’m not sure how wise it is, in other words, to ignore an article that points out that a critical resource is marginally provisioned to handle a growing, inelastic demand, in such a way that fluctuations can push one over the edge. Sadly, if it ever does go over the edge into brownouts or blackouts, people will die (no AC is potentially life-threatening to the elderly and very young in midsummer) and global warming, not carbon trading, will be blamed even if the weather that produces it is completely within normal hotter year/colder year fluctuations about its floating mean behavior.
Europe is, of course, far worse off because they invested far more heavily in e.g. wind generation, and they are backing off as hard as they can right now as it is becoming clear that energy poverty is a whole new emerging class of discontentment in a democratic population. In the end, CAGW hysteria may fail simply because people refuse to vote themselves into poverty and misery and bring civilization itself crashing down in order to avert a model-predicted disaster that is, um, not happening the way it was predicted, I mean “projected”, errr, prophesied, hmmm, speaking of weasel words, what IS the right word for making a statement about the future that you are unwilling to bet your professional ass on but are perfectly happy to have used to bet the ass of global civilization on instead?
rgb

August 28, 2013 5:55 am

Ooops, sorry, I put the wrong link in that comment! (feel free to delete it). Too early in the morning. THIS is what I meant to alert you to:
http://scienceblogs.com/gregladen/2013/08/28/can-monckton-put-his-money-where-is-mouth-is/

rgbatduke
August 28, 2013 6:03 am

Sea level change is plotted by most publicizing institutions in an extremely misleading manner via showing only total cumulative gain rather than variation in the rate of change. However, doing the latter leads to a striking pattern in sea level rise, cloud cover, humidity at appropriate altitude, and temperature:
http://s24.postimg.org/rbbws9o85/overview.gif
As can be seen, there is quite a reason that the *derivative* of sea level change is almost never, ever, ever plotted in graphs distributed by those favoring the CAGW movement.

Wow, really interesting. I wonder what Leif would say regarding this – usually he jumps all over arguments linking solar state to secular climate trends, but most of the data above seems linked to magnetic field variation and directly measured or proxy-inferrable variations in cosmic ray flux. Given a proposed, physically plausible causal mechanism, the correlations become something more than numerology and are rather compelling, especially given that they appear on multiple time scales (including geologic time scales). While I wouldn’t say that they “prove” a solar-climate link beyond mere insolation variation or demonstrate that solar influences on climate suffice to completely trump possible GHE variations, they suggest that, if nothing else, GCMs are omitting a possibly important variable. I was already aware of the fact that stratospheric H2O has plummeted in the last five or six years, that so far NASA is not attributing a cause, and that even NASA’s articles on this are pointing out that it should have a -0.5 C or thereabouts effect on climate if sustained, more if further amplified.
Why not break this out and turn it into a top article? A single streaming GIF isn’t the best possible way to present either textual or graphical information, and I hesitate to add the link to my collection of climate-related links because the source link does not have the feel of permanence (where WUWT is archived AFAICT indefinitely).
rgb

August 28, 2013 7:21 am

In the Gap plots for the last 100 months Monckton starts the red line projection approximately 1.5ºC above the Jan 2005 measured value, why is that? Surely if he is really showing the gap between actual temperatures and projections since 2005 the plots should have the same origin.

rgbatduke
August 28, 2013 7:25 am

I think I need a primer on temperature. Is the average temperature the average of the daily Tmax and Tmin, or is it the average of more data points through the day? My recollection is that it is the average Tmax and Tmin, and when compared with the hour or minute frequency data, there is quite a large difference in what the average is, much larger than the fractions of degrees for the one sigma stated earlier. Has the global average daily Tmax gone up with time, or the average Tmin, or both? If all of the warming is on the low end, is it less of a problem than on the high end?
It’s far worse than you can imagine. Here is NASA/GISS’s direct statement on mean surface air temperature and its use of “anomalies” instead of the absolute temperature. Basically, the absolute mean surface air temperature is effectively not computable.
http://data.giss.nasa.gov/gistemp/abs_temp.html
The assertion is made that the “anomaly” — deviation from some sort of local average — is, though, and the further assertion is made that this anomaly can be extrapolated spatially to cover an area far, far larger than the locally sited surface station producing the data from which it is derived. The data from any given station is also full of holes, and these holes are “infilled”, that is, assigned a value based on averages of surrounding stations (to avoid breaking the code). With few exceptions, this extrapolated, infilled, anomaly result is then spatiotemporally averaged and turned into the “anomaly” you see in the surface data sets, usually without any explicit statement about probable error and whether or not the probable error is symmetric/normal or potentially skewed or biased by the computational methodology used.
The station data is itself transformed in various ways before the anomaly is computed; some of the transformations amount to adding or subtracting a constant amount from the baseline and/or anomaly itself (e.g. to correct for the “urban heat island” (UHI) effect that corrupts almost all of the station data because of how the stations are sited). The data transformations used are different for older data, and because they are biased and in some sense completely arbitrary (very difficult to justify on any sort of formal basis) the transformations alone can produce “warming” where it is not visible in the raw data, or (by increasing the presumed UHI) could conceivably produce relative cooling as well.
It is historically interesting to note that data adjustments in basically all of the major surface temperature records have AFAIK invariably increased relative warming from the beginning of the century to the present, never decreased it. A simple hypothesis test applied to this result, assuming as the null hypothesis that an actual statistical error is as likely to be positive (warming) as negative (cooling), produces a p-value that suggests HUMAN bias in the selection of the corrections with over 99% confidence, because the corrections are the equivalent of many coin flips in a row that all turned out heads. It doesn’t prove that the coin is biased, but it does not support the null hypothesis that it is unbiased, and trusting the coin is, in the language of the con, a mug’s game.
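The coin-flip arithmetic in the preceding paragraph is simple to make concrete. Under the null hypothesis that each adjustment is equally likely to warm or cool the record, the probability that all n of them warm it is 0.5^n. A sketch (the count of eight adjustments is purely illustrative, not an audit of any actual record):

```python
def all_heads_p_value(n_adjustments):
    """One-sided p-value: probability that n fair coin flips all land heads."""
    return 0.5 ** n_adjustments

# With, say, 8 independent adjustments that all increased relative warming:
p = all_heads_p_value(8)
print(p)         # 0.00390625
print(p < 0.01)  # True: the unbiased-coin null is rejected at >99% confidence
```

As the comment notes, this rejects the hypothesis that the adjustments are unbiased; it does not by itself say what the bias is.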
The different SAT anomaly data sets primarily differ in the particular set of corrections and the particular algorithms they use for infilling and extrapolating. They also differ in their included data, but of course with a finite data set to select from (especially in the more distant past) there is enormous overlap in their input data and the data sets cannot be considered “independent” in any sense of the word. Nevertheless, the output results from the different algorithms differ significantly — by tenths of a degree or more — every month. I’m a bit of a statistician, and have some knowledge of things like autocorrelation and covariance, and if three different methods purporting to compute the same quantity with (say) 80% overlap in their input data produce a spread of results an average of (say) 0.2 C apart, it is absolutely certain that the standard error in the means of these methods is strictly greater than 0.2 C. Quite a bit greater, in fact. Roughly twice as great (even more if the overlap is greater than 80%).
This same problem plagues the estimation of the number of “independent samples” in Monte Carlo methods based on a Markov Chain of transitions where each succeeding state produced by the chain has a large autocorrelation with the previous state – if you use the raw number of “Monte Carlo” steps as if each step is an independent and identically distributed sample, you will significantly underestimate the probable error. I have a peer reviewed publication in Physical Review, by the way, that proves this – I’m not making it up.
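The autocorrelation point above can be quantified with the standard AR(1) approximation: a series of n values with lag-1 autocorrelation rho carries only about n(1 - rho)/(1 + rho) independent samples, so the naive standard error must be inflated by sqrt((1 + rho)/(1 - rho)). A sketch (rho = 0.8 is an illustrative value, not a measured one):

```python
import math

def effective_sample_size(n, rho):
    """Approximate independent-sample count for an AR(1) series."""
    return n * (1.0 - rho) / (1.0 + rho)

def se_inflation(rho):
    """Factor by which the naive standard error understates the true one."""
    return math.sqrt((1.0 + rho) / (1.0 - rho))

# 200 monthly values with rho = 0.8 behave like ~22 independent samples,
# so the naive standard error is too small by a factor of 3:
print(effective_sample_size(200, 0.8))  # ~22.2
print(se_inflation(0.8))                # ~3.0
```

This is the Monte Carlo / Markov Chain effect described above translated into a single correction factor.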
This is why I brought up the question of error in my comments to Mr. Monckton above — it is a critical omission in the entire discussion of AGW. In figure 1.4 of the leaked AR5 report, for example, a uniform error of 0.1 C is applied to the mean surface data points without discussion or justification. Yet if you go to WFT and plot the actual data from the surface stations, they visibly differ by more than this and are based on overlapping data and partially similar methodology — even the variations in method can hardly be considered to be pulled out of an “independent and identically distributed” hat!
Personally, I think somebody armed with a mouse or spreadsheet just made these error bars up. I think they said to themselves “Hmm, we need error bars or this won’t look like science. The independent coarse grained averages of the surface data have some visible spread, I’ll make up an error that isn’t too big — it might look like we don’t know what we’re doing — and that contains them to make them look reliable. 0.1 C is about right, yeah, that will do it.”
Of course, if I’m wrong I’d love to hear how they arrived at them, because given the overlap in the input data and the visible spread in the result, I think they are underestimating the error by a factor of 2 to 4 (where the 2 generously assumes that there is SOME independence in the computational algorithms used to add to the partial independence in the data).
No matter what transformations and biases that are applied to the surface data at this point, they are pretty much finished. HADCRUT4 is likely to be the last major change that squeezes just a bit more warming out of the very same data, because at this point the surface data is strongly constrained by lower troposphere measurements that are ALREADY diverging somewhat from the surface temperature and by sea surface temperature measurements that do not have UHI type corrections that can be arbitrarily manipulated (although both satellite and buoy data can be arbitrarily manipulated in other ways, it is more difficult to justify any correction that produces systematic time dependent warming, and more obvious that if one is introduced it comes at the direct expense of precision).
The point of including the error is that it adds a substantial range to claims of warming over the thermometric era. Instead of making the bald statement that “temperatures have increased 1 C over 140 years” or the like, one has to make a statement more like “temperatures have increased 1C plus or minus 0.5 C over the last 140 years”. Statements like “X is the hottest year on record” have to be blurred, because e.g. 1930 was well within an uncertainty of 0.5 C of the year X — dozens of years are — so the correct statement is: “We have no idea what the comparative ranking of year X is, but we can say with some confidence that it is the upper third of temperatures in the last 140 years.”
Nobody makes headlines or gets enormous grants from the latter, but the latter is — once one adds in the all important error bars — quite true, depending on what those error bars, honestly and properly computed, really are.
Precisely the same issues arise when trying to compute “anomalies” — the assumption is made that the secular trend of the anomaly can somehow be isolated from the secular trend in the average of the absolute temperature. While this might be approximately true in models with a nearly constant absolute temperature, it requires that the factors that move the average absolute temperature are in some sense orthogonal to the factors that move the anomaly. This is the unwritten assumption in all claims of AGW — that CO_2 is a separable factor that is the primary cause of secular trend in the anomaly but is not a factor in the secular trend in the underlying average temperature and that the two can somehow be independently computed even across very long time bases where many other conditions (including the sparsity and quality of the data) change. True or not — and I very much doubt that it is even crudely true, I think it is just plain wrong — the computation of the separability has to account for independent errors in the two — the error in the underlying secular trend in the average that is NOT due to CO_2 has to ADD to the error in the secular trend in the anomaly (all these silly linear fits to intervals in the surface temperature anomaly). Hence even the error in the anomaly itself properly estimated from a consideration of the independence of the various methods used to compute it and the spread of results is an underestimate of the true error. There is almost certainly a nearly equally large error in the baseline, one that increases into the past. This is evident in the fact that the spread in model computed “average surface temperature” — the baseline to which one must add the anomaly to get a supposed absolute surface temperature as a function of time — itself varies by almost 1 C around 14C. Some places in the literature you will see it is 15 C, for example, not 14 C. Some places it might be given as less than 14C.
We thus have the rather humorous implicit assertion that while we do not know the absolute mean surface temperature of the Earth within 1 C, and while if we added the various anomalies computed in their various ways from overlapping data TO mean temperatures selected within this range we’d end up with an envelope of estimates for the absolute mean surface temperature as a function of time that spans at least 1.2 to 1.5 C, and that if we added the uncertainties in the actual anomalies to THAT the range would get even LARGER, we nevertheless do, indeed, know the mean surface anomaly within (say) the 0.1 C illustrated in AR5 and that anomaly can directly and fairly be compared to the similar anomaly computed in 1930, or 1900, or 1870, without any need to concern ourselves with error when positively asserting catastrophic anthropogenic global warming over the latter half (only) of that interval.
In the end, none of these considerations disprove AGW, or CAGW – only the future will prove or disprove the latter, and it is perfectly plausible that humans have had some effect on the climate, although that effect as a secular trend is essentially impossible to separate from non-CO_2 linked secular trends (that is, the “natural variability” of the climate). They do substantially weaken the claim that the historical climate record itself is evidence for CAGW. As independently discussed elsewhere, the rest of the evidence is the global circulation models, and these models independently fail any sort of reasonable hypothesis test when compared to the last 20 years of data and are almost certainly incorrect. I cannot even tell you HOW they are incorrect (and I doubt anyone else can, for all of the “certainty” that it is really this not that), only that one can pretty reliably reject the null hypothesis “this model is correct and predicts the climate future within the statistical spread of results this model produces when conditions are perturbed”, one model at a time. The average of 30+ such failed models is then a model that, on average, fails as Mr. Monckton illustrates above, although I think he would do just as well to reproduce figure 1.4 from AR5, as it does an admirable job of this all by itself (as do other figures from the body of AR5, e.g. the spread of model results from a selected specific model compared to the actual surface temperatures, however accurate or inaccurate their computation might be).
rgb
[“The different SAT anomaly data sets” … And SAT means? Mod]
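The per-model rejection test rgbatduke describes can be sketched in a few lines. This is only an illustration of the logic, not a real CMIP comparison: the 34 "model trends" below are invented numbers drawn around the 2.33 C/century (0.233 C/decade) figure quoted in the head post, and the "observed" trend is likewise hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: 34 model runs, each yielding a 20-year trend
# in deg C per decade. These numbers are invented, not real CMIP output.
model_trends = rng.normal(loc=0.233, scale=0.05, size=34)
observed_trend = 0.05  # also hypothetical

# Test the null "the observed trend is drawn from the model spread",
# assuming the spread is roughly normal.
mean = model_trends.mean()
sd = model_trends.std(ddof=1)
z = (observed_trend - mean) / sd
reject = abs(z) > 1.96  # ~95% two-sided level

print(f"z = {z:.2f}, reject null: {reject}")
```

If the observed trend sits several standard deviations below the ensemble mean, that particular "model" is rejected; repeating this one model at a time is the procedure the comment has in mind.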

August 28, 2013 7:31 am

Eli understands that there is some betting action to be had on the proposition that "A math geek with a track-record of getting stuff right tells me we are in for 0.5 Cº of global cooling. It could happen in two years, but is very likely by 2020." The offer stands at two bets of $1000 each, from John Abraham to Lord Monckton. Eli is perhaps looking for some smaller side bets on the proposition, and on what the good Lord's reaction will be.

Fergus Mclean
August 28, 2013 7:38 am

What percentage of years in recent centuries has the statement “the last decade was the warmest in the last 1000 years” been true?

beng
August 28, 2013 7:47 am

***
rgbatduke says:
August 28, 2013 at 5:29 am
It is cloudy to partially cloudy almost all the time, not with the high haze of humidity that makes it killer hot, but with honest cumulus and a fair bit of rain.
***
I watch the visibility closely. This summer, like most recent ones, had relatively little haze — except for yesterday! Felt like I was back in the 70s-80s. Rain this morning has washed it out.

Mark Buehner
August 28, 2013 8:50 am

Most of the climate models look great… if they only look back in time from when they were built. Happily, most of the graphs the warmists push to the media include these post-dictions to give the models the credibility they deserve. This makes them seem as though they made accurate predictions for many years before inexplicably falling off. The falling-off just happens to begin with the year of the model's creation. By the way, my model of Super Bowl winner predictions works the same way, if anybody is looking for a system. I established it in 2005, and it accurately predicted the previous 20 winners. Since 2005 it hasn't done very well, but there is probably some innocuous factor out there dampening the results, and it will surely clear up at any time and come roaring back to success.
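The in-sample/out-of-sample trap joked about above is easy to demonstrate with a toy example. Everything here is invented: a trendless noisy series stands in for "the climate", and a deliberately overfitted polynomial stands in for "a model built in 2005" that is tuned to the data available up to its creation date.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy series: a noisy, trendless "temperature record" (invented data).
years = np.arange(1980, 2014)
temps = rng.normal(0.0, 0.1, size=years.size)

# "Build the model" in 2005: fit only the data available up to then,
# with far too many free parameters (deliberate overfitting).
x = (years - 1992) / 10.0  # rescale for numerical stability
train = years < 2005
coef = np.polyfit(x[train], temps[train], deg=9)
pred = np.polyval(coef, x)

in_sample = np.sqrt(np.mean((pred[train] - temps[train]) ** 2))
out_sample = np.sqrt(np.mean((pred[~train] - temps[~train]) ** 2))
print(f"hindcast RMSE: {in_sample:.3f}, forecast RMSE: {out_sample:.3f}")
```

The hindcast fit looks excellent, and the forecast error is far worse: the "falling off" begins exactly at the model's creation date, just as the comment says.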

Matthew R Marler
August 28, 2013 9:20 am

Henry Clark: Sea level change is plotted by most publicizing institutions in an extremely misleading manner, by showing only the total cumulative gain rather than the variation in the rate of change. However, doing the latter leads to a striking pattern in sea level rise, cloud cover, humidity at appropriate altitude, and temperature:
http://s24.postimg.org/rbbws9o85/overview.gif
As can be seen, there is quite a reason that the *derivative* of sea level change is almost never, ever, ever plotted in graphs distributed by those favoring the CAGW movement.

I concur on the importance of estimating derivatives and relating derivatives to potentially relevant factors that alter the rate of change. Murry Salby did this very informatively with derivatives of global mean temperature and mean CO2 concentration. That link is hard to read, and lacks descriptions of what the graphs contain. Do you have a paper? That looks worthwhile.
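The derivative plot the two commenters want is a one-liner once the cumulative series is in hand. The series below is invented (a steady rise with a superimposed oscillation), purely to show the mechanics of turning cumulative sea level into a rate of change.

```python
import numpy as np

# Hypothetical cumulative sea-level series in mm (invented, not real data):
# a steady ~2 mm/yr rise with a slow oscillation superimposed.
years = np.arange(1993, 2013)
t = years - 1993
level = 2.0 * t + np.sin(t / 3.0)

# Rate of change in mm/yr: the derivative that is "almost never plotted".
rate = np.gradient(level, years)
print(rate.round(2))
```

`np.gradient` uses central differences in the interior and one-sided differences at the endpoints, so the rate series has the same length as the input and can be plotted against the same years, which is exactly what is needed to compare it with cloud cover, humidity, or temperature.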