The “ensemble” of models is completely meaningless, statistically

This comment comes from rgbatduke (Robert G. Brown of the Duke University Physics Department) on the “No significant warming for 17 years 4 months” thread. It has gained quite a bit of attention because it speaks clearly to truth. So that all readers can benefit, I’m elevating it to a full post.

rgbatduke says:

June 13, 2013 at 7:20 am

Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!

This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.

Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!

Say what?

This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again. One cannot generate an ensemble of independent and identically distributed models that have different code. One might, possibly, generate a single model that generates an ensemble of predictions by using uniform deviates (random numbers) to seed “noise” (representing uncertainty) in the inputs.

What I’m trying to say is that the variance and mean of the “ensemble” of models are completely meaningless, statistically, because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, and there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate anything unbiased is through the use of e.g. dice or other objectively random instruments).
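
To see the point numerically, consider a toy sketch (all numbers hypothetical): if the model differences really were independent, unbiased noise around the truth, the ensemble mean would home in on the truth as models are added; give the models a shared systematic bias instead, as shared omitted physics would produce, and the mean homes in on the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 0.0     # hypothetical "true" value of the projected quantity
n_models = 30

# Case 1: the assumption implicit in the graph. Model errors are
# independent, unbiased draws around the truth, so the ensemble mean
# converges on the truth and the spread measures real uncertainty.
iid_models = truth + rng.normal(0.0, 0.5, n_models)

# Case 2: the situation described above. The models share a common
# systematic bias (same omitted physics, similar parameterizations),
# so their differences are neither independent nor centered on truth.
shared_bias = 1.0   # hypothetical common bias
biased_models = truth + shared_bias + rng.normal(0.0, 0.5, n_models)

for label, m in [("IID around truth", iid_models),
                 ("shared bias     ", biased_models)]:
    print(f"{label}: ensemble mean = {m.mean():+.2f}, "
          f"std = {m.std(ddof=1):.2f}")

# With a shared bias the ensemble mean converges on truth + bias, and
# adding more models never shrinks that error; the quoted "spread"
# says nothing about the distance from reality.
```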

So why buy into this nonsense by doing linear fits to a function — global temperature — that has never in its entire history been linear, although of course it has always been approximately smooth so one can always do a Taylor series expansion in some sufficiently small interval and get a linear term that — by the nature of Taylor series fits to nonlinear functions — is guaranteed to fail if extrapolated as higher order nonlinear terms kick in and ultimately dominate? Why even pay lip service to the notion that R^2 or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning? It has none.
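
A minimal numerical sketch of that failure mode, using a made-up exponential as the nonlinear function (no claim that temperature follows it): the linear term fits the short window almost perfectly, and the extrapolation still goes badly wrong.

```python
import numpy as np

# Fit a line over a short window of a smooth nonlinear function, here
# exp(0.05 * t), a stand-in chosen for illustration only, then
# extrapolate far outside the window.
t_fit = np.linspace(0.0, 10.0, 50)
y_fit = np.exp(0.05 * t_fit)

slope, intercept = np.polyfit(t_fit, y_fit, 1)   # the "linear term"

# Goodness of fit inside the window is nearly perfect ...
residuals = y_fit - (slope * t_fit + intercept)
r_squared = 1.0 - residuals.var() / y_fit.var()
print(f"R^2 over the fitting window: {r_squared:.4f}")

# ... yet the extrapolation fails as higher-order terms dominate.
t_far = 60.0
print(f"linear extrapolation at t={t_far}: {slope * t_far + intercept:.1f}")
print(f"actual value at t={t_far}:        {np.exp(0.05 * t_far):.1f}")
```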

Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.

Let’s invert this process and actually apply statistical analysis to the distribution of model results, regarding the claim that they all correctly implement well-known physics. For example, if I attempt to do an a priori computation of the quantum structure of, say, a carbon atom, I might begin by solving a single electron model, treating the electron-electron interaction using the probability distribution from the single electron model to generate a spherically symmetric “density” of electrons around the nucleus, and then performing a self-consistent field theory iteration (re-solving the single electron model for the new potential) until it converges. (This is known as the Hartree approximation.)
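
For readers unfamiliar with the procedure, here is a schematic of the self-consistent loop only, with a contrived one-line “solver” standing in for the real single-electron problem; it is not an atomic calculation, just the shape of the iteration.

```python
# Schematic of the self-consistent field loop only, not a real atomic
# calculation: the toy function below is a hypothetical stand-in for
# re-solving the single-electron problem in the potential generated by
# the current electron density.
def solve_single_electron(density):
    return 1.0 / (1.0 + density)   # contrived density -> new density

density = 1.0                      # initial guess
for iteration in range(1, 101):
    new_density = solve_single_electron(density)
    if abs(new_density - density) < 1e-10:       # self-consistent: stop
        break
    density = 0.5 * density + 0.5 * new_density  # damped mixing step

print(f"converged after {iteration} iterations, density = {density:.6f}")
```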

Somebody else could then say, “Wait, this ignores the Pauli exclusion principle and the requirement that the electron wavefunction be fully antisymmetric.” One could then make the (still single electron) model more complicated and construct a Slater determinant to use as a fully antisymmetric representation of the electron wavefunctions, generate the density, and perform the self-consistent field computation to convergence. (This is Hartree-Fock.)

A third party could then note that this still underestimates what is called the “correlation energy” of the system, because treating the electron cloud as a continuous distribution through which electrons move ignores the fact that individual electrons strongly repel and hence do not like to get near one another. Both of the former approaches underestimate the size of the electron hole, and hence they make the atom “too small” and “too tightly bound”. A variety of schemes are proposed to overcome this problem — using a semi-empirical local density functional being probably the most successful.

A fourth party might then observe that the Universe is really relativistic, and that by ignoring relativity theory and doing a non-relativistic computation we introduce an error into all of the above (although it might be included in the semi-empirical LDF approach heuristically).

In the end, one might well have an “ensemble” of models, all of which are based on physics. In fact, the differences are also based on physics — the physics omitted from one try to another, or the means used to approximate and try to include physics we cannot include in a first-principles computation (note how I sneaked a semi-empirical note in with the LDF; although one can derive some density functionals from first principles (e.g. the Thomas-Fermi approximation), they usually don’t do particularly well because they aren’t valid across the full range of densities observed in actual atoms). Note well, doing the precise computation is not an option. We cannot solve the many body atomic state problem in quantum theory exactly any more than we can solve the many body problem exactly in classical theory, or the set of open, nonlinear, coupled, damped, driven, chaotic Navier-Stokes equations in a non-inertial reference frame that represent the climate system.

Note well that solving for the exact, fully correlated nonlinear many electron wavefunction of the humble carbon atom — or the far more complex Uranium atom — is trivially simple (in computational terms) compared to the climate problem. We can’t compute either one, but we can come a damn sight closer to consistently approximating the solution to the former compared to the latter.

So, should we take the mean of the ensemble of “physics based” models for the quantum electronic structure of atomic carbon and treat it as the best prediction of carbon’s quantum structure? Only if we are very stupid or insane or want to sell something. If you read what I said carefully (and you may not have — eyes tend to glaze over when one reviews a year or so of graduate quantum theory applied to electronic structure in a few paragraphs, even though I left out perturbation theory, Feynman diagrams, and ever so much more :-) you will note that I cheated — I snuck in a semi-empirical method.

Which of these is going to be the winner? LDF, of course. Why? Because the parameters are adjusted to give the best fit to the actual empirical spectrum of Carbon. All of the others are going to underestimate the correlation hole, and their errors will be systematically deviant from the correct spectrum. Their mean will be systematically deviant, and by weighting Hartree (the dumbest reasonable “physics based approach”) the same as LDF in the “ensemble” average, you guarantee that the error in this “mean” will be significant.

Suppose one did not know (as, at one time, we did not know) which of the models gave the best result. Suppose that nobody had actually measured the spectrum of Carbon, so its empirical quantum structure was unknown. Would the ensemble mean be reasonable then? Of course not. I presented the models in the way physics itself predicts improvement — adding back details that ought to be important that are omitted in Hartree. One cannot be certain that adding back these details will actually improve things, by the way, because it is always possible that the corrections are not monotonic (and eventually, at higher orders in perturbation theory, they most certainly are not!). Still, nobody would pretend that the average of a theory with an improved theory is “likely” to be better than the improved theory itself, because that would make no sense. Nor would anyone claim that diagrammatic perturbation theory results (for which there is a clear a priori derived justification) are necessarily going to beat semi-heuristic methods like LDF, because in fact they often do not.

What one would do in the real world is measure the spectrum of Carbon, compare it to the predictions of the models, and then hand out the ribbons to the winners! Not the other way around. And since none of the winners is going to be exact — indeed, for decades and decades of work, none of the winners was even particularly close to observed/measured spectra in spite of using supercomputers (admittedly, supercomputers that were slower than your cell phone is today) to do the computations — one would then return to the drawing board and code entry console to try to do better.

Can we apply this sort of thoughtful reasoning to the spaghetti snarl of GCMs and their highly divergent results? You bet we can! First of all, we could stop pretending that “ensemble” mean and variance have any meaning whatsoever by not computing them. Why compute a number that has no meaning? Second, we could take the actual climate record from some “epoch starting point” — one that does not matter in the long run, and we’ll have to continue the comparison for the long run because in any short run from any starting point noise of a variety of sorts will obscure systematic errors — and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.
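
A sketch of that sorting step with entirely synthetic “observations” and “models” (none of the numbers correspond to any real GCM): score each model against reality, keep the top five, bin the rest, and compute no ensemble mean anywhere.

```python
import numpy as np

rng = np.random.default_rng(1)

# Entirely synthetic stand-ins: one observed series and 20 "model"
# series aligned to the same epoch starting point.
n_years = 30
trend = 0.01 * np.arange(n_years)
observed = trend + rng.normal(0.0, 0.05, n_years)
models = (trend * rng.uniform(0.5, 3.0, (20, 1))
          + rng.normal(0.0, 0.05, (20, n_years)))

# Score each model against reality; no ensemble statistics anywhere.
rmse = np.sqrt(((models - observed) ** 2).mean(axis=1))
ranking = np.argsort(rmse)
kept, failed = ranking[:5], ranking[5:]

print("kept models:", kept, "RMSE:", np.round(rmse[kept], 3))
print("failed bin :", sorted(failed.tolist()))
# The failed bin stays archived, excluded from analysis and policy
# unless its members start agreeing with observations again.
```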

Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.

Then comes the hard part. Waiting. The climate is not as simple as a Carbon atom. The latter’s spectrum never changes; it is a fixed target. The former is never the same. Either one’s dynamical model is never the same and mirrors the variation of reality, or one has to conclude that the problem is unsolved and the implementation of the physics is wrong, however “well-known” that physics is. So one has to wait and see if one’s model, adjusted and improved to better fit the past up to the present, actually has any predictive value.

Worst of all, one cannot easily use statistics to determine when or if one’s predictions are failing, because damn, climate is nonlinear, non-Markovian, chaotic, and is apparently influenced in nontrivial ways by a world-sized bucket of competing, occasionally cancelling, poorly understood factors. Soot. Aerosols. GHGs. Clouds. Ice. Decadal oscillations. Defects spun off from the chaotic process that cause global, persistent changes in atmospheric circulation on a local basis (e.g. blocking highs that sit out on the Atlantic for half a year) that have a huge impact on annual or monthly temperatures and rainfall and so on. Orbital factors. Solar factors. Changes in the composition of the troposphere, the stratosphere, the thermosphere. Volcanoes. Land use changes. Algae blooms.

And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because trying to ensemble average a small sample from a chaotic system is so stupid that I cannot begin to describe it. Everything works just fine as long as you average over an interval short enough that you are bound to a given attractor, oscillating away; things look predictable and then — damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.

This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or R^2 derived from an AR5 mean has any meaning. It gives up the high ground (even though one is using it for a good purpose, trying to argue that this “ensemble” fails elementary statistical tests). But statistical testing is a shaky enough theory as it is, open to data dredging and horrendous error alike, and that’s when it really is governed by underlying IID processes (see “Green Jelly Beans Cause Acne”). One cannot naively apply a criterion like rejection if p < 0.05, and all that means under the best of circumstances is that the current observations are improbable given the null hypothesis at 19 to 1. People win and lose bets at this level all the time. One time in 20, in fact. We make a lot of bets!
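
The one-time-in-20 point is easy to check by simulation; a minimal sketch, assuming a simple t-test on Gaussian data where the null hypothesis is true by construction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# The null hypothesis is true by construction (mean really is zero),
# yet a p < 0.05 criterion still "rejects" about one run in twenty.
n_experiments = 10_000
false_rejections = 0
for _ in range(n_experiments):
    sample = rng.normal(0.0, 1.0, 30)
    _, p_value = stats.ttest_1samp(sample, 0.0)
    if p_value < 0.05:
        false_rejections += 1

print(f"false rejection rate: {false_rejections / n_experiments:.3f}")
# prints roughly 0.05, i.e. the 19-to-1 bet lost one time in 20
```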

So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth, and why we aren’t using empirical evidence (as it accumulates) to reject failing models and concentrate on the ones that come closest to working, while also not using the models that are obviously not working in any sort of “average” claim for future warming. Maybe they could hire themselves a Bayesian or two and get them to recompute the AR curves, I dunno.

It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.

Of course if one does this, the GCM predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.

rgb

KitemanSA
June 18, 2013 5:17 pm

Might it be a valid mean of random stupidity?

Ian W
June 18, 2013 5:24 pm

An excellent post – it would be assisted if it had Viscount Monckton’s and Roy Spencer’s graphs displayed with references.

June 18, 2013 5:28 pm

This assertion (wrong GCMs should be ignored, not averaged) is so clearly explained and justified that I am amazed no statistician made the point earlier, say sometime in the last 10 years, as the climate change hysteria became so detached from reality that all bad weather is now blamed on climate change.

OK S.
June 18, 2013 5:34 pm

The Bishop has something to say regarding this comment over at his place:
http://bishophill.squarespace.com/blog/2013/6/14/on-the-meaning-of-ensemble-means.html

PaulH
June 18, 2013 5:36 pm

The ensemble average of a Messerschmitt is still a Messerschmitt. :->

June 18, 2013 5:37 pm

What I’m trying to say is that the variance and mean of the “ensemble” of models is completely meaningless, statistically
Indeed. At best, the outputs of climate models are the opinions of climate modellers numerically quantified.
As such, I’d argue the variance is direct evidence that the claimed consensus is weak.

mark
June 18, 2013 5:43 pm

damn.
just damn.

June 18, 2013 5:47 pm

rgb: “a small group of scientists and politicians”
It’s not a small group. It’s a large group.
Among US scientists, it’s the entire official institutional hierarchy, from the NAS, through the APS, the ACS to the AGU and the AMS. Among politicians, it’s virtually the entire set of Democratic electees, and probably a fair fraction of the Republican set, too.
And let’s not forget the individual scientists who have lied consistently for years. None of this would be happening without their conscious elevation of environmental ideology over scientific integrity. Further, none of this would be happening if the APS, etc., actually did due diligence on climate science claims before endorsing them. The APS analysis, in particular, is pathetic to the point of incompetence.
And all of this has been facilitated by a press that has looked to their political prejudices to decide which group is telling the truth about climate. The press has overlooked and forgiven obvious shenanigans of climate scientists (e.g., Climategate I&II, back to 1400 CENSORED, the obvious pseudo-investigatory whitewashes, etc.) the way believers hold fast to belief despite the grotesqueries of their reverends. It’s been a large-scale failure all around; a worse abuse of science has never occurred, nor a worse failure by the press.

Admin
June 18, 2013 5:49 pm

Let’s face it: Ensemble Means were brought to us by the same idiots who thought multi-proxy averaging was a legitimate way to reduce the uncertainty of temporally uncertain temperature proxies.

June 18, 2013 5:53 pm

By the way, my 2008 Skeptic article provides an analysis of GCM systematic error, and shows that their projections are physically meaningless.
I’ve updated that analysis to the CMIP5 models, and have written up a manuscript for publication. The CMIP5 set are no better than the AMIP1 set. They are predictively useless.

tz2026
June 18, 2013 5:56 pm

Well put. In great detail too.

k scott denison
June 18, 2013 5:58 pm

Brilliant, thank you. Can’t wait to see the defenders of the faith stop by to tell us, once again, “but, but, but they’re the best we have!!!” Mosher comes to mind.
That the best we have are all no good never seems to cross some people’s minds. Dr. Brown, the simplicity of your advice to ask the key questions about the models is greatly appreciated.

MaxL
June 18, 2013 6:00 pm

I have been doing operational weather forecasting for several decades. The weather models are certainly a mainstay of our business. We generally look at several different models to gain a feel for what may occur. These include the Canadian, American and European models. They all have slightly differing physics and numerical methods. All too often the models show quite different scenarios, especially after about 48 hours. So what does one do? I have found through the years that taking the mean (i.e. the ensemble mean of different models) very seldom results in the correct forecast. It is usually the case that one of the models produces the best result. But which one is the trick. And you never know beforehand. So you choose what you think is the most reasonable model forecast, bearing in mind what could happen given the other model output. And just because one model was superior in one case does not mean it will be the best in the next case.

Eeyore Rifkin
June 18, 2013 6:01 pm

Philip Bradley says:
“At best, the outputs of climate models are the opinions of climate modellers numerically quantified.”
Agreed, but I don’t believe that’s meaningless, statistically or otherwise.
“As such, I’d argue the variance, is direct evidence the claimed consensus is weak.”
I think the magnitude of the variance depends on the scale one uses. Pull back far enough and it looks like a strong “consensus” to exaggerate.

Greg L.
June 18, 2013 6:02 pm

I have mostly stayed out of the fray, as most of the arguing over runaway anthropogenic global warming has for a good bit of time looked to me as far more religious than scientific on all sides. Having said that, and as a professional statistician (who possesses graduate degrees in both statistics and meteorology), I finally have seen a discussion worth wading into.
The post given here makes good sense, but I want to add a caution to the interpretation of it. Saying that making a judgement about an ensemble (i.e., a collection of forecasts from a set of models and their dispersion statistics) has no scientific/statistical validity does not mean that such a collection has no forecast utility. Rather, it means that one cannot make a statement about the validity of any individual model contained within the set based upon the performance of the ensemble statistics versus some reference verification. And this is exactly the point. We are a long way from the scientific method here – the idea that an experimental hypothesis can be verified/falsified/replicated through controlled experiments. We are not going to be able to do that with most integrated atmospheric phenomena, as there simply is no collection of parallel earths available upon which to try different experiments. Not only that, but the most basic forms of the equations that (we think) govern atmospheric behavior are at best unsolvable, and in a number of cases unproven. Has anyone seen a proof of the full Navier-Stokes equations? Are even some of the simplest cases of these equations solvable? (See, for example, the solution to the simplest possible convection problem in Kerry Emanuel’s Atmospheric Convection text – it is an eighth-order differential equation with a transcendental solution.) And yet we see much discussion on proving or validating GCMs – which have at best crude approximations to many governing equations, do not include all feedbacks (and may even have the sign wrong on some that they do include), and are attempting to model a system that is extremely nonlinear …
Given this, I actually don’t think the statement in this post goes far enough. Even reducing the set of models to the 10% or so that have the least error does not tell one anything. We cannot even make a statement about a model that correlates 99% with reality as we do not know if it has gotten things “right” for the right reasons. Is such a model more likely to be right? Probably. But is it? Who knows. And anyone who has ever tried to fit a complicated model to reality and watch the out-of-sample observations fail knows quickly just how bad selection bias can be. For example, the field of finance and forecasting financial markets is saturated with such failures – and such failures involve a system far less complicated than the atmosphere/ocean system.
On the flip side – this post does not invalidate using ensemble forecasts for the sake of increasing forecast utility. An ensemble forecast can improve forecast accuracy provided the following assumptions hold – namely, that the distribution of results is bounded, the errors of the members are not systematically biased, and the forecast errors of the members are at least somewhat uncorrelated. Such requirements do not mean whatsoever that the member models use the same physical assumptions and simplifications. But once again – this is a forecast issue – not a question of validation of the individual members. And moreover, in the case of GCMs within an ensemble, the presence of systematic bias is likely – if for no other reason than the unfortunate effects of publication bias, research funding survivorship (e.g., those who show more extreme results credibly may tend to get funding more easily), and the unconscious tendency of humans who fit models with way too many parameters to make judgement calls that cause the model results to look like what “they should be”.
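
A quick numerical illustration of the conditions Greg L. lists (toy data, not climate output): when member errors are unbiased and uncorrelated, averaging shrinks the error by roughly the square root of the ensemble size; when the members share a common error, averaging removes almost nothing.

```python
import numpy as np

rng = np.random.default_rng(3)
truth = np.zeros(100)      # toy "reality" to forecast
n_members = 10

def ensemble_error(members):
    # mean absolute error of the ensemble-mean forecast
    return np.abs(members.mean(axis=0) - truth).mean()

# Unbiased, uncorrelated member errors: averaging helps, roughly by
# a factor of sqrt(n_members).
uncorrelated = truth + rng.normal(0.0, 1.0, (n_members, 100))

# A shared (fully correlated) error component: averaging removes
# almost nothing.
shared = rng.normal(0.0, 1.0, 100)
correlated = truth + shared + rng.normal(0.0, 0.1, (n_members, 100))

print("uncorrelated errors, ensemble MAE:",
      round(ensemble_error(uncorrelated), 2))
print("correlated errors,   ensemble MAE:",
      round(ensemble_error(correlated), 2))
```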

Chuck Nolan
June 18, 2013 6:02 pm

I believe you’re correct.
I’m not smart enough to know if what you are saying is true, but I like your logic.
Posting this on WUWT tells me you are not afraid of critique.
Everyone knows nobody gets away with bad science or math here.
My guess is the bad models are kept because it’s taxpayer money and there is no need for stewardship so they just keep giving them the money.
cn

Abe
June 18, 2013 6:04 pm

WINNER!!!!!
The vast majority of what you said went WAY over my head, but I totally agree with the notion that averaging models for stats, as if they were actual data, is totally wrong. I think looking at it in that light says a lot about the many climate alarmists who continue to use their model outputs as if they were actual collected data and ignore or dismiss real empirical data.

June 18, 2013 6:14 pm

All the climate models were wrong. Every one of them.
You cannot average a lot of wrong models together and get a correct answer.

June 18, 2013 6:18 pm

Eeyore Rifkin says:
June 18, 2013 at 6:01 pm

I agree with both your points.
I was agreeing with rgb’s statements in relation to the actual climate. Whereas my points related to the psychology/sociology of climate scientists, where the model outputs can be considered data for statistical purposes. And you may well be right that those outputs are evidence of collective exaggeration, or a culture of exaggeration.

June 18, 2013 6:22 pm

Can someone send enough money to RGB to get him to do the 10 minutes of work, and the extra work to publish a model scorecard and ranking for all to see, like in golf or tennis? At the BH blog someone pointed out that some models are good for temperature, others for precipitation. So there could be a couple of ranking lists. But keep it simple.

Nick Stokes
June 18, 2013 6:22 pm


This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.
Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!
Say what?
This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again. One cannot generate an ensemble of independent and identically distributed models that have different code. One might, possibly, generate a single model that generates an ensemble of predictions by using uniform deviates (random numbers) to seed “noise” (representing uncertainty) in the inputs.
What I’m trying to say is that the variance and mean of the “ensemble” of models is completely meaningless, statistically because the inputs do not possess the most basic properties required for a meaningful interpretation.”

As I said on the other thread, what is lacking here is a proper reference. Who does this? Where? “Whoever it was that assembled the graph” is actually Lord Monckton. But I don’t think even that graph has most of these sins, and certainly the AR5 graph cited with it does not.
Where in the AR5 do they make use of ‘the variance and mean of the “ensemble” of models’?

Editor
June 18, 2013 6:24 pm

The idiocy of averaging “the terrible with the merely poor.” Nice.

edcaryl
June 18, 2013 6:28 pm

Averaging climate models is analogous to averaging religions, with about the same validity.

Pamela Gray
June 18, 2013 6:30 pm

That was like eating a steak. Every bite was meaty!

Mark Bofill
June 18, 2013 6:32 pm

~applause~
Very well said, Dr. Brown!

Bill Illis
June 18, 2013 6:34 pm

Here are the 23 models used in the IPCC AR4 report versus Hadcrut4 – 1900 to 2100 – Scenario A1B, the track we are on.
This is the average of each model although the majority will have up to 3 different runs.
In the hindcast period, 1900-2005, they are closer to each other and the actual temperature record but as we go out into the future forecast, there is a wide divergence.
Technically, only one model is lower than Hadcrut4 at the current time. The highest sensitivity model is now 0.65C higher than Hadcrut4, only 7 years after submitting their forecast.
Spaghetti par excellence.
http://s13.postimg.org/mpq4zxwg7/IPCC_AR4_Model_Spread_Hadcrut4.png

June 18, 2013 6:35 pm

An interesting comment…
http://www.thegwpf.org/ross-mckitrick-climate-models-fail-reality-test/
Perhaps the problem is that the models should not be averaged together, but should be examined one by one and then in every possible combination, with and without the socioeconomic data, in case some model somewhere has some explanatory power under just the right testing scenario. That is what another coauthor and I looked at in the recently completed study I mentioned above. It will be published shortly in a high-quality climatology journal, and I will be writing about our findings in more detail. There will be no surprises for those who have followed the discussion to this point.
Ross McKitrick is a professor of economics at the University of Guelph, a member of the GWPF’s Academic Advisory Council and an expert reviewer for the Intergovernmental Panel on Climate Change. Citations available at rossmckitrick.com.

I have seen other comments from Ross that echo your concern, but they may not have been published. He can speak for himself.

Editor
June 18, 2013 6:35 pm

Nick Stokes asks which scientists are talking about ensemble means. AR5 is packed to the gills with these references. Searching a few chapters for “ensemble mean”, I find that chapter 10 on attribution has eleven references, chapter 11 on near-term projections has 42, and chapter 12 on long-term projections has fifteen.

TRBixler
June 18, 2013 6:37 pm

So the smartest guy in the room, Obama, says we need to reduce our carbon footprint based on meaningless climate models. Stop the pipelines; kill the coal. Let energy darkness reign over the free world. Where were the academics on these subjects? Waiting for Anthony Watts, it seems.

just some guy
June 18, 2013 6:38 pm

Oh my goodness, most of that post went way over my head. Except for the part about the spaghetti graph looking like the end of a frayed rope, and what that says about the accuracy of “climate science”. That is something even I can understand. 😀

June 18, 2013 6:41 pm

Brilliant. Cutting through the heavy Stats, the respect warmists pay to their own output is simply another way of saying “Given that we’re right…”

RockyRoad
June 18, 2013 6:49 pm

Looks like Nick Stokes has never read AR5. Looks like he shoots from the hip. I wonder why he comments.

Niff
June 18, 2013 6:59 pm

What I find interesting is that so many models (of the same thing) saying so many different things all get funded. If they are ALL so far from measured reality, why is the funding continuing? It is easier to set this stuff up than to admit it’s all nonsense and courageously dismantle it. Does anyone see ANY signs of that courage anywhere?

June 18, 2013 7:00 pm

Alec Rawls says: June 18, 2013 at 6:35 pm
“Nick Stokes asks what scientists are talking about ensemble means. AR5 is packed to gills with these references.”

But are they means of a controlled collection of runs from the same program? That’s different. I’m just asking for something really basic here. What are we talking about? Context? Where? Who? What did they say?

Tsk Tsk
June 18, 2013 7:01 pm

Brown raises a potentially valid point about the statistical analysis of the ensemble, but his carbon atom comparison risks venturing into strawman territory. If he’s claiming that much of the variance amongst the models is driven by the actual sophistication of the physics that each incorporates, then he should provide a bit more evidence to support that conclusion. He could be right, but this is really just a he said/she said argument at this point. Granted, this was just a comment and not meant to be a position paper, but the weakness should be understood.
“…and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.”
“One cannot naively apply a criterion like rejection if p < 0.05, and all that means under the best of circumstances is that the current observations are improbable given the null hypothesis at 19 to 1. People win and lose bets at this level all the time. One time in 20, in fact. We make a lot of bets!"

I presume Dr. Brown is content with the statistical methods used at CERN for the Higgs. Is it simply the case that a p-value of 0.05 is too pedestrian for us? Must it be 1e-5, or 1e-7 to be meaningful? We’re going to be waiting an awfully long time to invalidate even the worst of the models at that threshold (and we’ll have a lot fewer new medical treatments in the meantime…) So we shouldn’t use statistical tests on the validity of the models, but we should pick the “best” models from the set and continue using them. Precisely how do we determine just which of the models is the best? The outputs of the models aren’t just a single point or line. The results of each model are themselves the means of the runs. If we don’t use the modeled mean for each model, then what do we use? I agree with some of the points of the post but this one is just bizarre.
Me? I’m happy to say that the ensemble fails in a statistically significant way. There’s really not much that the CAGW crowd has to respond with other than Chicken Little it’s-in-the-pipeline campfire tales.

OssQss
June 18, 2013 7:05 pm

Is it really about climate … or is it really the politics and funding of such via ideology?
Some folks are figuring it out.
An example of such change for your interpretation.
Just sayin’: I was a bit taken aback when I saw this video . . .
Change You Can Believe In ?

Chad Wozniak
June 18, 2013 7:05 pm

More proof that models are crap, even the most “honestly” attempted ones. And I’d go with cutting out the “10 percent best” along with the rest, as Greg L says – they’re all founded on incomplete or fudged data and bad assumptions.
Since the alarmies rely entirely on their models, we need to get the focus off the models and confine it to the empirical data. Forget models – with empirical data we should be able to kick the alarmies in their sphincters, methinks.

VACornell
June 18, 2013 7:08 pm

There is a study, just now, of twenty models, that says they are getting close. Have you seen it? Are we to be lucky enough…?
Sent from my iPad Vern Cornell

george e. smith
June 18, 2013 7:08 pm

“…although of course it has always been approximately smooth so one can always do a Taylor series expansion in some sufficiently small interval and get a linear term that — by the nature of Taylor series fits to nonlinear functions — is guaranteed to fail if extrapolated as higher order nonlinear terms kick in and ultimately dominate?…”
I love this message: – A linear fit to a non-linear function, will fail when higher order terms kick in.
Of course, that statement is also true in reverse. A non-linear function can always look linear over some restricted range.
In particular, the three functions y = ln(1+x); y = x; and y = e^x − 1 track each other very well for x small compared to 1.
The best longest atmospheric CO2 record, from Mauna Loa, since the IGY of 1957/58 related to the lower troposphere or global surface Temperature record, cannot distinguish between those three mathematical models, sufficiently to say any one of the three is better than another.
Yet some folks keep on insisting, that the first one:- y = ln(1+x) is correct.
Maybe so; but is T x, or is it y?
But a great call on the rgb elevation to the peerage, Anthony.
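
A quick numerical check of the point above (plain arithmetic, nothing assumed beyond the three functions named):

```python
import numpy as np

# The three candidate shapes are indistinguishable for x small
# compared to 1 and only separate as x grows.
for x in (0.01, 0.1, 0.5, 1.0):
    print(f"x={x:<4}: ln(1+x)={np.log1p(x):.4f}  "
          f"x={x:.4f}  e^x-1={np.expm1(x):.4f}")
```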

Max
June 18, 2013 7:20 pm

VERY well said.

pottereaton
June 18, 2013 7:23 pm

Dr. Brown wrote: “. . . it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.”
Re the “frayed end of a rope:” while RGB is talking about model projections, this works even better as a general metaphor for the whole of climate science.
It could also be a metaphor for what IPCC-sanctioned scientists have done to the scientific method, which has, like a rope, been frayed at times down through history, but has never become completely unraveled.
Is climate science at the end of its rope?

thingodonta
June 18, 2013 7:25 pm

The spaghetti graph models are just a façade to persuade the masses that the scientists supposedly recognise variability and uncertainty in the climate. They really would much rather just draw a straight curve upward, but they know they can’t. You could get the same set of spaghetti graphs attached to any Soviet era 5 year plan. But any model which doesn’t conform to the underlying assumption of high climate sensitivity to CO2, or the enormous benefits coming from depriving kulaks of the ownership of their land, is routinely filtered out to begin with. The output is checked to make sure it conforms to the party line. Which I guess is the same thing as saying all the above.

June 18, 2013 7:26 pm

Here below is a copy of an email I recently sent to the Head of the Met Office, which makes the same point as Robert Brown re model studies but, I think with all due modesty, more simply for general consumption. In view of Obama’s stated intention to force the US to adopt emission control measures in the near future and follow the appalling policies of the British, the realist community urgently needs to devise some means of forcing the administration to immediately face up to the total collapse of the science behind the CAGW meme which is now taking place.
“Dear Professor Belcher
There has been no net warming since 1997, with CO2 up over 8%. The warming trend peaked in about 2003 and the earth has been cooling slightly for the last 10 years. This cooling will last for at least 20 years and perhaps for hundreds of years beyond that. The Met Office and IPCC climate models and all the impact studies depending on them are totally useless because they are incorrectly structured. The models are founded on two irrationally absurd assumptions. First, that CO2 is the main driver – when CO2 follows temperature. The cause does not follow the effect. Second, piling stupidity on irrationality, the models add the water vapour as a feedback to the CO2 in order to get a climate sensitivity of about 3 degrees. Water vapour follows temperature independently of CO2 and is the main GHG.
Furthermore, apart from the specific problems in the Met-IPCC models, models are inherently useless for predicting temperatures because of the difficulty of setting the initial parameters with sufficient precision. Why you think you can iterate more than a couple of weeks ahead is beyond my comprehension. After all, you gave up on seasonal forecasts.
For a discussion of the right way to approach forecasting see
http://climatesense-norpag.blogspot.com/2013/05/climate-forecasting-basics-for-britains.html
and several other pertinent posts also on http://climatesense-norpag.blogspot.com.
Here is a summary of the conclusions.
“It is not a great stretch of the imagination to propose that the 20th century warming peaked in about 2003 and that that peak was a peak in both the 60 year and 1000 year cycles. On that basis the conclusions of the post referred to above were as follows.
1 Significant temperature drop at about 2016-17
2 Possible unusual cold snap 2021-22
3 Built in cooling trend until at least 2024
4 Temperature Hadsst3 moving average anomaly 2035 – 0.15
5 Temperature Hadsst3 moving average anomaly 2100 – 0.5
6 General Conclusion – by 2100 all the 20th century temperature rise will have been reversed.
7 By 2650 earth could possibly be back to the depths of the little ice age.
8 The effect of increasing CO2 emissions will be minor but beneficial – they may slightly ameliorate the forecast cooling and help maintain crop yields .
9 Warning !! There are some signs in the Livingston and Penn solar data that a sudden drop to the Maunder Minimum Little Ice Age temperatures could be imminent – with a much more rapid and economically disruptive cooling than that forecast above, which may turn out to be a best case scenario.
For a discussion of the effects of cooling on future weather patterns see the 30 year Climate Forecast 2 Year update at
http://climatesense-norpag.blogspot.com/2012/07/30-year-climate-forecast-2-year-update.html
How confident should one be in these above predictions? The pattern method doesn’t lend itself easily to statistical measures. However, statistical calculations only provide an apparent rigour for the uninitiated, and in relation to the climate models they are entirely misleading because they make no allowance for the structural uncertainties in the model set up. This is where scientific judgement comes in – some people are better at pattern recognition than others. A past record of successful forecasting is a useful but not infallible measure. In this case I am reasonably sure – say 65/35 – for about 20 years ahead. Beyond that, inevitably, certainty drops.”
It is way past time for someone in the British scientific establishment to forthrightly say to the government that the whole CO2 scare is based on a mass delusion and try to stop Britain’s lunatic efforts to control climate by installing windmills.
As an expat Brit I watch with fascinated horror as y’all head lemming like over a cliff. I would be very happy to consult for the Met on this matter- you certainly need to hear a forthright skeptic presentation to reconnect with reality.

Jeef
June 18, 2013 7:32 pm

That. Is. Brilliant.
Thank you.

June 18, 2013 7:36 pm

Here is my way of saying the same thing;
Each of the models contains somewhat different physics. The basic physical laws that the models try to capture are the same, but the way the models try to incorporate those laws differs. This is because the climate system is hugely more complicated than the largest computers can capture, so some phenomena have to be parameterized. These result in the different computed climatic reactions to the “forcing” of the climate by humanity and the natural drivers that are incorporated in the forecasts. When you look at the graph it is clear that some models do quite a bit better than the bulk of the ensemble. In fact, several models cannot be distinguished from reality by the usual statistical criteria.
You know that what is done in science is to throw out the models that don’t fit the data and keep for now the ones that seem to be consistent with the observational data. In this way you can learn why it is that some models do better than others and make progress in understanding the dynamics of the system. This is research.
But this is not what is done. What you read in IPCC reports is that all the models are lumped into a statistical ensemble, as if each model were trying to “measure” the same thing, and all variations are a kind of noise, not different physics. This generates the solid black line as a mean of the ensemble and a large envelope of uncertainty. The climate sensitivity and its range of uncertainty contained in the reports are obtained in this way. This enables all of the models to keep their status. But ultimately as the trend continues, it becomes obvious that the ensemble and its envelope are emerging out of the observational “signal.” This is where we are now.

SAMURAI
June 18, 2013 7:42 pm

Fundamentally, CAGW’s purpose isn’t to explain phenomena, it’s to scare taxpayers.
It’s now painfully obvious that the runaway positive feedback loops baked into the climate models to create the scary Warmaggedon death spiral are bogus.
We’re now into the 18th year of no statistically significant warming trend–despite 1/3rd of all manmade CO2 emissions since 1750 being made over the last 18 years–which is a statistically significant time period to say with over 90% confidence that CAGW theory is disconfirmed.
Taxpayer-funded climatologists that advocate CAGW theory should be ecstatic that CO2’s climate sensitivity is likely to be around 1C or lower, instead they make the absurd claim, “it’s worse than we thought.” Yeah….right…
To keep the CAGW hoax alive, CAGW zealots: came up with HADCRUT4, continue to use invalidated high model projections to make the line go up faster and higher, “adjust” previous temperature databases down and current temperature databases up to maximize temperature anomalies, blame any one-off weather event (including cold events) on CAGW, push that it’s warmER and conveniently forget it’s not warmING, and push CO2-induced ocean “acidification” instead of non-existent warming.
It’s the beginning of the end for this hoax. Economic hardship (partially attributable to $trillions wasted on CAGW rules, regulation, mandates, taxes, alternative energy subsidies/projects and grants) and the growing discrepancy between projections vs empirical data will eventually lead to CAGW’s demise.
Time heals all; including stupidity. The question is whether politicians, taxpayers and scientists will learn from history or be doomed to repeat it.

Janice Moore
June 18, 2013 7:48 pm

OssQss (what in the world does your name mean, anyway?)
THAT WAS MAGNIFICENT.
And deeply moving.
Through that 4×6 inch video window, we are looking at the future.
And it looks bright!
Thanks, so much, for sharing the big picture. In a highly technical post such as this, that was refreshing and, I daresay, needed (if for just a moment).
GO, ELBERT LEE GUILLORY!

Col A (Aus)
June 18, 2013 7:55 pm

It would appear from above that using the averages are about as correct as a concensous?
No, I mean the correct averages are a consensous of the mean deviations?
NO, NO, I mean my mean ment to be averaged but got concensoured!!
OR was that my concensous was ment to mean my averagers that I can not apply!!!
Bugger, where is Al Gore, Tim Flim Flam or Mr Mann when you really need them???? 🙂 🙂

Jordan J. Phillips
June 18, 2013 8:01 pm

I would have never imagined that electronic structure calculations would be discussed in a thread, but here it is!

OssQss
June 18, 2013 8:03 pm

Janice Moore says:
June 18, 2013 at 7:48 pm
OssQss (what in the world does your name mean, anyway?)
Well, I am always honest with all that I do. Sooooooo>
That handle is a direct result of trying several email addresses in the early 90’s, and being unsuccessful (with AOL), and basically flailing the keyboard, and well, there ya have it.
No real acronym to it, but I can think of some, but not many 🙂

Rob Ricket
June 18, 2013 8:03 pm

What a brilliant application of scientific logic in exposing the futility of attempting to prognosticate the future with inadequate tools. It takes a measure of moral courage to expose fellow academics as morally bankrupt infants bumbling about in a dank universe of deception. Bravo!
On another note: Pat Frank, are you the Pat Frank of Alas, Babylon fame? It was a high school favorite.

Janice Moore
June 18, 2013 8:07 pm

Seven kindergarteners are sitting in a circle in the Humans Control the Climate School.
Teacher says, “Let’s vote on what color to paint our classroom!”
“Okay!”
“Everybody turn around. Take your brushes and choose one color from you paint sets beside you to paint on your piece of paper. We’ll see which color is the most popular.”
[7 chubby hands firmly grip brushes…. 7 furrowed brows choose a color….. 7 paint brushes going busily…..]
“Okay. Turn around. Hold up your papers to show how you voted.”
[Seven pieces of paper held aloft sporting respectively: Green, Red, Yellow, Blue, Violet, Orange, and Magenta]
Teacher: Well, isn’t that something. Hm. I don’t know what to do. Hm. [brightens] Oh, I know! We’ll just take ALL those colors and mix them together!
New Paint Color: “Kindergarten Ensemble” — looks great, huh?
(Parent muttering to child as they leave classroom together at the end of the day: Katie, why did your teacher paint your room the color of Gerber’s baby food prunes?)

Ben
June 18, 2013 8:15 pm

Or perhaps it looks more like the frayed end of a rug? The frayed end of a rope still tends to have an average that may approximate the center of the rope. But the end of a frayed rug is so broad and the fraying can be so random, it seems less likely that a center point could be approximated.
Or perhaps it looks like someone randomly threw down multiple frayed ropes, each with their own error range, which could be the frayed ends. Since there is no order applied to the multiple frayed ropes that are “dropped” in place, there is no expectation that there could be a reasonable center-point that has any meaning.
Bottom line… very well written. A much needed contribution to the discussion of the irrelevance of unsupportable climate models.

Caleb
June 18, 2013 8:16 pm

Wonderfully liberating! (As Truth tends to be, if you can face it.)
I laughed and greatly enjoyed how Mr. Brown brought in the “butterfly effect.” It truly does frustrate us, in the arts as well as the sciences, and led the poet Burns to conclude that the best laid plans of mice and men often go awry.
However do not squash the butterfly. It is due to the butterfly effect, and the fact that human beings are chaotic systems, that a loser, who has always been a loser, and who everyone knows is a loser, and everyone predicts will always be a loser, and predicts will certainly come to a bad end, baffles everyone including himself, because he wins.

June 18, 2013 8:18 pm

Dr. Brown, that was simply superb science. Challenging beyond what I think I can even recognize. But summed up well at many points within.
Thinking about it, sitting back and re-reading chunks of it, I continue to be assailed by a rather stunningly simple question…..
When was it, exactly, that we, H. sapiens sapiens, the wise, wise one, abandoned reason?
We seem to have only had it for such a short time………

Janice Moore
June 18, 2013 8:18 pm

OssQss, thanks for answering my impertinent question. LOL.
Hey, wasn’t that neat how the guy who posted just above my kindergartener analogy talked about “morally bankrupt infants bumbling about”?!
Infantile science.
Re: possible acronym meanings…… How about Outstanding, Super-Smart, Quality, Super Scientist?

Rob Ricket
June 18, 2013 8:19 pm

The proof was there all along my friends. Our error lies in attempting to compare the mean of the models with empirical data when the deviation between the models proves that most must be wrong. Step into the light, it feels good.

Janice Moore
June 18, 2013 8:23 pm

“… perhaps it looks more like the frayed end of a rug… .” [Ben at 8:15PM today]
Nice insight!
How about the frayed edge of the sleeve of “The Emperor’s New Clothes” — they were never really anything but fantasy science all along… .

OssQss
June 18, 2013 8:27 pm

Caleb says:
June 18, 2013 at 8:16 pm
Wonderfully liberating! (As Truth tends to be, if you can face it.)
I laughed and greatly enjoyed how Mr. Brown brought in the “butterfly effect.” It truly does frustrate us, in the arts as well as the sciences, and led the poet Burns to conclude that the best laid plans of mice and men often go awry.
However do not squash the butterfly. It is due to the butterfly effect, and the fact that human beings are chaotic systems, that a loser, who has always been a loser, and who everyone knows is a loser, and everyone predicts will always be a loser, and predicts will certainly come to a bad end, baffles everyone including himself, because he wins.
=======================================================================
Butterfly effect 🙂
Thanks, that brought back memories !

Roberto
June 18, 2013 8:28 pm

Well said, sir. I heartily concur. But there is one difference. In the world of physics, changing directions is often a matter of re-writing one board full of equations. The world of modelling has a bit more mass to redirect. And if the right class of adjustments weren’t designed in from the start, there can be a WHOLE lot more of that inertia.
There are some modelling problems you can solve with more speed, but this doesn’t appear to be one of them. In that case, a programmer can be working very, very hard and not really accomplishing what needs to be accomplished. And I’m sorry, guys, but your customer isn’t actually going to care how hard you worked, if the system doesn’t do its job.
And yes, if the modelers are wondering, I do have some idea how much work a billion-dollar system can take. I work there.

June 18, 2013 8:50 pm

This post covers a couple of different questions in a somewhat entangled fashion that for clarity should be treated separately.
The first question is: for how long and by how much does a single model have to diverge from reality in order to be rejected. All such criteria are arbitrary and conventional; p < 0.05 is not a particularly strong one, but it can be considered a sufficient motivation to look for a better model.
The second question: Is it, or is it not, meaningful to calculate an average and standard deviation for all of the models? I certainly agree that averaging model projections does not have the same rationale as averaging repeat measurements, which is a reasonable approach for reducing and controlling experimental error. The average and spread calculated from the models do not give any superior approximation to reality. Instead, they simply describe the general trend and the extent of disagreement between the models.
Finally, the question is raised to what extent the models are based on physics, and it is suggested that their progressive divergence in time should not be observed if they indeed were physics-based.
I agree that the models are not sufficiently based on physics, simply because our understanding of the physical principles that govern the climate system is too incomplete.
However, I do not agree that we can conclude this from the divergence of the projections alone. If we use the analogy introduced by the author, namely physical models of electron orbitals in atoms, would we not expect increasing divergence as we progress from small and simple atoms to larger ones? As more electrons are added to an atom, the effects of neglected or differently approximated terms will compound one another. Similarly, different approximations in climate models will drive increasing divergence through successive iterations of model states.
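
A toy illustration of that compounding, with a chaotic logistic map standing in for nothing in particular: two copies of the same nonlinear iteration whose parameter differs by a tiny “approximation error” separate to order one within a few dozen steps.

```python
# Two copies of the same chaotic map whose parameter differs by a tiny
# hypothetical "approximation error". Successive iterations compound
# the difference, regardless of which copy, if either, is "right".
r_a, r_b = 3.9, 3.9001       # contrived difference between "models"
x_a = x_b = 0.5
for step in range(1, 61):
    x_a = r_a * x_a * (1.0 - x_a)
    x_b = r_b * x_b * (1.0 - x_b)
    if step % 20 == 0:
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.5f}")
# The separation grows from ~1e-5 toward order one as the iterations
# compound the initially tiny disagreement.
```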

Barry Cullen
June 18, 2013 9:02 pm

“…taken aback when I saw this video…” You really don’t understand???
Guillory speaks 100% truth!
I’m 70 and haven’t seen much economic change in the black community because they continue to be enslaved by big gov’t handouts. (The RINO’s do little better.)
Wake-up, that is exactly what big gov’t wants to do to the rest of us w/ this CO2/AGW scam.
And RGB; absolutely brilliant!
BC

June 18, 2013 9:16 pm

This brings me back down memory lane to about 5-6 years ago when I first started looking into AGW. (IPCC v4 had just been released, I remember.) I had just read a book that I had bought used, and was intrigued to see whether all of the things M. Crichton was saying were true. (I had always been sceptical that scientists could use such a trace gas as a reason for the warming planet… but had never looked into the science itself.) The first spot I hit was realclimate (as most people will remember), and after posting and getting my questions answered with half-wit answers I went to the horse’s mouth and started reading the IPCC.
Than I saw that they averaged different models together. I was so shocked that morons would even attempt to do this that from that point on I couldn’t even begin to hazzard why they even bothered with error margins and any of the other fluff. There was no longer any reason to even research it any more, because if these “scientists” were actually competent they would have never averaged different model results together. I could explain this, but I think the article above does it much better than I could have. In any event…. I figured perhaps this was some huge mistake and that the “ensemble mean” was perhaps something else. I asked at realclimate.
I said something to this effect” That is like trying to average the size of apples and oranges and grapes and trying to find an average fruit size. The answer you get is so pointless that you are no longer showing a “possible Earth” but an Earth with unicorns and dinosaurs still living on it. ”
Oh sure, realclimate answered me after I had pointed out that mistake ….. But in this instance you can average together models because it makes sense because of the physics behind AGW. (I am sure there are lots of us who got such an intellectually dead answer over there.) The answer was so appaling that yes I was bad and called them certain names like I did above here and yes, got banned.
Needless to say, they drove me to sceptical sites like WUWT and others because if someone is not going to answer the questions in an intelligent matter and than also ban me, why they must be full of it. Much to my dismay, I found out that these same people are some of the leading researchers in this area and that the IPCC work is seen as “the gold standard.”
Of course, once you find that one mistake in the science (there are tons of course) it is so easy to go down the list and find a good portion of them. They are all based on assumptions that are turned around and yelled that they are fact. Positive feed-backs, yup. Averaging together models can get you a useful answer…assumed and shouted to the rooftops. The point of no return is the second you assume something, forget you assumed that, and than call everyone else names who ask questions. But anyway, thanks for the trip down memory lane….I too was dismayed several years ago when I found out they actually averaged together different model results kind of similar to a first grader who might do it because they did not know better. I still to this day can not believe that these scientists would sign their name to such a fluff piece as the IPCC with such mistakes!

Tilo Reber
June 18, 2013 9:36 pm

I think this problem is different from modeling the spectrum of carbon, in that your example models involve the piece-wise inclusion of refinements that are built on demonstrated physics. In the climate model game it is an issue of including more or fewer assumptions, all of which are loosely supposed to be supported by, but not demonstrated to be, physics. And since refining with assumptions may or may not produce a better model, it is hard to know, a priori, which the better models will be. Empirically, the outliers can claim that their models are still the more correct and that a longer time interval will demonstrate this. I think the bottom line, at this point, is that we have no business producing climate models at all, because we cannot do the physics at the level of complexity required to make such modeling meaningful.

anna v
June 18, 2013 9:37 pm

Excellent that you made a post out of the answer, Anthony. I have been saying the exact same things ever since I laid eyes on the AR4 spaghetti graphs, giving particular emphasis, in any talks I gave to students, to the nonlinearity of the underlying equations assumed in the gridded models and to the chaotic nature of climate.
“…sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time.”
Actually this is the crux of the problem when science is overfunded because it suits the politics of the time. Professors have graduate students, graduate students eventually become professors, and they tend to keep fiddling with their thesis subject, creating what are, for each of them, brilliant variations on the line of research. What rgbatduke is saying should have been said by the peer reviewers of the first linear projection to the future in grid-box models, and at the first indication that different models give such different results. That is what peer review is for: to catch errors, misunderstandings and replications. In a system that developed into back-slapping and mutual approval, so that the research pie was distributed among an inner group, the results were inevitable. I am sure that if weapons-related physics research is declassified in the next century, the studies will show a similar eating-at-the-trough trend with little real physics, all because of bad or non-existent peer review, since normal physicists are out of the circuit: a great waste of public money which could at least be excused on defense-of-the-nation grounds.
My late father used to say it takes a lot of manure to grow a rose. I have used this saying to defend wasting money on unsuccessful research, but funding should be discontinued once a line has been proven unsuccessful. This bunch has corrupted the peer review method, and the very meaning of what “successful” means in science, in order to keep increasing their funding, and the politicians who love taxes close the circle. A lot more money goes around in carbon markets than is distributed to the priesthood of climate change (at least as they planned the carbon markets).

Larry Kirk
June 18, 2013 9:41 pm

A rather succinct comment. And if that is how difficult it is to model the properties of a single atom, then it is unlikely to be any easier to model the properties of the entire planet’s atmosphere/hydrosphere with any predictive certainty. In fact it will probably be impossible, particularly as many of the variables are still unknown or unquantifiable, and many of them have such a long periodicity that they are out of the range of our historically momentary recent measurements. The problem is just too complex. Apart from which, the models will only end up reflecting what the modeller put in because that was what was thought to be most important. They certainly won’t reflect what was left out because it was unknown or inadvisedly considered not to matter.

Gary Hladik
June 18, 2013 10:08 pm

Anyone reading this thread who hasn’t read all of RGB’s comments in the original thread should do so now. They add much more to the discussion.

June 18, 2013 10:20 pm

My view has always been that trying to model the climate is a complete waste of time and money because you can’t model chaos.
It’s been a “lovely little earner” for the huge AGW industry over the years, though, and will continue to be so until enough people come round to realizing they have been had and start threatening to vote our countless idiotic politicians out of their jobs.

Brian H
June 18, 2013 10:37 pm

It’s about time the statisticians of the world got tired of being goosed by Climate Scientists, and stood up, thereby depriving them of a target.

June 18, 2013 10:42 pm

I think that wraps it up. We should start hitting politicians over the head with this very post, and demand that most of those models get defunded and mothballed ASAP – also that the politicians stop wasting taxpayer money on poor science. Just how much does it take to burst the Greenie CAGW bubble?

June 18, 2013 10:42 pm

Oh dear, now I feel really worried. You see, I had realised that the models were wrong. But, you see, if you take two wrong models and average them, then each is only half wrong! Now isn’t that an improvement? When you have 70 or so wrong models and you average them, the wrongness is so small, they must be nearly right — isn’t that the case?
Now I’m told that you can’t average them at all…

DonV
June 18, 2013 10:48 pm

Liars, damn liars, and then there are statisticians. . . . I mean AGW climate scientologists.
Statistics are an objective tool to be used by good scientists/statisticians to help discern truth. Our postmodern academia has almost universally adopted relativistic humanism as its main world view and consequently there is no such thing as absolute truth – everyone’s “model” has to have some truth in it, therefore the “average” of all of them is where the ‘real’ truth lies. This of course conflicts with the main objective of science which is to discover the objective truth about things in the world. For a die-hard AGW convert, when world view conflicts with scientific objective, world view wins.
Hence you have these scientologists making up sciencey sounding words and phrases that get “peer review” published that are “relatively” right, and still objectively and absolutely FALSE. AGW has become a religion, not a science. . . . so your rant about the correct use of statistics is going to fall on deaf ears to the faithful.
But those of us who are still seeking objective truth hear you. We agree. Well said.

Alan Clark, paid shill for Big Oil
June 18, 2013 10:50 pm

I had to apply the full force of my GED Diploma to comprehending this, but alas, even that was insufficient given the depth of the subject matter. Nonetheless, the basic message was quite clear even to the likes of me, which is precisely what draws me back to WUWT daily: the understanding that one can gain, if only one is willing to read.
Thanks Dr. Brown.

Monckton of Brenchley
June 18, 2013 10:57 pm

Liar Stokes, in the fashion of a paid troll, continues to recite his long-debunked lie that it was I who was responsible for assembling the graph of an ensemble of spaghetti-graph outputs that was in fact assembled by the IPCC at Fig. 11.33a of its Fifth Assessment Report. As previously explained, in my own graph I merely represented the interval of projections encompassed by the spaghetti graph and added a line to represent the IPCC’s central projection.
Professor Brown states that he was specifically condemning the assembly of an ensemble of models’ outputs. I was not, repeat not, responsible for that assembly: the modelers and the IPCC were. It was them – not me – that Professor Brown was criticizing. Indeed, he states plainly that I have accurately represented the IPCC’s graph.

Eugene WR Gallun
June 18, 2013 11:03 pm

Gee, I love it when science makes common sense. And so plainly written that I can understand it. Give this guy two ears and a tail.
Eugene WR Gallun

DonV
June 18, 2013 11:14 pm

I’d like to add 2 more cents.
Average temperature is a meaningless measure when you are trying to determine an energy budget for the whole freakin’ WORLD! Average WHAT temperature? High temp? Low temp? The integral of the temperature over a given day? At what elevation? Ground level? 1,000 ft above ground? 10,000 ft above ground?
We sit in the middle of an ocean of air that is, at times during a given year, filled with varying concentrations of all of the different phases of water… vapor, liquid and ice… and that water contains almost ALL of the energy. Yet along with temp we aren’t measuring its relative concentration, and therefore the TRUE ENERGY CONTENT, at any given location.
I assert that the world’s climate is a self-correcting system. I assert that on any given day, over any given week, month or year, the world takes in energy, and the water on this planet ACTIVELY changes phase and MOVES vertically and horizontally to return that energy back to space, so that life can safely exist between water’s two extremes: 0 and 100 degrees. We just happen to live in the layer of the ocean of air/water that maintains the 0 to ~40 degree band. AGW fanatics think that that layer is the most important and must NOT deviate by 1 degree or all manner of nasty things will happen (when in fact on any given day it varies by more than 20 degrees! Silly AGW scientologists!)
Average temperature is meaningless when trying to determine energy flows. I assert that at the very least one needs to measure the integral of temperature and water content over time!
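(The energy-content point can be made concrete with the textbook moist-enthalpy approximation h ≈ cp·T + Lv·q. A minimal Python sketch; the two parcels and their humidities are invented for illustration.)

```python
# Rough moist enthalpy of two air parcels at the SAME temperature,
# using the textbook approximation h = cp*T + Lv*q (per kg of air).
cp = 1005.0   # J/(kg K), specific heat of dry air
Lv = 2.5e6    # J/kg, latent heat of vaporization of water

def moist_enthalpy(T_celsius, q):
    """q = specific humidity in kg of water vapor per kg of air."""
    return cp * (T_celsius + 273.15) + Lv * q

dry = moist_enthalpy(30.0, 0.002)    # desert-like air (invented numbers)
humid = moist_enthalpy(30.0, 0.025)  # tropical air (invented numbers)

print(f"dry parcel:   {dry / 1000:.1f} kJ/kg")
print(f"humid parcel: {humid / 1000:.1f} kJ/kg")
print(f"same thermometer reading, {(humid - dry) / 1000:.1f} kJ/kg more energy")
```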

Venter
June 18, 2013 11:17 pm

That’s par for the course with Nick Stokes: to lie every single time and to go to any extent to support the climate science liars. Not a single word of what he says should be trusted.

Nick Stokes
June 18, 2013 11:35 pm

Monckton of Brenchley says: June 18, 2013 at 10:57 pm
“Liar Stokes, in the fashion of a paid troll, continues to recite his long-debunked lie that it was I who was responsible for assembling the graph of an ensemble of spaghetti-graph outputs that was in fact assembled by the IPCC at Fig. 11.33a of its Fifth Assessment Report.”

Well, I’m actually just asking a pretty fundamental question – what is RGB talking about? He’s made some specific charges. Someone has been “treating the differences between the models as if they are uncorrelated random variates causing >deviation around a true mean” etc. But who and where?
The quote here is explicit. Para 2:
“This is reflected in the graphs Monckton publishes above”
Para 3:
“Note the implicit swindle in this graph… “
Next substantive para:
“This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph…”
He’s referring to the graphs in that thread, and they aren’t spaghetti graphs of individual results like Fig 11.33a, which has no statistics. Now, as I said in my earlier post, I can’t see that RGB’s criticism properly applies to Lord Monckton’s graph, although that is what it says. So then, what does it apply to?

Peter Miller
June 18, 2013 11:49 pm

At the end of the day, those who peddle their highly flawed climate models – no matter how beautiful they might think they are – are no different from those who peddled the Y2K scare back at the end of the last century.
The reason they did it was for financial self-interest, despite all their claims for supposedly trying to save civilisation as we know it.

M Courtney
June 19, 2013 12:18 am

Nick Stokes makes a pertinent point when he asks if AR5 actually does average different models and not just averages different runs of the same model.
If the modelled physics is the same then averaging the runs seems sound.
But, I think we can agree, if the models are different you can’t get a meaningful result from mixing them up.
So does AR5 use multi-model averages? Following Alec Rawls’ links, I found this:

The results of multiple regression analyses of observed temperature changes onto the simulated responses to greenhouse gas, other anthropogenic, and natural forcings, exploring modelling and observational uncertainty and sensitivity to choice of analysis period are shown in Figure 10.4 (Gillett et al., 2012b; Jones and Stott, 2011; Jones et al., 2012). The results, based on HadCRUT4 and a multi-model average, show robustly detected responses to greenhouse gas in the observational record whether data from 1851–2010 or only from 1951–2010 are analysed (Figure 10.4a,c).

Emphasis mine (on “a multi-model average”).
Now, I can’t see the graph and, being a bear of very little brain, may not be able to understand it anyway, but it does look to me like AR5 has made the blunder that this post is about.

Gail Combs
June 19, 2013 12:26 am

Much thanks to Dr Brown for this comment.
……
William McClenney says: @ June 18, 2013 at 8:18 pm
…..When was it, exactly, that we, H. sapiens sapiens, the wise, wise one, abandoned reason?
>>>>>>>>>>>>>>>>>
When Human Greed was sold as “Save the Children/Environment” and Academia and the Media decided to jump on board the gravy train too.
….
Thanks, Anthony, for promoting this comment to a post. Makes it easier to bookmark.

Nick Stokes
June 19, 2013 12:29 am

M Courtney says: June 19, 2013 at 12:18 am
“AR5 has made the blunder that this post is about”.

I think it is likely that for various purposes AR5 has calculated model averages. AR4 and AR3 certainly did, and no-one said it was a blunder. People average all sorts of things. Average income, average life expectancy. But this post says much more than that. Let me quote again:
“by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing >deviation around a true mean!.”
RGB says this all happened in Lord M’s graph. I don’t think it is a fair description of that graph. So where?

brad
June 19, 2013 12:50 am

But isn’t the ensemble model average exactly what hurricane folks use and it works great? They use the average of a bunch of noisy models and it does a great job of predicting hurricane movement.
Not that I agree with the models, but…

June 19, 2013 12:57 am

As a further thought on RGB’s excellent comment, I wonder how much the physics of the models has actually changed from, say, TAR -> AR4 -> AR5? My suspicion is: not much. The models have way more cells and run on bigger computers, but I think the physics they use is pretty much the same. Why mention this point? Because I think the IPCC and the modellers want to claim, when their predictions from say TAR or AR4 are shown to diverge from reality after just 5 to 10 years, that “ah – but the models are much better now. Now you can trust the results. Look – when we hindcast now we get a really good fit.”
RGB nails this perfectly with his comment about attractors. I am a geophysicist working with earth science models (large and stochastic), and while I was not familiar with this kind of description, he makes the point so clearly and so well. As new real-world data become available (i.e., time passes since the last work), the modellers fine-tune the parameters so that the fit of the hindcast looks great. But all they are doing is fine-tuning the parameters to a particular local attractor, like fitting the elephant. Once the real-world system state changes to another attractor, the models have no predictive power at all. And this is as plain to see as the nose on your face: none of the models, e.g. TAR, predicted anything other than inexorably increasing temperature with time, and they were all wrong. The IPCC almost admits this to be true: they talk not about predictions but about “scenarios”. The falsehood is then allowing users of those reports – activists, politicians, other, less critical scientists – to behave as though those “scenarios” are predictions. They may be one set of (very biased/groupthink) possibilities, but their probability of occurrence must be vanishingly small.
How anyone can believe that the climate modellers have solved a set of non-linear equations, involving Navier-Stokes, with poorly defined initial conditions, poorly defined boundary conditions, incomplete observations of the real-world problem, and unknown/incomplete physics, in a chaotic and non-linear system that includes coupled ocean-atmosphere exchanges, radiative physics, convection, diffusion, the biosphere, phase-state changes and external factors such as the sun, cosmic rays and so forth, is beyond me. And then they claim that this hugely complex, non-linear system is catastrophically sensitive to just one parameter: the CO2 concentration of the atmosphere, measured in parts per million. It is absurd. And to then elevate the results to “settled science” and start basing public policy on them is the most irresponsible thing I have ever seen in my lifetime. Time for the politicians and modellers to wake up and smell the coffee before, as RGB puts it, we “bring out pitchforks and torches as people realise just how badly they’ve been used by a small group of scientists and politicians”.
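(The hindcast-tuning trap is easy to reproduce in miniature. A toy Python sketch with an invented smooth “truth” signal, making no pretence of being a climate model: tune freely inside the training window and the in-sample fit looks superb, while skill outside the window collapses.)

```python
import numpy as np

# A smooth nonlinear "truth" signal standing in for some climate quantity.
t = np.linspace(0.0, 10.0, 500)
truth = np.sin(t) + 0.1 * t

# "Tune" a stand-in model by fitting a 9th-order polynomial to the first
# half only (the hindcast window). In-sample, the fit looks superb.
train = t < 5.0
coef = np.polyfit(t[train], truth[train], 9)
fit = np.polyval(coef, t)

rms_hindcast = np.sqrt(np.mean((fit[train] - truth[train]) ** 2))
rms_forecast = np.sqrt(np.mean((fit[~train] - truth[~train]) ** 2))
print(f"hindcast RMS error: {rms_hindcast:.5f}")  # tiny
print(f"forecast RMS error: {rms_forecast:.5f}")  # orders of magnitude worse
```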

Michel
June 19, 2013 1:03 am

This can be summarized as a simple observation: if two different models based on the same underlying parameters provide statistically insignificant results, averaging the two will not help.
And all of this is being done for temperature evolution alone, as if a particular climate were defined just by temperatures.
Climates (in the plural) are the long-term combinations of seasonal temperatures and rainfalls that very slowly drive living conditions for flora and fauna – homo sapiens included – in different regions of the globe.
Temperature predictions are notoriously wrong. Where are the rainfall predictions, without which no climate discussion can be had? And the biomass response to temperature, rainfall and soil composition? Can they ever be ascertained? Probably never, even with the most powerful and sophisticated computers.
So we remain with conjectures about living conditions on Earth that may improve, or worsen, or not change, depending on one’s particular state of mind.

Greg Goodman
June 19, 2013 1:09 am

Lengthy but very well argued. The key line, buried somewhere in the middle, is this:
” Only if we are very stupid or insane or want to sell something.”

TFN Johnson
June 19, 2013 1:18 am

What is a “bitch-slap” please. Is it a PC term?

Stephen Richards
June 19, 2013 1:26 am

Robert, thanks: the best summary of early quantum physics I have ever read. I don’t believe, at least I hope not, that any self-respecting sceptic ever thought that the models were valid, and certainly not the stupidity of the ensemble mean, but I/we have never been able to get inside the models to look, which would have helped. The defence of the models by the modelers and their users (Betts comes to mind) has been unequivocal over the years and remains so. This great piece will be read by these idiots but sadly dismissed with utter contempt. BUT don’t stop your probing and ‘eloquent’ ripostes; I enjoy them immensely.

Stephen Richards
June 19, 2013 1:30 am

Nick Stokes says:
June 19, 2013 at 12:29 am
M Courtney says: June 19, 2013 at 12:18 am
“AR5 has made the blunder that this post is about.”
STRAWMAN ALERT !! Nick, your intellect is worthy of a better response, so do it.

Stephen Richards
June 19, 2013 1:34 am

Janice Moore says:
June 18, 2013 at 8:18 pm
Your contributions since joining this blog have been brilliant, thanks.

June 19, 2013 1:35 am

http://rankexploits.com/musings/wp-content/uploads/2012/12/Changed_Baseline.jpg
Had they used temperature (upper graph) instead of anomalies, they would have spaghetti all over the place.

Gail Combs
June 19, 2013 1:39 am

Tilo Reber says: @ June 18, 2013 at 9:36 pm
…. I think that the bottom line, at this point, is that we have no business producing climate models at all because we cannot do the complexity of physics required to make such modeling meaningful.
>>>>>>>>>>>>>>>>>>>
Heck, Climastrology has not even taken the very first step of trying to HONESTLY determine all the parameters that affect climate, because it has always been about politics.

….Water is an extremely important and also complicated greenhouse gas. Without the role of water vapor as a greenhouse gas, the earth would be uninhabitable. Water is not a driver or forcing in anthropogenic warming, however. Rather it is a feedback, and a rather complicated one at that. The amount of water vapor in the air changes in response to forcings (such as the solar cycle or warming owing to anthropogenic emission of carbon dioxide).
We are at our best when we follow evidence rather than lead it. My name is Rich. I am physical chemist interested in public discourse and teaching moments for evidence-based thinking.

Water, probably THE most important on-planet climate parameter, gets demoted to a CO2 feedback and isn’t even listed by the IPCC as a forcing. That point alone makes ALL the models nothing but GIGO. SEE the IPCC Components of Radiative Forcings chart.
It was a con from the start and the IPCC even said it was.
The IPCC mandate states:

The Intergovernmental Panel on Climate Change (IPCC) was established by the United Nations Environmental Programme (UNEP) and the World Meteorological Organization (WMO) in 1988 to assess the scientific, technical and socio-economic information relevant for the understanding of human induced climate change, its potential impacts and options for mitigation and adaptation.
http://www.ipcc-wg2.gov/

Western Civilization was tried and found guilty BEFORE the IPCC ever looked at a scientific fact. The IPCC mandate is not to figure out what factors affect the climate but to dig up the facts needed to hang the human race. The IPCC assumes the role of the prosecution and the skeptics that of the defense, but the judge (aka the media) refuses to allow the defense counsel into the courtroom.
Academia is providing the manufactured evidence to ‘frame’ the human race, and they are KNOWINGLY doing so. In other words, Academics who pride themselves on being ‘lofty socialists’ untainted by plebeian capitalism are KNOWINGLY selling the rest of the human race into the slavery designed by the bankers and corporate elite. (Agenda 21)
“Can we balance the need for a sustainable planet with the need to provide billions with decent living standards? Can we do that without questioning radically the Western way of life? These may be complex questions, but they demand answers.” ~ Pascal Lamy Director-General of the World Trade Organization
“We need to get some broad based support, to capture the public’s imagination…
So we have to offer up scary scenarios, make simplified, dramatic statements and make little mention of any doubts… Each of us has to decide what the right balance is between being effective and being honest.”
~ Prof. Stephen Schneider, Stanford Professor of Climatology, lead author of many IPCC reports
“The data doesn’t matter. We’re not basing our recommendations on the data. We’re basing them on the climate models.” ~ Prof. Chris Folland, Hadley Centre for Climate Prediction and Research
“The models are convenient fictions that provide something very useful.” ~ Dr David Frame, climate modeler, Oxford University
“The only way to get our society to truly change is to frighten people with the possibility of a catastrophe.” ~ Daniel Botkin emeritus professor Department of Ecology, Evolution, and Marine Biology, University of California, Santa Barbara.
The Bankers, CEOs, Academics, and Politicians know exactly what they are doing, and that is the complete gutting of western civilization for profit. The lament “it is for our future children” has to be the vilest lie they have ever told, since their actions sell those same children into slavery.

World Bank Carbon Finance Report for 2007
The carbon economy is the fastest growing industry globally with US$84 billion of carbon trading conducted in 2007, doubling to $116 billion in 2008, and expected to reach over $200 billion by 2012 and over $2,000 billion by 2020

The results of the CAGW hoax:

International Monetary Fund
World Economy: Convergence, Interdependence, and Divergence
Finance & Development, September 2012, Vol. 49, No. 3
by Kemal Derviş
….Within many countries the dramatic divergence between the top 1 percent and the rest is a new reality. The increased share of the top 1 percent is clear in the United States and in some English-speaking countries and, to a lesser degree, in China and India….
This new divergence in income distribution may not always imply greater national inequality in all parts of a national distribution. It does, however, represent a concentration of income and, through income, of potential political influence at the very top, which may spur ever greater concentration of income. The factors—technological, fiscal, financial, and political—that led to this dynamic are still at work. …And the euro area crisis and its accompanying austerity policies will likely lead to further inequality in Europe as budget constraints curtail social expenditures while the mobility of capital and the highly skilled make it difficult to effectively increase taxes on the wealthiest.
New convergence
The world economy entered a new age of convergence around 1990, when average per capita incomes in emerging market and developing economies taken as a whole began to grow much faster than in advanced economies…. For the past two decades, however, per capita income in emerging and developing economies taken as a whole has grown almost three times as fast as in advanced economies, despite the 1997–98 Asian crisis…
…A third significant cause of convergence is the higher proportion of income invested by emerging and developing countries—27.0 percent of GDP over the past decade compared with 20.5 percent in advanced economies. Not only does investment increase the productivity of labor by giving it more capital to work with, it can also increase total factor productivity—the joint productivity of capital and labor—by incorporating new knowledge and production techniques and facilitate transition from low-productivity sectors such as agriculture to high-productivity sectors such as manufacturing, which accelerates catch-up growth. This third factor, higher investment rates, is particularly relevant in Asia—most noticeably, but not only, in China. Asian trend growth rates increased earlier and to a greater extent than those of other emerging economies….
The economy of China will no doubt become the largest in the world, and the economies of Brazil and India will be much larger than those of the United Kingdom or France.
The rather stark division of the world into “advanced” and “poor” economies that began with the industrial revolution will end,….

The Uncomfortable Truth About American Wages
the real earnings of the median male have actually declined by 19 percent since 1970. This means that the median man in 2010 earned as much as the median man did in 1964 — nearly a half century ago. Men with less education face an even bleaker picture; earnings for the median man with a high school diploma and no further schooling fell by 41 percent from 1970 to 2010….

Workers in the USA, EU, Australia and Canada are now competing (on a par, thanks to the WTO) with Asian workers, while our tax dollars are used to fund their brand spanking new World Bank-funded COAL PLANTS. The Guardian states that the World Resources Institute identifies 1,200 coal plants in planning across 59 countries, with about three-quarters in China and India. Also thanks to the WTO and Clinton, US technical secrets, including military secrets, have been given to China as part of the WTO Technology Transfer Agreement.

Gail Combs
June 19, 2013 1:44 am

Gary Hladik says:
June 18, 2013 at 10:08 pm
Anyone reading this thread who hasn’t read all of RGB’s comments in the original thread should do so now….
>>>>>>>>>>>>>>>>>>>>>>>
Agreed. I copied and saved them in a LibreOffice file so I could reread them as a group.

Gail Combs
June 19, 2013 2:01 am

TFN Johnson says:
June 19, 2013 at 1:18 am
What is a “bitch-slap” please. Is it a PC term?
>>>>>>>>>>>>>>>>>
No…. but thanks for making me look it up. (Good for a laugh.)

http://www.urbandictionary.com/define.php?term=bitch-slap
The kind of slap a pimp gives to his whores to keep them in line or punish them. However, it is most commonly used to describe an insulting slap from one man to another, as if the slapper is treating the slappee as his bitch.

rogerknights
June 19, 2013 2:07 am

Gary Hladik says:
June 18, 2013 at 10:08 pm
Anyone reading this thread who hasn’t read all of RGB’s comments in the original thread should do so now. They add much more to the discussion.

A moderator or rgb should copy them over.

June 19, 2013 2:17 am

In private enterprise, all the government-employed turkeys who produced the various climate models would have been given their marching orders by now. They would have had to produce predictions that turned out right or they would have been given the sack!

Jolan
June 19, 2013 2:33 am

Where is Mosher? Strangely silent don’t you think?

AndyG55
June 19, 2013 3:06 am

Moshpit may as well be silent, he rarely says anything of any import anyway. !!
Remember, he’s a journalist with zero scientific education !
So long as you interpret his posts as such, you realise how little they say.

Nick Stokes
June 19, 2013 3:13 am

Stephen Richards says: June 19, 2013 at 1:30 am
“Nick Stokes says: June 19, 2013 at 12:29 am
M Courtney says: June 19, 2013 at 12:18 am
“AR5 has made the blunder that this post is about.”
STRAWMAN ALERT !! Nick your intellect is worthier of a better response so do it.”

I’m happy with the response. But if you think just calculating a model mean is a blunder, then let’s look at the graph from Dr Spencer, referenced by RGB and featured at WUWT just two weeks ago. A spaghetti plot of 73 model results from CMIP5, prepared by Dr Spencer, with no claim AFAIK that AR5 was involved.
And what’s that big black line down the middle? A multi-model mean!

SandyInLimousin
June 19, 2013 3:24 am

Nick Stokes
“I think it is likely that”
When you know for sure come back and I’m sure worthier people than I will discuss it with you.

richard verney
June 19, 2013 3:25 am

O/T, but this is REALLY IMPORTANT NEWS. See http://www.dailymail.co.uk/news/article-2343966/Germany-threatens-hit-Mercedes-BMW-production-Britain-France-Italy-carbon-emission-row.html
It would appear that Germany (Europe’s most powerful player) has woken up and has now realised the adverse effect of carbon emission restrictions.
I have often commented that Germany will not let its manufacturing struggle as a consequence of such restrictions and/or high energy prices (which Germany is beginning to realise are disastrous for the small industries that are the life blood of German manufacturing).
First, Germany is moving away from renewables and is building 23 coal-powered stations for cheap and reliable energy.
Second, Germany wants to rein back overly restrictive carbon emission limits.
The combination of this new approach is a game changer in European terms.

June 19, 2013 3:43 am

I agree with DonV, that “global average temperature” does not have a physical meaning. I believe there was a great post by Anthony about a year ago on this topic.
If I were to try and take an average temperature of my house, where do I start? Placing thermometers in the bedrooms, loft and basement, fridge, freezer, cooker and oven, inside the shower, and inside the bed, and then averaging the measurements? OK, to a man with a hammer everything looks like a nail, and to a man with lots of thermometers average temperatures can be measured anywhere.
But what meaning, exactly, will this number have? And how is it at all possible to measure an average for the whole planet to an accuracy of a tenth of a degree, pray tell, when a passing cloud can drop the temperature at a thermometer by many degrees? The error bars of any such “average” should be about 10 deg, so any decimal is utterly meaningless, and simply bad science.
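(The statistical crux here is whether the station errors are independent. A minimal Python sketch, with every number invented: averaging beats down independent noise as 1/sqrt(n), but a systematic error shared by every station never averages away.)

```python
import numpy as np

rng = np.random.default_rng(4)
truth, n_stations, sigma = 15.0, 3000, 2.0  # all invented numbers

# Independent, unbiased station errors: the mean tightens as 1/sqrt(n).
indep = truth + rng.normal(0.0, sigma, (1000, n_stations))
print(f"independent errors: std of the mean = {indep.mean(axis=1).std():.3f}")

# Add a systematic 0.5 deg error shared by every station: no amount of
# averaging removes it, and the claimed precision evaporates.
shared = rng.normal(0.0, 0.5, (1000, 1))
print(f"with shared error:  std of the mean = "
      f"{(indep + shared).mean(axis=1).std():.3f}")
```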

eco-geek
June 19, 2013 3:52 am

Surely the ensemble average would be OK if warmists could claim they were based on non-Markovian guesses?
Surely this is the rub!

AndyG55
June 19, 2013 3:56 am

“And what’s that big black line down the middle? A multi-model mean!”
OMG Nick..
yes. it was a multi model mean.. designed to show how meaningless the multi-model mean is !!
And I credited you with some meagre intelligence… my bad !!!

eco-geek
June 19, 2013 3:57 am

Er… I think that should be Markovian. Tired and all that…

son of mulder
June 19, 2013 3:58 am

“The “ensemble” of models is completely meaningless, statistically”
Absolutely correct. What I think has happened in the world of climate modelling is that it has been assumed that one can apply the Central Limit Theorem to the results of independent and different models.
http://en.wikipedia.org/wiki/Central_limit_theorem
Models are the mathematical formulation of assumptions combined with a subset of physical laws.
The Central Limit Theorem relates to multiple sampling (measurement) of reality.
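(A minimal Python sketch of the distinction, with invented numbers: the CLT delivers the truth when you average repeated measurements of one reality, and delivers only the average bias when you average structurally different models.)

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 1.0  # the quantity being estimated, arbitrary units

# Case 1: many noisy MEASUREMENTS of the same reality -- the CLT's domain.
measurements = truth + rng.normal(0.0, 0.5, 10_000)
print(f"mean of measurements: {measurements.mean():.3f}")  # converges to 1.0

# Case 2: many MODELS, each with its own structural bias. The biases here
# are invented; the point is only that nothing forces them to be zero-mean,
# independent draws from any distribution at all.
biases = rng.normal(0.8, 0.3, 73)  # e.g. a shared warm bias
models = truth + biases + rng.normal(0.0, 0.1, 73)
print(f"mean of models: {models.mean():.3f}")  # ~1.8, not 1.0
print(f"std of models:  {models.std():.3f}")   # disagreement, not an error bar
```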

DirkH
June 19, 2013 4:04 am

“Let me repeat this. It has no meaning!”
That’s why I have been calling the warmist government scientists pseudoscientists for a number of years now.

cd
June 19, 2013 4:04 am

Duke
You say that the type of statistics carried out on the collated models is nonsense, or has no meaning. Can you or others point out why I’m wrong? I can’t see a problem with the approach; perhaps I don’t quite grasp the issue.
1) The process adopted seems to be akin to stochastic modelling, in that you do several runs with different starting conditions to give you a range of outputs with a cumulative distribution function (cdf) and mean. From this one has a measure of uncertainty.
2) Climate models have inputs and assumed sensitivities (correct?). Surely these are your starting conditions, which can be changed to give a range of simulated results: a cdf.
3) Each team models under different assumptions, and therefore the runs are akin to stochastic models, and the widely used methodology is sound.
Personally I think the models are nonsense.
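(Point 1 above is the one setting where ensemble statistics are uncontroversial: many runs of a single fixed model over a distribution of uncertain inputs. A toy Python sketch with invented dynamics; the contested step is treating different teams’ models, as in point 3, as if they were such runs.)

```python
import numpy as np

rng = np.random.default_rng(2)

def one_model(x0, steps=100):
    """One FIXED model: identical equations on every run; only the
    uncertain initial condition x0 (and input noise) changes."""
    x = x0
    for _ in range(steps):
        x = 0.95 * x + 0.05 + rng.normal(0.0, 0.01)  # toy dynamics
    return x

# Monte Carlo over uncertain inputs: here every member IS a draw from one
# well-defined distribution, so the ensemble mean and spread are meaningful.
finals = np.array([one_model(rng.normal(0.0, 0.5)) for _ in range(2000)])
lo, hi = np.percentile(finals, [5, 95])
print(f"ensemble mean {finals.mean():.3f}, 5-95% range {lo:.3f} .. {hi:.3f}")
```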

Roy Spencer
June 19, 2013 4:10 am

I agree with much of what Bob has said here, as well as Greg L’s expansion upon those comments. It seems that the fact that we have only one realization — one Earth — greatly limits what we can determine about causation in the climate system, and makes a lot of the statistics regarding multiple model simulations rather meaningless. We don’t even know whether the 10% of the models closest to the observations are closer by chance (they contain similar butterflies) or because their underlying physical processes are better.

Bruce Cobb
June 19, 2013 4:12 am

The models are all junk, being based on a fundamental flaw: that increased CO2 is somehow “forcing” temperatures up. They then try to disguise those flaws using conveniently uncertain variables, such as volcanoes and aerosols, which can be used to tweak the models accordingly. But now, after over 17 years of temperatures flatlining, even the yeoman tweaksters can’t do enough tweaking of their playstation climate models to make them coincide with reality. So they have resorted to fantasies, such as the “missing” heat hiding in the deep oceans. Even they must know that the jig is up.

DirkH
June 19, 2013 4:15 am

Nick Stokes says:
June 19, 2013 at 12:29 am
“I think it is likely that for various purposes AR5 has calculated model averages. AR4 and AR3 certainly did, and no-one said it was a blunder. ”
The first time I heard of the multi-model means, I thought: this is propaganda, not science. Why would you even think of averaging the output of several different computer programs and hope that the AVERAGE is better than each of the instances? One broken model can completely wreck your predictive skill.
IF Warmism were a SCIENCE there would have to be a JUSTIFICATION for this, but there ISN’T ONE.
“People average all sorts of things. ”
People do all sorts of stupid things. Scientists are supposed to know what they are doing. They are also supposed to be honest. None of this is the case in warmist science; it is a make-work scheme for con-men.

Nick Stokes
June 19, 2013 4:19 am

AndyG55 says: June 19, 2013 at 3:56 am
“And what’s that big black line down the middle? A multi-model mean!”
“yes. it was a multi model mean.. designed to show how meaningless the multi-model mean is !!”

No it wasn’t. The WUWT post was headed EPIC FAIL. Nothing is said against the model mean. Instead, it is the statistic used to show the discrepancy between the models and radiosonde/satellite data (also averaged). If the model mean were a dud statistic, it would be useless for that purpose.
REPLY: Fixed your Italics Nick. The EPIC fail has to do with the envelope of the models, not the mean, diverging from the observations, much like the AR5 graph:
http://wattsupwiththat.files.wordpress.com/2012/12/ipcc_ar5_draft_fig1-4_without.png
A mean of junk is still junk. Neither the models’ envelope nor their mean has any predictive skill; hence they are junk.
This is further illustrated by the divergence of trend lines, which rely on neither the mean nor the envelopes.
http://www.drroyspencer.com/wp-content/uploads/CMIP5-19-USA-models-vs-obs-20N-20S-MT.png
I know that is hard for you to admit, being ex-CSIRO and all, but the climate models are junk, and that’s the reality of the situation. – Anthony

Mindert Eiting
June 19, 2013 4:26 am

Completely agree with Brown. You don’t have to be a statistician to understand that the mean opinion of a number of deluded people is no closer to the truth than any individual opinion. Do what Brown said: take one model and simulate many results using random noise. The relative frequency of results with temperature slopes below the observed slope is the p-value. If it is less than five percent, the model should be rejected.

Nick Stokes
June 19, 2013 4:46 am

“Fixed your Italics Nick.”
Thanks.
“The EPIC fail has to do with the envelope of the models, not the mean, diverging from the observations,…”
Yes, agreed. But if the mean is meaningless, why was it added, and so emphatically?
REPLY: Maybe just following the lead from Real Climate? – Anthony
http://www.realclimate.org/images/model122.jpg
From:
http://www.realclimate.org/index.php/archives/2013/02/2012-updates-to-model-observation-comparions/

Ant
June 19, 2013 4:59 am

Excellent work. Made me chuckle. “Lies, Damn Lies, and Statistics”. Muwahahahahaaaaaa

June 19, 2013 5:04 am

rgb, Amen and halle-fraking-lujah!
As someone who has some modeling experience (15+ years of electronics simulations and models), I’ve been complaining about them for a decade, but you’ve stated it far, far better than I could.

AndyG55
June 19, 2013 5:08 am

Poor Nick, comprehension seems not to be your strong point !!

Jimbo
June 19, 2013 5:11 am

For Warmists who say that Robert Brown doesn’t know much about computing or models, see an excerpt from his about page.
http://www.phy.duke.edu/~rgb/About/about.php

About Robert G. Brown
…….In 1994 reviewed and (together with Rob Carter and Andrew Gallatin) effectively rewrote from scratch the systems engineering and design of a proposed upgrade of the Undergraduate Computing Clusters (postscript document)……..
From 1995 on, has been extremely involved in the beowulf movement — the development of commodity off the shelf (COTS) parallel supercomputers based on the Linux operating system. Built the original beowulf at Duke, Brahma, a more recent beowulf named Ganesh, set up the Duke Beowulf User’s Group and has supported countless others seeking to build beowulfs.
In 1995 co-founded Market Driven Corporation. This was originally a company dedicated to predictive modeling and data mining using a highly advanced neural network, Discovertm written by rgb. More recently the company has evolved into a generalized web services company where predictive modeling is just one component of its service offerings.

AndyG55
June 19, 2013 5:11 am

In cricketing parlance.. twenty scores of 5, DOES NOT mean you have scored a century !!
Junk is junk.. …. is junk !!
Can’t bat, can’t bowl… you know the story !

Patrick
June 19, 2013 5:13 am

“Nick Stokes says:
June 19, 2013 at 4:19 am
I know that is hard for you to admit, being ex-CSIRO and all, but the climate models are junk, and that’s the reality of the situation. – Anthony”
If this is true, Anthony, it explains a lot to me, living in Aus. Nick clearly is not a fool, but he is severely biased. Nick really needs to put aside that bias and read through RGB’s post with an open mind. RGB’s post, as do all his posts, makes complete sense to me.

June 19, 2013 5:13 am

Nick: Adding the mean to the display shows clearly where all the (hopeless) models lie. None of the models in the ensemble has any real value as a predictor over even the relatively short time period since 1979. The IPCC does show displays with the model ensemble mean, as has been pointed out elsewhere on the thread.
Harping on as you are about who shows the mean of the ensemble is essentially a side-show. The bottom line is that not one of the models in the ensemble is of any use as a predictor of climate over even the relatively short period since 1979, no matter how wonderful, complex, physically plausible or anything else they may be. They may be fantastic, interesting, wonderful academic tools to help people develop new, better models in the future, but as predictors they are worse than useless: they are all clearly biased to high temperatures. And to go further and base public policy on such nonsense is absurd and negligent.

AndyG55
June 19, 2013 5:13 am

“But if the mean is meaningless, why was it added, and so emphatically?”
To emphasise the farce.. DOH !!!!!

Ant
June 19, 2013 5:17 am

All properly trained statisticians understand the prerequisite conditions for the validity of statistical measures. Fundamental to all such measures is the requirement of independent, identically distributed, random sampling. To publish statistics knowing these conditions have not been met is at best stooopidity; at worst it’s sinister, blatant fraud.
Now, I don’t believe these guys are stoooopid, just misguided; after all, really stoooooopid people don’t get to have THAT much influence. That leaves only blatant deception and fraud. Or have I missed something?

M Courtney
June 19, 2013 5:20 am

Nick Stokes says – June 19, 2013 at 3:13 am
I’m not sure the argument holds up.
It may well be that lots of people average things that have no justification for being averaged together. Just because I may like the outcome, and the person who did the averaging, does not mean they were right to do it.
The practice has to be justified on its own terms, not in terms of ‘that’s my side’ or ‘that’s a handy outcome’. The original post makes a very good case that averaging models which embody different understandings of the physics tells us nothing of value and hides what value there may be in the models.
It shouldn’t be done.
It is certainly the case that averaging multiple runs of the same model is a sound practice. It tells us something about the characteristics of that model and of the understanding of the physics it embodies.
But I don’t think that’s what the AR5 quote I linked to is doing.
I don’t think what AR5 is doing is a sound practice.

Jimbo
June 19, 2013 5:25 am

Have you noticed that over the last few years Warmists will point to the lowest model projection and say it closely matches observed temperature? This is part of the con job at work. No mention of the other 95% of failed projections. The models failed some time back and demonstrate their continued failure each day the standstill continues. How much longer can they carry on this charade?

Alan D McIntire
June 19, 2013 5:25 am

Robert G. Brown uses a quantum mechanics analogy to make his point. The vast majority of us have no knowledge of quantum mechanics nor do we have any way to make meaningful measurements in the field. In contrast, we have all spent a lifetime experiencing climate, so we all have at least a rudimentary knowledge of climate.
Believers in catastrophic global warming have often used the “doctor” analogy to argue their case. They state something like: “The consensus of climatologists says there is serious global warming. Who would you trust, a doctor (consensus) or a quack (non-consensus)?” A better analogy, given the average person’s familiarity with climate and motor vehicles as opposed to quantum mechanics and medicine, might be:
“Who would you trust, a friend or a used car salesman?”

commieBob
June 19, 2013 5:36 am

Tsk Tsk says:
June 18, 2013 at 7:01 pm
Brown raises a potentially valid point about the statistical analysis of the ensemble, but his carbon atom comparison risks venturing into strawman territory.

“To ‘attack a straw man’ is to create the illusion of having refuted a proposition by replacing it with a superficially similar yet unequivalent proposition (the ‘straw man’), and to refute it, without ever having actually refuted the original position.” http://en.wikipedia.org/wiki/Straw_man
Modelling a carbon atom is very simple compared with modelling the global climate. rgbatduke enumerates some of the problems involved in using the ‘well-known physics’ to model the simpler case and points out that the chance of success is much smaller in the more complicated case. Unless I am badly misunderstanding something, this is hardly a strawman argument.

rgbatduke – “Let’s invert this process and actually apply statistical analysis to the distribution of model results Re: the claim that they all correctly implement well-known physics.”

Yes, they do indeed claim that they correctly implement well-known physics. They’re wrong. All the models are wrong because they are all based on a faulty understanding of the well-known physics.

John Archer
June 19, 2013 5:39 am

I was going to post this on the earlier thread, No significant warming for 17 years and 4 months, by Lord Monckton on June 13th, but maybe here is better now.
FAN FEEDBACK:
Prof. R G Brown’s contributions here are the dog’s bollocks. This is highly informed rational thinking at its best. It has given me HUGE pleasure reading them. I second Lord Monckton’s appeal to have them given greater prominence and in general for them to be more widely circulated.
The experience has been absolutely THRILLING! I really don’t know why I should associate the two, but it gave me something akin to the intense pleasure I got from watching the exquisite artistry of Cassius Clay when he was on top form!
WAIT! It’s them knockout combos! YES, that’s it!
Oh, and that lovely footwork, too! 🙂
Thank you very much indeed, Professor. Sock it to ’em!

Jimbo
June 19, 2013 5:41 am

Nick Stokes has put up a brave but foolhardy defence of failure. However you want to look at the models, they have FAILED. Garbage went in, garbage came out. Policies around the world are being formulated on the back of failed garbage. The IPCC is like the UK Met Office: they have a high success rate in temperature projection/prediction failure. Just looking at the AR5 graph is, quite frankly, embarrassing even for me.

MattN
June 19, 2013 5:43 am

RGB, how often do you try to talk sense into your colleague Bill Chameides at http://www.thegreengrok.com? Or have you just given up by now?

Scott
June 19, 2013 5:52 am

Here’s an analogy to climate models. Every year multitudes of NFL fantasy football websites tweak their “models”, incorporating “hindcasts” of last year’s NFL player performances to “predict” with “high confidence” the players’ 2013 performance. For good measure, most throw in a few “extreme” predictions to make people think they are really smart and to entice them to buy into their predictions. There is even a site that “averages” this “ensemble” of predictions so a fantasy football drafter can distill it all down into one prediction and draft with the best information available. Such a drafter hardly ever wins. Why? Because all these smart predictors had an eye on each other, trying to make sure their predictions weren’t too outlandish, because if they were, 1) no one would believe them this year, since all the other experts were saying something different, and 2) from a business perspective, if they ended up dead last in their predictions, they might be out of business next year. They don’t want to be too far from the average, so they aren’t. It ends up being like a crowd of drunks with arms on each other’s shoulders, staggering and weaving but in the end all supporting each other, right up to the start of the season. Then the predictions immediately start falling apart, but they really don’t matter much anymore, because the purpose of all these high-confidence predictions is to sell subscriptions to the website, not necessarily to be right. In fact, being the best of the worst is sometimes the definition of perfection in the game of prediction.

george h.
June 19, 2013 5:56 am

RGB, my ensemble of models predict an IRS audit in your future.

Jeff Norman
June 19, 2013 5:56 am

rgb sez:
“We can then sort out the models by putting (say) all but the top five or so into a ‘failed’ bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.”
Please note that he sez “the models”: not just the models selected by the IPCC to illustrate their self-serving concept of the future climate, but all the models.
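(Sorting models by demonstrated skill is simple enough to sketch. A toy ranking in Python, with every series invented: score each model against the observations, keep the top few, and bin the rest, as rgb proposes.)

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1979, 2014)
x = years - years[0]

# Invented "observations": a weak trend plus noise.
obs = 0.005 * x + rng.normal(0.0, 0.08, x.size)

# 73 hypothetical model series with assorted (mostly warm) trend biases.
model_trends = rng.normal(0.025, 0.010, 73)
models = model_trends[:, None] * x + rng.normal(0.0, 0.08, (73, x.size))

# Score every model against observations; keep only the best few.
rms = np.sqrt(np.mean((models - obs) ** 2, axis=1))
ranking = np.argsort(rms)
keep, fail = ranking[:5], ranking[5:]
print("kept model indices:", keep)
print(f"best RMS {rms[keep].min():.3f}, "
      f"median RMS of the failed bin {np.median(rms[fail]):.3f}")
```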

McComber Boy
June 19, 2013 5:57 am

The more I read Nick Stokes’ drivel and his harping on about – neener, neener, neener, he did it first – the less I expect any chance of real discourse from that corner. I’m reminded so much of the old Spinal Tap in-character interview about their amplifiers. When sanity is introduced, the answer is always, “Ours go to eleven.”

Poor Nick! He just says, over and over, that all of the knobs on all the climate amplifier models are already set on 11! But of course none of them are actually plugged in.
pbh

jeanparisot
June 19, 2013 5:58 am

My CEU credits for the month are taken care of. Thank you.

Mike M
June 19, 2013 6:01 am

One thing I’m certain of is that any model that may have inadvertently predicted cooling would have immediately been erased by someone whose continued income depends on predicting warming. I guess I’m pointing out my certainty of the possibility of a larger uncertainty.

jeanparisot
June 19, 2013 6:05 am

Shouldn’t these models be judged on multiple criteria to avoid introducing current measurement bias (as opposed to the existing bias in the inputs): sea levels, precipitation, the distribution of warming, atmospheric water vapor, etc.?

Mike M
June 19, 2013 6:10 am

Scott says: “It ends up being like a crowd of drunks with arms on each others shoulders,”
Which is a lot like the stock market; we’re all going in this direction because…..

Bill_W
June 19, 2013 6:32 am

Nick Stokes,
I assume that if it says multi-MODEL mean, that is exactly what it is. If it were multiple runs from the same model, they would call it something else, I would hope: a multi-run mean, or a CMP76-31 output mean. (FYI, I just made that climate model up.)

June 19, 2013 6:45 am

The current issue of The Economist magazine from London includes “Tilting at Windmills”, a lengthy, revealing article about Germany’s infatuation with renewable energy. Will it likely result in a consumer revolt over skyrocketing electric power costs and an eventual economic train wreck? Seems like a high price to pay for enduring the last 17 years of no global warming.
But the Europeans have been duly brainwashed into continuing to fight the war on carbon. Here’s a link to this informative article:
http://www.economist.com/news/special-report/21579149-germanys-energiewende-bodes-ill-countrys-european-leadership-tilting-windmills

Jimbo
June 19, 2013 6:46 am

Monckton of Brenchley, in reply to the arm-waving Nick Stokes on this thread, says it starkly.

…It does not matter whether one takes the upper bound or lower bound of the models’ temperature projections or anywhere in between: the models are predicting that global warming should by now be occurring at a rate that is not evident in observed reality….

Nick Stokes is attempting to bring up all kinds of defences for the climate models but does not want to deal with the elephant in the room. You could call it a dead parrot.

Monty Python
…..’E’s not pinin’! ‘E’s passed on! This parrot is no more! He has ceased to be! ‘E’s expired and gone to meet ‘is maker! ‘E’s a stiff! Bereft of life, ‘e rests in peace! If you hadn’t nailed ‘im to the perch ‘e’d be pushing up the daisies! ‘Is metabolic processes are now ‘istory! ‘E’s off the twig! ‘E’s kicked the bucket, ‘e’s shuffled off ‘is mortal coil, run down the curtain and joined the bleedin’ choir invisible!! THIS IS AN EX-PARROT!!…..

tadchem
June 19, 2013 6:48 am

It is a core principle of the scientific method that demonstrably erroneous hypotheses that lead to inaccurate and unreliable prediction must be discarded. Evidently, ACC ‘modelers’ have discarded the scientific method.

June 19, 2013 6:49 am

jeanparisot says:
June 19, 2013 at 6:05 am

Shouldn’t these models be judged on multiple criteria to avoid introducing current measurement bias (as opposed to the existing bias in the inputs): sea levels, precipitation, the distribution of warming, atmospheric water vapor, etc.?

They should be, but they can’t be; they’re horribly wrong. This is why they trot out a global average temperature: regional annual temps just don’t match reality, let alone regional monthly/daily temps, which are even worse.

June 19, 2013 6:51 am

Oh, I should note, even the global annual temps don’t match reality, since that’s the topic of this blog.

Latitude
June 19, 2013 6:55 am

Roy Spencer says:
June 19, 2013 at 4:10 am
We don’t even know whether the 10% of the models closest to the observations are closer by chance (they contain similar butterflies) or because their underlying physical processes are better.
===================
thank you……over and out

Lloyd Martin Hendaye
June 19, 2013 7:01 am

We could say all this in some 150 words, and have for years, yet such common statistical knowledge ever bears repeating. In any rational context, the number of egregious major fallacies infesting GCMs would make laughing-stocks of their proponents. But whoever thought that AGW Catastrophism with its Green Gang of Klimat Kooks was ever about objective fact or valid scientific inference?
Unfortunately, political/economic shyster-ism on this scale has consequences. “Kleiner Mann, was nun?” (“Little man, what now?”), as bastardized Luddite sociopaths trot out the mega-deaths.

johnmarshall
June 19, 2013 7:03 am

The frayed-rope graph that Dr. Spencer produced, set against the real-time data, shows just how far the models are from reality. It is the fixation on CO2 that has caused this dysfunction.
Reality is what this is about, not some religious belief that a trace gas drives climate.

eyesonu
June 19, 2013 7:05 am

This is a very good posting as written by Dr. Robert G. Brown. No summarizing needed as Brown sums it up quite well.
It’s going to take a while to read all the comments posted so far but it would seem to be a good idea to include the Spencer spaghetti graph which would be important to any readers that haven’t been following closely the past couple of weeks. Just saying.

T.C.
June 19, 2013 7:06 am

So, the basic equation here is:
Something from nothing = Nothing.

Frank K.
June 19, 2013 7:06 am

First of all, BRAVO. Excellent article by Dr. Brown, and spot on, based on my 20 years working professionally in computational fluid dynamics.
Jimbo says:
June 19, 2013 at 5:11 am
“For Warmists who say that Robert Brown doesn’t know much about computing or models see an excerpt from his about page.”
http://www.phy.duke.edu/~rgb/About/about.php
Jimbo – you should know by now that NONE of the CAGW scientists ever really want to discuss the models! Every time I bring up annoying issues like differential equations, boundary and initial conditions, stability, well-posedness, coupling, source terms, non-linearity, numerical methods etc., they go silent. It’s the strangest phenomenon I’ve ever experienced, considering you would think that they would LOVE to talk about their models.
BTW, one reason you will never see any progress towards perfecting multi-model ensemble climate forecasts is that none of the climate modelers want to say whose models are “good” and whose are “bad”…

CAL
June 19, 2013 7:18 am

The original scientific approach of AR4 was not so far from what Robert Brown is arguing. The models were required to be tested against a number of scenarios, including the whole of the 20th century, the Roman Warm Period, the Medieval Warm Period and the annual variation of sea ice. In the technical section of AR4 they are honest about the fact that none of the models was able to correlate with any of the scenarios without adding a fudge factor for deep-sea currents, and none of the models correlated well with all the scenarios.
Unfortunately these failures had become mere uncertainties by the time they got to the technical summary, and these uncertainties had become certainties by the executive summary. The politicians had been left with no alternative but to conjure up this statistical nonsense. Why did the scientists not speak out? Well, some did and were sacked. Others resigned. Most, however, seemed to realise how much money could be made and went with the flow.
Many people have suffered greatly as a consequence of this fraud but Science is the biggest loser.

angech
June 19, 2013 7:21 am

Two different competing premises here.
The first is that of trying to predict [model] a complex, non-linear, chaotic climate system.
The second is that the climate system is in a box that just doesn’t change very much and reproduces itself fairly reliably on yearly, decadal and even millennial time scales.
The first gives rise to weather, which is constantly changing and becomes unpredictable [“Australia, a land of droughts and flooding rains”] for different reasons at daily, weekly, monthly and yearly levels.
The second gives rise to climate change over years, centuries and millennia [ice ages and hot spells] at a very slow rate.
Most people confuse climate change and weather. Models are predicting short-term changes [weather] and calling it climate change.
All the current models are biased to warming, i.e. self-fulfilling models.
The chance of the earth warming up from year to year is 50.05 percent. Why? Because the current long-range view shows that we are still very slowly warming over the last 20,000 years. Climate models should basically be random-walk generators reverting to the mean (a minimal sketch of that idea follows this comment).
They should have the capacity to put in AOs, ENSOs, Bob Tisdale and Tamino, Judy and WUWT plus the IPCC, and make a guesstimate model yearly to 5-yearly, with the ability to change the inputs yearly to reflect the unknown unknowns that occurred each year. No model should be able to claim to tell the future more than a year out, because the chaotic nature of weather on short time frames excludes it.
The fact that all climate models are absolutely programmed to give positive warming, and thus cannot enter negative territory [which is just under 50% on long-term averages], means that all climate models are not only wrong but mendaciously wrong.
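For readers who want to see what such a “random-walk generator reverting to the mean” looks like, here is a minimal Python sketch. Every number in it (the long-run mean, the persistence, the shock size) is invented purely for illustration and is not taken from any model or dataset.

import random

# A mean-reverting random walk (an AR(1) process), per angech's suggestion.
MEAN = 0.0    # long-run anomaly the walk reverts to, deg C (made up)
PHI = 0.9     # persistence: how strongly this year remembers last year (made up)
SIGMA = 0.1   # size of the unpredictable yearly shock, deg C (made up)

anomaly = 0.0
series = []
for year in range(100):
    # pulled back toward MEAN each step, plus a shock nobody can predict
    anomaly = MEAN + PHI * (anomaly - MEAN) + random.gauss(0.0, SIGMA)
    series.append(anomaly)
# Each run wanders differently; only the long-run statistics are stable,
# which is why such a generator cannot claim skill more than a step or two out.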

Jeremy
June 19, 2013 7:28 am

I don’t mean to toot my own horn here, but I’ve been saying what Robert Brown said on the comment sections of this website and other climate comment sections for years. Physicists use significant computational power daily in their jobs attempting to unlock mysteries of single atoms, and they fail daily trying to do this. Single atoms are unthinkably simpler than trying to compute the climate system, and yet climatologists pretend like they know what the temperatures will be in 200 years.
Climate modelers who pretend their “predictions” have any meaning are either insane, or they’re selling something. This is something I’ve said here for years. Thanks Robert for saying it again in a better way.

John Archer
June 19, 2013 7:30 am

“Robert G. Brown uses a quantum mechanics analogy to make his point. The vast majority of us have no knowledge of quantum mechanics, nor do we have any way to make meaningful measurements in the field. In contrast, we have all spent a lifetime experiencing climate, so we all have at least a rudimentary knowledge of climate.
Believers in catastrophic global warming have often used the “doctor” analogy to argue their case. They state something like, “The consensus of climatologists says there is serious global warming. Who would you trust, a doctor (consensus) or a quack (non-consensus)?” A better analogy (given the average person’s familiarity with climate and motor vehicles, as opposed to quantum mechanics and medicine) might be, “Who would you trust, a friend or a used car salesman?””
— Alan D McIntire, June 19, 2013 at 5:25 am
I don’t think it’s a better analogy. The timescales under consideration are far too long for Joe Soap to have any meaningful acquaintance with changes in climate. Weather is a different matter, but that’s beside the point. So I prefer your medic analogy. Let’s stick with that then.
Now, what you and others like you are wilfully ignoring is the abundant evidence that we are dealing here with a care facility staffed exclusively by Harold Shipman clones.
So now how do you feel about that colonoscopy they’ve recommended you undergo? They want to rip you a new one, and worse.
It’s what they’ve been doing to the rest of us economically for decades now.
Being a layman confers no free pass in the responsibility stakes, and an argument from authority should be the very last resort for the thinking man. Always do your own thinking, as much as you can.

ferd berple
June 19, 2013 7:34 am

see “Green Jelly Beans Cause Acne”
==========
If you test enough climate models, some of them will accidentally get the right answer. Going forward these “correct” models will have no more predictive power than any other model.
If you do enough tests, eventually some will test positive by accident. If you only report the positive tests and hide the negative tests, you can prove anything with statistics, regardless of whether it is true or not. Which is what we see in the news: only the positive results are reported. (A toy simulation of this selection effect follows the link below.)
http://www.kdnuggets.com/2011/04/xkcd-significant-data-mining.html
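ferd berple’s selection effect is easy to demonstrate. Below is a hedged Python sketch with entirely invented data: score one hundred random “models” against the first half of a random “record”, keep the best one, then test it on the half it was never selected against.

import random

random.seed(1)

def rmse(a, b):
    # root-mean-square difference between two equal-length series
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

record = [random.gauss(0, 1) for _ in range(60)]                   # stand-in 60-year record
models = [[random.gauss(0, 1) for _ in range(60)] for _ in range(100)]

best = min(models, key=lambda m: rmse(m[:30], record[:30]))        # pick the best hindcast
print(rmse(best[:30], record[:30]))  # flattering: it was selected for this very fit
print(rmse(best[30:], record[30:]))  # on unseen years it falls back to chance level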

john robertson
June 19, 2013 7:42 am

Thank you Dr Brown the truth shines thro.
As for N. Stokes, there is a reason Climate Audit calls you Racehorse Haynes.

June 19, 2013 7:46 am

Thanks, Robert.
Yes, the “models ensemble” is a meaningless average of bad science.

megawati
June 19, 2013 7:50 am

The mean of the models is nothing but a convenient way of illustrating what they are saying. It was done by Monckton as it has been done by many others. In no way does it imply or suggest anything about the validity of the models, nor that the mean itself is a meaningful quantity in terms of having a physical underpinning.
I don’t see Mr Stokes defending the models anywhere, and neither am I – they patently suck. He is simply asking what bitch rgb wants to slap, since it’s primarily the practice of taking the mean and variance (the ‘implicit swindle’) that offends, apparently.

Resourceguy
June 19, 2013 7:52 am

By extension then, the 97% consensus does not know how to use or interpret statistics, and is not competent in physics either.

ferdberple
June 19, 2013 8:01 am

Frank K. says:
June 19, 2013 at 7:06 am
BTW, one reason you will never see any progress towards perfecting multi-model ensemble climate forecasts is that none of the climate modelers want to say whose models are “good” and whose are “bad”…
===========
the reason for this is quite simple. you find this all the time in organizations. no one dares criticize anyone else, because they know their own work is equally bad. if one person gets the sack for bad work, then everyone could get the sack. so everyone praises everyone, saying how good a job everyone is doing, and thereby protect their own work and jobs.
climate scientists don’t criticize other climate scientists work because they know their models have no predictive power. that isn’t why they build models. models are built to attract funding, which they do quite well. this is the true power of models.
instead, climate scientists criticize mathematicians, physicists and any other scientists that try and point out how poor the climate models are performing, how far removed from science the models are. climate scientists respond that other scientists cannot criticize climate science, because only climate scientists understand climate science.
in any case, climate science is a noble cause. it is about saving the earth, so no one may criticize. it is politically incorrect to criticize any noble cause, no matter how poor the results. at least they are trying, so if they make some mistakes, so what. they are trying to save the planet. you can’t make an omelet without breaking eggs.
no one lives forever. if some people are killed in the process, that is the price of saving everyone else. if anything, it is a mercy. it saved them having to suffer through climate change. they were the lucky ones. those that were left behind to suffer, those are the victims of climate change.

ferdberple
June 19, 2013 8:20 am

Greg L. says:
June 18, 2013 at 6:02 pm
that the errors of the members are not systematically biased
============
unfortunately, none of the models test negative feedbacks. they all assume atmospheric water vapor will increase with warming, yet observations show just the opposite: during the period of warming, atmospheric water vapor fell.
unfortunately, all the models predict a tropical hotspot – that the atmosphere will warm first, followed by the surface. however, this is not what has been observed. the surface has warmed faster than the atmosphere.
these errors point to systemic bias and high correlation of the errors, both of which strongly argue against the use of an ensemble mean. (a small numerical sketch follows this comment.)
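A small numerical sketch of that argument, with all numbers invented: give 23 “models” a shared bias plus independent scatter and average them. The scatter cancels; the shared bias survives no matter how many models join the ensemble.

import random

random.seed(0)
TRUTH = 0.10         # pretend "true" trend (made up)
SHARED_BIAS = 0.15   # error common to every model (made up)
NOISE = 0.05         # independent model-to-model scatter (made up)

trials = 10_000
err_sum = 0.0
for _ in range(trials):
    ensemble = [TRUTH + SHARED_BIAS + random.gauss(0, NOISE) for _ in range(23)]
    err_sum += sum(ensemble) / len(ensemble) - TRUTH
print(err_sum / trials)  # ~0.15: only the independent part averages away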

MinB
June 19, 2013 8:20 am

I’ve read this post four times now and appreciate its clearly articulated logic more each time I read it, although I must admit my favorite part is a physicist who says “b*tch slap”. Gotta love this guy.

June 19, 2013 8:23 am

I have two points, one of which is included in the various comments already given.
1) The climate models are clearly not independent, being based on related data sets and hypotheses. Thus you cannot use the usual approach to the uncertainty of the mean by saying: 23 models? Then the standard deviation of the mean is the standard deviation divided by the square root of 23. (A small numerical sketch of this point follows this comment.)
2) The average scenario curve (temperature versus years) is probably not the most likely scenario.
What could be done is to consider each model curve as a set of data for n years. The other models (m of them) also give such data sets. Then in multidimensional space, with n times m dimensions plus one for the temperature, each scenario is a single point. The most likely scenario is where the greatest density of points occurs. This might give a quite different result from just making an average temperature scenario. (By the way, “most likely” is the mode, not the mean, although in a symmetric distribution they coincide.)
Would it be worth making such an exercise? To my feeling, no, for lack of confidence in the models.
M. H. Nederlof, geologist
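Nederlof’s first point can be put in one formula. For N estimators with equal standard deviation sigma and pairwise correlation rho, the variance of their mean is sigma^2 * (1/N + (1 - 1/N) * rho); dividing by sqrt(N) is the rho = 0 special case. A short Python check, with rho = 0.6 chosen arbitrarily for illustration rather than estimated from any real ensemble:

sigma, rho = 1.0, 0.6
for n in (1, 23, 1000):
    naive = sigma / n ** 0.5                                    # assumes independence
    actual = (sigma ** 2 * (1 / n + (1 - 1 / n) * rho)) ** 0.5  # allows correlation
    print(n, round(naive, 3), round(actual, 3))
# n=23: naive 0.209 vs actual 0.786; the spread floors near sigma * sqrt(rho)
# no matter how many models are added.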

June 19, 2013 8:27 am

Thank You.

June 19, 2013 8:28 am

The Model Mean argument was meant, I think, to augment the equally unscientific consensus argument. Or, at the very least, formulated by the same ignorant minds.

Tim
June 19, 2013 9:16 am

If 97% of climate scientists, with their remarkably computationally primitive brains, all attempt to understand the global climate system and calculate the effects of increased atmospheric CO2, starting from different assumptions and having read only a small % of the available literature, does that make them right on average?

michael hart
June 19, 2013 9:29 am

As rgb correctly points out, different models effectively represent different theories. Yet they may all claim to be ‘physics-based’.
If you have a medical condition you might visit a physician, or a priest, or a witch-doctor, or an aromatherapist. One or none of them may give you an accurate diagnosis, but trying to calculate an “average” is just not logical, Captain.
But that hasn’t stopped the politically minded applying their own favorite therapy.

lemiere jacques
June 19, 2013 9:36 am

I guess everybody who has done a little science in his life knows that…
So the issue is elsewhere: why do so many people agree to debate this????
Next will be: you know, the Earth is not in a state of equilibrium…
Next will be: global temperature is not a temperature…. and is meaningless for guessing the heat content…
OK, most of what we saw was not refutable… it was not science.

June 19, 2013 9:37 am

Rob Ricket, nope, another guy. I liked his work, too, though.

StanleySteamer
June 19, 2013 9:41 am

We must always keep in mind that models are simplified constructs of complex systems. The focus is on the word simple as in “What do the simple folk do?” Hey, that makes for a good song title!!! It is, and the play is called Camelot. sarc/off

Martin Lewitt
June 19, 2013 9:44 am

Recall that in addition to the basic errors Dr. Brown discussed, even within the modeling field correlated error is documented across all the models in the diagnostic literature: on precipitation by Wentz, on surface albedo bias by Roesch, and on Arctic melting by Stroeve and Scambos. I’m sure this is just the tip of the iceberg, pardon the pun.

June 19, 2013 9:47 am

“Essentially, all models are wrong, but some are useful. …the practical question is how wrong do they have to be to not be useful.”
“Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.”
– George E. P. Box
Climate science has a dearth of statistics expertise and a big, fat stamp of mediocrity. I’m going to dub it “The Scarlet M”.

JimK
June 19, 2013 9:51 am

In bullseye shooting, if you shoot a circular pattern of sixes, it does not average out to a 10. I think the same principle applies here.

DirkH
June 19, 2013 9:52 am

megawati says:
June 19, 2013 at 7:50 am
“The mean of the models is nothing but a convenient way of illustrating what they are saying. It was done by Monckton as it has been done by many others. In no way does it imply or suggest anything about the validity of the models, nor that the mean itself is a meaningful quantity in terms of having a physical underpinning.
I don’t see Mr Stokes defending the models anywhere, and neither am I – they patently suck. He is simply asking what bitch rgb wants to slap, since it’s primarily the practice of taking the mean and variance (the ‘implicit swindle’) that offends, apparently.”
You are spreading disinformation. The multi-model mean is constantly presented by all the consensus pseudoscientists as the ultimate wisdom.
Let’s look at the IPCC AR4.
“There is close agreement of globally averaged SAT multi-model mean warming for the early 21st century for concentrations derived from …”
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-es-1-mean-temperature.html
I think the real problem for megawati, Stokes and all other warmists is that nice NSA subsidiary called Google. Thanks, NSA.

Tim Ball
June 19, 2013 9:54 am

Is the problem wider than the ensemble of 23 models that Brown discusses? As I understand it, each of the 23 model results is itself an average. Every time the model is run from the same starting point the results are different, so they do several runs and produce an average. I also understand the number of runs is probably not statistically significant, because it takes so long to do a single run.
The deception is part of the entire pattern of IPCC behavior and once again occurs in the gap between the Science report and the Summary for Policymakers (SPM). The latter specifically says “Based on current models we predict:” (IPCC, FAR, SPM, p. xi) and “Confidence in predictions from climate models” (IPCC, FAR, SPM, p. xxviii). Nowhere is the difference more emphasized than in this comment from the Science Report in TAR: “In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”
O’Keefe and Kueter explained how a model works: “The climate model is run, using standard numerical modeling techniques, by calculating the changes indicated by the model’s equations over a short increment of time—20 minutes in the most advanced GCMs—for one cell, then using the output of that cell as inputs for its neighboring cells. The process is repeated until the change in each cell around the globe has been calculated.” Imagine the number of calculations necessary; even at computer speeds of millions of calculations a second, it takes a long time. The run time is a major limitation.
In personal communication, IPCC computer modeller Andrew Weaver told me individual runs can take weeks. All of this takes huge amounts of computer capacity; running a full-scale GCM for a 100-year projection of future climate requires many months of time on the most advanced supercomputer. As a result, very few full-scale GCM projections are made.
http://www.marshall.org/pdf/materials/225.pdf
A comment at Steve McIntyre’s site, Climate Audit, illustrated the problem: “Caspar Ammann said that GCMs (General Circulation Models) took about 1 day of machine time to cover 25 years. On this basis, it is obviously impossible to model the Pliocene-Pleistocene transition (say the last 2 million years) using a GCM as this would take about 219 years of computer time.” So you can only run the models if you reduce the number of variables. O’Keefe and Kueter explain: “As a result, very few full-scale GCM projections are made. Modelers have developed a variety of short cut techniques to allow them to generate more results.” This was confirmed when I learned that the models used for the IPCC ensemble do not include the Milankovitch Effect; Weaver told me that it was left out because of the time scales on which it operates.
O’Keefe and Kueter continue: “Since the accuracy of full GCM runs is unknown, it is not possible to estimate what impact the use of these short cuts has on the quality of model outputs.” Omission of variables allows short runs, but it allows manipulation and removes the model further from reality. Which variables do you include? For the IPCC, only those that create the results they want. Also, every time you run the model it provides a different result, because the atmosphere is chaotic. They resolve this by doing several runs and then using an average of the outputs. (A back-of-envelope sketch of the run-time arithmetic follows this comment.)
After appearing before the US Congress a few years ago I gave a public presentation on the inadequacy of the models. My more recent public presentation on the matter was at the Washington Heartland Conference in which I explained how the computer models and their results were the premeditated vehicle used to confirm the AGW hypothesis. In a classic circular argument they then argued that the computer results were proof that CO2 drove temperature increase.
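The run-time point is easy to sanity-check with round numbers. Nothing below describes any particular GCM; the grid, the levels and the per-cell cost are hypothetical orders of magnitude only.

cells = 360 * 180 * 30         # roughly a 1-degree grid with 30 vertical levels (assumed)
steps = 365 * 24 * 3 * 100     # 20-minute time steps over a 100-year projection
flops_per_cell_step = 10_000   # assumed cost of the physics per cell update

print(f"{cells * steps * flops_per_cell_step:.1e} flops")  # ~5.1e+16

At the sustained speeds typical of mid-2000s supercomputers (very roughly 10^10 operations per second), that total works out to a couple of months of machine time, consistent with the “many months” figure quoted above.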

JimF
June 19, 2013 10:03 am

I would love to see a Global Warming Modeller’s Cookbook based on the ensemble recipe of, say, a cake. Start with an egg (i.e. CO2 content), add more eggs as required, then various bits and pieces of sugar, salt, baking powder, flour, milk and vanilla. Stir well, and bake as long as you like. Yummy, and it’s SO different every time.

Chad Wozniak
June 19, 2013 10:12 am

I’ve been saying all along that models are nothing but constructs engineered to replicate a foregone conclusion. Funny – but also sad and dangerous – that constructs should be preferred over hard evidence.
In re comments about engineers and other practical appliers of science mostly being skeptics – the historical perspective, if you are not blinded by ideology, also will certainly tend you towards skepticism, because the historical records of the Dust Bowl, the Medieval Warm Period, the Roman Climate Optimum and the Hittite-Minoan-Mycenean Warm Period leave no doubt of the lack of correlation, let alone causation, of warming with CO2 – and those records can’t be erased by even the most splendiferous, whoop-de-do models.
I shouldn’t think Mann & Co. have much chance of breaking into the libraries where these records are kept, to destroy them.

paddylol
June 19, 2013 10:29 am

Mosher, where art thou?

MarkW
June 19, 2013 10:36 am

Reminds me of the claims that we get a better picture of the planet’s temperature just by adding more readings.
Averaging together a bunch of thermometers of unknown provenance, with undocumented quality-control issues, does not create more accuracy; it actually creates more uncertainty. (A small simulation of this point follows.)
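MarkW’s point in a few lines of Python, with invented numbers: averaging only helps against random, zero-mean error. Give every thermometer a persistent unknown offset drawn around +0.5 and the average of ten thousand readings converges confidently to the wrong answer.

import random

random.seed(2)
TRUE_TEMP = 15.0
# each instrument keeps a persistent offset; the +0.5 mean drift is invented
offsets = [random.gauss(0.5, 0.3) for _ in range(10_000)]
readings = [TRUE_TEMP + off + random.gauss(0, 0.1) for off in offsets]
print(sum(readings) / len(readings))  # ~15.5: truth plus the mean bias, not truth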

Steve Oregon
June 19, 2013 10:36 am

Nick Stokes,
Either the Climate Models are junk or they are not.
There is far more evidence they are junk than ever existed to support AGW.
Do you believe the climate models are junk? Yes or no?
And will you ever run out of lipstick?
Because at some point even you must acknowledge the pig is damn ugly.

Doug Proctor
June 19, 2013 10:50 am

Precisely so. There is no validity to having the original Scenarios still present in AR5 when the last 25 years of observation have not matched most of the Scenarios. All but the lowest have been invalidated in the first 25 years of their model run.
But your comment that the physics is not consistent, that the variations in Scenarios are more than just the result of the addition of noise, surprises me: the spaghetti graphs didn’t look like they “dolphined”, i.e. bounced all over the place. They looked more like they had different amplitude swings due to random factors but with a consistent upward climb, indicative of similar primary parameters with different scalar values. If the physics is not consistent in the ensemble, then we have a very serious problem: only similar fundamentals should be grouped, and the acceptance of diverse physics has to be accompanied by a recognition that the science is not settled and the outcome not certain TO THE POINT THAT NATURE MAY BE RESPONSIBLE, NOT MAN.
Still, all this discussion does not mean that the temperatures can’t rise to scary levels of >4C by 2100, as the IPCC narrative would have it, IF:
a) the physics as modelled is identified as “wrong”, and so needs to be corrected,
b) the contributing negative forcing parameters have, in part or in whole, more power and so need to be corrected, or
c) the climate is sufficiently chaotic, non-linear or threshold-controlled to suddenly shift, which is a paradigm shift of climate narrative that needs to be corrected.
If any of these three situations is claimed, happy today can be made miserable tomorrow – but at a cost of “settled” and “certain”.
At any rate, I agree fully that all climate Scenarios need to be restarted at 2013, with only those that include the recent past displayed. OR explanations given as to how we get from “here” to “there” with the other Scenarios.
Except one: that going forward we have a Scenario that tracks the last 25 years AND takes us to other places (than the current trend) in 2100.

Richard M
June 19, 2013 11:02 am

Sometimes we are a little hard on the climate modelers. They are doing the best they can do. It isn’t the models per se that are the problem. It is how they are being used by the propagandists at the IPCC and environmental groups. An analogy may be appropriate.
Say we wanted to model a horse. At the current time we have the technology to properly model a tail wagging. We simply do not have enough information (knowledge) to model a complete horse. However, idealistic groups wish to claim that horses flop around and swat flies. They show the models to support their claims. The fact that the modelers only could model the tail gets lost.
Of course, the modelers really should be standing up and complaining. Maybe some of them are and they are being silenced. The basic problem of insufficient knowledge is not getting the attention it should be getting. It is up to some honest scientists like Dr. Brown, Dr. Spencer and small group of others to bring this to the attention of the world. Where are the rest of the scientists?

Richard M
June 19, 2013 11:17 am

I believe the chaos argument is fraught with peril (even though I have used it in the past). It is completely true that chaotic systems can have periods of non-chaotic behavior. This is true even in weather. Places like the tropics often have long periods of time where the weather from one day to the next varies little. It is only occasionally interrupted by a tropical storm. The same can be argued for climate only with longer time periods.
Areas around attractors can be quite stable. If we are not experiencing any of the forces that drive a system away from that attractor state, then it should be possible to make predictions some distance into the future. I would argue that the current cyclic ocean patterns have been driving our climate for over 100 years (possibly longer) with a slight underlying warming trend (possibly regression to the mean). There is really little chaotic behavior to be seen from a climate perspective.
This doesn’t mean some force might not be right around the corner throwing everything into a tizzy. However, it shouldn’t stop us from trying to understand what might happen if those chaotic forces don’t appear.

KenB
June 19, 2013 11:19 am

Monkeying with a supercomputer keyboard does not a scientist make!

June 19, 2013 11:24 am

Richard M, the climate modelers I’ve encountered have defended their models and have defended the way they are being used. It’s not just the IPCC and environmental groups. It’s rank-and-file climate modelers. They appear to have removed their models from direct evaluation using standard physical methods, and seem to prefer things that way.

Janice Moore
June 19, 2013 11:54 am

Stephen Richards! Wow. Thanks, so much, for your generous and kind words. They were especially encouraging on a thread such as this one, for, while I understood the essence of Brown’s fine comment/post, I could only understand about half the content.
***********************************************
Below are some (couldn’t include them all, there were so MANY great comments above — and likely more when I refresh this page!) WUWT Commenters’ HIGHLIGHTS (you are a wonderfully witty bunch!)
************************************************
“These go to eleven.”
— McComber Boy (today, 5:57AM)– that Spinal Tap clip was HILARIOUS. Thanks!
************************
“… like a crowd of drunks with arms on each other’s shoulders, …
we’re all going in this direction because…..”

[Mike M. 6:10AM June 19, 2013]
We CAN! lol (“Yes, we can!” — barf)
**********************************
“… if you shoot a circular pattern of sixes it does not average out to a 10.”
[Jim K. 9:51AM June 19, 2013]
LOL. Waahll, Ahhlll be. Waddaya know. [:)]
******************************************************
For a good laugh see: “Global Warming Modeller’s Cookbook” (by Jim F. at 10:03AM today)
*************************************************
“‘… E’s passed on! This parrot is no more! He has ceased to be! ‘E’s expired and gone to meet ‘is maker! ‘E’s a stiff! Bereft of life, ‘e rests in peace! If you hadn’t nailed ‘im to the perch ‘e’d be pushing up the daisies! … THIS IS AN EX-PARROT!!…..” [Jimbo quoting Monty Python]
LOL. You always find the greatest stuff, Jimbo.
******************************************************
In sum (this bears repetition and emphasis, as do 90% of the above comments — can’t acknowledge ALL the great insights, though!):
All the climate models were wrong. Every one of them.
“You cannot average a lot of wrong models together and get a correct answer.”

[D. B. Stealey, 6:14PM, June 18, 2013]

KLA
June 19, 2013 12:05 pm

As I understand it, it is like rolling a die multiple times and then averaging the results. If you roll it often enough you get an average of 3.5, and then you bet that the die will “on average” come up 3 or 4. Casino owners love such statistically challenged gamblers. (A four-line check of the arithmetic follows.)
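KLA’s arithmetic checks out; here is a short confirmation in Python:

import random

rolls = [random.randint(1, 6) for _ in range(100_000)]
print(sum(rolls) / len(rolls))                       # ~3.5, the long-run average
print(sum(r in (3, 4) for r in rolls) / len(rolls))  # ~0.333: the "near-average" bet loses two times in three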

StephenP
June 19, 2013 12:08 pm

Ed Davey’s at it again. Apparently we are all “crackpots”.
http://www.telegraph.co.uk/earth/energy/10129372/Ed-Davey-Climate-change-deniers-are-crackpots.html

June 19, 2013 12:31 pm

Please stop attacking Nick Stokes. He doesn’t have to come here but he does.
He is one of the few of his opinion who can put a polite conversation across and is willing to.
I don’t agree with him over the significance of everyone drawing mean lines on ensembles of multi-model graphs; I think everyone who does it is wrong.
But I applaud his courage and willingness to debate when he comes here and points out that it has become the climate industry norm. A bad norm, I think, but he is right when he says it is the norm.
That’s why I look at the impact on AR5 and not the politics of those who use such graphs in presentations.

Lars P.
June 19, 2013 12:55 pm

Excellent. I second many comments above. Wonderfully liberating, clear judgement, well said. And it had to be said; it was long overdue.
And again, why has this not already been done? (Eliminating models which show unreasonable results?!)
Which begs the question: are the persons doing this modelling interested in maintaining the climate hysteria, or in scientific research?
http://www.drroyspencer.com/2013/06/still-epic-fail-73-climate-models-vs-measurements-running-5-year-means/
“If the observations in the above graph were on the UPPER (warm) side of the models, do you really believe the modelers would not be falling all over themselves to see how much additional surface warming they could get their models to produce?”

TomRude
June 19, 2013 12:57 pm

Mike MacCracken, the Director of the Climate Institute, shares his knowledge (or lack of it) on the Yahoo climatesceptic group…
“The emerging answer seems to be that a cold Arctic (and so strong west to east jet stream) is favored when there is a need to transport a lot of heat to the Arctic—that being the relatively more effective circulation pattern. With a warmer Arctic, the jet stream can meander more and have somewhat bigger (that is latitudinally extended) waves, so it gets warm moist air further poleward in the ridges and cold air further equatorward in the troughs. In addition, the wavier the jet stream, the slower moving seem to be the waves, so one can get stuck in anomalous patterns for a bit longer (so get wetter, or drier, as the case may be).
There was an interesting set of talks on this by Stu Ostro, senior meteorologist for The Weather Channel, and Jennifer Francis, professor at Rutgers, on Climate Desk Live (see http://climatedesk.org/category/climate-desk-live/). Basically, such possibilities are now getting more attention as what is happening is looked at more closely and before things get averaged away in taking the few decade averages to get at the changes in the mean climate—so, in essence, by looking at the behavior of higher moments of weather statistics than the long-term average. And doing this is, in some ways, requiring a relook at what has been some traditional wisdom gleaned from looking at changes in the long-term average.
Mike MacCracken”
That MacCracken would take the half-baked, undemonstrated inference by Francis as science is pathetic. GIGO!

Lars P.
June 19, 2013 1:05 pm

hm, just read about the 97% number in the thread and it struck me: looks like 97% of the models are junk…

clark
June 19, 2013 1:09 pm

I think this is a brilliant article. While I now know that the mean has no statistical significance, I think there is no doubt it has acquired a political significance over the years. And to the degree that the climate deviates away from this mean it hurts the cause of the climate change activists. I think all of us who follow this should realize that being right on the statistics and science does not always equate to being on the winning side politically.

Latitude
June 19, 2013 1:14 pm

StephenP says:
June 19, 2013 at 12:08 pm
Ed Davey’s at it again. Apparently we are all “crackpots”.
========================================
“and while many accept we will see periods when warming temporarily plateaus, all the scientific evidence is in one direction”
====================================
He can admit to this, but not admit the opposite… when cooling temporarily plateaus:
http://www.foresight.org/nanodot/wp-content/uploads/2009/12/histo3.png

Gary Hladik
June 19, 2013 1:23 pm

angech says (June 19, 2013 at 7:21 am): “The chance of the earth warming up from year to year is 50.05 percent. Why? Because the current long-range view shows that we are still very slowly warming over the last 20,000 years.”
The Holocene began about 11,500 years ago with the end of the latest ice age. Since the Holocene Optimum about 8,000 years ago, proxy temps have generally trended down. So you could say we’re not just “recovering” from the Little Ice Age, we’re also recovering from the “Holocene Minimum”. Personally, I hope the recovery continues past the current temp plateau. 🙂

Gail Combs
June 19, 2013 1:23 pm

angech says:
June 19, 2013 at 7:21 am
…. All the current models are biased to warming, i.e. self-fulfilling models.
The chance of the earth warming up from year to year is 50.05 percent. Why? Because the current long-range view shows that we are still very slowly warming over the last 20,000 years….
>>>>>>>>>>>>>>>>>>>>>>>
That is another one of the FALSE ASSumptions. The Earth is now in a long term cooling mode but that does not fit the political agenda.
10,000 yrs GISP (Greenland Ice Core) graph – data from Richard B. Alley of Penn State, who was elected to the National Academy of Sciences, chaired the National Research Council committee on Abrupt Climate Change for well over a decade, and in 1999 was invited to testify about climate change by Vice President Al Gore. In 2002 the NAS (Alley, chair) published the book “Abrupt Climate Change”.
140,000 yrs Vostok graph (present time on the left), data source NOAA and Petit et al. 1999
Graph of the last four interglacials, Vostok (present time on the left), data source Petit et al. 1999
NH solar energy overlaid on the Greenland and Vostok ice core data, from John Kehr’s post “NH Summer Energy: The Leading Indicator”: “Since the peak summer energy levels in the NH started dropping 9,000 years ago, the NH has started cooling…. That the NH has been cooling for the past 6,000 years has found new supporting evidence in a recent article (Jakobsson, 2010)…”
John points out more evidence from another paper in his post “Norway Experiencing Greatest Glacial Activity in the Past 1,000 Years”, and a newer paper in his post “Himalaya Glaciers are Growing”.
This comment of John’s is the take-home:

…Earth experienced the warmest climate of the last 100,000 years about 6,000 years ago, and since then (especially over the past 4,000 years) the Northern Hemisphere has been experiencing a gradual cooling. That does not mean that each century is colder than the one before, but it means that each millennium is colder than the one before…

The Climate is cooling even if the Weather warms over the short term.
As far as I can tell from the geologic evidence we have been darn lucky the temperature has been as mild and as even as it has been since ‘Abrupt Climate Change’ is part of the geologic history of the earth.

Abrupt Climate Change: Inevitable Surprises
Committee on Abrupt Climate Change
National Research Council
NATIONAL ACADEMY PRESS
Washington, D.C.
executive summary:
…Recent scientific evidence shows that major and widespread climate changes have occurred with startling speed. For example, roughly half the north Atlantic warming since the last ice age was achieved in only a decade, and it was accompanied by significant climatic changes across most of the globe. Similar events, including local warmings as large as 16°C, occurred repeatedly during the slide into and climb out of the last ice age. Human civilizations arose after those extreme, global ice-age climate jumps. Severe droughts and other regional climate events during the current warm period have shown similar tendencies of abrupt onset and great persistence….

“Tendencies of abrupt onset and great persistence” sure sounds like Dr. Brown’s strange attractors.
More on Strange Attractors
http://www.stsci.edu/~lbradley/seminar/attractors.html

Mindert Eiting
June 19, 2013 1:52 pm

Latitude says:
June 19, 2013 at 6:55 am “thank you……over and out”.
Compare it with a multiple-choice exam. If you don’t know anything about the subject, by pure chance you may get some items correct, but you fail because of your total score. What should we think of your good-luck achievements? Something special, with a few percent of answers correct? Tell it to your teacher and he will say that you failed. This happens to everybody. Why should we be kind to the climate modellers?

Gail Combs
June 19, 2013 1:53 pm

Alan D McIntire @ June 19, 2013 5:25 am
….Robert G. Brown uses a quantum mechanics analogy to make his point. The vast majority of us have no knowledge of quantum mechanics nor do we have any way to make meaningful measurements in the field. In contrast, we have all spent a lifetime experiencing climate, so we all have at least a rudimentary knowledge of climate….
>>>>>>>>>>>>>>>>
I have no problem with Dr. Brown’s use of a quantum mechanics analogy. It makes me go out and learn something new. Dr. Brown was careful to give enough information that a layman could do a search for more information if he became confused but his explanation was good enough that you could follow the intent of his explanation with just a sketchy knowledge of physics.
I also think the analogy was very good because you are talking about the physics used to describe a system that is less complex than climate but one that is politically neutral (medicine is not). The other positive point about Dr. Brown’s analogy is that it showed just how complicated the physics gets for a ‘simple atom’, and therefore emphasized how much more complex the climate is and how idiotic it is to think we can use these models to say we are looking at CAGW.
…….
John Archer says
“…Being a layman confers no free pass in the responsibility stakes, and an argument from authority should be the very last resort for the thinking man. Always do your own thinking, as much as you can.”
John is correct. We now know we cannot rely on the MSM to give us anything but propaganda, so if we do not want to be a conman’s mark or a politician’s patsy, we have to do our own research and thinking. (Now if only we had someone to vote FOR…)

June 19, 2013 2:01 pm

So “The Flying Spaghetti Monster” has been grounded. 8-)
Why does what they did with the spaghetti models remind me of “Mike’s Nature Trick”?
If one graph starts to go south on you, just graft in another!

Duster
June 19, 2013 2:15 pm

Nick Stokes says:
June 18, 2013 at 7:00 pm
Alec Rawls says: June 18, 2013 at 6:35 pm
“Nick Stokes asks what scientists are talking about ensemble means. AR5 is packed to gills with these references.”
But are they means of a controlled collection of runs from the same program? That’s different. I’m just asking for something really basic here. What are we talking about? Context? Where? Who? What did they say?

Nick, this response is so disingenuous it should be embarrassing. Simply running a web search for the term “climate model ensemble means” turns up numerous examples of its use, and none of the top-listed sites and papers where it occurs is a skeptic site or critical paper. Tebaldi (2007), for instance, in Ensemble of Climate Models, though she does note that the models are not independent, does state that the “model mean is better than a single model”, though what it might be better for is debatable. Others suggest weighting individual models by their proximity to reality, in effect a “weighted mean” approach, since no results are discarded as irrelevant.
Really, disagreeing with any scientist’s argument is the universal right of all other scientists. Why not simply argue the science? What have citations, references or context to do with whether models are meaningful, either individually or in ensemble?

Gail Combs
June 19, 2013 2:23 pm

Steve Oregon says:
June 19, 2013 at 10:36 am
Nick Stokes,….
Because at some point even you must acknowledge the pig is damn ugly.
>>>>>>>>>>>>>>>>>>
Don’t insult the pig.

Duster
June 19, 2013 2:25 pm

angech says:
June 19, 2013 at 7:21 am

The chance of the earth warming up from year to year is 50.05 percent. Why? Because the current long-range view shows that we are still very slowly warming over the last 20,000 years. Climate models should basically be random-walk generators reverting to the mean…

This is one of those factoids that gets one accused of cherry-picking. Twenty thousand years ago was the Last Glacial Maximum (well, more like 19,000, but near enough). In fact the planet has been “cooling” for roughly the last 100 ky or so, warming for the last 20 ky, cooling for the last 8 ky, etc. It illustrates the reason that “trends” are not merely controversial but very likely meaningless in climate discussions, and completely irrelevant when attempting to tease out minor (global) anthropic influences.

Gary Hladik
June 19, 2013 2:39 pm

Tim Ball says (June 19, 2013 at 9:54 am): “O’Keefe and Kueter explained how a model works:”
Thanks for the reference, Tim. It looks like a good place to start.
I like this paragraph early on:
“A model is considered validated if it is developed using one set of data and its output is tested using another set of data. For example, if a climate model was developed using observations from 1901 to 1950, it could be validated by testing its predictions against observations from 1951 to 2000. At this time [2004], no climate model has been validated.” (emphasis mine)
It seems our political masters have been making policy based on chicken entrails. Again.

June 19, 2013 2:40 pm

Let’s say you went to a garage sale and bought a box of rusted and broken toys for a penny. No child can play with them. They have zero play value. But one of them is (or was) a Buddy L. Another a Lionel 700E. Etc. Not a one of them works, but they may have some value to a collector.
It would seem these models have no value in predicting climate but they may hold value for those out to collect funding and/or power and influence.

Nick Stokes
June 19, 2013 2:41 pm

Duster says: June 19, 2013 at 2:15 pm
Nick Stokes says:
“But are they means of a controlled collection of runs from the same program? That’s different. I’m just asking for something really basic here. What are we talking about? Context? Where? Who? What did they say?”
“Nick, this response is so disingenuous it should be embarrassing.”

How? RGB made a whole lot of specific allegations about statistical malpractice. We never heard the answers to those basic questions I asked in your quote. After a long rant about assuming iid variables, variances etc., all we get in this thread is reference to a few people talking about ensemble means. Which you’ll get in all sorts of fields – and as I’ve said here, a multi-model mean is even prominent in a featured WUWT post just two weeks ago.

Robert Scott
June 19, 2013 2:48 pm

I make this post as a regular reader of this blog but a rare commentator, as my expertise in life has little to offer to the debate (other than an ability, as a fraud investigator, to recognise bluster and obfuscation as a defence mechanism by those who are the subject of a justified query into their past activities).
First, I must agree with Mr Courtney when he gives credit to Nick Stokes for his courteous attempts to answer the fusillade of well-argued criticisms of the AGW/CAGW/Climate Change mantra. I do therefore wonder why, if the science is so “settled”, he is so alone in his mission. Surely, if those others who so resolutely believe that they have got it right are so confident, not only in their case but in the errors of the sceptics, they would enter the fray and offer argument to prove their detractors wrong. We have seen here in this post a refutation that is (whilst way outside my expertise) carefully argued by someone who is clearly knowledgeable in the field, yet (apart from Nick’s valiant but unsuccessful attempts) not one supporter of the allegedly overwhelming consensus group is willing to point out where its author goes wrong. Instead, we are offered weasel comments (I paraphrase) like “we don’t engage with trolls, it only encourages them”.
Message to the non-sceptics: Answer the criticisms and you might just gain some headway with the growing band of sceptics. Carry on like you are and your cause is doomed.

Scott Basinger
June 19, 2013 2:59 pm

Nick,
Just because it’s a prominent WUWT post doesn’t mean it’s correct. Never knew you were such a fanboi.

AndyG55
June 19, 2013 3:20 pm

It’s actually quite funny watching these modelling guys falling over each other.
HadCrud and GISS have been “adjusted” to create artificial warming trends.
The modellers calibrate to these 2 temperature series (which probably means using high feedback factors).
Then they wonder why their projections are all way too high!!
Sorry guys, but ………. DOH!!!

AndyG55
June 19, 2013 3:33 pm

And of course, if they want their models to start producing reasonable results, they have to first admit to themselves that HadCrud and GISS are severely tainted, and that there hasn’t really been much warming since about 1900.
Don’t see that happening somehow. 😉

Frank
June 19, 2013 3:49 pm

Dr. Brown (and Nick and GregL): In AR4 WG1 section 10.1, the IPCC admits some of the problems you discuss, calling the collection of models they use an “ensemble of opportunity” and warning against statistical interpretation of the spread. The real problem is that these caveats are found only in the “fine print” of the report, instead of being placed in the caption of each relevant graph.
“Many of the figures in Chapter 10 are based on the mean and spread of the multi-model ensemble of comprehensive AOGCMs. The reason to focus on the multi-model mean is that averages across structurally different models empirically show better large-scale agreement with observations, because individual model biases tend to cancel (see Chapter 8). The expanded use of multi-model ensembles of projections of future climate change therefore provides higher quality and more quantitative climate change information compared to the TAR. Even though the ability to simulate present-day mean climate and variability, as well as observed trends, differs across models, no weighting of individual models is applied in calculating the mean. Since the ensemble is strictly an ‘ensemble of opportunity’, without sampling protocol, the spread of models does not necessarily span the full possible range of uncertainty, and a statistical interpretation of the model spread is therefore problematic. However, attempts are made to quantify uncertainty throughout the chapter based on various other lines of evidence, including perturbed physics ensembles specifically designed to study uncertainty within one model framework, and Bayesian methods using observational constraints.”
One of the first papers to properly explore ensembles of models was Stainforth et al., Nature 433, 403–406 (2005):
http://media.cigionline.org/geoeng/2005%20-%20Stainforth%20et%20al%20-%20Uncertainty%20in%20predictions%20of%20the%20climate%20response%20to%20rising%20GHGs.pdf

Janice Moore
June 19, 2013 4:04 pm

“Don’t insult the pig.” [Gail Combs at 2:23PM today]
Aww. That was sooo cute. No lipstick needed there, that’s for sure!
And who could insult this little fella?

bathes
June 19, 2013 4:07 pm

So no chimpanzee can predict the weather.
But the average of ten chimpanzees gets it pretty much right!
In fact – the more chimpanzees, the more right they will be!
Yeah, right…

Janice Moore
June 19, 2013 4:12 pm

Unlike Babe, however, the Fantasy Science Club’s models will NEVER get the sheep herded into the pen (i.e., a real problem solved or even plausibly defined!).
There will be no Farmer Hoggett smiling down on them saying, “That’ll do, pig. That’ll do.”
“Pig slop!” is all the Fantasy Science Club can ever expect or deserve to hear.
No matter how sincere and earnest they are.

Eve
June 19, 2013 4:17 pm

The point is that nobody cares. The writers of these models care only about showing warming if people do not stop using energy, or use renewable energy, or pay more to their government, etc. They do not care if they are correct, or if anyone knows whether they are correct or incorrect. The average person on the street does not care, because they could not understand the statistics, and they already know that humans are causing the planet to warm up, or cause extreme weather, or whatever they are calling it today. As I go through my everyday life, I notice that I am the only one who cares about the cost of electricity, gas, heating oil, etc. I am pretty sure I have more money than my friend, but she has her air conditioning on in 70 F temperatures while my furnace is still kicking in at night. My boss, who pays for carbon credits whenever he sends anything by UPS, has his air on 24/7 all spring, summer and fall in Philly. My sister from the UK, who believes this climate disruption stuff, uses air conditioning in the car at 72 F. I know she knows how to roll down a window.
There are times I think I am living in a nightmare because nobody could think up stuff this strange.

Gary Pearse
June 19, 2013 4:22 pm

My takeaway is that we should examine the most important ‘piece’ and add on pieces to refine the models. Obviously it is no real scientist’s idea that we should assume CO2 dunnit and build all the models with that as the most important piece, which then has to be rationalized by exaggerating “lesser” cooling pieces.
An engineering approach: I think it would be worthwhile to design a model of an earth-size sphere of water with a spin, a distance from the sun, tilt, etc., and an undefined atmosphere that allows gradational latitudinal heating to occur, and see what happens to create currents and heat flow. After that you have the general things that happen just because it is water being heated. Now you add some solid blocks that divide the water into oceans: a zig-zag, equal-width ocean oriented N-S (Atlantic); a very large wedge-shaped ocean with the apex at the north end, broadening out to the south (Pacific); a circular ocean space at the north pole end with a small connection to the Pacific and a wide connection to the Atlantic. Put a large circular block centered on the south pole and trim the blocks on the southern end to permit complete connection of the southern ocean to both the Pacific and Atlantic, with a restriction…. Now “reheat” the model and see what the difference is. Then characterize the atmosphere and couple it with the ocean, and let’s see what we get, permitting evaporation and precipitation, winds and storms. We will better understand the magnitudes of the different influences of the major pieces. Finally we can play with this model: reduce the sun’s output, add orbital changes, magnetics, etc. If we need a bit more warming, we can then perhaps add an increment from CO2 or some other likely parameter.

Nick Stokes
June 19, 2013 4:30 pm

Frank says: June 19, 2013 at 3:49 pm
“Dr. Brown (and Nick and GregL): In AR4 WG1 section 10.1, the IPCC admits some of the problems you discuss, calling the collection of models they use an “ensemble of opportunity” and warning against statistical interpretation of the spread. The real problem is that these caveats are found only in the “fine print” of the report, instead of being placed in the caption of each relevant graph.”

It’s hardly in the fine print – it’s prominent in the introduction. It seems like a very sensible discussion.
People do use ensemble averages. That’s basically what the word ensemble means. What we still haven’t found is anything that remotely matches the rhetoric of this post.

Jimbo
June 19, 2013 5:05 pm

Nick, how many of the climate models failed in their temperature projections? How many succeeded? Please don’t try to distract; answer the question.

eyesonu
June 19, 2013 5:14 pm

Alan D McIntire says:
June 19, 2013 at 5:25 am
===============
Alan, there is a wide range of viewers at WUWT, and quite a bit more than a few have a very good grasp of physics. Those who don’t probably never read past the very first part of Dr. Brown’s post. Some of us have read and reread what he wrote so as not to miss anything. Brown’s post is for those of us who can grasp the context, or those who wish to. Kind of like WUWT University.
I believe that the level of knowledge of the majority of readers here @ WUWT would be astounding by any measure. To be quite honest, it would be a boring site for one not so knowledgeable.

AndyG55
June 19, 2013 5:16 pm

Nick, If you are an Aussie, you know the expression..
You can’t polish a t***, but you can sprinkle it with glitter!

Jimbo
June 19, 2013 5:17 pm

I see unemployment rearing its ugly head for lazy Nintendo players. If only they went out more often and spent less time pressing the ENTER button, writing up crap and getting generous funding. This whole fraud has been driven by money and the ‘hidden’ agenda of activists. The time for being nice is over. Should I be nice to con artists who defraud little old women? Of course not.

ferdberple
June 19, 2013 5:38 pm

Frank says:
June 19, 2013 at 3:49 pm
because individual model biases tend to cancel (see Chapter 8).
===========
Sorry, but that is nonsense. If the model biases showed a random distribution around the true (observed) mean, then one could argue that the ensemble mean might have some value. However, that is not the case.
Look at the model predictions. They are all higher than observed temperatures. The odds of this being due to chance are so fantastically small as to be impossible.
What the divergence between the ensemble mean and observation is showing you is that the models have a systemic warm bias, in no uncertain terms, and the odds that this is accidental are as close to zero as makes no difference. (A one-line version of the arithmetic follows.)
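That claim can be made precise with a simple sign test. Taking the 23-model count used earlier in this thread, and supposing each model were as likely to run cold as warm if its errors were truly unbiased:

p_all_warm_by_chance = 0.5 ** 23
print(p_all_warm_by_chance)  # ~1.2e-07, about one chance in eight million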

Alex
June 19, 2013 5:38 pm

As an old reader of Climate Audit I must admit I simply ignore Nick Stokes. Zero credibility. Always wrong, always missing the big picture.

Janice Moore
June 19, 2013 5:48 pm

“An engineering approach: I think it would be worthwhile to design a model of an earth-size sphere of water with a spin, a distance from the sun, tilt, etc., … and see what happens to create currents and heat flow.” [Gary Pearse at 4:22PM today]
A real model! Of course, only a genuine gearhead (that is a compliment, by the way), would come up with that. GREAT IDEA!
There’s an old globe-shaped, revolving sign that the Seattle P-I used (until it — thank You, Lord — went out of business a few years ago (2009?)). That might be used… Have NO idea where it is now. Size? Not sure, but from what I recall when driving by, it was likely about 10 feet in diameter. Yeah, I realize an old Union 76 ball or any ball would do (it needn’t be an actual Earth globe), but it would LOOK cool. Maybe you could make use of the neon tubing used for the lettering, too. And the motor.
Well, sorry, WUWT scholars, for all that very basic-level brainstorming about Pearse’s model. Kind of fun to think about how it could be implemented (at the most basic level, I mean — I have NO IDEA how to design the details!).

Gail Combs
June 19, 2013 5:58 pm

Gary Pearse says:
June 19, 2013 at 4:22 pm
My takeaway is that we should examine the most important ‘piece’ and add on pieces to refine the models. Obviously it is no real scientist’s idea that we should assume CO2 dunnit and build all the models with that as the most important piece, which then has to be rationalized by exaggerating “lesser” cooling pieces…..
>>>>>>>>>>>>>>>>>>>>>>>
HUH?
The whole point of the IPCC was to show ‘CO2 dunnit’.
The IPCC mandate states:

The Intergovernmental Panel on Climate Change (IPCC) was established by the United Nations Environmental Programme (UNEP) and the World Meteorological Organization (WMO) in 1988 to assess the scientific, technical and socio-economic information relevant for the understanding of human induced climate change, its potential impacts and options for mitigation and adaptation.
http://www.ipcc-wg2.gov/

The World Bank had placed an employee, Robert Watson, as the IPCC chair. Recently the World Bank produced a report designed to scare the feces out of people: see the WUWT discussion of, and link to, The World Bank 4 degree report.
The World Bank went even further and was whispering in the US Treasury Department’s ear on how to SELL CAGW so the US government can collect more taxes (to pay the bankers interest on their funny-money debt):

FOIA and the coming US Carbon Tax via the US Treasury
In November, CEI and I sued the Department of the Treasury to produce emails and other records mentioning “carbon”….
Despite its best efforts, Treasury managed to hand over some docs that in an attentive world should prove extremely useful, offering fantastic language, if often buried in pointy-headed advice they received in PowerPoints and papers from the IMF, G-20 and, in graphic terms, an analysis from the World Bank on how to bring a carbon tax about.
These documents represent thoughtful advice on how to mug the American taxpayer and coerce them out of unacceptable and anti-social behavior, diverting at least 10% of the spoils to overseas wealth transfers. The major focus is language, how to sell it to the poor saps not by noting the cost or that it is a tax but as, for example, the way to be the leader in something like solar technology.
This expensive advice certainly does sound familiar. The best language came in a paper produced for World Bank clients (Treasury)….
…One email harkened back to the World Bank’s advice on how to sucker the public into tolerating more energy taxes, in a wink to Harvard’s Joseph Aldy that it sure would be neat to see some work on what the public thinks about a “carbon charge”, instead of what he had presented to them — how they view a national renewable energy standard. In short, it’s all about getting the masses nodding….

Now the TRUE face of the World Bank, and the elite
This graph shows World Bank funding for COAL-fired plants in China, India and elsewhere went from $936 billion in 2009 to $4,270 billion in 2010. (20% of that money came from US taxpayers.) What we taxpayers ‘bought’ with those tax dollars was up to a 40% loss in wages (for men with a high school education) and an approx. 23% unemployment rate.
Meanwhile, according to the IMF, …the top earners’ share of income in particular has risen dramatically. In the United States the share of the top 1 percent has close to tripled over the past three decades, now accounting for about 20 percent of total U.S. income… Seems all those solar farms and windmills produce lots of $$$ for the top 1%, because the USA sure isn’t producing much of anything else. U.S.-based industry’s share of total nonfarm employment has now sunk to 8.82 percent: Shadow Government Statistics link 1 (explanation) and link 2.
President Obama will face a continuing manufacturing recession…. The brightest face that can reasonably be put on this news is that U.S.-based manufacturing is starting to look like the rest of the American economy – barely slogging along. If you look at the Alternate Gross Domestic Product chart, that is not cheery news.
However it does make the financiers very happy.

World Bank Carbon Finance Report for 2007
The carbon economy is the fastest growing industry globally with US$84 billion of carbon trading conducted in 2007, doubling to $116 billion in 2008, and expected to reach over $200 billion by 2012 and over $2,000 billion by 2020

We see an attractive long-term secular trend for investors to capitalize on over the coming 20–30 years as today’s underinvested and technologically challenged power grid is modernized to a technology-enabled smart grid. In particular, we see an attractive opportunity over the next three to five years to invest in companies that are enabling this transformation of the power grid.
http://downloads.lightreading.com/internetevolution/Thomas_Weisel_Demand_Response.pdf

Smart Meters are needed so power companies can shut down the power to your house when the wind stops blowing or a cloud passes the sun.

The Department of Energy Report 2009
A smart grid is needed at the distribution level to manage voltage levels, reactive power, potential reverse power flows, and power conditioning, all critical to running grid-connected DG systems, particularly with high penetrations of solar and wind power and PHEVs…. Designing and retrofitting household appliances, such as washers, dryers, and water heaters with technology to communicate and respond to market signals and user preferences via home automation technology will be a significant challenge. Substantial investment will be required….

Smart Grid System Report Annex A and B
Smart-grid technologies will address transmission congestion issues through demand response and controllable load. Smart-grid-enabled distributed controls and diagnostic tools within the transmission system will help dynamically balance electricity supply and demand, thereby helping the system respond to imbalances and limit their propagation when they occur. These controls and tools could reduce the occurrence of outages and power disturbances attributed to grid overload. They could also reduce planned rolling brownouts and blackouts like those implemented during the energy crisis in California in 2000. http://www.smartgrid.gov/sites/default/files/pdfs/sgsr_annex_a-b.pdf

Carbon trading is a fraud that produces nothing but poverty. It does not produce a single penny of wealth; instead it acts as a short circuit across the advancement and wealth of an entire civilization.
Dr. Brown’s argument is very elegant, but there is nothing like telling the mark that the wealthy elite wants to pick his pocket to get his attention, no matter what his political point of view. (Unless of course he is on the receiving end of the suction hose.)

ferdberple
June 19, 2013 6:14 pm

Gary Hladik says:
June 19, 2013 at 2:39 pm
“A model is considered validated if it is developed using one set of data and its
output is tested using another set of data.”
==========
The “hidden data” approach is routinely used in computer testing. Divide the data in half, train the model on one half, and see if it can predict the other, withheld half better than chance. This is so fundamental to computer science that it is a given.
It would be surprising if this had never been done with climate models. Of course, it is highly likely that it has been done, and that the results showed the models have no predictive skill — which is why such results have never been published, and why we have ensemble means instead of validated models.
The simple fact is that, except for trivial problems, there is no known computer algorithm that can reconstruct the hidden data in anything less than the lifetime of the universe.
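A minimal sketch of that hold-out test, with a throwaway linear fit and synthetic data standing in for a real model and a real temperature series (everything here is a placeholder, not a GCM):

```python
# Train on the first half of a series, verify on the withheld half,
# and compare against a no-skill baseline (predicting the training mean).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120)
temps = 0.01 * t + 0.3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.2, t.size)

train, test = temps[:60], temps[60:]
coef = np.polyfit(t[:60], train, 1)            # fit on the visible half only
pred = np.polyval(coef, t[60:])                # predict the hidden half

mse_model = np.mean((pred - test) ** 2)
mse_naive = np.mean((train.mean() - test) ** 2)
print(f"model MSE {mse_model:.3f} vs no-skill MSE {mse_naive:.3f}")
```

A model that cannot beat the no-skill baseline on the hidden half has, by this standard, no demonstrated predictive skill.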

F. Ross
June 19, 2013 6:16 pm

@ [rgb]
What he said!

Jimbo
June 19, 2013 6:29 pm

Here is one other big issue with models, on comment sections everywhere. When you point out a failure, Warmists will often reply with phrases like “oh, but the models predicted it”. In other words, they just pick one model run from ANY paper to back up their claim. They have so much crap out there that they can always back up any claim: winters to be warmer, winters to be colder; Earth to spin faster, Earth to spin slower; and so on. This is what lots of funding can achieve, and it’s why I pay no attention to claims about the mountains of ‘evidence’. LOL.

JohnB
June 19, 2013 6:36 pm

I’ve always thought that averaging the models was the same as saying:
I have 6 production lines that make cars. Each line has a defect.
1. Makes cars with no wheels.
2. Makes cars with no engine.
3. Makes cars with no gearbox.
4. Makes cars with no seats.
5. Makes cars with no windows.
6. Makes cars with no brakes.
But “on average” they make good cars.
I thank the good Doctor for explaining why I’ve always thought that. 😉

Jimbo
June 19, 2013 6:47 pm

Am I right in summarizing this thread as follows?
The climate models have failed. Observations don’t match reality. Stop beating this poor, dead horse. The jig is up. The party is over. The fat lady is inhaling. The rats are scampering. It has flatlined. The final whistle has been blown.
Good night all.

Jimbo
June 19, 2013 6:51 pm

Ooops. Correction:
“The climate models have failed. Projections don’t match reality. Stop beating this poor, dead horse. The jig is up. The party is over. The fat lady is inhaling. The rats are scampering. It has flatlined. The final whistle has been blown. “

Janice Moore
June 19, 2013 8:00 pm

You are correct, Jimbo!
Good night. Sleep well.
*****************
Hey! There is MAGIC working around here! #[:)]
(thanks Anthony or Moderator — for the “that’ll do” enhancement!)
[de nada. — mod.]

Reg Nelson
June 19, 2013 8:05 pm

Tim Ball says:
June 19, 2013 at 9:54 am
Is the problem wider than the ensemble of 23 models that Brown discusses? As I understand it, each of the 23 model results is itself an average. Every time a model is run from the same starting point the results are different, so they do several runs and produce an average. I also understand the number of runs is probably not statistically significant, because it takes so long to do a single run.
—-
Thank you, Dr. Ball, I found your post(s) on this topic incredibly illuminating. I had no idea of the amount of time, effort and resources climate scientists/modelers go through to ultimately reach their intended, predetermined results. I don’t think I’ve ever seen such a ridiculous, worthless, futile human endeavor.
The immediate thought that comes to mind is, “Why bother?”
Of course the answer is, “We need pseudo-facts to sell the pseudo-science to the unwashed, low-intelligence voters.”

DaveA
June 19, 2013 8:45 pm

I sense a certain distress in the author’s words in that piece. I wonder if he’s followed the goings-on at Climate Audit — upside-down Mann, HS extractors, Gergis, etc. He’ll need chill pills after that.

gallopingcamel
June 19, 2013 9:05 pm

rgb is right about the incoherence of the IPCC’s models. You could make exactly the same refutation of Mike Mann’s “paleo” papers that blend many proxies. Anyone familiar with Scotch whisky will know that blending several single malts makes no sense.
If “tree rings” are the best data, it won’t help to blend them with lake varves or ice cores.

June 19, 2013 9:15 pm

The ‘average of the ensemble’ is akin to having one foot in boiling water and one foot in freezing water and then declaring that, on average, you feel just fine!

gallopingcamel
June 19, 2013 9:24 pm

Jimbo says:
June 19, 2013 at 5:11 am
“For Warmists who say that Robert Brown doesn’t know much about computing or models see an excerpt from his about page.”
http://www.phy.duke.edu/~rgb/About/about.php
As a humble member of the Duke University physics department it was my intention to be “low maintenance”, even though I had a bunch of Macs, PCs and Sun machines to look after. Whenever I got into serious trouble it was “rgb” who got my chestnuts out of the fire.
Any Warmists who question “rgb”’s competence will have to work very hard to convince me that they have a clue.

June 19, 2013 9:41 pm

paddylol says: June 19, 2013 at 10:29 am
Mosher, where art thou?
Mosher? Mosher? Mosher?

pat
June 19, 2013 9:57 pm

covering all contingencies?
19 June: Bloomberg: Alessandro Vitelli: EU Carbon Market Needs Immediate Changes, Policy Exchange Says
An independent advisory institution could be established to review the system every two to three years and recommend changes to the market, Newey said.
The specific circumstances in which the market could be adjusted would include “when macroeconomic conditions change significantly from what they were when the cap was set, if the climate science changes, or if there is progress on an international climate deal that would require the EU to take on more ambition, or less,” Newey said.
Policy Exchange was founded by Michael Gove, Francis Maude and Nicholas Boles, who are all now ministers in the U.K.’s coalition government…
http://www.bloomberg.com/news/2013-06-18/eu-carbon-market-needs-immediate-changes-policy-exchange-says.html

pat
June 19, 2013 10:12 pm

Time to stop arguing about climate change: World Bank
LONDON, June 19 (Reuters) – The world should stop arguing about whether humans are causing climate change and start taking action to stop dangerous temperature rises, the president of the World Bank said on Wednesday…
http://www.pointcarbon.com/news/reutersnews/1.2425075?&ref=searchlist

June 19, 2013 10:16 pm

Reblogged this on thewordpressghost and commented:
Friends,
Maybe someday, I will write a great comment upon the problem of Global Warming. And maybe then, my comment will rise to the level of a sticky post.
But, whether you wait for me to write a great comment, or you go read this great comment (post), remember global warming has been going on for thousands of years.
Profiting off of the fear of climate change (global warming) is a very recent marketing strategy.
Enjoy,
Ghost.

John G. Bell
June 19, 2013 10:21 pm

I’ve been a Robert Brown fan for years, having become familiar with him from the computing angle. This is one bright fellow. Climate science can’t progress without more physicists like him wading in. Rather than displaying prowess in physics and statistics, climate scientists too often seem to be bench warmers.

Frank
June 19, 2013 10:27 pm

Nick wrote: “It’s hardly in the fine print – it’s prominent in the introduction. It seems like a very sensible discussion.”
Try reading the section on sea level rise (pp. 13–14) in the SPM for AR4 WG1. The scientists responsible for sea level rise refused to make any predictions for acceleration of ice flow from ice sheets because they believed they had no sound basis for estimating that acceleration. Note how clearly the authors responsible for the SPM explained this key caveat associated with their projections, both in Table SPM.3 and in the bullet point at the bottom of page 14. Now look at Figure SPM.5, showing projected climate change with one-standard-deviation ranges. There is NO mention of the caveats about their use of an “ensemble of opportunity”, and they show an estimate of uncertainty in these projections when the introduction to Chapter 10 specifically says the ensemble is NOT suitable for this purpose: “statistical interpretation of the model spread is therefore problematic”. Figure SPM.5 references Figures 10.4 and 10.29, which also do inappropriate statistical analysis. Figure 10.29 adds in the uncertainty due to the carbon cycle (how accurately can we predict GHG accumulation in the atmosphere from emission scenarios?), and the SPM doesn’t mention this caveat either.
Stainforth’s ensembles (referenced above) have shown how dramatically projections of warming change – five-fold – when parameters controlling precipitation/clouds are randomly varied within a range consistent with laboratory measurements. Parameters associated with thermal diffusion of heat (below the mixed layer, between atmospheric grid cells, and between surface and air) weren’t varied in his scenarios, so five-fold uncertainty is only a start at estimating the parameter uncertainty for a single model. Then one needs to add the uncertainty associated with chaotic behavior, the carbon cycle, aerosols, and model differences. If the IPCC correctly accounted for all of these uncertainties, everyone would realize that the range of possible futures associated with the various emissions scenarios is too wide to be useful. Instead, they report results from an ensemble of fewer than twenty models with single values for each parameter, and allow policymakers to believe that a few runs from each model represent the range of possible futures associated with a particular emission scenario. Then they ignore even that uncertainty and tell us how little we can emit to limit warming to +2 degC above the pre-industrial temperature (an unknown temperature, during the LIA).
Nick also wrote: “People do use ensemble averages. That’s basically what the word ensemble means. What we still haven’t found is anything that remotely matches the rhetoric of this post.”
However, one doesn’t perform a statistical analysis (mean, std, etc.) on an “ensemble of opportunity”. The national models in the IPCC’s “ensemble of opportunity” were not chosen to cover the full range of possible futures consistent with our understanding of the physics of the atmosphere and ocean. They were chosen, and considered to be equally valid (“model democracy”), for political reasons. All of these models evolved when climate sensitivity was assumed to be 2–4 degC and with full knowledge of the 20th-century warming record. As scientists “optimized” these models without the computing power to properly explore parameter space, the models naturally converged on the “consensus” projections we have today (high-sensitivity models are strongly cooled by aerosols and low-sensitivity models are not). Stainforth has shown that hundreds of different models provide equally valid representations of the current climate and radically different projections of the future. Observational estimates of climate sensitivity are now centered around 2 degC. We might have found, but haven’t found, systematic errors in the temperature record (UHI, for example) that reduced 20th-century warming by 50%. Under those circumstances, “optimization” of the parameters in the IPCC’s models could easily have produced very different projections.
(Even when you appear to be wrong, your “trolling” improves the quality of the science discussed here.)

David Riser
June 19, 2013 10:32 pm

In Nick’s defense, I must say that while using general circulation models as global climate models is generally a bad idea, the idea of using multi-model means over an ensemble of models is not new. In hurricane forecasting it is used frequently, because the overall skill in predicting a storm’s path is actually slightly better in the multi-model mean. Additionally, NOAA is working toward a more robust seasonal predictive capability based on a multi-model mean approach. You can read about it here:
http://www.cpc.ncep.noaa.gov/products/ctb/MMEWhitePaperCPO_revised.pdf
So in the day-to-day use of models in weather forecasting these ideas are not new and not considered bad practice.
That being said, the GCMs modified for temperature prediction over decades have a proven track record of no skill at predicting temperature, so a mean of all those failed attempts is still a failed attempt.

scf
June 19, 2013 10:43 pm

Much ado about nothing. RGB is correct that a multi-model mean has no physical meaning. It has no statistical significance with respect to measurement, and no predictive value.
However, even though it has no physical meaning or significance, it is still a useful tool — a visual tool that lets you show a trend. If many models have upward trajectories, the mean will illustrate that. It’s just a summary. When you look at a large number of different lines, sometimes you would like to know the general trend.
In this case the multi-model mean, the general trend, shows that most models are predicting temperature increases (with the usual caveat that means can be unduly affected by outliers, as the sketch below shows). That is a valid and interesting fact, even if it has no physical meaning.
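A tiny illustration of both halves of that point, the mean as a trend summary and its sensitivity to outliers (the ‘model’ trends below are invented for illustration):

```python
# Summarizing several model trends with a mean vs. a median.
import numpy as np

trends = np.array([0.18, 0.21, 0.25, 0.22, 0.19, 0.65])  # degC/decade; last is an outlier
print(f"mean   {trends.mean():.2f} degC/decade")          # dragged up by the outlier
print(f"median {np.median(trends):.2f} degC/decade")      # a more robust summary
```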

Alex Heyworth
June 19, 2013 10:50 pm

TomRude says:
June 19, 2013 at 12:57 pm
Mike MacCracken the Director of the Climate Institute shares his knowledge -or lack of- on Yahoo climatesceptic group…
“The emerging answer seems to be that a cold Arctic (and so strong west to east jet stream) is favored when there is a need to transport a lot of heat to the Arctic”

Que? Transporting a lot of heat to the Arctic makes it colder? That’s, ahem, highly original.

Jquip
June 19, 2013 11:32 pm

Frank: “However, one doesn’t perform a statistical analysis (mean, std, etc.) of an ‘ensemble of opportunity’.”
Though this is not for Frank specifically, but for everyone — and rather late on in things.
It’s worth noting that rgbatduke produced a good example of things. It’s also worth noting that it requires a rather fundamental understanding of the scientific process that the reader must not only be aware of, but be unwilling to budge about. This is all before we begin speaking of secondary details such as mathematical illiteracy. And there is no easy road for the skeptic of a process who is put in the position of having to prove the process valid or not: the onus is on those who put forward the process in the first place to establish its worth, correctness, and limitations. So, while rather late in the game, here is a simple thought experiment for proponents of ensemble means to answer satisfactorily:
——–
Mrs. Jones teaches math to a class of 23 students. They are all taught from the same source material and have the same basic grounding in how to approach and solve the math problems they’re presented. Each student is unique, has their own unique thought process, and unique understanding of the concepts involved in solving math problems; despite that they share the same fundamental teaching from Mrs. Jones.
Last Tuesday Mrs. Jones gave her students a pop quiz to test their knowledge and growth in learning math, the subject she is teaching them. Each student dutifully answers all the questions and turns in their paper. A couple of students produce the same answer, while every other student produces a unique answer that no other student produced.[1] And when grading the papers, to her horror, Mrs. Jones discovers that she has lost the answer key. And despite being a teacher of mathematics, she is uncertain of the correct answer to some of the questions.
But if she admits that she has lost the answer key, her job will be in jeopardy. So she chooses to take the average value of the answers to each question and use that as the value of the correct answer. On question #16 there is a single student, Jimmy, whose answer is the same as the average value of the answers from all 23 students.
To the proponents of ensemble means, and any others who do not state in no uncertain terms that this is mathematical illiteracy:
1) Prove that the correct answer is the answer that Jimmy produced.
Failing this:
2) Prove that no other student produced an answer that was ‘closer’ to the correct one than Jimmy’s.
——–
Those who assert that there is nothing amiss with ensemble means should have no difficulty correctly proving one of these two propositions. And if they cannot produce such a proof, then there is nothing further to discuss with them: they either need to find that proof, or steadfastly hold an irrational belief in square circles. (A brute-force simulation below makes the same point numerically.)
[1] Of note here, science being science, we can firmly state that at most one student has the correct answer, and that at least 22 of them have the wrong one. Though these constraints are hardly necessary for the purposes of the problem posed.
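For what it’s worth, a brute-force simulation of Mrs. Jones’s classroom makes the same point numerically. The numbers (a shared bias plus individual scatter) are invented; the point is only that the mean carries no guarantee of being closest to the truth:

```python
# How often does the class average beat the best individual student?
import numpy as np

rng = np.random.default_rng(1)
truth = 42.0
wins = 0
for _ in range(10000):
    answers = truth + rng.normal(1.5, 2.0, 23)   # shared bias + individual scatter
    mean_err = abs(answers.mean() - truth)       # "Jimmy's" answer
    best_err = abs(answers - truth).min()        # the closest student
    wins += mean_err < best_err
print(f"mean beats the best student in {wins / 100:.1f}% of quizzes")
```

Whenever the students share any bias at all, averaging merely sharpens the estimate of the biased consensus, not of the correct answer.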

tonyb
Editor
June 19, 2013 11:52 pm

Galloping Camel says:
‘If “tree rings” are the best data, it won’t help to blend them with lake varves or ice cores.’
If tree rings ARE the best data, it is high time we looked for different data.
Why did other climate scientists think it a good idea to accept a method that loosely records only its local microclimate, in a highly generalised fashion, during a three-month growing season, and with a subdued night-time record?
tonyb

June 20, 2013 12:06 am

Frank says: June 19, 2013 at 10:27 pm
“Now look at Figure SPM.5 showing projected climate change with one standard deviation ranges.”

Well, at last, at the bottom of the second thread, someone has actually pointed to something tangible that might be statistically problematic. But this isn’t a multi-model issue. Both SPM.5 and Fig 10.4 are explicit about the sd:
” Shading denotes the ±1 standard deviation range of individual model annual averages.”
To me, that means the spread of the individual series – something one can calculate for any time series. By contrast, they describe the center line as “multi-model global averages”. And Fig 10.29 says:
“For comparison, results are shown for the individual models (red dots) of the multi-model AOGCM ensemble for B1, A1B and A2, with a mean and 5 to 95% range (red line and circle) from a fitted normal distribution.”
So they are explicit about where they are getting the range, and again it seems to be a fit to an individual model. On carbon they say
“The 5 to 95% ranges (vertical lines) and medians (circles) are shown from probabilistic methods”
and give a whole stack of references.
So I still don’t think we’ve matched RGB’s bitch-slap rhetoric to any actual deeds.

Frank
June 20, 2013 12:13 am

Jquip: Your ensemble is interesting, but you will find Stainforth’s more relevant to this post. His ensemble systematically varied all of the model parameters that control clouds and precipitation within the limits established by laboratory experiments. These parameters interact in surprising ways that can’t be understood by varying/optimizing one parameter at a time.
http://media.cigionline.org/geoeng/2005%20-%20Stainforth%20et%20al%20-%20Uncertainty%20in%20predictions%20of%20the%20climate%20response%20to%20rising%20GHGs.pdf
In subsequent papers, Stainforth has tried (and failed) to identify a small subset of his ensemble of models that provides the best representation of the earth’s current climate.
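A toy version of that experiment, for readers who want the flavour: jointly perturb a few feedback parameters within assumed ranges and watch the implied sensitivity spread by many fold. The one-line ‘model’ and the ranges are illustrative assumptions, not Stainforth’s actual values:

```python
# Perturbed-parameter toy: ECS = F / (lambda0 - sum of feedbacks).
import numpy as np

rng = np.random.default_rng(2)
n = 2000
forcing = 3.7                                # W/m^2 for doubled CO2
lambda0 = 3.2                                # Planck response, W/m^2/K
cloud = rng.uniform(-0.8, 1.2, n)            # feedback ranges: invented
water_vapour = rng.uniform(0.9, 2.1, n)
lapse_rate = rng.uniform(-1.2, -0.3, n)

ecs = forcing / (lambda0 - cloud - water_vapour - lapse_rate)
print(f"ECS spans {ecs.min():.1f} to {ecs.max():.1f} K "
      f"(median {np.median(ecs):.1f} K) across {n} parameter draws")
```

Because the feedbacks enter through a denominator, modest parameter uncertainty balloons into a huge sensitivity range, which is exactly the kind of spread a single optimized parameter set hides.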

Larry Hulden
June 20, 2013 12:57 am

Gail Combs wrote:
“This Graph shows world bank funding for COAL fired plants in China, India and elsewhere went from $936 billion in 2009 to $4,270 billion in 2010.”
It should be:
$936 million in 2009 to $4270 million in 2010
or alternatively
$936 million in 2009 to $4,270 billion in 2010

Larry Hulden
June 20, 2013 1:03 am

The latter alternative should be $4.270 billion according to US practice and $4,270 billion according to Finnish practice.

June 20, 2013 1:35 am

Gail Combs says, June 19, 2013 at 1:39 am.
Gail, great, great comments – a brilliant demolition of the IPCC rationale, point by point.
BUT you are technically wrong in your specific comment about water, when you say: Water, probably THE most important on-planet climate parameter, gets demoted to a CO2 feedback and isn’t even listed by the IPCC as a forcing. That point alone makes ALL the models nothing but GIGO.
No! In a feedback system, a forcing is an externality – a change (whether natural or anthropogenic) that is external to the feedback system. A change in atmospheric water vapour, on the other hand, is an important feedback. It will only occur as a consequence of those changing externalities.
As a hard-line skeptic, I am extremely happy for water to be described as a feedback. This is NOT a relegation of its significance, as you imply; it is just correct science. Don’t forget that feedbacks can be negative or positive. There is simply no hard evidence that the water feedback is net-positive rather than net-negative. If net-negative, water would be a cooling agent, would it not?

Chuck Nolan
June 20, 2013 2:28 am

Logically speaking, the fact that the models’ average lands nowhere near the current temperature says, in general, that they are all seeking the same answer.
cn

DEEBEE
June 20, 2013 2:44 am

Nick Stokes — “bitch slap”? Taking it a bit personally, huh?

June 20, 2013 2:51 am

I am an avid WUWT fan, though I don’t post often. James Delingpole wrote a blog article yesterday which, as usual, was quite amusing: http://blogs.telegraph.co.uk/news/jamesdelingpole/100222585/wind-farms-ceausescu-would-have-loved-em
Even more amusing was the tearing to shreds, by Cassandra1963, cheshirered2 and itzman, of a warmista who posts under the name ‘soosoos’. I reproduce the thread below (apologies if the formatting is all over the place):
Original Post by:- cheshirered2 . 17 hours ago
The science of global warming is represented in statistical form by those now-famous computer models. Those models have been a true disaster. That’s not ‘right wing’ rhetoric, it’s a statistical fact when we compare how the models performed against observed reality.
Look here and weep as Dr Roy Spencer shows how 73 out of 73 IPCC-influencing models failed dismally. (Yes, Ed Davey, that’s a 100% failure rate). http://www.drroyspencer.com/20
Now read on as one of the most brutal posts ever on the legendary http://www.wattsupwiththat.com smashes the alarmist case – reliant as it is on entirely failed computer models, to pieces. (It’s a heavy read, but persist and howl with derision at the alarmist stupidity). http://wattsupwiththat.com/201
On such failed and now entirely falsified anti-science is predicated the government’s ludicrously suicidal charge to ‘decarbonise’ the UK economy, with wind ‘power’ ridiculously at the forefront. On a day when politicians supported the idea of bankers going to jail if they lose punters’ money, we’re surely entitled to ask what will happen to politicians who cause untold financial misery and thousands of deaths with their lunatic energy policies?
And here’s the punchline: if the models which represent the theory fail against real observations – and they do – then the theory that’s represented by those models must also fail.
Posted by: soosoos (Responding to cheshirered2). 17 hours ago
Look here and weep as Dr Roy Spencer shows how 73 out of 73 IPCC-influencing models failed dismally. (Yes, Ed Davey, that’s a 100% failure rate)
Oh look – yet another climate change denialist is regurgitating this nonsense! No one else has answered the following criticisms levelled at it. Perhaps you could?
How do you distinguish between a failure of the models and… a failure of the observations?
There is a precedent for this with satellite data, after all (also involving Spencer): their failure to accommodate orbital decay that led to a spurious cooling trend that contradicted models. The models were later exonerated. The satellite data should be treated very cautiously here.
First, they are unambiguously contaminated by stratospheric temperatures, which will impart a cooling bias (yes: there is tropospheric warming and stratospheric cooling, consistent with AGW but not solar warming).
Secondly, that graph averages UAH/RSS data. This is unfortunate because the two datasets explicitly disagree with each other. The confidence limits of their trends don’t even overlap. Something is rotten in the state of Denmark. RSS on its own is in much tighter agreement with the models – even with a stratospheric cooling bias.
But who needs a nuanced discussion of data and their misrepresentation when you can just have mindless regurgitation of nonsense from WUWT. Lol.
Posted by: Cassandra1963 (Responding to Soosoos).
Soosoos, you reality deniers are truly desperate now and it shows in the way you pick and choose and cherry pick and evade the central message. Denial, anger, bargaining, acceptance, the steps to finding out that everything you believed in was a pack of lies and a giant delusion. You are going to have a very difficult time coming to terms with the demise of the global warming fraud, I hope it doesn’t destroy you mentally as it may well do with some of the more committed cultists.
Posted by: BlueScreenOfDeath (Responding to Soosoos). 16 hours ago
“RSS on its own is in much tighter agreement with the models – even with a stratospheric cooling bias.”
Arrant nonsense. Which models? There are seventy-odd of the things and they are all over the place. Either you haven’t looked at the graph, or you are taking the piss.
Plus, you seem to ignore the radiosonde balloon data – is that because it doesn’t agree with your glib, baseless dismissal? Mind you, if an AGW sceptic scientist made an assertion that water was wet, you’d find some method of criticising it.
Posted by: cheshirered2 (Responding to Soosoos). 15 hours ago
Oh so sorry, I forgot the alarmist mantra; when the theory doesn’t match observed reality….reality is wrong! Talk about anti-science. You lot were happy enough to use ‘catastrophic’ model projections to drive the CO2 scare, and we’re now blessed with the resultant stupid energy policy.
You lot were happy to quote chapter and verse IPCC projections based on the AGW theory, and used your models as a means to an end. You lot were also happy enough to accept actual observations which fell in your favour, like Arctic ice in 2007. But now the models have been proven to have failed – please note those two words, PROVEN and FAILED, why, you only want to revise reality itself!!!
Faced with factual proof that your models have failed, you simply refuse to accept it. The embarrassment must be so acute.
You spout pure, unvarnished bullshit.
Posted by: itzman (Responding to Soosoos). 4 hours ago
Hey, we didn’t pick the data, the IPCC did. It’s all very well to say WE cherry-picked the data, and are therefore silly, but when YOU cherry-pick the data – the one set that sort of agrees with the models – that’s science. The IPCC didn’t fail on OUR criteria, it failed on its OWN criteria.
WUWT et al. are not doing any more than showing what the IPCC predicted, and what their own data sets – the ones USED TO JUSTIFY THE MODELS – actually did 17 years later: utterly REFUTE their OWN models. Using THEIR OWN DATA.
You appear to be saying that the only valid data is the data that supports the hypothesis, and that when it refutes the hypothesis – oh well, here’s some other data that doesn’t. That’s called confirmation bias or, more commonly, being in denial, or simply being ‘a denier’.
A phrase that will be hung round the neck of every refuted Green Believer*
*wasn’t that a Monkees song?

Venter
June 20, 2013 2:58 am

That shows his veneer is wearing thin, as all his BS has been systematically exposed.

June 20, 2013 4:08 am


“As I said on the other thread, what is lacking here is a proper reference. Who does this? Where? “Whoever it was that assembled the graph” is actually Lord Monckton. But I don’t think even that graph has most of these sins, and certainly the AR5 graph cited with it does not.
Where in the AR5 do they make use of ‘the variance and mean of the “ensemble” of models’?”
They must use it (even if they don’t present it) if they are making this projection:
“Despite considerable advances in climate models and in understanding and quantifying climate feedbacks, the assessed literature still supports the conclusion from AR4 that climate sensitivity is likely in the range 2–4.5°C, and very likely above 1.5°C. The most likely value remains near 3°C. An ECS greater than about 6–7°C is very unlikely, based on combination of multiple lines of evidence”
They specify the range of possibilities (i.e. the spread) and a “most likely” value, which would be the mean – unless they’re just picking numbers out of thin air and deciding that 3 degrees is the winner by fiat.

June 20, 2013 4:17 am

Further: chapter 12-13
The climate change projections in this report are based on ensembles of climate models. The ensemble mean is a useful quantity to characterize the average response, but does not convey any information on model robustness, uncertainty, likelihood of change, or magnitude relative to unforced climate variability.

June 20, 2013 4:21 am

And just to back up his assertion that all of the models do things differently, take different things into account, etc., and shouldn’t be accumulated into an ensemble mean, here’s one example from AR5:
“Treatment of the CO2 emissions associated with land cover changes is also model-dependent. Some models do not account for land cover changes at all, some simulate the biophysical effects but are still forced externally by land cover change induced CO2 emissions (in emission driven simulations), while the most advanced ESMs simulate both the biophysical effects of land cover changes and their associated CO2 emissions.”

June 20, 2013 4:25 am

On page 12-26 (Section 12.4.1.2):
“Uncertainties in global mean quantities arise from variations in internal natural variability, model response and forcing pathways. Table 12.2 gives two measures of uncertainty in the CMIP5 model projections, the standard deviation and range (min/max) across the model distribution.”

AndyG55
June 20, 2013 4:31 am

@ David Socrates.
“There is simply no hard evidence that water feedback is net-positive rather than net-negative. If net-negative, water would be a cooling agent, would it not?”
You have to be careful how you describe things. A cooling agent of “where”?
H2O in its varied forms acts as an energy-transfer medium from the Earth’s surface to the lower atmosphere (cloud level). So yes, it is a cooling agent with respect to the Earth’s surface.

TomVonk
June 20, 2013 4:49 am

Excellent post, and I agree with every word of it. One of the best I have read here.
And even if multi-model means are used less today than yesterday (though they still are), what is left is the irreducible chaos in the climate system, which can’t be meaningfully simulated on coarse grids of hundreds of km. Even saying the words Navier-Stokes in this context is an insult to Navier and Stokes.
Of course, spatial averaging of dynamical variables in a chaotic system is utter nonsense too, as many have been saying for several years already.

June 20, 2013 5:02 am

Just for posterity, the actual Figure 11.33 caption in the leaked AR5 draft report actually states:
“Figure 11.33: Synthesis of near-term projections of global mean surface air temperature. a) Projections of global mean, annual mean surface air temperature (SAT) 1986–2050 (anomalies relative to 1986–2005) under all RCPs from CMIP5 models (grey and coloured lines, one ensemble member per model), with four observational estimates (HadCRUT3: Brohan et al., 2006; ERA-Interim: Simmons et al., 2010; GISTEMP: Hansen et al., 2010; NOAA: Smith et al., 2008) for the period 1986–2011 (black lines); b) as a) but showing the 5–95% range for RCP4.5 (light grey shades, with the multi-model median in white) and all RCPs (dark grey shades) of decadal mean CMIP5 projections using one ensemble member per model, and decadal mean observational estimates (black lines). The maximum and minimum values from CMIP5 are shown by the grey lines. An assessed likely range for the mean of the period 2016–2035 is indicated by the black solid bar. The ‘2°C above pre-industrial’ level is indicated with a thin black line, assuming a warming of global mean SAT prior to 1986–2005 of 0.6°C. c) A synthesis of ranges for the mean SAT for 2016–2035 using SRES CMIP3, RCPs CMIP5, observationally constrained projections (Stott et al., 2012; Rowlands et al., 2012; updated to remove simulations with large future volcanic eruptions), and an overall assessment. The box and whiskers represent the likely (66%) and very likely (90%) ranges. The dots for the CMIP3 and CMIP5 estimates show the maximum and minimum values in the ensemble. The median (or maximum likelihood estimate for Rowlands et al., 2012) are indicated by a grey band.”

Second Order Draft Chapter 11 IPCC WGI Fifth Assessment Report, page 11-126
http://www.stopgreensuicide.com/Ch11_near-term_WG1AR5_SOD_Ch11_All_Final.pdf
Anyone who thinks that figure is not meant to give the impression that the climate models – whether by grouping models together or by averaging/medianing and/or putting on error bars – can predict the future is a fool.
And any rational person can see that the models are diverging from reality.

June 20, 2013 5:05 am

“An ECS greater than about 6–7°C is very unlikely, based on combination of multiple lines of evidence”
And if it’s very unlikely, why are such models still being included in the graph instead of discarded? Probably because they increase the mean and widen the uncertainty.

David
June 20, 2013 5:57 am

Latitude says:
June 19, 2013 at 6:55 am
Roy Spencer says:
June 19, 2013 at 4:10 am
We don’t even know whether the 10% of the models closest to the observations are closer by chance (they contain similar butterflies) or because their underlying physical processes are better.
===================
thank you……over and out
————————————————————–
If the 10 percent of the models that are closest to the observations are all STILL wrong in the SAME direction, then this is, logically speaking, a clue that even those “best” models are systematically wrong and STILL oversensitive. In science, being CONSISTENTLY wrong in ONE DIRECTION is a clue that a basic premise is wrong. In science, being wrong is informative.
When one is ALWAYS wrong on the oversensitive side of the equation, one should not assume that the average of all those unidirectionally wrong predictions gets one closer to the truth.
Nick, it really is that simple.

Tim Clark
June 20, 2013 7:06 am

Gail Combs says:
June 19, 2013 at 5:58 pm
{ Now the TRUE face of the World Bank, and the elite
This Graph shows world bank funding for COAL fired plants in China, India and elsewhere went from $936 billion in 2009 to $4,270 billion in 2010.
pat says:
June 19, 2013 at 10:12 pm
Time to stop arguing about climate change: World Bank
LONDON, June 19 (Reuters) – The world should stop arguing about whether humans are causing climate change and start taking action to stop dangerous temperature rises, the president of the World Bank said on Wednesday…
http://www.pointcarbon.com/news/reutersnews/1.2425075?&ref=searchlist }
Nick,
I think you are well intentioned, albeit somewhat misguided, in arguing semantics.
Regardless, even you should be able to determine — acknowledging the statistics confirmed above — that whether mother Earth is warming or cooling, there will never be any rational attempt by any governmental agency to institute a prescription to do a d@mn thing about it.
It’s a suffering horse being beaten to death for control and taxing authority.

Bruce Foutch
June 20, 2013 7:09 am

Rebuttal by Briggs:
http://wmbriggs.com/blog/?p=8394

Rob ricket
June 20, 2013 8:40 am

Open letter to Lord Monckton.
Sir,
As you are an IPCC reviewer, please consider requesting that the following stipulations be appended to the AR5 climate models, regarding the ubiquitous “predictive skill” claimed in the explanatory verbiage of each of the previous IPCC reports:
As proof of predictive skill in hindcasting, all graphic representations of models should contain representative hindcasts transposed over actual/reconstructed temperatures, including all previously published model runs. I believe such a stipulation is easily met if the models truly demonstrate skill in hindcasting.
It is axiomatic that periodic ‘refreshing’ of the models with actual temperature data is akin to hitting the reset button, and provides a means of instilling inflated confidence in model robustness. Furthermore, splicing the graphs from previous reports onto the AR5 report may prove instrumental in revealing this so-called predictive skill in both hindcasting and forecasting of climatic conditions.
Humanity has the right to see a graphic historical representation of this predictive skill without repeated actual-temperature updates.
Your kind consideration is appreciated,
Rob Ricket

Dinostratus
June 20, 2013 8:44 am

Tennekes saw this coming. Well, a lot of people did, but Tennekes was by far the earliest and most authoritative of those who saw it coming.
“In a meeting at ECMWF in 1986, I gave a speech entitled “No Forecast Is Complete Without A Forecast of Forecast Skill.” This slogan gave impetus to the now common procedure of Ensemble Forecasting, which in fact is a poor man’s version of producing a guess at the probability density function of a deterministic forecast. The ever-expanding powers of supercomputers permit such simplistic research strategies.
Since then, ensemble forecasting and multi-model forecasting have become common in climate research, too. But fundamental questions concerning the prediction horizon are being avoided like the plague. There exists no sound theoretical framework for climate predictability studies. As a turbulence specialist, I am aware that such a framework would require the development of a statistical-dynamic theory of the general circulation, a theory that deals with eddy fluxes and the like. But the very thought is anathema to the mainstream of dynamical meteorology.”

OldWeirdHarold
June 20, 2013 9:57 am

Janice Moore says:
June 18, 2013 at 8:07 pm
=====
Excellent. Reminds me of Richard Feynman’s rant about green stars and adding temperatures.

rgbatduke
June 20, 2013 10:04 am

Sorry, I missed the reposting of my comment. First of all, let me apologize for the typos and so on. Second, to address Nick Stokes in particular (again) and to put it on the record in this discussion as well: the AR4 Summary for Policy Makers does exactly what I describe above. Figure 1.4 in the unpublished AR5 appears poised to do exactly the same thing once again: turn an average of ensemble results, and standard deviations about the ensemble average, into explicit predictions for policy makers regarding probable ranges of warming under various emission scenarios.
This is not a matter of debating whether it is Monckton who is at fault for computing an R-value or p-value from the mish-mosh of climate results and comparing the result to the actual climate — this is, actually, wrong, and yes, it is wrong for the same reasons I discuss above: there is no reason to think that the central limit theorem, and by inheritance the error function or other normal-derived estimates of probability, will have the slightest relevance to any of the climate models, let alone all of them together. One can at best take any given GCM run and compare it to the actual data, or take an ensemble of Monte Carlo inputs, develop many runs, look at the spread of results, and compare THAT to the actual data.
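As a sketch of what that legitimate procedure looks like in practice, here is a Monte Carlo ensemble built from many runs of a single (toy) model, whose spread can then be compared with observations. The model and the parameter ranges are invented for illustration; the point is the per-model structure of the test:

```python
# Many runs of ONE model with randomized inputs; the spread is that
# model's own uncertainty, to be compared against the actual record.
import numpy as np

rng = np.random.default_rng(3)

def toy_model(sensitivity, noise_amp, years=30):
    """One run: a forced linear trend plus random-walk internal variability."""
    forced = sensitivity * 0.01 * np.arange(years)           # degC per year
    internal = 0.1 * rng.normal(0, noise_amp, years).cumsum()
    return forced + internal

runs = np.array([toy_model(rng.uniform(1.5, 4.5), rng.uniform(0.5, 1.5))
                 for _ in range(500)])
lo, hi = np.percentile(runs[:, -1], [5, 95])
print(f"this model's 5-95% year-30 anomaly: {lo:.2f} to {hi:.2f} degC")
```

If the observed record falls outside this spread, that is evidence against this one model; nothing in the construction licenses averaging such spreads across structurally different models.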
In the latter case one is already stuck making a Bayesian analysis of the model results compared to the observational data (PER model, not collectively) because when one determines e.g. the permitted range of random variation of any given input one is basically inserting a Bayesian prior (the probability distribution of the variations) on TOP of the rest of the statistical analysis. Indeed, there are many Bayesian priors underlying the physics, the implementation, the approximations in the physics, the initial conditions, the values of the input parameters. Without wishing to address whether or not this sort of Bayesian analysis is the rule rather than the exception in climate science, one can derive a simple inequality that suggests that the uncertainty in each Bayesian prior on average increases the uncertainty in the predictions of the underlying model. I don’t want to say proves because the climate is nonlinear and chaotic, and chaotic systems can be surprising, but the intuitive order of things is that if the inputs are less certain and the outputs depend nontrivially on the inputs, so are the outputs less certain.
I will also note that one of the beauties of Bayes’ theorem is that one can actually start from an arbitrary (and incorrect) prior and by using incoming data correct the prior to improve the quality of the predictions of any given model with the actual data. A classic example of this is Polya’s Urn, determining the unbiased probability of drawing a red ball from an urn containing red and green balls (with replacement and shuffling of the urn between trials). Initially, we might use maximum entropy and use a prior of 50-50 — equal probability of drawing red or green balls. Or we might think to ourselves that the preparer of the urn is sneaky and likely to have filled the urn only with green balls and start with a prior estimate of zero. After one draws a single ball from the urn, however, we now have additional information — the prior plus the knowledge that we’ve drawn a (say) red ball. This instantly increases our estimate of the probability of getting red balls from a prior of 0, and actually very slightly increases the probability of getting a red ball from 0.5 as well. The more trials you make (with replacement) the better your successive approximations of the probability are regardless of where you begin with your priors. Certain priors will, of course, do a lot better than others!
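In code, the urn is a two-line Beta-Binomial update; the true red fraction and the two priors below are illustrative:

```python
# Bayesian updating in Polya's urn: different priors, same data, converging answers.
import numpy as np

rng = np.random.default_rng(4)
true_red = 0.7
draws = rng.random(200) < true_red                 # 200 draws with replacement

for a0, b0, label in [(1, 1, "maximum-entropy 50/50 prior"),
                      (1, 20, "sneaky 'mostly green' prior")]:
    a, b = a0 + draws.sum(), b0 + (~draws).sum()   # Beta posterior parameters
    print(f"{label}: posterior mean P(red) = {a / (a + b):.3f}")
```

Both posteriors end up near the true fraction; the sneaky prior just takes more draws to get there, which is the sense in which certain priors do a lot better than others.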
I therefore repeat to Nick the question I asked on other threads. Is the near-neutral variation in global temperature for at least 1/8 of a century (since 2000 — to avoid the issue of 13, 15, or 17 years of “no significant warming” given the 1997/1999 El Nino/La Nina one-two punch, and since we have no real idea what “significant” means given that the observed natural variability in the global climate record is almost indistinguishable from the variability of the last 50 years) strong evidence for warming of 2.5 C by the end of the century? Is it even weak evidence for it? Or is it in fact evidence that ought to at least some extent decrease our degree of belief in aggressive warming over the rest of the century, just as drawing red balls from the urn ought to cause us to alter our prior beliefs about the probable fraction of red balls in Polya’s urn, completely independent of the priors used as the basis of the belief?
In the end, though, the reason I posted the original comment on Monckton’s thread is that everybody commits this statistical sin when working with the GCMs. They have to. The only way to convince anyone that the GCMs might be correct in their egregious predictions of catastrophic warming is by establishing that the current flat spell is somehow within their permitted/expected range of variation. So no matter how the spaghetti of GCM predictions is computed and presented — and in figure 11.33b (not 11.33a) they are presented as an opaque range, BTW — presenting their collective variance in any way whatsoever is an obvious visual sham, one intended to show that the lower edge of that variance barely contains the actual observational data.
Personally, I would consider that evidence that, collectively or singly, the models are not terribly good and should not be taken seriously because I think that reality is probably following the most likely dynamical evolution, not the least likely, and so I judge the models on the basis of reality and not the other way around. But whether or not one wishes to accept that argument, two very simple conclusions one has little choice but to accept are that using statistics correctly is better than using it incorrectly, and that the only correct way to statistically analyze and compare the predictions of the GCMs one at a time to nature is to use Bayesian analysis, because we lack an ensemble of identical worlds.
I make this point to put the writers of the Summary for Policy Makers for AR5 on notice that if they repeat the egregious error made in AR4 and make any claims whatsoever for the predictive power of the spaghetti snarl of GCM computations, if they use the terms “mean and standard deviation” of an ensemble of GCM predictions, if they attempt to transform those terms into some sort of statement of probability of various future outcomes for the climate based on the collective behavior of the GCMs, there will be hell to pay, because GCM results are not iid samples drawn from a fixed distribution, thereby fail to satisfy the elementary axioms of statistics and render both mean behavior and standard deviation of mean behavior over the “space” of perturbations of model types and input data utterly meaningless as far as having any sort of theory-supported predictive force in the real world. Literally meaningless. Without meaning.
The probability ranges published in AR4’s summary for policy makers are utterly indefensible by means of the correct application of statistics to the output from the GCMs collectively or singly. When one assigns a probability such as “67%” to some outcome, in science one had better be able to defend that assignment from the correct application of axiomatic statistics right down to the number itself. Otherwise, one is indeed making a Ouija board prediction, which as Greg pointed out on the original thread, is an example deliberately chosen because we all know how Ouija boards work! They spell out whatever the sneakiest, strongest person playing the game wants them to spell.
If any of the individuals who helped to actually write this summary would like to come forward and explain in detail how they derived the probability ranges that make it so easy for the policy makers to understand how likely to certain it is that we are en route to catastrophe, they should feel free to do so. And if they in fact did form the mean of many GCM predictions as if GCMs are some sort of random variate, form the standard deviation of the GCM predictions around the mean, and then determine the probability ranges on the basis of the central limit theorem and standard error function of the normal distribution (as it is almost certain they did, from the figure caption and following text) then they should be ashamed of themselves and indeed, should go back to school and perhaps even take a course or two in statistics before writing a summary for policy makers that presents information influencing the spending of hundreds of billions of dollars based on statistical nonsense.
And for the sake of all of us who have to pay for those sins in the form of misdirected resources, please, please do not repeat the mistake in AR5. Stop using phrases like “67% likely” or “95% certain” in reference to GCM predictions unless you can back them up within the confines of properly done statistical analysis and mere common wisdom in the field of predictive modeling — a field where I am moderately expert — where if anybody, ever claims that a predictive model of a chaotic nonlinear stochastic system with strong feedbacks is 95% certain to do anything I will indeed bitch slap them the minute they reach for my wallet as a consequence.
Predictive modeling is difficult. Using the normal distribution in predictive modeling of complex multivariate system is (as Taleb points out at great length in The Black Swan) easy but dumb. Using it in predictive modeling of the most complex system of nominally deterministic equations — a double set of coupled Navier Stokes equations with imperfectly known parameters on a rotating inhomogeneous ball in an erratic orbit around a variable star with an almost complete lack of predictive skill in any of the inputs (say, the probable state of the sun in fifteen years), let alone the output — is beyond dumb. Dumber than dumb. Dumb cubed. The exponential of dumb. The phase space filling exponential growth of probable error to the physically permitted boundaries dumb.
In my opinion — as admittedly at best a well-educated climate hobbyist, not a climate professional, so weight that opinion as you will — we do not know how to construct a predictive climate model, and will never succeed in doing so as long as we focus on trying to explain “anomalies” instead of the gross nonlinear behavior of the climate on geological timescales. An example I recently gave for this is understanding the tides. Tidal “forces” can easily be understood and derived as the pseudoforces that arise in an accelerating frame of reference, given Newton’s Law of Gravitation. With the latter, one can very simply compute the actual gravitational force on an object at an actual distance from (say) the moon, compare it to the object’s mass times its acceleration as it moves at rest relative to the center of mass of the Earth (which is accelerating relative to the moon), and compute the change in e.g. the normal force that makes up the difference, and hence the change in apparent weight. The result is a pseudoforce that varies like R_e/R_lo^3 (compared to the lunar gravitational force, which varies like 1/R_lo^2; here R_e is the radius of the earth and R_lo the radius of the lunar orbit). This is a good enough explanation that first-year college physics students can, with the binomial expansion, compute the lunar tidal force, and first-year physics majors can compute the nonlinear tidal force stressing e.g. a solid bar falling into a neutron star.
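Plugging in the actual numbers (standard values for G, the lunar mass, the Earth’s radius and the Earth-Moon distance) shows how small the leading tidal term is next to the moon’s direct pull:

```python
# Leading binomial-expansion term of the lunar tidal acceleration.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_moon = 7.342e22      # kg
R_e = 6.371e6          # m, Earth's radius
R_lo = 3.844e8         # m, Earth-Moon distance

a_moon = G * M_moon / R_lo ** 2               # Moon's gravity at Earth's centre
a_tidal = 2 * G * M_moon * R_e / R_lo ** 3    # leading tidal term
print(f"lunar gravity {a_moon:.2e} m/s^2, tidal term {a_tidal:.2e} m/s^2")
print(f"ratio = 2*R_e/R_lo = {2 * R_e / R_lo:.2e}")
```

About a factor of thirty separates the two, and the tidal term is roughly a ten-millionth of the Earth’s own surface gravity; yet this tiny, well-understood effect still cannot be predicted heuristically without the underlying dynamics.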
It is not possible to come up with a meaningful heuristic for the tides without knowledge of both Newton’s Law of Gravitation and Newton’s Second Law. One can make tide tables, sure, but one cannot tell how the tables would CHANGE if the moon were closer, and one couldn’t begin to compute e.g. Roche’s Limit or tidal forces outside of the narrow Taylor series expansion regime where e.g. R_e/R_lo << 1. And then there is the sun and solar tides, making even the construction of a heuristic tide table an art form.
The reason we cannot make sense of it is that the actual interaction and acceleration are nonlinear functions of multiple coordinates. Note well: simple but nonlinear, and we are still a long way from solving anything like an actual equation of motion for the sloshing of oceans or the atmosphere due to tidal pseudoforces, even though the pseudoforces themselves are comparatively simple in the expansion regime. This is still way simpler than any climate problem.
Trying to explain the nonlinear climate by linearizing around some set of imagined “natural values” of input parameters and then attempting to predict an anomaly is just like trying to compute the tides without being able to compute the actual orbit due to gravitation first. It is building a Ptolemaic theory of tidal epicycles instead of observing the sky first, determining Kepler’s Laws from the data second, and discovering the laws of motion and gravitation that explain the data third, finding that they explain more observations than the original data (e.g. cometary orbits) fourth, and then deriving the correct theory of the tidal pseudoforces as a direct consequence of the working theory and observing agreement there fifth.
In this process we are still at the stage of Tycho Brahe and Johannes Kepler, patiently accumulating reliable, precise observational data and trying to organize it into crude rules. We are only decades into it — we have accurate knowledge of the ocean (70% of the Earth’s surface) that is at most decades long, and the reliable satellite record is not much longer. Before that we have a handful of decades of spotty observation — before World War II there was little appreciation of global weather at all and little means of observing it — and at most a century or so of thermometric data of indifferent quality and precision, sampling an ever smaller fraction of the Earth’s surface the further back one goes. Before that, everything is known at best by proxies — which isn’t to say that there is no knowledge there, but the error bars jump profoundly, as the proxies don’t do very well at predicting the current temperature outside of any narrow fit range, because most of the proxies are multivariate and hence easily confounded or merely blurred out by the passage of time. They are pre-Ptolemaic data — enough to see that the planets are wandering with respect to the fixed stars, and perhaps even enough to discern epicyclic patterns, but not enough to build a proper predictive model and certainly not enough to discern the underlying true dynamics.
I assert — as a modest proposal indeed — that we do not know enough to build a good, working climate model. We will not know enough until we can build a working climate model that predicts the past — explains in some detail the last 2000 years of proxy derived data, including the Little Ice Age and Dalton Minimum, the Roman and Medieval warm periods, and all of the other significant decadal and century scale variations in the climate clearly visible in the proxies. Such a theory would constitute the moral equivalent of Newton’s Law of Gravitation — sufficient to predict gross motion and even secondary gross phenomena like the tides, although difficult to use to compute a tide table from first principles. Once we can predict and understand the gross motion of the climate, perhaps we can discern and measure the actual “warming signal”, if any, from CO_2. In the meantime, as the GCMs continue their extensive divergence from observation, they make it difficult to take their predictions seriously enough to condemn a substantial fraction of the world’s population to a life of continuing poverty on their unsupported basis.
Let me make this perfectly clear. The WHO has been publishing absurdities such as the “number of people killed every year by global warming” (subject to a dizzying tower of Bayesian priors I will not attempt to deconstruct but that render the number utterly meaningless). We can easily add to this number the number of people a year who have died whose lives would have been saved if some of the half-trillion or so dollars spent to ameliorate a predicted disaster in 2100 had instead been spent to raise them up from poverty and build a truly global civilization.
Does anyone doubt that the ratio of the latter to the former — even granting the accuracy of the former — is at least a thousand to one? Think of what a billion dollars would do in the hands of Unicef, or Care. Think of the schools, the power plants, the business another billion dollars would pay for in India, in central Africa. Go ahead, think about spending 498 more billions of dollars to improve the lives of the world’s poorest people, to build up its weakest economies. Think of the difference not spending money building inefficient energy resources in Europe would have made in the European economy — more than enough to have completely prevented the fiscal crisis that almost brought down the Euro and might yet do so.
That is why presenting numbers like “67% likely” on the basis of gaussian estimates of the variance of averaged GCM numbers, as if they have some defensible predictive force, to those who are utterly incapable of knowing better is not just dumb. The nicest interpretation of it is incompetence. The harshest is criminal malfeasance — deliberately misleading the entire world in such a way that millions have died unnecessarily, whole economies have been driven to the wall, and worldwide suffering is vastly greater than it might have been if we had spent the last twenty years building global civilization instead of trying to tear it down!
Even if the predictions of catastrophe in 2100 are true — and so far there is little reason to think that they will be based on observation as opposed to extrapolation of models that rather appear to be failing — it is still not clear that we shouldn’t have opted for civilization building first as the lesser of the two evils.
I will conclude with my last standard “challenge” for the warmists, those who continue to firmly believe in an oncoming disaster in spite of no particular discernible warming (at anything like a “catastrophic” rate for somewhere between 13 and 17 years), in spite of an utterly insignificant rate of sea level rise, in spite of the growing divergence between the models and reality. If you truly wish to save civilization, and truly believe that carbon burning might bring it down, then campaign for nuclear power instead of solar or wind power. Nuclear power would replace carbon burning now, and do so in such a way that the all-important electrical supply is secure and reliable. Campaign for research at levels not seen since the development of the nuclear bomb into thorium-burning fission plants, as the US has a thorium supply in North Carolina alone that would meet its total energy needs for a period longer than the Holocene, and so do India and China — collectively a huge chunk of the world’s population right there (and thorium is mined with the rare earth metals needed in batteries, high-efficiency electrical motors, and more, reducing prices of all of these key metals in the world marketplace). Stop advocating the subsidy of alternative energy sources where those sources cannot pay for themselves. Stop opposing the burning of carbon for fuel while it is needed to sustain civilization, and recognize that if the world economy crashes, if civilization falls, it will be a disaster that easily rivals the worst of your fears from a warmer climate.
Otherwise, while “deniers” might have the blood of future innocents on their hands if your future beliefs turn out to be correct, you’ll continue to have the blood of many avoidable deaths in the present on your own hands.
rgb

June 20, 2013 10:10 am

The share buttons should precede the comment section…

Bruce Cobb
June 20, 2013 10:44 am

I do believe Prof. Brown has bitch-slapped the Warmists and their anti-carbon doctrine into tomorrow. Let the whining and misdirection from Stokes et al begin…

milodonharlani
June 20, 2013 10:55 am

rgbatduke says:
June 20, 2013 at 10:04 am
Agree on nuke plants v. windmills, although both of course produce CO2 from making the cement needed in their concrete.
The human contribution to GHGs is negligible. The effect on climate of a slightly increased GH effect from the rise in CO2 since 1850 is similarly trivial. No catastrophe looms. So far more CO2 has been good for most living things on the planet, including humans.
Warmth & more luxurious plant growth means bigger animals, if not better.

June 20, 2013 11:16 am

PS: I should have said land animals, because the increasingly frigid conditions since the Oligocene, & especially of the past 2.4 million years, have led to the evolution of the largest animals known, the baleen whales. (It’s possible that the biggest sauropods rivaled these marine giants, however.) Phytoplankton of course can get CO2 both from the air & sea, so can benefit from cold water retaining more carbon dioxide. The land mammals more massive than modern elephants went extinct during the Oligocene & Miocene cooling from Eocene warmth.

Gary Hladik
June 20, 2013 11:22 am

rgbatduke says (June 20, 2013 at 10:04 am): [snip]
Whoa! Another home run!
While I could follow–in a very superficial way–the original modeling-a-carbon-atom analogy, I appreciate the more familiar–to me, anyway–analogies of the Polya Urn, the tides/Newton, and the Ptolemaic solar system.
“Even if the predictions of catastrophe in 2100 are true…it is still not clear that we shouldn’t have opted for civilization building first as the lesser of the two evils.”
Lomborg’s Copenhagen Consensus takes a similar view, giving “civilization building” a much higher priority than mitigating Thermageddon.
“…beyond dumb. Dumber than dumb. Dumb cubed. The exponential of dumb.”
Might I suggest the term “galactically stupid”? 🙂

Frank
June 20, 2013 11:36 am

rgbatduke: This statement from the introduction to Chapter 10 of AR4 WG1 shows that the IPCC authors understand your position, but persist in drawing “problematic” statistical conclusions from their “ensemble of opportunity”:
“Many of the figures in Chapter 10 are based on the mean and spread of the multi-model ensemble of comprehensive AOGCMs. The reason to focus on the multi-model mean is that averages across structurally different models empirically show better large-scale agreement with observations, because individual model biases tend to cancel (see Chapter 8). The expanded use of multi-model ensembles of projections of future climate change therefore provides higher quality and more quantitative climate change information compared to the TAR. Even though the ability to simulate present-day mean climate and variability, as well as observed trends, differs across models, no weighting of individual models is applied in calculating the mean. Since the ensemble is strictly an ‘ENSEMBLE OF OPPORTUNITY’, without sampling protocol, the spread of models does NOT NECESSARILY SPAN THE FULL POSSIBLE RANGE OF UNCERTAINTY, and a STATISTICAL INTERPRETATION of the model spread is therefore PROBLEMATIC. However, attempts are made to quantify uncertainty throughout the chapter based on various other lines of evidence, including perturbed physics ensembles specifically designed to study uncertainty within one model framework, and Bayesian methods using observational constraints.” [MY CAPS]

June 20, 2013 12:45 pm

rgbatduke says: June 20, 2013 at 10:04 am
“Figure 1.4 in the unpublished AR5 appears poised to do exactly the same thing once again, turn an average of ensemble results, and standard deviations of the ensemble average”
I can’t see any standard deviation claimed there. But I’m getting weary of a lone battle dealing with poorly specified and described claims against AR5, so I’m happy to hand over to W.M.Briggs, often cited as an authority here.
“I therefore repeat to Nick the question I made on other threads. Is the near-neutral variation in global temperature for at least 1/8 of a century (since 2000, to avoid the issue of 13, 15, or 17 years of “no significant warming” given the 1997/1999 El Nino/La Nina one-two punch since we have no real idea of what “signficant” means given observed natural variability in the global climate record that is almost indistinguishable from the variability of the last 50 years) strong evidence for warming of 2.5 C by the end of the century? “
Well, this was the actual topic of the original thread. And I’ve done my own version of SteveF’s calculation. And it turns out that when you look at different datasets and periods, and in particular get away from his use of just a linear function to fit the post-1950 period, then the exogenous variables of ENSO, solar and volcanic aerosols (but mainly ENSO) do account for most of the slowdown. That is, if you remove them as SteveF did, there is a strong uptrend continuing to the present. It happened that SteveF’s calc, for HAD4 and linear detrending, was the one case that showed a weak remaining uptrend. Code and data are supplied.
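For readers who want the shape of that calculation, here is a rough Python sketch of an exogenous-variable regression of the kind described (not Stokes’s actual code; the arrays are random placeholders where real ENSO, solar, and volcanic aerosol series would go):

import numpy as np

n = 360                                   # 30 years of monthly data (hypothetical)
t = np.arange(n) / 12.0                   # time in years
rng = np.random.default_rng(1)
temp  = 0.017 * t + 0.1 * rng.standard_normal(n)  # stand-in temperature anomalies
enso  = rng.standard_normal(n)            # stand-in (lagged) ENSO index
solar = rng.standard_normal(n)            # stand-in TSI anomaly
aero  = rng.standard_normal(n)            # stand-in volcanic aerosol series

# Regress temperature on time plus the exogenous variables ...
X = np.column_stack([np.ones(n), t, enso, solar, aero])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

# ... then remove the fitted exogenous contributions and re-examine the trend.
adjusted = temp - X[:, 2:] @ beta[2:]
slope, intercept = np.polyfit(t, adjusted, 1)
print(f"trend after removing exogenous terms: {slope:.4f} C/yr")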

ThinkingScientist
June 20, 2013 2:30 pm

Nick Stokes: I gave the entire caption for AR5 figure 11.33. As RGB says, in (b) they show a mean and standard deviation envelope of the model results. It is nonsense. And worse, it can only be used to mislead policy makers and make AGW supporters think that the models have predictive capability. They clearly don’t. I am continually astonished how clearly intelligent individuals like yourself can continue to defend the indefensible.

June 20, 2013 2:36 pm

RGB: that is the most elequent and powerful demolition of CAGW I have ever read. You have my deepest admiration and respect.

June 20, 2013 2:36 pm

Eloquent…!

June 20, 2013 3:02 pm

Nick,
Your insights are always welcome, so please never feel intimidated out of arguing your case here or anywhere else. (I have suffered similarly on realscience.)
There is an intrinsic problem at the base of all these posts and discussions. There can never be a unique “theory” of the climate. Quantum theory may calculate the magnetic dipole of the electron “exactly” but still cannot “predict”, for example, the Higgs mass. It lies beyond (current) theory.
There is nothing wrong in principle with running competing GCM models with varying physical input parameters to compare with future climates. However, each run should be identified and described uniquely so that it can be validated numerically if it turns out to accurately describe the future climate.
I suspect modellers know full well that this is impossible because their code is always evolving, and feedbacks and aerosols are always being varied in order to better match past warming. There is therefore an intrinsic uncertainty. That is why model runs are combined to produce an “ensemble” of future warming. These do indeed then resemble a fire-hose and accentuate their lack of predictive power. A cynic might be tempted to suggest this process is basic arse covering.
Surely it is best to be up front and honest and accept that inherent uncertainty will always remain. Why not simply accept that the “scientific consensus” basically covers just greenhouse theory? Feedbacks and climate sensitivity are still uncertain.

george e. smith
June 20, 2013 3:39 pm

“””””……JohnB says:
June 19, 2013 at 6:36 pm
I’ve always thought that averaging the models was the same as saying;
I have 6 production lines that make cars. Each line has a defect.
1. Makes cars with no wheels.
2. Makes cars with no engine.
3. Makes cars with no gearbox.
4. Makes cars with no seats.
5. Makes cars with no windows.
6. Makes cars with no brakes.
But “on average” they make good cars…….””””””
Well, I would say on average, they make truly lousy cars, because every one of them has some known defect.
None of the six production lines makes a functioning car.

Nick Stokes
June 20, 2013 3:57 pm

ThinkingScientist says: June 20, 2013 at 2:30 pm
“Nick Stokes: I gave the entire caption for AR5 figure 11.33. As RGB says, in (b) they show a mean and standard deviation envelope of the model results.”

Not true. Read it again. They give the median and quantiles. There’s no distribution assumed. It’s simply a description of their aggregate results.

Rob Ricket
June 20, 2013 4:59 pm

Nick,
Please look at the graph on 11-89. The word “mean” is clearly plastered on the graph in bold letters.
http://www.stopgreensuicide.com/Ch11_near-term_WG1AR5_SOD_Ch11_All_Final.pdf

Nick Stokes
June 20, 2013 5:37 pm

Rob Ricket says: June 20, 2013 at 4:59 pm
“Nick,
Please look at the graph on 11-89. The word “mean” is clearly plastered on the graph in bold letters.”

Yes, it is. We’ve been through this before, endlessly. The AR4 showed ensemble means, and talks at length about it. We discussed that. Roy Spencer shows ensembles means in a prominent WUWT post. W.M.Briggs says it’s fine.
But this post says someone did something very bad, deserving severe sanctions. It’s not clear what it was, and we still have no idea who did it or where (other than Lord M, who was pointed to, but he didn’t do much either). But it wasn’t just calculating an ensemble mean.

June 20, 2013 5:38 pm

Anyone who has ever tried to employ mathematical models, calibrate them to the observed world, and then use the model to project into the future knows that, even with a small model, things are extremely difficult. When one tries to solve a system of nonlinear equations one comes to the same conclusion — one needs to fine-tune the model parameters to even find a solution (despite having calibrated to the real world), there are multiple attractors, etc. This is an argument I make in my book “Climate Change, Climate Science and Economics” (Springer 2013). I am glad a physicist has finally come out so clearly and strongly on the matter.

Nick Stokes
June 20, 2013 5:38 pm

Grr “pointer tom” = pointed to
[Fixed -w.]

David
June 20, 2013 7:26 pm

Nick says…”But this post says someone did something very bad, deserving severe sanctions. It’s not clear what it was, and we still have no idea who did it or where (other than Lord M, who was pointer tom but he didn;t do much either). But it wasn’t just calculating an ensemble mean.”
Correct Nick, it was calculating an ensemble mean in the following circumstances, and claiming that it (the ensemble mean) means anything but evidence of a failed hypothesis.
If the 10 percent of the models that are closest to the observations are all STILL wrong in the SAME direction, then this is, logically speaking, a clue that even those “best” models are systemically wrong and STILL oversensitive. In science, being CONSISTENTLY wrong in ONE DIRECTION is a clue that a basic premise is wrong. In science being wrong is informative.
When one is ALWAYS wrong to the oversensitive side of the equation, then you do not assume that the ensemble mean of all your unidirectionally OVERSENSITIVE wrong predictions gets you closer to the truth.

Nick Stokes
June 20, 2013 8:07 pm

David says: June 20, 2013 at 7:26 pm
“Correct Nick, it was calculating an ensemble mean in the following circumstances, and claiming it, (the ensemble means) means anything but evidence of a failed hypothesis.”

David, the claim that models are just wrong has been made loudly and often. It has convinced all those likely to be convinced. To make progress you need something else. RGB has tried to do that with a methods argument.
If you can make that argument, that someone put together results with wrong methods and it matters, then you might convince some people who think the models themselves have value. Such an argument would need to be properly referenced. It hasn’t been. But if you throw that away and just keep putting in caps that the models are wrong, there’s no progress.

John Bills
June 20, 2013 9:12 pm

Depending on the weather, some models are not wrong, some of the time.
War is peace, freedom is slavery, ignorance is strength, warm is cold.

David
June 20, 2013 9:40 pm

Nick says, “David, the claim that models are just wrong has been made loudly and often.”
———————————————————
I did not just say “the models are wrong”, so you are willfully creating a strawman. I said the models are consistently wrong in one direction. Their bias is to always predict more warming than observed. Each one of the “caps” emphasizes that distinction, and furthermore shows the foolishness in moving away from the least wrong (but still wrong) models through an ensemble mean, and pretending that mean has any policy implications. (The fact that said policy, based on an ensemble mean of zero predictive skill, has cost billions and impoverished many, is the crime which you are incapable of finding.)
It is not complicated, and your willful missing of the point is a poor reflection on you. Likewise in his last comment, and in a long previous comment, RGB specifically asked you six or seven pointed questions, which you never answered, as predicted by my Nick Stokes model, which I label S.I.T., short for “sophist intelligent troll”.

Nick Stokes
June 20, 2013 9:55 pm

David says: June 20, 2013 at 9:40 pm
‘I did not just say, “the models are wrong”,’

It doesn’t matter here how the models are said to be wrong. RGB is presenting a methods argument which tries to show that the conclusions are wrong even if the model runs are right. Your argument is not about the method; it is about some claimed facts about the model results.
The bottom line is that people who believe your set of facts are (rightly, for them) not going to care much about the method argument. It’s only useful for people whose minds are open on the merits of models.

June 21, 2013 12:34 am

Nick – what difference does it make that they show the mean or the median + the envelope/variance of the models? What message is being conveyed? The point is that any summary of the models like this is useless.
The IPCC continually tries to pretend that these are just “scenarios”, not predictions. If those “scenarios” are so unlikely as to be impossible, then what use are the models? This is not an academic exercise: IPCC reports are used to set public policies costing billions, even trillions of dollars. Do you think the summary for policymakers, and the activists who take away those messages, see your nuanced hair-splitting over whether it’s the mean, or median, or whatever? Those graphs have a message to deliver: the message is that the models can predict the climate into the future, and the future looks bad, and we all need to act now.
As RGB points out, the models, whether plotted as spaghetti, or summarised as a median and envelope, or any other pointless statistic you want to agonise over, do not agree with reality and therefore should be disregarded. We cannot currently predict the future climate and, given its non-linearity and complexity, it is unlikely that we are going to be able to for a long time to come. If we cannot model the climate for even a short period with any degree of accuracy, then we should stop doing so and admit that we don’t know what the future climate will be. Anything less is negligence and, if intended to mislead, criminal.

June 21, 2013 12:35 am

What is the difference between a “scenario” and a “prediction”?

Hoi Polloi
June 21, 2013 4:02 am

so I’m happy to hand over to W.M.Briggs, often cited as an authority here.

Interesting to notice that Stokes refers to Briggs when it suits him; never seen that before. BTW have you read the update in Briggs’ blog, Stokes?

Although it is true ensemble forecasting makes sense, I do NOT claim that they do well in practice for climate models.

David
June 21, 2013 4:41 am

A scenario and a prediction are of course the exact same thing if you base policy on them. But Nick may not admit that. RGB made every point I made (and many others), and a lot of other points as well with regard to chaotic systems, etc. Nick ignores every point I summarised, all of which were within RGB’s comments, and instead concentrates on pedantic details of the chaos discussion. He refuses to answer RGB’s questions. He does not admit the glaring facts everyone else sees, but pretends his pedantic, sophist disagreements with RGB somehow make the use of an ensemble mean for policy OK, when the ensemble mean is clearly taking one further and further from the models most reflective of real-world observations, and the policy is immensely destructive.
The models are wrong in ONE direction. The ensemble mean is used for political purposes, not to find the best model, but to ignore the best models, and create a SCARY prediction. Everybody but lonely Nick sees this.

David
June 21, 2013 4:45 am

Ensemble forecasting does NOT make sense when your errors are all biased in one direction.

Nick Stokes
June 21, 2013 5:07 am

David says: June 21, 2013 at 4:41 am
“The ensemble mean is used for political purposes, not to find the best model, but to ignore the best models, and create a SCARY prediction. Everybody but lonely Nick sees this.”
Lonely? There’s me mate W.M.Briggs. And Dr Roy Spencer. And just now Bob Tisdale, also on the pages of WUWT. I think we’re close to 97% 🙂

Tim Clark
June 21, 2013 6:28 am

{ Now, are the original or adjusted ensemble forecasts any good? If so, then the models are probably getting the physics right. If not, then not. We have to check: do the validation and apply some proper score to them. Only that would tell us. We cannot, in any way, say they are wrong before we do the checking. They are certainly not wrong because they are ensemble forecasts. They could only be wrong if they fail to match reality. (The forecasts Roy S. had up a week or so ago didn’t look like they did too well, but I only glanced at his picture.)
Although it is true ensemble forecasting makes sense, I do NOT claim that they do well in practice for climate models. I also dispute the notion that we have to act before we are able to verify the models. That’s nuts. If that logic held, then we would have to act on any bizarre notion that took our fancy as long as we perceived it might be a big enough threat. }
Regardless of whether the use of ensembles is appropriate, I don’t think Briggs is giving the output a rousing endorsement.
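Briggs’ “apply some proper score” can be made concrete. Below is a small Python sketch of the continuous ranked probability score (CRPS), a standard proper score for ensemble forecasts, applied to invented numbers; lower is better, and only this sort of verification against observations can tell whether an ensemble has skill:

import numpy as np

def crps_ensemble(members, obs):
    # Ensemble estimator of the CRPS: E|X - y| - 0.5 * E|X - X'|.
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

ensemble = [0.25, 0.31, 0.28, 0.35, 0.22]   # hypothetical projected trends
observed = 0.12                              # hypothetical observed trend
print(f"CRPS = {crps_ensemble(ensemble, observed):.4f}")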

Rob ricket
June 21, 2013 10:05 am

Ensemble forecasting with graphic representation of central tendencies, when combined with the periodic input of actual data, amounts to little more than a “No Climate Modeler Left Behind Act”. As mentioned before, the IPCC should include each model’s hindcast appended to the forecast, transposed over actual temperature. Methinks these models will skew predictably high in the hindcast envelope relative to the actual data.

John Archer
June 21, 2013 10:39 am

“Black = white.
However, sometimes it’s blue, unless green is available, in which case it’s red.
Models predict bleen, on average, rising catastrophically to hotrod red. 97% of scientists agree.”
(‘Hockey’) Stick Nokes,
Schutzstaffelfühfrer Designate
Department of Disinformation
One-World Totalitarian Institute
Bunkum Bunker
77 Wilhelmstraße
UEA Campus
Norwich FU2

— not necessarily verbatim but factual content verified by colour-blind diversity celebrationists and peer-reviewed for accuracy by 99% of all chromo-scientists who have ever lived, and Dulux.
Also certified as true by Gail Coombs. 😉 🙂
The science is now settled. It’s ‘time to move on‘ …
AAARRRRGGGHHHHH! There’s that cretinous managemental-politico-speak again!

Eliza
June 21, 2013 12:37 pm

One thing I can say about Nick: at least he reads the posts and then answers in his own way. I still reckon he is a paid troll. However, I would not be surprised (because he seems intelligent) if within 6-12 months he resigns his post and becomes an official denier LOL

Lars P.
June 21, 2013 2:08 pm

Nick Stokes says:
June 20, 2013 at 5:37 pm
But this post says someone did something very bad, deserving severe sanctions. It’s not clear what it was, and we still have no idea who did it or where (other than Lord M, who was pointer tom but he didn;t do much either). But it wasn’t just calculating an ensemble mean.
You almost got it Nick, :).
“one is treating the differences between the models as if they are uncorrelated random variates causing >deviation around a true mean!.”
does this help?
“the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge”
“So, should we take the mean of the ensemble of “physics based” models for the quantum electronic structure of atomic carbon and treat it as the best prediction of carbon’s quantum structure? Only if we are very stupid or insane or want to sell something.”
“It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. ”
So the severe sanction is to defund the models that are out of the expected fluctuation range on the DATA side. Well, I think that would be just the normal process that happens in the real world with models that try to model anything. Who would use a car or an airplane that was built based on the data of a model that gives results out of the range of true DATA?
Nick Stokes says:
June 21, 2013 at 5:07 am
Lonely? There’s me mate W.M.Briggs. And Dr Roy Spencer. And just now Bob Tisdale, also on the pages of WUWT. I think we’re close to 97% 🙂
Bob and Roy and W.M. Briggs use it to show the disconnect from real data. You do not. So you see, you are not mates with them.

June 21, 2013 4:49 pm

The key issue in global warming models is the apparent mishandling of the ecology of water vapour in the air. The models all assume that water vapour arrives in the air only via thermally driven evaporation. This is false; the majority of water vapour in the air derives from evapo-transpiration by plants. In our high and rising CO2 world, plants on land are putting out far less water vapour.
Thus, while the warming models all expect that small rises in temperature driven by CO2 will force water vapour into the air and result in much larger rises in temperature driven by the water vapour, they are fatally flawed. Here’s a post that explains it in more detail. http://russgeorge.net/2013/06/11/why-climate-models-fail/
The obvious match for the fabulous spaghetti models here is the spaghetti models of hurricane tracks. If we distributed hurricane relief based on the average of the spaghetti models, we’d frequently miss helping the victims.

Editor
June 21, 2013 4:55 pm

Nick Stokes says:
June 20, 2013 at 3:57 pm

ThinkingScientist says: June 20, 2013 at 2:30 pm

“Nick Stokes: I gave the entire caption for AR5 figure 11.33. As RGB says, in (b) they show a mean and standard deviation envelope of the model results.”

Not true. Read it again. They give the median and quantiles. There’s no distribution assumed. It’s simply a description of their aggregate results.

No distribution assumed? Perhaps you should try your method on the caption to this IPCC figure:

Figure 10.27. Statistics of annual mean responses to the SRES A1B scenario, for 2080 to 2099 relative to 1980 to 1999, calculated from the 21-member AR4 multi-model ensemble using the methodology of Räisänen (2001). Results are expressed as a function of horizontal scale on the x axis (‘Loc’: grid box scale; ‘Hem’: hemispheric scale; ‘Glob’: global mean) plotted against the y axis showing (a) the relative agreement between ensemble members, a dimensionless quantity defined as the square of the ensemble-mean response (corrected to avoid sampling bias) divided by the mean squared response of individual ensemble members, and (b) the dimensionless fraction of internal variability relative to the ensemble variance of responses. Values are shown for surface air temperature, precipitation and sea level pressure. The low agreement of SLP changes at hemispheric and global scales reflects problems with the conservation of total atmospheric mass in some of the models, however, this has no practical significance because SLP changes at these scales are extremely small.

Seems to me like they are drawing all kinds of statistical conclusions from the distribution of the model results.
I have the same problem that Robert Brown has with this—the models are not separate realizations of an underlying reality.
If we are drawing colored balls from a jar, the mean and standard deviation of the results tells us something.
But with the models, they are not representations of reality. This is obvious from their results—despite using widely differing inputs, they are all able to “hindcast” the 20th century.
Here’s the problem for me. The mean and standard deviation of the models does NOT reflect underlying reality like the balls in the jar. Instead, they are just a measure of how well the models are tuned. If they were all tuned perfectly and worked perfectly, they would all hindcast exactly the same result … but they don’t.
As a mental exercise, consider the following:
Suppose we have a dozen models, and we look at the results. Under the IPCC paradigm, if the model results are greatly different from the observations, people say, well, the models aren’t working.
Now let’s add a bunch of really crappy models. With our new ensemble, the observations are well within the standard deviation of the models, and people say hey, the models are working! They predicted the actual outcome!
And that, to me, epitomizes the problem with the models that Robert and I see, but you say doesn’t exist. The problem is that if you add crappy models to your ensemble, it performs better … and when that happens, you know your math is wrong somewhere.
w.
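Willis’s exercise above is easy to check numerically; here is a toy Python sketch with invented numbers (a tight cluster of biased “good” models, then the same cluster plus scattered junk):

import statistics

obs  = 0.10                                    # hypothetical observed trend
good = [0.28, 0.30, 0.32, 0.26, 0.34]          # biased but tight "good" models
junk = [0.9, -0.5, 1.2, -0.8, 0.7, -0.3, 1.0]  # scattered junk models

def check(label, models):
    mu = statistics.mean(models)
    sd = statistics.stdev(models)
    inside = abs(obs - mu) <= sd
    print(f"{label}: mean={mu:+.2f} sd={sd:.2f} obs within 1 sd? {inside}")

check("good only  ", good)          # obs falls far outside the tight envelope
check("good + junk", good + junk)   # junk widens the envelope; obs now "inside"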

Editor
June 21, 2013 4:58 pm

I’ve posted the following over on William Briggs’ site:

William, you say:

ONE Do ensemble forecast make statistical sense? Yes. Yes, they do. Of course they do. There is nothing in the world wrong with them. It does NOT matter whether the object of the forecast is chaotic, complex, physical, emotional, anything. All that gibberish about “random samples of models” or whatever is meaningless. There will be no “b****-slapping” anybody. (And don’t forget ensembles were invented to acknowledge the chaotic nature of the atmosphere, as I said above.)

I know you’re a great statistician, and you’re one of my heroes … but with all respect, you’ve left out a couple of important priors in your rant ….
1. You assume that the results of the climate model are better than random chance.
2. You assume that the mean of the climate models is better than the individual models.
3. You assume that the climate models are “physics-based”.
As far as I know, none of these has ever been shown to be true for climate models. If they have, please provide citations.
As a result, taking an average of climate models is much like taking an average of gypsy fortunetellers … and surely you would not argue that the average and standard deviation of their forecasts is meaningful.
This is the reason that you end up saying that ensembles of models are fine, but ensembles of climate models do very poorly … an observation you make, but fail to think through to its logical conclusion. Because IF your claim is right, your claim that we can happily trust ensembles of any kind of models, then why doesn’t that apply to climate models?
All you’ve done here is say “ensembles of models are fine, but not ensembles of climate models” without explaining why ensembles of climate models are crap, when the performance of climate models was the subject and the point of Robert Brown’s post … sorry, not impressed. Let me quote from Robert’s post regarding the IPCC use of “ensembles” of climate models:

What I’m trying to say is that the variance and mean of the “ensemble” of models is completely meaningless, statistically because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate unbiased anything is through the use of e.g. dice or other objectively random instruments).

Note that he is talking about climate models, not the models used to design the Boeing 787 or the models that predicted the Higgs boson … climate models.
One underlying problem with the climate models for global average temperature, as I’ve shown, is that their output is just a lagged and resized version of the input. They are completely mechanical and stupidly simple in that regard.
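That “lagged and resized” claim can be expressed as a one-box emulator; the Python sketch below is purely illustrative, with made-up forcing and parameters, and is not Willis’s actual analysis:

import numpy as np

def one_box(forcing, lam=0.4, tau=3.0, dt=1.0):
    # Scale the forcing by lam and lag it with time constant tau (years).
    T = np.zeros(len(forcing))
    for i in range(len(forcing) - 1):
        T[i + 1] = T[i] + (lam * forcing[i] - T[i]) * dt / tau
    return T

years = np.arange(1900, 2001)
forcing = 0.03 * (years - 1900) + 0.3 * np.sin((years - 1900) / 8.0)  # made up
print(one_box(forcing)[-5:])   # emulated "global temperature", last 5 years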
And despite wildly differing inputs, climate models all produce very similar outputs … say what? The only possible way they can do that is by NOT being physics based, by NOT intersecting with reality, but by being tuned to give the same answer.
If climate worked in that bozo-simple fashion, all of your points would be right, because then, the models would actually be a representation of reality.
But obviously, climate is not that simple, that’s a child’s convenient illusion. So the clustering of the models is NOT because they are hitting somewhere around the underlying reality.
The clustering of the models occurs because they share common errors, and they are all doing the same thing—giving us a lagged, resized version of the inputs. So yes, they do cluster around a result, and the focal point of their cluster is the delusion that climate is ridiculously simple.
So if you claim that clustering of climate models actually means something, you’re not the statistician you claim to be. Because that is equivalent to saying that if out of an ensemble of ten gypsies, seven of them say you should give them your money to be blessed, that you should act on that because it’s an ensemble, and “get your money blessed” is within the standard deviation of their advice …
There are many reasons why models might cluster around a wrong answer, and you’ve not touched on those reasons in the slightest. Instead, you’ve given us your strongest assurances that we can happily trust model ensembles to give us the straight goods … except climate models.
w.

June 21, 2013 6:18 pm

Willis Eschenbach says: June 21, 2013 at 4:55 pm
Well, first I just have to quote William Briggs’ reply:

“Willis,
No. You misunderstand because you (and others) are not keeping matters separate.
1. Do ensemble models make statistical sense in theory? Yes. Brown said no and wanted to slap somebody, God knows who, for believing they did and for creating a version of an ensemble forecast. He called such practice “horrendous.” Brown is wrong. What he said was false. As in not right. As is not even close to being right. As is severely, embarrassingly wrong. As in wrong in such a way that not one of his statistical statements could be repaired. As in just plain wrong. In other words, and to be precise, Brown is wrong. He has no idea of which he speaks. The passage you quote from him is wronger than Joe Biden’s hair plugs. It is wronger than Napoleon marching on Moscow. It is wronger than televised wrestling.”

And that issue of keeping things separate is what I’ve been arguing with David etc. RGB made specific stat-method criticisms. He did not substantiate them. Then there is a flurry of anyone finding some calculation of a model mean, and then all the stuff about, well, the models are wrong anyway. But as WMB rightly says, that’s irrelevant. Stick to the issue.
It’s over a week since RGB made his initial comments about AR5, and we still haven’t found what they apply to. Now you’re suggesting it is Fig 10.27 of the AR4. But that plot is nothing like what RGB was talking about. It does not show a time progression. Instead, they are analysing differences between models, looking for consistency and inconsistency. They look at temp, precip, and slp. They note hemispheric inconsistency in SLP. Is this not something you’d expect them to do?

Paul Vaughan
June 21, 2013 6:59 pm

Diverse recombination (including cross-disciplinary & cross-cultural) facilitates survival.
Some of the alleged climate “unknown unknowns” are now known knowns. They have nothing to do with unknown physics and everything to do with sampling & aggregation.
__
Climate-Stat Paradox 101
_
Lesson 1
“Ephemeral or “mirage” correlations are common in even the simplest nonlinear systems (7, 11–13), such as shown in Fig. 1 […]”
Sugihara+(2012). Detecting causality in complex ecosystems. Science 338, 496-500.
http://www.uvm.edu/~cdanfort/csc-reading-group/sugihara-causality-science-2012.pdf
Consider Fig. 1 (temporal-only context) extensions to the broader spatiotemporal context duly emphasized by Tomas Milanovic — no complete theory yet exists, but meanwhile we need not live in dark ignorance &/or deception …
_
Lesson 2
When we carefully shine light we easily note that applied in concert the laws of conservation of angular momentum & large numbers empirically clarify in tuned aggregate (seeing past the coupling indicated by interannual “mirrored mirage”) simple decadal & multidecadal attractors.
_
Lesson 3
“Nights are days – we beat a path through the mirrored maze
I can see the end…”
Metric – Breathing Underwater (ghostly if you don’t spot the paradox)
Tuned-aggregate views of coupled temperature, mass, & velocity gradients:
http://imageshack.us/a/img202/4641/lodjev.png
http://tallbloke.files.wordpress.com/2013/03/scd_sst_q.png
http://imageshack.us/a/img267/8476/rrankcmz.png (section 3)
http://imageshack.us/a/img16/4559/xzu.png (minimal outline)
“Apart from all other reasons, the parameters of the geoid depend on the distribution of water over the planetary surface.” — Nikolay Sidorenkov
Paradoxically we’re not drowning underwater (persistent cloud & heavy rain) in the Pacific Northwest.
_
Summary
Corrupt government & university modeling “science” is based on patently false assumptions.
___
Far more detail plus dovetailing new details (at both lower & higher timescales) forthcoming in the weeks & months ahead as time/resources permit…

Gary Pearse
June 21, 2013 7:18 pm

milodonharlani says:
June 20, 2013 at 10:55 am
“Agree on nuke plants v. windmills, although both of course produce CO2 from making the cement needed in their concrete”
CO2 from cement manufacture is commonly overestimated, by not taking into consideration its re-absorption by carbonation over the life cycle of the concrete (100 yrs). This includes concrete poured long before the 1950s, which is usually considered the period when AGW began to be significant. Even structures that are demolished and landfilled are still taking in some CO2. Also, let us not forget that cement makes up only 10% of concrete, giving a huge bang for the buck with this remarkable man-made stone. As usual, we get official exaggeration, as counseled by the founding fathers of CAGW (Schneider and the disgraced UEA proponents who hired “communication consultant” slicksters to show them how to lie and exaggerate effectively).
http://www.nrmca.org/sustainability/CONCRETE%20CO2%20FACT%20SHEET%20FEB%202012.pdf
“A significant portion of the CO2 produced during manufacturing of cement is reabsorbed into
concrete during the product life cycle through a process called carbonation. One research study estimates that between 33% and 57% of the CO2 emitted from calcination will be reabsorbed through carbonation of concrete surfaces over a 100-year life cycle. (GP: calcined lime plasters also reabsorb CO2 that was driven off in its manufacture to even a higher percentage.)”
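A back-of-envelope check on those figures, assuming roughly 0.5 t of calcination CO2 per tonne of cement (a common ballpark, not a number from the fact sheet):

calcination_co2 = 0.5        # t CO2 per t cement from calcination (assumed)
for frac in (0.33, 0.57):    # reabsorption range quoted from the NRMCA sheet
    net = calcination_co2 * (1 - frac)
    print(f"{frac:.0%} reabsorbed -> net calcination CO2 ~ {net:.2f} t per t cement")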

John Archer
June 21, 2013 8:51 pm

Filth!
I’m shocked that Stick Nokes quoted a fartoid Billy Boy Briggs left on his underpants.
Tut tut!
Now look here, Stick, you’re getting all second-hand emotional, so let me help you.
To clear your mind, I have removed the menstrual redundancy from your quotation and left the substance, such as it is:
“1. Do ensemble models make statistical sense in theory? Yes. Brown said no and wanted to slap somebody, God knows who, for believing they did and for creating a version of an ensemble forecast. He called such practice “horrendous.” Brown is wrong. What he said was false. As in not right. As is not even close to being right. As is severely, embarrassingly wrong. As in wrong in such a way that not one of his statistical statements could be repaired. As in just plain wrong. In other words, and to be precise, Brown is wrong. He has no idea of which he speaks. The passage you quote from him is wronger than Joe Biden’s hair plugs. It is wronger than Napoleon marching on Moscow. It is wronger than televised wrestling.”
I just wondered WTF content in that shit you actually believe supports your яgument and YTF in hell you bothered quoting it.
Maybe it’s that time of the month? Is that it?
I think the psychopathy of deviants is fascinating. Stick, you’re a very interesting case. God bless.

John Archer
June 21, 2013 8:59 pm

Anthony,
Any chance you might set up a preview pane for the textually challenged?
REPLY: For the ten thousandth time I’ve answered this question, no. wordpress.com won’t allow that plugin to run as it is a security hazard. – anthony

Paul Vaughan
June 21, 2013 9:18 pm

I won’t disagree with the following excerpts:
___
rgbatduke (June 20, 2013 at 10:04 am) wrote:
“This is not a matter of discussion about whether it is Monckton who is at fault for computing an R-value or p-value from the mish-mosh of climate results and comparing the result to the actual climate — this is, actually, wrong and yes, it is wrong for the same reasons I discuss above, because there is no reason to think that the central limit theorem and by inheritance the error function or other normal-derived estimates of probability will have the slightest relevance to any of the climate models, let alone all of them together.
[…]
I make this point to put the writers of the Summary for Policy Makers for AR5 on notice that if they repeat the egregious error made in AR4 and make any claims whatsoever for the predictive power of the spaghetti snarl of GCM computations, if they use the terms “mean and standard deviation” of an ensemble of GCM predictions, if they attempt to transform those terms into some sort of statement of probability of various future outcomes for the climate based on the collective behavior of the GCMs, there will be hell to pay, because GCM results are not iid samples drawn from a fixed distribution, thereby fail to satisfy the elementary axioms of statistics and render both mean behavior and standard deviation of mean behavior over the “space” of perturbations of model types and input data utterly meaningless as far as having any sort of theory-supported predictive force in the real world.
[…]
In the meantime, as the GCMs continue their extensive divergence from observation, they make it difficult to take their predictions seriously enough to condemn a substantial fraction of the world’s population to a life of continuing poverty on their unsupported basis.
[…]
Stop opposing the burning of carbon for fuel while it is needed to sustain civilization, and recognize that if the world economy crashes, if civilization falls, it will be a disaster that easily rivals the worst of your fears from a warmer climate.”

___
rgbatduke (June 13, 2013 at 7:20 am) wrote:
“[…] it looks like the frayed end of a rope, not like a coherent spread […]
[…]
Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing […] deviation around a true mean!.
Say what?
This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it.
[…]
What I’m trying to say is that the variance and mean of the “ensemble” of models is completely meaningless, statistically because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, there is no reason whatsoever to believe that the errors or differences are unbiased […]
[…]
Why even pay lip service to [r^2 & p-value] ? […]
[…]
You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias.
[…]
And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because […] Everything works just fine as long as you […] are bound to a given attractor, oscillating away, things look predictable and then — damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.
This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or Rsquared derived from an AR5 mean has any meaning. It gives up the high ground […]
[…]
[…] bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.”

old man
June 21, 2013 9:29 pm

It seems to me Dr. Brown is merely conveying what Mark Twain (I think it was) more succinctly conveyed over 100 years ago: “Education is that which reveals to the wise and conceals from the stupid the vast limits of their knowledge.” Note that in that formulation we can be both educated and stupid. Dr. Brown appears to be one of the few wise (and educated) people who doesn’t forget that simple lesson.

John Archer
June 21, 2013 9:38 pm

[The problem with anal retentives is that they can never leave it alone. I always wondered what it was like to be one. Let’s see. JA.]
Disgusting filth!
I’m shocked that Stick Nokes quoted a fartoid Billy Boy Briggs left on his underpants.
Tut tut!
Now look here, Stick, you’re getting all second-hand emotional, so let me help you.
To clear your mind, I have removed the menstrual redundancy from your quotation and left the substance, such as it is:
“1. Do ensemble models make statistical sense in theory? Yes. Brown said no and wanted to slap somebody, God knows who, for believing they did and for creating a version of an ensemble forecast. He called such practice “horrendous.” Brown is wrong. What he said was false. As in not right. As is not even close to being right. As is severely, embarrassingly wrong. As in wrong in such a way that not one of his statistical statements could be repaired. As in just plain wrong. In other words, and to be precise, Brown is wrong. He has no idea of which he speaks. The passage you quote from him is wronger than Joe Biden’s hair plugs. It is wronger than Napoleon marching on Moscow. It is wronger than televised wrestling.”
I just wondered WTF content in that shit you actually believe supports your яgument and YTF in hell you bothered quoting it.
Maybe it’s that time of the month? Is that it?
I think the psychopathy of deviants is fascinating. Stick, you’re a very interesting case. God bless.
[Works for me. JA]

John Archer
June 21, 2013 9:59 pm

“REPLY: For the ten thousandth time I’ve answered this question, no. wordpress.com won’t allow that plugin to run as it is a security hazard.” – anthony
Well no need to get all emotional about it. Besides, I’ve never read ANYTHING ten thousand times.
In any case I wasn’t asking about Plugin — I understand he’s on remand right now and implicated in the Savile case so I’m not surprised wordpress doesn’t want to know. Even so, I don’t see what he’s got to do with any of this.
Any chance of a preview pane?

John Archer
June 21, 2013 10:11 pm

Anthony,
Stick Nokes = duck.
Not me. 🙂

June 22, 2013 3:34 pm

I think people attribute more “intelligence” to models than is real. Try looking at the code and input variables.
Can someone explain to me what function a random number generator has in a predictive physics model?
GISS ModelE System.f:
MODULE RANDOM
!@sum RANDOM generates random numbers: 0<RANDom_nUmber<1
The recipe looks to me like this:
1. Take some input data
2. Apply some complex function
3. When the function looks like going out of bounds, check it against a real constraint
4. Throw in some random variation so the constraint doesn't look like clipping
5. Tweak nudging variables until the output looks like we want it.

June 22, 2013 8:29 pm

Here are the places modelE uses randoms:
ATM_DUM.f
c Set Random numbers for Mountain Drag intermittency
CLOUDS2_DRV.f
C Burn some random numbers corresponding to latitudes off
C processor
CLOUDS2_E1.f
!@var RNDSSL stored random number sequences
REAL*8 RNDSSL(3,LM)
CLOUDS2.f
!@var RNDSSL stored random number sequences
REAL*8 RNDSSL(3,LM)
MODELE.f
CALL CALC_AMPK(LM)
C****
!**** IRANDI seed for random perturbation of initial conditions (if/=0):
C**** tropospheric temperatures changed by at most 1 degree C
RAD_DRV.f
C**** Get the random numbers outside openMP parallel regions
C**** but keep MC calculation separate from SS clouds
C**** To get parallel consistency also with mpi, force each process
C**** to generate random numbers for all latitudes (using BURN_RANDOM)
STRATDYN.f
C Set Random numbers for 1/4 Mountain Drag
SYSTEM.f
INTEGER, SAVE :: IX !@var IX random number seed
TES_SIMULATOR.f
C**** Set up 2D array of random numbers over local domain.
C**** This is required for probability-based profile quality filtering.
C**** RINIT and RFINAL are called so that these calls don’t affect
C**** prognostic quantities that use RANDU.
So, if we randomly perturb initial conditions and clouds, what would we expect from a model over time? The frayed rope, I presume.
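The “frayed rope” expectation is easy to demonstrate with a toy chaotic system; the sketch below (Python, the Lorenz ’63 system standing in for a GCM, not ModelE) randomly perturbs initial conditions and integrates forward:

import numpy as np

def lorenz_step(state, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    # One crude Euler step of the Lorenz '63 system.
    x, y, z = state
    return np.array([x + dt * s * (y - x),
                     y + dt * (x * (r - z) - y),
                     z + dt * (x * y - b * z)])

rng = np.random.default_rng(0)
members = [np.array([1.0, 1.0, 1.0]) + 1e-6 * rng.standard_normal(3)
           for _ in range(10)]              # tiny random IC perturbations

for _ in range(2500):                       # integrate all members forward
    members = [lorenz_step(m) for m in members]

xs = [m[0] for m in members]
print(f"x after 25 time units: min={min(xs):+.2f}, max={max(xs):+.2f}")
# Indistinguishable at the start; scattered across the attractor at the end.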

gallopingcamel
June 23, 2013 10:56 pm

ThinkingScientist says: June 20, 2013 at 5:02 am,
Those model projections in AR5 WG1 Chapter 11, SOD (Second Order Draft) are obvious garbage, but that won’t stop the IPCC publishing it with great fanfare 12 weeks from now. When each model is garbage, the “ensemble” average of 73 models is still garbage. Maybe I am oversimplifying “rgb”‘s message but here it is again from the inimitable Warmist, Roy Spencer:
http://www.drroyspencer.com/2013/06/still-epic-fail-73-climate-models-vs-measurements-running-5-year-means/
Roy is only looking ahead to 2020 but the picture is similar in the 2050 scenario shown in AR5 chapter 11. If you want to see total absurdity just take the time to look at the IPCC’s projections for the year 2100. This time you need to look at page 24 (of 26), Figure SPM.5 (b).
http://www.gallopingcamel.info/Docs/WG1/SODs/SummaryForPolicymakers_WG1AR5-SPM_FOD_Final.pdf
Depending on which model you like, the temperature rise (compared to the 1986-2005 average) will be anything between 1.0 and 4.0 Kelvin. That is not “Science”. It is not even “Educated Guesswork”. It belongs with the predictions of the folks who used to “read” animal entrails.
As each year passes it becomes more and more obvious that the observed temperature is likely to remain below the bottom limit of the IPCC’s predicted band, as it is today.

Lars P.
June 25, 2013 12:41 pm

rgbatduke says:
June 20, 2013 at 10:04 am
rgbatduke, thank you for taking the time and putting up the arguments so clearly!

June 25, 2013 1:03 pm

Gary Pearse says:
June 21, 2013 at 7:18 pm
———————————–
Thanks. I’m all for concrete. CACCAs carp on it, so I just throw it out there. It’s the only way they can attack nuclear power from a climate catastrophe standpoint, yet they ignore its effect when blessing windmills. Of course, they don’t like dams either.

Paul Vaughan
June 27, 2013 2:47 am

Naive development:
http://judithcurry.com/2013/06/24/how-should-we-interpret-an-ensemble-of-models-part-i-weather-models/
It’s illuminating that Curry points to this:
http://wmbriggs.com/blog/?p=8394
Briggs’ entire argument rests upon patently untenable assumptions. His theoretical view of climate stats exposes blind subscription to an academic culture that ignores diagnostic insights.
Rather than admit that Brown’s arguments go to a MUCH deeper philosophical level, Briggs bases arguments on hidden layers of patently false assumptions.
This is a trust issue.
Since this particularly egregious instance of dark ignorance &/or deception follows on the heels of a longstanding pattern of dark ignorance &/or deception in Briggs’ climate discussion commentary, I’m writing him off permanently as decisively untrustworthy.