The “ensemble” of models is completely meaningless, statistically

This comment comes from rgbatduke, who is Robert G. Brown of the Duke University Physics Department, on the No significant warming for 17 years 4 months thread. It has gained quite a bit of attention because it speaks clearly to the truth. So that all readers can benefit, I’m elevating it to a full post.

rgbatduke says:

June 13, 2013 at 7:20 am

Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!

This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.

Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!

Say what?

This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again. One cannot generate an ensemble of independent and identically distributed models that have different code. One might, possibly, generate a single model that generates an ensemble of predictions by using uniform deviates (random numbers) to seed “noise” (representing uncertainty) in the inputs.

What I’m trying to say is that the variance and mean of the “ensemble” of models is completely meaningless, statistically because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate unbiased anything is through the use of e.g. dice or other objectively random instruments).
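To make the distinction concrete, here is a toy sketch of my own (invented numbers, no real model output): a genuine Monte Carlo ensemble perturbs the inputs of one model, so its mean and spread have a statistical interpretation; averaging a handful of structurally different, systematically biased models does not converge on anything, and its standard deviation measures only how much the codes disagree with one another.

import numpy as np

rng = np.random.default_rng(0)
truth = 1.0  # the (unknown) quantity every model is trying to predict

# Case 1: a genuine Monte Carlo ensemble -- ONE model, random input noise.
# The spread of its members is an honest estimate of input-driven uncertainty.
def one_model(forcing):
    return 0.9 * forcing + 0.1          # a single, fixed piece of "physics"

mc_members = one_model(truth + rng.normal(0.0, 0.05, size=1000))
print("Monte Carlo ensemble: mean=%.3f  sd=%.3f" % (mc_members.mean(), mc_members.std()))

# Case 2: an "ensemble" of structurally different models, each with its own
# systematic bias.  Their differences are not random draws around the truth,
# so the mean and sd of this collection have no statistical interpretation.
model_biases = np.array([0.8, 0.9, 1.1, 1.3, 1.5])   # hypothetical, not unbiased
multi_model = truth + model_biases
print("multi-model average : mean=%.3f  sd=%.3f" % (multi_model.mean(), multi_model.std()))
# Nothing guarantees that this mean approaches `truth`, and adding more models
# with more biases just moves it around.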

So why buy into this nonsense by doing linear fits to a function — global temperature — that has never in its entire history been linear, although of course it has always been approximately smooth so one can always do a Taylor series expansion in some sufficiently small interval and get a linear term that — by the nature of Taylor series fits to nonlinear functions — is guaranteed to fail if extrapolated as higher order nonlinear terms kick in and ultimately dominate? Why even pay lip service to the notion that R^2 or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning? It has none.
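As a toy illustration of that point about linear fits (an arbitrary smooth nonlinear function, not a claim about the temperature record): the fit looks superb on the fitting interval and then fails as the neglected higher-order terms take over.

import numpy as np

f = lambda x: np.exp(0.5 * x)           # arbitrary smooth, nonlinear stand-in

# Fit a straight line on a short interval; R^2 there looks superb.
x_fit = np.linspace(0.0, 0.5, 50)
slope, intercept = np.polyfit(x_fit, f(x_fit), 1)
resid = f(x_fit) - (slope * x_fit + intercept)
print("R^2 on the fitting interval: %.5f" % (1.0 - resid.var() / f(x_fit).var()))

# Extrapolate: the higher-order Taylor terms dropped by the linear fit
# eventually dominate, and the "excellent" fit is guaranteed to fail.
for x in (1.0, 2.0, 4.0):
    print("x=%.1f  true=%.2f  linear extrapolation=%.2f"
          % (x, f(x), slope * x + intercept))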

Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.

Let’s invert this process and actually apply statistical analysis to the distribution of model results Re: the claim that they all correctly implement well-known physics. For example, if I attempt to do an a priori computation of the quantum structure of, say, a carbon atom, I might begin by solving a single electron model, treating the electron-electron interaction using the probability distribution from the single electron model to generate a spherically symmetric “density” of electrons around the nucleus, and then performing a self-consistent field theory iteration (resolving the single electron model for the new potential) until it converges. (This is known as the Hartree approximation.)
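For readers who want the shape of that self-consistent loop spelled out, here is a deliberately crude, one-dimensional caricature of my own (a toy mean-field iteration, nothing like a real Hartree calculation of carbon): solve a single-particle problem in the current potential, rebuild the density, and repeat until it stops changing.

import numpy as np

# Toy 1-D "self-consistent field" loop: all numbers and the mean-field form
# are arbitrary illustrations, not atomic physics.
n, L = 200, 10.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx**2      # discrete Laplacian
v_ext = 0.5 * x**2                                     # external "nuclear" potential
g = 1.0                                                # mean-field repulsion strength

density = np.full(n, 1.0 / L)                          # initial guess
for it in range(200):
    v_eff = v_ext + g * density                        # Hartree-like mean field
    h = -0.5 * lap + np.diag(v_eff)
    energies, states = np.linalg.eigh(h)
    new_density = states[:, 0] ** 2 / dx               # normalised density
    if np.max(np.abs(new_density - density)) < 1e-8:   # self-consistency reached
        break
    density = 0.5 * density + 0.5 * new_density        # mixing for stability

print("converged after %d iterations, ground-state energy %.4f" % (it + 1, energies[0]))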

Somebody else could say “Wait, this ignores the Pauli exclusion principle and the requirement that the electron wavefunction be fully antisymmetric.” One could then make the (still single electron) model more complicated and construct a Slater determinant to use as a fully antisymmetric representation of the electron wavefunctions, generate the density, perform the self-consistent field computation to convergence. (This is Hartree-Fock.)

A third party could then note that this still underestimates what is called the “correlation energy” of the system, because treating the electron cloud as a continuous distribution through which electrons move ignores the fact that individual electrons strongly repel and hence do not like to get near one another. Both of the former approaches underestimate the size of the electron hole, and hence they make the atom “too small” and “too tightly bound”. A variety of schema are proposed to overcome this problem — using a semi-empirical local density functional being probably the most successful.

A fourth party might then observe that the Universe is really relativistic, and that by ignoring relativity theory and doing a classical computation we introduce an error into all of the above (although it might be included in the semi-empirical LDF approach heuristically).

In the end, one might well have an “ensemble” of models, all of which are based on physics. In fact, the differences are also based on physics — the physics omitted from one try to another, or the means used to approximate and try to include physics we cannot include in a first-principles computation (note how I sneaked a semi-empirical note in with the LDF, although one can derive some density functionals from first principles (e.g. Thomas-Fermi approximation), they usually don’t do particularly well because they aren’t valid across the full range of densities observed in actual atoms). Note well, doing the precise computation is not an option. We cannot solve the many body atomic state problem in quantum theory exactly any more than we can solve the many body problem exactly in classical theory or the set of open, nonlinear, coupled, damped, driven chaotic Navier-Stokes equations in a non-inertial reference frame that represent the climate system.

Note well that solving for the exact, fully correlated nonlinear many electron wavefunction of the humble carbon atom — or the far more complex Uranium atom — is trivially simple (in computational terms) compared to the climate problem. We can’t compute either one, but we can come a damn sight closer to consistently approximating the solution to the former compared to the latter.

So, should we take the mean of the ensemble of “physics based” models for the quantum electronic structure of atomic carbon and treat it as the best prediction of carbon’s quantum structure? Only if we are very stupid or insane or want to sell something. If you read what I said carefully (and you may not have — eyes tend to glaze over when one reviews a year or so of graduate quantum theory applied to electronics in a few paragraphs, even though I left out perturbation theory, Feynman diagrams, and ever so much more :-) you will note that I cheated — I slipped in a semi-empirical method.

Which of these is going to be the winner? LDF, of course. Why? Because the parameters are adjusted to give the best fit to the actual empirical spectrum of Carbon. All of the others are going to underestimate the correlation hole, and their errors will be systematically deviant from the correct spectrum. Their mean will be systematically deviant, and by weighting Hartree (the dumbest reasonable “physics based approach”) the same as LDF in the “ensemble” average, you guarantee that the error in this “mean” will be significant.

Suppose one did not know (as, at one time, we did not know) which of the models gave the best result. Suppose that nobody had actually measured the spectrum of Carbon, so its empirical quantum structure was unknown. Would the ensemble mean be reasonable then? Of course not. I presented the models in the way physics itself predicts improvement — adding back details that ought to be important that are omitted in Hartree. One cannot be certain that adding back these details will actually improve things, by the way, because it is always possible that the corrections are not monotonic (and eventually, at higher orders in perturbation theory, they most certainly are not!). Still, nobody would pretend that the average of a theory with an improved theory is “likely” to be better than the improved theory itself, because that would make no sense. Nor would anyone claim that diagrammatic perturbation theory results (for which there is a clear a priori derived justification) are necessarily going to beat semi-heuristic methods like LDF because in fact they often do not.

What one would do in the real world is measure the spectrum of Carbon, compare it to the predictions of the models, and then hand out the ribbons to the winners! Not the other way around. And since none of the winners is going to be exact — indeed, for decades and decades of work, none of the winners was even particularly close to observed/measured spectra in spite of using supercomputers (admittedly, supercomputers that were slower than your cell phone is today) to do the computations — one would then return to the drawing board and code entry console to try to do better.

Can we apply this sort of thoughtful reasoning to the spaghetti snarl of GCMs and their highly divergent results? You bet we can! First of all, we could stop pretending that “ensemble” mean and variance have any meaning whatsoever by not computing them. Why compute a number that has no meaning? Second, we could take the actual climate record from some “epoch starting point” — one that does not matter in the long run, and we’ll have to continue the comparison for the long run because in any short run from any starting point noise of a variety of sorts will obscure systematic errors — and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.
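A toy sketch of that sorting step (invented series standing in for observations and model runs; real use would substitute the measured record and the model archive): score every model against reality, keep the few that come closest, and bin the rest rather than averaging them.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the observed record and a stack of model runs.
years = np.arange(1980, 2014)
observed = 0.010 * (years - 1980) + rng.normal(0.0, 0.08, years.size)
trends = np.array([0.010, 0.014, 0.020, 0.030, 0.045])          # per-model "trends"
models = trends[:, None] * (years - 1980) + rng.normal(0.0, 0.05, (5, years.size))

# Score each model against reality; keep the best few, bin the rest.
rmse = np.sqrt(((models - observed) ** 2).mean(axis=1))
ranking = np.argsort(rmse)
print("RMSE per model:", np.round(rmse, 3))
print("kept:", ranking[:2], "  failed bin:", ranking[2:])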

Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.

Then comes the hard part. Waiting. The climate is not as simple as a Carbon atom. The latter’s spectrum never changes, it is a fixed target. The former is never the same. Either one’s dynamical model is never the same and mirrors the variation of reality or one has to conclude that the problem is unsolved and the implementation of the physics is wrong, however “well-known” that physics is. So one has to wait and see if one’s model, adjusted and improved to better fit the past up to the present, actually has any predictive value.

Worst of all, one cannot easily use statistics to determine when or if one’s predictions are failing, because damn, climate is nonlinear, non-Markovian, chaotic, and is apparently influenced in nontrivial ways by a world-sized bucket of competing, occasionally cancelling, poorly understood factors. Soot. Aerosols. GHGs. Clouds. Ice. Decadal oscillations. Defects spun off from the chaotic process that cause global, persistent changes in atmospheric circulation on a local basis (e.g. blocking highs that sit out on the Atlantic for half a year) that have a huge impact on annual or monthly temperatures and rainfall and so on. Orbital factors. Solar factors. Changes in the composition of the troposphere, the stratosphere, the thermosphere. Volcanoes. Land use changes. Algae blooms.

And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because trying to ensemble average a small sample from a chaotic system is so stupid that I cannot begin to describe it. Everything works just fine as long as you average over an interval short enough that you are bound to a given attractor, oscillating away, things look predictable and then — damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.
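A toy illustration of that attractor problem (a noisy double-well system, not a climate model, with arbitrary parameters): while the trajectory sits in one well, short-window averages look stable and “tunable”; when it hops to the other well, everything calibrated to the old regime is suddenly wrong.

import numpy as np

rng = np.random.default_rng(2)

# Particle in a double well, nudged by small noise (all parameters arbitrary).
dt, steps = 0.01, 120_000
x = np.empty(steps)
x[0] = 1.0
for t in range(steps - 1):
    drift = x[t] - x[t] ** 3                           # double-well force
    x[t + 1] = x[t] + dt * drift + 0.5 * np.sqrt(dt) * rng.normal()

window = 5_000                                         # 50 time units per window
means = x[: steps // window * window].reshape(-1, window).mean(axis=1)
print("window averages:", np.round(means, 2))
# Averages hover near +1 for long stretches, then jump toward -1 after a hop --
# a short sample tells you about the current regime, not about the system.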

This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or R-squared derived from an AR5 mean has any meaning. It gives up the high ground (even though one is using it for a good purpose, trying to argue that this “ensemble” fails elementary statistical tests). But statistical testing is a shaky enough theory as it is, open to data dredging and horrendous error alike, and that’s when it really is governed by underlying IID processes (see “Green Jelly Beans Cause Acne”). One cannot naively apply a criterion like rejection if p < 0.05, and all that means under the best of circumstances is that the current observations are improbable given the null hypothesis at 19 to 1. People win and lose bets at this level all the time. One time in 20, in fact. We make a lot of bets!
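The “one bet in twenty” point is easy to check numerically; a quick sketch (two groups drawn from the same distribution, so every rejection is a fluke by construction):

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Many independent "bets" where the null hypothesis is TRUE by construction.
n_tests, flukes = 2000, 0
for _ in range(n_tests):
    a = rng.normal(0.0, 1.0, 30)
    b = rng.normal(0.0, 1.0, 30)
    flukes += stats.ttest_ind(a, b).pvalue < 0.05
print("fraction rejected at p < 0.05: %.3f" % (flukes / n_tests))   # about 0.05
# Roughly one comparison in twenty "wins" by chance alone -- the
# green-jelly-beans-cause-acne effect.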

So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth, and why we aren’t using empirical evidence (as it accumulates) to reject failing models and concentrate on the ones that come closest to working, while also not using the models that are obviously not working in any sort of “average” claim for future warming. Maybe they could hire themselves a Bayesian or two and get them to recompute the AR curves, I dunno.

It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.

Of course if one does this, the GCM predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.

rgb

Rob ricket
June 21, 2013 10:05 am

Ensemble forecasting with graphic representation of central tendencies, when combined with the periodic inputs of actual data, amounts to little more than a “No Climate Modeler Left Behind Act”. As mentioned before, the IPCC should include each model’s hindcast appended to the forecast, transposed over actual temperature. Methinks the predictive skill of these models will skew predictably high in the hindcast envelope relative to actual data.

John Archer
June 21, 2013 10:39 am

“Black = white.
However, sometimes it’s blue, unless green is available, in which case it’s red.
Models predict bleen, on average, rising catastrophically to hotrod red. 97% of scientists agree.”
(‘Hockey’) Stick Nokes,
Schutzstaffelfühfrer Designate
Department of Disinformation
One-World Totalitarian Institute
Bunkum Bunker
77 Wilhelmstraße
UEA Campus
Norwich FU2

— not necessarily verbatim but factual content verified by colour-blind diversity celebrationists and peer-reviewed for accuracy by 99% of all chromo-scientists who have ever lived, and Dulux.
Also certified as true by Gail Coombs. 😉 🙂
The science is now settled. It’s ‘time to move on‘ …
AAARRRRGGGHHHHH! There’s that cretinous managemental-politico-speak again!

Eliza
June 21, 2013 12:37 pm

One thing I can say about Nick: at least he reads the posts and then answers in his own way. I still reckon he is a paid troll. However, I would not be surprised (because he seems intelligent) if within 6-12 months he resigns his post and becomes an official denier LOL

Lars P.
June 21, 2013 2:08 pm

Nick Stokes says:
June 20, 2013 at 5:37 pm
But this post says someone did something very bad, deserving severe sanctions. It’s not clear what it was, and we still have no idea who did it or where (other than Lord M, who was pointed to, but he didn’t do much either). But it wasn’t just calculating an ensemble mean.
You almost got it Nick, :).
“one is treating the differences between the models as if they are uncorrelated random variates causing >deviation around a true mean!.”
does this help?
“the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge”
“So, should we take the mean of the ensemble of “physics based” models for the quantum electronic structure of atomic carbon and treat it as the best predictionof carbon’s quantum structure? Only if we are very stupid or insane or want to sell something.”
“It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. ”
So the severe sanction is to defund the models that are out of the expected fluctuation range on the DATA side. Well, I think that would be just the normal process that happens in the real world with models that try to model anything. Who would use a car or an airplane that is built based on the data of a model that gives results out of the range of true DATA?
Nick Stokes says:
June 21, 2013 at 5:07 am
Lonely? There’s me mate W.M.Briggs. And Dr Roy Spencer. And just now Bob Tisdale, also on the pages of WUWT. I think we’re close to 97% 🙂
Bob and Roy and W.M. Briggs use it to show the disconnect from real data. You do not. So you see, you are not mates with them.

June 21, 2013 4:49 pm

The key issue in global warming models is the apparent misappropriation of the ecology of water vapour in the air. The models all assume that water vapour arrives in the air only as a factor of thermally driven evaporation. This is false; the majority of water vapour in the air derives from the evapo-transpiration of plants. In our high and rising CO2 world, plants on land are putting out far less water vapour.
Thus, while the warming models all expect that small rises in temperature driven by CO2 will force water vapour into the air and result in much larger rises in temperature driven by that water vapour, they are fatally flawed. Here’s a post that explains it in more detail. http://russgeorge.net/2013/06/11/why-climate-models-fail/
The obvious match for the fabulous spaghetti models here is the spaghetti models of hurricanes. If we distributed hurricane relief based on the average of the spaghetti models, we’d almost surely miss helping the victims much of the time.

Editor
June 21, 2013 4:55 pm

Nick Stokes says:
June 20, 2013 at 3:57 pm

ThinkingScientist says: June 20, 2013 at 2:30 pm

“Nick Stokes: I gave the entire caption for AR5 figure 11.33. As RGB says, in (b) they show a mean and standard deviation envelope of the model results.”

Not true. Read it again. They give the median and quantiles. There’s no distribution assumed. It’s simply a description of their aggregate results.

No distribution assumed? Perhaps you should try your method on the caption to the IPCC Figure

Figure 10.27. Statistics of annual mean responses to the SRES A1B scenario, for 2080 to 2099 relative to 1980 to 1999, calculated from the 21-member AR4 multi-model ensemble using the methodology of Räisänen (2001). Results are expressed as a function of horizontal scale on the x axis (‘Loc’: grid box scale; ‘Hem’: hemispheric scale; ‘Glob’: global mean) plotted against the y axis showing (a) the relative agreement between ensemble members, a dimensionless quantity defined as the square of the ensemble-mean response (corrected to avoid sampling bias) divided by the mean squared response of individual ensemble members, and (b) the dimensionless fraction of internal variability relative to the ensemble variance of responses. Values are shown for surface air temperature, precipitation and sea level pressure. The low agreement of SLP changes at hemispheric and global scales reflects problems with the conservation of total atmospheric mass in some of the models, however, this has no practical significance because SLP changes at these scales are extremely small.

Seems to me like they are drawing all kinds of statistical conclusions from the distribution of the model results.
I have the same problem that Robert Brown has with this—the models are not separate realizations of an underlying reality.
If we are drawing colored balls from a jar, the mean and standard deviation of the results tells us something.
But with the models, they are not representations of reality. This is obvious from their results—despite using widely differing inputs, they are all able to “hindcast” the 20th century.
Here’s the problem for me. The mean and standard deviation of the models does NOT reflect underlying reality like the balls in the jar. Instead, they are just a measure of how well they are tuned. If they were all tuned perfectly and worked perfectly, they would all hindcast exactly the same result … but they don’t.
As a mental exercise, consider the following:
Suppose we have a dozen models, and we look at the results. Under the IPCC paradigm, if the model results are greatly different from the observations, people say, well, the models aren’t working.
Now let’s add a bunch of really crappy models. With our new ensemble, the observations are well within the standard deviation of the models, and people say hey, the models are working! They predicted the actual outcome!
And that, to me, epitomizes the problem with the models that Robert and I see, but you say doesn’t exist. The problem is that if you add crappy models to your ensemble, it performs better … and when that happens, you know your math is wrong somewhere.
w.
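A toy illustration of the effect Willis describes above (the numbers are invented): the observation sits outside the spread of the original dozen “models”, but adding a handful of wildly scattered ones inflates the standard deviation until the observation is “covered” — the ensemble looks better precisely because it got worse.

import numpy as np

observed = 0.1
good_ish = np.array([0.25, 0.28, 0.30, 0.30, 0.32, 0.33,
                     0.34, 0.35, 0.36, 0.38, 0.40, 0.42])   # a dozen "models"

def covered(ensemble, obs):
    m, s = ensemble.mean(), ensemble.std()
    return abs(obs - m) <= 2 * s, m, s

print("original ensemble : covered=%s  mean=%.2f  sd=%.2f" % covered(good_ish, observed))

# Add some really crappy models: the mean barely moves, but the spread balloons.
crappy = np.array([-0.3, 0.0, 0.7, 0.9, 1.2])
padded = np.concatenate([good_ish, crappy])
print("with crappy models: covered=%s  mean=%.2f  sd=%.2f" % covered(padded, observed))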

Editor
June 21, 2013 4:58 pm

I’ve posted the following over on William Brigg’s site:

William, you say:

ONE Do ensemble forecast make statistical sense? Yes. Yes, they do. Of course they do. There is nothing in the world wrong with them. It does NOT matter whether the object of the forecast is chaotic, complex, physical, emotional, anything. All that gibberish about “random samples of models” or whatever is meaningless. There will be no “b****-slapping” anybody. (And don’t forget ensembles were invented to acknowledge the chaotic nature of the atmosphere, as I said above.)

I know you’re a great statistician, and you’re one of my heroes … but with all respect, you’ve left out a couple of important priors in your rant ….
1. You assume that the results of the climate model are better than random chance.
2. You assume that the mean of the climate models is better than the individual models.
3. You assume that the climate models are “physics-based”.
As far as I know, none of these has ever been shown to be true for climate models. If they have, please provide citations.
As a result, taking an average of climate models is much like taking an average of gypsy fortunetellers … and surely you would not argue that the average and standard deviation of their forecasts is meaningful.
This is the reason that you end up saying that ensembles of models are fine, but ensembles of climate models do very poorly … an observation you make, but fail to think through to its logical conclusion. Because IF your claim is right, your claim that we can happily trust ensembles of any kind of models, then why doesn’t that apply to climate models?
All you’ve done here is say “ensembles of models are fine, but not ensembles of climate models” without explaining why ensembles of climate models are crap, when the performance of climate models was the subject and the point of Robert Brown’s post … sorry, not impressed. Let me quote from Robert’s post regarding the IPCC use of “ensembles” of climate models:

What I’m trying to say is that the variance and mean of the “ensemble” of models is completely meaningless, statistically because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate unbiased anything is through the use of e.g. dice or other objectively random instruments).

Note that he is talking about climate models, not the models used to design the Boeing 787 or the models that predicted the Higgs boson … climate models.
One underlying problem with the climate models for global average temperature, as I’ve shown, is that their output is just a lagged and resized version of the input. They are completely mechanical and stupidly simple in that regard.
And despite wildly differing inputs, climate models all produce very similar outputs … say what? The only possible way they can do that is by NOT being physics based, by NOT intersecting with reality, but by being tuned to give the same answer.
If climate worked in that bozo-simple fashion, all of your points would be right, because then, the models would actually be a representation of reality.
But obviously, climate is not that simple, that’s a child’s convenient illusion. So the clustering of the models is NOT because they are hitting somewhere around the underlying reality.
The clustering of the models occurs because they share common errors, and they are all doing the same thing—giving us a lagged, resized version of the inputs. So yes, they do cluster around a result, and the focal point of their cluster is the delusion that climate is ridiculously simple.
So if you claim that clustering of climate models actually means something, you’re not the statistician you claim to be. Because that is equivalent to saying that if out of an ensemble of ten gypsies, seven of them say you should give them your money to be blessed, that you should act on that because it’s an ensemble, and “get your money blessed” is within the standard deviation of their advice …
There are many reasons why models might cluster around a wrong answer, and you’ve not touched on those reasons in the slightest. Instead, you’ve given us your strongest assurances that we can happily trust model ensembles to give us the straight goods … except climate models.
w.
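For what it is worth, here is a one-box caricature of the “lagged and resized version of the input” behaviour described in the comment above; the sensitivity and lag constants are arbitrary illustrative values, not fitted to any model.

import numpy as np

def lagged_resized(forcing, lam=0.5, tau=8.0):
    """Output relaxes exponentially toward lam * forcing with time constant tau."""
    out = np.zeros_like(forcing)
    for t in range(1, forcing.size):
        out[t] = out[t - 1] + (lam * forcing[t] - out[t - 1]) / tau
    return out

forcing = np.concatenate([np.zeros(20), np.linspace(0.0, 3.0, 80)])   # toy ramp
print(np.round(lagged_resized(forcing)[::10], 2))
# The "temperature" is just the forcing, delayed and rescaled -- no other physics.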

Nick Stokes
June 21, 2013 6:18 pm

Willis Eschenbach says: June 21, 2013 at 4:55 pm
Well, first I just have to quote William Briggs reply:

“Willis,
No. You misunderstand because you (and others) are not keeping matters separate.
1. Do ensemble models make statistical sense in theory? Yes. Brown said no and wanted to slap somebody, God knows who, for believing they did and for creating a version of an ensemble forecast. He called such practice “horrendous.” Brown is wrong. What he said was false. As in not right. As is not even close to being right. As is severely, embarrassingly wrong. As in wrong in such a way that not one of his statistical statements could be repaired. As in just plain wrong. In other words, and to be precise, Brown is wrong. He has no idea of which he speaks. The passage you quote from him is wronger than Joe Biden’s hair plugs. It is wronger than Napoleon marching on Moscow. It is wronger than televised wrestling.”

And that issue of keeping things separate is what I’ve been arguing with David etc. RGB made specific stat method criticisms. He did not substantiate them. Then there is a flurry of anyone finding some calculation of a model mean, and then all the stuff about, well, models are wrong anyway. But as WMB rightly says, that’s irrelevant. Stick to the issue.
It’s over a week since RGB made his initial comments about AR5, and we still haven’t found what they apply to. Now you’re suggesting it is Fig 10.27 of the AR4. But that plot is nothing like what RGB was talking about. It does not show a time progression. Instead, they are analysing differences between models, looking for consistency and inconsistency. They look at temp, precip, and slp. They note hemispheric inconsistency in SLP. Is this not something you’d expect them to do?

Paul Vaughan
June 21, 2013 6:59 pm

Diverse recombination (including cross-disciplinary & cross-cultural) facilitates survival.
Some of the alleged climate “unknown unknowns” are now known knowns. They have nothing to do with unknown physics and everything to do with sampling & aggregation.
__
Climate-Stat Paradox 101
_
Lesson 1
“Ephemeral or “mirage” correlations are common in even the simplest nonlinear systems (7, 11–13), such as shown in Fig. 1 […]”
Sugihara+(2012). Detecting causality in complex ecosystems. Science 338, 496-500.
http://www.uvm.edu/~cdanfort/csc-reading-group/sugihara-causality-science-2012.pdf
Consider Fig. 1 (temporal-only context) extensions to the broader spatiotemporal context duly emphasized by Tomas Milanovic — no complete theory yet exists, but meanwhile we need not live in dark ignorance &/or deception …
_
Lesson 2
When we carefully shine light we easily note that applied in concert the laws of conservation of angular momentum & large numbers empirically clarify in tuned aggregate (seeing past the coupling indicated by interannual “mirrored mirage”) simple decadal & multidecadal attractors.
_
Lesson 3
“Nights are days – we beat a path through the mirrored maze
I can see the end…”
Metric – Breathing Underwater (ghostly if you don’t spot the paradox)
Tuned-aggregate views of coupled temperature, mass, & velocity gradients:
http://imageshack.us/a/img202/4641/lodjev.png
http://tallbloke.files.wordpress.com/2013/03/scd_sst_q.png
http://imageshack.us/a/img267/8476/rrankcmz.png (section 3)
http://imageshack.us/a/img16/4559/xzu.png (minimal outline)
“Apart from all other reasons, the parameters of the geoid depend on the distribution of water over the planetary surface.” — Nikolay Sidorenkov
Paradoxically we’re not drowning underwater (persistent cloud & heavy rain) in the Pacific Northwest.
_
Summary
Corrupt government & university modeling “science” is based on patently false assumptions.
___
Far more detail plus dovetailing new details (at both lower & higher timescales) forthcoming in the weeks & months ahead as time/resources permit…

Gary Pearse
June 21, 2013 7:18 pm

milodonharlani says:
June 20, 2013 at 10:55 am
“Agree on nuke plants v. windmills, although both of course produce CO2 from making the cement needed in their concrete”
CO2 is commonly overestimated from cement manufacture, by not taking into consideration its re-absorption by carbonation over the life cycle of the concrete (100 yrs). This includes concrete poured long before the ’50s, which is usually considered the period when AGW began to be significant. Even structures that are demolished and landfilled are still taking in some CO2. Also, let us not forget that cement makes up only 10% of concrete, giving a huge bang for the buck with this remarkable man-made stone. As usual, official exaggeration as counseled by the founding fathers of CAGW (Schneider and the disgraced UEA proponents who hired “communication consultant” slicksters to show them how to lie and exaggerate effectively).
http://www.nrmca.org/sustainability/CONCRETE%20CO2%20FACT%20SHEET%20FEB%202012.pdf
“A significant portion of the CO2 produced during manufacturing of cement is reabsorbed into
concrete during the product life cycle through a process called carbonation. One research study estimates that between 33% and 57% of the CO2 emitted from calcination will be reabsorbed through carbonation of concrete surfaces over a 100-year life cycle. (GP: calcined lime plasters also reabsorb CO2 that was driven off in its manufacture to even a higher percentage.)”

John Archer
June 21, 2013 8:51 pm

Filth!
I’m shocked that Stick Nokes quoted a fartoid Billy Boy Briggs left on his underpants.
Tut tut!
Now look here, Stick, you’re getting all second-hand emotional, so let me help you.
To clear your mind, I have removed the menstrual redundancy from your quotation and left the substance, such as it is:
“1. Do ensemble models make statistical sense in theory? Yes. Brown said no and wanted to slap somebody, God knows who, for believing they did and for creating a version of an ensemble forecast. He called such practice “horrendous.” Brown is wrong. What he said was false. As in not right. As is not even close to being right. As is severely, embarrassingly wrong. As in wrong in such a way that not one of his statistical statements could be repaired. As in just plain wrong. In other words, and to be precise, Brown is wrong. He has no idea of which he speaks. The passage you quote from him is wronger than Joe Biden’s hair plugs. It is wronger than Napoleon marching on Moscow. It is wronger than televised wrestling.”
I just wondered WTF content in that shit you actually believe supports your яgument and YTF in hell you bothered quoting it.
Maybe it’s that time of the month? Is that it?
I think the psychopathy of deviants is fascinating. Stick, you’re a very interesting case. God bless.

John Archer
June 21, 2013 8:59 pm

Anthony,
Any chance you might set up a preview pane for the textually challenged?
REPLY: For the ten thousandth time I’ve answered this question, no. wordpress.com won’t allow that plugin to run as it is a security hazard. – anthony

Paul Vaughan
June 21, 2013 9:18 pm

I won’t disagree with the following excerpts:
___
rgbatduke (June 20, 2013 at 10:04 am) wrote:
“This is not a matter of discussion about whether it is Monckton who is at fault for computing an R-value or p-value from the mish-mosh of climate results and comparing the result to the actual climate — this is, actually, wrong and yes, it is wrong for the same reasons I discuss above, because there is no reason to think that the central limit theorem and by inheritance the error function or other normal-derived estimates of probability will have the slightest relevance to any of the climate models, let alone all of them together.
[…]
I make this point to put the writers of the Summary for Policy Makers for AR5 on notice that if they repeat the egregious error made in AR4 and make any claims whatsoever for the predictive power of the spaghetti snarl of GCM computations, if they use the terms “mean and standard deviation” of an ensemble of GCM predictions, if they attempt to transform those terms into some sort of statement of probability of various future outcomes for the climate based on the collective behavior of the GCMs, there will be hell to pay, because GCM results are not iid samples drawn from a fixed distribution, thereby fail to satisfy the elementary axioms of statistics and render both mean behavior and standard deviation of mean behavior over the “space” of perturbations of model types and input data utterly meaningless as far as having any sort of theory-supported predictive force in the real world.
[…]
In the meantime, as the GCMs continue their extensive divergence from observation, they make it difficult to take their predictions seriously enough to condemn a substantial fraction of the world’s population to a life of continuing poverty on their unsupported basis.
[…]
Stop opposing the burning of carbon for fuel while it is needed to sustain civilization, and recognize that if the world economy crashes, if civilization falls, it will be a disaster that easily rivals the worst of your fears from a warmer climate.”

___
rgbatduke (June 13, 2013 at 7:20 am) wrote:
“[…] it looks like the frayed end of a rope, not like a coherent spread […]
[…]
Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing […] deviation around a true mean!.
Say what?
This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it.
[…]
What I’m trying to say is that the variance and mean of the “ensemble” of models is completely meaningless, statistically because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, there is no reason whatsoever to believe that the errors or differences are unbiased […]
[…]
Why even pay lip service to [r^2 & p-value] ? […]
[…]
You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias.
[…]
And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because […] Everything works just fine as long as you […] are bound to a given attractor, oscillating away, things look predictable and then — damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.
This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or Rsquared derived from an AR5 mean has any meaning. It gives up the high ground […]
[…]
[…] bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.”

old man
June 21, 2013 9:29 pm

It seems to me Dr. Brown is merely conveying what I think it was Mark Twain more succinctly conveyed over 100 years ago. “Education is that which reveals to the wise and conceals from the stupid the vast limits of their knowledge.” Note in that formulation we can be both educated and stupid. Dr. Brown appears to be one of the few wise- and educated -people who doesn’t forget that simple lesson.

John Archer
June 21, 2013 9:38 pm

[The problem with anal retentives is that they can never leave it alone. I always wondered what it was like to be one. Let’s see. JA.]
Disgusting filth!
I’m shocked that Stick Nokes quoted a fartoid Billy Boy Briggs left on his underpants.
Tut tut!
Now look here, Stick, you’re getting all second-hand emotional, so let me help you.
To clear your mind, I have removed the menstrual redundancy from your quotation and left the substance, such as it is:
“1. Do ensemble models make statistical sense in theory? Yes. Brown said no and wanted to slap somebody, God knows who, for believing they did and for creating a version of an ensemble forecast. He called such practice “horrendous.” Brown is wrong. What he said was false. As in not right. As is not even close to being right. As is severely, embarrassingly wrong. As in wrong in such a way that not one of his statistical statements could be repaired. As in just plain wrong. In other words, and to be precise, Brown is wrong. He has no idea of which he speaks. The passage you quote from him is wronger than Joe Biden’s hair plugs. It is wronger than Napoleon marching on Moscow. It is wronger than televised wrestling.”
I just wondered WTF content in that shit you actually believe supports your яgument and YTF in hell you bothered quoting it.
Maybe it’s that time of the month? Is that it?
I think the psychopathy of deviants is fascinating. Stick, you’re a very interesting case. God bless.
[Works for me. JA]

John Archer
June 21, 2013 9:59 pm

“REPLY: For the ten thousandth time I’ve answered this question, no. wordpress.com won’t allow that plugin to run as it is a security hazard.” – anthony
Well no need to get all emotional about it. Besides, I’ve never read ANYTHING ten thousand times.
In any case I wasn’t asking about Plugin — I understand he’s on remand right now and implicated in the Savile case so I’m not surprised wordpress doesn’t want to know. Even so, I don’t see what he’s got to do with any of this.
Any chance of a preview pane?

John Archer
June 21, 2013 10:11 pm

Anthony,
Stick Nokes = duck.
Not me. 🙂

June 22, 2013 3:34 pm

I think people attribute more “intelligence” to models than is real. Try looking at the code and input variables.
Can someone explain to me what function a random number generator has in a predictive physics model?
GISSModelE System.f:
MODULE RANDOM
!@sum RANDOM generates random numbers: 0<RANDom_nUmber<1
The recipe looks to me like this:
1. Take some input data
2. Apply some complex function
3. When the function looks like going out of bounds, check it against a real constraint
4. Throw in some random variation so the constraint doesn't look like clipping
5. Tweak nudging variables until the output looks like we want it.

June 22, 2013 8:29 pm

Here are the places modelE uses randoms:
ATM_DUM.f
c Set Random numbers for Mountain Drag intermittency
CLOUDS2_DRV.f
C Burn some random numbers corresponding to latitudes off
C processor
CLOUDS2_E1.f
!@var RNDSSL stored random number sequences
REAL*8 RNDSSL(3,LM)
CLOUDS2.f
!@var RNDSSL stored random number sequences
REAL*8 RNDSSL(3,LM)
MODELE.f
CALL CALC_AMPK(LM)
C****
!**** IRANDI seed for random perturbation of initial conditions (if/=0):
C**** tropospheric temperatures changed by at most 1 degree C
RAD_DRV.f
C**** Get the random numbers outside openMP parallel regions
C**** but keep MC calculation separate from SS clouds
C**** To get parallel consistency also with mpi, force each process
C**** to generate random numbers for all latitudes (using BURN_RANDOM)
STRATDYN.f
C Set Random numbers for 1/4 Mountain Drag
SYSTEM.f
INTEGER, SAVE :: IX !@var IX random number seed
TES_SIMULATOR.f
C**** Set up 2D array of random numbers over local domain.
C**** This is required for probability-based profile quality filtering.
C**** RINIT and RFINAL are called so that these calls don’t affect
C**** prognostic quantities that use RANDU.
So, if we randomly perturb initial conditions and clouds, what would we expect from a model over time – the frayed rope I presume.
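A toy demonstration of why randomly perturbed initial conditions fray into a rope’s end (the logistic map in its chaotic regime stands in for the model; the perturbation size is arbitrary):

import numpy as np

rng = np.random.default_rng(4)

# 20 runs whose initial states differ by less than one part in a thousand.
r, steps, members = 3.9, 60, 20
x = 0.3 + rng.uniform(-5e-4, 5e-4, members)
spread = []
for _ in range(steps):
    x = r * x * (1.0 - x)                # chaotic logistic map
    spread.append(x.max() - x.min())
print("spread after 5, 20, 60 steps: %.2e  %.2e  %.2e"
      % (spread[4], spread[19], spread[-1]))
# The members stay together briefly, then diverge until they span most of the
# attractor -- a spread driven by chaos, not by differences in "physics".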

gallopingcamel
June 23, 2013 10:56 pm

ThinkingScientist says: June 20, 2013 at 5:02 am,
Those model projections in AR5 WG1 Chapter 11, SOD (Second Order Draft) are obvious garbage, but that won’t stop the IPCC publishing it with great fanfare 12 weeks from now. When each model is garbage, the “ensemble” average of 73 models is still garbage. Maybe I am over-simplifying “rgb”‘s message, but here it is again from the inimitable Warmist, Roy Spencer:
http://www.drroyspencer.com/2013/06/still-epic-fail-73-climate-models-vs-measurements-running-5-year-means/
Roy is only looking ahead to 2020 but the picture is similar in the 2050 scenario shown in AR5 chapter 11. If you want to see total absurdity just take the time to look at the IPCC’s projections for the year 2100. This time you need to look at page 24 (of 26), Figure SPM.5 (b).
http://www.gallopingcamel.info/Docs/WG1/SODs/SummaryForPolicymakers_WG1AR5-SPM_FOD_Final.pdf
Depending on which model you like, the temperature rise (compared to the 1986-2005 average) will be anything from 1.0 to 4.0 Kelvin. That is not “Science”. It is not even “Educated Guesswork”. It belongs with the predictions of the folks who used to “Read” animal entrails.
As each year passes it becomes more and more obvious that the observed temperature is likely to remain below the bottom limit of the IPCC’s predicted band, as it is today.

Lars P.
June 25, 2013 12:41 pm

rgbatduke says:
June 20, 2013 at 10:04 am
rgbatduke, thank you for taking the time and putting up so clear the arguments!

milodonharlani
June 25, 2013 1:03 pm

Gary Pearse says:
June 21, 2013 at 7:18 pm
———————————–
Thanks. I’m all for concrete. CACCAs carp on it, so I just throw it out there. It’s the only way they can attack nuclear power from a climate catastrophe standpoint, yet ignore its effect in blessing windmills. Of course, they don’t like dams either.

Paul Vaughan
June 27, 2013 2:47 am

Naive development:
http://judithcurry.com/2013/06/24/how-should-we-interpret-an-ensemble-of-models-part-i-weather-models/
It’s illuminating that Curry points to this:
http://wmbriggs.com/blog/?p=8394
Briggs’ entire argument rests upon patently untenable assumptions. His theoretical view of climate stats exposes blind subscription to an academic culture that ignores diagnostic insights.
Rather than admit that Brown’s arguments go to a MUCH deeper philosophical level, Briggs bases arguments on hidden layers of patently false assumptions.
This is a trust issue.
Since this particularly egregious instance of dark ignorance &/or deception follows on the heels of a longstanding pattern of dark ignorance &/or deception in Briggs’ climate discussion commentary, I’m writing him off permanently as decisively untrustworthy.
