The “ensemble” of models is completely meaningless, statistically

This comment comes from rgbatduke, who is Robert G. Brown of the Duke University Physics Department, on the No significant warming for 17 years 4 months thread. It has gained quite a bit of attention because it speaks clearly to the truth. So that all readers can benefit, I’m elevating it to a full post

rgbatduke says:

June 13, 2013 at 7:20 am

Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!

This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.

Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!

Say what?

This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again. One cannot generate an ensemble of independent and identically distributed models that have different code. One might, possibly, generate a single model that generates an ensemble of predictions by using uniform deviates (random numbers) to seed “noise” (representing uncertainty) in the inputs.

What I’m trying to say is that the variance and mean of the “ensemble” of models is completely meaningless, statistically, because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, and there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate unbiased anything is through the use of e.g. dice or other objectively random instruments).
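
To see why in miniature, here is a toy numerical sketch (every number is invented for illustration; nothing here is actual GCM output). If the “members” of an ensemble share a common systematic bias, the spread between members tells you nothing about the error of their mean:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 0.0        # the (unknown) true value being "projected"
bias = 1.5         # a systematic bias shared by every model
n_models = 30

# Each "model" = shared bias + a small idiosyncratic wiggle.
models = truth + bias + 0.3 * rng.standard_normal(n_models)

mean, std = models.mean(), models.std(ddof=1)
print(f"ensemble mean = {mean:.2f} +/- {std:.2f}")
print(f"actual error of the mean = {mean - truth:.2f}")
```

The quoted ±0.3 “uncertainty” is pure inter-member spread; the actual error of the mean is the full 1.5 bias, which no amount of averaging over correlated, commonly biased members can remove.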

So why buy into this nonsense by doing linear fits to a function — global temperature — that has never in its entire history been linear, although of course it has always been approximately smooth so one can always do a Taylor series expansion in some sufficiently small interval and get a linear term that — by the nature of Taylor series fits to nonlinear functions — is guaranteed to fail if extrapolated as higher order nonlinear terms kick in and ultimately dominate? Why even pay lip service to the notion that R^2 or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning? It has none.
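
For what it’s worth, the Taylor-series point is trivially demonstrable with a toy function (an arbitrary smooth nonlinear curve, standing in for no particular dataset):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 200)
y = np.sin(t) + 0.05 * t**2      # smooth, but decidedly nonlinear

# Fit a straight line to a short early window only.
window = t < 2.0
slope, intercept = np.polyfit(t[window], y[window], 1)
line = slope * t + intercept

print("max error inside the fit window :",
      np.max(np.abs(line[window] - y[window])).round(3))
print("max error when extrapolated     :",
      np.max(np.abs(line - y)).round(3))
# Inside the window the linear (first-order Taylor) term fits well; outside
# it the neglected higher-order terms kick in and the "trend" fails.
```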

Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.

Let’s invert this process and actually apply statistical analysis to the distribution of model results, with respect to the claim that they all correctly implement well-known physics. For example, if I attempt to do an a priori computation of the quantum structure of, say, a carbon atom, I might begin by solving a single electron model, treating the electron-electron interaction using the probability distribution from the single electron model to generate a spherically symmetric “density” of electrons around the nucleus, and then performing a self-consistent field theory iteration (re-solving the single electron model for the new potential) until it converges. (This is known as the Hartree approximation.)

Somebody else could say “Wait, this ignores the Pauli exclusion principle” and the requirement that the electron wavefunction be fully antisymmetric. One could then make the (still single electron) model more complicated and construct a Slater determinant to use as a fully antisymmetric representation of the electron wavefunctions, generate the density, and perform the self-consistent field computation to convergence. (This is Hartree-Fock.)

A third party could then note that this still underestimates what is called the “correlation energy” of the system, because treating the electron cloud as a continuous distribution through which electrons move ignores the fact that individual electrons strongly repel and hence do not like to get near one another. Both of the former approaches underestimate the size of the electron hole, and hence they make the atom “too small” and “too tightly bound”. A variety of schemes have been proposed to overcome this problem — using a semi-empirical local density functional being probably the most successful.

A fourth party might then observe that the Universe is really relativistic, and that by ignoring relativity theory and doing a non-relativistic computation we introduce an error into all of the above (although it might be included in the semi-empirical LDF approach heuristically).

In the end, one might well have an “ensemble” of models, all of which are based on physics. In fact, the differences are also based on physics — the physics omitted from one try to another, or the means used to approximate and try to include physics we cannot include in a first-principles computation (note how I sneaked a semi-empirical note in with the LDF; although one can derive some density functionals from first principles (e.g. Thomas-Fermi approximation), they usually don’t do particularly well because they aren’t valid across the full range of densities observed in actual atoms). Note well, doing the precise computation is not an option. We cannot solve the many body atomic state problem in quantum theory exactly any more than we can solve the many body problem exactly in classical theory or the set of open, nonlinear, coupled, damped, driven chaotic Navier-Stokes equations in a non-inertial reference frame that represent the climate system.
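
For readers who want the flavor of the self-consistent-field loop described above, here is a deliberately cartoonish sketch — a scalar fixed-point iteration with damping, not an actual atomic-structure code; the “self-consistency condition” is invented purely for illustration:

```python
import math

def scf_iterate(update, x0, mixing=0.5, tol=1e-10, max_iter=200):
    """Damped fixed-point iteration: x <- (1 - a) * x + a * update(x)."""
    x = x0
    for i in range(1, max_iter + 1):
        x_new = (1.0 - mixing) * x + mixing * update(x)
        if abs(x_new - x) < tol:
            return x_new, i
        x = x_new
    raise RuntimeError("SCF loop did not converge")

# Invented self-consistency condition: a "density" n must satisfy
# n = exp(-n) / 2 (the potential built from n must reproduce n).
n, iters = scf_iterate(lambda n: math.exp(-n) / 2.0, x0=1.0)
print(f"converged: n = {n:.6f} after {iters} iterations")
```

The real thing re-solves a differential equation at every pass rather than a scalar equation, but the guess-update-converge structure is the same.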

Note well that solving for the exact, fully correlated nonlinear many electron wavefunction of the humble carbon atom — or the far more complex Uranium atom — is trivially simple (in computational terms) compared to the climate problem. We can’t compute either one, but we can come a damn sight closer to consistently approximating the solution to the former compared to the latter.

So, should we take the mean of the ensemble of “physics based” models for the quantum electronic structure of atomic carbon and treat it as the best prediction of carbon’s quantum structure? Only if we are very stupid or insane or want to sell something. If you read what I said carefully (and you may not have — eyes tend to glaze over when one reviews a year or so of graduate quantum theory applied to electronic structure in a few paragraphs, even though I left out perturbation theory, Feynman diagrams, and ever so much more:-) you will note that I cheated — I sneaked in a semi-empirical method.

Which of these is going to be the winner? LDF, of course. Why? Because the parameters are adjusted to give the best fit to the actual empirical spectrum of Carbon. All of the others are going to underestimate the correlation hole, and their errors will be systematically deviant from the correct spectrum. Their mean will be systematically deviant, and by weighting Hartree (the dumbest reasonable “physics based approach”) the same as LDF in the “ensemble” average, you guarantee that the error in this “mean” will be significant.

Suppose one did not know (as, at one time, we did not know) which of the models gave the best result. Suppose that nobody had actually measured the spectrum of Carbon, so its empirical quantum structure was unknown. Would the ensemble mean be reasonable then? Of course not. I presented the models in the way physics itself predicts improvement — adding back details that ought to be important that are omitted in Hartree. One cannot be certain that adding back these details will actually improve things, by the way, because it is always possible that the corrections are not monotonic (and eventually, at higher orders in perturbation theory, they most certainly are not!). Still, nobody would pretend that the average of a theory with an improved theory is “likely” to be better than the improved theory itself, because that would make no sense. Nor would anyone claim that diagrammatic perturbation theory results (for which there is a clear a priori derived justification) are necessarily going to beat semi-heuristic methods like LDF, because in fact they often do not.

What one would do in the real world is measure the spectrum of Carbon, compare it to the predictions of the models, and then hand out the ribbons to the winners! Not the other way around. And since none of the winners is going to be exact — indeed, for decades and decades of work, none of the winners was even particularly close to observed/measured spectra in spite of using supercomputers (admittedly, supercomputers that were slower than your cell phone is today) to do the computations — one would then return to the drawing board and code entry console to try to do better.

Can we apply this sort of thoughtful reasoning to the spaghetti snarl of GCMs and their highly divergent results? You bet we can! First of all, we could stop pretending that “ensemble” mean and variance have any meaning whatsoever by not computing them. Why compute a number that has no meaning? Second, we could take the actual climate record from some “epoch starting point” — one that does not matter in the long run, and we’ll have to continue the comparison for the long run because in any short run from any starting point noise of a variety of sorts will obscure systematic errors — and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decision-making whatsoever unless or until they start to actually agree with reality.
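
A minimal sketch of such a scoring-and-binning pass (with made-up placeholder trajectories, not actual CMIP output) might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(30)

# Placeholder "observed record": a modest trend plus noise.
observed = 0.01 * years + 0.1 * rng.standard_normal(years.size)

# Twenty placeholder "models", each with its own (often too-steep) trend.
trends = rng.uniform(0.005, 0.05, size=20)
models = trends[:, None] * years + 0.1 * rng.standard_normal((20, years.size))

# Score every model against reality and bin all but the top five.
rmse = np.sqrt(np.mean((models - observed) ** 2, axis=1))
ranking = np.argsort(rmse)
kept, failed = ranking[:5], ranking[5:]
print("kept models :", kept, " RMSE:", np.round(rmse[kept], 3))
print("failed bin  :", failed.size, "models, mean RMSE:",
      round(float(rmse[failed].mean()), 3))
```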

Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.

Then comes the hard part. Waiting. The climate is not as simple as a Carbon atom. The latter’s spectrum never changes; it is a fixed target. The former is never the same. Either one’s dynamical model is never the same and mirrors the variation of reality, or one has to conclude that the problem is unsolved and the implementation of the physics is wrong, however “well-known” that physics is. So one has to wait and see if one’s model, adjusted and improved to better fit the past up to the present, actually has any predictive value.

Worst of all, one cannot easily use statistics to determine when or if one’s predictions are failing, because damn, climate is nonlinear, non-Markovian, chaotic, and is apparently influenced in nontrivial ways by a world-sized bucket of competing, occasionally cancelling, poorly understood factors. Soot. Aerosols. GHGs. Clouds. Ice. Decadal oscillations. Defects spun off from the chaotic process that cause global, persistent changes in atmospheric circulation on a local basis (e.g. blocking highs that sit out on the Atlantic for half a year) that have a huge impact on annual or monthly temperatures and rainfall and so on. Orbital factors. Solar factors. Changes in the composition of the troposphere, the stratosphere, the thermosphere. Volcanoes. Land use changes. Algae blooms.

And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because trying to ensemble average a small sample from a chaotic system is so stupid that I cannot begin to describe it. Everything works just fine as long as you average over an interval short enough that you are bound to a given attractor, oscillating away; things look predictable and then — damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.
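
The classic toy illustration of this is the Lorenz-63 system — offered here only as a stand-in for circulation regimes, not as climate dynamics. Short-window averages look stable while the trajectory is bound to one wing of the attractor, then change when it flips:

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

state = np.array([1.0, 1.0, 1.0])
xs = np.empty(40000)
for i in range(xs.size):
    state = lorenz_step(state)
    xs[i] = state[0]

# Short-window means wander with the current regime; they are not samples
# of one fixed "mean climate" plus well-behaved random noise.
for start in range(0, xs.size, 8000):
    print(f"mean x over steps {start:5d}-{start + 8000:5d}: "
          f"{xs[start:start + 8000].mean():+.2f}")
```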

This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or R-squared derived from an AR5 mean has any meaning. It gives up the high ground (even though one is using it for a good purpose, trying to argue that this “ensemble” fails elementary statistical tests). But statistical testing is a shaky enough theory as it is, open to data dredging and horrendous error alike, and that’s when it really is governed by underlying IID processes (see “Green Jelly Beans Cause Acne”). One cannot naively apply a criterion like rejection if p < 0.05, and all that means under the best of circumstances is that the current observations are improbable given the null hypothesis at 19 to 1. People win and lose bets at this level all the time. One time in 20, in fact. We make a lot of bets!
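
The 19-to-1 point is easy to check by simulation — run many tests of a true null hypothesis and count the “significant” results (a minimal sketch, assuming SciPy is available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_tests, n_samples, false_alarms = 1000, 50, 0

for _ in range(n_tests):
    a = rng.standard_normal(n_samples)  # both samples drawn from the SAME
    b = rng.standard_normal(n_samples)  # distribution: the null is true
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_alarms += 1

print(f"{false_alarms}/{n_tests} tests came out 'significant' at p < 0.05")
# Expect roughly 50 of 1000: one "win" in twenty with nothing going on at all.
```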

So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth, and why we aren’t using empirical evidence (as it accumulates) to reject failing models and concentrate on the ones that come closest to working, while also not using the models that are obviously not working in any sort of “average” claim for future warming. Maybe they could hire themselves a Bayesian or two and get them to recompute the AR curves, I dunno.

It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.

Of course if one does this, the GCM predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.

rgb

DonV
June 18, 2013 11:14 pm

I’d like to add 2 more cents.
Average temperature is a meaningless measure when you are trying to determine an energy budget for the whole freakin’ WORLD! Average WHAT temperature? High temp? Low temp? The integral of the temperature over a given day? At what elevation? Ground level, 1000 ft above ground, 10,000 ft above ground?
We sit in the middle of an ocean of air that is, at some times during a given year, filled with varying concentrations of all of the different phases of water . . . vapor, liquid and ice . . . and that water contains almost ALL of the energy, and yet along with temp we aren’t using the measure of its relative concentration and therefore the TRUE ENERGY CONTENT at any given temperature and location.
I assert that the world’s climate is a self-correcting system. I assert that on any given day, over any given week, month, or year, the world takes in energy, and the water on this planet ACTIVELY changes phase and MOVES vertically and horizontally to return that energy back to space so that life can safely exist between water’s two extremes – 0 and 100 degrees. We just happen to live in the layer of the ocean of air/water that maintains the 0 to ~40 degree layer. AGW fanatics think that that layer is the most important and must NOT deviate by 1 degree or all manner of nasty things will happen (when in fact on any given day it varies by more than 20 degrees! Silly AGW scientologists!)
Average temperature is meaningless when trying to determine energy flows. I assert that at the very least one needs to measure the integral of temp and water content over time!

Venter
June 18, 2013 11:17 pm

That’s par for course with Nick Stokes, to lie every single time and go to any extent to support the climate science liars. Not a single word of what he says should be trusted.

Nick Stokes
June 18, 2013 11:35 pm

Monckton of Brenchley says: June 18, 2013 at 10:57 pm
“Liar Stokes, in the fashion of a paid troll, continues to recite his long-debunked lie that it was I who was responsible for assembling the graph of an ensemble of spaghetti-graph outputs that was in fact assembled by the IPCC at Fig. 11.33a of its Fifth Assessment Report.”

Well, I’m actually just asking a pretty fundamental question – what is RGB talking about? He’s made some specific charges. Someone has been “treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean” etc. But who and where?
The quote here is explicit. Para 2:
“This is reflected in the graphs Monckton publishes above”
Para 3:
“Note the implicit swindle in this graph… “
Next substantive para:
“This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph…”
He’s referring to the graphs in that thread, and they aren’t spaghetti graphs of individual results like Fig 11.33a, which has no statistics. Now, as I said in my earlier post, I can’t see that RGB’s criticism properly applies to Lord Monckton’s graph, although that is what it says. So then, what does it apply to?

Peter Miller
June 18, 2013 11:49 pm

At the end of the day, those who peddle their highly flawed climate models – no matter how beautiful they might think they are – are no different from those who peddled the Y2K scare back at the end of the last century.
The reason they did it was for financial self-interest, despite all their claims for supposedly trying to save civilisation as we know it.

June 19, 2013 12:18 am

Nick Stokes makes a pertinent point when he asks whether AR5 actually averages different models or just averages different runs of the same model.
If the modelled physics is the same then averaging the runs seems sound.
But, I think we can agree, if the models are different you can’t get a meaningful result from mixing them up.
So does AR5 use multi-model averages? Following Alec Rawls’ links I found this:

The results of multiple regression analyses of observed temperature changes onto the simulated responses to greenhouse gas, other anthropogenic, and natural forcings, exploring modelling and observational uncertainty and sensitivity to choice of analysis period are shown in Figure 10.4 (Gillett et al., 2012b; Jones and Stott, 2011; Jones et al., 2012). The results, based on HadCRUT4 and a multi-model average, show robustly detected responses to greenhouse gas in the observational record whether data from 1851–2010 or only from 1951–2010 are analysed (Figure 10.4a,c).

Emphasis mine.
Now, I can’t see the graph and, being a bear of very little brain, may not be able to understand it anyway, but it does look to me like AR5 has made the blunder that this post is about.

Gail Combs
June 19, 2013 12:26 am

Much thanks to Dr Brown for this comment.
……
William McClenney says: June 18, 2013 at 8:18 pm
…..When was it, exactly, that we, H. sapiens sapiens, the wise, wise one, abandoned reason?
>>>>>>>>>>>>>>>>>
When Human Greed was sold as “Save the Children/Environment” and Academia and the Media decided to jump on board the gravy train too.
….
Thanks Anthony for promoting this comment to a post. Makes it easier to bookmark.

Nick Stokes
June 19, 2013 12:29 am

M Courtney says: June 19, 2013 at 12:18 am
“AR5 has made the blunder that this post is about”.

I think it is likely that for various purposes AR5 has calculated model averages. AR4 and AR3 certainly did, and no-one said it was a blunder. People average all sorts of things. Average income, average life expectancy. But this post says much more than that. Let me quote again:
“by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!”
RGB says this all happened in Lord M’s graph. I don’t think it is a fair description of that graph. So where?

brad
June 19, 2013 12:50 am

But isn’t the ensemble model average exactly what hurricane folks use and it works great? They use the average of a bunch of noisy models and it does a great job of predicting hurricane movement.
Not that I agree with the models, but…

June 19, 2013 12:57 am

As further thought to RGB’s excellent comment, I wonder how much the physics of the models has actually changed since, say, TAR -> AR4 -> AR5? My suspicion is not much. The models have way more cells and run on bigger computers, but I think the physics they use is pretty much the same. Why mention this point? Because I think the IPCC and the modellers want to claim, when their predictions from say TAR or AR4 are shown to diverge from reality after just 5 to 10 years, that “ah – but the models are much better now. Now you can trust the results. Look – when we hindcast now we get a really good fit.”
RGB nails this perfectly with his comment about attractors. I am a geophysicist working in earth science models (large and stochastic) and I was not familiar with this kind of description, but he makes the point so clearly and so well. As new real-world data becomes available (i.e. time passes since the last work), the modellers fine-tune the parameters so that the fit of the hindcast looks great. But all they are doing is fine-tuning the parameters to a particular local attractor, like fitting the elephant. Once the real-world system state changes to another attractor, the models have no predictive power at all. And this is as plain to see as the nose on your face – none of the models, e.g. TAR, predicted anything other than inexorably increasing temperature with time, and they were all wrong. The IPCC almost admits this to be true: they talk not about predictions but about “scenarios”. The falsehood is then allowing users of those reports – activists, politicians, other, less critical scientists – to behave as though those “scenarios” are predictions. They may be one set of (very biased/groupthink) possibilities, but their probability of occurrence must be vanishingly small.
How anyone can believe that the climate modellers have solved a set of non-linear equations, involving Navier-Stokes, with poorly defined initial conditions, poorly defined boundary conditions, using incomplete observations of the real world problem, unknown/incomplete physics in a chaotic and non-linear system that includes coupled ocean-atmosphere exchanges, radiative physics, convection, diffusion, the biosphere, phase-state changes and external factors such as the sun, cosmic rays and so forth is beyond me. And then they claim that this hugely complex, non-linear system is catastrophically sensitive to just one parameter – CO2 concentration of the atmosphere, measured in parts per million. It is absurd. And to then elevate the results to “settled science” and start basing public policy on it is the most irresponsible thing I have ever seen in my lifetime. Time for the politicians and modellers to wake up and smell the coffee before, as RGB puts it, we “bring out pitchforks and torches as people realise just how badly they’ve been used by a small group of scientists and politicians”.

Michel
June 19, 2013 1:03 am

This can be summarized as a simple observation: if two different models based on the same related parameters produce statistically insignificant results, averaging the two will not help.
And all of this is being done for temperature evolution, as if a particular climate were defined by temperatures alone.
Climates (in the plural) are the long-term combinations of seasonal temperatures and rainfalls that very slowly drive living conditions for flora and fauna – homo sapiens included – in different regions of the globe.
Temperature predictions are notoriously wrong. Where are the rainfall predictions, without which no climate discussion can be had? And the biomass response to temperature, rainfall and soil composition? Can they ever be ascertained? Probably never, even with the most powerful and sophisticated computers.
So we remain with conjectures about living conditions on Earth that may improve, or worsen, or not change, depending on one’s particular state of mind.

Greg Goodman
June 19, 2013 1:09 am

Lengthy but very well argued. The key line, buried somewhere in the middle, is this:
” Only if we are very stupid or insane or want to sell something.”

TFN Johnson
June 19, 2013 1:18 am

What is a “bitch-slap”, please? Is it a PC term?

Stephen Richards
June 19, 2013 1:26 am

Robert, thanks — the best summary of early quantum physics I have ever read. I don’t believe, or at least I hope not, that any self-respecting sceptic ever thought that the models were valid, and certainly not the stupidity of the ensemble median, but I/we have never been able to get inside the models to look, which would have helped. The defense of the models by the modelers and their users (Betts comes to mind) has been unequivocal over the years and remains so. This great piece will be read by these idiots but sadly dismissed with utter contempt, BUT don’t stop your probing and ‘eloquent’ responses — I enjoy them immensely.

Stephen Richards
June 19, 2013 1:30 am

Nick Stokes says:
June 19, 2013 at 12:29 am
M Courtney says: June 19, 2013 at 12:18 am
“AR5 has made the blunder that this post is about
STRAWMAN ALERT!! Nick, your intellect is worthy of a better response, so do it.

Stephen Richards
June 19, 2013 1:34 am

Janice Moore says:
June 18, 2013 at 8:18 pm
Your contributions since joining this blog have been brilliant, thanks.

June 19, 2013 1:35 am

http://rankexploits.com/musings/wp-content/uploads/2012/12/Changed_Baseline.jpg
If they used temperature (upper graph) instead of anomalies, they would have spaghetti all over the place.

Gail Combs
June 19, 2013 1:39 am

Tilo Reber says: June 18, 2013 at 9:36 pm
…. I think that the bottom line, at this point, is that we have no business producing climate models at all because we cannot do the complexity of physics required to make such modeling meaningful.
>>>>>>>>>>>>>>>>>>>
Heck, Climastrology has not even taken the very first step of trying to HONESTLY determine what all the parameters are that affect climate, because it has always been about politics.

….Water is an extremely important and also complicated greenhouse gas. Without the role of water vapor as a greenhouse gas, the earth would be uninhabitable. Water is not a driver or forcing in anthropogenic warming, however. Rather it is a feedback, and a rather complicated one at that. The amount of water vapor in the air changes in response to forcings (such as the solar cycle or warming owing to anthropogenic emission of carbon dioxide).
We are at our best when we follow evidence rather than lead it. My name is Rich. I am a physical chemist interested in public discourse and teaching moments for evidence-based thinking.

Water, probably THE most important on-planet climate parameter, gets demoted to a CO2 feedback and isn’t even listed by the IPCC as a forcing. That point alone makes ALL the models nothing but GIGO. SEE IPCC Components of Radiative Forcings Chart
It was a con from the start and the IPCC even said it was.
The IPCC mandate states:

The Intergovernmental Panel on Climate Change (IPCC) was established by the United Nations Environmental Programme (UNEP) and the World Meteorological Organization (WMO) in 1988 to assess the scientific, technical and socio-economic information relevant for the understanding of human induced climate change, its potential impacts and options for mitigation and adaptation.
http://www.ipcc-wg2.gov/

Western Civilization was tried and found guilty BEFORE the IPCC ever looked at a scientific fact. The IPCC mandate is not to figure out what factors affect the climate but to dig up the facts needed to hang the human race. The IPCC assumes the role of prosecution and the skeptics that of the defense, but the judge (aka the media) refuses to allow the defense counsel into the court room.
Academia is providing the manufactured evidence to ‘frame’ the human race and they are KNOWINGLY doing so. In other words, Academics who pride themselves on being ‘lofty socialists’ untainted by plebeian capitalism are KNOWINGLY selling the rest of the human race into the slavery designed by the bankers and corporate elite. (Agenda 21)
“Can we balance the need for a sustainable planet with the need to provide billions with decent living standards? Can we do that without questioning radically the Western way of life? These may be complex questions, but they demand answers.” ~ Pascal Lamy Director-General of the World Trade Organization
“We need to get some broad based support, to capture the public’s imagination…
So we have to offer up scary scenarios, make simplified, dramatic statements and make little mention of any doubts… Each of us has to decide what the right balance is between being effective and being honest.”
~ Prof. Stephen Schneider, Stanford Professor of Climatology, lead author of many IPCC reports
“The data doesn’t matter. We’re not basing our recommendations on the data. We’re basing them on the climate models.” ~ Prof. Chris Folland, Hadley Centre for Climate Prediction and Research
“The models are convenient fictions that provide something very useful.” ~ Dr David Frame, climate modeler, Oxford University
“The only way to get our society to truly change is to frighten people with the possibility of a catastrophe.” ~ Daniel Botkin emeritus professor Department of Ecology, Evolution, and Marine Biology, University of California, Santa Barbara.
The Bankers, CEOs, Academics, and Politicians know exactly what they are doing, and that is the complete gutting of western civilization for profit. The lament “it is for our future children” has to be the vilest lie they have ever told, since their actions sell those same children into slavery.

World Bank Carbon Finance Report for 2007
The carbon economy is the fastest growing industry globally with US$84 billion of carbon trading conducted in 2007, doubling to $116 billion in 2008, and expected to reach over $200 billion by 2012 and over $2,000 billion by 2020

The results of the CAGW hoax:

International Monetary Fund
World Economy: Convergence, Interdependence, and Divergence
Finance & Development, September 2012, Vol. 49, No. 3
by Kemal Derviş
….Within many countries the dramatic divergence between the top 1 percent and the rest is a new reality. The increased share of the top 1 percent is clear in the United States and in some English-speaking countries and, to a lesser degree, in China and India….
This new divergence in income distribution may not always imply greater national inequality in all parts of a national distribution. It does, however, represent a concentration of income and, through income, of potential political influence at the very top, which may spur ever greater concentration of income. The factors—technological, fiscal, financial, and political—that led to this dynamic are still at work. …And the euro area crisis and its accompanying austerity policies will likely lead to further inequality in Europe as budget constraints curtail social expenditures while the mobility of capital and the highly skilled make it difficult to effectively increase taxes on the wealthiest.
New convergence
The world economy entered a new age of convergence around 1990, when average per capita incomes in emerging market and developing economies taken as a whole began to grow much faster than in advanced economies…. For the past two decades, however, per capita income in emerging and developing economies taken as a whole has grown almost three times as fast as in advanced economies, despite the 1997–98 Asian crisis…
…A third significant cause of convergence is the higher proportion of income invested by emerging and developing countries—27.0 percent of GDP over the past decade compared with 20.5 percent in advanced economies. Not only does investment increase the productivity of labor by giving it more capital to work with, it can also increase total factor productivity—the joint productivity of capital and labor—by incorporating new knowledge and production techniques and facilitate transition from low-productivity sectors such as agriculture to high-productivity sectors such as manufacturing, which accelerates catch-up growth. This third factor, higher investment rates, is particularly relevant in Asia—most noticeably, but not only, in China. Asian trend growth rates increased earlier and to a greater extent than those of other emerging economies….
The economy of China will no doubt become the largest in the world, and the economies of Brazil and India will be much larger than those of the United Kingdom or France.
The rather stark division of the world into “advanced” and “poor” economies that began with the industrial revolution will end,….

The Uncomfortable Truth About American Wages
the real earnings of the median male have actually declined by 19 percent since 1970. This means that the median man in 2010 earned as much as the median man did in 1964 — nearly a half century ago. Men with less education face an even bleaker picture; earnings for the median man with a high school diploma and no further schooling fell by 41 percent from 1970 to 2010….

Workers in the USA, EU, Australia and Canada are now competing (on par, thanks to the WTO) with Asian workers while our tax dollars are used to fund their brand spanking new World Bank Funded COAL PLANTS. The Guardian states that the World Resources Institute identifies 1,200 coal plants in planning across 59 countries, with about three-quarters in China and India. Also thanks to the WTO and Clinton, US technical secrets including military secrets have been given to China as part of the WTO Technology Transfer Agreement.

Gail Combs
June 19, 2013 1:44 am

Gary Hladik says:
June 18, 2013 at 10:08 pm
Anyone reading this thread who hasn’t read all of RGB’s comments in the original thread should do so now….
>>>>>>>>>>>>>>>>>>>>>>>
Agreed. I copied and saved them in a LibreOffice file so I could reread them as a group.

Gail Combs
June 19, 2013 2:01 am

TFN Johnson says:
June 19, 2013 at 1:18 am
What is a “bitch-slap” please. Is it a PC term?
>>>>>>>>>>>>>>>>>
No…. but thanks for making me look it up. (Good for a laugh.)

http://www.urbandictionary.com/define.php?term=bitch-slap
The kind of slap a pimp gives to his whores to keep them in line or punish them. However, it is most commonly used to describe an insulting slap from one man to another, as if the slapper is treating the slappee as his bitch.

rogerknights
June 19, 2013 2:07 am

Gary Hladik says:
June 18, 2013 at 10:08 pm
Anyone reading this thread who hasn’t read all of RGB’s comments in the original thread should do so now. They add much more to the discussion.

A moderator or rgb should copy them over.

June 19, 2013 2:17 am

In private enterprise, all the government-employed turkeys who produced the various climate models would have been given their marching orders by now. They would have had to produce predictions that turned out right or they would have been given the sack!

Jolan
June 19, 2013 2:33 am

Where is Mosher? Strangely silent don’t you think?

AndyG55
June 19, 2013 3:06 am

Moshpit may as well be silent; he rarely says anything of any import anyway!!
Remember, he’s a journalist with zero scientific education!
So long as you interpret his posts as such, you realise how little they say.

Nick Stokes
June 19, 2013 3:13 am

Stephen Richards says: June 19, 2013 at 1:30 am
“Nick Stokes says: June 19, 2013 at 12:29 am
M Courtney says: June 19, 2013 at 12:18 am
“AR5 has made the blunder that this post is about
STRAWMAN ALERT!! Nick, your intellect is worthy of a better response, so do it.”

I’m happy with the response. But if you think just calculating a model mean is a blunder, then let’s look at the graph from Dr Spencer, referenced by RGB and featured at WUWT just two weeks ago. A spaghetti plot of 73 model results from CMIP5, prepared by Dr Spencer, with no claim AFAIK that AR5 was involved.
And what’s that big black line down the middle? A multi-model mean!

SandyInLimousin
June 19, 2013 3:24 am

Nick Stokes
“I think it is likely that”
When you know for sure, come back, and I’m sure worthier people than I will discuss it with you.