This comment is from rgbatduke, who is Robert G. Brown of the Duke University Physics Department, posted on the No significant warming for 17 years 4 months thread. It has gained quite a bit of attention because it speaks clearly to the truth. So that all readers can benefit, I’m elevating it to a full post.
Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!
This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.
Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!
Say what?
This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again. One cannot generate an ensemble of independent and identically distributed models that have different code. One might, possibly, generate a single model that generates an ensemble of predictions by using uniform deviates (random numbers) to seed “noise” (representing uncertainty) in the inputs.
What I’m trying to say is that the variance and mean of the “ensemble” of models are completely meaningless, statistically, because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, and there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate unbiased anything is through the use of e.g. dice or other objectively random instruments).
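For readers who want the objectionable arithmetic spelled out, here is a minimal sketch (in Python, with invented trend numbers rather than actual CMIP output) of the calculation being criticized; the final line is only defensible if the model trends were independent, unbiased draws from a common distribution, which they are not.

```python
# A minimal sketch (hypothetical numbers, not real CMIP output) of the computation
# being criticized: treating a set of model trends as if they were IID draws.
import numpy as np

# Suppose each entry is the warming trend (C/decade) projected by a different GCM.
model_trends = np.array([0.12, 0.31, 0.18, 0.42, 0.27, 0.09, 0.35, 0.22])

ensemble_mean = model_trends.mean()
ensemble_std = model_trends.std(ddof=1)
# The "swindle": quoting mean +/- sigma/sqrt(N) as if N independent, unbiased
# measurements of one true trend had been made.
naive_standard_error = ensemble_std / np.sqrt(len(model_trends))

print(f"ensemble mean    = {ensemble_mean:.3f} C/decade")
print(f"ensemble sigma   = {ensemble_std:.3f} C/decade")
print(f"'standard error' = {naive_standard_error:.3f}  # meaningless unless the models are IID samples")
```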
So why buy into this nonsense by doing linear fits to a function — global temperature — that has never in its entire history been linear, although of course it has always been approximately smooth, so one can always do a Taylor series expansion in some sufficiently small interval and get a linear term that — by the nature of Taylor series fits to nonlinear functions — is guaranteed to fail if extrapolated as higher order nonlinear terms kick in and ultimately dominate? Why even pay lip service to the notion that R² or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning? It has none.
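To illustrate how easily such numbers can be produced regardless of their meaning, here is a hedged sketch using purely synthetic series standing in for the observed record and an ensemble-mean projection; the fit and test statistics always come out, but nothing licenses an inference from them.

```python
# Illustrative only: synthetic series standing in for an observed record and an
# ensemble-mean projection. The statistics below can always be computed; the
# point in the text is that they carry no inferential weight here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1980, 2014)
observed = 0.01 * (years - 1980) + 0.1 * rng.standard_normal(years.size)  # toy "record"
projected = 0.03 * (years - 1980)                                         # toy "model mean"

fit = stats.linregress(years, observed)
ks = stats.ks_2samp(observed, projected)

print(f"linear fit: slope={fit.slope:.4f}, R^2={fit.rvalue**2:.3f}, p={fit.pvalue:.3g}")
print(f"KS test:    D={ks.statistic:.3f}, p={ks.pvalue:.3g}")
# Neither p-value licenses a conclusion about "the models", because the ensemble
# mean is not a sample statistic of any defined population.
```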
Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.
Let’s invert this process and actually apply statistical analysis to the distribution of model results Re: the claim that they all correctly implement well-known physics. For example, if I attempt to do an a priori computation of the quantum structure of, say, a carbon atom, I might begin by solving a single electron model, treating the electron-electron interaction using the probability distribution from the single electron model to generate a spherically symmetric “density” of electrons around the nucleus, and then performing a self-consistent field theory iteration (resolving the single electron model for the new potential) until it converges. (This is known as the Hartree approximation.)
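For readers unfamiliar with the procedure, the following is a schematic, one-dimensional toy of the self-consistent field loop just described; the grid, the harmonic “nuclear” potential, and the local mean-field repulsion are all invented for illustration and bear no resemblance to a real atomic Hartree code.

```python
# Schematic 1-D self-consistent field loop, in the spirit of the Hartree iteration
# described above. Everything here (grid, potentials, coupling) is a toy choice,
# not a real atomic calculation.
import numpy as np

n_grid, n_electrons, coupling = 200, 2, 1.0
x = np.linspace(-10.0, 10.0, n_grid)
dx = x[1] - x[0]

# Kinetic energy by finite differences, plus a fixed "nuclear" potential.
lap = (np.diag(np.full(n_grid - 1, 1.0), -1) - 2.0 * np.eye(n_grid)
       + np.diag(np.full(n_grid - 1, 1.0), 1)) / dx**2
v_ext = 0.5 * x**2  # harmonic well standing in for the nuclear attraction

density = np.full(n_grid, n_electrons / (n_grid * dx))  # initial guess
for iteration in range(200):
    v_hartree = coupling * density          # crude local mean-field repulsion
    h = -0.5 * lap + np.diag(v_ext + v_hartree)
    energies, orbitals = np.linalg.eigh(h)
    # Occupy the lowest orbitals and rebuild the density from them.
    occupied = orbitals[:, :n_electrons]
    new_density = (occupied**2).sum(axis=1) / dx
    if np.max(np.abs(new_density - density)) < 1e-8:
        break
    density = 0.5 * density + 0.5 * new_density  # simple mixing for stability

print(f"stopped after {iteration + 1} iterations; lowest level = {energies[0]:.4f}")
```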
Somebody else could say, “Wait, this ignores the Pauli exclusion principle and the requirement that the electron wavefunction be fully antisymmetric.” One could then make the (still single electron) model more complicated and construct a Slater determinant to use as a fully antisymmetric representation of the electron wavefunctions, generate the density, and perform the self-consistent field computation to convergence. (This is Hartree-Fock.)
A third party could then note that this still underestimates what is called the “correlation energy” of the system, because treating the electron cloud as a continuous distribution through which electrons move ignores the fact that individual electrons strongly repel and hence do not like to get near one another. Both of the former approaches underestimate the size of the electron hole, and hence they make the atom “too small” and “too tightly bound”. A variety of schemes have been proposed to overcome this problem — using a semi-empirical local density functional being probably the most successful.
A fourth party might then observe that the Universe is really relativistic, and that by ignoring relativity theory and doing a classical computation we introduce an error into all of the above (although it might be included in the semi-empirical LDF approach heuristically).
In the end, one might well have an “ensemble” of models, all of which are based on physics. In fact, the differences are also based on physics — the physics omitted from one try to another, or the means used to approximate and try to include physics we cannot include in a first-principles computation (note how I sneaked a semi-empirical note in with the LDF; although one can derive some density functionals from first principles (e.g. the Thomas-Fermi approximation), they usually don’t do particularly well because they aren’t valid across the full range of densities observed in actual atoms). Note well, doing the precise computation is not an option. We cannot solve the many body atomic state problem in quantum theory exactly any more than we can solve the many body problem exactly in classical theory or the set of open, nonlinear, coupled, damped, driven chaotic Navier-Stokes equations in a non-inertial reference frame that represent the climate system.
Note well that solving for the exact, fully correlated nonlinear many electron wavefunction of the humble carbon atom — or the far more complex Uranium atom — is trivially simple (in computational terms) compared to the climate problem. We can’t compute either one, but we can come a damn sight closer to consistently approximating the solution to the former compared to the latter.
So, should we take the mean of the ensemble of “physics based” models for the quantum electronic structure of atomic carbon and treat it as the best prediction of carbon’s quantum structure? Only if we are very stupid or insane or want to sell something. If you read what I said carefully (and you may not have — eyes tend to glaze over when one reviews a year or so of graduate quantum theory applied to electronics in a few paragraphs, even though I left out perturbation theory, Feynman diagrams, and ever so much more :-) you will note that I cheated — I sneaked in a semi-empirical method.
Which of these is going to be the winner? LDF, of course. Why? Because the parameters are adjusted to give the best fit to the actual empirical spectrum of Carbon. All of the others are going to underestimate the correlation hole, and their errors will be systematically deviant from the correct spectrum. Their mean will be systematically deviant, and by weighting Hartree (the dumbest reasonable “physics based approach”) the same as LDF in the “ensemble” average, you guarantee that the error in this “mean” will be significant.
Suppose one did not know (as, at one time, we did not know) which of the models gave the best result. Suppose that nobody had actually measured the spectrum of Carbon, so its empirical quantum structure was unknown. Would the ensemble mean be reasonable then? Of course not. I presented the models in the way physics itself predicts improvement — adding back details that ought to be important that are omitted in Hartree. One cannot be certain that adding back these details will actually improve things, by the way, because it is always possible that the corrections are not monotonic (and eventually, at higher orders in perturbation theory, they most certainly are not!). Still, nobody would pretend that the average of a theory with an improved theory is “likely” to be better than the improved theory itself, because that would make no sense. Nor would anyone claim that diagrammatic perturbation theory results (for which there is a clear a priori derived justification) are necessarily going to beat semi-heuristic methods like LDF, because in fact they often do not.
What one would do in the real world is measure the spectrum of Carbon, compare it to the predictions of the models, and then hand out the ribbons to the winners! Not the other way around. And since none of the winners is going to be exact — indeed, for decades and decades of work, none of the winners was even particularly close to observed/measured spectra in spite of using supercomputers (admittedly, supercomputers that were slower than your cell phone is today) to do the computations — one would then return to the drawing board and code entry console to try to do better.
Can we apply this sort of thoughtful reasoning to the spaghetti snarl of GCMs and their highly divergent results? You bet we can! First of all, we could stop pretending that the “ensemble” mean and variance have any meaning whatsoever by not computing them. Why compute a number that has no meaning? Second, we could take the actual climate record from some “epoch starting point” — one that does not matter in the long run, and we’ll have to continue the comparison for the long run because in any short run from any starting point noise of a variety of sorts will obscure systematic errors — and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.
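A hedged sketch of that triage, with invented “observations” and “model runs” standing in for the real record and the real GCM archive, might look like this:

```python
# A sketch of the triage described above: score each model against the observed
# record from a common starting epoch and keep only the closest few. The
# "observations" and "model runs" below are invented placeholders.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1997, 2014)
observed = 0.005 * (years - 1997) + 0.08 * rng.standard_normal(years.size)

# Each entry is one hypothetical model's temperature-anomaly trajectory.
models = {f"GCM-{k:02d}": 0.01 * (k / 5.0) * (years - 1997)
          + 0.05 * rng.standard_normal(years.size) for k in range(1, 24)}

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

ranked = sorted(models, key=lambda name: rmse(models[name], observed))
keep, failed = ranked[:5], ranked[5:]
print("keep for further work:", keep)
print("move to the 'failed' bin:", len(failed), "models")
```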
Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.
Then comes the hard part. Waiting. The climate is not as simple as a Carbon atom. The latter’s spectrum never changes; it is a fixed target. The former is never the same. Either one’s dynamical model is never the same and mirrors the variation of reality, or one has to conclude that the problem is unsolved and the implementation of the physics is wrong, however “well-known” that physics is. So one has to wait and see if one’s model, adjusted and improved to better fit the past up to the present, actually has any predictive value.
Worst of all, one cannot easily use statistics to determine when or if one’s predictions are failing, because damn, climate is nonlinear, non-Markovian, chaotic, and is apparently influenced in nontrivial ways by a world-sized bucket of competing, occasionally cancelling, poorly understood factors. Soot. Aerosols. GHGs. Clouds. Ice. Decadal oscillations. Defects spun off from the chaotic process that cause global, persistent changes in atmospheric circulation on a local basis (e.g. blocking highs that sit out on the Atlantic for half a year) that have a huge impact on annual or monthly temperatures and rainfall and so on. Orbital factors. Solar factors. Changes in the composition of the troposphere, the stratosphere, the thermosphere. Volcanoes. Land use changes. Algae blooms.
And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because trying to ensemble average a small sample from a chaotic system is so stupid that I cannot begin to describe it. Everything works just fine as long as you average over an interval short enough that you are bound to a given attractor, oscillating away; things look predictable, and then — damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.
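To make the attractor point concrete without claiming anything about the real climate, here is a toy sketch using the Lorenz-63 system, whose two lobes merely stand in for the regimes described above; short-window averages look stable until the trajectory hops lobes, and then they change completely.

```python
# A toy illustration of regime dependence in a chaotic system: short-window
# averages of the Lorenz-63 x-variable look stable until the trajectory flips
# to the other lobe, at which point they change sign. The lobes here merely
# stand in for the "attractors" in the text.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 40, 8000)
sol = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-8)

window = 400  # roughly 2 time units per averaging window
for i in range(0, sol.y.shape[1], window):
    chunk = sol.y[0, i:i + window]
    print(f"t ~ {sol.t[i]:5.1f}  mean(x) over window = {chunk.mean():+6.2f}")
# The window means jump between positive and negative values as the trajectory
# hops between lobes: tuning to one regime says little about the next.
```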
This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or R² derived from an AR5 mean has any meaning. It gives up the high ground (even though one is using it for a good purpose, trying to argue that this “ensemble” fails elementary statistical tests). But statistical testing is a shaky enough theory as it is, open to data dredging and horrendous error alike, and that’s when it really is governed by underlying IID processes (see “Green Jelly Beans Cause Acne”). One cannot naively apply a criterion like rejection if p < 0.05, and all that means under the best of circumstances is that the current observations are improbable given the null hypothesis at 19 to 1. People win and lose bets at this level all the time. One time in 20, in fact. We make a lot of bets!
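The 19-to-1 point is easy to demonstrate with a toy Monte Carlo (synthetic data only): test a true null hypothesis twenty times at the 0.05 level and, on average, roughly one comparison comes up “significant” by accident.

```python
# The "green jelly beans" problem in miniature: run 20 tests of a true null
# hypothesis at the 0.05 level and, on average, about one comes up "significant".
# Purely synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
hits = 0
for test in range(20):
    a = rng.standard_normal(50)   # two samples drawn from the same distribution,
    b = rng.standard_normal(50)   # so any "effect" found here is a false positive
    p = stats.ttest_ind(a, b).pvalue
    if p < 0.05:
        hits += 1
        print(f"test {test:2d}: p = {p:.3f}  <-- 'significant' by accident")
print(f"{hits} of 20 null comparisons rejected at p < 0.05")
```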
So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth, and why we aren’t using empirical evidence (as it accumulates) to reject failing models and concentrate on the ones that come closest to working, while also not using the models that are obviously not working in any sort of “average” claim for future warming. Maybe they could hire themselves a Bayesian or two and get them to recompute the AR curves, I dunno.
It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.
Of course if one does this, the GCM predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.
rgb
The original scientific approach of AR4 was not so far from what Robert Brown is arguing. The models were required to be tested against a number of scenarios including the whole of the 20th century, the Roman warm period, the Medieval Warm period and the annual variation of sea ice. In the technical section of AR4 they are honest about the fact that none of the models was able to correlate with any of the scenarios without adding a fudge factor for deep sea currents and none of the models correlated well with all the scenarios.
Unfortunately these failures had become mere uncertainties by the time they got to the technical summary and these uncertainties had become certainties by the executive summary. The politicians had been left with no alternative but to conjure up this statistical nonsense. Why did the scientists not speak out? Well some did and were sacked. Others resigned. Most however seemed to realise how much money could be made and went with the flow.
Many people have suffered greatly as a consequence of this fraud but Science is the biggest loser.
Two different competing premises here,
The first is that of trying to predict [model] a complex, non-linear, chaotic climate system.
The second is that the climate system is in a box that just doesn’t change very much and reproduces itself fairly reliably on a yearly, decadal and even millennial time scale.
The first gives rise to weather which is constantly changing and becomes unpredictable [“Australia, a land of droughts and flooding rains”] for different reasons at daily, weekly, monthly and yearly levels.
The second gives rise to climate change over years, centuries and millennia [ice ages and hot spells] at a very slow rate.
Most people try to confuse climate change and weather. Models are predicting short term changes [weather] and calling it climate change.
All the current models are biased to warming, i.e. self-fulfilling models.
The chance of the earth warming up from year to year is 50.05 percent. Why? Because the current long range view shows that we are still very slowly warming over the last 20,000 years. Climate models should basically be random walk generators reverting to the mean.
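A minimal sketch of that suggestion, with entirely invented drift, reversion, and noise parameters, might look like the following; it is meant only to show the kind of generator being proposed, not a calibrated model.

```python
# A sketch of the commenter's suggestion: yearly anomalies generated as a
# random walk that reverts toward a slowly rising baseline. All parameters
# (drift, reversion strength, noise) are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(2000, 2101)
drift = 0.0005            # slow residual warming, degrees C per year
reversion = 0.2           # pull back toward the baseline each year
noise = 0.1               # weather / ENSO-scale noise, degrees C

baseline = drift * (years - years[0])
anomaly = np.empty(years.size)
anomaly[0] = 0.0
for i in range(1, years.size):
    shock = noise * rng.standard_normal()
    anomaly[i] = anomaly[i - 1] + reversion * (baseline[i] - anomaly[i - 1]) + shock

warming_years = np.mean(np.diff(anomaly) > 0)
print(f"fraction of year-on-year warming steps: {warming_years:.2f}")
```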
They should have the capacity to put in AO’s, ENSO’s, Bob Tisdale and Tamino, Judy and WUWT plus the IPCC and make a guesstimate model yearly to 5 yearly with the ability to change the inputs yearly to reflect the unknown unknowns that occurred each year. No models should be able to claim to tell the future more than a year out because the chaotic nature of weather on short term time frames excludes it.
The fact that all climate models are absolutely programmed to give positive warming, and thus cannot enter negative territory [whose probability is just under 50% on long-term averages], means that all climate models are not only wrong, they are mendaciously wrong.
I don’t mean to toot my own horn here, but I’ve been saying what Robert Brown said on the comment sections of this website and other climate comment sections for years. Physicists use significant computational power daily in their jobs attempting to unlock mysteries of single atoms, and they fail daily trying to do this. Single atoms are unthinkably simpler than trying to compute the climate system, and yet climatologists pretend like they know what the temperatures will be in 200 years.
Climate modelers who pretend their “predictions” have any meaning are either insane, or they’re selling something. This is something I’ve said here for years. Thanks Robert for saying it again in a better way.
“Robert G. Brown uses a quantum mechanics analogy to make his point. The vast majority of us have no knowledge of quantum mechanics nor do we have any way to make meaningful measurements in the field. In contrast, we have all spent a lifetime experiencing climate, so we all have at least a rudimentary knowledge of climate.
Believers in catastrophic global warming have often used the “doctor” analogy to argue their case. They state something like, “The consensus of climatologists says there is serious global warming. Who would you trust, a doctor (consensus) or a quack (non-consensus)?” A better analogy (given the average person’s familiarity with climate and motor vehicles as opposed to quantum mechanics and medicine) might be, “Who would you trust, a friend or a used car salesman?”” — Alan D McIntire, June 19, 2013 at 5:25 am
I don’t think it’s a better analogy. The timescales under consideration are far too long for Joe Soap to have any meaningful acquaintance with changes in climate. Weather is a different matter, but that’s beside the point. So I prefer your medic analogy. Let’s stick with that then.
Now, what you and others like you are wilfully ignoring is the abundant evidence that we are dealing here with a care facility staffed exclusively by Harold Shipman clones.
So now how do you feel about that colonoscopy they’ve recommended you undergo? They want to rip you a new one, and worse.
It’s what they’ve been doing to the rest of us economically for decades now.
Being a layman confers no free pass in the responsibility stakes, and an appeal to argument from authority should be the very last resort for the thinking man. Always do as much of your own thinking as you can.
see “Green Jelly Beans Cause Acne”
==========
If you test enough climate models, some of them will accidentally get the right answer. Going forward these “correct” models will have no more predictive power than any other model.
If you do enough tests eventually some will test positive by accident. If you only report the positive tests and hide the negative tests, you can prove anything with statistics. Regardless of whether it is true or not. Which is what we see in the news. Only the positive results are reported.
http://www.kdnuggets.com/2011/04/xkcd-significant-data-mining.html
Thank you Dr Brown the truth shines thro.
As for N. Stokes, there is a reason Climate Audit calls you Racehorse Haynes.
Thanks, Robert.
Yes, the “models ensemble” is a meaningless average of bad science.
The mean of the models is nothing but a convenient way of illustrating what they are saying. It was done by Monckton as it has been done by many others. In no way does it imply or suggest anything about the validity of the models, nor that the mean itself is a meaningful quantity in terms of having a physical underpinning.
I don’t see Mr Stokes defending the models anywhere, and neither am I – they patently suck. He is simply asking what bitch rgb wants to slap, since it’s primarily the practice of taking the mean and variance (the ‘implicit swindle’) that offends, apparently.
By extension then, the 97% consensus does not know how to use or interpret statistics and is not competent in physics either.
Frank K. says:
June 19, 2013 at 7:06 am
BTW, one reason you will never see any progress towards perfecting multi-model ensemble climate forecasts is that none of the climate modelers want to say whose models are “good” and whose are “bad”…
===========
the reason for this is quite simple. you find this all the time in organizations. no one dares criticize anyone else, because they know their own work is equally bad. if one person gets the sack for bad work, then everyone could get the sack. so everyone praises everyone, saying how good a job everyone is doing, and thereby protect their own work and jobs.
climate scientists don’t criticize other climate scientists work because they know their models have no predictive power. that isn’t why they build models. models are built to attract funding, which they do quite well. this is the true power of models.
instead, climate scientists criticize mathematicians, physicists and any other scientists that try and point out how poor the climate models are performing, how far removed from science the models are. climate scientists respond that other scientists cannot criticize climate science, because only climate scientists understand climate science.
in any case, climate science is a noble cause. it is about saving the earth, so no one may criticize. it is politically incorrect to criticize any noble cause, no matter how poor the results. at least they are trying, so if they make some mistakes, so what. they are trying to save the planet. you can’t make an omelet without breaking eggs.
no one lives forever. if some people are killed in the process, that is the price of saving everyone else. if anything, it is a mercy. it saved them having to suffer through climate change. they were the lucky ones. those that were left behind to suffer, those are the victims of climate change.
Greg L. says:
June 18, 2013 at 6:02 pm
that the errors of the members are not systematically biased
============
unfortunately none of the models test negative feedback. they all assume atmospheric water vapor will increase with warming. yet observations show just the opposite. during the period of warming, atmospheric water vapor fell.
unfortunately, all the models predict a tropical hotspot – that the atmosphere will warm first followed by the surface. however, this is not what has been observed. the surface has warmed faster than the atmosphere.
these errors point to systemic bias and high correlation of the errors. Both of which strongly argue against the use of an ensemble mean.
I’ve read this post four times now and appreciate its clearly articulated logic more each time I read it, although I must admit my favorite part is a physicist that says “b*tch slap”. Gotta love this guy.
I have two points, one of which is included in the various comments already given.
1) The climate models are clearly not independent, being based on related data sets and hypotheses. Thus you cannot use the usual approach to the uncertainty of the mean by saying: 23 models? Then the standard deviation of the mean is the standard deviation divided by the square root of 23.
2) The average scenario curve (temperature versus years) is probably not the most likely scenario.
What could be done is to consider each model curve as a set of data for n years. The alternative data (m models) also give such data sets. Then in multidimensional space, with n times m dimensions plus one for the temperature, each scenario is a single point. The most likely scenario is where the greatest density of points occurs. This might give a quite different result from just making an average temperature scenario. (By the way, “most likely” is the mode and not the mean, although in a symmetric distribution they coincide.)
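A miniature, purely synthetic version of that exercise is sketched below; with a skewed ensemble the plain mean is pulled toward the outliers while the densest (“most typical”) trajectory is not, which is the mode-versus-mean point.

```python
# A miniature version of the exercise described above: treat each model's n-year
# trajectory as one point, estimate where those points are densest with a simple
# fixed-bandwidth kernel, and compare that "most typical" trajectory with the
# plain ensemble mean. All trajectories below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_models, n_years = 23, 10
# A skewed ensemble: most models cluster low, a few run hot.
trajectories = np.vstack([
    0.10 * np.arange(n_years) + 0.05 * rng.standard_normal((18, n_years)),
    0.40 * np.arange(n_years) + 0.05 * rng.standard_normal((5, n_years)),
])

bandwidth = 0.5
# Density at each model's point: sum of Gaussian kernels centred on the others.
sq_dists = ((trajectories[:, None, :] - trajectories[None, :, :]) ** 2).sum(axis=2)
density = np.exp(-sq_dists / (2 * bandwidth**2)).sum(axis=1)

modal = trajectories[np.argmax(density)]
mean = trajectories.mean(axis=0)
print("final-year anomaly, modal trajectory :", round(modal[-1], 2))
print("final-year anomaly, ensemble mean    :", round(mean[-1], 2))
# With a skewed ensemble the mean is pulled toward the hot outliers,
# while the densest (most typical) trajectory is not.
```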
Would it be worth making such an exercise? To my feeling it is not worth it, because of a lack of confidence in the models.
M.H.Nederlof, geologist.
Thank You.
The Model Mean argument was meant, I think, to augment the equally unscientific consensus argument. Or, at the very least, formulated by the same ignorant minds.
If 97% of climate scientists, with their remarkably computationally primitive brains, all attempt to understand the global climate system and calculate the effects of increased atmospheric CO2, starting from different assumptions and having read only a small % of the available literature, does that make them right on average?
As rgb correctly points out, different models effectively represent different theories. Yet they may all claim to be ‘physics-based’.
If you have a medical condition you might visit a physician, or a priest, or a witch-doctor, or an aromatherapist. One or none of them may give you an accurate diagnosis, but trying to calculate an “average” is just not logical, Captain.
But that hasn’t stopped the politically minded applying their own favorite therapy.
I guess everybody who has done a little science in his life knows that…
so the issue is elsewhere: why do so many people agree to discuss it at all????
Next will be, you know, the earth is not in a state of equilibrium…
Next will be, global temperature is not a temperature… and it is meaningless for guessing the heat content…
OK, most of what we saw was not refutable… it was not science.
Rob Ricket, nope, another guy. I liked his work, too, though.
We must always keep in mind that models are simplified constructs of complex systems. The focus is on the word simple as in “What do the simple folk do?” Hey, that makes for a good song title!!! It is, and the play is called Camelot. sarc/off
Recall that in addition to the basic errors Dr. Brown discussed, even within the modeling field, correlated error is documented across all the models in the diagnostic literature: on precipitation by Wentz, on surface albedo bias by Roesch, and on Arctic melting by Stroeve and Scambos. I’m sure this is just the tip of the iceberg, pardon the pun.
“Essentially, all models are wrong, but some are useful. …the practical question is how wrong do they have to be to not be useful.”
“Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.”
– George E. P. Box
Climate science has a dearth of statistics expertise and a big, fat stamp of mediocrity. I’m going to dub it “The Scarlet M”.
In Bullseye shooting, if you shoot a circular pattern of sixes it does not average out to a 10. I think the same principle applies here.
megawati says:
June 19, 2013 at 7:50 am
“The mean of the models is nothing but a convenient way of illustrating what they are saying. It was done by Monckton as it has been done by many others. In no way does it imply or suggest anything about the validity of the models, nor that the mean itself is a meaningful quantity in terms of having a physical underpinning.
I don’t see Mr Stokes defending the models anywhere, and neither am I – they patently suck. He is simply asking what bitch rgb wants to slap, since it’s primarily the practice of taking the mean and variance (the ‘implicit swindle’) that offends, apparently.”
You are spreading disinformation. The multi-model mean is constantly presented by all the consensus pseudoscientists as the ultimate wisdom.
Let’s look at the IPCC AR4.
“There is close agreement of globally averaged SAT multi-model mean warming for the early 21st century for concentrations derived from …”
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-es-1-mean-temperature.html
I think the real problem for megawati, Stokes and all other warmists is that nice NSA subsidiary called Google. Thanks, NSA.
Is the problem wider than the ensemble of 23 models that Brown discusses? As I understand it, each of the 23 model results is itself an average. Every time the model is run starting from the same point the results are different, so they do several runs and produce an average. I also understand the number of runs is probably not statistically significant because it takes so long to do a single run.
The deception is part of the entire pattern of IPCC behavior and once again occurs in the gap between the Science report and the Summary for Policymakers (SPM). The latter specifically says “Based on current models we predict:” (IPCC, FAR, SPM, p. xi) and “Confidence in predictions from climate models” (IPCC, FAR, SPM, p. xxviii). Nowhere is the difference more emphasized than in this comment from the Science Report in the TAR: “In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”
O’Keefe and Kueter explained how a model works: “The climate model is run, using standard numerical modeling techniques, by calculating the changes indicated by the model’s equations over a short increment of time—20 minutes in the most advanced GCMs—for one cell, then using the output of that cell as inputs for its neighboring cells. The process is repeated until the change in each cell around the globe has been calculated.” Imagine the number of calculations necessary; even at computer speeds of millions of calculations a second, this takes a long time. The run time is a major limitation.
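As a rough illustration of why that procedure is expensive (and nothing more), here is a toy grid of cells updated from their neighbours over 20-minute increments for one model year; the physics is a trivial diffusion stand-in and every number is invented.

```python
# A toy of the time-stepping O'Keefe and Kueter describe: a small grid of cells,
# each updated over a short increment using its neighbours, repeated for every
# cell and every step. Real GCMs solve far more per cell; this only shows why
# cost scales as (number of cells) x (number of time steps). All values invented.
import numpy as np

n_lat, n_lon = 36, 72            # a very coarse 5-degree grid
steps_per_day, n_days = 72, 365  # 20-minute increments for one model year
diffusivity = 0.1

field = np.zeros((n_lat, n_lon))
field[n_lat // 2, n_lon // 2] = 1.0  # an arbitrary initial disturbance

for step in range(steps_per_day * n_days):
    # Each cell's change depends on its neighbours (periodic boundaries for simplicity).
    north = np.roll(field, -1, axis=0)
    south = np.roll(field, 1, axis=0)
    east = np.roll(field, -1, axis=1)
    west = np.roll(field, 1, axis=1)
    field = field + diffusivity * (north + south + east + west - 4 * field)

cell_updates = n_lat * n_lon * steps_per_day * n_days
print(f"cell updates for one model year on this toy grid: {cell_updates:,}")
```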
In personal communication, IPCC computer modeller Andrew Weaver told me individual runs can take weeks. All of this takes huge amounts of computer capacity; running a full-scale GCM for a 100-year projection of future climate requires many months of time on the most advanced supercomputer. As a result, very few full-scale GCM projections are made.
http://www.marshall.org/pdf/materials/225.pdf
A comment at Steve McIntyre’s site, Climate Audit, illustrated the problem. “Caspar Ammann said that GCMs (General Circulation Models) took about 1 day of machine time to cover 25 years. On this basis, it is obviously impossible to model the Pliocene-Pleistocene transition (say the last 2 million years) using a GCM as this would take about 219 years of computer time.” So you can only run the models if you reduce the number of variables. O’Keefe and Kueter explain: “As a result, very few full-scale GCM projections are made. Modelers have developed a variety of short cut techniques to allow them to generate more results. Since the accuracy of full GCM runs is unknown, it is not possible to estimate what impact the use of these short cuts has on the quality of model outputs.” This was confirmed when I learned that the models used for the IPCC ensemble do not include the Milankovitch Effect; Weaver told me that it was left out because of the time scales on which it operates.
Omission of variables allows short runs, but allows manipulation and removes the model further from reality. Which variables do you include? For the IPCC, only those that create the results they want. Also, every time you run the model it provides a different result because the atmosphere is chaotic. They resolve this by doing several runs and then using an average of the outputs.
After appearing before the US Congress a few years ago I gave a public presentation on the inadequacy of the models. My more recent public presentation on the matter was at the Washington Heartland Conference in which I explained how the computer models and their results were the premeditated vehicle used to confirm the AGW hypothesis. In a classic circular argument they then argued that the computer results were proof that CO2 drove temperature increase.