This comment, from rgbatduke (Robert G. Brown of the Duke University Physics Department), was posted on the “No significant warming for 17 years 4 months” thread. It has gained quite a bit of attention because it speaks clearly to truth. So that all readers can benefit, I’m elevating it to a full post.
Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!
This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and, in spite of the number of contributors, the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics-supported result.
Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!
Say what?
This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again. One cannot generate an ensemble of independent and identically distributed models that have different code. One might, possibly, generate a single model that generates an ensemble of predictions by using uniform deviates (random numbers) to seed “noise” (representing uncertainty) in the inputs.
What I’m trying to say is that the variance and mean of the “ensemble” of models are completely meaningless, statistically, because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, and there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate unbiased anything is through the use of e.g. dice or other objectively random instruments).
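To make this concrete, here is a minimal sketch (Python, with invented numbers) contrasting the two situations: independent, unbiased measurements of one quantity, where the mean really does home in on the truth, versus an “ensemble” of differently biased models, where the mean simply recovers the average bias and the spread only measures disagreement.

```python
# A minimal sketch, with invented numbers, of why averaging structurally
# different models is not like averaging noisy measurements of one quantity.
import numpy as np

rng = np.random.default_rng(0)
truth = 1.0  # the (unknown) true value, arbitrary units

# Case 1: independent, unbiased measurements of the same quantity.
# Here the ensemble mean really does converge toward the truth.
measurements = truth + rng.normal(0.0, 0.5, size=30)
print("iid measurement mean:", measurements.mean())

# Case 2: thirty "models", each with its own systematic bias (omitted
# physics, different parameterisations).  The biases are not zero-mean
# draws around the truth, so the mean recovers the average bias and the
# spread describes disagreement between models, not predictive error bars.
biases = rng.uniform(0.5, 2.5, size=30)          # assumed, for illustration
models = truth + biases + rng.normal(0.0, 0.1, size=30)
print("model ensemble mean:", models.mean())     # near truth + mean bias, not truth
print("model ensemble std :", models.std())
```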
So why buy into this nonsense by doing linear fits to a function — global temperature — that has never in its entire history been linear, although of course it has always been approximately smooth so one can always do a Taylor series expansion in some sufficiently small interval and get a linear term that — by the nature of Taylor series fits to nonlinear functions — is guaranteed to fail if extrapolated as higher order nonlinear terms kick in and ultimately dominate? Why even pay lip service to the notion that R² or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning? It has none.
Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.
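The Taylor-series point is easy to demonstrate with a toy example. The sketch below (Python; the “record” is an arbitrary smooth nonlinear function chosen only for illustration) fits a straight line over a short window and then extrapolates it well beyond that window, where the higher-order terms take over.

```python
# A minimal sketch: a linear fit to a smooth nonlinear record looks fine
# inside the fitting window and fails badly when extrapolated.
import numpy as np

t_fit = np.linspace(0.0, 1.0, 50)          # the interval we fit over
record = np.sin(2.0 * t_fit)               # stand-in for a nonlinear record
slope, intercept = np.polyfit(t_fit, record, 1)

t_future = 3.0                              # extrapolate well past the window
linear_guess = slope * t_future + intercept
actual = np.sin(2.0 * t_future)
print("inside window, max residual:", np.max(np.abs(slope * t_fit + intercept - record)))
print("extrapolated:", round(linear_guess, 3), " actual:", round(actual, 3))
```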
Let’s invert this process and actually apply statistical analysis to the distribution of model results Re: the claim that they all correctly implement well-known physics. For example, if I attempt to do an a priori computation of the quantum structure of, say, a carbon atom, I might begin by solving a single electron model, treating the electron-electron interaction using the probability distribution from the single electron model to generate a spherically symmetric “density” of electrons around the nucleus, and then performing a self-consistent field theory iteration (resolving the single electron model for the new potential) until it converges. (This is known as the Hartree approximation.)
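The self-consistent-field loop just described is, structurally, a fixed-point iteration: solve given a guessed density, rebuild the potential that density implies, and repeat until nothing changes. The sketch below shows only that loop structure on a toy scalar problem; it is not an electronic-structure calculation, and every function in it is a made-up stand-in.

```python
# Schematic SCF-style fixed-point iteration on a toy scalar problem.
def solve_given_potential(v):
    # stand-in for "solve the single-electron problem in potential v"
    return 1.0 / (1.0 + v)

density = 0.5                          # initial guess
for iteration in range(1, 101):
    potential = density ** 2           # stand-in for the potential the density generates
    new_density = solve_given_potential(potential)
    if abs(new_density - density) < 1e-10:
        break
    density = 0.5 * density + 0.5 * new_density   # simple mixing for stability
print("converged after", iteration, "iterations; density =", round(density, 6))
```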
Somebody else could say “Wait, this ignores the Pauli exclusion principle” and the requirement that the electron wavefunction be fully antisymmetric. One could then make the (still single electron) model more complicated and construct a Slater determinant to use as a fully antisymmetric representation of the electron wavefunctions, generate the density, and perform the self-consistent field computation to convergence. (This is Hartree-Fock.)
A third party could then note that this still underestimates what is called the “correlation energy” of the system, because treating the electron cloud as a continuous distribution through which electrons move ignores the fact that individual electrons strongly repel and hence do not like to get near one another. Both of the former approaches underestimate the size of the electron hole, and hence they make the atom “too small” and “too tightly bound”. A variety of schemes have been proposed to overcome this problem — using a semi-empirical local density functional being probably the most successful.
A fourth party might then observe that the Universe is really relativistic, and that by ignoring relativity theory and doing a classical computation we introduce an error into all of the above (although it might be included in the semi-empirical LDF approach heuristically).
In the end, one might well have an “ensemble” of models, all of which are based on physics. In fact, the differences are also based on physics — the physics omitted from one try to another, or the means used to approximate and try to include physics we cannot include in a first-principles computation (note how I sneaked a semi-empirical note in with the LDF; although one can derive some density functionals from first principles (e.g. Thomas-Fermi approximation), they usually don’t do particularly well because they aren’t valid across the full range of densities observed in actual atoms). Note well, doing the precise computation is not an option. We cannot solve the many body atomic state problem in quantum theory exactly any more than we can solve the many body problem exactly in classical theory or the set of open, nonlinear, coupled, damped, driven chaotic Navier-Stokes equations in a non-inertial reference frame that represent the climate system.
Note well that solving for the exact, fully correlated nonlinear many electron wavefunction of the humble carbon atom — or the far more complex Uranium atom — is trivially simple (in computational terms) compared to the climate problem. We can’t compute either one, but we can come a damn sight closer to consistently approximating the solution to the former compared to the latter.
So, should we take the mean of the ensemble of “physics based” models for the quantum electronic structure of atomic carbon and treat it as the best prediction of carbon’s quantum structure? Only if we are very stupid or insane or want to sell something. If you read what I said carefully (and you may not have — eyes tend to glaze over when one reviews a year or so of graduate quantum theory applied to electronics in a few paragraphs, even though I left out perturbation theory, Feynman diagrams, and ever so much more :-) you will note that I cheated — I slipped a semi-empirical method in.
Which of these is going to be the winner? LDF, of course. Why? Because the parameters are adjusted to give the best fit to the actual empirical spectrum of Carbon. All of the others are going to underestimate the correlation hole, and their errors will be systematically deviant from the correct spectrum. Their mean will be systematically deviant, and by weighting Hartree (the dumbest reasonable “physics based approach”) the same as LDF in the “ensemble” average, you guarantee that the error in this “mean” will be significant.
Suppose one did not know (as, at one time, we did not know) which of the models gave the best result. Suppose that nobody had actually measured the spectrum of Carbon, so its empirical quantum structure was unknown. Would the ensemble mean be reasonable then? Of course not. I presented the models in the way physics itself predicts improvement — adding back details that ought to be important that are omitted in Hartree. One cannot be certain that adding back these details will actually improve things, by the way, because it is always possible that the corrections are not monotonic (and eventually, at higher orders in perturbation theory, they most certainly are not!). Still, nobody would pretend that the average of a theory with an improved theory is “likely” to be better than the improved theory itself, because that would make no sense. Nor would anyone claim that diagrammatic perturbation theory results (for which there is a clear a priori derived justification) are necessarily going to beat semi-heuristic methods like LDF because in fact they often do not.
What one would do in the real world is measure the spectrum of Carbon, compare it to the predictions of the models, and then hand out the ribbons to the winners! Not the other way around. And since none of the winners is going to be exact — indeed, for decades and decades of work, none of the winners was even particularly close to observed/measured spectra in spite of using supercomputers (admittedly, supercomputers that were slower than your cell phone is today) to do the computations — one would then return to the drawing board and code entry console to try to do better.
Can we apply this sort of thoughtful reasoning to the spaghetti snarl of GCMs and their highly divergent results? You bet we can! First of all, we could stop pretending that “ensemble” mean and variance have any meaning whatsoever by not computing them. Why compute a number that has no meaning? Second, we could take the actual climate record from some “epoch starting point” — one that does not matter in the long run, and we’ll have to continue the comparison for the long run because in any short run from any starting point noise of a variety of sorts will obscure systematic errors — and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.
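That sorting step is mechanically trivial. A minimal sketch (Python; the model names, observed values, and projections below are entirely made up for illustration) scores each model against the observed record and keeps only the best performers:

```python
# Hypothetical ranking of model projections against an observed record.
import numpy as np

observed = np.array([0.10, 0.12, 0.11, 0.13, 0.12])     # invented "record"
projections = {
    "model_A": np.array([0.11, 0.13, 0.12, 0.14, 0.13]),
    "model_B": np.array([0.20, 0.25, 0.30, 0.35, 0.40]),
    "model_C": np.array([0.09, 0.11, 0.12, 0.12, 0.13]),
}

def rmse(pred, obs):
    """Root-mean-square error of a projection against the observations."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

ranked = sorted(projections, key=lambda name: rmse(projections[name], observed))
kept, failed = ranked[:2], ranked[2:]       # keep the top performers only
print("kept for further work:", kept)
print("failed bin           :", failed)
```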
Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.
Then comes the hard part. Waiting. The climate is not as simple as a Carbon atom. The latter’s spectrum never changes, it is a fixed target. The former is never the same. Either one’s dynamical model is never the same and mirrors the variation of reality or one has to conclude that the problem is unsolved and the implementation of the physics is wrong, however “well-known” that physics is. So one has to wait and see if one’s model, adjusted and improved to better fit the past up to the present, actually has any predictive value.
Worst of all, one cannot easily use statistics to determine when or if one’s predictions are failing, because damn, climate is nonlinear, non-Markovian, chaotic, and is apparently influenced in nontrivial ways by a world-sized bucket of competing, occasionally cancelling, poorly understood factors. Soot. Aerosols. GHGs. Clouds. Ice. Decadal oscillations. Defects spun off from the chaotic process that cause global, persistent changes in atmospheric circulation on a local basis (e.g. blocking highs that sit out on the Atlantic for half a year) that have a huge impact on annual or monthly temperatures and rainfall and so on. Orbital factors. Solar factors. Changes in the composition of the troposphere, the stratosphere, the thermosphere. Volcanoes. Land use changes. Algae blooms.
And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because trying to ensemble average a small sample from a chaotic system is so stupid that I cannot begin to describe it. Everything works just fine as long as you average over an interval short enough that you are bound to a given attractor, oscillating away; things look predictable and then — damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.
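The logistic map makes a convenient toy illustration of that last point (it is only a stand-in for a chaotic system, not a climate model): averages accumulated while the system sits on one attractor say little about its behaviour once the attractor changes.

```python
# Long-run averages of the logistic map before and after the attractor changes.
def long_run_average(r, x0=0.4, burn_in=1000, samples=5000):
    x = x0
    for _ in range(burn_in):              # let transients die out
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(samples):
        x = r * x * (1.0 - x)
        total += x
    return total / samples

print("average on one attractor    :", round(long_run_average(3.2), 3))
print("average on another attractor:", round(long_run_average(3.9), 3))
```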
This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or R-squared derived from an AR5 mean has any meaning. It gives up the high ground (even though one is using it for a good purpose, trying to argue that this “ensemble” fails elementary statistical tests). But statistical testing is a shaky enough theory as it is, open to data dredging and horrendous error alike, and that’s when it really is governed by underlying IID processes (see “Green Jelly Beans Cause Acne”). One cannot naively apply a criterion like rejection if p < 0.05, and all that means under the best of circumstances is that the current observations are improbable given the null hypothesis at 19 to 1. People win and lose bets at this level all the time. One time in 20, in fact. We make a lot of bets!
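The “green jelly beans” point is easy to reproduce numerically. The sketch below (Python, numbers invented) runs twenty significance tests on pure noise; roughly one of them will come out “significant” at p < 0.05, and the chance of at least one doing so is about 64%.

```python
# Twenty tests on pure noise: how often does p < 0.05 show up by chance?
import numpy as np

rng = np.random.default_rng(1)
n_tests, n_points = 20, 30
false_positives = 0
for _ in range(n_tests):
    sample = rng.normal(0.0, 1.0, size=n_points)       # noise, true mean is zero
    z = sample.mean() / (sample.std(ddof=1) / np.sqrt(n_points))
    if abs(z) > 1.96:                                   # roughly p < 0.05, two-sided
        false_positives += 1
print(f"{false_positives} of {n_tests} null tests came out 'significant'")
print("chance of at least one:", round(1 - 0.95 ** n_tests, 2))   # about 0.64
```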
So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth, and why we aren’t using empirical evidence (as it accumulates) to reject failing models and concentrate on the ones that come closest to working, while also not using the models that are obviously not working in any sort of “average” claim for future warming. Maybe they could hire themselves a Bayesian or two and get them to recompute the AR curves, I dunno.
It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.
Of course if one does this, the GCM predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.
rgb
What a brilliant application of scientific logic in exposing the futility of attempting to prognosticate the future with inadequate tools. It takes a measure of moral courage to expose fellow academics as morally bankrupt infants bumbling about in a dank universe of deception. Bravo!
On another note: Pat Frank, are you the Pat Frank of Alas Babylon fame? It was a high school favorite.
Seven kindergarteners are sitting in a circle in the Humans Control the Climate School.
Teacher says, “Let’s vote on what color to paint our classroom!”
“Okay!”
“Everybody turn around. Take your brushes and choose one color from your paint sets beside you to paint on your piece of paper. We’ll see which color is the most popular.”
[7 chubby hands firmly grip brushes…. 7 furrowed brows choose a color….. 7 paint brushes going busily…..]
“Okay. Turn around. Hold up your papers to show how you voted.”
[Seven pieces of paper held aloft sporting respectively: Green, Red, Yellow, Blue, Violet, Orange, and Magenta]
Teacher: Well, isn’t that something. Hm. I don’t know what to do. Hm. [brightens] Oh, I know! We’ll just take ALL those colors and mix them together!
New Paint Color: “Kindergarten Ensemble” — looks great, huh?
(Parent muttering to child as they leave classroom together at the end of the day: Katie, why did your teacher paint your room the color of Gerber’s baby food prunes?)
Or perhaps it looks more like the frayed end of a rug? The frayed end of a rope still tends to have an average that may approximate the center of the rope. But the end of a frayed rug is so broad and the fraying can be so random, it seems less likely that a center point could be approximated.
Or perhaps it looks like someone randomly threw down multiple frayed ropes, each with their own error range, which could be the frayed ends. Since there is no order applied to the multiple frayed ropes that are “dropped” in place, there is no expectation that there could be a reasonable center-point that has any meaning.
Bottom line… very well written. A much needed contribution to the discussion of the irrelevance of unsupportable climate models.
Wonderfully liberating! (As Truth tends to be, if you can face it.)
I laughed and greatly enjoyed how Mr. Brown brought in the “butterfly effect.” It truly does frustrate us, in the arts as well as the sciences, and led the poet Burns to conclude that the best laid plans of mice and men often go awry.
However do not squash the butterfly. It is due to the butterfly effect, and the fact that human beings are chaotic systems, that a loser, who has always been a loser, and who everyone knows is a loser, and everyone predicts will always be a loser, and predicts will certainly come to a bad end, baffles everyone including himself, because he wins.
Dr. Brown, that was simply superb science. Challenging beyond what I think I can even recognize. But summed up well at many points within.
Thinking about it, sitting back and re-reading chunks of it, I continue to be assailed by a rather stunningly simple question…..
When was it, exactly, that we, H. sapiens sapiens, the wise, wise one, abandoned reason?
We seem to have only had it for such a short time………
OssQss, thanks for answering my impertinent question. LOL.
Hey, wasn’t that neat how the guy who posted just above my kindergartener analogy talked about “morally bankrupt infants bumbling about”?!
Infantile science.
Re: possible acronym meanings…… How about Outstanding, Super-Smart, Quality, Super Scientist?
The proof was there all along my friends. Our error lies in attempting to compare the mean of the models with empirical data when the deviation between the models proves that most must be wrong. Step into the light, it feels good.
“… perhaps it looks more like the frayed end of a rug… .” [Ben at 8:15PM today]
Nice insight!
How about the frayed edge of the sleeve of “The Emperor’s New Clothes” — they were never really anything but fantasy science all along… .
Caleb says:
June 18, 2013 at 8:16 pm
Wonderfully liberating! (As Truth tends to be, if you can face it.)
I laughed and greatly enjoyed how Mr. Brown brought in the “butterfly effect.” It truly does frustrate us, in the arts as well as the sciences, and led the poet Burns to conclude that the best laid plans of mice and men often go awry.
However do not squash the butterfly. It is due to the butterfly effect, and the fact that human beings are chaotic systems, that a loser, who has always been a loser, and who everyone knows is a loser, and everyone predicts will always be a loser, and predicts will certainly come to a bad end, baffles everyone including himself, because he wins.
=======================================================================
Butterfly effect 🙂
Thanks, that brought back memories !
Well said, sir. I heartily concur. But there is one difference. In the world of physics, changing directions is often a matter of re-writing one board full of equations. The world of modelling has a bit more mass to redirect. And if the right class of adjustments weren’t designed in from the start, there can be a WHOLE lot more of that inertia.
There are some modelling problems you can solve with more speed, but this doesn’t appear to be one of them. In that case, a programmer can be working very, very hard and not really accomplishing what needs to be accomplished. And I’m sorry, guys, but your customer isn’t actually going to care how hard you worked, if the system doesn’t do its job.
And yes, if the modelers are wondering, I do have some idea how much work a billion-dollar system can take. I work there.
This post covers a couple of different questions in a somewhat entangled fashion that for clarity should be treated separately.
The first question is: for how long and by how much does a single model have to diverge from reality in order to be rejected. All such criteria are arbitrary and conventional; p < 0.05 is not a particularly strong one, but it can be considered a sufficient motivation to look for a better model.
The second question: Is it, or is it not, meaningful to calculate an average and standard deviation for all of the models? I certainly agree that averaging model projections does not have the same rationale as averaging repeat measurements, which is a reasonable approach for reducing and controlling experimental error. The average and spread calculated from the models do not give any superior approximation to reality. Instead, they simply describe the general trend and the extent of disagreement between the models.
Finally, the question is raised to what extent the models are based on physics, and it is suggested that their progressive divergence in time should not be observed if they indeed were physics-based.
I agree that the models are not sufficiently based on physics, simply because our understanding of the physical principles that govern the climate system is too incomplete.
However, I do not agree that we can conclude this from the divergence of the projections alone. If we use the analogy introduced by the author, namely physical models of electron orbitals in atoms, would we not expect increasing divergence as we progress from small and simple atoms to larger ones? As more electrons are added to an atom, the effects of neglected or differently approximated terms will compound one another. Similarly, different approximations in climate models will drive increasing divergence through successive iterations of model states.
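A toy illustration of that compounding (all numbers invented; this is only a caricature of an integration, not a climate model): two runs that differ slightly in a single approximated coefficient drift steadily apart as the steps accumulate.

```python
# Two slightly different approximations of the same term, integrated forward.
forcing = 0.04                       # hypothetical constant forcing per step
sens_a, sens_b = 0.50, 0.55          # two slightly different approximations
temp_a = temp_b = 0.0
for step in range(1, 101):
    temp_a += forcing * sens_a
    temp_b += forcing * sens_b
    if step in (1, 10, 100):
        print(f"step {step:3d}: projections differ by {abs(temp_a - temp_b):.3f}")
```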
@Ossqss “…taken (a)back when I saw this video…” You really don’t understand???
Guillory speaks 100% truth!
I’m 70 and haven’t seen much economic change in the black community because they continue to be enslaved by big gov’t handouts. (The RINO’s do little better.)
Wake-up, that is exactly what big gov’t wants to do to the rest of us w/ this CO2/AGW scam.
And RGB; absolutely brilliant!
BC
This brings me back down memory lane to about 5-6 years ago when I first started looking into AGW. (IPCC v4 had just been released, I remember.) I had just read a book I’d picked up used and was intrigued to see whether all of the things Crichton was saying were true. (I had always been sceptical that scientists could use such a trace gas as the reason for the warming planet… but had never looked into the science itself.) The first spot I hit was realclimate (as most people will remember) and, after posting and getting my questions answered with half-witted answers, I went to the horse’s mouth and started reading the IPCC.
Then I saw that they averaged different models together. I was so shocked that morons would even attempt to do this that from that point on I couldn’t even begin to hazard a guess as to why they even bothered with error margins and any of the other fluff. There was no longer any reason to even research it any more, because if these “scientists” were actually competent they would never have averaged different model results together. I could explain this, but I think the article above does it much better than I could have. In any event… I figured perhaps this was some huge mistake and that the “ensemble mean” was perhaps something else. I asked at realclimate.
I said something to this effect: “That is like trying to average the size of apples and oranges and grapes and trying to find an average fruit size. The answer you get is so pointless that you are no longer showing a ‘possible Earth’ but an Earth with unicorns and dinosaurs still living on it.”
Oh sure, realclimate answered me after I had pointed out that mistake… but in this instance, they said, you can average together models because it makes sense because of the physics behind AGW. (I am sure there are lots of us who got such an intellectually dead answer over there.) The answer was so appalling that yes, I was bad and called them certain names like I did above here and yes, got banned.
Needless to say, they drove me to sceptical sites like WUWT and others, because if someone is not going to answer the questions in an intelligent manner and then also bans me, why, they must be full of it. Much to my dismay, I found out that these same people are some of the leading researchers in this area and that the IPCC work is seen as “the gold standard.”
Of course, once you find that one mistake in the science (there are tons, of course) it is so easy to go down the list and find a good portion of them. They are all based on assumptions that are turned around and shouted as fact. Positive feed-backs, yup. Averaging together models can get you a useful answer… assumed and shouted to the rooftops. The point of no return is the second you assume something, forget you assumed it, and then call everyone else names who ask questions. But anyway, thanks for the trip down memory lane… I too was dismayed several years ago when I found out they actually averaged together different model results, kind of similar to a first grader who might do it because they did not know better. I still to this day can not believe that these scientists would sign their name to such a fluff piece as the IPCC with such mistakes!
I think this problem is different from modeling the spectrum of carbon in that your examples of models involve the piece-wise inclusion of refinements that are built on demonstrated physics. In the climate model game it is an issue of including more or fewer assumptions, all of which are loosely supposed to be supported by, but not demonstrated to be, physics. And since refining with assumptions may or may not produce a better model, it is hard to know, a priori, which the better models will be. Empirically, the outliers can claim that their models are still more correct and that a longer time interval will demonstrate this. I think that the bottom line, at this point, is that we have no business producing climate models at all because we cannot do the complexity of physics required to make such modeling meaningful.
Excellent that you made a post out of the answer, Anthony. I have been saying the exact same things ever since I laid my eyes on the AR4 spaghetti graphs, with particular emphasis given to the non-linearity of the underlying equations assumed in the gridded models, and the chaotic nature of climate, in any talks I gave to students.
sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time.
Actually this is the crux of the problem when science is overfunded because it suits the politics of the time. Professors have graduate students, graduate students eventually become professors, and they tend to keep fiddling with their thesis subject, creating what for each are brilliant variations on the line of research. What rgbatduke is saying should have been said by the peer reviewers of the first linear projection to the future in grid box models, and at the first indication that different models give such different results. That is what peer review is for: to catch errors, misunderstandings and replications. In a system that developed into back-slapping and approval, so that the research pie was distributed among an inner group, the results were inevitable. I am sure that if weapons-related physics research is declassified in the next century the studies will show a similar eating-at-the-trough trend with little real physics, all because of bad or non-existent peer review, since normal physicists are out of the circuit. A great waste of public money, which could be excused only for reasons of national defense.
My late father used to say it takes a lot of manure to grow a rose. I have used this saying to defend wasting money on unsuccessful research, but funding should be discontinued once it has been proven unsuccessful. This bunch has corrupted the peer review method, and the very meaning of what successful means in science, in order to keep increasing their funding, and the politicians who love taxes close the circle. A lot more money goes around with carbon markets than is distributed to the priesthood of climate change (at least as they planned carbon markets).
A rather succinct comment. And if that is how difficult it is to model the properties of a single atom, then it is unlikely to be any easier to model the properties of the entire planet’s atmosphere/hydrosphere with any predictive certainty. In fact it will probably be impossible, particularly as many of the variables are still unknown or unquantifiable, and many of them have such a long periodicity that they are out of the range of our historically momentary recent measurements. The problem is just too complex. Apart from which, the models will only end up reflecting what the modeller put in because that was what was thought to be most important. They certainly won’t reflect what was left out because it was unknown or inadvisedly considered not to matter.
Anyone reading this thread who hasn’t read all of RGB’s comments in the original thread should do so now. They add much more to the discussion.
My view has always been that trying to model the climate is a complete waste of time and money because you can’t model chaos.
It’s been a “lovely little earner” for the huge AGW industry over the years though, and will continue to be so until enough people come round to realizing they have been had and start threatening to vote our countless idiotic politicians out of their jobs.
It’s about time the statisticians of the world got tired of being goosed by Climate Scientists, and stood up, thereby depriving them of a target.
I think that wraps it up. We should start hitting politicians over the head with this very post, and demand that most of those models get defunded and mothballed ASAP – also that the politicians stop wasting taxpayer money on poor science. Just how much does it take to burst the Greenie CAGW bubble?
Oh dear, now I feel really worried. You see, I had realised that the models were wrong. But, you see, if you take two wrong models and average them, then each is only half wrong! Now isn’t that an improvement? When you have 70 or so wrong models and you average them, the wrongness is so small, they must be nearly right — isn’t that the case?
Now I’m told that you can’t average them at all…
Liars, damn liars, and then there are statisticians. . . . I mean AGW climate scientologists.
Statistics are an objective tool to be used by good scientists/statisticians to help discern truth. Our postmodern academia has almost universally adopted relativistic humanism as its main world view and consequently there is no such thing as absolute truth – everyone’s “model” has to have some truth in it, therefore the “average” of all of them is where the ‘real’ truth lies. This of course conflicts with the main objective of science which is to discover the objective truth about things in the world. For a die-hard AGW convert, when world view conflicts with scientific objective, world view wins.
Hence you have these scientologists making up sciencey sounding words and phrases that get “peer review” published that are “relatively” right, and still objectively and absolutely FALSE. AGW has become a religion, not a science. . . . so your rant about the correct use of statistics is going to fall on deaf ears to the faithful.
But those of us who are still seeking objective truth hear you. We agree. Well said.
I had to apply the full force of my GED diploma to comprehending this but alas, even that was insufficient given the depth of the subject matter. Nonetheless, the basic message was quite clear even to the likes of me, which is precisely what draws me back to WUWT daily: the understanding that one can gain if only one is willing to read.
Thanks Dr. Brown.
Liar Stokes, in the fashion of a paid troll, continues to recite his long-debunked lie that it was I who was responsible for assembling the graph of an ensemble of spaghetti-graph outputs that was in fact assembled by the IPCC at Fig. 11.33a of its Fifth Assessment Report. As previously explained, in my own graph I merely represented the interval of projections encompassed by the spaghetti graph and added a line to represent the IPCC’s central projection.
Professor Brown states that he was specifically condemning the assembly of an ensemble of models’ outputs. I was not, repeat not, responsible for that assembly: the modelers and the IPCC were. It was they – not I – whom Professor Brown was criticizing. Indeed, he states plainly that I have accurately represented the IPCC’s graph.
Gee, I love it when science makes common sense. And so plainly written that I can understand it. Give this guy two ears and a tail.
Eugene WR Gallun