This comment is from rgbatduke, who is Robert G. Brown of the Duke University Physics Department, posting on the No significant warming for 17 years 4 months thread. It has gained quite a bit of attention because it speaks clearly to the truth. So that all readers can benefit, I’m elevating it to a full post.
Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!
This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.
Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!
Say what?
This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again. One cannot generate an ensemble of independent and identically distributed models that have different code. One might, possibly, generate a single model that generates an ensemble of predictions by using uniform deviates (random numbers) to seed “noise” (representing uncertainty) in the inputs.
What I’m trying to say is that the variance and mean of the “ensemble” of models are completely meaningless statistically, because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, and there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate unbiased anything is through the use of e.g. dice or other objectively random instruments).
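A minimal numerical sketch of this point, with invented numbers (nothing here comes from any actual GCM): an ensemble of runs from a single model, differing only by random input noise, really does scatter around that model's own prediction, whereas a collection of structurally different models, each carrying its own systematic bias, produces a "mean" and "spread" that measure nothing but the disagreement between codes.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 1.0  # the (in practice unknown) quantity being predicted

# Case 1: one model, many runs that differ only by random input noise.
# Here the spread really is random error around that model's own prediction.
runs = truth + rng.normal(0.0, 0.2, size=1000)
print("single-model ensemble mean:", round(runs.mean(), 3))     # ~1.0

# Case 2: several different models, each with its own systematic bias
# (omitted physics, different parameterisations). The biases do not cancel,
# so the multi-model mean inherits whatever the biases happen to average to,
# and the spread measures disagreement between codes, not sampling error.
model_biases = np.array([+0.8, +0.6, +0.5, +0.1, +0.9])         # hypothetical
models = truth + model_biases + rng.normal(0.0, 0.05, size=5)
print("multi-model 'ensemble' mean:", round(models.mean(), 3))  # ~1.6, nowhere near 1.0
print("multi-model 'spread':", round(models.std(), 3))          # inter-model disagreement only
```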
So why buy into this nonsense by doing linear fits to a function — global temperature — that has never in its entire history been linear, although of course it has always been approximately smooth so one can always do a Taylor series expansion in some sufficiently small interval and get a linear term that — by the nature of Taylor series fits to nonlinear functions — is guaranteed to fail if extrapolated as higher order nonlinear terms kick in and ultimately dominate? Why even pay lip service to the notion that R² or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning? It has none.
Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.
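To make that Taylor-series point concrete, here is a small illustration of my own (no climate data involved, just a generic smooth function): a straight line fit over a short interval looks excellent locally, then fails badly once the higher-order terms it ignores begin to dominate.

```python
import numpy as np

x_fit = np.linspace(0.0, 1.0, 50)   # the "sufficiently small interval"
y_fit = np.exp(x_fit)                # any smooth, decidedly nonlinear function

slope, intercept = np.polyfit(x_fit, y_fit, 1)   # ordinary least-squares linear fit

for x in (0.5, 1.0, 3.0, 5.0):
    linear = slope * x + intercept
    actual = np.exp(x)
    print(f"x={x}: linear fit {linear:8.2f}   actual {actual:8.2f}   error {actual - linear:8.2f}")

# Inside the fit window the error is tiny; extrapolated to x = 5 the "projection"
# is off by more than a hundred, purely because the function was never linear.
```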
Let’s invert this process and actually apply statistical analysis to the distribution of model results Re: the claim that they all correctly implement well-known physics. For example, if I attempt to do an a priori computation of the quantum structure of, say, a carbon atom, I might begin by solving a single electron model, treating the electron-electron interaction using the probability distribution from the single electron model to generate a spherically symmetric “density” of electrons around the nucleus, and then performing a self-consistent field theory iteration (resolving the single electron model for the new potential) until it converges. (This is known as the Hartree approximation.)
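For readers who want to see the shape of the procedure just described, here is a deliberately toy sketch of a self-consistent field style loop: guess a "density", build a new answer from it, and repeat until the output reproduces the input. The update map below is a hypothetical scalar fixed-point problem standing in for the real thing, not an actual Hartree calculation.

```python
def scf_iterate(update, guess, tol=1e-10, max_iter=200, mix=0.5):
    """Generic self-consistent loop: feed the current answer back in until it stops changing."""
    current = guess
    for step in range(max_iter):
        proposed = update(current)            # re-solve with the current "density"
        if abs(proposed - current) < tol:     # converged: output reproduces input
            return proposed, step
        # simple linear mixing, a standard trick for stabilising SCF-type loops
        current = mix * proposed + (1 - mix) * current
    raise RuntimeError("did not converge")

# Toy update map standing in for "build potential from density, re-solve,
# extract a new density". Any contraction mapping illustrates the idea.
toy_update = lambda d: 0.5 * (d + 2.0 / d)    # fixed point at sqrt(2)

value, steps = scf_iterate(toy_update, guess=1.0)
print(f"converged to {value:.6f} in {steps} iterations")
```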
Somebody else could say “Wait, this ignores the Pauli exclusion principle” and the requirement that the electron wavefunction be fully antisymmetric. One could then make the (still single electron) model more complicated and construct a Slater determinant to use as a fully antisymmetric representation of the electron wavefunctions, generate the density, and perform the self-consistent field computation to convergence. (This is Hartree-Fock.)
A third party could then note that this still underestimates what is called the “correlation energy” of the system, because treating the electron cloud as a continuous distribution through which electrons move ignores the fact that individual electrons strongly repel and hence do not like to get near one another. Both of the former approaches underestimate the size of the electron hole, and hence they make the atom “too small” and “too tightly bound”. A variety of schemes have been proposed to overcome this problem — using a semi-empirical local density functional being probably the most successful.
A fourth party might then observe that the Universe is really relativistic, and that by ignoring relativity theory and doing a classical computation we introduce an error into all of the above (although it might be included in the semi-empirical LDF approach heuristically).
In the end, one might well have an “ensemble” of models, all of which are based on physics. In fact, the differences are also based on physics — the physics omitted from one try to another, or the means used to approximate and try to include physics we cannot include in a first-principles computation (note how I sneaked a semi-empirical note in with the LDF; although one can derive some density functionals from first principles (e.g. the Thomas-Fermi approximation), they usually don’t do particularly well because they aren’t valid across the full range of densities observed in actual atoms). Note well, doing the precise computation is not an option. We cannot solve the many body atomic state problem in quantum theory exactly any more than we can solve the many body problem exactly in classical theory or the set of open, nonlinear, coupled, damped, driven chaotic Navier-Stokes equations in a non-inertial reference frame that represent the climate system.
Note well that solving for the exact, fully correlated nonlinear many electron wavefunction of the humble carbon atom — or the far more complex Uranium atom — is trivially simple (in computational terms) compared to the climate problem. We can’t compute either one, but we can come a damn sight closer to consistently approximating the solution to the former compared to the latter.
So, should we take the mean of the ensemble of “physics based” models for the quantum electronic structure of atomic carbon and treat it as the best prediction of carbon’s quantum structure? Only if we are very stupid or insane or want to sell something. If you read what I said carefully (and you may not have — eyes tend to glaze over when one reviews a year or so of graduate quantum theory applied to electronics in a few paragraphs, even though I left out perturbation theory, Feynman diagrams, and ever so much more :-) you will note that I cheated — I sneaked in a semi-empirical method.
Which of these is going to be the winner? LDF, of course. Why? Because the parameters are adjusted to give the best fit to the actual empirical spectrum of Carbon. All of the others are going to underestimate the correlation hole, and their errors will be systematically deviant from the correct spectrum. Their mean will be systematically deviant, and by weighting Hartree (the dumbest reasonable “physics based approach”) the same as LDF in the “ensemble” average, you guarantee that the error in this “mean” will be significant.
Suppose one did not know (as, at one time, we did not know) which of the models gave the best result. Suppose that nobody had actually measured the spectrum of Carbon, so its empirical quantum structure was unknown. Would the ensemble mean be reasonable then? Of course not. I presented the models in the way physics itself predicts improvement — adding back details that ought to be important that are omitted in Hartree. One cannot be certain that adding back these details will actually improve things, by the way, because it is always possible that the corrections are not monotonic (and eventually, at higher orders in perturbation theory, they most certainly are not!) Still, nobody would pretend that the average of a theory with an improved theory is “likely” to be better than the improved theory itself, because that would make no sense. Nor would anyone claim that diagrammatic perturbation theory results (for which there is a clear a priori derived justification) are necessarily going to beat semi-heuristic methods like LDF because in fact they often do not.
What one would do in the real world is measure the spectrum of Carbon, compare it to the predictions of the models, and then hand out the ribbons to the winners! Not the other way around. And since none of the winners is going to be exact — indeed, for decades and decades of work, none of the winners was even particularly close to observed/measured spectra in spite of using supercomputers (admittedly, supercomputers that were slower than your cell phone is today) to do the computations — one would then return to the drawing board and code entry console to try to do better.
Can we apply this sort of thoughtful reasoning to the spaghetti snarl of GCMs and their highly divergent results? You bet we can! First of all, we could stop pretending that “ensemble” mean and variance have any meaning whatsoever by not computing them. Why compute a number that has no meaning? Second, we could take the actual climate record from some “epoch starting point” — one that does not matter in the long run, and we’ll have to continue the comparison for the long run because in any short run from any starting point noise of a variety of sorts will obscure systematic errors — and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.
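A minimal sketch, with entirely made-up numbers, of the sorting procedure just described: compare every model hindcast to the observed record from a common starting point, rank by error, keep the best handful, and mothball the rest.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1990, 2014)
observed = 0.01 * (years - 1990) + rng.normal(0, 0.05, size=years.size)  # hypothetical anomaly series

# Hypothetical model hindcasts: each has its own trend and its own noise.
model_runs = {
    f"model_{k:02d}": 0.01 * (1 + k / 10) * (years - 1990) + rng.normal(0, 0.05, size=years.size)
    for k in range(20)
}

def rmse(a, b):
    """Root-mean-square error between a model hindcast and the observations."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

ranked = sorted(model_runs, key=lambda name: rmse(model_runs[name], observed))
keep, failed = ranked[:5], ranked[5:]
print("kept for further study:", keep)
print("mothballed:", len(failed), "models")
```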
Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.
Then comes the hard part. Waiting. The climate is not as simple as a Carbon atom. The latter’s spectrum never changes, it is a fixed target. The former is never the same. Either one’s dynamical model is never the same and mirrors the variation of reality or one has to conclude that the problem is unsolved and the implementation of the physics is wrong, however “well-known” that physics is. So one has to wait and see if one’s model, adjusted and improved to better fit the past up to the present, actually has any predictive value.
Worst of all, one cannot easily use statistics to determine when or if one’s predictions are failing, because damn, climate is nonlinear, non-Markovian, chaotic, and is apparently influenced in nontrivial ways by a world-sized bucket of competing, occasionally cancelling, poorly understood factors. Soot. Aerosols. GHGs. Clouds. Ice. Decadal oscillations. Defects spun off from the chaotic process that cause global, persistent changes in atmospheric circulation on a local basis (e.g. blocking highs that sit out on the Atlantic for half a year) that have a huge impact on annual or monthly temperatures and rainfall and so on. Orbital factors. Solar factors. Changes in the composition of the troposphere, the stratosphere, the thermosphere. Volcanoes. Land use changes. Algae blooms.
And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because trying to ensemble average a small sample from a chaotic system is so stupid that I cannot begin to describe it. Everything works just fine as long as you average over an interval short enough that you are bound to a given attractor, oscillating away, things look predictable and then — damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.
This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or R-squared derived from an AR5 mean has any meaning. It gives up the high ground (even though one is using it for a good purpose, trying to argue that this “ensemble” fails elementary statistical tests). But statistical testing is a shaky enough theory as it is, open to data dredging and horrendous error alike, and that’s when it really is governed by underlying IID processes (see “Green Jelly Beans Cause Acne”). One cannot naively apply a criterion like rejection if p < 0.05, and all that means under the best of circumstances is that the current observations are improbable given the null hypothesis at 19 to 1. People win and lose bets at this level all the time. One time in 20, in fact. We make a lot of bets!
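A small sketch of that 19-to-1 point, using nothing but synthetic noise: run many tests of a null hypothesis that is true by construction, and roughly one in twenty "rejects" at p < 0.05 anyway (the Green Jelly Beans effect). Nothing here is specific to climate data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_experiments, n_samples = 10000, 30
false_positives = 0
for _ in range(n_experiments):
    # Two samples drawn from the SAME distribution: the null is true by construction.
    a = rng.normal(0, 1, n_samples)
    b = rng.normal(0, 1, n_samples)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print("false positive rate:", false_positives / n_experiments)  # ~0.05, i.e. about 1 in 20
```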
So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth, and why we aren’t using empirical evidence (as it accumulates) to reject failing models and concentrate on the ones that come closest to working, while also not using the models that are obviously not working in any sort of “average” claim for future warming. Maybe they could hire themselves a Bayesian or two and get them to recompute the AR curves, I dunno.
It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.
Of course if one does this, the GCM predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.
rgb
I would love to see a Global Warming Modeller’s Cookbook based on the ensemble recipe of, say, a cake. Start with an egg (i.e. CO2 content), add more eggs as required, then various bits and pieces of sugar, salt, baking powder, flour, milk and vanilla. Stir well, and bake as long as you like. Yummy, and it’s SO different every time.
I’ve been saying all along that models are nothing but constructs engineered to replicate a foregone conclusion. Funny – but also sad and dangerous – that constructs should be preferred over hard evidence.
In re comments about engineers and other practical appliers of science mostly being skeptics – the historical perspective, if you are not blinded by ideology, also will certainly tend you towards skepticism, because the historical records of the Dust Bowl, the Medieval Warm Period, the Roman Climate Optimum and the Hittite-Minoan-Mycenean Warm Period leave no doubt of the lack of correlation, let alone causation, of warming with CO2 – and those records can’t be erased by even the most splendiferous, whoop-de-do models.
I shouldn’t think Mann & Co. have much chance of breaking into the libraries where these records are kept, to destroy them.
Mosher, where art thou?
Reminds me of the claims that we get a better picture of what the planet’s temperature is by just adding more readings.
Averaging together a bunch of thermometers of unknown provenance and with undocumented quality and control issues does not create more accuracy, it actually creates more uncertainty.
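An illustrative sketch of that point, with invented numbers: averaging many readings shrinks the random part of the error, but any shared systematic bias (siting, instrument changes, undocumented adjustments) passes through the average untouched.

```python
import numpy as np

rng = np.random.default_rng(3)
true_temp = 15.0
n_stations = 10000

random_error = rng.normal(0.0, 0.5, n_stations)   # averages toward zero as n grows
systematic_bias = 0.3                              # e.g. a shared siting/instrument bias
readings = true_temp + systematic_bias + random_error

print("mean of readings:", round(readings.mean(), 3))  # ~15.3, not 15.0
# The random part has averaged away; the undocumented bias has not.
```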
Nick Stokes,
Either the Climate Models are junk or they are not.
There is far more evidence they are junk than ever existed to support AGW.
Do you believe the climate models are junk? Yes or no?
And will you ever run out of lipstick?
Because at some point even you must acknowledge the pig is damn ugly.
Precisely so. There is no validity to having the original Scenarios still present in AR5 despite the last 25 years of observation not matching most of the Scenarios. All but the lowest have been invalidated in the first 25 years of their model run.
But your comment that the physics is not consistent, that the variations in Scenarios are more than just the result of the addition of noise, surprises me: the spaghetti graphs didn’t look like they “dolphined”, i.e. bounced all over the place. They looked more like they had different amplitude swings due to random factors but with a consistent upward climb indicative of similar primary parameters with different scalar values. If the physics is not consistent in the ensemble, then we have a very serious problem: only similar fundamentals should be grouped, and the acceptance of diverse physics has to be accompanied by a recognition that the science is not settled and the outcome not certain TO THE POINT THAT NATURE MAY BE RESPONSIBLE, NOT MAN.
Still, all this discussion does not mean that the temperatures can’t rise to scary levels of >4C by 2100, as the IPCC narrative would have it, IF:
a) the physics as modelled is identified as “wrong”, and so needs to be corrected,
b) the contributing negative forcing parameters have, in part or in whole, more power and so need to be corrected, or
c) the climate is sufficiently chaotic, non-linear or threshold controlled to suddenly shift, which is a paradigm shift of climate narrative which needs to be corrected.
If any of these three situations is claimed, happy today can be made miserable tomorrow – but at a cost of “settled” and “certain”.
At any rate, I agree fully that all climate Scenarios need to be restarted at 2013, with only those that include the recent past displayed. OR explanations given as to how we get from “here” to “there” with the other Scenarios.
Except one: that going forward we have a Scenario that tracks the last 25 years AND takes us to other places (than the current trend) in 2100.
Sometimes we are a little hard on the climate modelers. They are doing the best they can do. It isn’t the models per se that are the problem. It is how they are being used by the propagandists at the IPCC and environmental groups. An analogy may be appropriate.
Say we wanted to model a horse. At the current time we have the technology to properly model a tail wagging. We simply do not have enough information (knowledge) to model a complete horse. However, idealistic groups wish to claim that horses flop around and swat flies. They show the models to support their claims. The fact that the modelers only could model the tail gets lost.
Of course, the modelers really should be standing up and complaining. Maybe some of them are and they are being silenced. The basic problem of insufficient knowledge is not getting the attention it should be getting. It is up to some honest scientists like Dr. Brown, Dr. Spencer and a small group of others to bring this to the attention of the world. Where are the rest of the scientists?
I believe the chaos argument is fraught with peril (even though I have used it in the past). It is completely true that chaotic systems can have periods of non-chaotic behavior. This is true even in weather. Places like the tropics often have long periods of time where the weather from one day to the next varies little. It is only occasionally interrupted by a tropical storm. The same can be argued for climate only with longer time periods.
Areas around attractors can be quite stable. If we are not experiencing any of the forces that drive a system from that attractor state then it should be possible to predict future times. I would argue that the current cyclic ocean patterns have been driving our climate for over 100 years (possibly longer) with a slight underlying warming trend (possibly regression to the mean). There is really little chaotic behavior to be seen from a climate perspective.
This doesn’t mean some force might not be right around the corner throwing everything into a tizzy. However, it shouldn’t stop us from trying to understand what might happen if those chaotic forces don’t appear.
Monkeying with a Super Computer keyboard, does not a scientist make!
Richard M, the climate modelers I’ve encountered have defended their models and have defended the way they are being used. It’s not just the IPCC and environmental groups. It’s rank-and-file climate modelers. They appear to have removed their models from direct evaluation using standard physical methods, and seem to prefer things that way.
Stephen Richards! Wow. Thanks, so much, for your generous and kind words. They were especially encouraging on a thread such as this one, for, while I understood the essence of Brown’s fine comment/post, I could only understand about half the content.
***********************************************
Below are some (couldn’t include them all, there were so MANY great comments above — and likely more when I refresh this page!) WUWT Commenters’ HIGHLIGHTS (you are a wonderfully witty bunch!)
************************************************
“These go to eleven.”
— McComber Boy (today, 5:57AM)– that Spinal Tap clip was HILARIOUS. Thanks!
************************
“… like a crowd of drunks with arms on each others shoulders, …
we’re all going in this direction because…..”
[Mike M. 6:10AM June 19, 2013]
We CAN! lol (“Yes, we can!” — barf)
**********************************
“… if you shoot a circular pattern of sixes it does not average out to a 10.”
[Jim K. 9:51AM June 19, 2013]
LOL. Waahll, Ahhlll be. Waddaya know. [:)]
******************************************************
For a good laugh see: “Global Warming Modeller’s Cookbook” (by Jim F. at 10:03AM today)
*************************************************
“‘… E’s passed on! This parrot is no more! He has ceased to be! ‘E’s expired and gone to meet ‘is maker! ‘E’s a stiff! Bereft of life, ‘e rests in peace! If you hadn’t nailed ‘im to the perch ‘e’d be pushing up the daisies! … THIS IS AN EX-PARROT!!…..” [Jimbo quoting Monty Python]
LOL. You always find the greatest stuff, Jimbo.
******************************************************
In Sum (this bears repetition and emphasis, as do 90% of the above comments — can’t acknowledge ALL the great insights, though!):
“All the climate models were wrong. Every one of them. [Click in chart to embiggen]
“You cannot average a lot of wrong models together and get a correct answer.”
[D. B. Stealey, 6:14PM, June 18, 2013]
As I understand it, it is like rolling a die multiple times and then averaging the results. If you roll it often enough you get an average of 3.5. Then betting that the die “on average” would come up with 3 or 4. Casino owners love such statistically challenged gamblers.
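A tiny sketch of that dice analogy: the long-run average of a fair die really is 3.5, yet no individual roll ever equals 3.5, so treating the average as if it were an outcome you can bet on is exactly the category error described.

```python
import random

rolls = [random.randint(1, 6) for _ in range(100000)]
print("average of rolls:", sum(rolls) / len(rolls))   # ~3.5
print("rolls equal to 3.5:", rolls.count(3.5))         # 0, always
```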
Ed Davey’s at it again. Apparently we are all “crackpots”.
http://www.telegraph.co.uk/earth/energy/10129372/Ed-Davey-Climate-change-deniers-are-crackpots.html
Please stop attacking Nick Stokes. He doesn’t have to come here but he does.
He is one of the few of his opinion who can put a polite conversation across and is willing to.
I don’t agree with him over the significance of everyone drawing mean lines on ensembles of multi-model graphs; I think everyone who does it is wrong.
But I applaud his courage and willingness to debate when he comes here and points out that it has become the climate industry norm. A bad norm, I think, but he is right when he says it is the norm.
That’s why I look at the impact on AR5 and not the politics of those who use such graphs in presentations.
Excellent. I second many comments above. Wonderful, liberating, clear judgement, well said. And it had to be said. It was long overdue.
And again, why has this not already been done (eliminating the models which show unreasonable results)?
Which begs the question: are the persons doing this modelling interested in maintaining the climate hysteria or interested in scientific research?
http://www.drroyspencer.com/2013/06/still-epic-fail-73-climate-models-vs-measurements-running-5-year-means/
“If the observations in the above graph were on the UPPER (warm) side of the models, do you really believe the modelers would not be falling all over themselves to see how much additional surface warming they could get their models to produce?”
Mike MacCracken, the Director of the Climate Institute, shares his knowledge (or lack thereof) on the Yahoo climatesceptic group…
“The emerging answer seems to be that a cold Arctic (and so strong west to east jet stream) is favored when there is a need to transport a lot of heat to the Arctic—that being the relatively more effective circulation pattern. With a warmer Arctic, the jet stream can meander more and have somewhat bigger (that is latitudinally extended) waves, so it gets warm moist air further poleward in the ridges and cold air further equatorward in the troughs. In addition, the wavier the jet stream, the slower moving seem to be the waves, so one can get stuck in anomalous patterns for a bit longer (so get wetter, or drier, as the case may be).
There was an interesting set of talks on this by Stu Ostro, senior meteorologist for The Weather Channel, and Jennifer Francis, professor at Rutgers, on Climate Desk Live (see http://climatedesk.org/category/climate-desk-live/). Basically, such possibilities are now getting more attention as what is happening is looked at more closely and before things get averaged away in taking the few decade averages to get at the changes in the mean climate—so, in essence, by looking at the behavior of higher moments of weather statistics than the long-term average. And doing this is, in some ways, requiring a relook at what has been some traditional wisdom gleaned from looking at changes in the long-term average.
Mike MacCracken”
That MacCracken would take the half-baked, undemonstrated inference by Francis as science is pathetic. GIGO!
hm, just read about the 97% number in the thread and it struck me: looks like 97% of the models are junk…
I think this is a brilliant article. While I now know that the mean has no statistical significance, I think there is no doubt it has acquired a political significance over the years. And to the degree that the climate deviates away from this mean it hurts the cause of the climate change activists. I think all of us who follow this should realize that being right on the statistics and science does not always equate to being on the winning side politically.
StephenP says:
June 19, 2013 at 12:08 pm
Ed Davey’s at it again. Apparently we are all “crackpots”.
========================================
“”and while many accept we will see periods when warming temporarily plateaus, all the scientific evidence is in one direction””
====================================
He can admit to this, but not admit the opposite……..when cooling temporarily plateaus
http://www.foresight.org/nanodot/wp-content/uploads/2009/12/histo3.png
angech says (June 19, 2013 at 7:21 am): “The chance of the earth warming up from year to year is 50.05 percent. Why? Because the current long range view shows that we are still very slowly warming over the last 20,000 years.”
The Holocene began about 11,500 years ago with the end of the latest ice age. Since the Holocene Optimum about 8,000 years ago, proxy temps have generally trended down. So you could say we’re not just “recovering” from the Little Ice Age, we’re also recovering from the “Holocene Minimum”. Personally, I hope the recovery continues past the current temp plateau. 🙂
angech says:
June 19, 2013 at 7:21 am
…. All the current models are biased to warming ie self fulfilling models.
The chance of the earth warming up from year to year is 50.05 percent. Why? Because the current long range view shows that we are still very slowly warming over the last 20,000 years….
>>>>>>>>>>>>>>>>>>>>>>>
That is another one of the FALSE ASSumptions. The Earth is now in a long term cooling mode but that does not fit the political agenda.
10,000 yrs GISP (Greenland Ice Core) graph – data from Richard B. Alley of Penn State, who was elected to the National Academy of Sciences, chaired the National Research Council on Abrupt Climate Change for well over a decade, and in 1999 was invited to testify about climate change by Vice President Al Gore. In 2002, the NAS (Alley chair) published the book “Abrupt Climate Change”.
140,000 yrs Vostok graph (present time on the left) – data source NOAA and Petit et al. 1999
Graph of the last four interglacials, Vostok (present time on the left) – data source Petit et al. 1999
NH solar energy overlaid on the Greenland and Vostok Ice core data from John Kehr’s post NH Summer Energy: The Leading Indicator “Since the peak summer energy levels in the NH started dropping 9,000 years ago, the NH has started cooling….That the NH has been cooling for the past 6,000 years has found new supporting evidence in a recent article (Jakobsson, 2010) …”
John points out more evidence from another paper in his post Norway Experiencing Greatest Glacial Activity in the Past 1,000 Years, and a newer paper in his post Himalaya Glaciers are Growing.
This comment of John’s is the take-home:
The Climate is cooling even if the Weather warms over the short term.
As far as I can tell from the geologic evidence we have been darn lucky the temperature has been as mild and as even as it has been since ‘Abrupt Climate Change’ is part of the geologic history of the earth.
“tendencies of abrupt onset and great persistence” sure sounds like Dr. Brown’s strange attractors.
More on Strange Attractors
http://www.stsci.edu/~lbradley/seminar/attractors.html
Latitude says:
June 19, 2013 at 6:55 am “thank you……over and out”.
Compare it with a multiple-choice exam. If you don’t know anything of the subject, by pure chance you may get some items correct. But you fail because of your total score. What should we think of your good-luck achievements? Something special with a few percent correct answers? Tell it to your teacher and he will say that you failed. This happens to everybody. Why should we be kind to the climate modellers?
Alan D McIntire says: June 19, 2013 at 5:25 am
….Robert G. Brown uses a quantum mechanics analogy to make his point. The vast majority of us have no knowledge of quantum mechanics nor do we have any way to make meaningful measurements in the field. In contrast, we have all spent a lifetime experiencing climate, so we all have at least a rudimentary knowledge of climate….
>>>>>>>>>>>>>>>>
I have no problem with Dr. Brown’s use of a quantum mechanics analogy. It makes me go out and learn something new. Dr. Brown was careful to give enough information that a layman could do a search for more information if he became confused but his explanation was good enough that you could follow the intent of his explanation with just a sketchy knowledge of physics.
I also think the analogy was very good because you are talking about the physics used to describe a system that is less complex than climate but one that is politically neutral (medicine is not). The other positive point about Dr. Brown’s analogy is it showed just how complicated the physics gets on a ‘Simple Atom’ and therefore emphasizes how much more complex the climate is and how idiotic it is to think we can use these models to say we are looking at CAGW.
…….
John Archer says
“…Being a layman confers no free pass in the responsibility stakes and an appeal to argument from authority should be the very last resort for the thinking man. Always do your own as much as you can. ”
John is correct. We now know we can not rely on the MSM to give us anything but propaganda, so if we do not want to be a Conman’s Mark or a Politician’s Patsy we have to do our own research and thinking. (Now if only we had someone to vote FOR…) So “The Flying Spaghetti Monster” has been grounded. 8-)
Why does what they did with the spaghetti models remind me of “Mike’s Nature Trick”?
If one graph starts to go south on you, just graft in another!
Nick Stokes says:
June 18, 2013 at 7:00 pm
Alec Rawls says: June 18, 2013 at 6:35 pm
“Nick Stokes asks what scientists are talking about ensemble means. AR5 is packed to gills with these references.”
But are they means of a controlled collection of runs from the same program? That’s different. I’m just asking for something really basic here. What are we talking about? Context? Where? Who? What did they say?
Nick, this response is so disingenuous it should be embarrassing. Simply running a web search for the term “climate model ensemble means” turns up numerous examples of its use, and none of the top-listed sites and papers where it occurs is a skeptic site or critical paper. Tebaldi (2007) for instance in Ensemble of Climate Models, though she does note that the models are not independent, does state the “model mean is better than a single model,” though what it might be better for is debatable. Others suggest weighting individual models by their proximity to reality, in effect a “weighted mean” approach, since no results are discarded as irrelevant.
Really, disagreeing with any scientist’s argument is the universal right of all other scientists. Why not simply argue the science? What have citations, references or context to do with whether models are meaningful, either individually or in ensemble?
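For what it’s worth, a minimal sketch of the “weighted mean” idea mentioned above, with invented numbers: weight each model by some measure of skill against observations (inverse RMSE is one simple choice among many) rather than giving every model equal weight.

```python
import numpy as np

observed = np.array([0.10, 0.12, 0.15, 0.14, 0.18])            # hypothetical anomaly series
projections = {                                                 # each model: (hindcast, 2100 projection)
    "model_a": (np.array([0.11, 0.13, 0.15, 0.15, 0.19]), 1.2),
    "model_b": (np.array([0.20, 0.28, 0.35, 0.41, 0.50]), 3.8),
    "model_c": (np.array([0.12, 0.15, 0.17, 0.18, 0.22]), 1.6),
}

def rmse(a, b):
    """Root-mean-square error of a hindcast against the observations."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

weights = {name: 1.0 / rmse(hindcast, observed) for name, (hindcast, _) in projections.items()}
total = sum(weights.values())

equal_mean = np.mean([proj for _, proj in projections.values()])
weighted_mean = sum(weights[n] * proj for n, (_, proj) in projections.items()) / total
print("equal-weight mean projection:  ", round(equal_mean, 2))   # pulled up by the worst performer
print("skill-weighted mean projection:", round(weighted_mean, 2))
```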