Real Science Debates Are Not Rare

Guest Post by Dr. Robert G. Brown

The following is an “elevated comment” appearing originally in the comments to “A Rare Debate on the ‘Settled Science’ of Climate Change”, a guest essay by Steve Goreham. It is RG Brown’s reply to the Steven Mosher comment partially quoted at the beginning of the essay. This essay has been lightly edited by occasional WUWT contributor Kip Hansen with the author’s permission and subsequently slightly modified with a postscript by RGB.

rgbatduke

October 3, 2014 at 8:41 am

“…debates are rare because science is not a debate, or more specifically, science does not proceed or advance by verbal debates in front of audiences. You can win a debate and be wrong about the science. Debates prove one thing. Folks who engage in them don’t get it, folks who demand them don’t get it and folks who attend them don’t get it”.

Steven Mosher – comment

Um, Steven [Steven Mosher], it is pretty clear that you’ve never been to a major physics meeting that had a section presenting some unsettled science where the organizers had set up two or more scientists with entirely opposing views to give invited talks and participate in a panel just like the one presented. This isn’t “rare”, it is very nearly standard operating procedure to avoid giving the impression that the organizers are favoring one side or the other of the debate. I have not only attended meetings of this sort, I’ve been one of the two parties directly on the firing line (the topic of discussion was a bit esoteric — whether or not a particular expansion of the Green’s function for the Helmholtz or time-independent Schrodinger equation, which comes with a restriction that one argument must be strictly greater than the other in order for the expansion to converge, could be used to integrate over cells that de facto required the expansion to be used out of order). Sounds a bit, err, “mathy”, right, but would you believe that the debate grew so heated that we were almost (most cordially 🙂 shouting at each other by the end? And not just the primary participants — members of the packed-room audience were up, gesticulating, making pithy observations, validating parts of the math.
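
For readers wondering what kind of series could provoke that, the flavor of the dispute can be seen in the standard spherical-wave expansion of the free-space Helmholtz Green’s function (the textbook form given in, e.g., Jackson; it is offered here only as an illustration of the type of expansion involved, not as the exact series that was under dispute):

$$
\frac{e^{ik|\mathbf{r}-\mathbf{r}'|}}{4\pi|\mathbf{r}-\mathbf{r}'|}
= ik \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell}
  j_\ell(k r_<)\, h_\ell^{(1)}(k r_>)\,
  Y_{\ell m}(\hat{\mathbf{r}})\, Y_{\ell m}^{*}(\hat{\mathbf{r}}'),
\qquad r_< = \min(r, r'), \quad r_> = \max(r, r').
$$

The series is only guaranteed to converge with the smaller radial argument inside the spherical Bessel function and the larger inside the spherical Hankel function; whether one may integrate over cells that force the arguments “out of order” is exactly the kind of question that fills a room and raises voices.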

You’re right that you can “win the debate and be wrong about the science”, however, for two reasons. One is that in science, we profoundly believe that there is an independent objective standard of truth, and that is nature itself, the world around us. We attempt to build a mathematical-conceptual map to describe the real terrain, but (as any general semantician would tell you) the map is not the terrain, it is at best a representation of the terrain, almost certainly an imperfect one. Many of the maps developed in physics are truly excellent. Others are perhaps flawed, but are “good enough” — they might not lead someone to your cufflinks in the upstairs left dresser drawer, but they can at least get someone to your house. Others simply lead you to the wrong house, in the wrong neighborhood, or lead you out into the middle of the desert to die horribly (metaphorically speaking). In the end, scientific truth is determined by correspondence with real-world data — indeed, real-world future data — nothing more, nothing less. There’s a pithy Einstein quote somewhere that makes the point more ably than I can (now there was a debate — one totally unknown patent clerk against an entire scientific establishment vested in Newtonian-Galilean physics 🙂) but I am too lazy to look it up.

Second, human language is often the language of debates and comes with all of the emotionalism and opportunity for logical fallacy inherent in an imprecise, multivalued symbol set. Science, however, is ultimately about mathematics and logic, and requires a kind of logical-mathematical consistency to be a candidate for a possible scientific truth in the sense of correspondence with data. It may be that somebody armed with a dowsing rod can show an extraordinary ability to find your house and your cufflinks when tested some limited number of times with no map at all, but unless they can explain how the dowsing rod works and unless others can replicate their results it doesn’t become anything more than an anecdotal footnote that might — or might not — one day lead to a startling discovery of cuff-linked ley lines with a sound physical basis that fit consistently into a larger schema than we have today. Or it could be that the dowser is a con artist who secretly memorizes a map and whose wife covertly learned where you keep your cufflinks at the hairdresser. Either way, for a theory to be a candidate truth, it cannot contain logical or mathematical contradictions. And even though you would think that this is not really a matter for debate, as mathematics is cut and dried pure (axiomatically contingent) truth — like I said, a room full of theoretical physicists was almost shouting over whether or not the Green’s function expansion could converge out of order, even after I presented both the absolutely clear mathematical argument and direct numerical evidence from a trivial computation that it does not.

Humans become both emotionally and financially attached to their theories, in other words. Emotionally because scientists don’t like being proven wrong any more than anybody else, and are no more noble than the average Joe at admitting it when they are wrong, even after they come to realize in their heart of hearts that it is so. That is, some do and apologize handsomely and actively change their public point of view, but plenty do not — many scientists went to their graves never accepting either the relativistic or quantum revolutions in physics. Financially, we’ve created a world of short-term public funding of science that rewards the short-run winners and punishes — badly — the short-run losers. Grants are typically from 1 to 3 years, and then you have to write all over again. I quit research in physics primarily because I was sick and tired of participating in this rat race — spending almost a quarter of your grant-funded time writing your next grant proposal, with your ass hanging out over a hollow because if you lose your funding your career is likely enough to be over — you have a very few years (tenure or not) to find new funding in a new field before you get moved into a broom closet and end up teaching junk classes (if tenured) or have to leave to proverbially work at Walmart (without tenure).

Since roughly six people in the room where I was presenting were actively using a broken theory to do computations of crystal band structure, my assertion that the theory they were using was broken was not met with the joy one might expect even though the theory I had developed permitted them to do almost the same computation and end up with a systematically and properly convergent result. I was threatening to pull the bread from the mouths of their children, metaphorically speaking (and vice versa!).

At this point, the forces that give rise to this sort of defensive science are thoroughly entrenched. The tenure system that was intended to prevent this sort of thing has been transformed into a money pump for Universities that can no longer survive without the constant influx of soft and indirect cost money farmed every year by their current tenured faculty, especially those in the sciences. Because in most cases that support comes from the federal government, that is to say our taxes, there is constant pressure to keep the research “relevant” to public interests. There is little money to fund research into (say) the formation of fractal crystal patterns by matter that is slowly condensing into a solid (like a snowflake) unless you can argue that your research will result in improved catalysis, or a way of building new nano-materials, or that condensed matter of this sort might form the basis for a new drug, or…

Or today, of course, that by studying this, you will help promote the understanding of the tiny ice crystals that make up clouds, and thereby promote our understanding of a critical part of the water cycle and albedo feedback in Climate Science and thereby do your bit to stave off the coming Climate Apocalypse.

I mean, seriously. Just go to any of the major search engines and enter “climate” along with anything you like as part of the search string. You would be literally amazed at how many disparate branches of utterly disconnected research manage to sneak some sort of climate connection into their proposals, and then (by necessity) into their abstracts and/or paper text. One cannot study poison dart frogs in the Amazon rainforest any more just because they are pretty, or pretty cool, or even because we might find therapeutically useful substances mixed into the chemical poisons that they generate (medical therapy being a Public Good even more powerful than Climate Science, quite frankly, and everything I say here goes double for dubious connections between biology research and medicine) — one has to argue somewhere that Climate Change might be dooming the poor frogs to extinction before we even have a chance to properly explore them for the next cure for cancer. Studying the frogs just because they are damn interesting, knowledge for its own sake? Forget it. Nobody’s buying.

In this sense, Climate Science is the ultimate save. Let’s face it, lots of poison dart frogs probably don’t produce anything we don’t already know about (if only from studying the first few species decades ago) and the odds of finding a really valuable therapy are slender, however much of a patent-producing home run it might be to succeed. The poor biologists who have made frogs their life work need a Plan B. And here Climate is absolutely perfect! Anybody can do an old fashioned data dredge and find some population of frogs that they are studying that is changing, because ecology and the environment is not static. One subpopulation of frogs is thriving — boo, hiss, cannot use you — but another is decreasing! Oh My Gosh! We’ve discovered a subpopulation of frogs that is succumbing to Climate Change! Their next grant is now a sure thing. They are socially relevant. Their grant reviewers will feel ennobled by renewing them, as they will be protecting Poison Dart Frogs from the ravages of a human-caused changing climate by funding further research into precisely how it is human activity that is causing this subpopulation to diminish.

This isn’t in any sense a metaphor, nor is it only poison dart frogs. Think polar bears — the total population is if anything rapidly rising, but one can always find some part of the Arctic where it is diminishing and blame it on the climate. Think coral reefs — many of them are thriving, some of them are not, those that are not may not be thriving for many reasons, some of those reasons may well be human (e.g. dumping vast amounts of sewage into the water that feeds them, agricultural silt overwashing them, or sure — maybe even climate change). But scientists seeking to write grants to study coral reefs have to have some reason in the public interest to be funded to travel all over the world to really amazing locations and spend their workdays doing what many a tourist pays big money to do once in a lifetime — scuba or snorkel over a tropical coral reef. Since there is literally no change to a coral reef that cannot somehow be attributed to a changing environment (because we refuse to believe that things can just change in and of themselves in a chaotic evolution too complex to linearize and reduce to simple causes), climate change is once again the ultimate save, one where they don’t even have to state that it is occurring now, they can just claim to be studying what will happen when eventually it does because everybody knows that the models have long since proven that climate change is inevitable. And Oh My! If they discover that a coral reef is bleaching, some patch of coral growing in a marginal environment somewhere in the world (as opposed to on one of the near infinity of perfectly healthy coral reefs), then their funding is once again ensured for decades, baby-sitting that particular reef and trying to find more like it so that they can assert that the danger to our reefs is growing.

I do not intend to imply by the above that all science is corrupt, or that scientists are in any sense ill-intentioned or evil. Not at all. Most scientists are quite honest, and most of them are reasonably fair in their assessment of facts and doubt. But scientists have to eat, and for better or worse we have created a world where they are in thrall to their funding. The human brain is a tricky thing, and it is not at all difficult to find a perfectly honest way to present one’s work that nevertheless contains nearly obligatory references to at least the possibility that it is relevant, and the more publicly important that relevance is, the better. I’ve been there myself, and done it myself. You have to. Otherwise you simply won’t get funded, unless you are a lucky recipient of a grant to do e.g. pure mathematics or win a no-strings fellowship or the Nobel Prize and are hence nearly guaranteed a lifetime of renewed grants no matter how they are written.

This is the really sad thing, Steve [Steven Mosher]. Science is supposed to be a debate. What many don’t realize is that peer review is not about the debate. When I review a paper, I’m not passing a judgment as a participant on whether or not its conclusion is correct politically or otherwise (or I shouldn’t be — that is gatekeeping, unless my opinion is directly solicited by an editor as the paper is e.g. critical of my own previous work). I am supposed to be determining whether or not the paper is clear, whether its arguments contain any logical or mathematical inconsistencies, whether it is well enough done to pass muster as “reasonable”, whether it is worthy of publication, not whether or not it is right or even convincing beyond not being obviously wrong or in direct contradiction of known facts. I might even judge the writing and English to some extent, at least to the point where I make suggestions for the authors to fix.

In climate science, however, the ClimateGate letters openly revealed that it has long since become covertly corrupted, with most of the refereeing being done by a small, closed, cabal of researchers who accept one another’s papers and reject as referees (well, technically only “recommend” rejection as referees) any paper that seriously challenges their conclusions. Furthermore, they revealed that this group of researchers was perfectly willing to ruin academic careers and pressure journals to fire any editor that dared to cross them. They corrupted the peer review process itself — articles are no longer judged on the basis of whether or not the science is well presented and moderately sound, they have twisted it so that the very science being challenged by those papers is used as the basis for asserting that they are unsound.

Here’s the logic:

a) We know that human caused climate change is a fact. (We heard this repeatedly asserted in the “debate” above, did we not? It is a fact that CO2 is a radiatively coupled gas, completely ignoring the actual logarithmic curve Goreham presented, it is a fact that our models show that more CO2 must lead to more warming, it is a fact that all sorts of climate changes are soundly observed to have occurred while CO2 was rising so it is a fact that CO2 is the cause, count the logical and scientific fallacies at your leisure).

b) This paper that I’m reviewing asserts that human caused climate change is not a fact. It therefore contradicts “known science”, because human caused climate change is a fact. Indeed, I can cite hundreds of peer reviewed publications that conclude that it is a fact, so it must be so.

c) Therefore, I recommend rejecting this paper.

It is a good thing that Einstein’s results didn’t occur in Climate Science. He had a hard enough time getting published in physics journals, but physicists more often than not follow the rules and accept a properly written paper without judging whether or not its conclusions are true, with the clear understanding that debate in the literature is precisely where and how this sort of thing should be cleared up, and that if that debate is stifled by gatekeeping, one more or less guarantees that no great scientific revolutions can occur because radical new ideas even when correct are, well, radical. In one stroke they can render the conclusions of entire decades of learned publications by the world’s savants pointless and wrong. This means that physics is just a little bit tolerant of the (possible) crackpot. All too often the crackpot has proven not only to be right, but so right that their names are learned by each succeeding generation of physicist with great reverence.

Maybe that is what is missing in climate science — the lack of any sort of tradition of the maverick being righter than the entire body of established work, a tradition of big mistakes that work amazingly well — until they don’t and demand explanations that prove revolutionary. Once upon a time we celebrated this sort of thing throughout science, but now science itself is one vast bureaucracy, one that actively repels the very mavericks that we rely on to set things right when we go badly astray.

At the moment, I’m reading Gleick’s lovely book on Chaos [Chaos: The Making of a New Science], which outlines both the science and early history of the concept. In it, he repeatedly points out that all of the things above are part of a well-known flaw in science and the scientific method. We (as scientists) are all too often literally blinded by our knowledge. We teach physics by idealizing it from day one, linearizing it on day two, and forcing students to solve problem after problem of linearized, idealized, contrived stuff literally engineered to teach basic principles. In the process we end up with students that are very well trained and skilled and knowledgeable about those principles, but the price we pay is that they all too often find phenomena that fall outside of their linearized and idealized understanding literally inconceivable. This was the barrier that Chaos theory (one of the latest in the long line of revolutions in physics) had to overcome.

And it still hasn’t fully succeeded. The climate is a highly nonlinear chaotic system. Worse, chaos was discovered by Lorenz [Edward Norton Lorenz] in the very first computational climate models. Chaos, right down to apparent period doubling, is clearly visible (IMO) in the 5 million year climate record. Chaotic systems, in a chaotic regime, are nearly uncomputable even for very simple toy problems — that is the essence of Lorenz’s discovery, as his first weather model was crude in the extreme, little more than a toy. What nobody is acknowledging is that current climate models, for all of their computational complexity and enormous size and expense, are still no more than toys, countless orders of magnitude away from the integration scale where we might have some reasonable hope of success. They are being used with gay abandon to generate countless climate trajectories, none of which particularly resemble the climate, and then they are averaged in ways that are an absolute statistical obscenity, as if the linearized average of a Feigenbaum tree of chaotic behavior is somehow a good predictor of the behavior of a chaotic system!
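
To make the toy-problem point concrete, here is a minimal sketch (it is only Lorenz’s 1963 three-variable system, not any actual climate model, and the numbers are chosen purely for illustration): two runs whose initial conditions differ by one part in a million track each other for a while and then end up on completely different trajectories.

```python
# Minimal sketch: Lorenz's 1963 system, two runs differing by one part
# in a million in the initial x value.  Purely illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 40.0, 4001)
a = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0],
              t_eval=t_eval, rtol=1e-9, atol=1e-12)
b = solve_ivp(lorenz, (0.0, 40.0), [1.0 + 1e-6, 1.0, 1.0],
              t_eval=t_eval, rtol=1e-9, atol=1e-12)

sep = np.linalg.norm(a.y - b.y, axis=0)          # distance between the two runs
rate = np.polyfit(t_eval[:1500], np.log(sep[:1500]), 1)[0]
print(f"separation: {sep[0]:.1e} at t = 0, about {sep[-1]:.1f} at t = 40")
print(f"approximate exponential growth rate: {rate:.2f} per unit time")
# The separation grows roughly exponentially (a crude Lyapunov-style
# estimate) until it saturates at the size of the attractor itself.
```

The same kind of exponential divergence, at whatever rate the real atmosphere-ocean system has, is what limits the useful lifetime of any initialized forecast, toy or otherwise.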

This isn’t just dumb, it is beyond dumb. It is literally betraying the roots of the entire discipline for manna.

One of the most interesting papers posted on WUWT that I have looked at to date was one from a year or three ago in which four prominent climate models were applied to a toy “water world” planet, one with no continents, no axial tilt, literally “nothing interesting” happening, with fixed atmospheric chemistry.

The four models — not at all surprisingly — converged to four completely different steady-state descriptions of the planetary weather.

And — trust me! — there isn’t any good reason to think that if those models were run a million times each, any one of them would generate the same probability distribution of outcomes as any other, or that any of those distributions are in any sense “correct” representations of the actual probability distribution of “planetary climates” or their time evolution trajectories. There are wonderful reasons to think exactly the opposite, since the models are solving the problem at a scale that we know is orders of magnitude too coarse to succeed in the general realm of integrating chaotic nonlinear coupled systems of PDEs in fluid dynamics.

Metaphor fails me. It’s not like we are ignorant (any more) about general properties of chaotic systems. There is a wealth of knowledge to draw on at this point. We know about period doubling, period three to chaos, we know about fractal dimension, we know about the dangers of projecting dynamics in a very high dimensional space into lower dimensions, linearizing it, and then solving it. It would be a miracle if climate models worked for even ten years, let alone thirty, or fifty, or a hundred.
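
For anyone who has not played with these ideas, it takes almost no machinery to reproduce them. A one-line map is enough; the sketch below (purely illustrative, using the textbook parameter values, nothing to do with any climate quantity) walks through the period-doubling cascade, the period-three window, and chaos.

```python
# The logistic map x -> r x (1 - x): period doubling on the way to chaos,
# plus the famous period-three window.  Purely illustrative.
def long_run_values(r, n_settle=4000, n_keep=100):
    x = 0.5
    for _ in range(n_settle):            # discard the transient
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(n_keep):              # collect the long-run behavior
        x = r * x * (1.0 - x)
        seen.add(round(x, 6))
    return sorted(seen)

for r in (2.8, 3.2, 3.5, 3.56, 3.83, 3.9):
    vals = long_run_values(r)
    print(f"r = {r}: {len(vals)} distinct long-run value(s)")
# Expected pattern: 1, 2, 4, 8 (period doubling), then 3 in the
# period-three window near r = 3.83, then many values at r = 3.9 (chaos).
```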

Here’s the climate model argument in a nutshell. CO2 is a greenhouse gas. Increasing it will without any reasonable doubt cause some warming all things being equal (that is, linearizing the model in our minds before we even begin to write the computation!) The Earth’s climate is clearly at least locally pretty stable, so we’ll start by making this a fundamental principle (stated clearly in the talk above) — The Earth’s Climate is Stable By Default. This requires minimizing or blinding ourselves to any evidence to the contrary, hence the MWP and LIA must go away. Check. This also removes the pesky problem of multiple attractors and the disappearance and appearance of old/new attractors (Lorenz, along with Poincaré [Jules Henri Poincaré], coined the very notion of attractors). Hurst-Kolmogorov statistics, punctuated equilibrium, and all the rest is nonlinear and non-deterministic, it has to go away. Check. None of the models therefore exhibit it (but the climate does!). They have been carefully written so that they cannot exhibit it!

Fine, so now we’re down to a single attractor, and it has to both be stable when nothing changes and change, linearly, when underlying driving parameters change. This requires linearizing all of the forcings and trivially coupling all of the feedbacks and then searching hard — as pointed out in the talk, very hard indeed! — for some forlorn and non-robust combination of the forcing parameters, some balance of CO2 forcing, aerosol anti-forcing, water vapor feedback, and luck that balances this teetering pen of a system on a metaphorical point and tracks a training-set climate for at least some small but carefully selected reference period: naturally, the single period where the balance they discover actually works, and one where the climate is actively warming. Since they know that CO2 is the cause, the parameter sets they search around are all centered on “CO2 is the cause” (fixed) plus tweaking the feedbacks until this sort of works.
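
A deliberately crude caricature of that tuning step, with made-up numbers standing in for the training record (no real temperature or forcing data appear anywhere in it): when the CO2 forcing and the aerosol forcing both trend together across a short reference window, wildly different assumed sensitivities can be made to fit the same record about equally well just by rescaling the poorly known aerosol term, and they only disagree about the future.

```python
# Synthetic illustration only: no real temperature or forcing data.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(41.0)                              # a 41-year "training window"
co2 = 315.0 * np.exp(0.004 * t)                  # hypothetical CO2 curve (ppm)
dfc = 5.35 * np.log(co2 / co2[0])                # CO2 forcing change (W/m^2)
dfa = -0.01 * t                                  # hypothetical aerosol forcing change (W/m^2)

# Fake "observations", generated with a true sensitivity of 0.5 K per W/m^2.
obs = 0.5 * (dfc + dfa) + rng.normal(0.0, 0.05, t.size)

for lam in (0.5, 1.0, 1.5):                      # assumed sensitivity, K per W/m^2
    v = lam * dfa
    a = np.dot(obs - lam * dfc, v) / np.dot(v, v)   # best-fit aerosol scaling for this lam
    rms = (obs - lam * (dfc + a * dfa)).std()
    print(f"sensitivity {lam:.1f}: aerosol scale {a:.2f}, "
          f"rms misfit {rms:.3f} K, warming at doubled CO2 {lam * 5.35 * np.log(2):.1f} K")
# All three fits reproduce the training window to within the noise; they
# only part company in what they predict once CO2 keeps rising.
```

The real tuning exercise is of course vastly more elaborate, but the degeneracy it has to fight is of exactly this kind.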

Now they crank up CO2, and because CO2 is the cause of more warming, they have successfully built a linearized, single attractor system that does not easily admit nonlinear jumps or appearances and disappearances of attractors so that the attractor itself must move monotonically to warmer when CO2 is increasing. They run the model and — gasp! — increasing CO2 makes the whole system warmer!

Now, they haven’t really gotten rid of the pesky attractor problem. They discover when they run the models that in spite of their best efforts they are still chaotic! The models jump all over the place when started with only tiny changes in parametric settings or initial conditions. Sometimes a run just plain cools, in spite of all the additional CO2. Sometimes they heat up and boil over, turning Earth into Venus and melting the polar caps. The variance they obtain is utterly incorrect, because after all, they balanced the parameter space on a point with opposing forcings in order to reproduce the data in the reference period, and one of many prices they have to pay is that the forcings in opposition have the wrong time constants and autocorrelation and the climate attractors are far too shallow, allowing for vast excursions around the old slowly varying attractor instead of selecting a new attractor from the near-infinity of possibilities (one that might well be more efficient at dissipating energy) and favoring its growth at the expense of a far narrower old attractor. But even so, new attractors appear and disappear, and instead of getting a prediction of the Earth’s climate they get an irrelevantly wide shotgun blast of possible future climates (that is, as noted above, probably not even distributed correctly, or at least we haven’t the slightest reason to think that it would be). Anyone who looked at an actual computed trajectory would instantly reject it as a reasonable approximation to the actual climate — variance as much as an order of magnitude too large, wrong time constants, oversensitive to small changes in forcings or discrete events like volcanoes.

So they bring on the final trick. They average over all of these climates. Say what? Each climate is the result of a physics computation. One with horrible and probably wrong approximations galore in the “physics” determining (for example) what clouds do in a cell from one timestep to the next, but at least one can argue that the computation is in fact modeling an actual climate trajectory in a Universe where that physics and scale turned out to be adequate. The average of the many climates is nothing at all. In the short run, this trick is useful in weather forecasting as long as one doesn’t try to use it much longer than the time required for the set of possible trajectories to smear out and cover the phase space to where the mean is no longer meaningful. This is governed by e.g. the Lyapunov exponents of the chaotic processes. For a while, the trajectories form a predictive bundle, and then they diverge and don’t. Bigger, better computers and finer grained computations can extend the time before divergence slowly, but we’re talking at most weeks, even with the best of modern tools.

In the long run, there isn’t the slightest reason — no, not even a fond hope — that this averaging will in any way be predictive of the weather or climate. There is indeed a near certainty that it will not be, as it isn’t in any other chaotic system studied so why should it be so in this one? But hey! The overlarge variance goes away! Now the variance of the average of the trajectories looks to the eye like it isn’t insanely out of scale with the observed variance of the climate, neatly hiding the fact that the individual trajectories are obviously wrong and that you aren’t comparing the output of your model to the real climate at all, you are comparing the average of the output of your model to the real climate when the two are not the same thing!
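
The same point in miniature, again with the Lorenz toy system standing in for “a chaotic model” (it is not a climate model, it merely misbehaves in the same way): average a couple of hundred runs launched from nearly identical initial conditions and the ensemble mean quickly stops resembling any trajectory the system could actually follow.

```python
# Toy illustration: ensemble mean of a chaotic system vs. one actual run.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

rng = np.random.default_rng(0)
t_eval = np.linspace(0.0, 30.0, 1501)
runs = []
for _ in range(200):                              # 200 runs, tiny initial perturbations
    y0 = np.array([1.0, 1.0, 1.0]) + rng.normal(scale=1e-3, size=3)
    sol = solve_ivp(lorenz, (0.0, 30.0), y0, t_eval=t_eval, rtol=1e-8, atol=1e-10)
    runs.append(sol.y[0])                         # keep the x component

runs = np.array(runs)
late = slice(750, None)                           # after the runs have decorrelated
print("late-time std of a single run     :", round(float(runs[0, late].std()), 2))
print("late-time std of the ensemble mean:", round(float(runs.mean(axis=0)[late].std()), 2))
# Every individual run keeps swinging between the two lobes of the
# attractor; the ensemble mean settles down near the attractor average
# with a small fraction of the variance.  The mean is not a trajectory.
```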

Incidentally, at this point the assertion that the results of the climate models are determined by physics becomes laughable. If I average over the trajectories observed in a chaotic oscillator, does the result converge to the actual trajectory? Seriously dudes, get a grip!

Oh, sorry, it isn’t quite the final trick. They actually average internally over climate runs, which at least is sort of justifiable as an almost certainly non-convergent sort of Monte Carlo computation of the set of accessible/probable trajectories, even though averaging over the set when the set doesn’t have the right probability distribution of outcomes or variance or internal autocorrelation is a bit pointless, but they end up finding that some of the models actually come out, after all of this, far too close to the actual climate, which sadly is not warming and hence which then makes it all too easy for the public to enquire why, exactly, we’re dropping a few trillion dollars per decade solving a problem that doesn’t exist.

So they then average over all of the average trajectories! That’s right folks, they take some 36 climate models (not the “twenty” erroneously cited in the presentation, I mean come on, get your facts right even if the estimate for the number of independent models in CMIP5 is more like seven). Some of these run absurdly hot, so hot that if you saw even the average model trajectory by itself you would ask why it is being included at all. Others as noted are dangerously close to a reality that — if proven — means that you lose your funding (and then, Walmart looms). So they average them together, and present the resulting line as if that is a “physics based” “projection” of the future climate. Because they keep the absurdly hot models, they balance out the nearly realistic cool ones and hide them under a safely, rapidly warming “central estimate”, and get the double bonus that by forming the envelope of all of the models they can create a lower-bound (and completely, utterly unfounded) “error estimate” that is barely large enough to reach the actual climate trajectory, so far.
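
For what it is worth, a statistically literal reading of that “central estimate plus error bar” would require something like the little calculation below (the numbers are synthetic, not CMIP output). The arithmetic is trivial; the premise it needs, namely that the model projections are independent, identically distributed samples of one well-defined quantity, is precisely what has never been established.

```python
# Synthetic numbers only: the calculation a literal "95% confidence"
# claim about a multi-model mean would require.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
projections = rng.normal(3.0, 1.0, size=36)       # 36 made-up "model" warmings (K)

mean = projections.mean()
sem = projections.std(ddof=1) / np.sqrt(projections.size)
low, high = stats.t.interval(0.95, projections.size - 1, loc=mean, scale=sem)
print(f"multi-model mean {mean:.2f} K, nominal 95% interval ({low:.2f}, {high:.2f}) K")
# Models that share code, forcing data sets and tuning targets are not
# independent draws from a population of possible climates, so the tidy
# interval does not mean what a confidence interval is supposed to mean.
```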

Meh. Just Meh. This is actively insulting, an open abuse of the principles of science, logic, and computer modeling all three. The average of failed models is not a successful model. The average of deterministic microtrajectories is not a deterministic microtrajectory. A microtrajectory numerically generated at a scale inadequate to solve a nonlinear chaotic problem is most unlikely to represent anything like the actual microtrajectory of the actual system. And finally, the system itself realizes at most one of the possible future trajectories available to it from initial conditions subject to the butterfly effect that we cannot even accurately measure at the granularity needed to initialize the computation at the inadequate computational scale we can afford to use.

That’s what Goreham didn’t point out in his talk this time — but should. The GCMs are the ultimate shell game, hiding the pea under an avalanche of misapplied statistical reasoning that nobody but some mathematicians and maverick physicists understand well enough to challenge, and they just don’t seem to give a, uh, “flip”. With a few very notable exceptions, of course.

Rgb

Postscript (from a related slashdot post):

1° C is what one expects from CO2 forcing alone, with no net feedbacks. It is what one expects as the null hypothesis from the very unbelievably simplest of linearized physical models — one where the current temperature is the result of a crossover in feedback so that any warming produces net cooling, any cooling produces net warming. This sort of crossover is key to stabilizing a linearized physical model (like a harmonic oscillator) — small perturbations have to push one back towards equilibrium, and the net displacement from equilibrium is strictly due to the linear response to the additional driving force. We use this all of the time in introductory physics to show how the only effect of solving a vertical harmonic oscillator in an external, uniform gravitational field is to shift the equilibrium down by Δy = mg/k. Precisely the same sort of computation, applied to the climate, suggests that ΔT ≈ 1° C at 600 ppm relative to 300 ppm. The null hypothesis for the climate is that it is similarly locally linearly stable, so that perturbing the climate away from equilibrium either way causes negative feedbacks that push it back to equilibrium. We have no empirical foundation for assuming positive feedbacks in the vicinity of the local equilibrium — that’s what linearization is all about!
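
For the record, the two back-of-envelope results referred to here go roughly as follows; the numbers used are standard textbook values (the logarithmic forcing coefficient and the effective emission temperature), not anything derived in this post. Gravity only shifts the oscillator’s equilibrium:

$$
m\ddot{y} = -ky - mg \;\;\Rightarrow\;\; y_{\mathrm{eq}} = -\frac{mg}{k},
\qquad \text{i.e. } |\Delta y| = \frac{mg}{k},
$$

and the corresponding no-feedback climate estimate combines the usual logarithmic forcing formula with the Planck (Stefan–Boltzmann) response at the effective emission temperature:

$$
\Delta F \approx 5.35\,\ln\frac{600}{300} \approx 3.7\ \mathrm{W\,m^{-2}},
\qquad
\left.\frac{dF}{dT}\right|_{T_e \approx 255\,\mathrm{K}} = 4\sigma T_e^{3} \approx 3.8\ \mathrm{W\,m^{-2}\,K^{-1}},
\qquad
\Delta T \approx \frac{3.7}{3.8} \approx 1\ ^{\circ}\mathrm{C}.
$$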

That’s right folks. Climate is what happens over 30+ years of weather, but Hansen and indeed the entire climate research establishment never bothered to falsify the null hypothesis of simple linear response before building enormously complex and unwieldy climate models, building strong positive feedback into those models from the beginning, working tirelessly to “explain” the single stretch of only 20 years in the second half of the 20th century, badly, by balancing the strong feedbacks with a term that was and remains poorly known (aerosols), and asserting that this would be a reliable predictor of future climate.

I personally would argue that historical climate data manifestly a) fail to falsify the null hypothesis; b) strongly support the assertion that the climate is highly naturally variable, as a chaotic nonlinear highly multivariate system is expected to be; and c) that at this point, we have extremely excellent reason to believe that the climate problem is non-computable, quite probably non-computable with any reasonable allocation of computational resources the human species is likely to be able to engineer or afford, even with Moore’s Law, anytime in the next few decades, if Moore’s Law itself doesn’t fail in the meantime. 30 orders of magnitude is 100 doublings — at least half a century. Even then we will face the difficulty of initializing the computation, as we are not going to be able to afford to measure the Earth’s microstate on this scale, and we would need theorems in the theory of nonlinear ODEs that I do not believe have yet been proven in order to have any good reason to think that we will succeed in the meantime with some sort of interpolatory approximation scheme.
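
(The doubling arithmetic, for anyone who wants to check it:

$$
2^{100} = \left(2^{10}\right)^{10} \approx \left(10^{3}\right)^{10} = 10^{30},
\qquad \text{or } \log_{10} 2^{100} = 100 \log_{10} 2 \approx 30.1,
$$

so closing a 30-order-of-magnitude gap in computing power does indeed take on the order of a hundred doublings.)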

rgb

Author: Dr. Robert G. Brown is a Lecturer in Physics at Duke University, where he teaches undergraduate introductory physics, undergraduate quantum theory, graduate classical electrodynamics, and graduate mathematical methods of physics. In addition, Brown has taught independent study courses in computer science, programming, genetic algorithms, quantum mechanics, information theory, and neural networks.

Moderation and Author’s Replies Note: This elevated comment has been posted at the request of several commenters here. It was edited by occasional WUWT contributor Kip Hansen with the author’s approval. Anything added to the comment was denoted in [square brackets]. There are only a few corrections of typos shown by strikeout [correction]. When in doubt, refer to the original comment here. RGB is currently teaching at Duke University with a very heavy teaching schedule and may not have time to interact or answer your questions.

# # # # #


532 Comments
MattS
October 6, 2014 11:27 pm

“count the logical and scientific fallacies at your leisure”
No thanks, I can’t count to infinity.

Brian H
October 6, 2014 11:35 pm

Edit: smilies are not usable as closing parentheses.
Climate experts make a living on linear extrapolation of preferred segments of non-linear functions. The segments are chosen to produce the most profitable extrapolations.

suissebob
October 6, 2014 11:53 pm

It’s always a pleasure to read anything Dr. Robert G. Brown has to say.

Robert B
October 6, 2014 11:54 pm

“…are no more noble than the average Joe at admitting it when they are wrong, even after they come to realize in their heart of hearts that it is so.”
There has been a considerable effort to equate Climate Scientists with those who gave us modern technical marvels so that the population wouldn’t initially have doubts. It then became a hard sell to get them to realise that they were duped. We had quite a few average Joes make their way to my town of 20,000 people in a main agricultural area, using their iPhones. They didn’t have any doubts even as the highway became a dirt track heading into the desert.

Jimbo
October 7, 2014 12:06 am

Others as noted are dangerously close to a reality that — if proven — means that you lose your funding (and then, Walmart looms).

After the leak of the AR5 draft, which showed the famous divergence graph of projections vs. observations, I asked a simple question on WUWT – I paraphrase.

“Why don’t the IPCC select say the 5 models that came closest to observations, look under the hood and find out why they came up closest to reality?”

Someone replied to me mentioning something about the implications for climate sensitivity. It’s possible it was just by chance that the 5 came closest, but it would be good to know what it is about those 5.

“It is difficult to get a man to understand something if his salary depends upon his not understanding it”.
Upton Sinclair

David A
Reply to  Jimbo
October 7, 2014 3:39 am

Jimbo, this, in my view, goes straight to the heart of common sense as well as the basic scientific method.
I have asked several times what is different about the five models that come closest to the observations. I suspect, since the IPCC ignored the five models closest to observations and went to the modeled mean to estimate the cost of inaction, that the models closest to observed reality either had a greatly reduced climate sensitivity, or they input disparate cooling factors such as volcanic eruptions or particulates that were assumed at levels well above what is known to exist in the real world. If the latter was the case, then logic would dictate that greatly reduced climate sensitivity was the likely answer.
RGB has had some excellent posts on the scientific absurdity of the IPCC using the modeled mean as a basis of their estimate of negative consequences caused by increased CO2 in the atmosphere. The IPCC’s “modeled” harm does not really begin until plus 2 C from the pre-1950s time frame. They are now attempting to abandon the plus 2 C (as observations show that this is unlikely anytime soon) as a requisite for the social action they demand.
It is very sad that they get away with not disclosing the “under the hood” facts about their computer models.

Jimbo
Reply to  David A
October 7, 2014 6:52 am

As a non-scientist I would be flabbergasted if the climate scientists never thought about looking at the projections that came closest to observations and ask questions. Had the IPCC adopted this since AR1 and refined its models accordingly the debate might have ended by now. 😉 WALMART!

richardscourtney
Reply to  David A
October 7, 2014 7:07 am

Jimbo
You say

As a non-scientist I would be flabbergasted if the climate scientists never thought about looking at the projections that came closest to observations and ask questions. Had the IPCC adopted this since AR1 and refined its models accordingly the debate might have ended by now. 😉 WALMART!

Actually, each and every model’s performance should be assessed. Those which provide projections most distant from observations may be most informative about model behaviour(s): nobody can know prior to the assessments.
But nobody challenged any climate model and, instead, as Robert Brown says in his excellent article above, a meaningless average of model outputs was adopted. To quote myself in an IPCC side-meeting early this century,
“I don’t know what you call this, but it is not science”.
Richard

rgbatduke
Reply to  David A
October 9, 2014 6:44 am

It is very sad that they get away with not disclosing the “under the hood” facts about their computer models.

It’s worse than that. They openly disclose them — in chapter 9 of AR5, in a single three or four paragraph stretch that nobody who matters will ever read, or understand if they do read. Then they write arrant nonsense in the Summary for Policy Makers, disclosing none of the indefensible statistical inconsistency of their processing of the individually failing models before using them as the basis for the personal opinion of a few, carefully selected writers, expressed as “confidence” about many things that they could not quantitatively defend if their lives depended on it using the axioms and practice of statistics on behalf of the whole body of participants, including those that don’t agree at all with the stated conclusions and who would utterly reject the assertions of confidence as having a statistical/empirical foundation.
As in high confidence my ass! “Confidence” in statistics is a term with a fairly specific meaning in terms of p-values! You show me the computation of one, single p-value, and explain its theoretical justification in terms of (say) the Central Limit Theorem. Be sure to list all of the Bayesian priors so we can subject them to a posterior analysis! Be sure to explain to the policy makers that forming the mean of the models is pure voodoo, disguising their weakness by presenting the conclusions as the result of a vote (of uncritically selected idiots!) in the SPM — not in chapter 9.
rgb

rgbatduke
Reply to  Jimbo
October 7, 2014 1:45 pm

Yeah, Jimbo. Sheer common sense. And why don’t they just throw out the worst (say) thirty of the thirty six models when they are making their projections?
The answer is, of course, pretty obvious. But it is a sad, sad answer. It doesn’t even have to be malicious. These guys are all colleagues. Who wants to be the one who has to go to talk to Joe, the author of the 36th worst model (dead last, not even close!) and tell him that regretfully they’ve decided to drop it from all of the projections and predictions in AR5, or AR6, or whatever? It’s a career-ender for Joe.
I’m not sure even Wal Mart could employ the vast horde of future unemployed climate scientists that will be unleashed on the market if the climate starts to actively cool, or even remains flat for another decade before changing again up or down.
rgb

Truthseeker
Reply to  rgbatduke
October 7, 2014 3:54 pm

Which is why hitting the “reset” button is the only thing that has any chance of working …
You are probably right about Wal Mart. They have commercial realities to consider and climate “scientists” have been avoiding reality like it was a plague …

Reply to  rgbatduke
October 7, 2014 4:09 pm

However, in the business world this is what happens all the time. No matter how much I like and respect someone, the results are the only determining factor of success. There is also no shame in failing, ending up at Walmart, and working your way to the top again. This is the same as facing your errors in science and seeing them as instructive to a better understanding and further discovery.

sturgishooper
Reply to  rgbatduke
October 7, 2014 5:18 pm

Only when there is a President Ted Cruz or Rand Paul does the earth stand a chance of ridding itself of these odious parasites sapping the life blood of the planet.

Reply to  rgbatduke
October 7, 2014 8:26 pm

Mickey Manniacal, today’s Wally world greeter welcomes you to Wally world! Here are our coupons for the day. May you enjoy your shopping experience and remember to drive safely home…
A thousand times a day.
For twenty eight to thirty two hours per week.
I can enjoy that thought.
But I don’t like to shop at Wally World much more and that would certainly finish my ever shopping there again. Even if Trenbarf, Santa’r, and Jonesy sang a barbershop quartet with him.
I mean, what are his other skills? Especially after checking out what his students think of him (after discounting the fanatic fans smoked laudatories).
Certainly wouldn’t be math… Here is MM’s personal mall cubicle where he cheerfully helps people get their taxes filed on time and he guarantees accuracy or he pays all fines…
Now there might be a position fending off bears and hordes of mosquitos while buggering larch trees up in Siberia…

David A
Reply to  rgbatduke
October 7, 2014 9:31 pm

Regarding the IPCC misuse of the models, RGB said…” The answer is, of course, pretty obvious. But it is a sad, sad answer. It doesn’t even have to be malicious. These guys are all colleagues…”
Yes, one’s daily bread is a motivating factor, but the politicians rule the IPCC roost. And the desire for power over others is malicious, in my view. Indeed, power over others can be philosophically supported as the very definition of evil. These “rule the world” Blackbeards write the summaries, and they need the extremely wrong models to move the modeled mean to a point where projected harms can at least have a smidgen of real potential.

JohnWr
Reply to  rgbatduke
October 8, 2014 6:51 am

Creating a pool of unhappy people who know where the bodies are buried is not in the ‘winners’ interest.

DayHay
Reply to  rgbatduke
October 8, 2014 4:33 pm

This actually HAS TO HAPPEN if the feedback loop carrying the error signal is to have a positive (read correct) effect on future outcomes. I mean, everyone always says you learn the most when you screw up, right? So unfortunately, the time has come for some climate scientists to get an education.

Jimbo
Reply to  rgbatduke
October 9, 2014 6:36 am

Here is what I suspect. Badly performing models are REQUIRED in order to keep the scare running. If climate sensitivity is low and future models lowered surface temperature projections, then the scare would be over and the IPCC would have to close down. Climastrologists would see their funding shrivel and thousands would have to find an honest living, IMHO. As long as the hypothesis is exaggerated and surface temperatures remain flat, or cool, the day of reckoning cannot be put off forever.
Tar and feathers, and the loss of status, funding and self-importance, are not a nice prospect. These people will go to their graves ‘den y ing’ they are wrong.

Jimbo
Reply to  rgbatduke
October 9, 2014 6:37 am

PS I am aware that AR5 projections have already been lowered. They may eventually match observations! 🙂

Matthew R Marler
October 7, 2014 12:07 am

…debates are rare because science is not a debate, or more specifically, science does not proceed or advance by verbal debates in front of audiences. You can win a debate and be wrong about the science. Debates prove one thing. Folks who engage in them don’t get it, folks who demand them don’t get it and folks who attend them don’t get it
Plainly and simply, this is an aspect of the history of science about which Steven Mosher is totally ignorant. The Eddington Expedition touched off a long series of debates, as did the quantum mechanical revolution (the numerous Solvay conferences with the most illuminating debates between Einstein and Bohr). Everything in science has been plentifully debated (except perhaps Newton’s alchemical experiments, because he kept them a secret). The debates about the diverse explanations of the causes of Legionnaires’ disease and AIDS are more contemporary examples, as are the ongoing (maybe resolved) debates about the worldwide decline of amphibians.

Jimbo
Reply to  Matthew R Marler
October 7, 2014 2:30 am

Oh dear. Mosher said:

“debates are rare because science is not a debate, or more specifically, science does not proceed or advance by verbal debates in front of audiences.”

Oh really!

Guardian
Scientific debates: a noble tradition
….Joseph Lister v germ theory den*****ts….
As a result, a public debate was organised between Joseph Lister and the most prominent germ den*****ts at the time. Lister was instrumental in introducing antiseptic surgery in hospitals, but he wasn’t an experienced debater, so seemed outmatched by the combined voices of 15 den*****ts in front of an audience of dozens in a public operating theatre. However, the den*****ts, in a fit of hubris, willingly smeared themselves with drain-water and rancid meat to demonstrate their confidence that germs didn’t exist, and gradually succumbed to violent sickness throughout the debate. The one exception was a particularly vocal pastor (who objected to the term pasteurisation co-opting his title) who cut his finger on a lectern while gesticulating. He refused to let Lister treat it, and eventually died of hospital gangrene.

National Academy of Sciences
The “Great Debate” of 1920
The Royal Society
Constructive debate on the diverse issues of biodiversity
Einstein vs. Newton debate
The DNA Debate: The latest episode of Royal Society
Climate change [p2]

Jimbo
Reply to  Jimbo
October 7, 2014 6:34 am

Correction: I think the quote from the Guardian I gave is wrong. It was apparently made up as revealed in the last paragraph of the article. I just love the Goroniad. 🙂 This will teach me to check deeper. I hope the last 2 examples will suffice.

Reply to  Jimbo
October 7, 2014 6:38 am

Yes, this story sounds fanciful. As rashly as those gentlemen may have acted by inoculating themselves, it would have taken at least a couple of days for them to actually succumb.

jorgekafkazar
Reply to  Jimbo
October 7, 2014 9:49 am

Ja, the Grauniad quote is phony, as is to be expected, obviously so. If “denialists” hadn’t given it away, the notion that drainwater, etc., could result in illness intradermally in minutes would. The lectern story could possibly be true; despite hundreds of years of use, lecterns are just simply covered with razor sharp decorations that could easily cut a pastor to ribbons while gesticulating at us with his finger, as they are wont to do. But since this putative pastor is unnamed, he and his finger are also doubtless products of the Grauniad writer’s fervid imagination.

milodonharlani
Reply to  Jimbo
October 8, 2014 1:26 pm

There was a debate between Lister & microbial naysayers, but not surprisingly the cartoonish Guardian has it wrong.
The 1920 Great Debate on the size of the universe is however a good example. Here’s another, more recent (2004) such public debate, on competing K/T extinction hypotheses:
http://www.geolsoc.org.uk/chicxulub
Public debate is not rare in science.

AlexS
October 7, 2014 12:17 am

“that at this point, we have extremely excellent reason to believe that the climate problem is non-computable, quite probably non-computable with any reasonable allocation of computational resources the human species is likely to be able to engineer or afford, even with Moore’s Law, anytime in the next few decades, if Moore’s Law itself doesn’t fail in the meantime. ”
Pretty much. We don’t even know what all the inputs to climate are, and obviously that also implies we can’t know how to weight those we do know.

jimmyjoe
October 7, 2014 12:22 am

RGB – Wow, just wow. Thanks for taking the time to put this down in writing.

Claude Harvey
October 7, 2014 12:33 am

Best description of the problem I have seen to date and it lands precisely where I landed shortly after beginning to look into just what all the global warming hubbub was about some 15 years ago. How any honest scientist can look at a chart of the past 500 years of reconstructed global temperature (Al Gore’s famous version will do nicely) and not conclude that every word you have written here is true is simply beyond me.

Claude Harvey
Reply to  Claude Harvey
October 7, 2014 12:36 am

Oops! Make that “500,000 years” of reconstructed….

labMunkey
October 7, 2014 12:34 am

I’d like to offer a slight qualification to the post above, re: scientists and their actions, and also, if I may, an observation.
I believe that there is a whole class of science and scientists that are missed when one speaks about science- especially in this context. The presumption, and usual framing of the issue is such that one could be forgiven for thinking science only happens in universities. It does not. In fact, most of the science that happens on this planet does not occur in universities at all, but in industry.
I saw figures on this once (which I’ve singularly been unable to find, apologies) that suggested there are roughly ten industry scientists for every academic. Why is this significant? Well it’s all down to reproducibility….
There was a study performed by Scientific American and Nature (iirc) which looked into how reproducible, or to put it another way, how accurate academic research was. I was at a Cambridge University debate (ironically) when this was brought up; the subject of the debate was ‘Is research better performed in academia or industry?’. The academic side put up a valiant show of all the advances that happen in academia, the discoveries that would not have been possible with a shareholder and market-orientated focus, and how valuable the output is.
The industry representative opened with the fact that in the previously mentioned study, less than one third of all academic research papers tested were reproducible. Or to put it another way, two thirds were junk. I think the figure may be slightly higher for climate science….
For clarity, I am an industry scientist. It is, for us, an open secret that you don’t trust any academic research. It may be a good starting point, it may even have some good data in it, but more often than not it’s either slightly misleading, or often just plain wrong.
We know this because we’ve tried to replicate it, and failed. Wasting time and money in the process.
It has to be a huge worry when academic institutes churn out volume after volume of useless science, with no checks or balances. Peer review is broken, and the ever-increasing drive to publish and publish on cutting-edge research leaves replication of someone else’s results a far lower priority. Somewhere below deleting your emails.
In industry, at least in my industry (biotech), there are checks. There are controls. Balances, whole departments dedicated to finding the slightest issue in your work and subsequent paperwork. There are then regulatory bodies which can audit you at pretty much the drop of a hat. On top of this, there are serious repercussions for not submitting accurate work. Far more serious ones exist for those who deliberately mislead, and these include massive fines and prison time (and there are examples of elements in industry not meeting these standards, and of the consequences that followed).
In academia, you have peer review, which we all know is easily subvert-able.
So all this is to say two things-
1) I’d be very interested to see how many industry scientists were included in the ‘scientists believe in global warming’ surveys.
2) Is it not becoming ever more clear that academic research as it currently exists is broken and needs some sort of oversight to fix it? Especially in climate research.

richardscourtney
Reply to  labMunkey
October 7, 2014 1:08 am

labMunkey
In your excellent post you say

For clarity, I am an industry scientist. It is, for us, an open secret that you don’t trust any academic research. It may be a good starting point, it may even have some good data in it, but more often than not it’s either slightly misleading, or often just plain wrong.
We know this because we’ve tried to replicate it, and failed. Wasting time and money in the process.

Yes! I have reproduced it for emphasis, and I could cite several (some funny) anecdotal examples from my decades of involvement in industrial research.
The underlying problem seems to be that academics are rewarded for publishing papers: quantity of publications is important but quality of work is ignored. In industry the quality of work decides whether a research study should continue or not, and quality is decided on the basis of progress towards an objective and/or benefits of ‘spin-offs’ from the work.
Richard

Tom in Florida
Reply to  richardscourtney
October 7, 2014 5:33 am

You are talking about financial profits. That is why true capitalism works to the benefit of us all.

richardscourtney
Reply to  richardscourtney
October 7, 2014 6:25 am

Tom in Florida
No. I was talking about how science is assessed in industry as compared to how it is assessed in academia.
I was NOT talking about political philosophies which are not the subject of this thread.
Richard

rgbatduke
Reply to  richardscourtney
October 7, 2014 1:49 pm

The underlying problem seems to be that academics are rewarded for publishing papers: quantity of publications is important but quality of work is ignored.

Not entirely, but it is one of several reasons I gave up that rat race.
Say “ignored unless it is so spectacular that it smacks you in the face”, in which case it is rewarded, typically ten years too late to do you any real good.
rgb

thingadonta
Reply to  labMunkey
October 7, 2014 6:56 pm

Academia acts as a group competing for limited financial resources, the same as companies in the stock market do; however, academic groups have an unfair advantage in that they are not answerable to the market in anywhere near the same way.
This is the thorny issue of allowing some level of research not purely determined by market forces to exist, but without actually being accountable to market forces to begin with. In fact they are only accountable to government, which inevitably means they will pander to government.
The problematic issue is that once you tie ‘off market’ research funding into competition and the way markets generally operate, academia then starts to act just like any other group competing with other groups, which creates problems. They tend to exist only for themselves, and only those ideas which support the group are the ones that become acceptable. So research and ideas are only acceptable if they support the group’s ideas, agendas, and what generally benefits the broader group. It regresses to tribalism.
Making academia more accountable is a necessity and in everyone’s best interest. Some have suggested reforms to the peer review process, which is part of this. Many other reforms are also required, which is a topic for another time.

Richard T
Reply to  labMunkey
October 7, 2014 7:25 pm

Industry science carries a heavy “burden” for those performing that science — accountability.

dp
October 7, 2014 12:41 am

Drink post? Seems a bit over the top. Loquacious as a minimum. Mosher is succinctly irrelevant but not much more. It does not require tens of paragraphs to make that point.

Alexander Feht
Reply to  dp
October 7, 2014 1:02 am

I disagree. It is not an answer to Mosher (who cares about Mosher?). It is a cry from the depths of the real scientist’s soul, sinking in the vertiginous currents of the New Dark Age.

jorgekafkazar
Reply to  Alexander Feht
October 7, 2014 10:00 am

Good answer, AF. RGB’s comments and posts of any length are always worth reading. And yours, as well.

Jimbo
October 7, 2014 12:43 am

Dr. Richard Betts of the UK’s Met Office made some interesting comments in August. I was amazed at his first sentence though – he could have fooled me. [my bold]

Richard Betts – at 5:38 PM
climate modeller – Met Office – 22 August 2014
“Bish, as always I am slightly bemused over why you think GCMs are so central to climate policy.
Everyone* agrees that the greenhouse effect is real, and that CO2 is a greenhouse gas.
Everyone* agrees that CO2 rise is anthropogenic
Everyone** agrees that we can’t predict the long-term response of the climate to ongoing CO2 rise with great accuracy. It could be large, it could be small. We don’t know. The old-style energy balance models got us this far. We can’t be certain of large changes in future, but can’t rule them out either.
…..
*OK so not quite everyone, but everyone who has thought about it to any reasonable extent
**Apart from a few who think that observations of a decade or three of small forcing can be extrapolated to indicate the response to long-term larger forcing with confidence”
http://www.bishop-hill.net/blog/2014/8/22/its-the-atlantic-wot-dunnit.html

Yet they projected temperatures and still got it wrong. Is Betts saying that politicians should disregard the Met Office / IPCC climate projections when formulating policy?

UK Government – 27 September 2013
Response from Secretary of State Edward Davey to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5): The Latest Assessment of Climate Science
…..Without urgent action to cut greenhouse gas emissions this warming will continue, with potentially dangerous impacts upon our societies and economy. This strengthens the case for international leaders to work for an ambitious, legally binding global agreement in 2015 to cut carbon emissions……
==============
UK Government – 31 March 2014
What are the implications of climate change for the UK?
….Increased economic losses and people affected by extreme heat events: impacts on health and well-being, labour productivity, crop production and air quality……
===============
UK Government – March 2013
1. Policy context
What are the key policy outcomes for the policy programme/area?
Climate models indicate that many parts of the UK are likely to experience more heavy rainfall (leading to flooding), rising sea level and faster coastal erosion, more heat-waves, droughts and extreme weather events as this century progresses. Information on the science of climate change is available on the Government Office for Science pages on climate change. The Climate Change Risk Assessment (CCRA) set out the key risks to the UK from these impacts.
Climate Change can be divided in two policy areas: Climate Change Mitigation and Climate Change Adaptation.
Climate change mitigation deals with limiting the extent of future climate change by reducing greenhouse gas emissions and removing them from the atmosphere.
…..Defra’s role in Climate Change policy…..
https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/221074/pb13912-evidenceplan-climate-change.pdf

Windsong
October 7, 2014 12:45 am

Superb. If only our elected officials would read this.

Tom in Florida
Reply to  Windsong
October 7, 2014 5:35 am

But the only ones who would buy in are the honest ones. Good luck in finding enough of those to make a difference.

Alexander Feht
October 7, 2014 12:54 am

It is comforting to know, Dr. Brown, that some of us, on this long-suffering planet, are still able not only to think clearly but to express their thoughts with the same clarity. Thank you for this comfort.
Alas, you are preaching reason, as history repeats itself again and again, to those who learn with their mother's milk to regard any human activity (including science) as a mutual verbal, cultural, and financial manipulation designed to produce a resulting vector that butters their bread on both sides.
Call it a Brownian social motion, if you will. As long as science remains an “institution,” heavily influenced by governments, it will be a part of the political circus.
Climate science? Even more so, since the green religion serves as a substitute for waning traditional irrational beliefs, while a very large part of the population is genetically selected and predisposed to follow irrational emotional stimuli handed down from above by the preacher-manipulators.
Solution? What is the solution to the human condition?

jorgekafkazar
Reply to  Alexander Feht
October 7, 2014 10:02 am

My solution is to eat lots of chocolate.

Dr. Strangelove
October 7, 2014 12:55 am

rgb
That GCMs are useless for forecasting global temperatures has been well known for a long time. Their errors or uncertainties are 20 times larger than their 100-year forecasts. They are no better than random guesses. Reminds me of von Neumann's flying elephant model.

October 7, 2014 1:02 am

Wow! Dr. Brown, that is a superb and accurate description of the current state of climate science! Thanks a lot for that!

Editor
October 7, 2014 1:09 am

Thanks, RGB. Another excellent comment about the inconsistencies and implausibilities of model-based climate science.
Regards

Martin A
October 7, 2014 1:19 am

Are the computer models reliable?
Computer models are an essential tool in understanding how the climate will respond to changes in greenhouse gas concentrations, and other external effects, such as solar output and volcanoes. Computer models are the only reliable way to predict changes in climate. Their reliability is tested by seeing if they are able to reproduce the past climate which gives scientists confidence that they can also predict the future.
But computer models cannot predict the future exactly. They depend, for example, on assumptions made about the levels of future greenhouse gas emissions.
(Warming Climate change – the facts, Met Office publication, 2009)

David A
Reply to  Martin A
October 7, 2014 3:50 am

What facts in this often wrong statement are you referring to?

rogerknights
Reply to  David A
October 7, 2014 4:54 am

I think “the facts” was the subtitle, but the quoter didn’t make that clear because he didn’t capitalize the words.

Mr Green Genes
October 7, 2014 1:22 am

Thank you Dr. Brown. That is a remarkable piece of work.
You mention the ‘Walmart impact’ as applied to scientists who do not toe the line. Sadly, this applies equally to politicians, who will therefore only countenance funding grants to research which goes along with the consensus. And so the cycle continues.

Jimbo
Reply to  Mr Green Genes
October 7, 2014 1:49 am

You mention consensus, and here is a lesson from the recent past on consensus and mavericks. Science is littered with them and sometimes they push science forward.

Guardian – 5 October 2011
Nobel Prize in Chemistry for dogged work on ‘impossible’ quasicrystals
Daniel Shechtman, who has won the chemistry Nobel for discovering quasicrystals, was initially lambasted for ‘bringing disgrace’ on his research group
…Daniel Shechtman, 70, a researcher at Technion-Israel Institute of Technology in Haifa, received the award for discovering seemingly impossible crystal structures in frozen gobbets of metal that resembled the beautiful patterns seen in Islamic mosaics.
Images of the metals showed their atoms were arranged in a way that broke well-established rules of how crystals formed, a finding that fundamentally altered how chemists view solid matter…..
http://www.theguardian.com/science/2011/oct/05/nobel-prize-chemistry-work-quasicrystals

thingadonta
October 7, 2014 1:22 am

Interesting discussion from an experienced physicist.
I would add one angle to the discussion, as an earth and natural sciences scientist, that a physicist might not.
I think that much of the debate around climate revolves around unconscious and untested Malthusian assumptions: the assumption that biological organisms inevitably tend towards collapse through overuse and depletion of resources is at the heart of pretty much the whole climate debate.
It is the main reason the Climategate affair occurred; it is the main reason the IPCC sticks to its models; it is the main reason the gatekeepers believe it is OK to bend and break the rules. It is the main reason there is a fanatical push for 'consensus': what they are really pushing is fanatical attachment to a Malthusian fundamentalism.
And I think the Malthusians (by which I mean the Club of Rome, Mann et al., those who changed the IPCC 1995 report to reflect their assumptions, the Rio delegation in 1991, and so on) are all basing their approach on untested, latent, hidden assumptions relating to Malthus. There are paradoxes associated with Malthus, and once one blindly accepts a narrow view of those paradoxes, one behaves in a manner consistent with that rejection of paradox, bulldozing over doubts, alternate theories and data, because one has already accepted ultra-Malthusian thinking. One's behaviour follows from blind attachment to the Malthusian model in the first place.
It also follows that addressing the 'issue' would necessarily involve addressing Malthusian assumptions. If these remain untested and unchallenged, you are simply addressing a belief system, not a science. The hidden assumptions surrounding Malthus, biological and market forces, and adaptability need to be addressed before one gets anywhere in the debate.

mpainter
Reply to  thingadonta
October 7, 2014 3:25 am

Malthus in a nutshell:
Population, when unchecked, tends to outgrow the means of subsistence.
This principle is recognized as axiomatic by those who have a foundation in the life sciences. Those who deprecate this principle are revealing their lack of such a foundation.
This principle falters when applied to humankind because of our unique ability to transform our environment to our advantage.
But all other species of life are subject to this universal and profound principle.

David A
Reply to  mpainter
October 7, 2014 4:06 am

Humans are not the only species to adapt to environmental changes. Adaptation is the key to sustaining any population; humans are just at the apex of the ability to adapt. I highly recommend these two posts on that ability. http://chiefio.wordpress.com/2009/05/08/there-is-no-shortage-of-stuff/ E.M. Smith is, among other talents, an economist, and in this post speaks about another economist named Thomas Malthus. Also, this post is a good follow-up. http://chiefio.wordpress.com/2009/03/20/there-is-no-energy-shortage/

Jeff Alberts
Reply to  mpainter
October 7, 2014 7:18 am

David A, mpainter didn’t say “Humans are the only species to adapt”; he said we have a “unique ability to transform our environment to our advantage”. That’s the opposite of adapting to environmental changes: that’s adapting the environment to us. When it’s hot, we turn on the AC; when it’s cold, we throw another log on the fire. We largely live in climate-controlled domiciles, so that the outside weather is of little concern. Your response is based on a misreading, and is therefore irrelevant to his point.

Rud Istvan
Reply to  mpainter
October 7, 2014 7:38 am

MPainter, there are human limits also, despite ingenuity, as explained in my ebook Gaia's Limits. There is a soft limit, food; the book defines soft. And there is a hard limit, liquid transportation fuels.
The book defines hard, and explains how, why, and when (to acceptable limits of precision, in decades). You might find it an educational read, if a bit of a data slog.

Editor
Reply to  mpainter
October 7, 2014 7:51 am

Reply to mpainter ==> Malthus doesn't apply to humans primarily because we are able to create and modify our own means of subsistence: advancing from hunter-gatherers, to agriculturalists, to enhancing crops and food animals, to GMO modification of plants to be salt-, drought- and pest-tolerant, to hydroponics, and possibly to growing animal tissue in factories.
And “you ain’t seen nothin’ yet!” — the future holds unfathomable advances yet to come.

Reply to  mpainter
October 7, 2014 8:45 am

mpainter:
The Malthusian idea is overblown as a “founding” principle of life sciences. The only way it is correct is if the “unchecked” part of your definition means the population is assumed to reproduce at an exponential rate and there is nothing to prevent that growth. Nice on paper. However, in the real world it is trivial to find numerous examples of organisms that do not reproduce out of control, do not overrun their environment, etc. As a result, in practice the Malthusian principle amounts to little more than a (largely-useless and semi-tautological) assertion that “a population will grow exponentially unless there are factors that prevent it from growing exponentially.”
So the Malthusian idea in the life sciences is about as useful as the CO2 warms the planet “all other things being equal” line in climate science.
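A minimal Python sketch, added here for illustration and not part of the original comment, makes the "unchecked vs. checked" distinction concrete; the growth rate r, carrying capacity K and starting population N0 are invented values.

# Contrast unchecked exponential growth with logistic growth, where a
# carrying capacity K supplies the "check" the Malthusian argument omits.
r, K, N0, steps = 0.05, 1000.0, 10.0, 200

unchecked, checked = N0, N0
for _ in range(steps):
    unchecked += r * unchecked                      # dN/dt = r*N
    checked += r * checked * (1.0 - checked / K)    # dN/dt = r*N*(1 - N/K)

print(f"after {steps} steps: unchecked = {unchecked:.0f}, checked = {checked:.0f} (K = {K:.0f})")

Run as written, the unchecked population ends up more than a hundred times the size of the checked one, which settles near K.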

jorgekafkazar
Reply to  mpainter
October 7, 2014 10:10 am

But most terrestrial populations are naturally checked. Excess prey soon yields more predators, and balance is restored.
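As a rough illustration of that predator-prey check (again an added sketch, not from the comment; the rate constants and starting numbers are invented), a minimal Lotka-Volterra integration in Python shows the two populations cycling around a balance rather than growing without limit.

# Simple Euler integration of the Lotka-Volterra predator-prey equations.
alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5
prey, predators, dt, steps = 10.0, 5.0, 0.001, 30000

for step in range(1, steps + 1):
    d_prey = alpha * prey - beta * prey * predators        # prey reproduce and get eaten
    d_pred = delta * prey * predators - gamma * predators  # predators grow on prey, then die back
    prey += d_prey * dt
    predators += d_pred * dt
    if step % 10000 == 0:
        print(f"t = {step * dt:4.0f}: prey = {prey:6.1f}, predators = {predators:5.1f}")

Excess prey produces more predators, which then knock the prey back down, and the cycle repeats.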

mpainter
Reply to  mpainter
October 7, 2014 2:09 pm

Rud Istvan:
Indeed there are limits but how do you identify them? One needs to be able to foretell future developments and I have little faith that anyone has such infallible vision.

mpainter
Reply to  mpainter
October 7, 2014 2:20 pm

Climate reflections:
That is the point: population does not grow exponentially because growth is checked. It is the examination of those checks which is an important part of ecological studies. Malthus did this for humankind and so founded the science of demographics as well as making some contribution to economic thinking.

thingadonta
Reply to  mpainter
October 7, 2014 5:45 pm

“Population, when unchecked…”
This is the paradox. There are many ways that populations get ‘checked’ that don’t involve an inevitable Malthusian collapse, both in nature and with humans.

David A
Reply to  mpainter
October 7, 2014 10:05 pm

Jeff Alberts says…”We largely live in climate-controlled domiciles, so that the outside weather is of little concern. Your response is based on a mis-reading, and therefore irrelevant to his point…”
Well Jeff, yes, that is a more accurate quote, and a valid point. However, my post is very much germane to the subject. After all, putting on a coat is adapting; moving towards natural gas and fracking is adapting. In a sense I am agreeing with the post, and you may find the linked articles strongly supportive of the view that any attempt to apply Malthus's principle to humans is likely doomed to failure, as shown by all the failed predictions of doom.
My concern is the attempt to elevate what Malthus said into some great observation, since it has been so misused; nature tends to have its own set of checks and balances, and rarely does one species eat itself to extinction. Animals do adapt in different ways: snakes gather together underground en masse to hibernate and survive the winter, beavers build marvelous homes, and animals have been known to change how they live and what they eat as conditions and climate have changed.

Canman
Reply to  thingadonta
October 7, 2014 8:07 am

And I think the Malthusians (by which I mean the Club of Rome, Mann et al., those who changed the IPCC 1995 report to reflect their assumptions, the Rio delegation in 1991, and so on) are all basing their approach on untested, latent, hidden assumptions relating to Malthus.

I would have included the greatest Malthusian of all, Paul Ehrlich. He’s still widely admired. Oreskes cites him as a visionary in her recent science fiction book. Mann has a cover blurb by him on his book and calls him a personal hero.

timg56
Reply to  Canman
October 7, 2014 4:01 pm

Which, in my opinion, tells us all we need to know about Oreskes and Mann.

Richard of NZ
Reply to  thingadonta
October 7, 2014 4:39 pm

I feel that all too often assumptions are codified as “fact”.
Consider the following:
Without making any assumptions, answer the following multiple-choice question:
2+2=
1. 4
2. 10
3. 11
4. All of the above
5. None of the above
I would hazard a guess that most people would answer 1 because the assumption that numbers use base 10 has been codified as “fact” and become ingrained, and most people don’t even realise that they are making an assumption. The correct answer is 4, as that allows for any base to be used (except base 2 as the number 2 does not exist in base 2).

thingadonta
Reply to  Richard of NZ
October 7, 2014 5:51 pm

Richard of NZ.
I like it. An untested assumption: who said nature has to use base 10?

Reply to  Richard of NZ
October 8, 2014 10:58 am

“I would hazard a guess that most people would answer 1 because the assumption that numbers use base 10 has been codified as “fact” and become ingrained, and most people don’t even realise that they are making an assumption.”
The assumption is warranted, however, and I wouldn’t consider it a true assumption. Rather, I would say that people are making a presumption. In nearly every situation, humans interact with numerical representations using Base 10 nomenclature. It is the rule, rather than the exception. So I do not believe you could reasonably expect answer 4 to be the correct answer, given the presumption that the number system in use relies upon Base 10.

Reply to  Richard of NZ
October 8, 2014 1:50 pm

The correct answer is 4, as that allows for any base to be used (except base 2 as the number 2 does not exist in base 2).
Not sure I understand. 2+2 = 10 in base 4, and 11 in base 3. You need base 5 or higher for 4 to be the correct answer without making any assumptions.
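For anyone who wants to check the arithmetic, here is a small Python sketch (an addition for illustration, not part of the thread) that renders 2 + 2 in several bases.

def to_base(n, base):
    """Return the digit string of a non-negative integer n in the given base (2-10)."""
    digits = ""
    while n:
        digits = str(n % base) + digits
        n //= base
    return digits or "0"

for base in (3, 4, 5, 10):
    print(f"2 + 2 written in base {base}: {to_base(2 + 2, base)}")

It prints 11 for base 3, 10 for base 4, and 4 only from base 5 upward, which is the correction being made here.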

Stephen Richards
October 7, 2014 1:24 am

Bravo, Dr Brown. Applaud! As I have said before, I wish I had studied my physics with Dr B.
Clear, precise physics.
Mosher is, of course, not a scientist. He appeared to become indoctrinated, rather easily, when he managed to wiggle his way into BEST with Zeke Howyerfather.
OK, so the UK Met Office should be along any minute to tell us why their model(s) have been VV&T'd.

Steve (Paris)
October 7, 2014 1:34 am

Excellent. Simply excellent.

Paul Berberich
October 7, 2014 1:37 am

“The climate is a highly nonlinear chaotic system…..” The global climate is not as chaotic as the local weather. Look at the global temperature: the monthly means vary between 13 °C (Jan) and 17 °C (Jul). This is a seasonal effect due to the unequal distribution of land and oceans between the northern and southern hemispheres. The variation of the annual means is mainly due to the ENSO phenomenon, and the variation of the 30-year means is mainly caused by the AMO, etc. So the “chaos” of the climate is a question of the resolution in time and in temperature. For instance, you can calculate within an error of 5 K how the surface of the earth would cool down if you turned off the sun. That’s not a chaotic process. But nobody is interested in this.
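For what it is worth, here is a crude Python sketch of the kind of non-chaotic energy-balance calculation meant here (added for illustration and far rougher than the 5 K accuracy claimed: it treats the surface as a 50 m ocean mixed layer radiating to space with unit emissivity, both invented simplifications).

SIGMA = 5.67e-8                          # Stefan-Boltzmann constant, W m^-2 K^-4
HEAT_CAPACITY = 1000.0 * 4200.0 * 50.0   # 50 m of water per m^2, J m^-2 K^-1 (assumed)
T = 288.0                                # starting surface temperature, K
dt = 3600.0                              # one-hour time step, s

for hour in range(1, 365 * 24 + 1):
    T -= SIGMA * T**4 * dt / HEAT_CAPACITY   # dT/dt = -sigma*T^4 / C, with no solar input
    if hour % (73 * 24) == 0:                # report every 73 days
        print(f"day {hour // 24:3d}: T = {T:6.1f} K")

Whatever the exact numbers, the integration is a smooth, deterministic cool-down with no sensitivity to initial conditions, which is the point being made.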

David A
Reply to  Paul Berberich
October 7, 2014 4:11 am

Paul, I think that beyond local changes such as centuries of drought in Calif over the last one thousand years, the large swings between glacials and interglacials are a major consideration in global climate change.

Spence_UK
Reply to  Paul Berberich
October 7, 2014 3:06 pm

Paul: you are quite wrong.
Firstly, ENSO and AMO remain utterly unpredictable, ENSO even on a 3-month timescale, despite climate scientists’ best efforts; this shows that ENSO is, to our best understanding, sensitive to initial conditions and chaotic in nature. ENSO and AMO, far from casting doubt, actually underline the importance of chaos to climate.
Noting that ENSO is correlated to global temperatures does not render climate magically predictable. It means you have identified a correlation within the patterns of climate, nothing more, nothing less, most certainly not a means to prediction.
The second important aspect mentioned by Robert in his comment is the problem of fractal dynamics. It is the fractal dynamics that prevents things “averaging out”. Sure, the average temperature at the moment swings between 13 and 17 deg C (plus minus a chunk since absolute temperature is hard to assess). But the interglacial swings – part of the fractal dynamics – were far greater than this, of the order of 8-12 deg C, and the Milankovitch cycles don’t come close to explaining a swing of that size. Fractal dynamics do, and they are confirmed by the even larger swings on the multi-megayear timescales.
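A small numerical illustration of the "does not average out" point (an added sketch, not Spence_UK's; a plain random walk stands in, crudely, for a persistent, long-range-dependent process):

import numpy as np

rng = np.random.default_rng(0)
white = rng.standard_normal(2**16)            # uncorrelated noise
walk = np.cumsum(rng.standard_normal(2**16))  # persistent series (crude stand-in for fractal dynamics)

for n in (16, 256, 4096):
    w = white[: (white.size // n) * n].reshape(-1, n).mean(axis=1)
    p = walk[: (walk.size // n) * n].reshape(-1, n).mean(axis=1)
    print(f"window of {n:5d} samples: spread of window means -> white noise {w.std():.3f}, random walk {p.std():.1f}")

For the white noise the spread of the window means shrinks like 1/sqrt(N); for the persistent series it barely shrinks at all, which is the sense in which such dynamics refuse to average out.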

pete
Reply to  Paul Berberich
October 7, 2014 7:05 pm

The mean global temperature is as useful as the mean temperature of your kitchen. It is utterly irrelevant that the mean is a balmy 20 degrees if your freezer has malfunctioned, filling one side with ice, while your oven has caught fire. If climate change were in fact real, you would conceivably see such wild regional effects with little change in constructed mean temperatures, as the energy distribution across the planet would be significantly altered.
The global mean is not a proxy for heat content in any way, shape or form. It is a constructed statistic that has no physical relevance and is entirely dependent on the statistical method employed to construct it. The fact that it is so easily adjustable makes it an alarmist’s favourite tool.
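As a toy illustration of the "depends on the statistical method" point (an added sketch with invented numbers, not pete's), the same three regions give different "global means" depending on whether or not the average is area-weighted.

regions = [  # (name, temperature in deg C, fraction of surface area) -- invented values
    ("tropics",       26.0, 0.50),
    ("mid-latitudes", 12.0, 0.35),
    ("polar",        -18.0, 0.15),
]

plain_mean = sum(t for _, t, _ in regions) / len(regions)
weighted_mean = sum(t * a for _, t, a in regions) / sum(a for _, _, a in regions)
print(f"unweighted mean of the regions: {plain_mean:5.1f} C")
print(f"area-weighted mean:             {weighted_mean:5.1f} C")

Same data, two defensible procedures, two quite different numbers, neither of which says anything about the total heat content.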

Michel
October 7, 2014 1:41 am

Rhetorical points are counted in a debate, the winner being the one who scores the most. In this sense, maybe, Steve Mosher was right to say that most debates are useless.
A debate can be won with totally wrong arguments.
But that is not the point: the question is which scientific arguments are better and help us progress, e.g. in understanding the taxonomy of a frog or the bending of radiation, and in deriving laws that are applicable to actual situations within a given context, e.g. engineering a response to the threat of tsunamis or sending a man to the moon.
When, out of 54 models reviewed by the IPCC (in AR5), all but one calculate a temperature anomaly above the present one (by up to 0.6 °C), we can only wonder whether these guys think that the real world is wrong because their theories must be right. Here we leave the scientific or technical issue and enter into prophecy.
When scientific issues are dealt with in a scientific manner, it may be more appropriate to speak of a “dispute” rather than a “debate”: at the end of the game (and it may take a long time) one theory will prevail … until it is superseded by a more refined one. As R. Brown rightly describes, this is what happens in scientific societies.
The IPCC is no scientific society; it is a panel of experts appointed to provide advice to governments. Well infiltrated by green advocacy groups, without any democratic foundation, and with unprecedented resources and power, these experts define a modern creed based on their estimate of the significance of those findings that they deem relevant to sustain their beliefs. In place for 26 years now, their continuing livelihood depends on not questioning their prophecies; this makes them dependent, the opposite of what is expected of experts.
Such a modern clergy has no interest in debates, because a creed doesn’t need any proof.

Konrad.
Reply to  Michel
October 7, 2014 2:18 am
hunter
Reply to  Konrad.
October 7, 2014 4:11 am

+10

Reg Nelson
October 7, 2014 1:46 am

A truly frightening thing to consider: What if the Kyoto Protocol had been universally agreed and acted on in 1997?
The recent “Pause” would be renamed “The Great Warming Reduction”. It would be heralded as proof that their Carbon (Dioxide) claims were correct after all. There would be a further call to action, to reduce CO2 to pre-industrial levels.
Recent temperatures (post 1997) would be adjusted down, not up. Antarctic Sea Ice Extent growth would be front page news. Polar Bear populations would be growing.
And perhaps, just perhaps, Michael Mann would actually receive a real Nobel Peace Prize.
Scary thoughts.

Jimbo
Reply to  Reg Nelson
October 7, 2014 2:44 am

+1 This is why some of us argue we should ‘do nothing’.
What we are seeing now is the Cattle Killing Cult of the Xhosa. ‘We must act now, do more and more until we are no more’.

WUWT – 2009
“Historic parallels in our time: the killing of cattle -vs- carbon”
http://wattsupwiththat.com/2009/06/20/historic-parallels-in-our-time-the-killing-of-of-cattle-vs-carbon
=========
Abstract
The Xhosa Cattle-Killing Movement in History and Literature
http://onlinelibrary.wiley.com/doi/10.1111/j.1478-0542.2009.00637.x/abstract

jeanparisot
Reply to  Reg Nelson
October 7, 2014 5:33 am

Reg,
The political power of that coincidence would have transformed American politics. They’d use the correlation to elevate “science” to a state religion. It was a massive bet by the left, and in this chaotic system they had a good chance of making it work, especially since they control the measurements. Luckily, God is laughing at them.
I’ve argued several times that the greens should just declare victory and move on. In their hubris, they can’t consider it.

pete
Reply to  Reg Nelson
October 7, 2014 7:08 pm

This is, IMO, why there was such a rush to implement the alarmist agenda. If the planet continued to warm according to their metrics, we hadn’t acted fast enough and needed to do more. If the warming paused according to their metrics, then it would be evidence that what we were doing was working, but we needed to do more to avoid future catastrophic warming.
They were looking for a no-lose situation, and I am sure they are not stupid enough to think there wouldn’t be another turn in the cycle.

Jimbo
Reply to  pete
October 9, 2014 6:57 am

pete
This is, IMO, why there was such a rush to implement the alarmist agenda.

This is why, after 18 years of no surface warming, an increase in Antarctic sea ice extent, global sea ice back up from the lows of a few years ago, and a bump upwards in Arctic sea ice extent, they are in a panic and keep issuing alarmist statements like…….

‘I got it wrong on climate change – it’s far, far worse’
Nicholas Stern – Guardian – 26 January 2013

What Lord Stern won’t tell you is shown below: his investments and financial interests in carbon schemes, which can be found at the Registry of Parliamentary Interests HERE.

2: Remunerated employment, office, profession etc.
Chairman, Grantham Research Institute on Climate Change and the Environment; Chairman, Centre for Climate Change Economics and Policy
Member, International Advisory Panel, Global Carbon Capture and Storage Institute (Australia)
…..
Member, International Advisory Board, Abengoa SA (Spain)…….

NOTE:
[Abengoa SA (Spain) is engaged in concentrated solar power, 2nd generation biofuels, biomass and wave energy.]
http://www.abengoa.com/web/en/innovacion/areas_de_innovacion/