Guest Post by Dr. Robert G. Brown
The following is an “elevated comment” appearing originally in the comments to “A Rare Debate on the ‘Settled Science’ of Climate Change”, a guest essay by Steve Goreham. It is RG Brown’s reply to the Steven Mosher comment partially quoted at the beginning of the essay. This essay has been lightly edited by occasional WUWT contributor Kip Hansen with the author’s permission and subsequently slightly modified with a postscript by RGB.
rgbatduke
October 3, 2014 at 8:41 am
“…debates are rare because science is not a debate, or more specifically, science does not proceed or advance by verbal debates in front of audiences. You can win a debate and be wrong about the science. Debates prove one thing. Folks who engage in them don’t get it, folks who demand them don’t get it and folks who attend them don’t get it”.
Steven Mosher – comment
Um, Steven [Steven Mosher], it is pretty clear that you’ve never been to a major physics meeting that had a section presenting some unsettled science where the organizers had set up two or more scientists with entirely opposing views to give invited talks and participate in a panel just like the one presented. This isn’t “rare”, it is very nearly standard operating procedure to avoid giving the impression that the organizers are favoring one side or the other of the debate. I have not only attended meetings of this sort, I’ve been one of the two parties directly on the firing line (the topic of discussion was a bit esoteric — whether or not a particular expansion of the Green’s function for the Helmholtz or time-independent Schrodinger equation, which comes with a restriction that one argument must be strictly greater than the other in order for the expansion to converge, could be used to integrate over cells that de facto required the expansion to be used out of order). Sounds a bit, err, “mathy”, right, but would you believe that the debate grew so heated that we were almost (most cordially 🙂 shouting at each other by the end? And not just the primary participants — members of the packed-room audience were up, gesticulating, making pithy observations, validating parts of the math.
You’re right that you can “win the debate and be wrong about the science”, however, for two reasons. One is that in science, we profoundly believe that there is an independent objective standard of truth, and that is nature itself, the world around us. We attempt to build a mathematical-conceptual map to describe the real terrain, but (as any general semantician would tell you) the map is not the terrain; it is at best a representation of the terrain, almost certainly an imperfect one. Many of the maps developed in physics are truly excellent. Others are perhaps flawed, but are “good enough” — they might not lead someone to your cufflinks in the upstairs left dresser drawer, but they can at least get someone to your house. Others simply lead you to the wrong house, in the wrong neighborhood, or lead you out into the middle of the desert to die horribly (metaphorically speaking). In the end, scientific truth is determined by correspondence with real-world data — indeed, real world future data — nothing more, nothing less. There’s a pithy Einstein quote somewhere that makes the point more ably than I can (now there was a debate — one totally unknown patent clerk against an entire scientific establishment vested in Newtonian-Galilean physics 🙂) but I am too lazy to look it up.
Second, human language is often the language of debates and comes with all of the emotionalism and opportunity for logical fallacy inherent in an imprecise, multivalued symbol set. Science, however, is ultimately about mathematics and logic, and requires a kind of logical-mathematical consistency to be a candidate for a possible scientific truth in the sense of correspondence with data. It may be that somebody armed with a dowsing rod can show an extraordinary ability to find your house and your cufflinks when tested some limited number of times with no map at all, but unless they can explain how the dowsing rod works and unless others can replicate their results it doesn’t become anything more than an anecdotal footnote that might — or might not — one day lead to a startling discovery of cuff-linked ley lines with a sound physical basis that fit consistently into a larger schema than we have today. Or it could be that the dowser is a con artist who secretly memorizes a map and whose wife covertly learned where you keep your cufflinks at the hairdresser. Either way, for a theory to be a candidate truth, it cannot contain logical or mathematical contradictions. And even though you would think that this is not really a matter for debate, as mathematics is cut-and-dried, pure (axiomatically contingent) truth — like I said, a room full of theoretical physicists was almost shouting over whether or not the Green’s function expansion could converge out of order, even after I presented both an absolutely clear mathematical argument and direct numerical evidence from a trivial computation that it does not.
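To make the convergence point concrete, here is a hedged numerical sketch (not the original band-structure computation, which is not reproduced here): the static limit of the Helmholtz Green’s function expansion is the familiar multipole series 1/|r − r′| = Σ_l (r_<^l / r_>^(l+1)) P_l(cos γ), which converges only when the smaller radius sits in the r_< slot. Evaluating it “out of order” makes the terms grow geometrically:

```python
# Hedged sketch: the static (k -> 0) analogue of the Helmholtz Green's
# function expansion,
#   1/|r - r'| = sum_l  r_<^l / r_>^(l+1) * P_l(cos gamma),
# converges only with the smaller radius in the r_< slot.
import math

def legendre(l, x):
    """P_l(x) via Bonnet's recursion."""
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def partial_sum(r_small, r_big, cgamma, lmax):
    """Partial sum of the multipole series with r_small in the r_< slot."""
    return sum((r_small ** l / r_big ** (l + 1)) * legendre(l, cgamma)
               for l in range(lmax + 1))

r1, r2, cg = 0.5, 2.0, 0.3
exact = 1.0 / math.sqrt(r1 ** 2 + r2 ** 2 - 2 * r1 * r2 * cg)

ordered = partial_sum(r1, r2, cg, 40)   # correct order: converges rapidly
swapped = partial_sum(r2, r1, cg, 40)   # "out of order": terms grow ~ (r2/r1)^l

print(abs(ordered - exact))   # tiny
print(abs(swapped))           # astronomically large
```

The correctly ordered partial sums match the closed form to many digits while the swapped sums blow up — exactly the kind of trivial computation that settles such an argument.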
Humans become both emotionally and financially attached to their theories, in other words. Emotionally because scientists don’t like being proven wrong any more than anybody else, and are no more noble than the average Joe at admitting it when they are wrong, even after they come to realize in their heart of hearts that it is so. That is, some do and apologize handsomely and actively change their public point of view, but plenty do not — many scientists went to their graves never accepting either the relativistic or quantum revolutions in physics. Financially, we’ve created a world of short-term public funding of science that rewards the short-run winners and punishes — badly — the short-run losers. Grants are typically from 1 to 3 years, and then you have to write them all over again. I quit research in physics primarily because I was sick and tired of participating in this rat race — spending almost a quarter of your grant-funded time writing your next grant proposal, with your ass hanging out over a hollow because if you lose your funding your career is likely enough to be over — you have a very few years (tenure or not) to find new funding in a new field before you get moved into a broom closet and end up teaching junk classes (if tenured) or have to leave to proverbially work at Walmart (without tenure).
Since roughly six people in the room where I was presenting were actively using a broken theory to do computations of crystal band structure, my assertion that the theory they were using was broken was not met with the joy one might expect even though the theory I had developed permitted them to do almost the same computation and end up with a systematically and properly convergent result. I was threatening to pull the bread from the mouths of their children, metaphorically speaking (and vice versa!).
At this point, the forces that give rise to this sort of defensive science are thoroughly entrenched. The tenure system that was intended to prevent this sort of thing has been transformed into a money pump for Universities that can no longer survive without the constant influx of soft and indirect cost money farmed every year by their current tenured faculty, especially those in the sciences. Because in most cases that support comes from the federal government, that is to say our taxes, there is constant pressure to keep the research “relevant” to public interests. There is little money to fund research into (say) the formation of fractal crystal patterns by matter that is slowly condensing into a solid (like a snowflake) unless you can argue that your research will result in improved catalysis, or a way of building new nano-materials, or that condensed matter of this sort might form the basis for a new drug, or…
Or today, of course, that by studying this, you will help promote the understanding of the tiny ice crystals that make up clouds, and thereby promote our understanding of a critical part of the water cycle and albedo feedback in Climate Science and thereby do your bit to stave off the coming Climate Apocalypse.
I mean, seriously. Just go to any of the major search engines and enter “climate” along with anything you like as part of the search string. You would be literally amazed at how many disparate branches of utterly disconnected research manage to sneak some sort of climate connection into their proposals, and then (by necessity) into their abstracts and/or paper text. One cannot study poison dart frogs in the Amazon rainforest any more just because they are pretty, or pretty cool, or even because we might find therapeutically useful substances mixed into the chemical poisons that they generate (medical therapy being a Public Good even more powerful than Climate Science, quite frankly, and everything I say here goes double for dubious connections between biology research and medicine) — one has to argue somewhere that Climate Change might be dooming the poor frogs to extinction before we even have a chance to properly explore them for the next cure to cancer. Studying the frogs just because they are damn interesting, knowledge for its own sake? Forget it. Nobody’s buying.
In this sense, Climate Science is the ultimate save. Let’s face it, lots of poison dart frogs probably don’t produce anything we don’t already know about (if only from studying the first few species decades ago) and the odds of finding a really valuable therapy are slender, however much of a patent-producing home run it might be to succeed. The poor biologists who have made frogs their life work need a Plan B. And here Climate is absolutely perfect! Anybody can do an old fashioned data dredge and find some population of frogs that they are studying that is changing, because ecology and the environment is not static. One subpopulation of frogs is thriving — boo, hiss, cannot use you — but another is decreasing! Oh My Gosh! We’ve discovered a subpopulation of frogs that is succumbing to Climate Change! Their next grant is now a sure thing. They are socially relevant. Their grant reviewers will feel ennobled by renewing them, as they will be protecting Poison Dart Frogs from the ravages of a human-caused changing climate by funding further research into precisely how it is human activity that is causing this subpopulation to diminish.
This isn’t in any sense a metaphor, nor is it only poison dart frogs. Think polar bears — the total population is if anything rapidly rising, but one can always find some part of the Arctic where it is diminishing and blame it on the climate. Think coral reefs — many of them are thriving, some of them are not, those that are not may not be thriving for many reasons, some of those reasons may well be human (e.g. dumping vast amounts of sewage into the water that feeds them, agricultural silt overwashing them, or sure — maybe even climate change). But scientists seeking to write grants to study coral reefs have to have some reason in the public interest to be funded to travel all over the world to really amazing locations and spend their workdays doing what many a tourist pays big money to do once in a lifetime — scuba or snorkel over a tropical coral reef. Since there is literally no change to a coral reef that cannot somehow be attributed to a changing environment (because we refuse to believe that things can just change in and of themselves in a chaotic evolution too complex to linearize and reduce to simple causes), climate change is once again the ultimate save, one where they don’t even have to state that it is occurring now; they can just claim to be studying what will happen when eventually it does, because everybody knows that the models have long since proven that climate change is inevitable. And Oh My! If they discover that a coral reef is bleaching, that some patch of coral growing somewhere in a marginal environment somewhere in the world (as opposed to on one of the near infinity of perfectly healthy coral reefs) is bleaching, then their funding is once again ensured for decades, baby-sitting that particular reef and trying to find more like it so that they can assert that the danger to our reefs is growing.
I do not intend to imply by the above that all science is corrupt, or that scientists are in any sense ill-intentioned or evil. Not at all. Most scientists are quite honest, and most of them are reasonably fair in their assessment of facts and doubt. But scientists have to eat, and for better or worse we have created a world where they are in thrall to their funding. The human brain is a tricky thing, and it is not at all difficult to find a perfectly honest way to present one’s work that nevertheless contains nearly obligatory references to at least the possibility that it is relevant, and the more publicly important that relevance is, the better. I’ve been there myself, and done it myself. You have to. Otherwise you simply won’t get funded, unless you are a lucky recipient of a grant to do e.g. pure mathematics or win a no-strings fellowship or the Nobel Prize and are hence nearly guaranteed a lifetime of renewed grants no matter how they are written.
This is the really sad thing, Steve [Steven Mosher]. Science is supposed to be a debate. What many don’t realize is that peer review is not about the debate. When I review a paper, I’m not passing a judgment as a participant on whether or not its conclusion is correct politically or otherwise (or I shouldn’t be — that is gatekeeping, unless my opinion is directly solicited by an editor because the paper is e.g. critical of my own previous work). I am supposed to be determining whether or not the paper is clear, whether its arguments contain any logical or mathematical inconsistencies, whether it is well enough done to pass muster as “reasonable”, and whether it is worthy of publication, not whether or not it is right or even convincing beyond not being obviously wrong or in direct contradiction of known facts. I might even judge the writing and English to some extent, at least to the point where I make suggestions for the authors to fix.
In climate science, however, the ClimateGate letters openly revealed that it has long since become covertly corrupted, with most of the refereeing being done by a small, closed cabal of researchers who accept one another’s papers and reject as referees (well, technically only “recommend” rejection as referees) any paper that seriously challenges their conclusions. Furthermore, they revealed that this group of researchers was perfectly willing to ruin academic careers and pressure journals to fire any editor that dared to cross them. They corrupted the peer review process itself — articles are no longer judged on the basis of whether or not the science is well presented and moderately sound; they have twisted it so that the very science being challenged by those papers is used as the basis for asserting that they are unsound.
Here’s the logic:
a) We know that human caused climate change is a fact. (We heard this repeatedly asserted in the “debate” above, did we not? It is a fact that CO2 is a radiatively coupled gas, completely ignoring the actual logarithmic curve Goreham presented; it is a fact that our models show that more CO2 must lead to more warming; it is a fact that all sorts of climate changes are soundly observed to have occurred while CO2 was rising, so it is a fact that CO2 is the cause; count the logical and scientific fallacies at your leisure).
b) This paper that I’m reviewing asserts that human caused climate change is not a fact. It therefore contradicts “known science”, because human caused climate change is a fact. Indeed, I can cite hundreds of peer reviewed publications that conclude that it is a fact, so it must be so.
c) Therefore, I recommend rejecting this paper.
It is a good thing that Einstein’s results didn’t occur in Climate Science. He had a hard enough time getting published in physics journals, but physicists more often than not follow the rules and accept a properly written paper without judging whether or not its conclusions are true, with the clear understanding that debate in the literature is precisely where and how this sort of thing should be cleared up, and that if that debate is stifled by gatekeeping, one more or less guarantees that no great scientific revolutions can occur, because radical new ideas, even when correct, are, well, radical. In one stroke they can render the conclusions of entire decades of learned publications by the world’s savants pointless and wrong. This means that physics is just a little bit tolerant of the (possible) crackpot. All too often the crackpot has proven not only to be right, but so right that their names are learned by each succeeding generation of physicists with great reverence.
Maybe that is what is missing in climate science — the lack of any sort of tradition of the maverick being righter than the entire body of established work, a tradition of big mistakes that work amazingly well — until they don’t and demand explanations that prove revolutionary. Once upon a time we celebrated this sort of thing throughout science, but now science itself is one vast bureaucracy, one that actively repels the very mavericks that we rely on to set things right when we go badly astray.
At the moment, I’m reading Gleick’s lovely book on Chaos [Chaos: The Making of a New Science], which outlines both the science and early history of the concept. In it, he repeatedly points out that all of the things above are part of a well-known flaw in science and the scientific method. We (as scientists) are all too often literally blinded by our knowledge. We teach physics by idealizing it from day one, linearizing it on day two, and forcing students to solve problem after problem of linearized, idealized, contrived stuff literally engineered to teach basic principles. In the process we end up with students that are very well trained and skilled and knowledgeable about those principles, but the price we pay is that they all too often find phenomena that fall outside of their linearized and idealized understanding literally inconceivable. This was the barrier that Chaos theory (one of the latest in the long line of revolutions in physics) had to overcome.
And it still hasn’t fully succeeded. The climate is a highly nonlinear chaotic system. Worse, chaos was discovered by Lorenz [Edward Norton Lorenz] in the very first computational climate models. Chaos, right down to apparent period doubling, is clearly visible (IMO) in the 5 million year climate record. Chaotic systems, in a chaotic regime, are nearly uncomputable even for very simple toy problems — that is the essence of Lorenz’s discovery, as his first weather model was crude in the extreme, little more than a toy. What nobody is acknowledging is that current climate models, for all of their computational complexity and enormous size and expense, are still no more than toys, countless orders of magnitude away from the integration scale where we might have some reasonable hope of success. They are being used with gay abandon to generate countless climate trajectories, none of which particularly resemble the climate, and then they are averaged in ways that are an absolute statistical obscenity, as if the linearized average of a Feigenbaum tree of chaotic behavior is somehow a good predictor of the behavior of a chaotic system!
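Lorenz’s essential point is easy to reproduce. The sketch below (a standard textbook illustration, not any particular climate model) integrates his 1963 three-variable toy system twice with initial conditions differing by one part in a billion; the two runs become completely unrelated within a few dozen model time units:

```python
# Hedged sketch of Lorenz's discovery: two runs of his 1963 toy "weather"
# model, identical except for a 1e-9 nudge in one initial coordinate, part
# company completely (the butterfly effect).
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One RK4 step of the Lorenz-63 system with its classic parameters."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(add(state, k1, dt / 2))
    k3 = f(add(state, k2, dt / 2))
    k4 = f(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

a, b = (1.0, 1.0, 1.0), (1.0 + 1e-9, 1.0, 1.0)
max_sep = 0.0
for _ in range(5000):          # integrate to t = 50 with dt = 0.01
    a, b = lorenz_step(a, 0.01), lorenz_step(b, 0.01)
    sep = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)

print(max_sep)   # order 10: both runs live on the same attractor, unrelated
```

A 10⁻⁹ perturbation grows to the full size of the attractor; no amount of parameter polish removes this sensitivity, which is why individual long-run trajectories cannot be predictions.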
This isn’t just dumb, it is beyond dumb. It is literally betraying the roots of the entire discipline for manna.
One of the most interesting papers I have looked at to date on WUWT was one posted a year or three ago in which four prominent climate models were applied to a toy “water world” planet, one with no continents, no axial tilt, literally “nothing interesting” happening, with fixed atmospheric chemistry.
The four models — not at all surprisingly — converged to four completely different steady state descriptions of the planetary weather.
And — trust me! — there isn’t any good reason to think that if those models were run a million times each that any one of them would generate the same probability distribution of outcomes as any other, or that any of those distributions are in any sense “correct” representations of the actual probability distribution of “planetary climates” or their time evolution trajectories. There are wonderful reasons to think exactly the opposite, since the models are solving the problem at a scale that we know is orders of magnitude too coarse to succeed in the general realm of integrating chaotic nonlinear coupled systems of PDEs in fluid dynamics.
Metaphor fails me. It’s not like we are ignorant (any more) about general properties of chaotic systems. There is a wealth of knowledge to draw on at this point. We know about period doubling, period three to chaos, we know about fractal dimension, we know about the dangers of projecting dynamics in a very high dimensional space into lower dimensions, linearizing it, and then solving it. It would be a miracle if climate models worked for even ten years, let alone thirty, or fifty, or a hundred.
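Period doubling, for instance, can be demonstrated in a few lines with the logistic map x → rx(1 − x), the textbook system Feigenbaum analyzed (a hedged illustration of the general phenomenon, not a climate computation):

```python
# Hedged illustration of the "well-known general properties": the logistic
# map x -> r*x*(1-x) runs through the period-doubling cascade to chaos as
# the parameter r is raised.
def attractor_period(r, x=0.2, transient=2000, max_period=64, tol=1e-6):
    """Iterate past the transient, then look for the smallest cycle length."""
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(max_period):
        x = r * x * (1 - x)
        orbit.append(x)
    for p in range(1, max_period // 2):
        if all(abs(orbit[i] - orbit[i + p]) < tol for i in range(max_period - p)):
            return p
    return None   # no short cycle found: chaotic (or period > max_period)

print(attractor_period(2.8))   # 1  (stable fixed point)
print(attractor_period(3.2))   # 2  (first period doubling)
print(attractor_period(3.5))   # 4  (second doubling)
print(attractor_period(3.9))   # None: chaos
```

The attractor’s period doubles again and again over ever-narrower parameter windows until the orbit becomes chaotic — behavior a linearized model is structurally incapable of exhibiting.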
Here’s the climate model argument in a nutshell. CO2 is a greenhouse gas. Increasing it will without any reasonable doubt cause some warming, all things being equal (that is, linearizing the model in our minds before we even begin to write the computation!). The Earth’s climate is clearly at least locally pretty stable, so we’ll start by making this a fundamental principle (stated clearly in the talk above) — The Earth’s Climate is Stable By Default. This requires minimizing or blinding ourselves to any evidence to the contrary, hence the MWP and LIA must go away. Check. This also removes the pesky problem of multiple attractors and the disappearance and appearance of old/new attractors (Lorenz, along with Poincaré [Jules Henri Poincaré], coined the very notion of attractors). Hurst-Kolmogorov statistics, punctuated equilibrium, and all the rest are nonlinear and non-deterministic, so they have to go away. Check. None of the models therefore exhibit them (but the climate does!). They have been carefully written so that they cannot exhibit them!
Fine, so now we’re down to a single attractor, and it has to both be stable when nothing changes and change, linearly, when underlying driving parameters change. This requires linearizing all of the forcings and trivially coupling all of the feedbacks and then searching hard — as pointed out in the talk, very hard indeed! — for some forlorn and non-robust combination of the forcing parameters, some balance of CO2 forcing, aerosol anti-forcing, water vapor feedback, and luck that balances this teetering pen of a system on a metaphorical point and tracks a training set climate for at least some small but carefully selected reference period, naturally, the single period where the balance they discover actually works and one where the climate is actively warming. Since they know that CO2 is the cause, the parameter sets they search around are all centered on “CO2 is the cause” (fixed) plus tweaking the feedbacks until this sort of works.
Now they crank up CO2, and because CO2 is the cause of more warming, they have successfully built a linearized, single attractor system that does not easily admit nonlinear jumps or appearances and disappearances of attractors so that the attractor itself must move monotonically to warmer when CO2 is increasing. They run the model and — gasp! — increasing CO2 makes the whole system warmer!
Now, they haven’t really gotten rid of the pesky attractor problem. They discover when they run the models that in spite of their best efforts they are still chaotic! The models jump all over the place when started with only tiny changes in parametric settings or initial conditions. Sometimes a run just plain cools, in spite of all the additional CO2. Sometimes they heat up and boil over, turning Earth into Venus and melting the polar caps. The variance they obtain is utterly incorrect, because after all, they balanced the parameter space on a point with opposing forcings in order to reproduce the data in the reference period, and one of many prices they have to pay is that the forcings in opposition have the wrong time constants and autocorrelation and the climate attractors are far too shallow, allowing for vast excursions around the old slowly varying attractor instead of selecting a new attractor from the near-infinity of possibilities (one that might well be more efficient at dissipating energy) and favoring its growth at the expense of a far narrower old attractor. But even so, new attractors appear and disappear, and instead of getting a prediction of the Earth’s climate they get an irrelevantly wide shotgun blast of possible future climates (that is, as noted above, probably not even distributed correctly, or at least we haven’t the slightest reason to think that it would be). Anyone who looked at an actual computed trajectory would instantly reject it as a reasonable approximation to the actual climate — variance as much as an order of magnitude too large, wrong time constants, oversensitivity to small changes in forcings or discrete events like volcanoes.
So they bring on the final trick. They average over all of these climates. Say what? Each climate is the result of a physics computation. One with horrible and probably wrong approximations galore in the “physics” determining (for example) what clouds do in a cell from one timestep to the next, but at least one can argue that the computation is in fact modeling an actual climate trajectory in a Universe where that physics and scale turned out to be adequate. The average of the many climates is nothing at all. In the short run, this trick is useful in weather forecasting as long as one doesn’t try to use it much longer than the time required for the set of possible trajectories to smear out and cover the phase space to where the mean is no longer meaningful. This is governed by e.g. the Lyapunov exponents of the chaotic processes. For a while, the trajectories form a predictive bundle, and then they diverge and don’t. Bigger, better computers and finer grained computations can extend the time before divergence slowly, but we’re talking at most weeks, even with the best of modern tools.
In the long run, there isn’t the slightest reason — no, not even a fond hope — that this averaging will in any way be predictive of the weather or climate. There is indeed a near certainty that it will not be, as it isn’t in any other chaotic system studied, so why should it be so in this one? But hey! The overlarge variance goes away! Now the variance of the average of the trajectories looks to the eye like it isn’t insanely out of scale with the observed variance of the climate, neatly hiding the fact that the individual trajectories are obviously wrong and that you aren’t comparing the output of your model to the real climate at all; you are comparing the average of the output of your model to the real climate, when the two are not the same thing!
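A toy version of the averaging trick, with the chaotic logistic map standing in for a climate model (an illustrative sketch only), makes the objection vivid: every individual trajectory swings wildly, while the ensemble average is smooth simply because the swings cancel, not because the average resembles any trajectory the system could actually follow.

```python
# Hedged toy version of the averaging trick: run an ensemble of chaotic
# logistic-map (r = 4) "models" from nearly identical initial conditions,
# then compare any single trajectory with the ensemble average.
N, steps = 400, 300
ensemble = [0.3 + 1e-8 * i for i in range(N)]   # total spread 4e-6 at t = 0

single_traj, mean_traj = [], []
for _ in range(steps):
    ensemble = [4.0 * x * (1.0 - x) for x in ensemble]
    single_traj.append(ensemble[0])              # one "model run"
    mean_traj.append(sum(ensemble) / N)          # the ensemble average

def variance(seq):
    m = sum(seq) / len(seq)
    return sum((s - m) ** 2 for s in seq) / len(seq)

# Skip the first 50 steps, while the ensemble is still a coherent bundle.
v_single = variance(single_traj[50:])   # ~0.125: wild chaotic swings
v_mean = variance(mean_traj[50:])       # far smaller: the swings average away

print(v_single, v_mean)
```

The average has a comfortingly small variance precisely because it is not a trajectory of the system at all; comparing it to the one realized trajectory of the real system is a category error.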
Incidentally, at this point the assertion that the results of the climate models are determined by physics becomes laughable. If I average over the trajectories observed in a chaotic oscillator, does the result converge to the actual trajectory? Seriously dudes, get a grip!
Oh, sorry, it isn’t quite the final trick. They actually average internally over climate runs first, which at least is sort of justifiable as an almost certainly non-convergent sort of Monte Carlo computation of the set of accessible/probable trajectories, even though averaging over the set is a bit pointless when the set doesn’t have the right probability distribution of outcomes or variance or internal autocorrelation. But they end up finding that some of the models actually come out, after all of this, far too close to the actual climate, which sadly is not warming, and that makes it all too easy for the public to enquire why, exactly, we’re dropping a few trillion dollars per decade solving a problem that doesn’t exist.
So they then average over all of the average trajectories! That’s right folks, they take some 36 climate models (not the “twenty” erroneously cited in the presentation, I mean come on, get your facts right even if the estimate for the number of independent models in CMIP5 is more like seven). Some of these run absurdly hot, so hot that if you saw even the average model trajectory by itself you would ask why it is being included at all. Others as noted are dangerously close to a reality that — if proven — means that you lose your funding (and then, Walmart looms). So they average them together, and present the resulting line as if that is a “physics based” “projection” of the future climate. Because they keep the absurdly hot, they balance the nearly realistically cool and hide them under a safely rapidly warming “central estimate”, and get the double bonus that by forming the envelope of all of the models they can create a lower bound (and completely, utterly unfounded) “error estimate” that is barely large enough to reach the actual climate trajectory, so far.
Meh. Just Meh. This is actively insulting, an open abuse of the principles of science, logic, and computer modeling all three. The average of failed models is not a successful model. The average of deterministic microtrajectories is not a deterministic microtrajectory. A microtrajectory numerically generated at a scale inadequate to solve a nonlinear chaotic problem is most unlikely to represent anything like the actual microtrajectory of the actual system. And finally, the system itself realizes at most one of the possible future trajectories available to it from initial conditions subject to the butterfly effect that we cannot even accurately measure at the granularity needed to initialize the computation at the inadequate computational scale we can afford to use.
That’s what Goreham didn’t point out in his talk this time — but should. The GCMs are the ultimate shell game, hiding the pea under an avalanche of misapplied statistical reasoning that nobody but some mathematicians and maverick physicists understand well enough to challenge, and they just don’t seem to give a, uh, “flip”. With a few very notable exceptions, of course.
Rgb
Postscript (from a related slashdot post):
1° C is what one expects from CO2 forcing at all, with no net feedbacks. It is what one expects as the null hypothesis from the very unbelievably simplest of linearized physical models — one where the current temperature is the result of a crossover in feedback so that any warming produces net cooling, any cooling produces net warming. This sort of crossover is key to stabilizing a linearized physical model (like a harmonic oscillator) — small perturbations have to push one back towards equilibrium, and the net displacement from equilibrium is strictly due to the linear response to the additional driving force. We use this all of the time in introductory physics to show how the only effect of solving a vertical harmonic oscillator in external, uniform gravitational field is to shift the equilibrium down by Δy = mg/k. Precisely the same sort of computation, applied to the climate, suggests that ΔT ≈ 1° C at 600 ppm relative to 300 ppm. The null hypothesis for the climate is that it is similarly locally linearly stable, so that perturbing the climate away from equilibrium either way causes negative feedbacks that push it back to equilibrium. We have no empirical foundation for assuming positive feedbacks in the vicinity of the local equilibrium — that’s what linearization is all about!
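For the record, that ≈1° C figure can be reproduced with standard textbook numbers (the Myhre et al. logarithmic forcing fit and a pure Planck response linearized about Earth’s effective emission temperature; these constants are assumptions of this sketch, not taken from the essay above):

```python
# Hedged back-of-the-envelope for the ~1 C no-feedback number, using
# standard textbook values: the Myhre et al. fit dF = 5.35*ln(C/C0) W/m^2
# and a pure Planck (Stefan-Boltzmann) response with no feedbacks.
import math

SIGMA = 5.670e-8              # Stefan-Boltzmann constant, W m^-2 K^-4
F = 240.0                     # mean absorbed solar flux per unit area, W/m^2
T_eff = (F / SIGMA) ** 0.25   # effective emission temperature, ~255 K

dF = 5.35 * math.log(600.0 / 300.0)   # forcing from doubling CO2, ~3.7 W/m^2

# Linearize F = sigma*T^4:  dF = 4*sigma*T^3 * dT  =>  dT = T/(4F) * dF
dT = T_eff / (4.0 * F) * dF

print(round(T_eff, 1), round(dF, 2), round(dT, 2))   # ~255 K, ~3.7 W/m^2, ~1 C
```

This is the same style of linearization as the Δy = mg/k oscillator shift: a small perturbing “force” divided by the restoring stiffness of the linearized system.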
That’s right folks. Climate is what happens over 30+ years of weather, but Hansen and indeed the entire climate research establishment never bothered to falsify the null hypothesis of simple linear response before building enormously complex and unwieldy climate models, building strong positive feedback into those models from the beginning, working tirelessly to “explain” the single stretch of only 20 years in the second half of the 20th century, badly, by balancing the strong feedbacks with a term that was and remains poorly known (aerosols), and asserting that this would be a reliable predictor of future climate.
I personally would argue that historical climate data manifestly a) fail to falsify the null hypothesis; b) strongly support the assertion that the climate is highly naturally variable, as a chaotic nonlinear highly multivariate system is expected to be; and c) that at this point, we have excellent reason to believe that the climate problem is non-computable, quite probably non-computable with any reasonable allocation of computational resources the human species is likely to be able to engineer or afford, even with Moore’s Law, anytime in the next few decades, if Moore’s Law itself doesn’t fail in the meantime. 30 orders of magnitude is 100 doublings — at least half a century. Even then we will face the difficulty of initializing the computation, as we are not going to be able to afford to measure the Earth’s microstate on this scale, and before we have any good reason to think that some sort of interpolatory approximation scheme will succeed in the meantime, we will need theorems in the theory of nonlinear ODEs that I do not believe have yet been proven.
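The doubling arithmetic here is easy to check (a sketch; the Moore’s-law cadence of roughly 18–24 months per doubling is my assumption):

```python
import math

# 30 orders of magnitude between the GCM grid scale and the scale at
# which the microscopic dynamics could actually be resolved:
doublings = math.log2(1e30)
print(round(doublings))        # ~100 doublings of compute capability

# At one doubling every 18-24 months, that is on the order of
# 150-200 years -- comfortably "at least half a century".
print(round(doublings) * 1.5)  # years, assuming 18 months per doubling
```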
rgb
Author: Dr. Robert G. Brown is a Lecturer in Physics at Duke University where he teaches undergraduate introductory physics, undergraduate quantum theory, graduate classical electrodynamics, and graduate mathematical methods of physics. In addition Brown has taught independent study courses in computer science, programming, genetic algorithms, quantum mechanics, information theory, and neural networks.
Moderation and Author’s Replies Note: This elevated comment has been posted at the request of several commenters here. It was edited by occasional WUWT contributor Kip Hansen with the author’s approval. Anything added to the comment was denoted in [square brackets]. There are only a few corrections of typos shown by strikeout [correction]. When in doubt, refer to the original comment here. RGB is currently teaching at Duke University with a very heavy teaching schedule and may not have time to interact or answer your questions.
# # # # #
A Tour de Force!
An excellent post. Many thanks. I also commend the sticky recommendations above.
Mosher, I will give you the same advice you’ve been giving Nick over at CA: you hurt your credibility when you refuse to admit you are wrong.
If science does not proceed or advance by verbal debates in front of audiences, then how does it proceed and advance? It proceeds and advances by successful prediction. Not by verbal theater in front of an audience, but rather by actually doing science.
——————————
At the time that a prediction is made it is not known whether or not it will be successful.
Most predictions are unsuccessful.
Since science only proceeds by successful prediction, many people discover that they were not doing science after all for all the time that they invested into their unsuccessful predictions.
Bummer!
What a superb analysis of how science has become corrupted by our flawed funding models and the cash-cow of ‘global warming’ aka ‘climate change’. Thank-you, Dr. Brown, for the clarity and expertise with which you intertwine your descriptions of chaos theory with allusions to its contributors and history.
Dear Dr. Brown, were I in your presence I would give you a standing ovation! All of your points are dead on, and very well written. I would like to share your message with the world. Here is an article about an unusual grant process that I find interesting.
http://scitation.aip.org/content/aip/magazine/physicstoday/news/news-picks/house-science-committee-continues-unusual-review-of-nsf-grants-a-news-pick-post
Mosher has discredited himself with the post above. It is through openly embracing discussions and debates with qualified scientists of differing points of view, and honestly questioning your own theories when data does not agree, that true knowledge is advanced. This doesn’t include the political spin jobs that you get from politicians and PR people, I agree that those types of debates are worthless. Here is an example. In January Dr. Steve Koonin hosted several highly qualified climate scientists, from both sides of the debate, to discuss their views. The event is available for viewing here.
http://www.aps.org/policy/statements/upload/climate-seminar-transcript.pdf
It is a very worthwhile read, if you are interested in the real data and thought processes behind the attribution of humans to warming. Very telling, from both sides. This is a discussion, and many more just like them, that needs to happen until agreement between sides is reached, if not by the participants then at least by a significant majority of public observers. This is the essence of scientific discovery! I’ll tell you why. Personal bias is one of the worst corrupters of science. Open discussion and debate is the best, and perhaps the only, way to detect and drive out personal bias from skewing conclusions. This is a VERY big deal.
I am not compensated a dime for my opinion. My motivation is purely a devout belief and respect in the honest application of the scientific method, which seems is being trampled on.
One other comment. Another part of my motivation for action is that I will defend America’s liberties and freedoms of choice to the end. Mandating changes in energy sources as proposed requires clearing a high bar for hard evidence. Short of such evidence, any changes with a cost must be voluntary. I agree that there may be some chance for CAGW to occur due to burning fossil fuels; we don’t know. I have heard the analogy with insurance, that people purchase fire insurance for their home even when the risk of fire is very low. But here is the thing. It is VOLUNTARY! Each homeowner (no mortgage) is free to weigh the odds, examine the cost of the insurance and decide whether it is worth the cost. So it is with climate. If CO2 really is warming the world, then when it warms enough that the common man can clearly see and feel that heat, there will automatically be a consensus, and people will voluntarily make changes. The evidence to date is not nearly sufficient, no matter how loudly some scream that it is. Just because there is a possibility a “trip point” could be crossed isn’t good enough to take away people’s freedom of choice, slight chance for the end of human civilization notwithstanding. This is a value choice, and everyone has differing values. Wars are fought over differing values; perhaps this will be no exception.
I respectfully disagree. Mosher is wrong on this occasion (in my opinion) but he has a record of integrity that means he is not entirely discredited.
He’s just too full of himself.
It’s a great debate. No one has discredited himself here. Mosher adds much to WUWT, all to the reader’s benefit.
Excellent read rgb.
As a life scientist who has an appreciation of the long sweep of the near billion year history of life on Earth, I especially appreciate your comments on those studying corals and frogs. These are animal species that have survived tremendous climatic variation over hundreds of millions of years. That prestigious journals and funding agencies expend their limited resources on studies of local impacts of very minor weather variation continues to astonish me.
Life arose on earth about 3.8 billion years ago. Even multicellular life is older than a billion years, possibly developing around 2.1 billion years ago.
“the near billion year history of life on Earth”
Writing this off as a brain fart. 3.2b and counting…
You are confusing the estimated age of the planet with the estimated age of life on said planet.
I think if you check it out, you will quickly discover the estimated age of the earth to be 4.5 billion years and the first signs of life to date to about 4 billion years. For example:
http://en.m.wikipedia.org/wiki/Timeline_of_evolutionary_history_of_life
OK, make that continuous life on the planet. There is speculation that the earth was in a global ice age (snowball earth) less than a billion years ago :
http://en.m.wikipedia.org/wiki/Snowball_Earth
Luboš expresses well why continuous life on the planet, “the near billion year history of life on Earth” as RGB says (which probably should say “the near billion year history of [continuous] life on Earth”), shows why ECS to CO2 forcing must be on the low side:
http://motls.blogspot.ca/2014/10/paper-tcr-ecs-climate-sensitivity-13-16.html
Thanks. You’re right of course. Should have said “complex life”… but even that doesn’t give credit to earlier forms. I recently had the privilege of observing stromatolites at the bottom of the Grand Canyon.
RGB is a physicist and he describes well how physicists are trained and the difficulties that ensue.
Scientists with training in physics and maths have little experience in disentangling the interactions between multiple variables to build an understanding of what is going on. In climate science, as in geology, averaging data sets, for example, simply destroys information. While I would agree entirely about the catastrophic schoolboy errors of scientific judgment made by the modelers (what they do is exactly like taking a temperature trend from, say, Feb–June and projecting it ahead in a straight line for 10 years or so), I disagree strongly with his characterization of the climate system as chaotic. He says:
“We teach physics by idealizing it from day one, linearizing it on day two, and forcing students to solve problem after problem of linearized, idealized, contrived stuff literally engineered to teach basic principles. In the process we end up with students that are very well trained and skilled and knowledgeable about those principles, but the price we pay is that they all too often find phenomena that fall outside of their linearized and idealized understanding literally inconceivable. This was the barrier that Chaos theory (one of the latest in the long line of revolutions in physics) had to overcome.
And it still hasn’t fully succeeded. The climate is a highly nonlinear chaotic system. Worse, chaos was discovered by Lorenz [Edward Norton Lorenz] in the very first computational climate models. Chaos, right down to apparent period doubling, is clearly visible (IMO) in the 5 million year climate record. Chaotic systems, in a chaotic regime, are nearly uncomputable even for very simple toy problems.”
However, to call the climate chaotic, i.e. to imply that useful forecasts cannot be made, is neither useful nor true.
The climate system can be understood by using the sort of approach used in the Geological Sciences to correlate events and recognize evolving patterns in time and space.
It is not possible to forecast the future unless we have a good understanding of the relation of the climate of the present time to the current phases of the different interacting natural quasi-periodicities which fall into two main categories.
a) The orbital long wave Milankovitch eccentricity, obliquity and precessional cycles, which are modulated by
b) Solar “activity” cycles with possibly multi-millennial, millennial, centennial and decadal time scales.
The Milankovitch cycles are stable over hundreds of millions of years, and the solar periodicities too are stable enough for forecasting over useful periods of time.
The convolution of the a and b drivers is mediated through the great oceanic current and atmospheric pressure systems to produce the earth’s climate and weather.
After establishing where we are relative to the long wave periodicities to help forecast decadal and annual changes, we can then look at where earth is in time relative to the periodicities of the PDO, AMO and NAO and ENSO indices and based on past patterns make reasonable forecasts for future decadal periods.
That is not to say that they can be computed using some mathematical formula. Climate science, like geology, is fundamentally an historical science – what you do is build a narrative using all available data, get a feel for the processes involved and the patterns in time and space of the variables of interest, then project the patterns forward in time with due regard also given to any secular changes taking place apart from the factors included in a) and b).
Using this approach a perfectly reasonable narrative of climate change for the last say 3 million years can be elucidated and forecasts for the next few thousand years can be made with reasonable expectation of success. See
http://climatesense-norpag.blogspot.com
Altogether too much time and effort is currently being expended in the literature and the blogosphere on trying to fine-tune estimates of climate sensitivity or balancing the radiative budget or locating the supposed missing heat – essentially basing the investigations on the same assumptions used in the useless GCMs.
The question of most interest now is really: where are we with regard to the current peak in the 1000 year cycle? I’ve made my suggestion. I hope others will take an interest and produce other points of view.
“but would you believe that the debate grew so heated that we were almost (most cordially 🙂 shouting at each other”
I know that disagreements over the metaphysical implications of counterfactual conditionals can lead to unseemly brawls in philosophers’ bars, so, yes, I can easily believe physicists could get a bit worked up over that issue. Not that I understood the issue.
Issawi’s law of social motion: “In any dispute the intensity of feeling is inversely proportional to the value of the stakes at issue.”
This is a statement which is fine in and of itself; it is an accepted fact that climate changes. Unfortunately, a dozen sharks are jumped in order to conflate that fact with AGW being a fact. AGW is neither a fact nor a theory; I am not sure if it is even a hypothesis, since it is so broad and simplistic as to be neither provable nor disprovable.
Amazing article, identifying the psychology and funding driving climate science and the inherent nonsensical scientific foundations of AGW. Below I’ll just highlight a few of the many concise observations made.
I appreciate rgb’s critique of consensus science but have been puzzled why critics have deigned to not come up with better science. The chronic excuses are that processes are irreversible, nonlinear, chaotic … and beyond our abilities. But we do nibble about the edges of such processes with black boxes. Students are still expected to solve for the dissipation of an electric current between two boundary potentials with no knowledge of what’s inside be it chaotic, nonlinear, … Or the rate at which entropy increases given an energy flux through a black box positioned between two thermal reservoirs. Trivial problems, yes. Why so? Because they can be reduced to surface integrals by virtue of conservative or non-divergent fluxes. Thermodynamic states are defined by path independence or exact differentials for state properties. Does this imply a lack of information needed to find their past or future paths? Is this missing information encapsulated in increased entropy? Is climate sensitivity a surface parameter?
“Twenty-first-century theoretical physics is coming out of the chaos revolution. It will be about complexity and its principal tool will be the computer. Its final expression remains to be found. Thermodynamics, as a vital part of theoretical physics, will partake in the transformation.” – M. Baranger (Chaos, Complexity, and Entropy)
quondam
I have no intention of starting an off-topic discussion but I provide an answer to your question so it is not ignored.
You write
Sorry, but that is a misunderstanding.
The real reason for lack of “better science” is lack of an acceptable theory of climate.
The idea that radiative forcing defines climate was adopted, but e.g. the ‘pause’ casts doubt on it. So, amendments to that radiative forcing conjecture are being applied as methods to avoid abandoning it. The conjecture is now at the stage phlogiston theory was at before it was displaced by the oxygenation theory of combustion.
However, there is no clear theory to replace the radiative forcing conjecture. Cyclic behaviour is one idea and solar influence is another, but there is no real evidence for any simple explanation of climate variation (which is not to deny Milankovitch Cycles).
I have argued for decades that global climate is a chaotic system with two main strange attractors which determine glacial and interglacial states (this idea is supported by the climate ‘flickers’ during transitions between these states), and others have also reached similar conclusions. However, there is insufficient knowledge to start constructing even a toy model of such a chaotic system so this idea is as unacceptable as all other conjectures concerning global climate behaviour(s).
It requires much effort to explain that there is a lack of an acceptable theory of climate. The “chronic excuses” you mention are ways to say “We don’t know because we lack an acceptable theory of climate” in a manner acceptable to the public.
I sincerely hope this response is sufficient.
Richard
Quondam, Richardscourtney: You are really asking how to do climate science — see my post at 7/10:18 pm above for the answer. Y’all need a change of mindset – to get some idea of what I am talking about, check in a library Vol 1 of The Geologic Time Scale (Gradstein et al 2012) and see how the geological time scale is cobbled together from different types of data from different fields. Also, when looking at time series, thinking of geological correlation concepts such as type sections (type time series) and golden spikes is very helpful.
Thus e.g when thinking about the past thousand years the Hockey Stick type time series should be replaced with
Fig 9 at http://climatesense-norpag.blogspot.com
which at this time is the most useful reconstruction for identifying NH temperature trends in the latest important millennial cycle, from Christiansen and Ljungqvist 2012 (Fig 5):
http://www.clim-past.net/8/765/2012/cp-8-765-2012.pdf
quondam
October 8, 2014 at 4:12 am
“Thermodynamic states are defined by path independence or exact differentials for state properties. Does this imply a lack of information needed to find their past or future paths? Is this missing information encapsulated in increased entropy? Is climate sensitivity a surface parameter?”
Climate models rely on statistical description. They fail. For a statistical description many instances of the described processes have to happen in one grid box.
This condition is violated all the time. Convective fronts, hurricanes etc. can span a small continent. A statistical description is impossible on the scale of the simulation.
There is no statistical description of thunderstorms. And there is no physical simulation of thunderstorms on a microscale. That would require gridboxes on the meter scale, I guess. And to build such a simulation we would first have to find out what happens in a thunderstorm. Hey, you can ignore lightning and elves and sprites and you’re simulating a kind of PG-13 rated version of a thunderstorm, but not a real thunderstorm.
Oh, your meter-sized gridboxes would also have to have time steps of microseconds.
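For a rough sense of the scale the commenter is guessing at, here is a sketch (the surface area and tropospheric depth are my own illustrative round numbers, not figures from the comment):

```python
# Meter-scale grid boxes for a thunderstorm-resolving global model.
earth_surface_m2 = 5.1e14      # ~5.1e8 km^2 of surface
troposphere_depth_m = 1.2e4    # ~12 km of weather-bearing atmosphere

cells = earth_surface_m2 * troposphere_depth_m
print(f"{cells:.1e} one-meter grid boxes")   # ~6.1e18

# With microsecond time steps, one simulated day alone requires:
steps_per_day = 86400 / 1e-6
print(f"{steps_per_day:.1e} time steps")     # ~8.6e10
```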
Oh my! So from this point of view, physicists are offering excuses for being unable to integrate chaotic systems indefinitely into the future at infinite precision? Look, you really, really need to buy a copy of Gleick’s book:

http://www.amazon.com/Chaos-Making-Science-James-Gleick/dp/0143113453#

and read it. It’s perfectly accessible to a lay person. Maybe then you will know why your reply here is really amazingly funny. Sort of like saying “I have been puzzled why it is taking physicists so long to master FTL travel so we can all build starships and spread out across the Universe. The chronic excuses are that FTL travel violates a half dozen physical and mathematical principles … and is beyond our abilities.” Or “I don’t know why those computer scientists haven’t managed to come up with P solutions to NP-complete problems. Their chronic excuse is that P is probably not equal to NP, so that doing so is … beyond our abilities.”

Seriously, it isn’t that skeptics are lazy, as you seem to imply. It is that the problem whose solution is being sold as trustworthy by the crowd of True Climate Model Believers almost certainly cannot be solved at all in the way they are trying to solve it. Other replies down below indicate why — it’s the bit in the top article about “30 orders of magnitude”. This breaks down as follows:

1 mm — approximate Kolmogorov scale for viscosity in air, the scale of the smallest eddies that are relevant in relaxation processes in turbulent air, the scale of the fluctuations that nucleate the self-organization of larger scale structures.

3 × 10^{-6} seconds — the time required for sound to travel 1 mm.
The spatiotemporal scale at which we might reasonably be able to solve an actual physics computation of the relevant microscopic dynamics:
Cover the Earth’s atmosphere in cubes roughly 1mm across (smaller than the K-scale is OK, larger not so much). Integrate in timesteps of roughly 1 microsecond. Don’t forget to do the same thing to the ocean. Somewhere along the way, work out how to do the radiative dynamics when the mean free path of LWIR photons is order of 1 meter, hence directly statistically integrable on this grid with minimal approximation.
The spatiotemporal scale being used:
100 km x 100 km x 1 km. That is, each grid cell in the higher grid resolution models contains 10^8 x 10^8 x 10^6 = 10^22 cubic millimeters.
5 minutes = 300 seconds: The approximate time for sound to propagate 100 km (they simply ignore the vertical direction and hence the model is non-physical vertically and will not correctly describe vertical transport or relaxation processes).
Note that 300 / (3 x 10^{-6}) = 10^8, and 10^8 x 10^22 = 10^30.
I wasn’t kidding. The computations are being carried out at a scale 10^30 times away from the scale where we might actually expect the solution to work. Although with nonlinear dynamics, it might need to be finer than this.
Note further: in timesteps of 10^{-6} seconds, to advance the solution 30 years requires roughly 10^15 timesteps. If it took only 1 microsecond of actual compute time to advance the solution on the entire grid by 1 timestep, it would still take 1 billion seconds to complete, which is (wait for it) 30 years! That is, to beat the Earth to the solution we’d have to complete a computational timestep in less time than the time the timestep represents in physical units and the integrated dynamics. And if you think a timestep would take a computational microsecond, I have some real estate you might like to look at in New York, a big bridge, y’know…
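The orders of magnitude above can be verified directly (a sketch of the arithmetic, nothing more):

```python
# Millimeter cells per 100 km x 100 km x 1 km GCM grid cell:
mm_cells = 10**8 * 10**8 * 10**6
assert mm_cells == 10**22

# Time-step ratio: a 5-minute model step vs ~3 microseconds,
# the time for sound to cross 1 mm:
step_ratio = 300 / 3e-6
print(f"{step_ratio:.0e}")             # 1e+08

# Combined spatiotemporal gap between the model grid and the
# scale where the microscopic dynamics could be resolved:
print(f"{mm_cells * step_ratio:.0e}")  # 1e+30

# Wall clock: ~10^15 microsecond steps over 30 simulated years; at
# 1 us of compute per step, compute time equals the simulated time.
seconds_30yr = 30 * 365.25 * 86400
steps = seconds_30yr / 1e-6
print(steps * 1e-6 / seconds_30yr)     # ratio of compute time to real time
```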
As for why you need this granularity, it is because there are structures that end up being important to the energy flow in the system that are simply erased by too coarse a computational scale. At best one has to make a blind guess for their integrated effect and hope that the resulting mean field theory works.
But a nonlinear chaotic mean field theory that works is very, very close to being a pure mathematical/physical oxymoron. The discovery of chaos was the discovery that mean field theory breaks down in systems with sufficient complexity. It is the moral equivalent of the discovery of Gödel’s theorem in mathematics and logic — it means that people (like Hilbert) who failed to axiomatize all of mathematics didn’t fail because they were lazy; they failed because it cannot be done. It doesn’t matter how hard you try. You might as well try to solve NP-complete problems in P time, or invent a physical system that violates the second law of thermodynamics. At the very least, the onus of proof is entirely on anyone claiming to have discovered a way to integrate a mean-field climate model that works: they must prove it, both with some actual plausible mathematical arguments (which can begin by explaining why you succeeded in this insanely difficult problem when nobody else has succeeded with mere toy problems that are far simpler) and by — you bet — demonstrating predictive skill.
But let me rise to the challenge. Here it is, the moment you’ve been waiting for:
The Skeptics’ Global Warming Model!
(fanfare, please)
Start with any of the many, many papers that estimate the direct radiative warming one expects from CO_2 only. There’s a nice paper in the American Journal of Physics — a teaching journal — in 2012 by Wilson that presents a nice summary of the literature and the results of simple Modtran computations, which land right around 1 C of direct warming per doubling.

We’ll use a value of 1 C as the no-feedback warming to be expected from doubling CO_2 from 300 ppm to 600 ppm. Let us now build the simplest possible model! It is known that the atmospheric warming this represents is logarithmic in the CO_2 content. That is, we expect to get a temperature anomaly of

ΔT = T_s log2(C/C_0)

for any given partial pressure C of CO_2 relative to the reference partial pressure C_0 at the start. We can set the constant T_s by using the no-feedback expected doubling, ΔT = T_s log2(2) = T_s = 1 C, which is really convenient because then I can write

ΔT(C) = log2(C/C_0)
in centigrade. This is my climate model. All of it. My basic assumption is that CO_2 is a greenhouse gas. It is expected to cause a direct radiative warming of the earth as its concentration increases. The Earth climate system is stable, so I’m going to make a linear response hypothesis — on average, in the vicinity of equilibrium, the response to any small perturbation is to try to restore equilibrium. The additional forcing is over an order of magnitude smaller than the annual variation in forcing due to the Earth’s eccentric orbit, well over two orders of magnitude smaller than the TOA insolation, and far smaller than the daily or even hourly variations with e.g. cloud, snow and ice cover, or variations due to water vapor (the important GHG).
I cannot solve the Navier-Stokes equations for the planetary climate system in any believable way. Nobody can. The simplest assumption is therefore that the feedbacks from the entire collective system are neutral, neither strongly positive nor strongly negative. Any other result would cause the system to nonlinearly and cumulatively respond to the near infinity of natural forcings in a biased way, which would result in a biased random walk (or heck, a plain old random walk) and the system would be unstable to its own noise, like a loudspeaker turned up too high. We could hardly miss such a thing, were it to occur — it would look like a plunge into glaciation or “overnight” emergence from glaciation.
So let’s compare my climate model to the data. HADCRUT4 on WFT clearly shows 0.5 C warming from the general decade 1940–1950 (to avoid cherrypicking an end point, and at appropriate precision) up to the general decade 2004–2014. For Mauna Loa on WFT one has to extrapolate back a bit to reach the start decade, but in numbers of similar precision the ratio C/C_0 ≈ 400/310 ≈ 1.3 is reasonable. Thus:

ΔT = log2(1.3) ≈ 0.4 C.

My model is accurate over a span of roughly seventy years to within 0.1 C, less than the acknowledged error in HADCRUT4, even without worrying about its neglect of things like UHI that might very reasonably make HADCRUT4 an upper-bound (and probably biased) estimate of the temperature anomaly. The record does contain deterministic/natural “noise” (ENSO spikes and the like); my model does not account for this sort of noise in the climate, so of course it won’t do very well in tracking it.

With this absolutely bone-simple model I outshoot all of the models computed in CMIP5 over the exact same span. It won’t work very well on a longer hindcast, of course — because all one learns from looking at data before 1945 is that there was almost as much warming in the span from 1910 to 1945 as there was from 1945 to the present, even without a commensurate increase in CO_2.

But neither do any of the CMIP5 models! In fact, they do no better than my model does — in figure 9.8a of AR5 you can see the multimodel mean skating straight over this early 20th century warming (and you can see how its constituent models don’t even come close to the measured temperature, being consistently high and having all sorts of obviously wrong internal dynamics and timescales and fluctuation scales).

Now, unlike the IPCC modelers, I’m perfectly willing to acknowledge that my model could be improved (and thereby brought into better agreement with the data). For example, the estimate of 1.0 C per doubling of CO_2 is pretty crude, and could just as easily have been 1.2 C or 1.5 C. A simple improvement is to solve for my single parameter, T_s, to get the best fit across the entire span. Without really doing a nonlinear regression, we can match the temperature change pretty precisely with

ΔT ≈ 1.3 log2(C/C_0),

which corresponds to a TCS of 1.3 C from all sources of feedback and forcing!
My model is now dead on — how could it not be — but it is also dead on the midpoint of the most often quoted range for CO_2-only forcing, 1 to 1.5 C. Indeed, it’s scary good.
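The fit can be reproduced in a few lines (a sketch of a one-parameter logarithmic model ΔT = T_s·log2(C/C_0), with T_s the per-doubling sensitivity; the ~310 and ~400 ppm endpoints and ~0.5 C observed warming are the rough figures quoted in the comment):

```python
import math

def model_anomaly(c_ppm, c0_ppm, t_s=1.0):
    # One-parameter logarithmic model: dT = T_s * log2(C / C0)
    return t_s * math.log2(c_ppm / c0_ppm)

# No-feedback version, T_s = 1 C per doubling:
print(round(model_anomaly(400, 310), 2))            # 0.37 C

# Best-fit version, T_s ~ 1.3 C per doubling:
print(round(model_anomaly(400, 310, t_s=1.3), 2))   # 0.48 C, vs ~0.5 C observed
```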
I would never assert that the prediction is that good, because I’m not an idiot. I happen to think that natural variation alone can easily produce temperature deltas of 0.4–0.5 C on precisely the same timescale without any help from CO_2 at all. One such warming episode is clearly visible in the thermometric climate record from the first half of the 20th century. There is also a compelling correspondence between the late 20th century rise and e.g. ENSO events and PDO phase, further supporting an assertion that natural variation is probably so large that my linear response model could easily be off by 100% either way, with the difference being natural. That is, the total warming from CO_2 including feedbacks but not including natural variation and noise could be as little as 0 C — flat out neutral, insensitive altogether — to 2 to 2.5 C — the warming “should” have been twice as great including all feedbacks, but natural variation cancelled it. To put it another way, almost all of the late 20th century warming could have nothing to do with CO_2, or the warming we observe there could have been even greater if it weren’t for partial cancellation due to natural cooling.
But there is little evidence for either one of these — certainly no evidence so compelling that I should feel it necessary to make a choice between them. Until there is, the only rational thing to do is keep it simple, and assume that my simplest possible physics based model is correct until it is falsified by the passage of time. In the meantime, it stands as strong evidence against large positive feedbacks.
Suppose there were, as has so very often been asserted, a strong, positive, water vapor feedback.
Then where the hell is it?
The data is precisely fit by CO_2 alone with no feedback, not over the paltry 15 to 20 years (that aren’t even “climate” according to climate scientists) in which the late 20th century actually warmed, but across the entire range of 70-odd years which is surely enough to represent a meaningful increasing CO_2 climate trend. Indeed, if one smooths the temperature curve in a 20 or 30 year running average, the agreement with my model curve if anything improves — the irrelevant noise goes away.
There isn’t any room for positive feedback. If it occurred, surely we would have observed more than 0.5 C of warming, because that’s exactly what is predicted in a no feedback model.
rgb
RGB: Since you have convincingly shown again the complete uselessness of the GCM approach to forecasting and the small influence of anthropogenic CO2 on climate, would you not agree that it is time to move to a completely different method of climate forecasting, as referred to in my comments above at
7/ 10:18pm and 8/ 6:28 AM and in the series of posts at http://climatesense-norpag.blogspot.com
which provide forecasts of the timing and amount of a possible coming cooling?
Dr. Brown, thank you for another well thought out addition to this post. I have one area that I think may need greater clarity. You stated: “I cannot solve the Navier-Stokes equations for the planetary climate system in any believable way. Nobody can. The simplest assumption is therefore that the feedbacks from the entire collective system are neutral, neither strongly positive nor strongly negative. Any other result would cause the system to nonlinearly and cumulatively respond to the near infinity of natural forcing in a biased way, which would result in a biased random walk (or heck, a plain old random walk) and the system would be unstable to its own noise, like a loudspeaker turned up too high. We could hardly miss such a thing, were it to occur — it would look like a plunge into glaciation or ‘overnight’ emergence from glaciation…”
If the initial effect of additional CO2 is warming, but, with regard to CO2, the earth’s climate system has been shown, both in near time and in the paleo record, to be NOT sensitive to CO2 (the record indicates we have gone through very long warmer and cooler periods with much higher CO2), then would not the center of your “neutral” bounds be a negative response equal to the forcing?
Dr. Brown,
In addition to your model, many climate modelers show how well their models hindcast the past century’s temperature trend. Of course that is a necessary, but by far insufficient, condition for a model to have any predictive skill. The main point is that nearly all multi-million-dollar climate models running on multi-million-dollar computers are beaten by a simple spreadsheet on a desktop or laptop, where the main influences (GHGs, human and volcanic aerosols, and solar) are used as the driving inputs, each scaled by the estimated influence of the individual forcing. See:
http://www.economics.rpi.edu/workingpapers/rpi0411.pdf
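The spreadsheet idea is easy to make concrete: fit the temperature record as a linear combination of forcing series by ordinary least squares. The sketch below uses entirely synthetic data (a toy GHG ramp, a toy 11-year solar cycle, made-up coefficients and noise), so it illustrates the method only, not any real attribution result.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)

# Toy forcing series (synthetic, for illustration only).
ghg = 0.01 * (years - 1900)                             # slow GHG ramp
solar = 0.1 * np.sin(2 * np.pi * (years - 1900) / 11.0)  # 11-yr cycle

# Synthetic "observed" temperature: known weights plus noise.
temp = 0.8 * ghg + 0.3 * solar + rng.normal(0.0, 0.05, years.size)

# The spreadsheet model: ordinary least squares on the forcings.
X = np.column_stack([ghg, solar, np.ones_like(ghg)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(coef)  # first two entries recover roughly 0.8 and 0.3
```

With real forcing and temperature series in place of the synthetic columns, this is exactly the kind of desktop regression the comment describes.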
Dear David A,
I suspect you won’t return to look for this answer, but I’ll give it anyway. I suppose that my reply would have to be a qualified — pretty highly qualified, weak, even wussie, “Yes”. Or in parlance, “sorta”, “maybe”, “it’s plausible”.
The reason I can give no better answer is that two principles are in conflict. One is the logical fallacy of assuming ceteris paribus — trying to connect past behavior (especially across geological time) to the present on the basis of proxy data being leveraged to the very limits of its probable information content is fraught with many perils — we think, for example, that the state of the sun itself was cooler, that CO_2 was higher, that the land mass was different, that the moon was closer, that the tides were higher, that the axial tilt and eccentricity were different, that the mean distance of the Earth from the sun was different. We cannot solve a believable climate model given moderately accurate measurements and knowledge of all of these things in the present, and trying to use what we know or believe about the present in climate models to hindcast the past has so far met with notable failure (as has both recently and repeatedly been reported on WUWT pages).
This makes it difficult to assess the implications of things like the Ordovician-Silurian glaciation (with CO_2 likely over 10x higher than it is today). There are lots of ways one might try to explain this, but in the end they are all storytelling, not science, because the data to support one explanation over the other is truly lost. The sun was cooler. The continents were in a different shape. CO_2 was much higher. The orbital parameters were (no doubt) completely different and we don’t know what they were and cannot possibly infer them from present data. We can do only a little bit better with the many other ice ages spread out over the paleo record, plenty of them occurring with CO_2 over 1000 ppm, as most of the last 600 million years the Earth has had CO_2 levels over 1000 ppm. The current super-low levels seem to be a relatively recent development. Even the recent paleo record — the Pleistocene — exhibits a stunning global bistability with equally stunning internal instability in both the glacial and interglacial phases.
However, the fundamental principle of statistical modeling, if there is such a thing, is ceteris paribus! To be more precise, it is maximum entropy: selecting as a probable future the one that matches our ignorance most closely. We don’t know how to balance out the feedbacks in a nonlinear chaotic system, as they probably aren’t computable by means of any sort of linearization, including the sort that is casually done in every climate model ever written. Every such linearization is an assumption that short spatiotemporal scales can be treated in the mean field approximation in a chaotic system of nonlinear PDEs. That assumption is proven wrong in pretty much every numerical study of nonlinear PDEs that exhibit chaos ever conducted, but it leaves us in a state of near total ignorance about what we should expect for the sign and magnitude of the feedbacks.
In computational mathematics, all that this teaches us is that we cannot numerically solve nonlinear coupled ODEs as predictive instruments out past a certain number of timesteps before the solutions spread out to fill a large phase space of possibilities and are no longer predictively useful. It also teaches us that monkeying with the granularity of the solution or making mean field approximations leads us to a (and this is something the climate guys just don’t seem to get) different phase space altogether — quite possibly one with no chaos at all. In chaotic systems, the human adage “don’t sweat the small stuff” is precisely contradicted. You have to sweat the small stuff. It’s that damn butterfly, beating its wings and then everything changes.
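The “sweat the small stuff” point can be demonstrated in a few lines with the simplest chaotic system there is, the logistic map (a toy illustration, emphatically not a climate model): two trajectories that start one part in ten billion apart become completely uncorrelated within a few dozen steps.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; fully chaotic at r = 4."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10  # initial conditions differing in the 10th decimal
max_gap = 0.0
for step in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# The tiny perturbation roughly doubles every step and saturates at
# order one: the butterfly, in four lines of code.
print(max_gap)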
When scientists deal with a chaotic system such as the climate in nature, then, what to do? In my opinion the best thing to do is construct a linearized statistical model that maximizes our ignorance (in context, admits that we do not know and cannot reasonably compute the feedbacks either in detail or with heuristic arguments that are anything more than “plausible”, not “probably true”) and then beware the Black Swan!
Since I’m having great fun posting fabulous books everyone engaged in this debate should read, let me throw this one in:
http://www.amazon.com/Black-Swan-Impact-Highly-Improbable-ebook/dp/B00139XTG4
The author really needs to redo his cover art. The Black Swan on the cover should be depicted à la Escher as a fractal image of chaos in the form of a black swan, with a surface boundary of complexity all the way down (instead of turtles!). Chaos theory is precisely the fundamental theoretical justification for not taking our linearized predictions at all seriously, even if we knew all of the linear responses of the system to forcings in every independent dimension quite precisely. In ordinary linearized physics, knowing all of the partial derivatives gives you the gradient, and the gradient tells you what the system will do. Except of course when it doesn’t — for example at critical points. But chaotic systems can be viewed as one vast multicritical point, a multicritical volume of phase space. Knowing the gradient might — note well, might — give you some reason to believe that you can extrapolate/integrate a very short time into the future. Empirically, that horizon is two to three weeks, before one tiny, ignored feature on a single feather of the black swan grows to completely replace it with a swan of a startlingly different shape, only to be replaced in turn a few weeks later by a tiny feature on its beak, and so on. It’s like predicting a kaleidoscope, or the top card drawn from a deck of well-shuffled cards, only worse — the kaleidoscope can turn itself into a different toy altogether, the usual white swan can change color altogether, the nice warm interglacial can suddenly decide to race into glaciation, with no “explanation” needed beyond the chaotic nature of the dynamics themselves.
Again, this eludes climate scientists. They think that the Younger Dryas requires explanation. Well it might have one — ex post facto. It might even have several. But even armed with the explanatory events (if there are any!) beforehand, it was probably not certain, and even if there were no explanatory events at all — no giant asteroid impacts, no sudden drainings of lakes, or at least no more than usual with nothing particularly “special” about them — it might have still just happened. That’s the spooky thing about chaos. Try explaining the trajectory of (say) a double pendulum in the chaotic phase as it dances around like a mad thing in terms of its normal modes. (I have had students build these things a couple of times as extra credit projects in intro mechanics and have one sitting in my office — great fun to play with:
http://video.mit.edu/watch/double-pendulum-6392/
Only the last bit, at the very end, is the linearized principle mode oscillation where the two rods move in phase like a simple oscillator. Right before this mode established itself via damping, the oscillation could probably have been decomposed into a linear combination of this and the other mode where they oscillate in opposite directions.)
This is no more than a metaphor for the climate, of course. The climate is breathtakingly more complex and is a driven oscillator. Here is a video from 1982 demonstrating the time evolution of a van der Pol oscillator — an absolutely trivial nonlinear driven oscillator in only two dimensions. The thing to note is that tiny changes in the driving parameters don’t make small changes in an otherwise systematic “orbit” — they cause the system to completely rearrange itself in any regime where there are multiple attractors. If you looked at only (say) the y-projection of this motion, it would not look trivially like an oscillator.
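For anyone who wants to reproduce the qualitative behavior, here is a minimal sketch of a sinusoidally driven van der Pol oscillator. The parameter values and the crude forward-Euler step are my own illustrative choices, not taken from the video: the point is that nudging the drive amplitude or frequency in a multiple-attractor regime reorganizes the whole trajectory rather than perturbing it slightly.

```python
import math

def vdp_step(x, v, t, dt, mu=5.0, a=1.2, omega=2.0 * math.pi / 10.0):
    """One forward-Euler step of the driven van der Pol oscillator:
        x'' - mu*(1 - x^2)*x' + x = a*cos(omega*t)
    mu, a and omega are illustrative values only."""
    dv = mu * (1.0 - x * x) * v - x + a * math.cos(omega * t)
    return x + v * dt, v + dv * dt

x, v, t, dt = 0.1, 0.0, 0.0, 0.001
traj = []
for _ in range(100_000):  # integrate 100 time units
    x, v = vdp_step(x, v, t, dt)
    t += dt
    traj.append(x)

# The motion settles onto a bounded attractor; rerunning with a or
# omega changed slightly can land on a qualitatively different orbit.
print(min(traj), max(traj))
```

Plot x against v (or just the x series, as the comment suggests) and the “not trivially an oscillator” character is obvious at a glance.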
All of this should make us mistrust any sort of “no Black Swan” conclusion based on the kind of linearization argument I give above and that you correctly point out could be used to argue more strongly for negative feedback. I offer the linearized no-feedback model not because I think it has strong predictive value into either the past or the future but because it is the best we can do, given our ignorance and inability to do better. It is quite literally a null hypothesis, and should be trusted for just as long as it works, while acknowledging that the climate system is fundamentally unpredictable and could go all postal on us in any direction even without any anthropogenic help.
The one thing the catastrophists have dead right is this. For better or worse, the CO_2 we have added and will continue to add is most likely — in the statistically and physically defensible sense that this is predicted by a mix of physics and ignorance — going to produce a degree or so of warming by the time atmospheric CO_2 reaches 600 ppm. This is not a confident prediction — the oceans could eat the heat. Water vapor could double this. Clouds could entirely cancel it. Volcanoes could erupt and alter aerosols in ways that completely change things. The sun could go quiet for a half century or more and have some occult (but nonlinearly amplified!) effect on the climate. Thermohaline circulation could suddenly discover a new mode and the Gulf Stream could suddenly divert 500 miles south and leave the entire northeast coast of the US and all of Europe in the deep freeze and trigger an ice age (seriously, that’s all that would be needed — 100 years, a blink of an eye in geological time, of diverted gulf stream and we’d enter the next glacial episode, IMO).
We can predict none of this. We cannot really measure any of it to any meaningful precision, and climate scientists are apparently just realizing that their assumptions about the ocean in all of the climate models are dead wrong. Oops.
That means that we do not know the effects of the extra degree of warming. We do not even know how to estimate the stability of the climate system. We do not know the partial derivative field for a large-scale convective eddy in the ocean in response to an irregularly distributed change in its surface thermal field and driving. We cannot calculate it. We do not know. We do not know. We do not know.
But it is quite possible that the result will be bad. Or good! Or a wash, neither bad nor good.
What catastrophe theorists should be doing is trying to guesstimate not the direct response, which is already accurate enough in my toy model until nature says otherwise, but the probability of a catastrophic response in the nonlinear system. At a guess, we’re 30 to 50 years away from even a semi-phenomenological theory that could handle this, and that could be optimistic, as we aren’t even really trying and haven’t started looking, dicking around instead with simulations at a silly spatiotemporal length scale and with mean field approximations galore that we shouldn’t expect to come close to the behavior of the actual system in any quantitative sense that matters.
Hence my own opinion. Up to now, I suspect that the additional CO_2 has been on average a good thing. Still, if I had my druthers, we would not rocket it up to 600 ppm, simply because it will almost certainly cause a rearrangement — possibly a dramatic rearrangement — of the underlying attractors as we do. We cannot count on past observations of climate to be particularly good predictors of future climate, and cannot rule out new hot attractors forming, and we know that there is a cold phase attractor lurking, because the Earth is currently bistable on geological time and in a rather protracted excursion into the warm phase. I’m hoping that all we have done (and will do with the rest of the additional degree) is stabilize the system against a return to glaciation. Wouldn’t that be lucky! But even without the CO_2, the climate varies by enough to cause massive droughts and heat waves and cold waves and floods and hurricanes and disasters aplenty.
For better or worse, we’ve jumped on the back of this particular shark. There is no safe way to get off, and trying to the extent that we are trying is killing millions of people a year because we’ve panicked and are doing it stupidly, trying to fling ourselves from its back and not worrying about whether or not we’ll land in a bed of broken glass. In our panic, we don’t even know that it is a shark — it could be a friendly dolphin, carrying us away to Atlantis and a world with a truly human-optimal climate — so far, the ride has if anything been mostly beneficial and we haven’t so much as caught a glimpse of teeth.
All things being equal, it is probably a good idea not to jab the putative shark with spurs, even if it might turn out to be a sadomasochistic dolphin instead, while at the same time not launching ourselves from the back altogether and damning several billion people to a continuation of a life of abject poverty. Including us, when the effort inevitably triggers a massive depression, because we do not have the technology yet to safely dismount.
rgb
“Skeptics’ Global Warming Model!” – I love a climate model that can be run on a TI-85 calculator and still be as accurate as the computer farms the IPCC modellers use. Priceless.
So how much funding do you need to develop this model further? I’m thinking some field research type grants in the islands of Hawaii are in order for you sir.
PS. The book Chaos is a must read. I read it about 15 years ago. Just an all around excellent description of a concept and field way beyond my level but brought to me in a way I could understand.
Sure. But climate science is precisely about the state of what is inside the box, not about the fact that the surface integrals have to balance. And when what is inside of the box is a truly stupendously large thermal reservoir and what comes out of the box is governed by dynamic chaotic processes inside of the box that can obviously be at least fully bistable, making inferences about what is inside of the box and its probable response to other stuff happening inside the box on the basis of surface integrals is just silly.
Bear in mind that I absolutely love surface integrals and vector calculus and am teaching electrodynamics at this very moment, which is rife with precisely what you assert. But in electrodynamics we also have (usually) linear response assumptions that let us connect things like polarizability to field inside the box from a knowledge of the surface states, and we have other things like uniqueness theorems that work for the linear PDEs that describe the contents.
Open systems, especially open self-organizing nonlinear multicritical systems, are not quite so amenable to this sort of analysis. Oh, one can do it and arrive at some insight — detailed balance is still a condition on open systems in some sort of local equilibrium and not at or particularly near a critical instability — but to figure out the inside of the box one has to be able to go inside the box and look, and what one discovers can easily be non-computable, however much one can explain what one observes ex post facto.
A trivial example. I can easily understand how turbulent rolls form in a container heated in a certain way and cooled in another. I can absolutely understand that heat flows in on one side and flows out on the other when it is in a kind of “steady state”. However, in certain regimes what I might observe is a modulation of that heat flow indicative of a critical instability that completely changes the pattern of turbulence and temperature inside of the container. My surface measurements become nearly irrelevant to determining the state of the box, because the internal state is no longer uniquely determined by the surface observation.
rgb
average joe
October 7, 2014 at 8:58 pm
==========
I’m not going to comment on the following link you provided so as to not direct/steer the topic of this discussion/thread but it is relevant. Thank you.
http://scitation.aip.org/content/aip/magazine/physicstoday/news/news-picks/house-science-committee-continues-unusual-review-of-nsf-grants-a-news-pick-post
What should the debate be about?
The debate I think most skeptics would like to see is a series of primetime debates focused on “Are the consequences of the human additions of CO2 to the atmosphere catastrophic, or beneficial?” Such a primetime debate between the “hockey team” and skeptical scientists like Craig Idso (heading the benefits-of-CO2 portion of the debate) and specialists in atmospheric physics, sea level rise, etc., would be great fun to watch, and highly educational for the general public.
Such a focused debate also would likely advance science tremendously, as the resources being diverted to CAGW force a corruption onto current science, and the public policy as a result of this corrupted science has raised the cost of energy globally causing immense economic harm.
A wealthy society is far more capable of doing the real research which business is not inclined to support. So for the above reasons as well, honest focused public debate on the theory of CAGW could greatly advance science, and society.
– – – – – – – – – –
David A,
The thrust of the thread is going to where you are. : )
If climate focused science were an open and totally publicly accessible argument among antagonists and protagonists of various theories, and also of various research approach strategies, the level of funding by public taxation would be much more severely criticized and effectively scrutinized for reasonable management. The extent to which that kind of situation was discouraged by government, focused on the ‘settled science’ proponents who said, conveniently, that there should be no argument / debate, is the extent to which the public was overtaxed to support the past 20+ years of research.
The concentration of research in a political body, as now and for the last 20+ years, is arguably the most inefficient scenario. Consider that a very wealthy society means individuals are wealthier and very highly educated. The argument that such people are un-enlightened, so the government must levy involuntary taxes on them and centralize efforts to do enlightened research, is an illogical argument.
John
Simplistic thought:
There’s a difference between scientific debate and political debate. We expect everyone to engage (or be able to engage) with political debate.
But scientific debate does require some technical knowledge that acts as a gatekeeper to the debate.
My simple thought is that policy (political debate) is being shepherded away from people’s democratic right to choose by use of “science”. We mere mortals are barred from the debate.
Is that error due to poor science education?
Is that error due to poor political education?
Or both?
– – – – – – – –
M Courtney,
Interesting aspect.
Before I consider a comment in response, do you intend your comment to be an extended discussion of my comment? Or do you intend it to be a comment on the general topic addressed, but not specifically addressing my comment? If the former, please explain the context that makes it an extended discussion of my comment. If the latter, no problem; I am interested. I would sincerely appreciate your clarification.
John
I am curious. What are the standards of the “gatekeepers?”
Can i join?
I hope I know where you may be coming from. In a democratically elected government maybe there should really be some form of questionnaire as to the eligibility of the voters? If this is the case I could consider joining you.
Thank you John. When I was a kid I remember hearing some adults lamenting about the Government, saying, “If they could tax the very air you breathe, they would.” The desire for power over others is, alas, never absent in any society. Human nature manifests in every form of government, and in every human group of any kind.
The error Mr. Courtney spoke of, that “policy (political debate) is being shepherded away from people’s democratic right to choose by use of ‘science’”, is accurate but, in my view, understated. I would say “people’s liberty of free choice is being shepherded away by post-normal science, or politicized science.” Libertarians believe in free choice, so long as that choice does not do physical or moral harm (like theft) to another. By the claims of CAGW, power-mad politicians seek to gain the moral imperative to establish central authority. Those claims, if they were clearly true, would even gain credibility with libertarians.
I believe that people are sufficiently educated to understand a series of debates structured as I suggest.
I am a pure scientific layman. I have virtually no math skills beyond basic algebra. (I suffered from the “new math” taught in the early 1970s, and my interests were more philosophical.)
Yet the arguments against CAGW are so common-sense and basic to the simplest understanding of the scientific process that the average person can readily grasp the basic concepts:
The models are all wrong in one direction, predicting too much warming. The failed models all make CO2 the strongest driver of atmospheric temperature. Simply reducing the climate sensitivity to a doubling of CO2 to a non-dangerous and likely beneficial level, as the OBSERVATIONS show, would make ALL the models far more accurate. Their systemic error in ONE direction is informative, and indicative of a FUNDAMENTAL error in CAGW assumptions. The catastrophic consequences predicted (acceleration in sea level rise; more frequent and more powerful hurricanes; droughts, floods, fires, etc.), all forecast relentlessly in the media and in “scientific” publications, are failing to materialize. The KNOWN benefits of additional CO2 are manifesting around the world, producing 12 to 15 percent more food, with no additional land or water required, than would happen in a 280 ppm CO2 world.
Proponents of CAGW run from this debate that moves to the heart of public policy with regard to the failed predictions of CAGW.
M Courtney says: October 8, 2014 at 8:33 am
A reasonable-sounding answer can be written for all four possibilities (the fourth implicit: neither). Those who wish to subjugate will use any tool, and the average person will willingly seek out their chains regardless. There is no war to win but rather a battle to constantly fight. Education is always a mighty weapon.
– – – – – – – – –
David A,
I agree in the sense that science is merely applied reasoning which is a capacity that all humans can exercise or not by their own choice and independently of any level of education. Applied reasoning is not restricted to science. Applied reasoning is not sourced from science. Even a small exposure through public debate of climate focused science would naturally create some more focus on it. So, back to my point that a significant problem with climate focused science has been a restriction on debate by scientists and governments. Naturally, when the basic applied reasoning observes some illogical and unreasonable research that becomes known through debate, then scrutiny on its funding and management would occur as happens in open society. Self-correction of climate focused science is restricted without open public debate.
I do not know what M. Courtney was implying or suggesting. I had asked him to clarify but have not seen him do so yet. I am going to wait for a while to see if he responds to my request for clarification before I venture to comment to him.
John
Sorry John, missed the response.
My comment was just an idea arising from the two aspects of public debate that you and DavidA were talking about.
We really do expect everyone to understand the policy implications of AGW. But we don’t expect everyone to understand and evaluate ‘the Bayesian priors that judge whether the climate models are accurately predicting Arctic ice loss’ for example.
It struck me that noting the difference in the public debates should be obvious to all. Yet it really isn’t addressed at all in the media and wasn’t explicitly addressed in your comment.
I missed you because of these threaded comments… but the debate wouldn’t be had anyway without them.
Still, just searching on “Courtney” doesn’t help me a lot.
It appears 26 times at the moment on this thread… about to be 27.
– – – – – – – – –
M Courtney,
The current sub-thread debate here is on whether there should be a debate on climate focused science, and that sub-thread debate is not per se a scientific debate. It is a debate about a part of our culture that may be restricting scientific self-correction of climate focused research funded by publicly owned institutions. The sub-thread debate necessarily borders on a political debate. It is not a dichotomous situation of scientific debate versus political debate, since there is the involvement of public (politically controlled) funds.
I do not think the subject problematic climate focused science research stems from lack of education or miseducation or absence of applied reasoning. Likewise, I do not think the lack of public debate exposing the problem stems from those things either.
I think the basic climate focused science problem, which appears not to be self-correcting, is a basic structural problem in modern science that has developed in the last ~50 years. The structural problem is the centralized and politically sourced control of the human endeavor called climate focused science. In my view, the only way to mitigate the problems of the current structure is the creation of multiple alternate structures, such as privately formed consortiums, which compete with the problematic public central control of science that currently exists. Those alternate structures cannot be blocked from debating climate focused science even when the politically controlled structure of modern science opposes any debate.
There should be a free marketplace of climate focused science in fierce competition with our current politically controlled source and structure of modern science.
John
PS – I just this minute saw your comments M Courtney on October 9, 2014 at 8:56 am and M Courtney on October 9, 2014 at 8:59 am. Will respond to them separately.
Great post, great discussion. This is why I read WUWT.
Congratulations to rgbatduke for another wonderful contribution.
The image of climate scientists adjusting springs and dampers trying to balance a GCM on a point has got to be the source for another cartoon by Josh.
Very nice article. I recall a physicist suggesting that the speed of light in the early universe may have been different. He required help with the math, and the only help he got was covert. The helper did not want anyone to know he was even assisting the physicist, lest his own career be over. You don’t screw with the sacrosanct, even in science.
I hope that we as the “west” take Dr. Brown’s admonitions about the destructive effects of the anti-science process characteristic of CAGW to heart — otherwise we stand on the threshold of another dark age of “Biblical Proportions”
The Pakistani physicist Pervez Amirali Hoodbhoy, who now cannot teach in his native Pakistan, discussed the collapse of Arab, or Muslim, science in his Physics Today essay, “Science and the Islamic world — The quest for rapprochement.” Prof. Hoodbhoy traces the decline of Arab, or Muslim, science from its golden age to the writing of the twelfth-century Muslim scholar Abu Hamid al-Ghazali (died 1111), who argued in his book, “The Incoherence of the Philosophers,” against both the Greeks and their Muslim followers (such as al-Farabi and Avicenna), that Aristotelian reason itself was the enemy of Islam, because it teaches us to discover, question, and innovate.
What Abu Hamid al-Ghazali proclaimed, just as militant proponents of CAGW proclaim today, was that all science was “settled science.” With that revelation, for all intents and purposes, Arab science settled in the 12th century.
In contrast, the world as a whole has benefited from the “western view” that has prevailed until now: that no science is settled science; all of our theories and models must be continuously tested against the ultimate measure, i.e. nature itself.
What I like about physicists and their debates is that they seem to call “crap” pretty easily and often on new ideas. They seem to be like slightly polite great white sharks, so when you pony up your crap, you better be able to defend it in the deep end of the pool. Contrast this with climate scientists, where great care is taken NOT to have to defend your ideas, with others joining in from politics saying that you don’t need to defend your ideas either. So why do these people get a pass? Because research is purchased, and you get what you are paying for, or you don’t pay. So pure research seems to be at odds with capitalism, or at least at odds with the profit this quarter.
DatHay says, “Because research is purchased, and you get what you are paying for, or you don’t pay. So pure research seems to be at odds with capitalism, or at least at odds with the profit this quarter.”
Sir, I humbly suggest that central governments are not capitalism. It is central governments, and their eternal quest for more authority, that are supporting the corrupted “science” of CAGW. Of course capitalists, like ANY human group, are corruptible as well. For instance, Google just attacked all CAGW skeptics, yet those particular “capitalists” are heavily invested in alternative energy.
If you read several comments made by some of the engineers here, they talk of how engineers employed in the private sector have to make things work, a pressure government-funded science does not, in general, have (one notable exception being the US atomic bomb project). However, as some posters here stated, the science of pure research needs funding that does not expect a payday, be it government funding or private funding sans the monetary objective.
“she’s buying a stairway to heaven”
Thank you Dr Brown, another fine comment.
James Gleick’s book is very accessible to a layman; I have reread it several times.
The temptation to see the world in linear terms is very high, I suspect, as our maps only work on the linear problems. To have to concede that one does not know and cannot model the weather, turbulent flow, and huge chunks of reality takes a certain self-confidence that seems to be largely absent in academia today.
As well as being career ending in the bureaus of butt covering that now “manage” our governance.
As far as replying to Steven goes, umm, Mr Mosher needs to define his terms; there appears to be some same-words-different-language slippage.
That 5:08 response, while more coherent than the usual snipage, is sad.
Another (belated) thanks to rgb@duke, an American national treasure.
While the maths and physics that he discusses are way beyond me, the principles are not.
Urban “planning” is a good example that the less scientifically gifted among us can understand. Urban planners claim to predict traffic flows, lifestyle preferences, employment hubs (and who will be employed), etc. decades into the future. Unless their “predictions” are buttressed by government fiat, bribes and penalties, they are usually wrong.
In 1950, who could have predicted Silicon Valley? Or the demise of Detroit?
Students of history know that the future is unpredictable, and that there will never be a computer model, no matter how many zillion gigaflops it commands, which will change that.
Silicon Valley is generally considered to have been “born” in October 1951, when the first lease in Stanford Industrial Park was signed with Varian. But some date it to 1949, when new Stanford President Sterling and Dean Terman decided to go after defense-industry money, setting off the development of integrated circuits and digital technology, then personal computers, then the Internet and network connectivity (Cisco), and now mobile and social networking. Others point to 1939, when Hewlett and Packard started their company in a garage:
http://www.wipo.int/edocs/mdocs/arab/en/wipo_idb_ip_ryd_07/wipo_idb_ip_ryd_07_1.pdf
Before the ’50s, Santa Clara County was called Blossom Valley. And Redwood City, San Mateo County, still boasted “Climate Best by Government Test!” in the late ’60s.
A good essay, clearly triggered by a specious remark about debates. That by itself is pleasing: to see something superficial lead to something so substantial. I hope the folks at the APS who are reviewing their position on global warming, etc., will take note.
Wow! I just revisited this thread and see that rgb has added muchos comments throughout, even answering questions from some of the lay people. Just that fact is highly commendable.
I think that some of the so-called “Climate Scientists” could take a lesson from rgb.
Thank you Dr. Brown,
jpp
RGB, if you are still around, I would appreciate a reply to my earlier comment, which I repeat here. Thanks.
“RGB, since you have convincingly shown again the complete uselessness of the GCM approach to forecasting and the small influence of anthropogenic CO2 on climate, would you not agree that it is time to move to a completely different method of climate forecasting, as referred to in my comments above at
7/ 10:18pm and 8/ 6:28 AM and in the series of posts at http://climatesense-norpag.blogspot.com
which provide forecasts of the timing and amount of a possible coming cooling?”
Reply to Dr. Page ==> From personal communication with Dr. Brown, I know that he has a very busy teaching schedule at Duke, putting in 14 hours some days. Hopefully, he will find time to reply to your specific questions. Note that his CV and personal email are available on the Duke website under Faculty.
Norman,
Your persistence in asking this question did at least prod me, on a Sunday morning, to look at your website. There is some very interesting material there, but I don’t think you could expect a busy RGB to look through it all in response to your somewhat vague question.
As an RJB, I’ll comment briefly. You have evidence of millennial and 60-year cycles, which you believe peaked around 2004, so we should see cooling (assuming increasing CO2 effects are not strong enough to offset this). Personally I am into solar cycle lengths, so I do agree with you, at least for the next 20 years.
I hesitate (but only briefly) to put words into RGB’s mouth, but I think he might say “this thread is about debate and the failure of mainstream climate predictions, and whilst alternative predictions are of some interest, I would not want to comment on them unless there was some very solid science behind them”.
Best regards,
Rich.