Guest Post by Willis Eschenbach
Under the radar, and unnoticed by many climate scientists, there was a recent study by the National Academy of Sciences (NAS), commissioned by the US Government, regarding climate change. Here is the remit under which they were supposed to operate:
Specifically, our charge was
1. To identify the principal premises on which our current understanding of the question [of the climate effects of CO2] is based,
2. To assess quantitatively the adequacy and uncertainty of our knowledge of these factors and processes, and
3. To summarize in concise and objective terms our best present understanding of the carbon dioxide/climate issue for the benefit of policymakers.
Now, that all sounds quite reasonable. In fact, if we knew the answers to those questions, we’d be a long ways ahead of where we are now.
Figure 1. The new Cray supercomputer called “Gaea”, which was recently installed at the National Oceanic and Atmospheric Administration. It will be used to run climate models.
But as it turned out, being AGW-supporting climate scientists, the NAS study group decided that they knew better. They decided that to answer the actual question they had been asked would be too difficult, that it would take too long.
Now that’s OK. Sometimes scientists are asked for stuff that might take a decade to figure out. And that’s just what they should have told their political masters: can’t do it, takes too long. But noooo … they knew better, so they decided that instead they should answer a different question entirely. After listing the reasons that it was too hard to answer the questions they were actually asked, they say (emphasis mine):
A complete assessment of all the issues will be a long and difficult task.
It seemed feasible, however, to start with a single basic question: If we were indeed certain that atmospheric carbon dioxide would increase on a known schedule, how well could we project the climatic consequences?
Oooookaaaay … I guess that’s now the modern post-normal science method. First, you assume that there will be “climatic consequences” from increasing CO2. Then you see if you can “project the consequences”.
They are right that it is easier to do that than to actually establish IF there will be climatic consequences. It makes it so much simpler if you just assume that CO2 drives the climate. Once you have the answer, the questions get much easier …
However, they did at least try to answer their own question. And what are their findings? Well, they started out with this:
We estimate the most probable global warming for a doubling of CO2 to be near 3°C with a probable error of ± 1.5°C.
No surprise there. They point out that this estimate, of course, comes from climate models. Surprisingly, however, they have no question and are in no mystery about whether climate models are tuned or not. They say (emphasis mine):
Since individual clouds are below the grid scale of the general circulation models, ways must be found to relate the total cloud amount in a grid box to the grid-point variables. Existing parameterizations of cloud amounts in general circulation models are physically very crude. When empirical adjustments of parameters are made to achieve verisimilitude, the model may appear to be validated against the present climate. But such tuning by itself does not guarantee that the response of clouds to a change in the CO2 concentration is also tuned. It must thus be emphasized that the modeling of clouds is one of the weakest links in the general circulation modeling efforts.
Modeling of clouds is one of the weakest links … can’t disagree with that.
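To see why “tuned to reproduce the present climate” proves so little, consider a deliberately silly toy scheme (my invention, nothing to do with any actual GCM parameterization): two parameter settings can match today’s cloud amount exactly and still disagree about how clouds respond to a change.

```python
# Toy illustration only -- an invented cloud-fraction "scheme", not any real GCM
# parameterization: f = c * (RH - RH_crit), clipped to the range [0, 1].
def cloud_fraction(rh, c, rh_crit):
    return max(0.0, min(1.0, c * (rh - rh_crit)))

# Two different parameter pairs, both "tuned" so that a present-day relative
# humidity of 0.80 yields the observed cloud fraction of 0.75.
scheme_a = dict(c=2.5, rh_crit=0.50)    # 2.5 * (0.80 - 0.50) = 0.75
scheme_b = dict(c=1.5, rh_crit=0.30)    # 1.5 * (0.80 - 0.30) = 0.75

for rh in (0.80, 0.82):                 # present climate vs. a slightly moister one
    fa = cloud_fraction(rh, **scheme_a)
    fb = cloud_fraction(rh, **scheme_b)
    print(f"RH = {rh:.2f}   scheme A: {fa:.2f}   scheme B: {fb:.2f}")

# RH = 0.80 -> both give 0.75: equally "validated" against the present climate.
# RH = 0.82 -> A gives 0.80, B gives 0.78: different responses to the same change.
```

Both schemes “validate” against today’s climate; they disagree about tomorrow’s, which is exactly the report’s point.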
So what is the current state of play regarding the climate feedback? The authors say that the positive water vapor feedback overrules any possible negative feedbacks:
We have examined with care all known negative feedback mechanisms, such as increases in low or middle cloud amount, and have concluded that the oversimplifications and inaccuracies in the models are not likely to have vitiated the principal conclusion that there will be appreciable warming. The known negative feedback mechanisms can reduce the warming, but they do not appear to be so strong as the positive moisture feedback.
However, as has been the case for years, when you get to the actual section of the report where they discuss the clouds (the main negative feedback), the report merely reiterates that the clouds are poorly understood and poorly represented … how does that work, that they are sure the net feedback is positive, but they don’t understand and can only poorly represent the negative feedbacks? They say, for example:
How important the overall cloud effects are is, however, an extremely difficult question to answer. The cloud distribution is a product of the entire climate system, in which many other feedbacks are involved. Trustworthy answers can be obtained only through comprehensive numerical modeling of the general circulations of the atmosphere and oceans together with validation by comparison of the observed with the model-produced cloud types and amounts.
In other words, they don’t know but they’re sure the net is positive.
Regarding whether the models are able to accurately replicate regional climates, the report says:
At present, we cannot simulate accurately the details of regional climate and thus cannot predict the locations and intensities of regional climate changes with confidence. This situation may be expected to improve gradually as greater scientific understanding is acquired and faster computers are built.
So there you have it, folks. The climate sensitivity is 3°C per doubling of CO2, with an error of about ± 1.5°C. Net feedback is positive, although we don’t understand the clouds. The models are not yet able to simulate regional climates. No surprises in any of that. It’s just what you’d expect a NAS panel to say.
Now, before going forwards, since the NAS report is based on computer models, let me take a slight diversion to list a few facts about computers, which are a long-time fascination of mine. As long as I can remember, I wanted a computer of my own. When I was a little kid I dreamed about having one. I speak a half dozen computer languages reasonably well, and there are more that I’ve forgotten. I wrote my first computer program in 1963.
Watching the changes in computer power has been astounding. In 1979, the fastest computer in the world was the Cray-1 supercomputer, a machine far beyond anything that most scientists might have dreamed of having. It had 8 MB of memory, 10 GB of hard disk space, and ran at 100 MFLOPS (million floating-point operations per second). The computer I’m writing this on has a thousand times the memory, fifty times the disk space, and two hundred times the speed of the Cray-1.
And that’s just my desktop computer. The new NOAA climate supercomputer “Gaea” shown in Figure 1 runs two and a half million times as fast as a Cray-1. This means that a one-day run on “Gaea” would take a Cray-1 about seven thousand years to complete …
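Taking the post’s stated numbers at face value (a roughly 2.5-million-fold speedup over the Cray-1), the seven-thousand-year figure is easy to check:

```python
# Rough check of the arithmetic above, using only the figures quoted in the post.
speedup = 2.5e6                      # "Gaea" stated to be ~2.5 million times faster
gaea_days = 1                        # one day of "Gaea" run time
cray1_days = gaea_days * speedup     # equivalent Cray-1 run time, in days
cray1_years = cray1_days / 365.25
print(f"One Gaea-day is roughly {cray1_years:,.0f} Cray-1 years")  # ~6,800 years
```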
Now, why is the speed of a Cray-1 computer relevant to the NAS report I quoted from above?
It is relevant because, as some of you may have realized, the NAS report I quoted from above is called the “Charney Report”. As far as I know, it was the first official National Academy of Sciences statement on the CO2 question. And when I said it was a “recent report”, I was thinking about it in historical terms. It was published in 1979.
Here’s the bizarre part, the elephant in the climate science room. The Charney Report could have been written yesterday. AGW supporters are still making exactly the same claims, as if no time had passed at all. For example, AGW supporters are still saying the same thing about the clouds now as they were back in 1979—they admit they don’t understand them, that it’s the biggest problem in the models, but all the same they’re sure the net feedback is positive. I’m not clear how that works, but it’s been that way since 1979.
That’s the oddity to me—when you read the Charney Report, it is obvious that almost nothing of significance has changed in the field since 1979. There have been no scientific breakthroughs, no new deep understandings. People are still making the same claims about climate sensitivity, with almost no change in the huge error limits. The range still varies by a factor of three, from about 1.5 to about 4.5°C per doubling of CO2.
Meanwhile, the computer horsepower has increased beyond anyone’s wildest expectations. The size of the climate models has done the same. The climate models of 1979 were thousands of lines of code. The modern models are more like millions of lines of code. Back then it was atmosphere only models with a few layers and large gridcells. Now we have fully coupled ocean-atmosphere-cryosphere-biosphere-lithosphere models, with much smaller gridcells and dozens of both oceanic and atmospheric layers.
And since 1979, an entire climate industry has grown up that has spent millions of human-hours applying that constantly increasing computer horsepower to studying the climate.
And after the millions of hours of human effort, after the millions and millions of dollars gone into research, after all of those million-fold increases in computer speed and size, and after the phenomenal increase in model sophistication and detail … the guesstimated range of climate sensitivity hasn’t narrowed in any significant fashion. It’s still right around 3 ± 1.5°C per doubling of CO2, just like it was in 1979.
And the same thing is true on most fronts in climate science. We still don’t understand the things that were mysteries a third of a century ago. After all of the gigantic advances in model speed, size, and detail, we still can say nothing definitive about the clouds. We still don’t have a handle on the net feedback. It’s like the whole realm of climate science got stuck in a 1979 time warp, and has basically gone nowhere since then. The models are thousands of times bigger, and thousands of times faster, and thousands of times more complex, but they are still useless for regional predictions.
How can we understand this stupendous lack of progress, a third of a century of intensive work with very little to show for it?
For me, there is only one answer. The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.
Now we can debate what that fundamental misunderstanding might be.
But I see no other explanation that makes sense. Every other field of science has seen huge advances since 1979. New fields have opened up, old fields have moved ahead. Genomics and nanotechnology and proteomics and optics and carbon chemistry and all the rest, everyone has ridden the computer revolution to heights undreamed of … except climate science.
That’s the elephant in the room—the incredible lack of progress in the field despite a third of a century of intense study.
Now me, I think the fundamental misunderstanding is the idea that the surface air temperature is a linear function of forcing. That’s why it was lethal for the Charney folks to answer the wrong question. They started with the assumption that a change in forcing would change the temperature, and wondered “how well could we project the climatic consequences?”
Once you’ve done that, once you’ve assumed that CO2 is the culprit, you’ve ruled out the understanding of the climate as a heat engine.
Once you’ve done that, you’ve ruled out the idea that like all flow systems, the climate has preferential states, and that it evolves to maximize entropy.
Once you’ve done that, you’ve ruled out all of the various thermostatic and homeostatic climate mechanisms that are operating at a host of spatial and temporal scales.
And as it turns out, once you’ve done that, once you make the assumption that surface temperature is a linear function of forcing, you’ve ruled out any progress in the field until that error is rectified.
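For readers who want that assumption spelled out: the relation being criticized is the familiar linear one, where a doubling of CO2 is taken to add roughly 3.7 W/m² of forcing and a sensitivity parameter does the rest (my notation and round numbers, not anything quoted from the Charney Report):

```latex
\Delta T = \lambda\,\Delta F, \qquad
\Delta F_{2\times\mathrm{CO_2}} \approx 3.7\ \mathrm{W\,m^{-2}}, \qquad
\lambda \approx 0.8\ \mathrm{K\,W^{-1}\,m^{2}}
\;\Longrightarrow\;
\Delta T_{2\times\mathrm{CO_2}} \approx 3\ \mathrm{K}
```

Everything in the ±1.5°C range then amounts to arguing about the value of λ; the linearity itself is never on the table.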
But that’s just me. You may have some other explanation for the almost total lack of progress in climate science in the last third of a century, and if so, all cordial comments gladly accepted. Allow me to recommend that your comments be brief, clear and interesting.
w.
PS—Please do not compare this to the lack of progress in something like achieving nuclear fusion. Unlike climate science, that is a practical problem, and a devilishly complex one. The challenge there is to build something never seen in nature—a bottle that can contain the sun here on earth.
Climate, on the other hand, is a theoretical question, not a building challenge.
PPS—Please don’t come in and start off with version number 45,122,164 of the “Willis, you’re an ignorant jerk” meme. I know that. I was born yesterday, and my background music is Tom o’Bedlam’s song:
By a host of furious fancies
Whereof I am commander
With a sword of fire, and a steed of air
Through the universe I wander.
By a ghost of rags and shadows
I summoned am to tourney
Ten leagues beyond the wild world's end
Methinks it is no journey.
So let’s just take my ignorance and my non compos mentation and my general jerkitude as established facts, consider them read into the record, and stick to the science, OK?
“When empirical adjustments of parameters are made to achieve verisimilitude, the model may appear to be validated against the present climate.”
Merely corroborative detail, intended to give artistic verisimilitude to an otherwise bald and unconvincing narrative.
W.S. Gilbert
How are the recent estimates of CO2-caused warming significantly different from the original estimates by Arrhenius 100 years ago? Not much progress over pencil and paper.
If CO2 causes positive feedback, why doesn’t water vapor, which is about four times more powerful as a GHG, cause the same? Plus there’s an infinite supply of it.
Cadae says:
March 8, 2012 at 4:27 am
“Many other sciences have the advantage of being able to test their models via direct experimentation against objective reality, but Climate Science does not have clones of the Earth to experiment with nor the tools to experiment with the earth itself.”
This is a fallacy that should have disappeared long ago. If clones of Earth were necessary for experimentation in climate science, then clones of the universe would be necessary for the studies in search of the Higgs boson by CERN scientists.
Experimentation can be passive or active. Passive experimentation requires only that one observe some phenomenon, identify its salient features, and create a record of measurements. All of that has been easily available to climate scientists but they have done nothing except create the ARGO system. They need to get to work on data collection and other empirical matters. They seem to have a deep seated aversion to empirical work.
It has been mentioned here before, but no one has been able to tell me how much of the CO2 increase we are measuring is due to the increase in temperature due to the rebound from that little cold spell that ended sometime between 1780 and 1850 (I’ve seen end dates all over the spectrum there). We may be adding some little bit, but I think our signal is probably being swamped by the oceanic out-gassing of CO2 from that whole solubility/temperature curve.
If as I suspect, CO2 is a symptom of a warming climate rather than the cause, the whole movement is looking at the moon through the wrong end of the telescope, but such is the power of a meme.
You sure the computer is named Gaea and not Deep Thought? Regardless of everything else, the models are always going to give the same answer – a 3 degree warming per doubling of CO2. It’s all clear now. It’s the new “42”.
In the darkest recesses of the future hall of supercomputers of the Met offices of Sweden and Norway (apparently Denmark has been left out), where echoes the abysmal emptiness that is its space, sits, inside a sticky-finger-resistant glass cube, an apparatus of wondrous applications: The A B A C U S!
Constructed by a genius, but apparently only for geniuses, such tantalizing, mind-blowing power might it have, if only they could get it to operate.
They continuously need new supercomputing power, but to do what? I’m glad you asked: to continuously make the same A2 and B2 projections, on the high end, in all regions in both countries, that the IPCC AR* reports state will be the regional temperatures in the future.
Apparently all their old supercomputers were flawed, since they all seem to have failed, in the past, to predict present-day climate. And for safety we’re talking way into the future these days, no pesky 40 years, no, instead it’s the period between 2071 and 2100. And of course they blame greenhouse gases for the temperature rise from 1860, to the present day, to future days.
What’s interesting to note is that the rest of the EU countries will, according to every climate-supercomputer-owning country, follow the same, on the high end, trends of the IPCC AR* A2 and B2 scenarios.
What’s then highly ironic is the obvious fact: they only need one abacus to calculate the predicted projections for the foreseeable future.
A crash course in the inner workings of the secret manual of an (IPCC-stamped) abacus would save us millions in taxpayers’ money.
Since they just seem to manhandle their computers and software to project IPCC “consensus” numbers, I truly wonder if the Scandinavian met offices would note any difference if one replaced their “cray(on)” computers with an abacus bearing the Cray logo?
:p
curryja says:
March 8, 2012 at 4:37 am
Judith – I think you will find it by clicking on ‘Charney Report’ in the text?
regards
Nir Shaviv wrote on this very topic recently.
http://sciencebits.com/IPCC_nowarming
More cerebral bell-ringing moments for me, this time brought to us by Willis. Thanks, Willis.
Two questions pop up. One, if the science was settled in 1979 and a consensus was reached, why are we still paying scientists through the nose for the ongoing “settling”? And two, given how chaotic and complex climate appears to be, and how impossible it is to predict or “project” future effects of a presumed scenario on an ever-changing world, does it make the slightest of differences if we try to calculate things with a child’s abacus versus a super-duper computer? Seems to me that on this scale it would be kind of like deciding whether to treat an advanced case of Ebola with aspirin, Tylenol, homeopathic tinctures of echinacea or camomile tea. (Might as well go for the tea; it tastes better than the other stuff.)
The point about the rate of scientific progress is an interesting one – it is certainly not ‘even’, some areas have advanced enormously and others not. I think that this can only partly be explained by the effort expended. In my own area (pharmaceutical research) there is much effort, and continual impressive progress is being made in the understanding of underlying processes. In say, the last fifty years we have gained a huge amount of understanding of genetics, biochemical processes and cell signalling; however our actual ability to treat diseases like cancer has improved rather modestly by comparison.
I think part of the explanation may be that how much more we know than we did before is less important than how much less ignorant we are. For example, our knowledge of something may double – but it may be that our knowledge of all that it is possible to know has actually only increased from 0.1% to 0.2%, so we are still largely ignorant.
By the way, “millions of human-hours” is a bit too PC for me (though less-so than person-hours), it sounds like something a Klingon might say. I am sticking with ‘man-hours’ on the basis of the broader Anglo-Saxon meaning of ‘man’, as in: ‘a man with a womb is a womb-man (often abbreviated to woman)’.
William Martin says:
March 7, 2012 at 11:43 pm
lol Willis ! you know that the NAS Report isn’t about science, it’s about the public purse !
William M. has got it in one.
Remember, this started out with “scientists” like Professor Stephen Schneider warning about a coming ice age in the 1970s, then converting to the promotion of man-made global warming fears, and most recently to the newest modification, “Climate Change”.
In all cases it is about scaring the public into
1. Giving the politicians MORE power and tax money,
2. Giving the scientists and universities MORE taxpayer-funded grant money,
3. Giving corporations MORE (as in incredible amounts of) government “incentives”, grants, and tax deductions.
The only one on the short end of the stick is the poor deluded taxpayer who foots the bill. Is it any wonder that the news media has done its best to brainwash the public to the point where all this “environmentalism” crud is now a religion to many?
Specifically, the charge given to us by Willis was
1. To identify the root causes of the stupendous lack of progress in cloud modeling and climate modeling in general despite vast increases in computational speed and power and thirty years of well funded effort.
2. To assess whether or not a fundamental flaw in our understanding lies at the root of the problem.
3. To summarize this in concise and objective terms in a blog response.
A complete assessment of all the issues will be a long and difficult task.
It seemed feasible, however, to start with a single basic question: If our grants were doubled now, how well could we divert attention from these issues and generate public support for dubious environmental and economic policies?
Observe how astrophysics (another mostly theoretical science) has progressed since 1979 for comparison. And without the rent-seekers (universities) & resultant massive money-laundering like “climatology”. Astrophysics is one of the few sciences remaining that seems free of being stifled by political-correctness. And look at the amazing results.
This is very interesting indeed. I have been daydreaming a little about what it would have been like if GCMs had never been invented. They have not, it would seem, contributed much progress. Apart from a couple of typos now fixed, the following was also posted by me on Bishop Hill a few days ago:
It is an idle thought I know, but I occasionally wonder how it would be now if the GCMs had never been invented. I think we’d perhaps need a few more surface weather stations to supplement the satellites and keep weather forecasting at its best (a best which is inevitably constrained by the turbulent behaviour of the atmosphere on a wide, and very relevant, range of space and time scales).
We might suffer a possibly detectable drop in weather forecasting skill if we travelled back in time to erase the models, but we’d have removed a weapon from the armoury of those who are irresponsible enough to seek to scare us with their tales of imminent doom based on zero observational evidence of anything odd going on in the vehicles of that doom: winds, rains, sea levels etc.
We’d also have had, perchance, a more extensive examination of the physics and other aspects of the science, free from the wet-blanket effect that computer modellers can bring with their dark arts and all but impenetrable claims and obsessions – for some, the virtual world of their software can become more vivid, more ‘real’, and more congenial than the even messier real one. There is something about computers that can get in the way of thinking and of conversation – something to do with a sense of a black box that we scarce know how to argue with, and which we know is capable of computational chores that we cannot begin to match with pencil and paper and spreadsheets.
Would the IPCC have fizzled out without its very own version of ‘the computer says’, which so helped the Club of Rome in their previous foray into mass scaremongering called ‘Limits to Growth’? I am inclined to think so, and I think some considerable opprobrium belongs upon computer models and their keepers during the Foot and Mouth fiasco in the UK. Our peculiar, almost superstitious, fear of computers and our readily exercised panic around them was also illustrated by the Year 2000 shenanigans over what could and should have been a matter of routine review and testing.
But the genie is out, we have them, and they seem to get more funding year after year. What can be done to protect us from further harm from them? This is tricky. Their keepers are scarce likely to go big on any notion that they are so feeble in the face of the immense complexity of the system that they should have no more impact on public policy than a Mystic Meg or a Whitaker’s Almanack. They needs must urge us to believe that, if not yet, then at least just around the corner, great benefits will follow from climate modelling. And of course, they may be right, and we may even get technical breakthroughs that no one has even imagined yet. I think it a good thing that some people should pursue them for research just in case. But we need more public recognition of the dangers of them.
Posted as a comment on this thread at Bishop Hill (Mar 6, 2012 at 11:32 AM): http://www.bishop-hill.net/blog/2012/3/5/new-solar-paper.html?currentPage=3#comments
wws says:
March 8, 2012 at 5:22 am
UFOs – climate “science” – I think the two have quite a bit in common.
Funny that you mention that. Try “Aliens Cause Global Warming: A Caltech Lecture by Michael Crichton” (Posted on July 9, 2010 by Anthony Watts). Great read. Guaranteed.
Cadae:
In your post at March 8, 2012 at 4:27 am you say;
“Many other sciences have the advantage of being able to test their models via direct experimentation against objective reality, but Climate Science does not have clones of the Earth to experiment with nor the tools to experiment with the earth itself. The closest Climate Science can come to an experimental Earth are computer simulations- but because of the complexity of the Earth, these simulations are entirely inadequate for properly and objectively testing climate hypotheses.”
Sorry, but that completely misses the point. The question at issue is NOT how “climate hypotheses” can be assessed. In his article Willis says;
“And since 1979, an entire climate industry has grown up that has spent millions of human-hours applying that constantly increasing computer horsepower to studying the climate.
And after the millions of hours of human effort, after the millions and millions of dollars gone into research, after all of those million-fold increases in computer speed and size, and after the phenomenal increase in model sophistication and detail … the guesstimated range of climate sensitivity hasn’t narrowed in any significant fashion. It’s still right around 3 ± 1.5°C per doubling of CO2, just like it was in 1979.”
Developments of the models have achieved nothing. The question at issue is NOT how “climate hypotheses” can be assessed. It is WHY the models’ developments have achieved nothing. And the reason for that failure of achievement is that the models make no attempt to test “climate hypotheses”.
Models could test such hypotheses by comparison with existing real-world observations; e.g.
Does a model designed to represent one climate behaviour (or combination of climate behaviours) provide an output which emulates another climate behaviour?
The clearest example of such a test would be for the model to output spatial temperature and precipitation patterns which match the temperature and precipitation patterns of the real Earth. If a climate hypothesis does not provide such an output then it is not correct so reject it and construct another model. But climate modellers don’t do that although none of the models – not one of them – provides a reasonable emulation of such basic climate variables over the surface of the Earth.
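As a minimal sketch of that kind of test (the arrays below are entirely hypothetical stand-ins; real fields would come from a model archive and an observational analysis), one could score the spatial pattern match directly and reject any model whose score falls below a pre-declared threshold:

```python
import numpy as np

# Hypothetical stand-ins for gridded annual-mean surface temperature (deg C) on a
# common lat/lon grid; in a real test these would come from a model run and from
# an observational analysis.
model_t = np.array([[26.1, 27.3, 24.8],
                    [14.2, 16.9, 12.5],
                    [-3.0,  1.4, -6.2]])
obs_t   = np.array([[27.0, 26.5, 25.9],
                    [12.8, 17.5, 13.1],
                    [-1.2,  0.3, -4.9]])

# Two simple scores for "does the model reproduce the regional pattern?":
# the root-mean-square error and the spatial correlation of the two fields.
rmse = np.sqrt(np.mean((model_t - obs_t) ** 2))
corr = np.corrcoef(model_t.ravel(), obs_t.ravel())[0, 1]
print(f"pattern RMSE = {rmse:.2f} C   spatial correlation = {corr:.3f}")
```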
Each modelling team has invested much time, effort and money in its model so refuses to scrap it and keeps adding things to it in hope that somehow one day it will work. Hope is not science.
But, as I explain in my above post at March 8, 2012 at 3:17 am:
“The fact that the models are adjusted to match mean global temperature but fail to match regional temperatures is a direct proof that none of them is emulating the climate system of the real Earth.
And the fact that each model uses a different value of climate sensitivity is a direct proof that they are not applying a unique theory of climate behaviour.
But model falsification seems to play no part in what is disingenuously called ‘climate science’.”
At present the climate models are not being used as scientific tools: they are being used as playthings. And I object to my taxes being used to pay people to play these computer games.
Richard
A fundamental mistake that the climate scientists are making is to assume that the climate can be modeled at all, let alone modeled using traditional mathematics.
“If theoretical science is to be possible at all, then at some level the systems it studies must follow definite rules. Yet in the past throughout the exact sciences it has usually been assumed that these rules must be ones based upon traditional mathematics. But the crucial realization that led me to develop the new kind of science in this book [A New Kind Of Science aka ANKS] is that there is in fact no reason to think that systems like those we see in nature should follow only such traditional mathematical rules.” – Stephen Wolfram, A New Kind of Science, page 1.
The types of systems that Wolfram (and others) have discovered have simple rules yet generate immense complexity, as complex as any complex system. Yet these systems fail all attempts at being modeled by traditional mathematics.
Among Wolfram’s discoveries most relevant to climate science is that these simple systems, which are prevalent throughout nature, can and do generate their own internal randomness, which makes them impossible to predict; the only way to know what they will do next is to watch their state changes unfold through time. Climate is made up of such simple systems that generate complex behavior and internal randomness, and thus can’t be modeled from first principles. This is also one reason why traditional mathematics fails to describe, and never will describe, most natural systems, including climate, with any accuracy. Even using ANKS methods you’d not be able to model the climate accurately, since the model is never the real climate (“the map is not the territory”); the models can’t model something that generates its own randomness – it’s simply not possible, as Wolfram has proven mathematically.
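For anyone who has not met these systems, here is the textbook example, rule 30 (a small sketch illustrating Wolfram’s point about simple rules generating apparent randomness, not a claim that this is how climate works):

```python
# Rule 30: each cell's next state depends only on itself and its two neighbours,
# yet the evolution from a single black cell looks, and tests, random.
RULE = 30
width, steps = 63, 20
row = [0] * width
row[width // 2] = 1                      # start from a single black cell

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    # encode the (left, centre, right) neighbourhood as a 3-bit number and look
    # up the corresponding bit of RULE; wrap around at the edges
    row = [(RULE >> (4 * row[(i - 1) % width] + 2 * row[i] + row[(i + 1) % width])) & 1
           for i in range(width)]
```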
I’m old enough to remember from freshman chemistry lab back in 1963, when Willis was writing his first computer program, that a “probable error” is the half-width of a 50% confidence interval, i.e. 0.67 sigma. A more conventional 95% CI extends about 2 sigma, or 3 times the PE. So 3 ± 1.5 (PE) gives a 95% CI of about 3 ± 4.5°C. Not very informative!
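Assuming a normal error distribution, the conversion is a one-liner:

```python
# Conversion sketch, assuming a normal error distribution: the probable error is
# the half-width of the 50% interval, i.e. about 0.6745 sigma; a 95% interval
# extends about 1.96 sigma.
pe = 1.5                        # the Charney report's +/- 1.5 C "probable error"
sigma = pe / 0.6745             # ~2.2 C
half_width_95 = 1.96 * sigma    # ~4.4 C, roughly three times the probable error
print(f"sigma ~ {sigma:.2f} C   95% half-width ~ {half_width_95:.1f} C")
```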
Another big uncertainty is the effect of human emissions on atmospheric CO2. Both are going up, and there is surely a connection. However, the level of atmospheric CO2 looks a lot more like the annual rate of emissions than like the cumulative emissions to date. Past emissions must therefore eventually get taken up by the environment on a timescale that is more like a decade than a century. There must be a connection between steady-state annual emissions (holding solar etc. constant) and steady-state CO2 that depends on this take-up rate, but there doesn’t seem to have been an effort to quantify this.
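One way to see what a take-up rate implies is a one-box sketch (a back-of-the-envelope model under the crudest possible assumptions, not a carbon-cycle calculation): treat the excess CO2 above some baseline as decaying back toward it with a single time constant tau while emissions add to it.

```python
# Crude one-box sketch under the stated assumption of a single fixed take-up
# timescale: d(excess CO2)/dt = emissions - excess / tau.  At steady state the
# excess is emissions * tau, so with tau of order a decade the CO2 level tracks
# the annual emission rate rather than the cumulative total.  Numbers illustrative.
def excess_after(tau_years, emissions_ppm_per_year, years):
    excess = 0.0
    for _ in range(years):
        excess += emissions_ppm_per_year - excess / tau_years
    return excess

e = 2.0                                          # ppm per year of emissions
for tau in (10, 100):
    print(f"tau = {tau:>3} yr: excess after 500 yr ~ {excess_after(tau, e, 500):.0f} ppm"
          f"  (cumulative emissions = {e * 500:.0f} ppm)")
```

Only a short tau keeps the steady level proportional to the emission rate, which is the behaviour the comment above is pointing at.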
As for nuclear fusion, Willis, it was achieved more than 60 years ago. I think you mean cold fusion? ! 🙂
Says Willis,
Do you suppose the Sun does not go around the Earth?
Could it be that CO2 is not an important driver of the Earth’s climate?
Hmm. . .
/Mr Lynn
One thing is for sure, Willis isn’t a Texas Sharpshooter. He hits the bullseye without the help of post-shot correction factors.
The AGW crowd consists mostly of neo-Luddites, watermelons, dishonest scientists, fools, charlatans, cowards, and bandwagonists. After 33 years of failed predictions, falsified data, modified data, and lies, I have no respect for those who still try to keep a foot in both camps. I’m sorry, but Dr. Judith Curry’s dialog is losing meaning. No one on the other side is doing responsible science, just one “worse than we thought” after another. They never even check their premises, or for that matter ask themselves, “If this were really true, how would it have changed the past?”
I attended an HPC workshop last week. It was for the Oil & Gas Industry – we’re in deep trouble when an oil company infrastructure specialist proudly states how much CO2 they’ve mitigated by their construction and power utilization efficiency.
If we could just clone, say, 10,000 Willis’ …
The NAS report reads like one of the greatest fictions since Edgar Rice Burroughs populated the dead sea bottoms of Barsoom. Theirs was the most convoluted way of saying “we don’t know what we’re doing” and “This is hard – let’s do it wrong” I’ve seen in quite a while. Because this report is outside their charter I hope we taxpayers are not picking up the tab. It’s all been a big waste of a good Gaea so far.
30 years and no one has come up with data collection methods to try to understand clouds better? That’s shameful. I wish I understood more precisely what the climate scientists need in terms of data. It seems like you should be able to simulate clouds from physical first principles.
The AGWCF (CF = “Control Freaks”) crowd couldn’t possibly control water, but they can control CO2 (or give it a try). So they tailor their “science” to fit their control appetite, which is one horrible indictment on their whole sordid affair.
I shall characterize it as “Epic Fail”. So now we have AGWCFEF. You’d think that would put an end to it, but unfortunately, it continues unabated.
Willis,
Good post (as usual – I especially enjoyed the fact that your “jerkitude” is read into the record – gave me quite a laugh). The one point on which I would disagree with you is the notion that enviro science is the only one infected by this lack of progress. There are others which have their fair share of problems in this arena (such as evolutionism, though perhaps not to the degree climatology has), and even some areas of generally good science have had issues (such as the long and sometimes vitriolic discussions over the Big Bang theory).
All of that said, I have never seen the level of corruption or vicious personal attacks in any other area (except from Muslim Jihadists) that seem to be so endemic in climate science.
Indeed.
Now let’s consider a hypothetical. Let’s suppose that you’re the president of the world, and you have ten trillion dollars to spend on this. You can spend ten trillion on getting a definitive answer to the climate sensitivity question, or you can spend ten trillion and get practical fusion. Which is the smarter investment (forgive me, Judith Curry)?
Richard S Courtney says: “model falsification seems to play no part in what is disingenuously called ‘climate science’.”
I second this statement. I was arguing falsification on RealClimate a couple of weeks ago (the Bickmore thread), and they have a tin ear; they just don’t care about falsification. According to the consensus, real climate science allows you to simply readjust the model and try again.
As for how this unscientific process might tend to corrupt, I didn’t dare mention, of course, because of the imperious and sanctimoneous attitudes there. Nor do they have any conception of how far off the rails such carelessness can take public policy, which sort of expects accuracy, rather than statistical “skill.”