Guest Post by Willis Eschenbach
Under the radar, and unnoticed by many climate scientists, there was a recent study by the National Academy of Sciences (NAS), commissioned by the US Government, regarding climate change. Here is the remit under which they were supposed to operate:
Specifically, our charge was
1. To identify the principal premises on which our current understanding of the question [of the climate effects of CO2] is based,
2. To assess quantitatively the adequacy and uncertainty of our knowledge of these factors and processes, and
3. To summarize in concise and objective terms our best present understanding of the carbon dioxide/climate issue for the benefit of policymakers.
Now, that all sounds quite reasonable. In fact, if we knew the answers to those questions, we’d be a long way ahead of where we are now.
Figure 1. The new Cray supercomputer called “Gaea”, which was recently installed at the National Oceanic and Atmospheric Administration. It will be used to run climate models.
But as it turned out, being AGW-supporting climate scientists, the NAS study group decided that they knew better. They decided that answering the actual question they had been asked would be too difficult, that it would take too long.
Now that’s OK. Sometimes scientists are asked for stuff that might take a decade to figure out. And that’s just what they should have told their political masters: can’t do it, takes too long. But noooo … they knew better, so they decided that instead, they should answer a different question entirely. After listing the reasons that it was too hard to answer the questions they were actually asked, they say (emphasis mine):
A complete assessment of all the issues will be a long and difficult task.
It seemed feasible, however, to start with a single basic question: If we were indeed certain that atmospheric carbon dioxide would increase on a known schedule, how well could we project the climatic consequences?
Oooookaaaay … I guess that’s now the modern post-normal science method. First, you assume that there will be “climatic consequences” from increasing CO2. Then you see if you can “project the consequences”.
They are right that it is easier to do that than to actually establish IF there will be climatic consequences. It makes it so much simpler if you just assume that CO2 drives the climate. Once you have the answer, the questions get much easier …
However, they did at least try to answer their own question. And what are their findings? Well, they started out with this:
We estimate the most probable global warming for a doubling of CO2 to be near 3°C with a probable error of ± 1.5°C.
No surprise there. They point out that this estimate, of course, comes from climate models. Surprisingly, however, they are in no doubt about whether climate models are tuned or not. They say (emphasis mine):
Since individual clouds are below the grid scale of the general circulation models, ways must be found to relate the total cloud amount in a grid box to the grid-point variables. Existing parameterizations of cloud amounts in general circulation models are physically very crude. When empirical adjustments of parameters are made to achieve verisimilitude, the model may appear to be validated against the present climate. But such tuning by itself does not guarantee that the response of clouds to a change in the CO2 concentration is also tuned. It must thus be emphasized that the modeling of clouds is one of the weakest links in the general circulation modeling efforts.
Modeling of clouds is one of the weakest links … can’t disagree with that.
So what is the current state of play regarding the climate feedback? The authors say that the positive water vapor feedback overrules any possible negative feedbacks:
We have examined with care all known negative feedback mechanisms, such as increases in low or middle cloud amount, and have concluded that the oversimplifications and inaccuracies in the models are not likely to have vitiated the principal conclusion that there will be appreciable warming. The known negative feedback mechanisms can reduce the warming, but they do not appear to be so strong as the positive moisture feedback.
However, as has been the case for years, when you get to the actual section of the report where they discuss the clouds (the main negative feedback), the report merely reiterates that the clouds are poorly understood and poorly represented. How does that work? They are sure the net feedback is positive, yet they admit that they don’t understand the negative feedbacks and can only represent them poorly. They say, for example:
How important the overall cloud effects are is, however, an extremely difficult question to answer. The cloud distribution is a product of the entire climate system, in which many other feedbacks are involved. Trustworthy answers can be obtained only through comprehensive numerical modeling of the general circulations of the atmosphere and oceans together with validation by comparison of the observed with the model-produced cloud types and amounts.
In other words, they don’t know but they’re sure the net is positive.
Regarding whether the models are able to accurately replicate regional climates, the report says:
At present, we cannot simulate accurately the details of regional climate and thus cannot predict the locations and intensities of regional climate changes with confidence. This situation may be expected to improve gradually as greater scientific understanding is acquired and faster computers are built.
So there you have it, folks. The climate sensitivity is 3°C per doubling of CO2, with an error of about ± 1.5°C. Net feedback is positive, although we don’t understand the clouds. The models are not yet able to simulate regional climates. No surprises in any of that. It’s just what you’d expect a NAS panel to say.
Now, before going forwards, since the NAS report is based on computer models, let me take a slight diversion to list a few facts about computers, which are a long-time fascination of mine. As long as I can remember, I wanted a computer of my own. When I was a little kid I dreamed about having one. I speak a half dozen computer languages reasonably well, and there are more that I’ve forgotten. I wrote my first computer program in 1963.
Watching the changes in computer power has been astounding. In 1979, the fastest computer in the world was the Cray-1 supercomputer, a machine far beyond anything that most scientists might have dreamed of having. It had 8 MB of memory, 10 GB of hard disk space, and ran at 100 MFLOPS (million floating point operations per second). The computer I’m writing this on has a thousand times the memory, fifty times the disk space, and two hundred times the speed of the Cray-1.
And that’s just my desktop computer. The new NOAA climate supercomputer “Gaea” shown in Figure 1 runs two and a half million times as fast as a Cray-1. This means that a one-day run on “Gaea” would take a Cray-1 about seven thousand years to complete …
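Just to check the arithmetic, here is a back-of-the-envelope sketch using the article’s own round numbers (a Cray-1 at roughly 100 MFLOPS, “Gaea” at roughly two and a half million times that); the figures are illustrative, not machine specifications.

```python
# Back-of-the-envelope check of the speed comparison above, using the
# article's own round numbers. These are illustrative, not specifications.

CRAY1_FLOPS = 100e6               # ~100 MFLOPS
SPEEDUP = 2.5e6                   # "two and a half million times as fast"
GAEA_FLOPS = CRAY1_FLOPS * SPEEDUP

SECONDS_PER_DAY = 86400
ops_in_one_gaea_day = GAEA_FLOPS * SECONDS_PER_DAY
cray1_days = ops_in_one_gaea_day / (CRAY1_FLOPS * SECONDS_PER_DAY)

print(f"Gaea: ~{GAEA_FLOPS / 1e12:.0f} TFLOPS")
print(f"One day on Gaea = ~{cray1_days / 365.25:,.0f} years on a Cray-1")
# Prints roughly 250 TFLOPS and ~6,845 years -- "about seven thousand years".
```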
Now, why is the speed of a Cray-1 computer relevant to the NAS report I quoted from above?
It is relevant because, as some of you may have realized, the NAS report I quoted from above is called the “Charney Report”. As far as I know, it was the first official National Academy of Sciences statement on the CO2 question. And when I said it was a “recent report”, I was thinking about it in historical terms. It was published in 1979.
Here’s the bizarre part, the elephant in the climate science room. The Charney Report could have been written yesterday. AGW supporters are still making exactly the same claims, as if no time had passed at all. For example, AGW supporters are still saying the same thing about the clouds now as they were back in 1979—they admit they don’t understand them, that it’s the biggest problem in the models, but all the same they’re sure the net feedback is positive. I’m not clear how that works, but it’s been that way since 1979.
That’s the oddity to me—when you read the Charney Report, it is obvious that almost nothing of significance has changed in the field since 1979. There have been no scientific breakthroughs, no new deep understandings. People are still making the same claims about climate sensitivity, with almost no change in the huge error limits. The range still varies by a factor of three, from about 1.5 to about 4.5°C per doubling of CO2.
Meanwhile, the computer horsepower has increased beyond anyone’s wildest expectations. The size of the climate models has done the same. The climate models of 1979 were thousands of lines of code. The modern models are more like millions of lines of code. Back then it was atmosphere-only models with a few layers and large gridcells. Now we have fully coupled ocean-atmosphere-cryosphere-biosphere-lithosphere models, with much smaller gridcells and dozens of both oceanic and atmospheric layers.
And since 1979, an entire climate industry has grown up that has spent millions of human-hours applying that constantly increasing computer horsepower to studying the climate.
And after the millions of hours of human effort, after the millions and millions of dollars that have gone into research, after all of those million-fold increases in computer speed and size, and after the phenomenal increase in model sophistication and detail … the guesstimated range of climate sensitivity hasn’t narrowed in any significant fashion. It’s still right around 3 ± 1.5°C per doubling of CO2, just like it was in 1979.
And the same thing is true on most fronts in climate science. We still don’t understand the things that were mysteries a third of a century ago. After all of the gigantic advances in model speed, size, and detail, we still can say nothing definitive about the clouds. We still don’t have a handle on the net feedback. It’s like the whole realm of climate science got stuck in a 1979 time warp, and has basically gone nowhere since then. The models are thousands of times bigger, and thousands of times faster, and thousands of times more complex, but they are still useless for regional predictions.
How can we understand this stupendous lack of progress, a third of a century of intensive work with very little to show for it?
For me, there is only one answer. The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.
Now we can debate what that fundamental misunderstanding might be.
But I see no other explanation that makes sense. Every other field of science has seen huge advances since 1979. New fields have opened up, old fields have moved ahead. Genomics and nanotechnology and proteomics and optics and carbon chemistry and all the rest, everyone has ridden the computer revolution to heights undreamed of … except climate science.
That’s the elephant in the room—the incredible lack of progress in the field despite a third of a century of intense study.
Now me, I think the fundamental misunderstanding is the idea that the surface air temperature is a linear function of forcing. That’s why it was lethal for the Charney folks to answer the wrong question. They started with the assumption that a change in forcing would change the temperature, and wondered “how well could we project the climatic consequences?”
Once you’ve done that, once you’ve assumed that CO2 is the culprit, you’ve ruled out the understanding of the climate as a heat engine.
Once you’ve done that, you’ve ruled out the idea that like all flow systems, the climate has preferential states, and that it evolves to maximize entropy.
Once you’ve done that, you’ve ruled out all of the various thermostatic and homeostatic climate mechanisms that are operating at a host of spatial and temporal scales.
And as it turns out, once you’ve done that, once you make the assumption that surface temperature is a linear function of forcing, you’ve ruled out any progress in the field until that error is rectified.
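To make that assumed linearity concrete, here is a minimal sketch of the relationship in question, using the standard simplified CO2 forcing expression (roughly 5.35 × ln(C/C0) W/m²) and back-calculating the sensitivity implied by the Charney range of 3 ± 1.5°C per doubling. The numbers are illustrative only; this shows the assumption being described, it does not endorse it.

```python
# A minimal sketch of the "temperature is a linear function of forcing"
# assumption. Uses the standard simplified CO2 forcing formula and the
# Charney sensitivity range; illustrative only, not an endorsement.

import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing (W/m^2) from a change in CO2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

F_DOUBLING = co2_forcing(560.0)          # forcing for 2x CO2, ~3.7 W/m^2

for dT_2x in (1.5, 3.0, 4.5):            # Charney low / central / high (deg C)
    lam = dT_2x / F_DOUBLING             # implied sensitivity, C per (W/m^2)
    # Under the linear assumption, any forcing maps straight to a warming:
    dT_small = lam * co2_forcing(280.0 * 1.01)   # e.g. a 1% rise in CO2
    print(f"dT2x = {dT_2x:.1f} C -> lambda = {lam:.2f} C/(W/m^2), "
          f"1% CO2 rise -> {dT_small:.3f} C")
```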
But that’s just me. You may have some other explanation for the almost total lack of progress in climate science in the last third of a century, and if so, all cordial comments gladly accepted. Allow me to recommend that your comments be brief, clear and interesting.
w.
PS—Please do not compare this to the lack of progress in something like achieving nuclear fusion. Unlike climate science, that is a practical problem, and a devilishly complex one. The challenge there is to build something never seen in nature—a bottle that can contain the sun here on earth.
Climate, on the other hand, is a theoretical question, not a building challenge.
PPS—Please don’t come in and start off with version number 45,122,164 of the “Willis, you’re an ignorant jerk” meme. I know that. I was born yesterday, and my background music is Tom o’Bedlam’s song:
By a host of furious fancies
Whereof I am commander
With a sword of fire, and a steed of air
Through the universe I wander.
By a ghost of rags and shadows
I summoned am to tourney
Ten leagues beyond the wild world's end
Methinks it is no journey.
So let’s just take my ignorance and my non compos mentation and my general jerkitude as established facts, consider them read into the record, and stick to the science, OK?
“evolves to maximize entropy”
Hence the Achilles heel
Not entropy dimension, but a 3D entropy function is needed
“Modified Feynman ratchet with velocity-dependent fluctuations”
Motivation is the ability to extract work from a gravitationally bound gas
Consider well insulated pipe loop containing H2
Heat exchanger at bottom, and downward top of the loop
Air Cp = 1.01 kJ/(kg·K) — H2 Cp = 14.32 kJ/(kg·K) — lapse = -g/Cp
Guess what? Perpetual motion!
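For anyone following the arithmetic in that comment, here is a minimal sketch of the lapse-rate numbers it quotes, using the dry adiabatic relation lapse = g/Cp and the commenter’s Cp values read as kJ/(kg·K); whether the insulated pipe-loop thought experiment really yields work is a separate question the sketch does not settle.

```python
# Lapse-rate arithmetic from the comment above: dry adiabatic lapse = g / Cp,
# with the commenter's Cp values interpreted as kJ/(kg*K).

G = 9.81                       # gravitational acceleration, m/s^2

CP_J_PER_KG_K = {
    "air": 1010.0,             # ~1.01 kJ/(kg*K), as quoted above
    "H2": 14320.0,             # ~14.32 kJ/(kg*K), as quoted above
}

for gas, cp in CP_J_PER_KG_K.items():
    lapse_k_per_km = (G / cp) * 1000.0       # K per kilometre of height
    print(f"{gas}: dry adiabatic lapse rate ~ {lapse_k_per_km:.2f} K/km")

# Roughly 9.7 K/km for air versus 0.7 K/km for hydrogen: the point being
# made is that columns of gases with different Cp would, on this relation,
# sit at different temperatures at the same height.
```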
Some more on “data-mining”.
The basic ingredient here is “start with the data as they are” – a vital attitude in research, and a grave omission when using mainly models to substantiate CAGW. Many comments on WUWT indicate this. I only learned the term “data-mining” while formulating my first comment in this thread and googling around to update on modern approaches with computers. I am no expert in data-mining at all, but I am fond of basic science, like many visitors here.
As the scientific process is a cyclic one, it is not realistic to give predominance to one of the phases of this cycle over the others. So data-collecting, being one of the steps, may very well be, and often is, preceded by some idea, fascination, theory or even a model. The problem with a model is that it is very difficult to test its validity by collecting data. Somehow you have to break the model down into testable elements, but where does this leave your model? For a complicated phenomenon like climate, the “start with a model” approach seems way over the top to me.
There wouldn’t be this “climate model hype” if there weren’t computers. Rightly or wrongly people expect magic from their calculating power. But as science is a cyclic process, you cannot neglect the other phases. In a sense “data-mining” (bottom-up) is the opposite of working with models (top-down). It’s a complementary necessity I would think.
We like making pretty charts with our data. A pretty chart is worth so many wonderful words. The problem with pretty charts is we often forget where 0 is. If we are trying to find a planet around a distant star, ignoring zero is quite useful. The question is, what do we do with that planet after we find it? Is it added to the database used to calculate the next shuttle orbit? Does that planet 50 light-years away affect the launch of a rocket?
As another poster has pointed out regularly here, in climate science we seem to not only ignore 0, we ignore the fact that we are using the wrong 0. The great “scientists” are running around making predictions about temperature using Anomalous 0, when we should be looking at 0 in terms of enthalpy. The starting point there IS NOT an anomalous enthalpy though. You have to look at the absolute magnitude of the enthalpies involved. We have started to see some discussion of enthalpies in places like Skeptical Science, but when you look closely they are still playing the anomalous game.
I do expect experts (all of them) to make judgements about when to use anomalies and when not to. If all they are doing is presenting pretty graphs, more power to them. If they are trying to make a prediction about what is going to happen next, they need to use the data to make a prediction. If anomalous analysis allows them to make more accurate predictions, then I am wrong. So far, though, I haven’t seen any of the anomalous analyses do anything more than say “look, the haphazard results match my funky models.”
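To illustrate that enthalpy point, here is a minimal sketch using the common moist-enthalpy approximation h ≈ cp·T + Lv·q; the two parcels and their humidities below are invented for illustration, not observations.

```python
# Anomaly versus enthalpy: same +1 K temperature change, very different
# change in heat content. Uses the common approximation h = cp*T + Lv*q.
# The parcels below are invented for illustration.

CP = 1005.0        # specific heat of dry air, J/(kg*K)
LV = 2.5e6         # latent heat of vaporization of water, J/kg

def moist_enthalpy(temp_k, q):
    """Approximate moist enthalpy per kg of air, in J/kg."""
    return CP * temp_k + LV * q

# Two parcels, each showing the SAME +1 K temperature "anomaly":
dh_dry = moist_enthalpy(289.0, 0.002) - moist_enthalpy(288.0, 0.002)
dh_humid = moist_enthalpy(301.0, 0.020) - moist_enthalpy(300.0, 0.018)

print(f"dry parcel:   dT = +1 K, dh = {dh_dry:.0f} J/kg")
print(f"humid parcel: dT = +1 K, dh = {dh_humid:.0f} J/kg")
# The humid parcel gains several times the energy for the same 1 K anomaly,
# which is why an anomaly-only view can hide what the energy is doing.
```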
Brad Tittle:
An often overlooked characteristic of the IPCC climate models is that they do not make predictions. They make projections. Unlike predictions, projections are insusceptible to being statistically validated.
Why don’t we talk about “forecast” and “hindcast”? Much clearer.
Projection/prediction comes from what language? Is it understood the English, Spanish, German, or French way? Identical spelling, as every translator knows, does NOT mean the meaning is identical: you can be embarrassed in English and you can be embarrassed in Spanish, and in most cases the meaning of a word is somewhat different in each language. I am not even sure that an English projection is the same as a German “Projektion” or a Spanish “proyeccion”.
Let’s use forecast and we all know what it is all about.
JS
Thanks for taking the time to respond. In English, “prediction” and “forecast” are synonyms. In the following remarks, I use “prediction.”
It is improper to suggest that any of the IPCC climate models make predictions, as none of these models do so. What these models make are projections. “Projection” is a term from the field of ensemble forecasting; while climatologists often use “prediction” and “projection” as synonyms, the ideas referenced by the two words are distinct.
A model that makes predictions is susceptible to being statistically tested with the consequence that it can be either falsified or validated by the evidence, if any. A model that makes only projections is not susceptible to being statistically tested; it cannot be either falsified or validated by the evidence.
In its assessment reports, the IPCC muddies the waters by using the similar-sounding words “prediction” and “projection” as synonyms when they are not. Similarly, it muddies the waters by using the similar-sounding words “validation” and “evaluation” as synonyms when they are not. A consequence is for it to sound to many as though the models have been statistically validated when they are not even susceptible to being statistically tested.
The lack of susceptibility to being statistically tested implies that the methodology of the IPCC’s inquiry into AGW was non-scientific, but many believe the opposite. This misunderstanding is a consequence of the ambiguity with which terms in the language of climatology refer to the associated ideas. If the IPCC wished, it could eliminate this misunderstanding by disambiguating the terms in the language of its reports. A year after I published a peer-reviewed article on this topic, I have no evidence that the organization plans to do so.
Reblogged this on Climate Ponderings and commented:
“The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.”
Excellent observation. And … abandoning either assumption cuts the ground out from under the CAGW “conclusion”. To be explicit, not stable means the “natural variation” band must be acknowledged to encompass all current (interglacial) variance, and no, weak, or strongly bounded positive feedback means “sensitivity” is low and runaway tipping points are highly implausible.
Very cogent observation! I had, along with others I think, mainly concentrated on the improper collusion between rent-seeking society “management” and rent-seeking “climatologists”. But the threat of ostracism, and the lack of impartial support services, that is implicit (explicit?) in the societies’ endorsements of the consensus is indeed where the rubber meets the road.
That would be consistent with the Singer satellite findings, that OLR varies quickly and smoothly with temperature, not lagged. It “short-circuits” any putative positive feedback mechanism.
Has anyone with access to any of the large data pools had a go with the Eureqa software? (Searches for patterns and derives simplest equations for them, no hints or suggestions or assumptive inputs allowed.)
Oops, blew the link above; s/b Eureqa (free download site).
Terry;
The “projections” were, IIRC, originally and accurately characterized as extrapolations of various ensembles of assumptions to see what happened when selected variables and coefficients within the programmed algorithms were “tweaked”. Said ensembles of assumptions and tweaks were poorly characterized and documented, which is a problem, but IAC the initializations were also arbitrary. All in all, said “projections” could function as predictions only to the extent that the variables, algorithms, and initializations were thoroughly and explicitly vetted in advance, preferably (much preferably) by third parties.
Didn’t happen, of course.
Brian H
Thanks for taking the time to reply. I disagree. A “prediction” has a close relationship to a statistically independent event. In particular, it is an extrapolation from the observed state at the start of this event to the unobserved state at the end of the same event. If you observe that it is cloudy and predict rain in the next 24 hours, you have made a prediction. In my example, “cloudy” is the observed state at the start of the event while “rain in the next 24 hours” is the unobserved state at the end of the same event.
The event that I have described has a duration of 24 hours. By splitting the time line into non-overlapping 24 hour long intervals that collectively cover the time line, one could provide a partial description of a complete set of statistically independent events or “statistical population.” Please note that the independent events are discrete and countable. On the other hand, projections are continuous.
The set of predictions that potentially are made by a model has a one-to-one relationship to the events in a statistical population. The existence of this population is a necessary condition for the associated model to have the potential for making predictions. By the absence of this population, you can assure yourself that none of the models that are referenced by IPCC Working Group 1 in AR4 have the potential for making predictions. On the other hand, all of them have the potential for making projections.
A conditional prediction, that is, one in which the predicted unobserved state at the end of the event depends upon the observed state at the start of the same event, is an example of a predictive inference. A model may make a predictive inference but while an IPCC climate model makes projections it makes no predictive inference. A model that makes no predictive inference conveys no information to a policy maker about the outcomes from his/her policy decisions. Thus, as vehicles for making policy, the IPCC climate models are worthless.
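As a toy illustration of the event structure described above (not anyone’s actual forecasting method), here is a sketch that splits a record into non-overlapping 24-hour events, issues the conditional prediction “cloudy now, so rain within 24 hours” for each, and scores it against made-up outcomes.

```python
# Toy version of the event structure described above. The observations are
# made up for illustration; the point is that discrete, countable events
# make a claim scoreable against observation.

# One record per 24-hour event: (observed state at start, rain in next 24 h)
events = [
    ("cloudy", True), ("cloudy", False), ("clear", False),
    ("cloudy", True), ("clear", True),   ("clear", False),
    ("cloudy", True), ("clear", False),  ("cloudy", False),
]

predictions = [state == "cloudy" for state, _ in events]   # predict rain iff cloudy
outcomes = [rained for _, rained in events]

hits = sum(p == o for p, o in zip(predictions, outcomes))
print(f"{hits} of {len(events)} independent events verified "
      f"({hits / len(events):.0%})")
```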
Terry;
I was certainly not trying to justify their procedure! In fact, I fully agree with G&T’s characterization of the models as “video games”. I was just giving the rationale, and trying to highlight the acknowledged non-relationship to reality inherent in the term “projections”.
Note the specification: all the coefficients, algorithms, and variables, and the initialization data, would have to be vetted by 3rd party analysts BEFORE any projection could be tested against reality, and THEN given a tentative valuation. None of that has ever occurred. Yet, in practice, the projections are used, treated, and cited as “predictions”. Well; live by the sword …
Brian H:
I take it that we agree on the inappropriateness of using, treating and citing projections as predictions. I’d be relieved if you would assure me of your additional understanding that the existence of the associated statistical population is a necessity for a prediction to have been made. Climatologists on both sides of the issue of CAGW, including Lord Monckton, exhibit ignorance of this necessity.
Definitionally and technically correct; but in the real world any statement with a future date on it, especially when accompanied by such phrases as “likely” and “highly likely”, is a prediction. That it is baseless is merely a characteristic that must be loudly explained to the listeners, lest they take it seriously. If some time has passed since issuance, it may also be helpful and necessary to demonstrate how much at variance the model projection is from observation.
Politically, which is where the decisions affecting Life, Funding, and Everything are made, Big Lie Projections are deadly dangerous, and have to be defeated by all effective means. Explaining loudly to the voting populace that there is no associated statistical population will possibly prove insufficient.
Brian H:
It sounds as though we’re in rough agreement on the definition of a prediction. I’ll point out that in order for the claims made by a predictive model to be testable, there must be a large number of statistically independent events, some of them observed. The complete set of these events is an example of a statistical population, but no such population has been identified by the IPCC.
For testability, phrases like “likely” and “highly likely” would have to be replaced by numbers, for in testing a model the predicted relative frequencies of outcomes must be compared to the observed frequencies. Also, though climatologists have assumed the outcomes of the events of interest to policy makers to be numerical values of the global average surface air temperature (GASAT), there are a couple of hitches in this assumption.
First, by the definition of “climatology,” the GASAT has to be averaged over a specific time period, e.g., three decades, but the IPCC has not identified this period. Second, as each value assigned to the GASAT is a real number, selecting the GASAT as the outcome yields an infinite number of possible outcomes. Covering this space with observed events would require a sample of infinite size, which in the real world is unobtainable.
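As a sketch of the kind of numerical comparison being asked for, suppose a model stated an explicit probability for a discrete outcome (say “likely” read as 0.66); the observed relative frequency could then be compared to it. The probability and the observations below are invented for illustration.

```python
# Sketch of the frequency comparison that words like "likely" cannot
# support: a stated probability versus an observed relative frequency.
# Both the probability and the observations are invented for illustration.

import math

p_stated = 0.66                      # e.g. "likely" read as a probability
observed = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # 1 = outcome occurred

n = len(observed)
freq = sum(observed) / n
# Normal-approximation z-score for the gap between the stated probability
# and the observed relative frequency over n independent events.
z = (freq - p_stated) / math.sqrt(p_stated * (1 - p_stated) / n)

print(f"stated p = {p_stated}, observed frequency = {freq:.2f}, z = {z:.2f}")
# With only the word "likely" and no number, no such comparison is possible.
```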
Looks interesting Brian.
Gail Combs says:
March 9, 2012 at 10:24 am
Tallbloke, on references to: The Earthshine Project: Measuring the earth’s albedo……
I have this PDF: http://bbso.njit.edu/Research/EarthShine/literature/Palle_etal_2006_EOS.pdf
Can Earth’s Albedo and Surface Temperatures Increase Together?
and this research Article: http://www.hindawi.com/journals/aa/2010/963650/
Automated Observations of the Earthshine
and this PDF: http://bbso.njit.edu/Research/EarthShine/literature/Palle_etal_2008_JGR.pdf
Inter-annual variations in Earth’s reflectance 1999-2007.
Hope that helps.
It does, many thanks Gail.
Terry;
Statisticians, modellers, mathematicians and forecasters are among the professionals excluded from the Hokey Team’s collection of Jackasses of All Sciences, Masters of None. The violations of basic standards and quality controls are so many and so egregious that they are immediately inspired to offer some of their personal stocks of C4 to make a proper start. This is generally not taken well.
The unvalidated climate models need to be put back under the microscope, if they have one!!