Guest Post by Willis Eschenbach
Under the radar, and unnoticed by many climate scientists, there was a recent study by the National Academy of Sciences (NAS), commissioned by the US Government, regarding climate change. Here is the remit under which they were supposed to operate:
Specifically, our charge was
1. To identify the principal premises on which our current understanding of the question [of the climate effects of CO2] is based,
2. To assess quantitatively the adequacy and uncertainty of our knowledge of these factors and processes, and
3. To summarize in concise and objective terms our best present understanding of the carbon dioxide/climate issue for the benefit of policymakers.
Now, that all sounds quite reasonable. In fact, if we knew the answers to those questions, we’d be a long ways ahead of where we are now.
Figure 1. The new Cray supercomputer called “Gaea”, which was recently installed at the National Oceanic and Atmospheric Administration. It will be used to run climate models.
But as it turned out, being AGW-supporting climate scientists, the NAS study group decided that they knew better. They decided that to answer the actual question they had been asked would be too difficult, that it would take too long.
Now that’s OK. Sometimes scientists are asked for stuff that might take a decade to figure out. And that’s just what they should have told their political masters, can’t do it, takes too long. But noooo … they knew better, so they decided that instead, they should answer a different question entirely. After listing the reasons that it was too hard to answer the questions they were actually asked, they say (emphasis mine):
A complete assessment of all the issues will be a long and difficult task.
It seemed feasible, however, to start with a single basic question: If we were indeed certain that atmospheric carbon dioxide would increase on a known schedule, how well could we project the climatic consequences?
Oooookaaaay … I guess that’s now the modern post-normal science method. First, you assume that there will be “climatic consequences” from increasing CO2. Then you see if you can “project the consequences”.
They are right that it is easier to do that than to actually establish IF there will be climatic consequences. It makes it so much simpler if you just assume that CO2 drives the climate. Once you have the answer, the questions get much easier …
However, they did at least try to answer their own question. And what are their findings? Well, they started out with this:
We estimate the most probable global warming for a doubling of CO2 to be near 3°C with a probable error of ± 1.5°C.
No surprise there. They point out that this estimate, of course, comes from climate models. Surprisingly, however, they have no doubt and are in no mystery about whether climate models are tuned or not. They say (emphasis mine):
Since individual clouds are below the grid scale of the general circulation models, ways must be found to relate the total cloud amount in a grid box to the grid-point variables. Existing parameterizations of cloud amounts in general circulation models are physically very crude. When empirical adjustments of parameters are made to achieve verisimilitude, the model may appear to be validated against the present climate. But such tuning by itself does not guarantee that the response of clouds to a change in the CO2 concentration is also tuned. It must thus be emphasized that the modeling of clouds is one of the weakest links in the general circulation modeling efforts.
Modeling of clouds is one of the weakest links … can’t disagree with that.
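To see what such a parameterization looks like in practice, here is a minimal sketch (in Python, purely illustrative) of a classic relative-humidity cloud scheme of roughly the type the report describes. The critical humidity `rh_crit` is exactly the kind of empirically adjusted knob the report warns about; the values here are made up for illustration, not taken from any particular model.

```python
import numpy as np

def cloud_fraction(rh, rh_crit=0.8):
    """Diagnostic cloud fraction for one grid box (Sundqvist-style sketch).

    Below the tunable critical relative humidity the box is clear;
    above it, cloud fraction rises smoothly to 1 at saturation.
    """
    rh = np.clip(rh, 0.0, 1.0)
    frac = 1.0 - np.sqrt((1.0 - rh) / (1.0 - rh_crit))
    return np.where(rh > rh_crit, frac, 0.0)

# Same grid-box humidity, different tuning, very different cloud cover:
print(cloud_fraction(0.95, rh_crit=0.8))  # ~0.50
print(cloud_fraction(0.95, rh_crit=0.9))  # ~0.29
```

Tune `rh_crit` until the present climate looks right and the model appears "validated", yet nothing guarantees the scheme responds correctly to a change in CO2, which is precisely the report's point.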
So what is the current state of play regarding the climate feedback? The authors say that the positive water vapor feedback overrules any possible negative feedbacks:
We have examined with care all known negative feedback mechanisms, such as increases in low or middle cloud amount, and have concluded that the oversimplifications and inaccuracies in the models are not likely to have vitiated the principal conclusion that there will be appreciable warming. The known negative feedback mechanisms can reduce the warming, but they do not appear to be so strong as the positive moisture feedback.
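The "positive moisture feedback" they lean on comes from the Clausius-Clapeyron relation: warmer air can hold more water vapor, and water vapor is itself a greenhouse gas. A quick check using the standard Magnus approximation for saturation vapor pressure (the coefficients are the usual textbook ones) shows the size of the effect:

```python
import math

def e_sat(t_c):
    """Saturation vapor pressure in hPa, Magnus approximation."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

# Roughly 7% more water vapor capacity per degree of warming near 15 C,
# which is the physical basis of the claimed positive feedback:
print(e_sat(16.0) / e_sat(15.0) - 1.0)  # ~0.066
```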
However, as has been the case for years, when you get to the actual section of the report where they discuss the clouds (the main negative feedback), the report merely reiterates that the clouds are poorly understood and poorly represented … how does that work, that they are sure the net feedback is positive, but they don’t understand and can only poorly represent the negative feedbacks? They say, for example:
How important the overall cloud effects are is, however, an extremely difficult question to answer. The cloud distribution is a product of the entire climate system, in which many other feedbacks are involved. Trustworthy answers can be obtained only through comprehensive numerical modeling of the general circulations of the atmosphere and oceans together with validation by comparison of the observed with the model-produced cloud types and amounts.
In other words, they don’t know but they’re sure the net is positive.
Regarding whether the models are able to accurately replicate regional climates, the report says:
At present, we cannot simulate accurately the details of regional climate and thus cannot predict the locations and intensities of regional climate changes with confidence. This situation may be expected to improve gradually as greater scientific understanding is acquired and faster computers are built.
So there you have it, folks. The climate sensitivity is 3°C per doubling of CO2, with an error of about ± 1.5°C. Net feedback is positive, although we don’t understand the clouds. The models are not yet able to simulate regional climates. No surprises in any of that. It’s just what you’d expect a NAS panel to say.
Now, before going forwards, since the NAS report is based on computer models, let me take a slight diversion to list a few facts about computers, which are a long-time fascination of mine. As long as I can remember, I wanted a computer of my own. When I was a little kid I dreamed about having one. I speak a half dozen computer languages reasonably well, and there are more that I’ve forgotten. I wrote my first computer program in 1963.
Watching the changes in computer power has been astounding. In 1979, the fastest computer in the world was the Cray-1 supercomputer, a machine far beyond anything that most scientists might have dreamed of having. It had 8 MB of memory, 10 GB of hard disk space, and ran at 100 MFLOPS (million floating-point operations per second). The computer I’m writing this on has a thousand times the memory, fifty times the disk space, and two hundred times the speed of the Cray-1.
And that’s just my desktop computer. The new NOAA climate supercomputer “Gaea” shown in Figure 1 runs two and a half million times as fast as a Cray-1. This means that a one-day run on “Gaea” would take a Cray-1 about seven thousand years to complete …
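For anyone who wants to check that arithmetic, here it is, using the figures quoted above (the 100 MFLOPS and the 2.5-million-fold speedup are the numbers in the text; treat them as round):

```python
# One day of "Gaea" work, redone on a 1979 Cray-1.
speedup = 2.5e6                     # Gaea vs. Cray-1, per the text
cray1_days = 1.0 * speedup          # days a Cray-1 would need
cray1_years = cray1_days / 365.25
print(f"{cray1_years:,.0f} years")  # ~6,845, i.e. "about seven thousand"
```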
Now, why is the speed of a Cray-1 computer relevant to the NAS report I quoted from above?
It is relevant because, as some of you may have realized, the NAS report I quoted from above is called the “Charney Report”. As far as I know, it was the first official National Academy of Sciences statement on the CO2 question. And when I said it was a “recent study”, I was thinking about it in historical terms. It was published in 1979.
Here’s the bizarre part, the elephant in the climate science room. The Charney Report could have been written yesterday. AGW supporters are still making exactly the same claims, as if no time had passed at all. For example, AGW supporters are still saying the same thing about the clouds now as they were back in 1979—they admit they don’t understand them, that it’s the biggest problem in the models, but all the same they’re sure the net feedback is positive. I’m not clear how that works, but it’s been that way since 1979.
That’s the oddity to me—when you read the Charney Report, it is obvious that almost nothing of significance has changed in the field since 1979. There have been no scientific breakthroughs, no new deep understandings. People are still making the same claims about climate sensitivity, with almost no change in the huge error limits. The range still varies by a factor of three, from about 1.5 to about 4.5°C per doubling of CO2.
Meanwhile, the computer horsepower has increased beyond anyone’s wildest expectations. The size of the climate models has done the same. The climate models of 1979 were thousands of lines of code. The modern models are more like millions of lines of code. Back then it was atmosphere only models with a few layers and large gridcells. Now we have fully coupled ocean-atmosphere-cryosphere-biosphere-lithosphere models, with much smaller gridcells and dozens of both oceanic and atmospheric layers.
And since 1979, an entire climate industry has grown up that has spent millions of human-hours applying that constantly increasing computer horsepower to studying the climate.
And after the millions of hours of human effort, after the millions and millions of dollars gone into research, after all of those million-fold increases in computer speed and size, and after the phenomenal increase in model sophistication and detail … the guesstimated range of climate sensitivity hasn’t narrowed in any significant fashion. It’s still right around 3 ± 1.5°C per doubling of CO2, just like it was in 1979.
And the same thing is true on most fronts in climate science. We still don’t understand the things that were mysteries a third of a century ago. After all of the gigantic advances in model speed, size, and detail, we still can say nothing definitive about the clouds. We still don’t have a handle on the net feedback. It’s like the whole realm of climate science got stuck in a 1979 time warp, and has basically gone nowhere since then. The models are thousands of times bigger, and thousands of times faster, and thousands of times more complex, but they are still useless for regional predictions.
How can we understand this stupendous lack of progress, a third of a century of intensive work with very little to show for it?
For me, there is only one answer. The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.
Now we can debate what that fundamental misunderstanding might be.
But I see no other explanation that makes sense. Every other field of science has seen huge advances since 1979. New fields have opened up, old fields have moved ahead. Genomics and nanotechnology and proteomics and optics and carbon chemistry and all the rest, everyone has ridden the computer revolution to heights undreamed of … except climate science.
That’s the elephant in the room—the incredible lack of progress in the field despite a third of a century of intense study.
Now me, I think the fundamental misunderstanding is the idea that the surface air temperature is a linear function of forcing. That’s why it was lethal for the Charney folks to answer the wrong question. They started with the assumption that a change in forcing would change the temperature, and wondered “how well could we project the climatic consequences?”
Once you’ve done that, once you’ve assumed that CO2 is the culprit, you’ve ruled out the understanding of the climate as a heat engine.
Once you’ve done that, you’ve ruled out the idea that like all flow systems, the climate has preferential states, and that it evolves to maximize entropy.
Once you’ve done that, you’ve ruled out all of the various thermostatic and homeostatic climate mechanisms that are operating at a host of spatial and temporal scales.
And as it turns out, once you’ve done that, once you make the assumption that surface temperature is a linear function of forcing, you’ve ruled out any progress in the field until that error is rectified.
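For concreteness, here is that linear paradigm reduced to a few lines. The logarithmic forcing expression is the standard approximation from Myhre et al. (1998); the sensitivity parameter lambda below is simply chosen to reproduce the Charney central value of ~3°C per doubling, which is the point: the answer is built into the assumptions.

```python
import math

def delta_forcing(c, c0):
    """Standard logarithmic CO2 forcing approximation, W/m^2 (Myhre et al. 1998)."""
    return 5.35 * math.log(c / c0)

def delta_temp(c, c0, lam=0.81):
    """The linear paradigm: warming = lambda * forcing.

    lam (K per W/m^2) is picked here so that a doubling gives ~3 C,
    the Charney estimate. The linearity itself is the assumption.
    """
    return lam * delta_forcing(c, c0)

print(delta_forcing(560.0, 280.0))  # ~3.7 W/m^2 for a CO2 doubling
print(delta_temp(560.0, 280.0))     # ~3.0 C
```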
But that’s just me. You may have some other explanation for the almost total lack of progress in climate science in the last third of a century, and if so, all cordial comments gladly accepted. Allow me to recommend that your comments be brief, clear and interesting.
w.
PS—Please do not compare this to the lack of progress in something like achieving nuclear fusion. Unlike climate science, that is a practical problem, and a devilishly complex one. The challenge there is to build something never seen in nature—a bottle that can contain the sun here on earth.
Climate, on the other hand, is a theoretical question, not a building challenge.
PPS—Please don’t come in and start off with version number 45,122,164 of the “Willis, you’re an ignorant jerk” meme. I know that. I was born yesterday, and my background music is Tom o’Bedlam’s song:
By a host of furious fancies
Whereof I am commander
With a sword of fire, and a steed of air
Through the universe I wander.
By a ghost of rags and shadows
I summoned am to tourney
Ten leagues beyond the wild world's end
Methinks it is no journey.
So let’s just take my ignorance and my non compos mentation and my general jerkitude as established facts, consider them read into the record, and stick to the science, OK?
DesertYote said on March 8, 2012 at 11:41 am
My latest, acquired on Monday this week, are similar except for the lens coating. They have a stainless steel frame and weigh 21 gm. A pair with glass lenses from 40 years ago, before I became presbyopic, weigh over 50 gm. Gits like technological progress 🙂
The Pompous Git says:
March 8, 2012 at 10:55 am
Tallbloke is right, based on observed satellite data on trends in global cloud and temperature. Global cloud levels declined from around 1983 until 2000, then remained stable for a period before increasing recently. It is a disgrace that alarmist climate scientists have ignored this. This global cloud trend has nothing to do with CO2, and it is obviously what caused the recent warming.
Willis, your jerkitudinous mentation is verisimilitudinously awesome.
KUTGW.
the guesstimated range of climate sensitivity hasn’t narrowed in any significant fashion. It’s still right around 3 ± 1.5°C per doubling of CO2, just like it was in 1979.
_________________________________________
Indeed, and they still don’t recognise, even though it’s been pointed out numerous times, that the sensitivity calculation is based on completely fabricated “physics”. It assumes, firstly, that the Earth’s surface only loses thermal energy by radiation – hence their 255K figure. Then they say that 33 degrees is due to water vapour and trace gases, when in fact it’s not 33 degrees at all (because the 255K is wrong), and whatever it should be is due to the acceleration due to gravity, which determines the adiabatic lapse rate. Then, to cap it off, they put evaporation and diffusion (wrongly named convection or thermals) back into their energy diagrams, thus admitting their mistake in assuming that the surface only radiates like a perfectly insulated blackbody does.
They also neglect the cooling effect due to absorption of solar radiation in the SW IR range, followed by upwelling “backradiation” to space. This SW IR has more energy per photon than does the LW IR from the surface. And backradiation to space does prevent warming, just like reflection, whereas backradiation downwards cannot transfer thermal energy to the surface – it can only slow the radiative component of surface cooling, not the evaporative or diffusion processes.
Hence there is absolutely no basis whatsoever for any warming sensitivity when, in fact, carbon dioxide almost certainly has a very slight net cooling effect.
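For readers wanting to see where the disputed numbers come from, here is the textbook calculation behind the 255K figure, plus the g/cp lapse rate the comment invokes. Showing the arithmetic takes no position on whether the comment's objections to it are sound.

```python
# Textbook origin of the numbers under dispute.
sigma = 5.67e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0           # solar constant, W m^-2
albedo = 0.30         # planetary albedo

# Effective radiating temperature: absorbed sunlight = emitted thermal IR.
T_eff = (S0 * (1.0 - albedo) / (4.0 * sigma)) ** 0.25
print(T_eff)          # ~254.6 K: the "255K" figure

# Mean surface temperature ~288 K, hence the oft-quoted "33 degrees".
print(288.0 - T_eff)  # ~33 K

# Dry adiabatic lapse rate, set by gravity and the heat capacity of air.
g, cp = 9.81, 1004.0    # m s^-2, J kg^-1 K^-1
print(1000.0 * g / cp)  # ~9.8 K per km
```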
Matt G said on March 8, 2012 at 3:53 pm
Pan evaporation rates declined from circa 1950 onwards. Changes in global cloud cover from 1983 onward could have had no effect on pan evaporation in the 60s & 70s. Nor could they be responsible for the recent warming that began in the mid-70s. If you wish to argue with these points you will have to do better I’m afraid. Tallbloke referred to Palle’s work on pan evaporation rates, but I have been unable to find anything by him from an admittedly cursory search.
You might want to refer to http://www.mindfully.org/Air/2002/Decreased-Pan-Evaporation1nov02.htm as part of my “mine of disinformation”.
Further to my previous post about working from the known to the unknown (which has been reversed in contemporary climate science), Anthony’s Surfacestations project, Willis’ investigation of Argo, Geoff Sherrington’s work on the history of temperature stations and stats in Australia, and the work of many others who deserve mention are surely the foundation of science. It’s not as sexy as playing with models and fancy computers, and it doesn’t provide instant answers – but it is actually far more interesting and intellectually challenging.
Modern climate ‘science’ suffers from a top-down model being imposed on real data. Proper science is where we try to develop hypotheses from what is observed, or even from deductions from what has been observed.
As with the quest for the ‘cure’ for ‘cancer’, the question was wrongly framed in the first place.
If —— energy in = energy out——- then wisdom in, — must = wisdom out, so who the heck knows anything?
For me, there is only one answer. The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.
I’ve floated the idea that the forcings model/theory is wrong, because negative feedbacks operate on much shorter timescales than the forcings. Therefore, forcings have no effect on the (net) climate equilibrium.
As these feedbacks are water-based, what drives climate change are the factors that affect the phase changes of water – aerosols, GCRs, possibly ozone – affecting cloud formation and type, precipitation, and snow/ice melt.
Willis, you are too modest. That was an excellent insightful analysis.
I should have said,
Forcings have no effect on the (net) climate equilibrium over some timescale, which doesn’t preclude natural cycles. I am thinking of natural ocean cycles like ENSO.
Willis hits the nail directly and firmly on the head here.
The whole thing started with a simple hypothesis.
And almost everything that has been done since has been based on the premise that the hypothesis is true.
Richard Courtney,
I quite agree with your technical analysis, and am glad of it. My original post was more focused on the human elements of learning and performing science. And also not wanting to be (too) harsh on (too) many people I’ve never met. At risk of insulting some, I’ll elaborate.
I think it quite possible for many students to go through a modern scientific education without being confronted with the failures of their results, theories or understandings. I know it is all too easy for a scientist to talk-the-talk and win arguments with their peers. Without external input this can happen collectively, and a whole group can become convinced their arguments are correct. And for long periods of time. So group-think can occur, especially in small incestuous research areas.
But in, for example, chemistry laboratories, the external input of real contradictory data can arrive very quickly and unpleasantly. This makes it much harder to ignore, so the human learning and thinking process may be a very different one. If the discipline is a large one, then there are more likely to be significant dissenters to a dominant paradigm who cannot be silenced. I would further add that we are probably still in the first generation where computer modelling is so widely and freely available to so many. I have experienced it to be a very powerful and seductive tool. Scientifically, it is also a very dangerous one.
Michael Palmer:
No, I’m not bitter about synthetic chemistry. It was tongue-in-cheek. As many unemployed or “under paid” scientists will admit when they are candid with themselves, they do it because they like it.
I think the “team” really does need the new expensive quantum computers. It’s the only thing that will allow them to be right and wrong at the same time……
Humm, never mind. Why waste the money? We have that already.
The conclusions of modern climatology are based upon a number of misunderstandings. One of the more fundamental of these concerns the significance of the underlying statistical population to a scientific inquiry. In the absence of a statistical population, an inquiry cannot be “scientific”, for the claims made by its theories/models are not susceptible to being tested. Today, as in the past, no statistical population underlies the IPCC’s claim of CAGW. To fail to identify the underlying statistical population is to ensure that an inquiry is not truly scientific.
Willis wrote “For me, there is only one answer. The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.”
Willis, the issue is very simple. Computer climate scientists try to solve the climate problem with a methodology that starts from the known first principles of physics. This methodology simply does not work for complex systems.
Complex systems need to be studied by using phenomenological models that look at how the whole system behaves and try to directly model its dynamics. See here:
http://en.wikipedia.org/wiki/Phenomenology_(science)
This philosophy is the one adopted in my studies.
In the 1970s the issue was addressed in numerous disciplines, from economics to medicine, and every discipline dealing with complex systems understood the problem and progressed by developing phenomenological approaches together with other, more analytical approaches.
Unfortunately, computer climate scientists never got it, and they got stuck.
And the discipline never developed.
The above issues are addressed extensively in my book
“Disrupted Networks: From Physics to Climate Change”
http://www.amazon.com/Disrupted-Networks-Physics-Nonlinear-Phenomena/dp/9814304301/ref=sr_1_1?ie=UTF8&qid=1331272230&sr=8-1
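As a generic illustration of the phenomenological approach described above (not the specific model in the book), here is a sketch that fits an empirical trend-plus-cycle directly to a record instead of deriving it from first principles. The 60-year period and the synthetic data are purely illustrative assumptions:

```python
import numpy as np

# Synthetic "temperature" record: trend + 60-year cycle + noise.
rng = np.random.default_rng(0)
t = np.arange(1880.0, 2012.0)
period = 60.0
y = (0.005 * (t - t[0])
     + 0.10 * np.sin(2.0 * np.pi * t / period)
     + 0.05 * rng.standard_normal(t.size))

# Phenomenological fit: least squares on [trend, sin, cos, const].
X = np.column_stack([t - t[0],
                     np.sin(2.0 * np.pi * t / period),
                     np.cos(2.0 * np.pi * t / period),
                     np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # recovers roughly [0.005, 0.10, ~0, ~0]
```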
Great post Willis, again.
Lysenkoism is an analogy for the lack of progress in a science that starts with the assumption that we know how something works and then wastes time, money, and lives trying to prove what we assumed. Look at what this did to the Soviet Union’s genetics and biology research.
Furthermore, Lysenko pursued the research because Stalin would give him money, power, and glory. Sounds much like the CAGW assumptions today and their lust for power, money, and glory.
This global cloud trend has nothing to do with CO2, and it is obviously what caused the recent warming.
Indeed,
Climate science has completely ignored the fact that reduced anthropogenic aerosol seeding has reduced cloud reflectivity (as well as cloud amounts), while at the same time promoting schemes to artificially increase cloud reflectivity.
http://en.wikipedia.org/wiki/Cloud_reflectivity_modification
I don’t know whether they are blinded by the dogma or the money.
To this day, all the inventors and all the dreamers are looking for positive feedback in everything, so that we could have perpetual motion and unlimited free power. Alas and alack, there is no free lunch anywhere, including in the climate.
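The commenter's point can be made precise with the standard feedback algebra: a feedback fraction f amplifies an initial perturbation by the geometric series 1 + f + f^2 + … = 1/(1 − f). The gain stays finite for f < 1, and "perpetual motion" requires f ≥ 1. A quick illustration:

```python
# Amplification from a feedback fraction f: 1 / (1 - f), finite for f < 1.
for f in [0.0, 0.3, 0.5, 0.9, 0.99]:
    print(f"f = {f:4.2f} -> gain = {1.0 / (1.0 - f):6.1f}")
# Positive feedback amplifies, but it only "runs away" at f >= 1;
# below that threshold there is indeed no free lunch.
```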
The Pompous Git says:
March 8, 2012 at 4:53 pm
Tallbloke referred to Palle’s work on pan evaporation rates, but I have been unable to find anything by him from an admittedly cursory search.
Incorrect. I referred to Palle et al. in relation to your mention of the Earthshine measurements, which they carried out. They show that cloud increased after 1997.5, then levelled out. Their data, where they overlap, are consistent with the ISCCP data, which show the decrease in low tropical cloud cover from 1983-1998 that Matt referred to.
As Soon et al. and Doug Proctor’s post at my site (linked earlier) show, the correlation between sunshine hours and surface temperature is far closer than that between CO2 levels and temperature.
So far as I can see, your contention that cloud cover change can’t be responsible for the late 20th-century warming is simply unsupported argument by assertion. The relationship between evaporation rates and cloud cover is complex and poorly understood. Yet you seem to be implying that your unspecified reference to an uncited study showing a reduction in evapotranspiration means the cloud cover reduction in the tropics measured and reported by the ISCCP didn’t happen.
Where’s the beef?
I wonder whether the lack of progress here is best explained by economics. In simple terms, there has been no market in climate science worth mentioning – all the money has gone to those who accept the tenets of the case outlined in the 1979 NAS report. There has been no reasonable funding of other viewpoints (not to mention that the adherents of orthodoxy have tried to close down other viewpoints through non-scientific means), and therefore no particular reason for career scientists to pursue these views – or incentive to challenge the orthodoxy.
As in any field, a lack of competition for ideas means they do not develop – as any look at the papers of e.g. Professor Mann would indicate, most of what is published as important is in fact either tinkering with existing models or finding new justifications for them – it is not raw science, as that is not what is required for funding. There is no competition, therefore there is no incentive to actually challenge the science.
In effect, this is what happens when a small field (probably fewer than 100 scientists in 1979 – and no specialised departments) receives loads of money, so there is no shortage. The challenge to the orthodoxy was conducted only by outsiders, by non-careerists who do science for love not money and, now, by an increasing number of new entrants to the field (note how few early-career academics are prominent supporters of the orthodoxy) caused by the increase in funds and competing for their share of the money (it is now a crowded field).
In short, the lack of a free market in ideas stalled any development of ideas. A wonderful case study.
Willis, Philip, Tallbloke and others
As I read your posts above it just seems there is so much more that climatologists overlook, and that’s why no progress is made. Maybe some do privately, but don’t dare to speak up. I guess I’m lucky to be able to study these issues for literally thousands of hours in my semi-retirement and not fear the sack or anything relating to my reputation, or whatever it is that holds back progress.
Basically, it’s one thing to pass a degree in physics, but it’s far more complex to really think through the physics and relate it to the atmosphere. In other posts today I’ve given something of an idea of what I’ve written in much more detail, which will be available Monday or Tuesday.
You are looking at feedbacks and the like up in the atmosphere, fine, but you need to come to grips with what has been proven computationally (and, I believe I can say, theoretically and empirically): that radiated energy from a cool atmosphere is not converted to thermal energy in the surface. All it does is slow the radiative transfer of energy from the surface, but not the evaporative transfer, nor the diffusion (conduction, if you like) followed by convection.
But even that’s not the end of the story. The 255K and that 33 degrees are wrong, and anyway have absolutely nothing to do with sensitivity. The upward backradiation to space of the solar SW IR (captured by WV and CO2) has a cooling effect, and, after all, there’s more energy in SW IR photons than in LW IR photons.
But all these considerations are totally eclipsed by the 1,000-year and 60-year natural cycles (not just ENSO cycles, which are a result of climate change, not a cause) and by the thermal inertia of the massive amount of thermal energy in the inner crust, mantle and core, which we know is retained well by the crust because the terrestrial heat flow is so slow. It could take hundreds of thousands of years to change the gradient of the temperature plot all the way from the core to the surface. So there is a huge stabilising effect brought about by very steady temperatures just 1 km underground, for example.
I can’t condense 14 pages or so here, but I’ll get back when it’s available.
Oh, I get it now! When our current scientific Climate Commissioner, Tim Flannery, said “Within this century the concept of the strong Gaia will actually become physically manifest. This planet, this Gaia, will have acquired a brain and a nervous system.” (in an ABC broadcast on the first day of 2011), he was completely misquoted. We all thought he was channelling Lovelock when he was actually referring to the supercomputer ‘Gaea’.
As you were, ladies and gentlemen, because all will be revealed in Gaea’s good time, although perhaps it just needs some offering of gingerbread or an ice cream sandwich or some such to help the Android manifest. What about an Apple for the great teacher?
The Pompous Git, Matt G, Tallbloke
For what it is worth, here are the earthshine measurements showing an increase in cloud cover and changes in albedo from 1998 to 2008: http://www.bbso.njit.edu/Research/EarthShine/
The albedo has increased over that decade.
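To see why albedo changes of even a percent matter, the globally averaged absorbed sunlight changes by S0/4 per unit of albedo. This back-of-envelope comparison (illustrative arithmetic, not a claim about the actual measured change) puts it beside the canonical CO2-doubling forcing:

```python
S0 = 1361.0                      # solar constant, W m^-2
for d_albedo in [0.005, 0.01, 0.02]:
    dF = S0 / 4.0 * d_albedo     # change in global-mean absorbed flux
    print(f"albedo change {d_albedo:5.3f} -> {dF:4.1f} W/m^2")
# Compare ~3.7 W/m^2 for a doubling of CO2.
```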
Earthshine variations by month: http://science.nasa.gov/science-news/science-at-nasa/2002/12apr_earthshine/
The Earthshine Project: Measuring the earth’s albedo. Latest results
Palle, E.; Montanes Rodriguez, P.; Goode, P. R.; Koonin, S. E.; Qiu, J.
EGS – AGU – EUG Joint Assembly, Abstracts from the meeting held in Nice, France, 6 – 11 April 2003, abstract #7730
ABSTRACT
“….. During the past 4 years, a significant increasing trend in the averaged Earth’s reflectance has been detected in the observational data. More scarce data from 1994 and 1995 allow us to take a longer-term look at the Earth’s albedo variability and the possibility of a response of this parameter to solar activity is discussed. Simultaneously, spectroscopic observations of the earthshine have been carried out at Palomar Observatory. First results and comparison between the spectral and photometric observations are also being presented….” http://adsabs.harvard.edu/abs/2003EAEJA.....7730P
Reblogged this on gottadobetterthanthis and commented:
Since Willis wrote it, it is obviously worth reading. Also, it goes with my recent comments on perspective. 33 years is a long time in our world, in our lives. It is amazing to think how far computers have advanced in my lifetime. It is discouraging that software stays ahead of the computing power. (My poor eight-year-old machine can hardly load typical web pages anymore.) So, I post a bit of perspective with regard to human understanding of the atmosphere of our planet. The bottom line is we may not even have started thinking about it properly yet. I do think we underestimate the effects associated with living organisms. Yet our pride makes us overestimate the effects we humans have, and we tend to grossly overestimate how much effect we can determine to have.