Under the radar – the NAS Report

Guest Post by Willis Eschenbach

Under the radar, and unnoticed by many climate scientists, there was a recent study by the National Academy of Sciences (NAS), commissioned by the US Government, regarding climate change. Here is the remit under which they were supposed to operate:

Specifically, our charge was

1. To identify the principal premises on which our current understanding of the question [of the climate effects of CO2] is based,

2. To assess quantitatively the adequacy and uncertainty of our knowledge of these factors and processes, and

3. To summarize in concise and objective terms our best present understanding of the carbon dioxide/climate issue for the benefit of policymakers.

Now, that all sounds quite reasonable. In fact, if we knew the answers to those questions, we’d be a long way ahead of where we are now.

Figure 1. The new Cray supercomputer called “Gaea”, which was recently installed at the National Oceanic and Atmospheric Administration. It will be used to run climate models.

But as it turned out, being AGW supporting climate scientists, the NAS study group decided that they knew better. They decided that to answer the actual question they had been asked would be too difficult, that it would take too long.

Now that’s OK. Sometimes scientists are asked for stuff that might take a decade to figure out. And that’s just what they should have told their political masters, can’t do it, takes too long. But noooo … they knew better, so they decided that instead, they should answer a different question entirely. After listing the reasons that it was too hard to answer the questions they were actually asked, they say (emphasis mine):

A complete assessment of all the issues will be a long and difficult task.

It seemed feasible, however, to start with a single basic question:  If we were indeed certain that atmospheric carbon dioxide would increase on a known schedule, how well could we project the climatic consequences?

Oooookaaaay … I guess that’s now the modern post-normal science method. First, you assume that there will be “climatic consequences” from increasing CO2. Then you see if you can “project the consequences”.

They are right that it is easier to do that than to actually establish IF there will be climatic consequences. It makes it so much simpler if you just assume that CO2 drives the climate. Once you have the answer, the questions get much easier …

However, they did at least try to answer their own question. And what are their findings? Well, they started out with this:

We estimate the most probable global warming for a doubling of CO2 to be near 3°C with a probable error of ± 1.5°C.

No surprise there. They point out that this estimate, of course, comes from climate models. Surprisingly, however, they have no question and are under no illusion about whether climate models are tuned or not. They say (emphasis mine):

Since individual clouds are below the grid scale of the general circulation models, ways must be found to relate the total cloud amount in a grid box to the grid-point variables. Existing parameterizations of cloud amounts in general circulation models are physically very crude. When empirical adjustments of parameters are made to achieve verisimilitude, the model may appear to be validated against the present climate. But such tuning by itself does not guarantee that the response of clouds to a change in the CO2 concentration is also tuned. It must thus be emphasized that the modeling of clouds is one of the weakest links in the general circulation modeling efforts.

Modeling of clouds is one of the weakest links … can’t disagree with that.
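
To see why that kind of tuning proves so little, here’s a minimal toy sketch (Python; every number is invented for illustration, nothing comes from any real GCM). Two crude “cloud parameterizations” are each tuned to match today’s temperature exactly, yet they give quite different warming for doubled CO2:

```python
# Toy illustration only: two crude "cloud parameterizations" are each tuned
# to reproduce the same present-day temperature, yet give different warming
# for doubled CO2. All numbers are invented; nothing comes from a real GCM.

def equilibrium_temp(forcing, restoring, cloud_feedback, tuned_offset):
    """Tiny linear 'climate': response scales as forcing / (restoring - feedback)."""
    return forcing / (restoring - cloud_feedback) + tuned_offset

F_present = 240.0              # W/m^2, schematic present-day absorbed forcing
F_doubled = F_present + 3.7    # add the canonical ~3.7 W/m^2 for doubled CO2
T_target  = 288.0              # K, the present-day mean both versions are tuned to

for cloud_feedback in (0.5, 1.5):       # two different cloud treatments, W/m^2/K
    restoring = 3.3                      # no-feedback restoring strength, W/m^2/K
    # "Tune" a free offset so this version matches the present climate exactly.
    offset = T_target - F_present / (restoring - cloud_feedback)
    T_now = equilibrium_temp(F_present, restoring, cloud_feedback, offset)
    T_2x  = equilibrium_temp(F_doubled, restoring, cloud_feedback, offset)
    print(f"cloud feedback {cloud_feedback} W/m2/K: present {T_now:.1f} K, "
          f"2xCO2 warming {T_2x - T_now:.2f} K")
```

Both versions “verify” perfectly against the present climate, because they were tuned to; they still disagree about the response to more CO2, which is exactly the panel’s point.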

So what is the current state of play regarding the climate feedback? The authors say that the positive water vapor feedback overrules any possible negative feedbacks:

We have examined with care all known negative feedback mechanisms, such as increases in low or middle cloud amount, and have concluded that the oversimplifications and inaccuracies in the models are not likely to have vitiated the principal conclusion that there will be appreciable warming. The known negative feedback mechanisms can reduce the warming, but they do not appear to be so strong as the positive moisture feedback.

However, as has been the case for years, when you get to the actual section of the report where they discuss the clouds (the main negative feedback), the report merely reiterates that the clouds are poorly understood and poorly represented … how does that work, that they are sure the net feedback is positive, but they don’t understand and can only poorly represent the negative feedbacks? They say, for example:

How important the overall cloud effects are is, however, an extremely difficult question to answer. The cloud distribution is a product of the entire climate system, in which many other feedbacks are involved. Trustworthy answers can be obtained only through comprehensive numerical modeling of the general circulations of the atmosphere and oceans together with validation by comparison of the observed with the model-produced cloud types and amounts.

In other words, they don’t know but they’re sure the net is positive.

Regarding whether the models are able to accurately replicate regional climates, the report says:

At present, we cannot simulate accurately the details of regional climate and thus cannot predict the locations and intensities of regional climate changes with confidence. This situation may be expected to improve gradually as greater scientific understanding is acquired and faster computers are built.

So there you have it, folks. The climate sensitivity is 3°C per doubling of CO2, with an error of about ± 1.5°C. Net feedback is positive, although we don’t understand the clouds. The models are not yet able to simulate regional climates. No surprises in any of that. It’s just what you’d expect a NAS panel to say.

Now, before going forwards, since the NAS report is based on computer models, let me take a slight diversion to list a few facts about computers, which are a long-time fascination of mine. As long as I can remember, I wanted a computer of my own. When I was a little kid I dreamed about having one. I speak a half dozen computer languages reasonably well, and there are more that I’ve forgotten. I wrote my first computer program in 1963.

Watching the changes in computer power has been astounding. In 1979, the fastest computer in the world was the Cray-1 supercomputer, a machine far beyond anything that most scientists might have dreamed of having. It had 8 MB of memory, 10 GB of hard disk space, and ran at 100 MFLOPS (million floating point operations per second). The computer I’m writing this on has a thousand times the memory, fifty times the disk space, and two hundred times the speed of the Cray-1.

And that’s just my desktop computer. The new NOAA climate supercomputer “Gaea” shown in Figure 1 runs two and a half million times as fast as a Cray-1. This means that a one-day run on “Gaea” would take a Cray-1 about seven thousand years to complete …
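
For anyone who wants to check that arithmetic, here’s the back-of-the-envelope version, using only the 2.5-million-times figure just quoted:

```python
# Quick arithmetic behind the "seven thousand years" remark above.
speedup = 2.5e6                                  # Gaea ~2.5 million times a Cray-1
seconds_per_year = 365.25 * 24 * 3600
cray1_years = (24 * 3600) * speedup / seconds_per_year   # one Gaea-day of work
print(f"A one-day run on Gaea = about {cray1_years:,.0f} Cray-1 years")
# -> roughly 6,800 years, i.e. "about seven thousand years"
```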

Now, why is the speed of a Cray-1 computer relevant to the NAS report I quoted from above?

It is relevant because, as some of you may have realized, the NAS report I quoted from above is called the “Charney Report”. As far as I know, it was the first official National Academy of Sciences statement on the CO2 question. And when I said it was a “recent report”, I was thinking about it in historical terms. It was published in 1979.

Here’s the bizarre part, the elephant in the climate science room. The Charney Report could have been written yesterday. AGW supporters are still making exactly the same claims, as if no time had passed at all. For example, AGW supporters are still saying the same thing about the clouds now as they were back in 1979—they admit they don’t understand them, that it’s the biggest problem in the models, but all the same they’re sure the net feedback is positive. I’m not clear how that works, but it’s been that way since 1979.

That’s the oddity to me—when you read the Charney Report, it is obvious that almost nothing of significance has changed in the field since 1979. There have been no scientific breakthroughs, no new deep understandings. People are still making the same claims about climate sensitivity, with almost no change in the huge error limits. The range still varies by a factor of three, from about 1.5 to about 4.5°C per doubling of CO2.

Meanwhile, the computer horsepower has increased beyond anyone’s wildest expectations. The size of the climate models has done the same. The climate models of 1979 were thousands of lines of code. The modern models are more like millions of lines of code. Back then it was atmosphere only models with a few layers and large gridcells. Now we have fully coupled ocean-atmosphere-cryosphere-biosphere-lithosphere models, with much smaller gridcells and dozens of both oceanic and atmospheric layers.

And since 1979, an entire climate industry has grown up that has spent millions of human-hours applying that constantly increasing computer horsepower to studying the climate.

And after the millions of hours of human effort, after the millions and millions of dollars that have gone into research, after all of those million-fold increases in computer speed and size, and after the phenomenal increase in model sophistication and detail … the guesstimated range of climate sensitivity hasn’t narrowed in any significant fashion. It’s still right around 3 ± 1.5°C per doubling of CO2, just like it was in 1979.

And the same thing is true on most fronts in climate science. We still don’t understand the things that were mysteries a third of a century ago.  After all of the gigantic advances in model speed, size, and detail, we still can say nothing definitive about the clouds. We still don’t have a handle on the net feedback. It’s like the whole realm of climate science got stuck in a 1979 time warp, and has basically gone nowhere since then. The models are thousands of times bigger, and thousands of times faster, and thousands of times more complex, but they are still useless for regional predictions.

How can we understand this stupendous lack of progress, a third of a century of intensive work with very little to show for it?

For me, there is only one answer. The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.

Now we can debate what that fundamental misunderstanding might be.

But I see no other explanation that makes sense. Every other field of science has seen huge advances since 1979. New fields have opened up, old fields have moved ahead. Genomics and nanotechnology and proteomics and optics and carbon chemistry and all the rest, everyone has ridden the computer revolution to heights undreamed of … except climate science.

That’s the elephant in the room—the incredible lack of progress in the field despite a third of a century of intense study.

Now me, I think the fundamental misunderstanding is the idea that the surface air temperature is a linear function of forcing. That’s why it was lethal for the Charney folks to answer the wrong question. They started with the assumption that a change in forcing would change the temperature, and wondered “how well could we project the climatic consequences?”

Once you’ve done that, once you’ve assumed that CO2 is the culprit, you’ve ruled out the understanding of the climate as a heat engine.

Once you’ve done that, you’ve ruled out the idea that like all flow systems, the climate has preferential states, and that it evolves to maximize entropy.

Once you’ve done that, you’ve ruled out all of the various thermostatic and homeostatic climate mechanisms that are operating at a host of spatial and temporal scales.

And as it turns out, once you’ve done that, once you make the assumption that surface temperature is a linear function of forcing, you’ve ruled out any progress in the field until that error is rectified.

But that’s just me. You may have some other explanation for the almost total lack of progress in climate science in the last third of a century, and if so, all cordial comments gladly accepted. Allow me to recommend that your comments be brief, clear and interesting.

w.

PS—Please do not compare this to the lack of progress in something like achieving nuclear fusion. Unlike climate science, that is a practical problem, and a devilishly complex one. The challenge there is to build something never seen in nature—a bottle that can contain the sun here on earth.

Climate, on the other hand, is a theoretical question, not a building challenge.

PPS—Please don’t come in and start off with version number 45,122,164 of the “Willis, you’re an ignorant jerk” meme. I know that. I was born yesterday, and my background music is Tom o’Bedlam’s song:

By a host of furious fancies

Whereof I am commander

With a sword of fire, and a steed of air

Through the universe I wander.

By a ghost of rags and shadows

I summoned am to tourney

Ten leagues beyond the wild world's end

Methinks it is no journey.

So let’s just take my ignorance and my non compos mentation and my general jerkitude as established facts, consider them read into the record, and stick to the science, OK?

March 7, 2012 10:51 pm

I find it fascinating that optics is still moving ahead by leaps and bounds even though the major breakthrough (clear glass) occurred over a thousand years ago and spectacles 800 years ago. Without them, my life would be hell. Maybe climatology is still stuck in the Dark Ages 😉

neo
March 7, 2012 11:02 pm

apparently you don’t understand that science is hypothesis driven nor do I guess you understand what a hypothesis actually is or is used for in these studies.
Otherwise you wouldn’t be taking the hypothesis as predetermined fact.

SimonJ
March 7, 2012 11:04 pm

“The known negative feedback mechanisms can reduce the warming, but they do not appear to be so strong as the positive moisture feedback.”
But later you say that their conclusion is that feedback is overall NEGATIVE.
As an engineer familiar with control systems, and thus feedback, I realise that climate science (TM) has redefined the terminology used in discussing feedback, but surely the whole AGW edifice relies on POSITIVE feedback. (Increased temperature produces more moisture, which produces more warming, which produces more moisture etc etc until we’re living in a 100% RH dripping jungle.)
Have I missed something? (it is only 7 in the morning, and it’s been a bloody long week)

[Thanks, fixed. You are right, they say it is overall positive. -w.]

Roy UK
March 7, 2012 11:06 pm

A very interesting article, thank you Willis. They came up with some numbers back in 1979 based on computer models.
“We estimate the most probable global warming for a doubling of CO2 to be near 3°C with a probable error of ± 1.5°C.”
What fraction of doubling of CO2 have we had during this time, and what has the temperature change been? ie what does the real world data say?

March 7, 2012 11:13 pm

Reblogged this on The GOLDEN RULE and commented:
What a wonderful post! The conclusions by Willis are simply, as I read them: the science of AGW, whatever name they use, is not only NOT SETTLED, it is practically NON-EXISTENT. I intend wafting through this article again to extract some relevant bits supporting my conclusion. In the meantime please read it yourself and tell me if you think I am wrong.

DBCooper
March 7, 2012 11:17 pm

“Net feedback is negative, although we don’t understand the clouds.” Didn’t you mean positive? [Thanks, fixed -w.]

Jon Alldritt
March 7, 2012 11:22 pm

You just found the answer to how to answer the question. Start with a new group of people that are not restricted trying to dry lab from the answer to the question. Now if only the people and funding could be put together. Never mind there is no way to create fear and get money.

March 7, 2012 11:26 pm

A complete assessment of all the issues will be a long and difficult task.
____________________________
No, a complete assessment of all the issues will be an impossible task.
My “complete assessment of all the issues” as to why it is impossible will be published in about 15 pages next week. There’s been a slight delay while I added more details of numerous errors in their physics.
It wouldn’t have taken them too long if they had engaged someone with an understanding of physics – and why any warming by radiation from the atmosphere would be a violation of the Second Law of Thermodynamics – and why we should be so glad the terrestrial heat flow is so slow that it keeps everything at nice stable temperatures (give or take a couple of degrees) for millions of years.
[Doug, please take your claims about the Second Law elsewhere. I suggest Tallbloke’s Talkshop. Here, you are off-topic, and pushing your usual SIF story. This discussion is NOT ABOUT THE SECOND LAW. Speculations about the Second Law are for another thread. Thanks. -w.]

eyesonu
March 7, 2012 11:26 pm

Good post Willis. A lot of good points.
Maybe they hear a little jingle that goes something like this with regards to CO2:
“And I just can’t get it out of my mind …” (apply musical notes).

PaddikJ
March 7, 2012 11:27 pm

If Svensmark is correct — and the data appear to be saying he is — then there has been significant progress in climate research (especially regarding clouds!). It’s just that the vast majority of CO2-obsessed climatologists don’t want to hear it (and so the Astrophysicists and Cosmologists and Solar Physicists, etc. are leaving them in the dust).
BTW, you write “In other words, they don’t know but they’re sure the net is negative.”, and even repeat it somewhere, I think. I believe you meant that net feedbacks are positive. [True, and fixed. Thanks. -w.]

March 7, 2012 11:28 pm

Well, thousands of years perhaps – until the next glacial period.

johnm
March 7, 2012 11:29 pm

The real world doesn’t matter.
It is obviously incorrect.
The models are the only option.

Richard deSousa
March 7, 2012 11:30 pm

The ghost writer for the NAS report has to be James Hansen.

Toto
March 7, 2012 11:32 pm

How much are we spending on fusion research? How many teams are working on it? Compared to climate research I’d guess the answers are not much and not many. Yet finding alternative energy sources is important, especially if you believe in CAGW.
The short-term weather models have made good progress. Longer term weather models are still not very useful. And they may never be, simply because of the nature of the problem. That raises the question of what can we reasonably expect from climate models, if anything. What poor assumptions are they based on? Hockey sticks get good auditing. Who is auditing the climate models?

John Peter
March 7, 2012 11:37 pm

“For example, AGW supporters are still saying the same thing about the clouds now as they were back in 1979—they admit they don’t understand them, that it’s the biggest problem in the models, but all the same but they’re sure the net feedback is negative. I’m not sure clear that works, but it’s been that way since 1979.”
I am like SimonJ above. I got totally confused when it was stated (several times) that the “net feedback is negative”. I thought the whole point was they continue to claim that the net feedback is positive although they don’t understand clouds. There is general agreement within the Team that the basic effect of a doubling of CO2 is 1C but the 3C is due to the secondary positive feedback. I for one need some clarification or I am ‘lost’ in this otherwise excellent article.

Doug Proctor
March 7, 2012 11:37 pm

Willis,
You display an astounding degree of common sense. It is imperative that I inform you that such sense holds neither candle nor book to mathematically derived scenarios. To note that more computer power appears not to solve what is said to be a computationally-determined problem is to fundamentally misunderstand that the question has never been about “x” amount of power. It is, like Al Gore’s wealth, all about “more”. More has no end point.
I once awoke in the dark north of Lake Superior to take over the driving duties as we headed east. At daylight it was realized that I had made a right-hand turn and was headed south, towards the United States. Climate science has made me understand that I was grievously abused when I was forced to turn around. Apparently to get to where you want, regardless of the direction in which you are headed, you only need to drive faster and more determinedly.

March 7, 2012 11:43 pm

lol Willis ! you know that the NAS Report isn’t about science, it’s about the public purse !
I would argue that there is climate science on this blog that policy makers, including the US government, refer to.
as the Stones sang, ‘you can’t always get what you want’ (but if you try sometimes, you just might find, you get what you need).
I think the capacity of that computer will be yours in a few short years, maybe sooner.
in the meantime, keep scoring those goals !

David Henderson
March 7, 2012 11:47 pm

To answer the question of RoyUK: the Keeling curve in Wikipedia shows the CO2 going from just below 340 ppm in 1980 to something like 385 circa 2010. That’s about a 13% change.
Do I dare make some kind of linearity assumption and say that over this variation in CO2, the temperature change will be 0.4 degrees with an uncertainty of +/- 0.2 degrees? It’s not clear from their model that this linearity assumption holds for smaller changes in CO2.
This “prediction” with such a small rise in CO2 cannot be tested or falsified, because the term “temperature change” was not particularly well specified. A weather station temperature reading is highly variable and subject to the phenomenon we call “weather”.
So, this prediction is based on a SWAG and does not lend itself to falsification. Does it deserve to be labeled “prediction”?
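
[A sketch of that calculation, using the commenter’s CO2 numbers and the Charney range. Note that the conventional relation is logarithmic in CO2 rather than linear, so both scalings are shown; nothing here is a fit to observations.]

```python
import math

# Back-of-envelope warming for a 340 -> 385 ppm rise in CO2, for the
# Charney range of 1.5-4.5 C per doubling. Linear and logarithmic scalings.
c0, c1 = 340.0, 385.0
for sensitivity in (1.5, 3.0, 4.5):                   # C per doubling of CO2
    linear = sensitivity * (c1 - c0) / c0             # crude linear scaling
    logarithmic = sensitivity * math.log2(c1 / c0)    # conventional log scaling
    print(f"S = {sensitivity} C: linear {linear:.2f} C, log {logarithmic:.2f} C")
# S = 3.0 gives ~0.40 C (linear) or ~0.54 C (logarithmic) for this CO2 rise.
```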

March 7, 2012 11:53 pm

A very interesting post, Willis. The ‘science’ seems to have been at a complete standstill in the field of climate because no one is trying to understand anything. They are just looking for fragments of ‘evidence’ to bolster up the rickety theory of Carbon Dioxide sponsored Global Warming. The lack of progress shows that there is fundamental misunderstanding at the very base of the modern climate edifice. In thirty years there has been no progress at all.

Lew 'Big Oil' Skannen
March 7, 2012 11:57 pm

I absolutely agree about the linear assumption, and one of the articles on WUWT lists a few hundred interacting parameters which destroy any hope of linearity.
I believe that we have made progress however, but since it is such a huge problem the progress does not show. I doubt it is ever likely to show because the orders of magnitude are too large. If we had a computer the size of Andromeda running on tachyons we might make a dent but until then we are just sucking up the Atlantic with an eye dropper.
If someone could put a line through the graph of where we are now compared to where we were in 1970 I think there would be a slight incline but nothing to get excited about and if it could be visually represented I think people (if they had the choice!) would pull funding and find better uses for the money.

Murgatroyd
March 7, 2012 11:59 pm

Now me, I think the fundamental misunderstanding is the idea that the surface air temperature is a linear function of forcing.
Heretic! You probably deny the existence of epicycles, too.

robert barclay
March 8, 2012 12:10 am

The real elephant in the room here is that you can’t heat water from above. Surface tension.

Jordan
March 8, 2012 12:10 am

SimonJ
Positive feedback (as the term is used in control theory) doesn’t unconditionally mean instability. (Therefore dripping jungles do not necessarily follow.)
Try it with an open loop gain < 1. It's not pretty and not much use practically. But it can be stable.
The Nyquist Stability Criterion tests the position of the frequency locus to see how many times it loops around -1 on the real axis.
Hope that helps – I got snagged on this point once.
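
[For anyone who wants to see Jordan’s point in numbers: a positive feedback with loop gain below 1 converges to a finite amplification of 1/(1 − f) rather than running away. The values below are purely illustrative.]

```python
# Iterate a positive feedback with loop gain f < 1: the response converges
# to an amplification of 1/(1 - f) instead of running away.
def amplified_response(direct_response, loop_gain, iterations=200):
    total = 0.0
    increment = direct_response
    for _ in range(iterations):
        total += increment
        increment *= loop_gain        # each pass feeds back a fraction of the last
    return total

direct = 1.2                          # illustrative no-feedback response
for f in (0.3, 0.5, 0.9):
    print(f"loop gain {f}: converges to {amplified_response(direct, f):.2f} "
          f"(analytic {direct / (1 - f):.2f})")
```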

Roger Knights
March 8, 2012 12:12 am

Willis says:
Now me, I think the fundamental misunderstanding is the idea that the surface air temperature is a linear function of forcing.

This linear thinking goes along with a reductionist mindset that sees the earth as a black box whose inner turmoil (clouds, etc.) may be neglected because the problem “reduces” to:
Temp. rise = (rate of incoming radiation) – (rate of outgoing radiation)
(The latter being slowed down by increased insulation in the form of CO2)
This is why reductionist personality types like the capital-S skeptics at Skeptical Inquirer, Skeptic, etc. are (on the whole) enamored with the consensus. (The “settled science” aspect of the consensus’s claim probably also attracts them, they being heresy-hunters.)
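
[For reference, here is the textbook zero-dimensional version of that black-box balance, with clouds and everything else lumped into a single albedo number; the inputs are standard reference values, not anything from the report.]

```python
# The textbook zero-dimensional "black box" balance the comment describes:
# absorbed sunlight = emitted longwave, with clouds etc. folded into albedo.
sigma  = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
S      = 1361.0         # solar constant, W m^-2
albedo = 0.30           # planetary albedo (clouds, ice, surface lumped together)

absorbed = S * (1 - albedo) / 4.0          # spread over the whole sphere
T_emission = (absorbed / sigma) ** 0.25
print(f"Absorbed {absorbed:.0f} W/m^2 -> emission temperature {T_emission:.0f} K")
# ~255 K; the ~33 K gap to the ~288 K surface is what the whole argument is about.
```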

Claude Harvey
March 8, 2012 12:24 am

Once again, Willis, I applaud your ability to distinguish the forest from the trees. The self-educated man tends to start with the forest and then learn about trees out of necessity in order to satisfy his curiosity about puzzling forest behavior he has observed. The formally educated man may know all there is to know about trees and then extrapolate that knowledge into an artificial construct of forest behavior he confuses with reality, without ever having viewed an actual forest.

March 8, 2012 12:24 am

Richard deSousa says @ March 7, 2012 at 11:30 pm

The ghost writer for NAS report has to be James Hansen.

Ah yes, the Ghost Who Talks 😉

Matt
March 8, 2012 12:26 am

Why so puzzled? It’s quite simple, we need bigger computers 🙂

Kozlowski
March 8, 2012 12:27 am

Great article Willis, thank you!
So.. we get to the wrong answers faster. Isn’t that progress?
And look what a pretty picture they have on the front of that massive server.

Ken
March 8, 2012 12:41 am

The answer is, of course, that the science was correct then as to the degree of warming, and is correct now. This just goes to show that it is indeed ‘settled’ within the knowledge available both then and currently. Sceptics are really just denying facts, like they always do …
No, I’m not a ‘warmist’, but just trying to anticipate the most likely response that community would be likely or very likely to make, if I can use IPCC language!

OYD
March 8, 2012 12:41 am

The NAS panelists are clearly stuck in an ancient groove, somebody please help get them out

John Peter
March 8, 2012 12:44 am

Now that the article has been corrected with regards to the feedback issue, I simply think that Willis Eschenbach should send the article with a covering note to the relevant US government department, as clearly so far they have not received a proper answer to their questions.
Perhaps the note should include a reference to some of the work being done on clouds and their effects by such eminent scientists as Dr Spencer and Professor Lindzen.

Jimbo
March 8, 2012 12:49 am

How important the overall cloud effects are is, however, an extremely difficult question to answer.

I may be wrong but I thought this was the essence of the debate (that “is over”)? Ah well, 95% of climate scientists agree on something or other.

tallbloke
March 8, 2012 12:58 am

In the headline post Willis says:
“the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error…Now we can debate what that fundamental misunderstanding might be.”

The co2 driven climate hypothesis insists that a slight change in air temperature can rapidly (over a few decades) change the bulk temperature of the ocean. The ocean is much more massive than the atmosphere and water has a far higher heat capacity than air. A brief visit to the tables engineers use to look up relative heat capacities would have saved us all a lot of time and money. Or they could have used the simple observation that near surface marine air temperatures lag sea surface temperatures by several months.
The bulk ocean temperature drives the air temperature, not the other way round. The tail does not wag the dog.
Furthermore, it is evident that changes in overall cloud albedo reflect changes in solar activity, and amplify the effect of those changes, as shown by Nir Shaviv’s JGR paper on using the oceans as a calorimeter. http://sciencebits.com/calorimeter
Whether or not the causal mechanism is along the lines proposed by Svensmark or Wilde doesn’t matter too much right now. What does matter is that the observed reality is attended to.
It seems more likely that the ocean can’t cool as quickly as the sun heats it unless its average bulk temperature rises to around 275K (2C), thus enabling its surface to evaporate and radiate at a rate which enables it to be in equilibrium with the surface insolation. Therefore any putative warming effect of additional greenhouse gases becomes moot, because their primary role must be to cool the planet, not warm it.
At this stage it becomes obvious that the C20th warming was a result of less cloud and a more active sun, as evidenced by the much closer correlation between sunshine hours and surface temperature, than that between co2 levels and temperature.
http://tallbloke.wordpress.com/2012/02/13/doug-proctor-climate-change-is-caused-by-clouds-and-sunshine/
No speculations about the second law are needed to follow the simple logic of this argument. The main reason it will be ignored is because humans can be taxed for emitting co2, whereas the sun can’t be taxed for shining a little more brightly.
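
[To put numbers on the relative heat capacities mentioned above, here is a rough sketch using standard reference figures for the masses and specific heats. It illustrates only the thermal-inertia point, not the rest of the comment’s argument.]

```python
# Rough heat-capacity comparison behind the comment's "tables engineers use".
# Masses and specific heats are standard reference figures (approximate).
ocean_mass      = 1.4e21   # kg
atmosphere_mass = 5.1e18   # kg
c_water = 3990.0           # J/(kg K), seawater
c_air   = 1004.0           # J/(kg K), dry air at constant pressure

ratio = (ocean_mass * c_water) / (atmosphere_mass * c_air)
print(f"Ocean heat capacity is roughly {ratio:,.0f} times the atmosphere's")
# ~1,000-fold, which is the thermal-inertia point the comment is making.
```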

March 8, 2012 12:59 am

Thank you Willis, it seems that the more things change the more they stay the same. If they keep up this progress and the political class keep listening, the PC crowd will have us back in the 1850s within ten years. It is hard to imagine that so many useful idiots could live on the planet all at the same time.

Steve Richards
March 8, 2012 1:01 am

Surely the OVERALL nature of feedback within the climate system must be negative?
If it were OVERALL positive, then we would have experienced a one way trip to a permanent hot climate or a permanent cold climate, the positive feedback reinforcing this effect.
It is only an OVERALL negative feedback effect that can cause ‘corrections’ to climate trends.
This obviously excludes external forces – solar etc.

Jimbo
March 8, 2012 1:02 am

It was published in 1979.

Note to self: should read entire article before commenting. 😉 But as you point out…

The Charney Report could have been written yesterday…..For example, AGW supporters are still saying the same thing about the clouds now as they were back in 1979—they admit they don’t understand them

😉

Somebody
March 8, 2012 1:08 am

No matter how powerful a computer they get, they cannot simulate the Earth in the foreseeable future. The complexity of the problem increases exponentially. There are non-linear components in the system, so the system behaves chaotically. Bad things: Lyapunov exponents, exponential increase of the error (and we know how big the errors are at the start – garbage input data indeed). Yes, they say that somehow the pseudo-scientific average can be predicted nevertheless, but it’s only religious preaching. Yes, in simple cases it happens – see the ergodic systems from statistical physics – but that’s not at all the case for the Earth. They have no proof for their religious claim. Actually, it’s easy to say that the claim is false, verifying experimentally. So, there is no proof that the pseudo-scientific value, ‘global temperature’, is actually predictable. It’s just a pseudo-scientific average (a scaled sum of intensive quantities, with varying scale and number of values over time, which makes things worse) of chaotically behaving values. There is no reason to be able to predict it any better than the quantities it’s calculated from.

oMan
March 8, 2012 1:08 am

Willis: that is excellent. I did not watch the other hand and so I was surprised by the NAS punchline. I like the computer statistics; if you were programming in 1963, you were there almost at the Creation and your comparo of Cray-1 power to that of desktop is a great set-up for the closer. We have been running toward the “answer” for over 30 years with Moore’s Law supposedly helping us; and yet like the horizon the “answer” recedes endlessly. Maybe we just don’t know how to ask the question. As Daniel Kahneman shows (in his new classic on cognitive self-trickery, “Thinking, Fast and Slow”) when the question we’re asked is hard to answer, we substitute another, easier one. Here it seems we just beg the question. Since that is a circular process, I guess the reason we have seen no improvement in our understanding is, we have been running in a circle. No wonder the horizon is as far away as ever.

Myrrh
March 8, 2012 1:12 am

“Modeling of clouds is one of the weakest links … can’t disagree with that.”
Because, they’ve excluded the Water Cycle from their AGWSF energy budget – think deserts, without the Water Cycle the Earth would be around 67°C – and – carbon dioxide is fully part of that cycle, all pure clean rain is carbonic acid.
They have to exclude the Water Cycle, because it shows, falsifies if you will, that they have no basis for any of their modelling behaviour and consequently, no argument available to back up their conjecture that such a thing as ‘greenhouse gas warming’ even exists in the form they have it. The main ‘greenhouse gas’, water vapour, cools the Earth.
Without the Water Cycle, they cannot possibly understand clouds.

March 8, 2012 1:15 am

Moderator(s):
If I am allowed to respond to the above note by the Moderators, I would like to assure them that my peer-reviewed paper is indeed based on physics (without any unfounded speculation) and such paper is the culmination of, not only 50 odd years of studying physics, but also at least a thousand or two hours of study of the climate issues, and whether or not the conjectures of the warmists could be substantiated by physics.
In about 15 pages I have debunked each and every claim with cogent arguments totally based on physics.
I would have been willing to respond to each and every post if appropriate, should you have taken up my suggestion to perhaps run an article when my paper is published, now delayed till about Tuesday. The offer is still open as I respect this site, let me assure you.
I just don’t like to see you sitting on the fence and half agreeing with the complete hoax of the AGW proponents.
I quite anticipate that, like the work of Prof Claes Johnson (who solved a problem that baffled Einstein), it will take some time for the world to realise that he and I are right on this. I certainly don’t want to be off topic, so maybe you might consider another “open” thread, or a basic thread on the greenhouse speculation.
After all, I initially thought this was a “sceptical” site, so I am a bit surprised that you appear to agree that radiation from the cooler atmosphere penetrating the warmer ocean a little, can have its energy being converted to more thermal energy below the surface of the water, even while the Sun is also warming everything. Yes I’m just a bit surprised that a reputable site like this would agree with the warmist speculation, or hoax I should say, that such a “transaction” was not a violation of the Second Law, purely on the absurd claim that the Second Law is not violated because more energy would flow out by evaporation or radiation later that day.
I leave you with a copy of an email just received from the principal reviewer of my paper being published next week …
__________________
Hi Doug,
Well done!
Will spend time on your paper today and aim to have it ready for publication soon.
Terrific to see so much effort having gone into this.
Glad to have been able to assist you.
.[name withheld]
_____________

DN
March 8, 2012 1:19 am

It seems to me that the basic, underlying flaw in climate science is the same one that Ed Lorenz identified in 1963, and that the IPCC itself copped to in AR3: that it is impossible to predict the long-term behaviour of a coupled, non-linear system, full stop. It doesn’t matter how much computing power you have – if the system you’re trying to predict is chaotic, every time you increase your accuracy by an order of magnitude, you get only a few more predictive iterations out of the equations. It simply can’t be done.
So here’s a prediction of my own: 30 years from now, our desktops will be 1000 times as powerful, and the Cray-Googleplex will be a gazillion times as powerful, and we’ll still be no closer to being able to predict the behaviour of terrestrial climate. If I’m right, we’ll all still be around to see it, although those of us living in the Great White North will be huddled around our tar-sands-fuelled fires to keep warm through the solar minimum. If I’m wrong and Hansen’s right, nobody will care because we’ll all be living in Waterworld, trying to outrun the Smokers and arm-wrestling Kevin Costner for the lovely Jeanne Tripplehorn and a jar full of dirt.
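
[A minimal sketch of the Lorenz (1963) system DN mentions, using crude Euler steps that are fine for illustration: two runs that start one part in a hundred million apart end up on completely different trajectories, which is the sensitivity-to-initial-conditions problem in a nutshell.]

```python
# Lorenz (1963) system: two runs starting a hair apart diverge completely.
def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

state_a = (1.0, 1.0, 1.0)
state_b = (1.0 + 1e-8, 1.0, 1.0)        # identical except for one part in 10^8
for step in range(3001):
    if step % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(state_a, state_b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}   separation = {gap:.2e}")
    state_a = lorenz_step(*state_a)
    state_b = lorenz_step(*state_b)
```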

Alan the Brit
March 8, 2012 1:20 am

Great post. Some excellent points raised & made.
They do like their big words, don’t they…………”When empirical adjustments of parameters are made to achieve verisimilitude, the model may appear to be validated against the present climate.” Pocket OED:1925, Verisimilitude: air of being true, semblance of actuality. So, no real evidence then, it just looks like it worked when they’ve tweaked it here & there to get the answer they wanted. Is that what they’re really saying? In an example in parentheses, it states “verisimilitude is not proof”! When their models can give me six numbers on the lottery for Saturday night that are right, then I might just maybe possibly potentially likely could, believe them! Well, as long as they don’t do something to vitiate it! 😉

TinyCO2
March 8, 2012 1:25 am

If you study enough people, in enough detail, for long enough, you can still only create probabilities of how a population will fare. Individually any predictions you make will mostly turn out to be wrong.
Climate science has studied one planet, for a mere heart beat, with crude and often inaccurate tools and hopes to exceed medical achievements in diagnosis.

March 8, 2012 1:39 am

One item I have never seen addressed is the grid system itself. What I have never heard talked about is an analysis of the grid blocks and what each contains. My assumption is that they treat the grids as a gray box that is keeping some of the effects of the model and reflecting some of the effects. Let’s consider some different grid boxes: Arctic, desert, plain, forest, ocean, mountain, urban, rural, jungle etc etc etc.
I would expect each of these grid boxes to have a different response to the climate model and return a different feedback. In fact I would think that each grid box needs to be split into much smaller grids and a composite grid generated. The only thing that I have seen like this is ocean/land masks which can take care of the majority of grids (i.e. ocean).
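
[A sketch of the kind of area-weighted sub-grid compositing the comment suggests; the surface types, fractions, and values are invented purely for illustration.]

```python
# Area-weighted compositing of sub-grid surface types into one grid-box value.
def composite(box):
    """box: list of (area_fraction, value). Fractions should sum to 1."""
    total_area = sum(frac for frac, _ in box)
    return sum(frac * value for frac, value in box) / total_area

grid_box_albedo = composite([
    (0.55, 0.06),   # ocean
    (0.25, 0.14),   # forest
    (0.15, 0.35),   # desert
    (0.05, 0.60),   # snow/ice
])
print(f"Composite grid-box albedo: {grid_box_albedo:.3f}")
```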

Rick Bradford
March 8, 2012 1:41 am

*A complete assessment of all the issues will be a long and difficult task.*
In English, we say: “Please keep my grant money coming, and add a bit more on top, please.”

March 8, 2012 1:58 am

Willis
The Stefan-Boltzmann formula tells us that temperature is NOT determined by radiation which is defined by frequency, but by mass which is defined by thermodynamics.
Radiative transfer in and of itself can only ever be a potential source of heat.
It is the thermodynamics of an environment containing mass which determines that radiative potential.
The first words I ever posted on this thread more than two and a half years ago were, “there is no such thing as the greenhouse effect”.
Which is essentially what you have alluded to above in your conclusion.
However Willis, you still appear to lack sufficient mass in the cowboy nugget department, to be so bold.
Snip away Mr scissor hands!

March 8, 2012 2:02 am

Willis that was a nice surprise. 1979! yes no progress since then.
But WHY???
Well actually there has been progress. But since 1979 it has all been outside climate orthodoxy’s blinkers. Out of field, out of mention, and if you do mention it you get passed-over or blasted to h***. Pretty soon, you learn to shut up, stay away, and work elsewhere, and simply not risk the firestorm. Soon and Baliunas. Svensmark. Miskolczi. Many astrophysicists. Geologists. Physicists.
Problem is, we now have the danger of creating climate skeptics orthodoxy blinkers, same in kind, different in subject, with similar exodus of good thinkers and crackpot thinkers (you always get both around the paradigm-shifting material) from – your own pages dear Willis.
Two redeeming features you have. One, I know you intend to stay open – and two, I know you see all too clearly the result of blinkers. And I like to believe that you already have a third redeeming feature, that scientists are going to need to cultivate in future – the ability to say sorry, I was mistaken – and to recognize how we all have human weaknesses and limitations that are often the flip side of the very talents and passions that keep us going and doing good work.
Dear Willis, I look forward to the day you can look again at certain recent stuff – because I still value your attitude. I’m working hard behind the scenes to make it more comprehensible, as well as to answer FAQ from leading climate skeptics. There were unfortunate problems initially that IMHO can all be answered and put aside now, but effectively hid important work at the time.
I’m saying this here Willis because I believe I speak not just for myself. And because my intention is to reconcile and heal, and stand up for good science, not stir up more fire.

DirkH
March 8, 2012 2:04 am

They found themselves in a very comfortable position after the 1979 report and had no intention to leave that position. For them, the scientific question was answered: What triggers generous funding. A trillion Dollar industry was born. And it still works. Understanding clouds or aerosols threatens that comfortable position. Yes, I do assume malfeasance. They know very well what kind of game they are playing; that’s why they get so mad when challenged. (Like Gleick-mad, Hansen-mad, or Mann-mad.)

Disko Troop
March 8, 2012 2:07 am

The NAS report is a climate version of the TV show “Jeopardy”. We know the answer, now ask us a question that fits it!

Markus Fitzhenry
March 8, 2012 2:09 am

The hypothesis of how radiation is treated by an atmosphere under the principle of greenhouse seems peculiar. There is a conflict in the observations of black bodies that feed into the Stefan-Boltzmann equations. The observations that lead to the accepted paradigm of black body theory maintained the laws of conservation of energy and recognized the exponential dependency upon energy and temperature.
The conflict manifests in the treatment of the energy balance (budget) of the atmosphere. The saturated adiabatic lapse rate has been reverse-engineered from observations of Earth’s LWR surface emissions. That is, the equation used for the mean atmospheric emission surface is inefficient as a measure of radiative forcing. It is necessary to wholly accept or reject ideal black bodies in natural physics. You can’t have a bob each way; it’s one or the other. The existence of the universe depends on it.
The Stefan-Boltzmann equation was theorized around a black body; it is an abuse of the equation to fit it to the natural laws of a gaseous atmosphere. I mean, who would have thought that atmospheric body would not mimic a uniform thermal distribution regardless of its S-B emission.
The equation ( Ts ~ Te + τH ) used in modelling is flawed compared to observations. The theory faces a double jeopardy; it commits the conclusion from both the mean radiative surface and the TSI.
The Earth grew its atmosphere into its composition with the enhancement of the Sun and cosmic rays. In effect it matured to have the physical nature of an astrophysical body. It returns to space the energy it receives. Who knows what the natural force that drives this phenomenon is. Without it the universe would not have our Earth amongst its jewels. Our atmosphere supports life because the atmosphere has evolved to maintain equilibrium. But for dynamics we would be a perfect black body.
We don’t even know the answers to gravity. It is presumptuous to think we have the mastery to devise new theory by a virtual model. The observations and experiments do not achieve, well enough, a reasonable certainty of its premises for a modelling of global circulation to be successful.
The lower temperatures of gravity split the strong and electromagnetic forces rendering the theory of ‘back-radiation’ in natural physics something of an anomaly.
Any wonder the models have got the climate upside down lately.

H.R.
March 8, 2012 2:11 am

And… 30 years control of the temperature record and the modelers can’t match unreality either.

thingadonta
March 8, 2012 2:19 am

“….It’s still right around 3 ± 1.5°C per double of CO2, just like it was in 1979.”
They were lucky the climate warmed a little between then and now. (In my opinion due to a positive PDO and a ~20 year+ lag effect from the increase in TSI between about 1750-1985). If it didn’t, nobody would have believed them, and there would be little research funding.

thingadonta
March 8, 2012 2:24 am

Regarding clouds: the Bureau of Meteorology in Australia are saying the last few years in Australia have been cooler partly due to an increase in cloud cover and the breaking of the drought. But during the 7 year drought from ~2002-2009 they didn’t say it was slightly warmer because of the coincident lack of cloud cover; they of course blamed CO2. So when it’s cooler it’s because of clouds, when it’s warmer it’s because of CO2….

Davy12
March 8, 2012 2:25 am

Brilliant article.
If you don’t understand the physics, then all you are doing is playing with numbers.

thingadonta
March 8, 2012 2:29 am

I agree with your reasoning regarding the lack of progress in climate science, partly or mostly coming down to basic assumptions of linearity.
But I would add one more thing: the Galileo phenomenon. They don’t want the sun to be involved because they can’t do anything about it.

Kev-in-Uk
March 8, 2012 2:34 am

Surely this just proves that governments haven’t spent/invested nearly enough money in the CAGW ‘research’ ?
We need more taxation, more green energy, more AGW based research grants, more consenting warmist-leaning scientists producing models, etc, etc……….
My goodness, perhaps I’m putting ideas in the warmista heads!
(I really don’t need to put /sarc, do I?)

March 8, 2012 2:39 am

There is much discussion of clouds herein, so I trust this is considered on topic. It is also an issue I have discussed in detail in my peer-reviewed paper which uses physics to show that there can be no warming effect from clouds, nor carbon dioxide. And to understand why, you have to think outside the square, but I shall not release the details prior to publication next week.
But, inside “the square” of the atmosphere the fact most often overlooked (or avoided perhaps?) is that water vapour absorbs a significant amount of incident IR radiation from the Sun and the energy returns to space by (upward) backradiation, thus having a cooling effect. Of course the whole atmosphere also has a cooling effect in daylight hours.

WillieB
March 8, 2012 2:39 am

Willis, as always, enjoyed your post. Without fail, I always learn something new and gain greater insight. You have a great knack for being able to wade through the chaff and get to the crux of an issue.
PS–Let me thank you by simply stating: “Willis, you’re an ignorant jerk”. Cha-ching. 45,122,165. Next stop 50 million! (LOL)
PPS–Can’t even imagine where Anthony must rank on the “ignorant jerk” comment list.
PPPS–Reminds me of the old SNL comedy skit “Point/Counterpoint” where Dan Aykroyd turns to Jane Curtin and retorts, “Jane, you ignorant slut”. 😉

Eric (skeptic)
March 8, 2012 2:40 am

Models cannot be validated simply because they cannot model anything of consequence. Want to know what the weather will be like in Peoria next week, next year, next decade, next century? Forget about it, no regional characterization of weather is possible (despite a few claims that have almost always been wrong). Instead the modelers say they do not need to reproduce any regional patterns, they can simply make multiple runs with various kinds of weather and average them to get the trend which they call climate.
The problem is that without properly modeling (NOT “predicting”) the weather, the modelers will never get an accurate trend. If rising CO2 changes the small scale day to day clouds and precipitation, the modelers will never know what that change will be.
The models will also never predict ENSO and again that’s not my concern. It’s that the models will never determine the changing statistics of ENSO in a changing climate. The modelers will thus never know cloud and albedo changes, water cycle changes or how much warmth will be permanently sequestered in the deep ocean.

Dave Wendt
March 8, 2012 2:44 am

Willis
I think you’re being much too generous in claiming that they have made no progress in a third of a century. In my view the current state of the art is barely distinguishable from what was laid down by old Svante Arrhenius back at the turn of the previous century, when quantum mechanics and photons were barely beyond speculations. When you find yourself in argument with one of the true believers, if you can push past the point where they’re calling you a knuckle dragging moron for having the temerity to disagree with your intellectual superiors to actually get them to offer evidence in support of their position, even now their favorite fallback is old Svante. BTW has anyone else noticed that in many of their written works when making this cite they use Arrhenius (1896)? Are they all that ignorant or are they just trying to conceal the fact that he revisited the topic a decade later, reducing his estimate of the GHE by 2/3rds to 3/4ths?

March 8, 2012 2:45 am

This just reconfirms that the cornerstones of current ‘climate science’ are dependent on four factors:
1. Total rejection of the influence and concept of natural climate cycles – something totally self-evident to anyone other than a CAGW fanatic.
2. The use of increasingly complicated computer models, but which are still programmed to produce the same pre-determined results. Any argument here from anyone?
3. Exaggeration of the positive feedback effect of clouds, when this effect is very little understood and increasingly looks like it is a mildly negative effect.
4. Repetition of the same tired old mantra that “carbon dioxide is an evil gas and any increase is to be avoided at all costs”. In reality, the impact of rising carbon dioxide levels seem to be largely beneficial – e.g. it is a natural fertiliser, so crops grow quicker etc.
‘Climate science’ is now a huge industry employing a vast army of bureaucrats and pseudo-scientists. It is hugely expensive, produces distorted, highly questionable and dubious research, and is solely interested in its own self-perpetuation, as witnessed by the myriad number of unfounded scare stories it generates.

March 8, 2012 2:47 am

First define, scientifically, Climate Consequences.
The alarmists, it seems to me, consider the ‘normal’ GST to be a constant. Error! It never has been. Climates change all the time which means surface temperature changes all the time. There is no ‘normal’ surface temperature. In fact if the last 800Ma are looked at the planet has been in ice house conditions more than hot house but to take the average of this cycle would be meaningless in itself and to the climate argument today.
Perhaps the real problem is the reliance on the theory of GHGs. Just because two talented scientists formulated it does not mean that it is correct. Fourier and Arrhenius were brilliant men, but men with man’s biases and presumptions. And Arrhenius got a bit funny in old age when he switched from chemistry to physics, and not all chemical reactions conform to his rules.

DEEBEE
March 8, 2012 2:51 am

However, as has been the case for years, when you get to the actual section of the report where they discuss the clouds (the main negative feedback), the report merely reiterates that the clouds are poorly understood and poorly represented … how does that work, that they are sure the net feedback is positive, but they don’t understand and can only poorly represent the negative feedbacks?
==============================================
That’s the money paragraph. It clearly shows you are not a co-religionist; otherwise it would be obvious to you that when you take an essentially auto-correlated random variable (like UM temperature) and globally average it, certainty increases.
Willis get with the program man(n).

March 8, 2012 2:57 am

Model vs reality: North Atlantic SST.
http://oi56.tinypic.com/wa6mia.jpg
The model is obviously driven by something unphysical. How does anyone dare to mention anything based on such “models”?


Mark Hladik
March 8, 2012 3:14 am

The current state of climate modeling:
“If the data do not fit the model, then OBVIOUSLY the data are wrong!”
“We do precision guesswork (using HIGH technology).”

Robert of Ottawa
March 8, 2012 3:16 am

robert barclay, any scuba diver will correct you. The incoming SW light heats the water; surface interaction is generally evaporation, i.e. cooling. Yes, LW outgoing radiation too … Not sure what the ratio of the two cooling effects is … Anyone got an idea?

Richard S Courtney
March 8, 2012 3:17 am

Willis:
Thanks for this article. It reminds us that ‘The Team’ have achieved nothing except the expenditure of billions of dollars per year.
You ask;
“For me, there is only one answer. The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.
Now we can debate what that fundamental misunderstanding might be.
But I see no other explanation that makes sense. Every other field of science has seen huge advances since 1979. New fields have opened up, old fields have moved ahead. Genomics and nanotechnology and proteomics and optics and carbon chemistry and all the rest, everyone has ridden the computer revolution to heights undreamed of … except climate science.”
With respect I suggest there is another “explanation that makes sense”: i.e.
There is no “far-reaching theoretical error” and no “fundamental misunderstanding” because there is no fundamental understanding of the climate system (instead the models use simplifying curve-fitting assumptions) and, therefore, there cannot be a theoretical error because the models do not apply a theory.
I remind that you demonstrated the models’ outputs can be emulated by a simple one-line equation.
And I remind that the models each assume a different climate sensitivity and the output of each model is adjusted to match past mean global temperatures by adopting an assumed value of aerosol cooling which is different for each model. This aerosol adjustment being used in the Hadley Centre model was first reported by me in a paper published in 1999, and Kiehl later reported that similar adjustment exists in all of the models, but each model uses different values of climate sensitivity and aerosol cooling to those used in every other model.
This is a direct proof that there is no fundamental climate theory being applied in any of the models. They would each use the same values of climate sensitivity if they were all using the same theory.
Furthermore, as you say, the lack of understanding of cloud behaviour is important but the lack of understanding of aerosol behaviour is much, much more important. The ignorance of real aerosol behaviour is being used to provide an appearance that the models emulate past mean global temperature. They don’t emulate past mean global temperature: they are adjusted to match past mean global temperature by use of the variety of assumed aerosol cooling values.
As you say, the 1979 NAS Report said the ability to emulate regional climates was poor. It still is. However, if a climate model were emulating the world’s climate system then it would emulate regional climates and the mean of all the emulated surface temperatures would match observed mean global temperature.
The fact that the models are adjusted to match mean global temperature but fail to match regional temperatures is a direct proof that none of them is emulating the climate system of the real Earth.
And the fact that each model uses a different value of climate sensitivity is a direct proof that they are not applying a unique theory of climate behaviour.
But model falsification seems to play no part in what is disingenuously called ‘climate science’.
Richard

anticlimactic
March 8, 2012 3:23 am

AGW adherents remind me of those who thought that the Earth was at the centre of the solar system. Without the right viewpoint no progress can be made, just ever more elaborate explanations of how the planets move which make no sense. It just shows that once you put the Sun at the centre you can progress to better science!
PS. It seems to me that clouds effectively act as insulators, reflecting sunlight during the day to make it cooler and keeping the heat in at night making it warmer.

Ken Hall
March 8, 2012 3:34 am

From the article above:

That’s the oddity to me—when you read the Charney Report, it is obvious that almost nothing of significance has changed in the field since 1979. There have been no scientific breakthroughs, no new deep understandings. People are still making the same claims about climate sensitivity, with almost no change in the huge error limits. The range still varies by a factor of three, from about 1.5 to about 4.5°C per doubling of CO2.
Meanwhile, the computer horsepower has increased beyond anyone’s wildest expectations. The size of the climate models has done the same. The climate models of 1979 were thousands of lines of code. The modern models are more like millions of lines of code. Back then it was atmosphere only models with a few layers and large gridcells. Now we have fully coupled ocean-atmosphere-cryosphere-biosphere-lithosphere models, with much smaller gridcells and dozens of both oceanic and atmospheric layers.
And since 1979, an entire climate industry has grown up that has spent millions of human-hours applying that constantly increasing computer horsepower to studying the climate.


And the other thing that has stayed exactly the same with regard to the climate models is the human assumptions that are coded into them. So regardless of the complexity and power of the computers, the same conclusion will always come out, because that is what they are programmed to produce.
I started computing in 1983, and now I hold in my hand a smartphone with far more power and capability than the supercomputers of 1983, which only PhD computer-science professors had access to… Sometimes I am stopped in my tracks with a WOW moment at how much technology has moved on.

Pete in Cumbria UK
March 8, 2012 3:40 am

Does anyone still work with ‘analogue computers’ any more?
Not technically digital, maybe, but they've got almost infinite resolution and bandwidth.
The reason I wonder is that many, many moons ago, I recall (I think) the UK Treasury wanted to model money flow around the economy (how interest rates affected things, for example), and even the likes of the Cray 1 were not up to the task.
I dimly recall someone putting together a system of pipes, pumps, (adjustable) valves and water tanks. To run it, the various tanks were primed with certain amounts of water, each representing something different (the amount of known money in circulation, personal savings/loans, Government gilts/loans, etc.), and pumps/valves were adjusted to represent inflation, interest rates, growth of GDP and other important monetary quantities.
The thing would be primed with water, switched on, left to run for an hour or so, and the levels of water in the various pipes/tanks would represent the predicted state of the economy. Changing the speed of the 'inflation' pump would cause one tank to empty and the levels in various others to change, and, by all accounts, the beast was spookily accurate in its 'predictions', much more so than its digital counterpart.
In its simplest sense, think about the maths of pouring some water from a jug into a glass tumbler. Each of those trillions of trillions of water molecules 'knows' where it's going, how fast it's going and where it will eventually finish up.
Could Gaea even say how many water molecules remain stuck to the inside of the jug, let alone work out where the rest of them went?
Would an analogue computer fare any better in the climate prediction projection game?

Joe
March 8, 2012 3:42 am

Ken says:
March 8, 2012 at 12:41 am
The answer is, of course, that the science was correct then as to the degree of warming, and is correct now. This just goes to show that it is indeed ‘settled’ within the knowledge available both then and currently. Sceptics are really just denying facts, like they always do …
No, I’m not a ‘warmist’, but just trying to anticipate the most likely response that community would be likely or very likely to make, if I can use IPCC language!

Fair one, in which case I’d like to make the following observation:
I will accept that the science is settled, to the point that a 2 million times increase in modeling power, and 30 years of intensive model development, can’t give any finer resolution than 3.0 +/- 1.5 degrees. So it’s reasonable to conclude that no finer resolution is possible.
Given the above, 3.0 +/- 1.5 deg is our final answer, and we no longer need to fund any further modeling, research, or analysis of the problem because it’s now as good as it gets. To all the climatologists out there, in recognition of your great service, we’ll be happy to offer you free re-training in a new (climate UNrelated) field of your choice.

cui bono
March 8, 2012 3:47 am

w –
As you demonstrate, the core issue for computer models from the beginning was CO2 sensitivity. This was, after all, the key question the models were designed to answer – not how climate actually works. So I always imagine the computer models started with 2*CO2 ~ 3C. This little equation was then 'cut-n-pasted' into any subsequent models, and surrounded by millions of lines of code on other peripheral matters. The code gets longer and more complex each year. Meanwhile the computers get faster, which makes absolutely no difference except that the (wrong) answers are generated in hours, not millennia.
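For what it's worth, the back-of-envelope version of that "2*CO2 ~ 3C" relation really is about one line (this uses the standard simplified CO2 forcing expression; it is not a claim about what any actual GCM's source code contains):

import math

def warming(c_ppm, c0_ppm=280.0, sens_per_doubling=3.0):
    forcing = 5.35 * math.log(c_ppm / c0_ppm)   # W/m^2, simplified CO2 forcing expression
    return sens_per_doubling * forcing / 3.7    # degC, scaled so one doubling gives ~3 degC

print(round(warming(560.0), 2))  # doubling from 280 ppm -> about 3.0 degC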
It seems there is no ‘Darwinian evolution’ in the models as none are ever falsified against real-world data, presumably because the AGW-types are not concerned about real-world data. They’re too busy playing with their expensive toy models.

Bill Hunter
March 8, 2012 3:56 am

verisimilitude = curve fitting = astrology
When the program is to blackball or fire anybody who deviates from the curve fit, progress is virtually eliminated. The best example of that is probably the house arrest of Galileo.
The lack of scientific progress on this is the greatest scandal in history. The science is settled and anybody who fails to pay homage is labeled a denier.
The State of Virginia finds itself in the unique position of the Attorney General not being able to audit branches of his own state government, while at the same time not being able to take advantage of laws written to allow investigation of non-government entities (any person, corporation, organization, etc.).
It seems we have created a huge secret society in America that can operate above the law.
http://climateaudit.org/2012/03/05/above-the-law/
The really sad truth here is that if the planet really were at risk from the emissions of CO2, the system of non-accountability for academia that we have created has set the stage for us, as a democratic nation, to fail completely to realize it.
Nothing unusual or unexpected going on here. Any person, corporation, affiliation, organization, etc. given protection from accountability will always gravitate towards rent seeking and self interest.
Those on the research dollar bandwagon will energetically resist any diversion of funds towards any research not theirs and especially if it is viewed as threatening to their personal beliefs and efforts.
The National Academy of Sciences is a private not-for-profit that solicits work for its members. It's an exclusive trade organization, sort of a science aristocracy. Members are elected by current members. In any such exclusive organization the most highly valued qualification for entry is the ability to bring in funding and political connections. Just the facts.

corporate message
March 8, 2012 3:56 am

Cracked me up. Loved the surprise twist.

thingadonta
March 8, 2012 3:56 am

A couple more points:
1. Regarding linearity. I have come across this problem at a senior level within certain fields of scientific research, modelling, and policy within the earth sciences (not in fields to do with climate science). The basic issue was the assumption of linearity in certain processes, an assumption driven by a social agenda: assuming linearity effectively upgraded the importance of their particular field of research over other fields, specialisations, and perspectives. It also appealed to those who pulled the purse strings and handed out appointments, since it provided an apparently strong scientific argument for the increased importance of their particular field.
Linearity was most strongly assumed not only by those with a certain agenda, but also by those who had basic difficulty recognising and dealing with 'valid variation'. The same people who assumed linearity had an inborn difficulty negotiating with different people and perspectives, and a lot of difficulty understanding the nature of uncertainty; but they were good organisers, and they were good at thinking and working in a very structured environment. I got the impression they would make good engineers, but not good scientists.
2. Regarding the sun and the 'Galileo phenomenon': I would also add that in the last 30+ years of climate science research, the sun has been downgraded and generally ignored for the same reason its influence, position, and importance were downgraded and misplaced when Galileo was around. It is not only because climate scientists and the AGW movement can't do anything about the sun's influence that its importance has been ignored and downgraded, but also, of course, because if the sun has more influence and importance then it makes their entire position and purpose much less relevant and self-important. Climate science and CO2 would not be the centre of the universe.
If they continue to misplace the position and influence of the sun in changing the earth's climate because of a deep-seated, all-too-human level of insecurity and self-importance (amongst other reasons), climate science will continue to stagnate and go nowhere.

Markus Fitzhenry
March 8, 2012 4:03 am

“”thingadonta says: March 8, 2012 at 2:24 am
But during the 7 year drought from ~2002-2009 they didnt (sic) say it was slightly warmer because of the co- incident lack of cloud cover, “”
They are numbnuts.
A more active Sun means more solar wind, fewer cloud-seeding cosmic ray particles and a faster thermal lapse rate; the opposite brings more clouds and a greater greenhouse effect from a slower thermal lapse rate. The absorbed heat distribution of the oceans drives a stable climate, which goes some way to explaining the weak-sun phenomenon.
The water vapor content of the southern atmospheric circulation is not surprising given the decade time frames to uniformly distribute the higher level of heat in the Earth that has been stored from the spike in solar activity of the late twentieth century. Although it is at the end of the monsoon season, the recent excessive moisture over south east Australia has blown in from the Tasman Sea. We are seeing a similar effect with snow across a large part of the NH.
Now, with solar activity on the other side of a 'maximum', we will see the opposite in 6 to 8 years across Australia, with cold blue skies and droughts, after a return to the more dominant El Nino oscillation. In the meantime there will be increasingly mild weather with less water vapor. Darwin will remain 32C in winter and 33C in summer, and rain every afternoon.
The NH will experience more water vapor in the lower latitudes for the next 6 – 8 years.
Both poles will continue increasing their ice cover. And there will be fewer polar bears, because it will be too cold for them to get food.

richard verney
March 8, 2012 4:09 am

There has been no significant advancement in climate science because they are too wedded to models.
Until such time that they ditch the models and start going out in the real world and collecting empirical data and start conducting experiments that would shed light on the response to DWLWIR, there will be no significant advancement.
They are stuck in a GIGO syndrome. They need better raw material, and they need to start understanding the real world in greater detail. As you (and many others) have repeatedly pointed out, they need to understand clouds, and if they are to use these ghastly models (which I consider should be ditched for at least a generation) they need to be able to model clouds with accuracy.

Dave
March 8, 2012 4:11 am

Willis>
“If we were indeed certain that atmospheric carbon dioxide would increase on a known schedule, how well could we project the climatic consequences?”
Could you clarify your objection to that? It seems eminently reasonable to me. 'The climatic consequences' would include 'nothing will happen', but the question is whether we can predict anything at all. I don't see any assumption there that climate will necessarily change.
When it comes to the computer issue, I agree with you that there may well be fundamental problems in climate science, but that computers haven’t delivered on your expectations in this area is probably unrelated; there are many other problems which have seen little significant progress over the years, like speech recognition, or travelling-salesman type problems.

Robin Hewitt
March 8, 2012 4:11 am

Another classic Willis finding. I am sitting here bamboozled by the scientific newspeak and wishing they'd answered the original questions; that would have been good for a laugh. But Willis isn't dragged into that particular morass. No, he simply comes back, states the blooming obvious and hammers yet another nail into their coffin. So what did they spend all that money on, then? If it was satellites and Argo buoys, you might think they'd be more interested in the data coming back.

Mike Bromley the Canucklehead
March 8, 2012 4:26 am

neo says:
March 7, 2012 at 11:02 pm
apparently you don’t understand that science is hypothesis driven nor do I guess you understand what a hypothesis actually is or is used for in these studies.
Otherwise you wouldn’t taking the hypothesis as predetermined fact

My hypothesis is that the above was supposed to make sense somehow but got lost when empirical adjustments of parameters were made to achieve verisimilitude.

Cadae
March 8, 2012 4:27 am

My reading of the question is: why have other science-based disciplines progressed so much, but Climate Science hasn't? You may be right about fundamental flaws in the basic models, assumptions and approaches to the science. But I think there is a deeper reason why Climate Science hasn't progressed as much, and why Climate Science's flawed approach isn't being replaced as rapidly as faulty models are replaced in other science disciplines.
Many other sciences have the advantage of being able to test their models via direct experimentation against objective reality, but Climate Science does not have clones of the Earth to experiment with nor the tools to experiment with the earth itself. The closest Climate Science can come to an experimental Earth are computer simulations- but because of the complexity of the Earth, these simulations are entirely inadequate for properly and objectively testing climate hypotheses.
The only real test and proof of Climate Science hypotheses are the on-going changes in the Earth’s climate itself – but this happens slowly in real-time and accurate data measurements have not been long established.
Worse still is that we cannot experimentally set the various parameters of the climate to check boundary assumptions in hypotheses – the earth’s climate hardly ever enters boundary conditions and computer simulations are useless for testing boundary conditions as they simply respond with the boundary assumptions designed in their programs. All we can do is wait for the earth itself to generate conditions that roughly fit various parameters of hypotheses and observe the climate responses. Only rarely does the earth or its climate enter a mode that can conclusively prove or disprove even a tiny part of any climate hypothesis.

Cadae
March 8, 2012 4:29 am

Oops – typo in the first word – me should be ‘my’.
[Fixed. -w.]

Jim Cripwell
March 8, 2012 4:32 am

I have said it before, and I will say it again. The only thing that we can trust in physics is the hard, measured, preferably independently replicated, observed data. The output of non-validated models tells us absolutely nothing.
The observed data show conclusively that there is no CO2 "signal" which can be distinguished from the "noise" of natural variations. It is not there; no one can find it. The question no one will answer is: how long will we have to wait for a CO2 "signal" to appear before we conclude that no CO2 "signal" exists? In the end, proper scientists will realise that CAGW always was, still is, and always will be just a hoax.

curryja
March 8, 2012 4:37 am

Willis, can you post a link to the NAS report you are discussing?

handjive
March 8, 2012 4:40 am

This is o/t, but possibly relevant. It involves PNAS.
Prehistoric Humans and Changing Climate Worked Together to Kill Off Great Ice Age Mammals.
http://www.archaeologydaily.com/news/201203078028/Prehistoric-Humans-and-Changing-Climate-Worked-Together-to-Kill-Off-Great-Ice-Age-Mammals.html
“It turns out that prehistoric human hunters of the Ice Age had significant help from the weather when it came to driving the big mammals, like mammoths and mastodons, to their extinction.
Their analysis suggested that it was actually the combination of hunting and/or habitat destruction caused by modern humans, and climate change, that caused the extinctions.
So reports the authors of a new study published in the Proceedings of the National Academy of Sciences (PNAS).”
How does PNAS apply this as a lesson for today?
“The key difference this time is that the climate change is not caused by fluctuations in the earth’s rotation axis but to warming caused by fossil fuel burning and deforestation by humans – a double whammy of our own making. We should learn the lesson and act urgently to moderate both types of impact.”
That seems like a big leap to an illogical conclusion.
A quick check of CO2 levels in the Quaternary Period:
“In the last 600 million years of Earth’s history only the Carboniferous Period and our present age, the Quaternary Period, have witnessed CO2 levels less than 400 ppm.”
http://geocraft.com/WVFossils/Carboniferous_climate.html
The paper, entitled ‘Quantitative global analysis of the role of climate and people in explaining late Quaternary megafaunal extinctions’, is published in the March 5, 2012 edition of the PNAS.

John West
March 8, 2012 4:52 am

“How can we understand this stupendous lack of progress, a third of a century of intensive work with very little to show for it?”
Well, I think you’re right; they are stuck in a paradigm, but why did they get stuck? Because, they stopped doing science and concentrated on advocacy. Every study, every dime, and every research moment had to go into the advancement of policy. Politics will never answer science questions.

Theo Goodwin
March 8, 2012 5:01 am

Excellent article, Willis. You totally nailed them.
My take on the problem, and please pardon me for repeating myself for the gazillionth time, is that climate science as practiced today is “a priori” rather than empirical. Once climate science becomes a serious empirical science, climate scientists will undertake empirical study of clouds. Something like a highly advanced ARGO system is needed for study of clouds, and maybe much more is needed. Until such empirical studies have demonstrated substantial progress, climate scientists should say nothing about future climate. The NAS should say nothing about future climate at this time.
Is there no one at the NAS who can distinguish between “a priori” studies and empirical studies? Is there no one at NAS to offer criticism of the sorry statement that Willis has quoted in this post?

Chuck Nolan
March 8, 2012 5:10 am

As a young man working on electronic equipment 'back in the day', I had a simple philosophy. If you've been troubleshooting a problem for too long, then either a) you are missing an indication or symptom, or b) you don't know how the equipment is supposed to work.

Jason Calley
March 8, 2012 5:17 am

You asked that we not draw analogies with fusion research, but please forgive me if I point out what may be the single most pertinent similarity. There are HUGE sums of money and power involved in NOT solving both questions.
I wish I could attribute the quote, but someone said, “Science + politics = politics.”

Chris B
March 8, 2012 5:19 am

It seems to me that 90% of climate/weather research money is spent on theory/prediction and 10% on measurements. Isn’t the cart before the horse?

Theo Goodwin
March 8, 2012 5:22 am

Treating the NAS document as a student essay that I am grading, I would be compelled to comment that one cannot write that the negative feedbacks, especially cloud behavior, are unknown at this time but that the net feedback will be positive. It is like a young man saying that he has no idea what the lady thinks of a future with him but that he is sure she will accept his proposal of marriage. The grade is F.
Why would NAS, supposedly serious scientists, write garbage like this? Does NAS not read what it writes for public consumption?

wws
March 8, 2012 5:22 am

Your thoughts about the science never getting any better remind me of something that I believe Watson (of Watson and Crick fame) said about UFO’s years ago. He noted that in any ordinary experiment, the observations add up, a thousand small observations add together to give one a truer picture of reality. But, he noted, with UFO observations, that the “big picture” never changes no matter how many reports are made. The model never advances, the science never adds up, comparing it to a chemistry experiment, he said “it never reduces”. The story of UFO’s is the same today as it was 60 years ago.
He then made the scientific observation that if nothing ever adds up no matter how many thousands of observations you make, then by far the most likely conclusion is that whatever you are looking for just isn’t there. I would add that this is the natural result of thousands of people willfully deluding themselves in thousands of ways. Why? Because a lot of people like to delude themselves, and others, for all the same old reasons that people do anything that they do.
UFO’s – climate “science” – I think the two have quite a bit in common.

March 8, 2012 5:45 am

DirkH says:
March 8, 2012 at 2:04 am
They found themselves in a very comfortable position after the 1979 report and had no intention to leave that position. For them, the scientific question was answered: What triggers generous funding. A trillion Dollar industry was born. And it still works. Understanding clouds or aerosols threatens that comfortable position. Yes, I do assume malfeasance. They know very well what kind of game they are playing; that’s why they get so mad when challenged. (Like Gleick-mad, Hansen-mad, or Mann-mad.)
————————————————
Bingo, bango, bongo. Right in the bull’s eye, DirkH.
One day their madness too shall pass, and we'll look back and shake our heads. Our kids and grandkids will laugh at what morons we were as we scramble to rebuild our exhausted and leeched economies, repair the horrendous damage and look for ways to reclaim some of the loot, to punish a few offenders and find better ways to guard against such massive hijackings. And on the morning after, we'll have to build a memorial to these charlatans so that we may remember how we and our children had been had, bamboozled, owned, pwned, bilked, robbed, mooched, bled, used and abused. We'll inscribe an epitaph inspired by your post:
Science, for them, was not a search for mysteries and truth,
A reach for the distant stars, or the Fountain of Youth.
T’was but a frenzied grubbing,
And a drooling grunting
For a brief spot on the news,
And generous funding.

tim in vermont
March 8, 2012 5:46 am

Willis,
Your style is improving. I didn't even realize it was you who wrote it until the end, when you had sort of a relapse. Other than that comment, the whole thing makes a whole lot of sense.

MarkW
March 8, 2012 5:48 am

Actually, there has been a sizeable increase in our understanding of clouds. The problem is that the answer that has been found is not the one the warmists are looking for, so they have rejected it.

Steve Keohane
March 8, 2012 5:49 am

The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.
Exactly, Willis! I have found it impossible to fix a problem whose properties I have misidentified.

babaji333
March 8, 2012 5:49 am

Hmm? What is a non-scientist to think concerning how to determine who is correct in the climate sensitivity wars?
I notice that the projected positive feedbacks, which are completely theoretical, depend on the least understood aspects of the effect of water vapor and cloud formation, so the strong feedbacks PROJECTED are the least dependable, while the "OBSERVATIONS" used by Lindzen, Spencer, and others support the lower estimates of climate sensitivity. Additional peer-reviewed studies support stronger solar influences on albedo and cloud formation than previously projected, further supporting lower sensitivity. These studies are reinforced by OBSERVATIONS.
McKitrick's paper on the PROJECTED "hot spot", using IPCC PROJECTIONS, demonstrated that OBSERVED warming was 1/4 to 1/2 of the projected, and that was before the recent cooling.
I must go with the scientists who have observations to match their assertions.

Paul Linsay
March 8, 2012 5:50 am

“When empirical adjustments of parameters are made to achieve verisimilitude, the model may appear to be validated against the present climate.
Merely corroborative detail, intended to give artistic verisimilitude to an otherwise bald and unconvincing narrative.
W.S. Gilbert
How are the recent estimates of CO2-caused warming significantly different from the original estimates by Arrhenius 100 years ago? Not much progress over pencil and paper.
If CO2 causes positive feedback, why doesn’t water vapor, which is about four times more powerful as a GHG, cause the same? Plus there’s an infinite supply of it.

Theo Goodwin
March 8, 2012 6:06 am

Cadae says:
March 8, 2012 at 4:27 am
“Many other sciences have the advantage of being able to test their models via direct experimentation against objective reality, but Climate Science does not have clones of the Earth to experiment with nor the tools to experiment with the earth itself.”
This is a fallacy that should have disappeared long ago. If clones of Earth were necessary for experimentation in climate science, then clones of the universe would be necessary for the search for the Higgs boson by CERN scientists.
Experimentation can be passive or active. Passive experimentation requires only that one observe some phenomenon, identify its salient features, and create a record of measurements. All of that has been easily available to climate scientists, but they have done nothing except create the ARGO system. They need to get to work on data collection and other empirical matters. They seem to have a deep-seated aversion to empirical work.

Owen in GA
March 8, 2012 6:08 am

It has been mentioned here before, but no one has been able to tell me how much of the CO2 increase we are measuring is due to the rise in temperature from the rebound from that little cold spell that ended sometime between 1780 and 1850 (I've seen end dates all over the spectrum there). We may be adding some little bit, but I think our signal is probably being swamped by oceanic out-gassing of CO2 from that whole solubility/temperature curve.
If as I suspect, CO2 is a symptom of a warming climate rather than the cause, the whole movement is looking at the moon through the wrong end of the telescope, but such is the power of a meme.

Veritas
March 8, 2012 6:12 am

You sure the computer is named Gaea and not DeepThought? Regardless of everything else, the models are always going to give the same answer – a 3 degree warming per doubling of CO2. It’s all clear now. It’s the new “42”.

1DandyTroll
March 8, 2012 6:24 am

In the darkest recesses of the future hall of supercomputers of the Met offices of Sweden and Norway (apparently Denmark has been left out), where the echo of the abysmal emptiness that is its space, sits, inside a sticky-finger-resistant glass cube, an apparatus of wondrous applications: The A B A C U S!
Constructed by a genius, but apparently only for geniuses, such tantalizing, mind-blowing power might it have, if only they could get it to operate.
They continuously need new supercomputing power, but to do what? I'm glad you asked: to continuously make the same A2 and B2 projections, on the high end, in all regions in both countries, that the IPCC AR* reports state will be the regional temperatures in the future.
Apparently all their old supercomputers were flawed, since they all seem to have failed, in the past, to predict present-day climate; and for safety we're talking way into the future these days, no pesky 40 years, no, instead it's the period 2071-2100. And of course they blame greenhouse gases for the temperature rise from 1860 to the present day, and on into future days.
What's interesting to note is that the rest of the EU countries will, according to every climate-supercomputer-owning country, follow the same, on the high end, trends of the IPCC AR* A2 and B2 scenarios.
What's then highly ironic is the obvious fact: they only need one abacus to calculate the predicted projections for the foreseeable future.
A crash course in the inner workings of the secret manual of an (IPCC-stamped) abacus would save us millions in taxpayers' money.
Since they just seem to manhandle their computers and software to project IPCC "consensus" numbers, I truly wonder if the Scandinavian met offices would notice any difference if one replaced their "cray(on)" computers with an abacus bearing the Cray logo.
:p

Kev-in-Uk
March 8, 2012 6:24 am

curryja says:
March 8, 2012 at 4:37 am
Judith – I think you will find it by clicking on ‘Charney Report’ in the text?
regards

DR
March 8, 2012 6:26 am

Nir Shaviv wrote on this very topic recently.
http://sciencebits.com/IPCC_nowarming

March 8, 2012 6:27 am

More cerebral bell-ringing moments for me, this time brought to us by Willis. Thanks, Willis.
Two questions pop up. One, if the science was settled in 1979 and a consensus was reached, why are we still paying scientists through the nose for the ongoing "settling"? And two, given how chaotic and complex climate appears to be, and how it is impossible to predict or "project" future effects of a presumed scenario on an ever-changing world, does it make the slightest difference if we try to calculate things with a child's abacus versus a super-duper computer? Seems to me that on this scale it would be kind of like deciding whether to treat an advanced case of Ebola with aspirin, Tylenol, homeopathic tinctures of echinacea or camomile tea. (Might as well go for the tea; tastes better than the other stuff.)

Jim Turner
March 8, 2012 6:27 am

The point about the rate of scientific progress is an interesting one; it is certainly not 'even', as some areas have advanced enormously and others not. I think that this can only partly be explained by the effort expended. In my own area (pharmaceutical research) there is much effort, and continual impressive progress is being made in the understanding of underlying processes. In, say, the last fifty years we have gained a huge amount of understanding of genetics, biochemical processes and cell signalling; however, our actual ability to treat diseases like cancer has improved rather modestly by comparison.
I think part of the explanation may be that how much more we know than we did before is less important than how much less ignorant we are. For example, our knowledge of something may double – but it may be that our knowledge of all that it is possible to know has actually only increased from 0.1% to 0.2%, so we are still largely ignorant.
By the way, “millions of human-hours” is a bit too PC for me (though less-so than person-hours), it sounds like something a Klingon might say. I am sticking with ‘man-hours’ on the basis of the broader Anglo-Saxon meaning of ‘man’, as in: ‘a man with a womb is a womb-man (often abbreviated to woman)’.

Gail Combs
March 8, 2012 6:41 am

William Martin says:
March 7, 2012 at 11:43 pm
lol Willis ! you know that the NAS Report isn’t about science, it’s about the public purse !
William M. has got it in one.
Remember, this started out with "scientists" like Professor Stephen Schneider warning about a coming ice age in the 1970s, then converting to the promotion of man-made global warming fears, and most recently to the newest modification, "Climate Change".
In all cases it is about scaring the public into
1. Giving the politicians MORE power and tax money,
2. Giving the scientists and universities MORE tax payer funded grant money
3. Giving corporations MORE (as in incredible amounts) of government “incentives” Grants and tax deductions.
The only one on the short end of the stick is the poor deluded taxpayer who foots the bill. Is it any wonder that the news media have done their best to brainwash the public to the point where all this "environmentalism" crud is now a religion to many?

Mark Bofill
March 8, 2012 6:41 am

Specifically, the charge given to us by Willis was
1. To identify the root causes of the stupendous lack of progress in cloud modeling and climate modeling in general despite vast increases in computational speed and power and thirty years of well funded effort.
2. To assess whether or not a fundamental flaw in our understanding lies at the root of the problem.
3. To summarize this in concise and objective terms in a blog response.
A complete assessment of all the issues will be a long and difficult task.
It seemed feasible, however, to start with a single basic question: If our grants were doubled now, how well could we divert attention from these issues and generate public support for dubious environmental and economic policies?

beng
March 8, 2012 6:42 am

Observe how astrophysics (another mostly theoretical science) has progressed since 1979 for comparison. And without the rent-seekers (universities) & resultant massive money-laundering like “climatology”. Astrophysics is one of the few sciences remaining that seems free of being stifled by political-correctness. And look at the amazing results.

March 8, 2012 6:43 am

This is very interesting indeed. I have been daydreaming a little about what it would have been like if GCMs had never been invented. They have not, it would seem, contributed much progress. Apart from a couple of typos now fixed, the following was also posted by me on Bishop Hill a few days ago:
It is an idle thought, I know, but I occasionally wonder how it would be now if the GCMs had never been invented. I think we'd perhaps need a few more surface weather stations to supplement the satellites and keep weather forecasting at its best (a best which is inevitably constrained by the turbulent behaviour of the atmosphere on a wide, and very relevant, range of space and time scales).
We might suffer a possibly detectable drop in weather forecasting skill if we travelled back in time to erase the models, but we’d have removed a weapon from the armoury of those who are irresponsible enough to seek to scare us with their tales of imminent doom based on zero observational evidence of anything odd going on in the vehicles of that doom: winds, rains, sea levels etc.
We'd also have had, perchance, a more extensive examination of the physics and other aspects of the science, free from the wet-blanket effect that computer modellers can bring with their dark arts and all but impenetrable claims and obsessions – for some, the virtual world of their software can become more vivid, more 'real', and more congenial than the even messier real one. There is something about computers that can get in the way of thinking and of conversation – something to do with a sense of a black box that we scarcely know how to argue with and which we know is capable of computational chores that we cannot begin to match with pencil and paper and spreadsheets.
Would the IPCC have fizzled out without its very own version of 'the computer says', which so helped the Club of Rome in their previous foray into mass scaremongering called 'Limits to Growth'? I am inclined to think so, and I think some considerable opprobrium belongs to computer models and their keepers over the Foot and Mouth fiasco in the UK. Our peculiar, almost superstitious, fear of computers and our readily exercised panic around them was also illustrated by the Year 2000 shenanigans over what could and should have been a matter of routine review and testing.
But the genie is out; we have them, and they seem to get more funding year after year. What can be done to protect us from further harm from them? This is tricky. Their keepers are scarcely likely to go big on any notion that they are so feeble in the face of the immense complexity of the system that they should have no more impact on public policy than a Mystic Meg or a Whitaker's Almanack. They needs must urge us to believe that, if not yet, then at least just around the corner, great benefits will follow from climate modelling. And of course they may be right, and we may even get technical breakthroughs that no one has even imagined yet. I think it a good thing that some people should pursue them for research just in case. But we need more public recognition of the dangers they pose.
Posted as a comment on this thread at Bishop Hill (Mar 6, 2012 at 11:32 AM): http://www.bishop-hill.net/blog/2012/3/5/new-solar-paper.html?currentPage=3#comments

March 8, 2012 6:50 am

wws says:
March 8, 2012 at 5:22 am
UFO’s – climate “science” – I think the two have quite a bit in common.
Funny that you mention that. Try “Aliens Cause Global Warming: A Caltech Lecture by Michael Crichton” (Posted on July 9, 2010 by Anthony Watts). Great read. Guaranteed.

Richard S Courtney
March 8, 2012 6:51 am

Cadae:
In your post at March 8, 2012 at 4:27 am you say;
“Many other sciences have the advantage of being able to test their models via direct experimentation against objective reality, but Climate Science does not have clones of the Earth to experiment with nor the tools to experiment with the earth itself. The closest Climate Science can come to an experimental Earth are computer simulations- but because of the complexity of the Earth, these simulations are entirely inadequate for properly and objectively testing climate hypotheses.”
Sorry, but that completely misses the point. The question at issue is NOT how “climate hypotheses” can be assessed. In his article Willis says;
“And since 1979, an entire climate industry has grown up that has spent millions of human-hours applying that constantly increasing computer horsepower to studying the climate.
And after the millions of hours of human effort, after the millions and millions of dollars gone into research, after all of those million-fold increases in computer speed and size, and after the phenomenal increase in model sophistication and detail … the guesstimated range of climate sensitivity hasn’t narrowed in any significant fashion. It’s still right around 3 ± 1.5°C per double of CO2, just like it was in 1979.”
Developments of the models have achieved nothing. The question at issue is NOT how “climate hypotheses” can be assessed. It is WHY the models’ developments have achieved nothing. And the reason for that failure of achievement is that the models make no attempt to test “climate hypotheses”.
Models could test such hypotheses by comparison with existing real-world observations; e.g.
Does a model designed to represent one climate behaviour (or combination of climate behaviours) provide an output which emulates another climate behaviour?
The clearest example of such a test would be for the model to output spatial temperature and precipitation patterns which match the temperature and precipitation patterns of the real Earth. If a climate hypothesis does not provide such an output, then it is not correct, so reject it and construct another model. But climate modellers don't do that, although none of the models – not one of them – provides a reasonable emulation of such basic climate variables over the surface of the Earth.
Each modelling team has invested much time, effort and money in its model so refuses to scrap it and keeps adding things to it in hope that somehow one day it will work. Hope is not science.
But, as I explain in my above post at March 8, 2012 at 3:17 am:
“The fact that the models are adjusted to match mean global temperature but fail to match regional temperatures is a direct proof that none of them is emulating the climate system of the real Earth.
And the fact that each model uses a different value of climate sensitivity is a direct proof that they are not applying a unique theory of climate behaviour.
But model falsification seems to play no part in what is disingenuously called ‘climate science’.”
At present the climate models are not being used as scientific tools: they are being used as playthings. And I object to my taxes being used to pay people to play these computer games.
Richard

pwl
March 8, 2012 6:54 am

A fundamental mistake that the climate scientists are making is to assume that the climate can be modeled at all, let alone modeled using traditional mathematics.
“If theoretical science is to be possible at all, then at some level the systems it studies must follow definite rules. Yet in the past throughout the exact sciences it has usually been assumed that these rules must be ones based upon traditional mathematics. But the crucial realization that led me to develop the new kind of science in this book [A New Kind Of Science aka ANKS] is that there is in fact no reason to think that systems like those we see in nature should follow only such traditional mathematical rules.” – Stephen Wolfram, A New Kind of Science, page 1.
The types of systems that Wolfram (and others) have discovered have simple rules yet generate immense complexity, as complex as any complex system. Yet these systems fail all attempts at being modeled by traditional mathematics.
Among the discoveries Wolfram has made that are relevant to climate science is that these simple systems, which are pervasive in nature's climate systems, can and do generate their own internal randomness, which makes these simple complexity-generating systems impossible to predict; the only way to know what they will do next is to observe their state changes unfold through time. Climate is made up of such simple systems that generate complex behavior and internal randomness and thus can't be modeled from first principles. This is also one reason why traditional mathematics fails to describe, and never will describe, most natural systems, including climate, with any accuracy. Even using ANKS methods you'd not be able to model the climate accurately, as the model is never the real climate ("the map is not the territory"); the models can't model something that generates its own randomness – it's simply not possible, as Wolfram has proven mathematically.
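One concrete instance of the kind of simple rule Wolfram has in mind is the elementary cellular automaton "Rule 30": the update rule is a single line, yet the centre column behaves like a random bit stream. (Illustration only; whether the climate really is such a system is, of course, the conjecture being argued here, not an established fact.)

width, steps = 79, 40
row = [0] * width
row[width // 2] = 1                           # start from a single black cell
for _ in range(steps):
    print(''.join('#' if c else '.' for c in row))
    # Rule 30: new cell = left XOR (centre OR right), on a wrapped row
    row = [row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width]) for i in range(width)]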

Hu McCulloch
March 8, 2012 6:55 am

I'm old enough to remember from freshman chemistry lab back in 1963, when Willis was writing his first computer program, that a "probable error" is the half-width of a 50% confidence interval, i.e. 0.67 sigmas. A more conventional 95% CI extends about 2 sigmas, or 3 times the PE. I.e., 3 +/- 1.5 PE gives a 95% CI of 3 +/- 4.5 °C. Not very informative!
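The conversion is just standard normal quantiles (nothing climate-specific; a quick check of the arithmetic):

from statistics import NormalDist

pe_in_sigmas = NormalDist().inv_cdf(0.75)  # probable error = 0.674 sigma
sigma = 1.5 / pe_in_sigmas                 # read the report's 1.5 degC as a probable error
print(round(1.96 * sigma, 1))              # half-width of the 95% CI: about 4.4 degC, roughly 3 x PE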
Another big uncertainty is the effect of human emissions on atmospheric CO2. Both are going up, and there is surely a connection. However, the level of atmospheric CO2 looks a lot more like the annual rate of emissions than like the cumulative emissions to date. Past emissions must therefore eventually get taken up by the environment on a timescale that is more like a decade than a century. There must be a connection between steady-state annual emissions (holding solar etc. constant) and steady-state CO2 that depends on this take-up rate, but there doesn't seem to have been an effort to quantify this.
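A one-box sketch of why the level would track the emission rate rather than the cumulative total, if the take-up time really is about a decade (the numbers are invented for illustration; this is not a fitted carbon-cycle model):

tau = 10.0           # assumed uptake e-folding time, years (illustrative only)
emissions = 4.0      # assumed constant emission rate, ppm-equivalent per year (illustrative only)
excess = 0.0         # atmospheric CO2 above the pre-industrial baseline, ppm
for year in range(200):
    excess += emissions - excess / tau    # one-year step: add emissions, remove a fixed fraction
print(round(excess, 1))      # settles near emissions * tau = 40 ppm, tracking the emission RATE
print(emissions * 200)       # cumulative emissions, by contrast, grow without bound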
As for nuclear fusion, Willis, it was achieved more than 60 years ago. I think you mean cold fusion? ! 🙂

March 8, 2012 7:04 am

Says Willis,

For me, there is only one answer. The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.

Do you suppose the Sun does not go around the Earth?
Could it be that CO2 is not an important driver of the Earth’s climate?
Hmm. . .
/Mr Lynn

March 8, 2012 7:06 am

One thing is for sure, Willis isn’t a Texas Sharpshooter. He hits the bullseye without the help of post-shot correction factors.
The AGW crowd consists mostly of neo-Luddites, watermelons, dishonest scientists, fools, charlatans, cowards, and bandwagonists. After 33 years of failed predictions, falsified data, modified data, and lies, I have no respect for those who still try to keep a foot in both camps. I'm sorry, but Dr. Judith Curry's dialog is losing meaning. No one on the other side is doing responsible science, just one "worse than we thought" after another. They never even check their premises, or for that matter ask themselves, "If this were really true, how would it have changed the past?"
I attended an HPC workshop last week. It was for the Oil & Gas Industry. We're in deep trouble when an oil company infrastructure specialist proudly states how much CO2 they've mitigated by their construction and power utilization efficiency.
If we could just clone, say, 10,000 Willis’ …

dp
March 8, 2012 7:08 am

The NAS report reads like one of the greatest fictions since Edgar Rice Burroughs populated the dead sea bottoms of Barsoom. Theirs was the most convoluted way of saying “we don’t know what we’re doing” and “This is hard – let’s do it wrong” I’ve seen in quite a while. Because this report is outside their charter I hope we taxpayers are not picking up the tab. It’s all been a big waste of a good Gaea so far.

Jeremy
March 8, 2012 7:17 am

30 years and no one has come up with data collection methods to try to understand clouds better? That’s shameful. I wish I understood more precisely what the climate scientists need in terms of data. It seems like you should be able to simulate clouds from physical first principles.

RockyRoad
March 8, 2012 7:28 am

The AGWCF (CF = “Control Freaks”) crowd couldn’t possibly control water, but they can control CO2 (or give it a try). So they tailor their “science” to fit their control appetite, which is one horrible indictment on their whole sordid affair.
I shall characterize it as “Epic Fail”. So now we have AGWCFEF. You’d think that would put an end to it, but unfortunately, it continues unabated.

Frumious Bandersnatch
March 8, 2012 7:29 am

Willis,
Good post (as usual – I especially enjoyed the fact that your "jerkitude" is read into the record – gave me quite a laugh). The one point on which I would disagree with you is the notion that enviro science is the only one infected by this lack of foresight. There are others which have their fair share of problems in this arena (such as evolutionism, though perhaps not to the degree climatology has), and even some areas of science that are generally quite good have had issues (such as the long and sometimes vitriolic discussions over the Big Bang theory).
All of that said, I have never seen the level of corruption or vicious personal attacks in any other area (except from Muslim Jihadists) that seem to be so endemic in climate science.

ChE
March 8, 2012 7:34 am

PS—Please do not compare this to the lack of progress in something like achieving nuclear fusion. Unlike climate science, that is a practical problem, and a devilishly complex one. The challenge there is to build something never seen in nature—a bottle that can contain the sun here on earth.

Indeed.
Now let’s consider a hypothetical. Let’s suppose that you’re the president of the world, and you have ten trillion dollars to spend on this. You can spend ten trillion on getting a definitive answer to the climate sensitivity question, or you can spend ten trillion and get practical fusion. Which is the smarter investment (forgive me, Judith Curry)?

Mickey Reno
March 8, 2012 7:35 am

Richard S Courtney says: “model falsification seems to play no part in what is disingenuously called ‘climate science’.”
I second this statement. I was arguing falsification on RealClimate a couple of weeks ago, (the Bickmore thread) and they have a tin ear, they just don’t care about falsification. According to the consensus, real climate science allows you to simply readjust the model, and try again.
As for how this unscientific process might tend to corrupt, I didn't dare mention that, of course, because of the imperious and sanctimonious attitudes there. Nor do they have any conception of how far off the rails such carelessness can take public policy, which sort of expects accuracy rather than statistical "skill."

Bob Johnston
March 8, 2012 7:37 am

“An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out and that the growing generation is familiarized with the idea from the beginning.” – Max Planck
With the human mind being what it is, I have little hope that the current warmist crop will ever have the ability to conceive that their CO2 hypothesis could ever be wrong. The brain doesn’t work like that in most people, it’s why you see the warmist side resort to name calling and distortions in an effort to resolve their cognitive dissonance. People get entrenched (What, me wrong???) and they simply cannot shift course after having taken a stand.
The CAGW situation and its lack of forward movement is very similar to another field of study that interests me and that has not moved forward in 40 years; that field being cardiovascular health and obesity as it relates to diet. This field had its own Michael Mann, that being Ancel Keys back in the 50s. His flawed studies and overbearing manner set this field of study down a path that has led to incorrect conventional wisdom that just can't seem to be broken by logic or even by studies showing the contrary.
People develop blind spots and no amount of persuasion will ever get them to change. It’s only after they die or leave the field that new ideas can take hold.

Hector M.
March 8, 2012 7:44 am

@Judith Curry (March 8, 2012 at 4:37 am): The link to the NAS report is in the post, where it mentions the Charney report.
More specifically, it is at http://www.atmos.ucla.edu/~brianpm/download/charney_report.pdf.

Rick
March 8, 2012 7:45 am

OT, but I was listening to a March 8th CBC radio program, "Quirks and Quarks", and W. Richard Peltier, the most recent winner of the Gerhard Herzberg Medal and prize, was being interviewed. Peltier is the founding director of the University of Toronto's Centre for Global Change Science, a 2010 winner of the Bower Award and prize for achievement in science from the Franklin Institute in Philadelphia, the 2002 winner of the Vetlesen Prize for Earth Sciences, and a mentor of more than 30 doctoral students and an equal number of postdoctoral fellows. His use, more than 6 times in a short interview, of the term "denier" caught my ear. Here we have a very prestigious award, worth a million dollars, awarded to a man with obvious talents, and he spends most of his interview time whining about "deniers".

Frank K.
March 8, 2012 7:48 am

Willis – you had me going there for a minute – 1979!! Heh :^)
It is interesting to note that one of the two models discussed by Charney is none other than Jim Hansen’s early GCM.

RockyRoad
March 8, 2012 7:53 am

pwl says:
March 8, 2012 at 6:54 am

A fundamental mistake that the climate scientists are making is to assume that the climate can be modeled at all, let alone modeled using traditional mathematics.

Actually, they do a pretty good job of it; for example, the "Map" procedure at Weather.com. That gathers radar data using "traditional mathematics", interpolates it, and presents it in a series of frames, a "model", if you will, of the past six hours or so. They even have a "Next 6 Hours" feature that projects the trends out for an equal duration (apparently predictions beyond 6 hours don't have any validity, or they may just be too resource-intensive). And they have a 10-day forecast, but admit that today's is 80% accurate, tomorrow's 60% accurate, the next day's 40% accurate, and so on. (They recently added a "5-day" forecast, apparently recognizing that after 5 days their predictive ability is zero.)
So I submit there is a “climate model” based on actual data, but most of you would argue that weather isn’t climate, while one could argue it is a glimpse of the climate. And yes, I’m stretching this assertion quite a bit. (Thank goodness I’ve not seen a CO2 meter anywhere on Weather.com yet, but I fear I may have given them an idea.)

DocMartyn
March 8, 2012 8:04 am

The date is quite interesting: 6 years after H. Kacser and J. A. Burns, "The control of flux", Symp. Soc. Exp. Biol., 32:65–104, 1973.
The biochemists had realized that box models, using known rate constants, didn’t describe multi-component systems. They failed when tested against real data. This led to metabolic control theory, which is now part of canonical control theory. Interestingly, economics has also gone in this direction, although using a different formalism.
In layman’s terms, metabolic control theory shows you that complex systems have both inertia and elasticity. Tipping points are few and far between.
A review for those who like math is here:-
http://www.siliconcell.net/sica/NWO-CLS/CellMath/OiOvoer/Hofmeyr_nutshell.pdf
The nice thing about MCT is that you can actually do experiments to test the analysis, thus testing the hypothesis.

March 8, 2012 8:04 am

@Willis — Do we have any documentation on the "millions of lines" of code? If you include all the libraries used to make the application happen, I am sure it is easy to get to that. Most of us don't monkey with the libraries to create our code; we include them and use them. Do I get to count the lines in the library as part of my "program size"?
If it is millions of lines of code, it begs for a re-eval. Might I suggest using minecraft as their modeling engine… Maybe Roblox…

Nate_OH
March 8, 2012 8:10 am

“Jason Calley says:
March 8, 2012 at 5:17 am
You asked that we not draw analogies with fusion research, but please forgive me if I point out what may be the single most pertinent similarity. There are HUGE sums of money and power involved in NOT solving both questions.
I wish I could attribute the quote, but someone said, “Science + politics = politics.””
Exactly!
Fusion, space flight, climate research, etc have become political institutions, not R&D goals.
Pournelle’s Iron Law of Bureaucracy goes into effect and we end up where we are.
“…in any bureaucratic organization there will be two kinds of people: those who work to further the actual goals of the organization, and those who work for the organization itself. Examples in education would be teachers who work and sacrifice to teach children, vs. union representatives who work to protect any teacher including the most incompetent. The Iron Law states that in all cases, the second type of person will always gain control of the organization, and will always write the rules under which the organization functions.”

kim
March 8, 2012 8:17 am

Gaea looks like shadows on the wall of the cave.
=================

March 8, 2012 8:28 am

“Now me, I think the fundamental misunderstanding is the idea that the surface air temperature is a linear function of forcing.”
While this is a charming idea—just an innocent scientific mistake, correct it, and all will be well—it doesn’t ring remotely true to me.
Firstly, in a trivial, tautological sense, the assumption must be true—a forcing is anything that changes the surface temperature, and the greater the change, the stronger the forcing.
Secondly, with respect to the influence of specific observable forcings, I don’t believe that no one ever thought of removing the assumption that their effects will be linear. The idea of feedbacks between different observables naturally leads to the expectation of non-linearity.
It is worth noting that the climate of the earth is not the only complex problem that so far has withstood the onslaught of increasing computing power. Take, for example, the function and the development of the brain. We understand the function of individual nerve cells, and the interactions of small assemblies of nerve cells, in a qualitative to semi-quantitative fashion; however, no one has a valid working model approaching anything like a bird, or even insect, brain.
Complexity continues to elude us even with something comparatively simple such as the folding of single protein molecules. For background: Each protein molecule is initially synthesized in the cell as an inert, linear strand of amino acids. This strand then spontaneously bends, twists and curls into a certain, specific folded shape; only in this folded state does the molecule assume some function useful to the cell.
When artificially unfolded back to a linear strand in vitro and then left alone, most proteins will spontaneously revert to their folded, functional structure. This tells us that all the information required for reaching that structure must be contained in the amino acid sequence of the linear strand. Therefore, it should be possible to predict the folded structure from the amino acid sequence alone. However, while some heuristics exist to predict some likely features of the folded structure, we are very far away from accurate, complete and meaningful prediction, and therefore we continue to require X-rays and NMR to study folded structures.
The protein folding problem is many orders of magnitude simpler than the climate; and importantly, unlike the latter, it is also amenable to extensive experimentation. If we cannot even understand the choreography of a single molecule with a few thousand atoms, is it really surprising that we have failed to understand something as humongously complex as the climate?
Of course, any focus on scientific reasons for the lack of progress ignores the fact that climate science has become so politicized that free, open, disinterested debate has been disrupted. So, even if the physical problem had been a simple one, this lack of openness would have ensured failure, much like Lysenko’s politically endorsed dogma assured failure to understand the comparatively trivial question of genetic variation and inheritance.

Jay Davis
March 8, 2012 8:31 am

Lack of progress on understanding “climate change” is easy to explain. When I read the garbage published by the so-called “climate scientists” that is presented here on WUWT and on other sites, it’s easy to see that those doing the research are far from being the “best and the brightest”.

Joachim Seifert
March 8, 2012 8:33 am

Willis, great article…..good to read…..the climate MISERY since 1979!
I agree profoundly with your major quote:
“””For me, there is only one answer. The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error…..””””
There are 2 major errors concerning fundamentals/underlying paradigms:
(1) On CO2: the trace gas does not warm the climate; see the papers of G. Kramm & Tscheuschner
and G. Gerlich on the role of CO2.
(2) The cause of global climate warming/cooling on the centennial scale is not atmospheric
but ASTRONOMICAL, thus only HARMONIC models can show the cause….
I just finished the last numbers on it, after being inspired by Nic Scafetta’s harmonic model
vs. the IPCC GCMs; the underlying warming mechanics is absolutely clear now, no doubt left,
and the results show (for the keen reader) that each of the 3 coming decades will shed 0.1°C
per decade in GMT…..
Cheers to all….
JS

michael hart
March 8, 2012 8:41 am

I see the fundamental problem as one of kinetics. A synthetic chemist can go into the lab to test their great new idea. From bitter experience I can tell you that it will usually fail the first time. And it fails again. That comes with the territory. Wash the glassware. Maybe change something, and try again. But at least the chemist will learn that rapidly. Possibly within hours. Reality can be a tough disciplinarian in science.
Now, what if the experiment takes several years, or even decades? The student may have completed their Ph.D. and be building a successful career before the results come in. If the results are not as expected there may be no time or resources to start again from scratch. What are they going to do? Retract publications, say they may have been wrong and resign? Of course not. Press on. More analysis, faster computers. Even when there may be reasons to think that can’t help. Sure the student had a good advisor, but that advisor likely faced the same problems with the same unsatisfactory solutions.
Every endeavour has its associated hype and marketing. When a new drug or medical treatment fails, it will be in the papers. An engineer sure gets to know what people think if their bridge falls down, or their satellite doesn’t work, or the rocket blows up on the launch pad. Right now I can’t see where the failure point is in climate science. It is not necessarily the fault of someone studying it that they have to wait a whole career to be proved wrong. But it’s a fundamental question that a science has to address.

kakatoa
March 8, 2012 8:46 am

Willis,
On the positive side of things, from my perspective, two quotes from the report were something that might lead to a reevaluation of which areas of research we could/should focus our limited resources on in the field of climate science.
1) “Trustworthy answers can be obtained only through comprehensive numerical modeling of the general circulations of the atmosphere and oceans together with validation by comparison of the observed with the model-produced cloud types and amounts.”
2) “At present, we cannot simulate accurately the details of regional climate and thus cannot predict the locations and intensities of regional climate changes with confidence. This situation may be expected to improve gradually as greater scientific understanding is acquired and faster computers are built.”
The second quote is the one that interests me the most. As you have noted, the development of computers (information processing) has increased dramatically over the years in terms of computations per second, and we can expect further scientific and technological improvement in computations per second over time.
My suggestion would be for climate science research to focus a bit more, ok a lot more, on “greater scientific understanding” and a lot less on running the models that we already know cannot “predict the locations and intensities of regional (let alone local) climate change with confidence.”
Thanks for the post.

March 8, 2012 8:51 am

Rick says:
March 8, 2012 at 7:45 am
OT, but I was listening to a March 8th CBC radio program “Quirks and Quarks” and W. Richard Peltier, the most recent winner of the Gerhard Herzberg Medal and prize, was being interviewed. . . Here we have a very prestigious award, worth a million dollars, awarded to a man with obvious talents, and he spends most of his interview time whining about “deniers”.

And that is the problem. For all the cheering here and elsewhere at the tattered and embarrassing state of self-styled ‘climate science’, for all the signs that governments in Europe and America are backing away from additional ‘climate’ expenditures, the institutional leviathan rolls on, dominating the discussion at every level, stifling market initiative, and skewing scientific inquiry by demanding it study nothing but ‘climate change’.
We’ll know the monster has been stopped when a prestigious academic like this Peltier character, “a mentor” (says Rick) “of more than 30 doctoral students and an equal number of post doctoral fellows” turns around and tells his chairman and the world, “This ‘climate change’ nonsense is based on utterly fallacious and untestable assumptions; my students are henceforth going to do real science, and the granting agencies had better wise up and support them.”
/Mr Lynn

March 8, 2012 8:56 am

Willis says: “Now we can debate what that fundamental misunderstanding might be.”
Let’s start with the fundamental that drives all else.
Can CO2 do what they say it does?
I say no.

March 8, 2012 9:02 am

Michael Hart,
you make some excellent points. From the way you make them, as in: “A synthetic chemist can go into the lab to test their great new idea. From bitter experience I can tell you that it will usually fail the first time,” I take it that you must be younger than me. I no longer find the experience bitter—instead, I fully expect and look forward to it; I am fully focused on observing the way it fails, and am almost disappointed if something works as planned the first time.
You correctly say, “Reality can be a tough disciplinarian in science”. This, to me, is the single most valuable aspect of my scientific education. If you have the smarts, you can easily work a spreadsheet (with the possible exception of trend lines, of course ;), whip up some code, subscribe to some dogma or even invent a new one. However, only rigorously comparing your predictions to reality will make it science, and will make you a better person.

JMW
March 8, 2012 9:28 am

I guess this is a classic example of “Policy driven Science”.
They knew the answer they needed to have and hence they had to build on the assumption of AGW and CO2 driven runaway warming.
You say it is OK for them to say “hey, the question you ask is too difficult, so we’ll answer the question we want to answer and not the question you want answered.”
I’m not sure I agree.
But then again, maybe it is right.
If they were to answer the question they did answer, then they have had plenty of time – since 1979 – shed-loads of money, a massive increase in computing power, and a dramatic expansion in the numbers of researchers and research projects.
So, given the nature of the question they did try to answer, resources and time were never an issue.
That being the case, they could equally have decided to answer the question they were asked and simply said “OK. We can do that but it will take time and a shed-load of tax payers money. Don’t expect an answer soon.”
But there are two things wrong with this.
Firstly, the implication is that if they had tried to answer the question as stated, and there were no fundamental flaw, then the answers (and the science) should have matured a lot faster and a lot more cheaply. OK, we can’t know. We might suppose and expect, but we might still come up with “we don’t know.”
Secondly, they would then be answering a question instead of trying to support a hypothesis designed to support a policy. In that case, maybe the funds and researchers would not have flowed like water from the taxpayers’ pockets. In which case they probably wouldn’t actually be any further advanced. In all likelihood the question would have been asked of another organisation slightly more savvy, who would then do exactly as they have done.
These “what would happen IF….?” questions are a nice diversion but I guess all we really can do is exactly as you have done.
Ask “where’s the beef?”

Brian H
March 8, 2012 9:36 am

On the very iconoclastic UK site “Number Watch” is a segment on “The Laws”. One of relevance is:

The law of computer models
The results from computer models tend towards the desires and expectations of the modellers.
Corollary
The larger the model, the closer the convergence.

So all those gigaflops of added complexity are causing convergence with the modellers’ assumptions. I wonder if it’s asymptotic, or linear …

March 8, 2012 9:44 am

Wonder if that new Cray supercomputer called “Gaea” has already been ideologically programmed.

Richard S Courtney
March 8, 2012 9:46 am

michael hart:
In your post at March 8, 2012 at 8:41 am you address a question which you pose, viz.:
Now, what if the experiment takes several years, or even decades?
But that question is a ‘red herring’ because anybody who examines the outputs of climate models can see that none – not any – of the climate models emulates the climate system of the Earth. The examination takes minutes (n.b. NOT years) and the conclusion is indisputable. (This is explained in my above post at March 8, 2012 at 3:17 am).
Therefore, each of the models should be rejected as a predictive tool because its inability to emulate existing climate indicates it is very unlikely to emulate responses of existing climate to altered inputs.
The climate modellers know their models do not emulate the Earth’s climate system but wish to obscure the fact from wider circulation. So, they do “ensemble runs” and take the average of the runs. Of course, this practice is merely fakery because average wrong is wrong.
Richard
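As a minimal numerical illustration of the narrow arithmetic point about ensemble averaging made above (with entirely made-up numbers, and saying nothing about any actual model): averaging many runs shrinks the random scatter, but any error shared by all the runs passes straight through to the ensemble mean.

import numpy as np

rng = np.random.default_rng(1)
truth = 0.15          # hypothetical "true" value, arbitrary units
shared_bias = 0.10    # hypothetical error common to every run
runs = truth + shared_bias + rng.normal(0.0, 0.05, size=1000)

print("ensemble mean    :", round(runs.mean(), 3))           # ~0.25, not 0.15
print("ensemble spread  :", round(runs.std(), 3))            # small
print("error of the mean:", round(runs.mean() - truth, 3))   # ~0.10: the shared bias survives averaging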

Septic Matthew/Matthew R Marler
March 8, 2012 9:51 am

Willis, I liked your quote from Tom O’Bedlam’s song. We must have at some time purchased books from the same bookstore or something.
The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.
The parts of the climate science that give readily computable answers are the simplified thermodynamic models, illustrated in great detail by Raymond T. Pierrehumbert’s book “Principles of Planetary Climate”. Most of that science was “settled” a while back, and the answers are mostly consistent decade after decade. The omissions and necessary elaborations have not changed that much. An alternative to your inference that climate science is based on a basic and far-reaching error is that the climate is a complex of many locally acting and some globally acting processes, and that it is merely hard to learn the whole system; a task requiring some more decades, at least, of patient and persistent empirical research (like the Georgia Tech paper you thrashed recently), and patient and persistent model building and testing. You could be a P-47 pilot complaining that the F-22 can’t be built because of a fundamental error in the underlying paradigm — but as we have seen, many detailed steps were required in between, not a fundamental paradigm shift, including supercomputers to give better answers to the Navier-Stokes equations, other improved mathematical/computer modeling of fluid flow, and better materials.
Someday someone like Isaac Held, with a more complex and detailed model running on a more powerful computer, will have an answer (tentative, a la Fred Mooten of Climate Etc) to the question of what extra CO2 will do to cloud and rain formation in the open Pacific Ocean.
Maybe something as clearly revolutionary as the Hahn-Meitner-Strassman discovery of uranium fission will occur, but the solutions we seek may well come from persistent dedicated efforts of the kinds now underway and planned. And, like many fundamental discoveries, the discovery of fission was an unplanned outcome of a well-planned and well-executed series of experiments over a long time span. They did not start off trying to create a new weapon or new power station.

Doubting Rich
March 8, 2012 9:52 am

Great post.
However I have another idea as to what is stultifying the science: mutually-exclusive assumptions.
The whole edifice is based on the assumption that the climate was stable before human intervention. The hockey stick is the most obvious expression of this, but it is the underlying assumption, down to pre-emptive demands in secret emails that they must get rid of the MWP.
However it is at odds with the idea of positive feedback. And that is also one of the foundations of the edifice, as of course without positive feedback AGW can never be catastrophic.
However no system showing positive feedback can be stable. So there is an inherent contradiction between two basic assumptions, and work on one is necessarily pulled down by work on the other. Progress cannot be made until the researchers give up one of these.

D. J. Hawkins
March 8, 2012 9:55 am

The Pompous Git says:
March 8, 2012 at 12:24 am
Richard deSousa sais @ March 7, 2012 at 11:30 pm
The ghost writer for NAS report has to be James Hansen.
Ah yes, the Ghost Who Talks 😉

Maybe he can help Trenberth with the “phantom” heat he’s been looking for ;-).

mojo
March 8, 2012 10:06 am

Nitpick:
“A knight of ghosts and shadows”, surely?
[Per Wiki: “Both “Tom O’ Bedlam” and “Mad Maudlin” are difficult to give a definitive form, because of the number of variant versions and the confusion between the two within the manuscripts.” -w]

March 8, 2012 10:06 am

Doug Cotton says:
There is much discussion of clouds herein, so I trust this is considered on topic. It is also an issue I have discussed in detail in my peer-reviewed paper…
Doug, I think a big part of the problem is that you come across a lot like a spammer. I’m sure that’s not your intention, and you may not even see it yourself (I can understand that you’re excited about your paper), but everything you say has to point to your “peer-reviewed paper”. A lot of others here have also published peer-reviewed papers, but they don’t hype them all the time like that.
Please, join the discussion, and please share when your paper is published. But please stop hawking it with every post, unless your desire is to get people to tune you out.

March 8, 2012 10:10 am

Uncle Terrence used to recount something his brother Dennis said to him once; there are several versions of the story, but it goes something like this:

“Have you ever noticed that as the bonfire of knowledge is built ever brighter that the surface area of mystery gets ever larger?”

W^3

March 8, 2012 10:25 am

Willis, sometimes progress consists of figuring out that what you thought you knew wasn’t actually so. The history of economics teaches many hard lessons of this type. The work by Granger, Newbold and Phillips, beginning in the 1970s, to identify the problem of spurious regressions and unit roots in economic time series and provide a theoretical explanation, was an amazing example of intellectual progress. But the implication was that a lot of the empirical work on which economic modeling had been based up to that point was garbage. It’s a painful part of the evolution of any discipline to see a theoretical or empirical tradition busted up by new findings or new data, and I don’t suppose climatologists are any more or less receptive to the process than experts in any other discipline.
I do suspect there is an institutional difference, though. Modern climatologists all work in academic societies that have issued position statements on climate change that effectively make loyalty to a set of conclusions a precondition of being a member in good standing of the society. Economics societies do not issue position statements, leaving it up to individual members to speak for themselves. I think the latter tradition is more conducive to progress.
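For readers unfamiliar with the Granger–Newbold result mentioned above, here is a minimal sketch in Python: regress one random walk on another, completely independent, random walk, and the usual t-test declares the relationship “significant” far more often than the nominal 5% of the time. The sample size, number of trials and seed are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
n, trials, significant = 200, 1000, 0

for _ in range(trials):
    # Two independent random walks (unit-root series).
    x = np.cumsum(rng.normal(size=n))
    y = np.cumsum(rng.normal(size=n))
    # Ordinary least squares of y on x, with an intercept.
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    se_slope = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    if abs(beta[1] / se_slope) > 1.96:   # "significant" at the nominal 5% level
        significant += 1

print(significant / trials)   # typically far above the nominal 0.05

This is the sense in which a lot of earlier empirical work turned out to be garbage: trending, unit-root series can look strongly related when they have nothing to do with each other.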

Richard S Courtney
March 8, 2012 10:33 am

Septic Matthew/Matthew R Marler:
A biologist predicts a gazelle will leap because a lion is near. The prediction is accurate because the behaviour of gazelles has been studied and, thus, is known. A model of the complex gazelle structure and central nervous system is not needed for the prediction to be accurate, and such a model which could make the prediction may not be capable of construction for centuries to come.
But concerning the problems with models of the even more complex climate system, in your post at March 8, 2012 at 9:51 am you suggest:
“Maybe something as clearly revolutionary as the Hahn-Meitner-Strassman discovery of uranium fission will occur, but the solutions we seek may well come from persistent dedicated efforts of the kinds now underway and planned.”
Perhaps and perhaps not.
At issue is how much longer are we supposed to throw money at this in the hope that “the solutions we seek may well come from persistent dedicated efforts of the kinds now underway and planned”.
33 years have passed since the Charney Report and (as the above article by Willis points out) there has been no progress over that time; none, zilch, nada. Expenditure to achieve this nothing is probably running at more than US$5 billion p.a.; the US alone is spending US$2.5 billion p.a. on it.
And all we have obtained is evidence which refutes the underlying model assumption of rising atmospheric GHG concentrations (especially CO2 concentrations) forcing global temperature upwards; e.g.
The ‘hot spot’ is absent.
The “committed warming” asserted by the last IPCC report has not happened.
No climate model prediction has yet proved correct.
Global temperature has been falling for more than a decade while both anthropogenic CO2 emissions and atmospheric CO2 concentration have continued to rise.
Importantly, no climate model emulates the climate system of the real Earth.
The inability of any climate model to emulate the Earth’s climate system should not surprise any rational person. As I have repeatedly pointed out on this blog and elsewhere, the climate system is more complex and has more interactive components (e.g. biological organisms) than the human brain has interactive components (e.g. neurons). Extreme scepticism would confront anybody who claimed to have constructed a computer model of the human brain that could predict brain behaviour, but you suggest that computer models of the climate system “may well come from persistent dedicated efforts of the kinds now underway and planned”. Yeah. Really. You expect rational people to swallow that!?
It is time to stop the waste of money on the hubristic construction of GCMs. Expenditure should be directed at monitoring climate parameters around the world. When we have detailed knowledge of climate behaviour then we may be able to predict that behaviour. But a model of the climate system may never be capable of such prediction.
Richard

Jim G
March 8, 2012 10:53 am

Willis says:
“That’s the elephant in the room—the incredible lack of progress in the field despite a third of a century of intense study.”
I ain’t sayin that you treated me unkind,
You coulda done better but I don’t mind,
You just kinda wasted my precious time,
Don’t think twice, it’s all right.

March 8, 2012 10:55 am

tallbloke said @ March 8, 2012 at 12:58 am

At this stage it becomes obvious that the C20th warming was a result of less cloud and a more active sun, as evidenced by the much closer correlation between sunshine hours and surface temperature, than that between co2 levels and temperature.
http://tallbloke.wordpress.com/2012/02/13/doug-proctor-climate-change-is-caused-by-clouds-and-sunshine/

This appears to contradict the explanation of the pan evaporation paradox. The measured decrease in evapotranspiration over the last 50 yr is supposed to be due to an increase in cloud cover. The increase in cloud cover was supposedly confirmed by measurement of earthshine from the moon.

March 8, 2012 11:18 am

Doubting Rich said @ March 8, 2012 at 9:52 am

However no system showing positive feedback can be stable. So there is an inherent contradiction between two basic assumptions, and work on one is necessarily pulled down by work on the other. Progress cannot be made until the researchers give up one of these.

Unfortunately, you are incorrect here. Systems with positive feedback can be stable. Back in the dim and distant, the Git built a single valve radio of the sort called a TRF. This included a gain control that used positive feedback to increase the available amplification of the signal received. Too much feedback produced a squeal from the headphones, but a lesser amount allowed turning very weak unusable signals into something comprehensible from the furthest parts of planet Earth. Sadly, there’s too much EMF noise these days for such a simple circuit to be useful.
While the climate system is dominated by negative feedbacks, this is not evidence of absence of positive feedbacks also occurring.
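The stability point can be put in one line of algebra. For an amplifier with forward gain A and positive feedback fraction beta, the closed-loop gain is A/(1 - A*beta): finite and stable whenever the loop gain A*beta is below 1, and only running away (the squeal) as A*beta approaches 1. A minimal sketch with made-up numbers, not anything from a real TRF set:

# Closed-loop gain of a positive-feedback (regenerative) amplifier stage.
A = 10.0                                   # hypothetical open-loop gain
for beta in (0.0, 0.05, 0.08, 0.095, 0.099):
    loop_gain = A * beta
    G = A / (1.0 - loop_gain)
    print(f"beta={beta:.3f}  loop gain={loop_gain:.3f}  closed-loop gain={G:.0f}")
# beta=0 gives G=10; beta=0.099 gives G=1000 - a large but still stable boost,
# which is how regeneration pulled weak signals out of the noise.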

Molon Labe
March 8, 2012 11:28 am

I think the reason there has been no advancement of the science is that the advancements that were made went in the “wrong” direction and were discarded.

bw
March 8, 2012 11:29 am

Congrats on the effort by all, I’m impressed with many of the comments.
I agree that the climate issue lacks progress since 1979 due to using the wrong analytical tools.
In 1900 climatologists thought the atmosphere was of geological origin, so it was studied using the tools of physicists. That’s just a start. A couple billion years ago the chemistry of the earth began to show signs of biological activity.
It seems that those early observations show that the atmosphere is primarily biological. Nitrogen, Oxygen and CO2 are entirely parts of planetary scale biological cycles. Bacteria produce Nitrogen, photosynthesis produces Oxygen, CO2 comes from respiration. Water/oceans are a biological soup. The entire land surface is covered with living soil. Trees are 92 percent CO2 and 7 percent water.
It seems that if you want to understand how the atmosphere behaves, you need to use the tools of biologists. I’m sure that physicists would resist that idea.
How many of the climate models in 1979 included biological interactions??
How many of the 2012 climate models include the fact that the atmosphere is derived from 2 billion years of biological activity?

Vince Causey
March 8, 2012 11:33 am

Nothing has changed since 1979 – the same difficulty in reconciling the complex functions of clouds and water vapour, the same inability to narrow the range of temperature projections. The funny thing is that, while reading this, it reminded me of one other area of science that is also stuck in a time warp.
This has to do with string theory – a particular model in theoretical physics that tries to produce a theory of the universe that will reconcile quantum mechanics and general relativity. This too has gone nowhere in about the same period of time. An entire generation of physicists have spent their careers trying to define the one string theory model (there are 10**500 possible variants) that actually works. What’s gone wrong?
I think what has done for this area of physics is exactly the same thing that has stymied progress in climate science. For most of the last 30 years, string theory has been the only game in town. To obtain professorial positions, you have to do string theory. All alternative avenues of research, such as quantum gravity, have been all but crowded out. It is as if a consensus exists about what line of research is the correct one, which ideas are valid, and which are to be shunned.
There can be little doubt either, that climate science has been burdened with the same tunnel vision, the same obsession with a single paradigm. The effect this has on how the science is done, who gets tenure, and what ideas are to be promoted, and what ideas shunned, should be obvious to anyone who reads WUWT.
The really sad thing is that nothing seems to change. It is as if a form of delusion has taken hold. Maybe this is Thomas Kuhn’s paradigm shift (or lack of shift) in action.

DesertYote
March 8, 2012 11:41 am

The Pompous Git says:
March 7, 2012 at 10:51 pm
I find it fascinating that optics is still moving ahead by leaps and bounds even though the major breakthrough (clear glass) occurred over a thousand years ago and spectacles 800 years ago. Without them, my life would be hell. Maybe climatology is still stuck in the Dark Ages 😉
###
I think your last statement is about as close to the truth as one can get. Political considerations trumping the pursuit of truth.
BTW, my eyesight is very bad. My latest glasses are amazing. Progressive trifocals made from a very high-index material with anti-reflective coating. Every pair of glasses I get is more advanced than the last pair, and this has been the case for 30 years.

Olavi
March 8, 2012 12:21 pm

There should be ways to measure increased feedback. Why don’t these scientists show increased backradiation? Because there is no increase in real life; the increase is only in computer models. Greenhouse theory is somehow wrong, because the data won’t fit the theory’s model or calculations. Solar forcing is underestimated and the CO2 effect is vastly overestimated. If the thermometer shows cooling – it has to be broken. LOL Argo showed cooling in the North Atlantic, so they adjusted the data without fixing the thermometers. Why? There was nothing to fix. LOL

March 8, 2012 12:24 pm

From Delingpole’s article in ‘The Commentator’, a London-based magazine.
“… here’s MIT atmospheric physicist Professor Richard Lindzen addressing the House of Commons in February: “Perhaps we should stop accepting the term ‘skeptic’ because ‘skepticism’ implies doubts about a plausible proposition. Current global warming alarm hardly represents a plausible proposition. Twenty years of repetition and escalation of claims does not make it more plausible. Quite the contrary, the failure to improve the cause over 20 years makes the case even less plausible, as does the evidence from Climategate and other instances of overt cheating.”
http://www.thecommentator.com/article/972/the_high_priests_of_global_warming_have_lost_their_prestige_and_the_realists_are_winning_the_debate

March 8, 2012 12:30 pm

Excellent post, Willis. There is something that puzzles me which I wish that you (or anyone else) might address. Why has the climate sensitivity range in the (what is it?) 23 AOGCM’s used by the IPCC not been “tuned”, “fudged” or adjusted to show more convergence rather than being left just where it was in 1979? Of course, the reduced range and mean or “best estimate” would still need to imply catastrophe, but the modellers could say, “See, we are getting there.” One explanation that came to mind was that maintaining a broad, but not absurdly broad, range was part of the “program” all along. As you reduce the range, you increase the likelihood that some subrange within that reduced range will, despite the consensus’ best efforts, be shown to be empirically false or implausible. Now you are stuck with an even narrower range, and the same problem will come at you again even worse than before. So the strategy fell into place to protect the climate sensitivity range at all costs. When I say “strategy,” I do not mean to imply that this had to develop by agreement, but perhaps by some sort of spontaneous intellectual synchronization, much like Joseph Sobran’s concept of the beehive that starts to buzz without taking a decision to do so.

Cicero308
March 8, 2012 12:34 pm

Quantum computing might *eventually* be able to help model the non-linear behaviors of climate. There is tremendous progress being made outside of traditional linear computer designs. I got a ‘C’ from second-semester Differential Equations and was damn glad to get that C. It’s just really hard conceptually… Applied to Control Systems in 400-level B.S. E.E. school, we had to use sparse matrices of differential equations to model forcings in rather simple electronic control systems. A quantum computer could in theory implement those sparse matrices in near real-time, but EEs take liberties with the math that I doubt could be sustained for actual science. Very interesting discussion with actual science! Who knew actual science continues in spite of Big Science!

pidge
March 8, 2012 12:41 pm

Calling Douglas Adams, we need a planet sized-computer building….

Gary Swift
March 8, 2012 12:45 pm

That was a very thoughtful editorial, but not without points of debate. To say that climate science has made little progress since 1979 is probably a bit of an exaggeration. I think it would be more accurate to say that official climate reporting has made very little progress.
The study and modeling of our little blue dot has improved more than most people know. They can actually do better than random chance on a 5-day forecast now!! That’s not bad!! There are a whole host of new land, sea and space observatories with instrumentation never before available.
So, although the reports read almost exactly like the reports from 30 years ago, the observational data and our skill at using it have improved. Eventually that data is going to overwhelm the beliefs of the orthodoxy, but as with many scientific paradigms of the past, the consensus will not change until the present clergy retires and new ideas are brought in by a new generation.

Sparks
March 8, 2012 12:49 pm

If we simulate a natural chaotic system a billion times with a billion different random scenarios and we end up with a billion different outcomes, then what is the significance of one of the simulation runs matching a theoretical result or even a wild assumption you had before you began?
It boggles the mind that the laughing stock of the scientific community – with their man-made global warming, catastrophic climate change bias, having made no progress over the years while receiving bloated research funding, and claiming to be trying to understand what they also claim to be “settled science” (e.g. “AGW causes colder NH winters”) – get to play with the very expensive scientific instruments.
Maybe it’s time to let some of the brighter kids play with our publicly owned toys, and maybe contribute to some progress in areas like solar physics, astronomy, nuclear fusion and other productive areas of science.
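A minimal sketch of why one matching run tells you very little, using the logistic map as a stand-in for “a natural chaotic system” (the map, the parameter and the size of the nudge are arbitrary illustrative choices, not anything from a climate model): two runs started a hair apart decorrelate completely within a few dozen steps, so the fact that some member of a huge ensemble lands near a chosen target is unremarkable.

# Two logistic-map trajectories, x -> r*x*(1 - x) with r = 4 (fully chaotic),
# started 1e-12 apart, separate to order-one differences within ~40 steps.
r = 4.0
x, y = 0.2, 0.2 + 1e-12
for step in range(60):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 9:
        print(f"step {step + 1:3d}   separation = {abs(x - y):.3e}")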

March 8, 2012 1:12 pm

“Greetings, Professor Falken. Want to play a game???”

Al Gored
March 8, 2012 1:13 pm

For me this lack of progress in Consensus “climate science” just reveals that:
1) It is not science but ideology. Given all the new discoveries which have been made – despite the obstacles to that – and the questions they raise, the Consensus opinion has not budged. Real science would have moved but ideology doesn’t… because it is ideology that must be defended.
2) These ideologues ‘found’ the ‘answer’ they wanted before the whole project started and have too much riding on it to change. On a much smaller scale the same effect happened in the great ‘Clovis First’ controversy in archaeology, when researchers would simply stop digging when they hit the Clovis-dated era lest they find something older that upset that orthodoxy. It took a long time to get past that, with similar groupthink smears of the early ‘heretics.’ And that field has real solid evidence to work with (or deny).
They have been working on this for a long time:
http://inthesenewtimes.com/2009/11/29/1975-endangered-atmosphere-conference-where-the-global-warming-hoax-was-born/

March 8, 2012 1:14 pm

Willis, you say: “For me, there is only one answer. The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.”
=========
That’s exactly my point too. – I do not believe the theory of the Atmospheric Greenhouse Effect (AGHE) to be a “misnomer”. I think it is just plain wrong and has been so ever since the day in 1896 when Arrhenius invented “back-radiation”. – A belief in “Back-radiation” is, of course, quite reasonable – so I am not knocking it per se. –
However, CAGW enthusiasts and “Skeptics” alike all believe in this “Misnomer”. – The difference is, the skeptics say, that any “warming” attributed to CO2 concentration is likely to be so small it can quite easily be lost “in the noise”. (At least that was the case before they put a “circa 1 deg. C” number on it).
I am fine with that – cool – in fact. – But, my observation is: “Just because both sides cannot be correct does not, automatically, mean that one side has got it completely right.” – My argument is that both sides and their beloved GHG “misnomer” may all be in the wrong. – This means that skeptics cannot “win the argument” until Hell – and the rest – freezes over. – By which time it will all be too late.
The AGHE theory seems to me to be reliant on the belief, or assumption, that “heat” is radiated from the surface to be absorbed by atmospheric GH gases, which radiate it straight back – thus conserving energy and raising the surface temperature at the same time. –
Unless, there are more types of radiation than the electromagnetic (EM) one, then all “the evidence” is that heat cannot be radiated. – So let’s look at just a few things:
The Sun radiates energy to the earth, in the form of light, in less than 9 minutes. – (At least that’s the story I have heard).
Electric energy can flow through the appropriate cable (from + to – or vice versa) similarly, it happens at light speed. – At a break in the cable where electricity meets the air, the flow stops. – You can force the electric energy to jump across that break in the cable (through the air) by for example creating an arc with a couple of bits of carbon. The “Arc” created, always emits visible light. – My advice to anyone, is not to touch that light as it will severely burn your fingers. – And, by the way an electric lead, or cable always creates a magnetic field around itself as long as the current is flowing through it.
– Furthermore, with the appropriate equipment setup you can talk into a microphone in America and someone with the correct receiver in Africa can hear your words – instantly. —– These are just a few examples of energy-flow, light and heat creation.
Heat never moves all that fast, (at least, not at light speed) and it probably would be “static” if it was not for Conduction and Convection. ——- Now then, – think of this: If the Sun is a ball using its potential energy to produce DC positive electricity and all the planets are “overall” negatively charged, then the Solar System would be like a “max. Atom” – and thinking about it – compared to the universe – we are not all that big, at all.
————–
And lastly you say: “PPS—Please don’t come in and start off with version number 45,122,164 of the “Willis, you’re an ignorant jerk” meme. I know that. I was born yesterday, and my background music is Tom o’Bedlam’s song:
I hope I didn’t, but I too am often described by many as “an ignorant jerk” and I am proud of it, as it tells me I am still alive. — sorry, I’m not a singer

March 8, 2012 1:19 pm

I am skeptical of skepticism! – Hurrah – That’s my niche.

March 8, 2012 1:35 pm

“Willis Eschenbach says:
March 8, 2012 at 11:01 am
Michael Palmer says:
Firstly, in a trivial, tautological sense, the assumption must be true—a forcing is anything that changes the surface temperature, and the greater the change, the stronger the forcing.

That is assuredly not the definition of a forcing that is used in climate science.”
True. This is why I prefixed my paragraph with a caveat.
“In climate science, a ‘forcing’ is generally taken to be a change in the downwelling radiation at the top of the atmosphere.”
That, on the other hand, seems an overly narrow definition of the term “forcing”.
“Whether this change in downwelling radiation ends up changing the temperature, or whether it is simply balanced out by e.g. a change in the cloud albedo, is the huge unanswered question in climate science. And the fact that you think this question has been answered means you are not following the story. You can’t simply claim that the biggest unanswered question in the field is answered.”
I made no such claim, of course.
PS Willis, quite apart from this question: I can’t find your email address anywhere—I would like to discuss a technical aspect that might benefit your research. If you are interested, please email me – I promise to not misuse your email address.

Richard S Courtney
March 8, 2012 1:44 pm

Leigh B. Kelley:
Your post at March 8, 2012 at 12:30 pm asks:
“There is something that puzzles me I wish that you (or anyone else) might address. Why has the climate sensitivity range in the (what is it?) 23 AOGCM’s used by the IPCC not been “tuned”, “fudged” or adjusted to show more convergence rather than being left just where it was in 1979? “
I answer that this was politically impossible. Whose values of climate sensitivity and aerosol forcing should be preferred?
There is no justification for any of the climate sensitivities used in the models. Indeed, all empirical studies indicate much lower climate sensitivities. So, adoption of any particular value would be an agreed preference for the model which used that value and, therefore, each model Team would argue for adoption of the value it used. An average of all the used values would not solve this because the average would be nearest to the value used by one model.
Simply, any attempt to adopt an agreed value for use in all models would initiate a continuous squabble between the model Teams which could only harm them all. But the existing situation benefits them all.
Richard

IAmDigitap
March 8, 2012 1:45 pm

In climate science a forcing is anything that leverages temperature or any other variable, one way or the other – not only radiative changes.

Colin in BC
March 8, 2012 1:49 pm

anticlimactic says:
March 8, 2012 at 3:23 am
AGW adherents remind me of those who thought that the Earth was at the centre of the solar system.

Bingo!
I’ve made this very analogy myself. The hubris displayed in both examples regarding the significance of Man is extraordinary.

Gary Hladik
March 8, 2012 1:58 pm

Well done, Willis!
And kudos to the commenters on this thread for their added insights. Special thanks to Bill Hunter (March 8, 2012 at 3:56 am) for his eloquent comment on accountability.

Big Bob
March 8, 2012 1:59 pm

It seems to me that if feedbacks were at all positive the earth would have burnt to a crisp long ago. I don’t see that it matters whether the initial source is CO2 or anything else. Positive is positive. Any increase in temp for any reason whatever would cause thermal runaway. Obviously it has not.

johanna
March 8, 2012 2:03 pm

Jim Turner says:
March 8, 2012 at 6:27 am
The point about the rate of scientific progress is an interesting one – it is certainly not ‘even’, some areas have advanced enormously and others not. I think that this can only partly be explained by the effort expended. In my own area (pharmaceutical research) there is much effort, and continual impressive progress is being made in the understanding of underlying processes. In say, the last fifty years we have gained a huge amount of understanding of genetics, biochemical processes and cell signalling; however our actual ability to treat diseases like cancer has improved rather modestly by comparison.
I think part of the explanation may be that how much more we know than we did before is less important than how much less ignorant we are. For example, our knowledge of something may double – but it may be that our knowledge of all that it is possible to know has actually only increased from 0.1% to 0.2%, so we are still largely ignorant.
————————————————————————
I thought of ‘the war on cancer’ as well when I read this post. However much has been spent on unproductive climate research, it is a small fraction of what has been spent on unproductive cancer research.
For a long time, it was assumed that cancer is one disease, not many, and that if we could only crack the magic code, we could find a universal cure. Real progress was not made until it was understood that it is not one disease, just one set of symptoms (to over-simplify for the purpose of discussion). So, we now know that many cervical cancer cases are triggered by a genital wart virus, and a vaccine has been developed. I think it is likely that further biological triggers for other types of cancer will be found. We know that age is probably the biggest risk factor, as our cells gradually become less functional. We know that some chemicals, and high levels of radiation exposure, may be triggers in some people. Detection, surgery, chemotherapy and radiotherapy have improved greatly, but a ‘cure’ is essentially as elusive as ever.
These prosaic facts are a long way away from the ‘cure for cancer’ objective that characterised research for many decades, with negligible results.
I think that most climate scientists are still stuck in the same universal remedy mindset, and like the cancer researchers of the past, keep doing the same thing over and over, but in more high-tech ways, and expecting different results.
In tackling a complex problem, I have always found that the best way is to break it down to bite-size chunks and start with compiling the knowns as a basis for working on the unknowns. As Willis’ post points out, the foundations of modern climatology are exactly the other way around.

David A
March 8, 2012 2:07 pm

Septic Matthew/Matthew R Marler says:
March 8, 2012 at 9:51 am
“The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.”
==========================================================
The parts of the climate science that give readily computable answers are the simplified thermodynamic models, illustrated in great detail by Raymond T. Pierrehumbert’s book “Principles of Planetary Climate”. Most of that science was “settled” a while back, and the answers are mostly consistent decade after decade. The omissions and necessary elaborations have not changed that much.
====================================================
Curious, thermodynamic models that are consistently not reflected in real world observations “are mostly CONSISTENT decade after decade.”
Perhaps Emerson would have something to say on that, “A foolish consistency is the hobgoblin of small minds”

1DandyTroll
March 8, 2012 2:21 pm

Hudson: Stop your grinnin’ and drop your linen!
I found it, the reason for the time warp.
The year was 1979, electricity was in the air, oil and coal were again abundant after all, economies were booming, the ice age never came, and probably a department or two were downsized. But did this stop the industrious climate-entrepreneurs-to-be? After all, they felt they were on the right track with the ice age; they just needed enough computer power to really push those numbers up and up.
One day, “whom”ever it was saw the ad that would change everything – so powerful it ripped the very fabric of time and space – and it read:
*Fast and powerful
*Systems grow with your needs
*Easy to operate
*Affordable
http://www.trs-80.com/covers/cat-rs-trs80model2(1979).jpg
But, alas, they needed good, predictable software to go with it, to really complete the two into one wondrous digital soul.
As the story is told, a group of people, not a whole lot different from the ice age mongers themselves – in fact, for all intents and purposes, they were literally in the same field, like soul friends even – invented what was to become the one side that would complete the machine into one coded super soul, enter: A S T R O L A B E ‘ s A S T R O L O G Y S O F T W A R E ! ! !
The rest is, like they say, a history worthy of a recap: By 1988, with their newly acquired, almost too expensive, “64-bit wink wink nudge nudge” system labeled C-64, and with the permanent UN-based memory module installed, they’ve been literally PEEKing and POKEing the IPCC-ROM ever since.
(No pun intended to the good folks of astrolabe or the dandy Tandy.)

IAmDigitap
March 8, 2012 2:24 pm

[SNIP: Please address the issues. There is no point in provoking him. -REP]

March 8, 2012 2:27 pm

O H Dahlsveen says:
March 8, 2012 at 1:14 pm
I do not believe the theory of the Atmospheric Greenhouse Effect (AGHE) to be a “misnomer”. I think it is just plain wrong
_______________________________________
Basically you are correct. There can be no transfer of thermal energy from the cooler atmosphere to the warmer surface by any physical process, radiation or otherwise. However, we have to acknowledge that radiation from the atmosphere does slow the rate of radiative energy transfer from the surface to the atmosphere. This is why it can be warmer on moist nights. However, on balance, other processes, mostly evaporation and diffusion (conduction) will make up for any reduction in radiative flux, because of the stabilising effect of the massive store of thermal energy beneath the outer crust, which is not due to the very slow rate of terrestrial energy flow.
There is also a cooling effect due to water vapour and CO2 etc as these absorb downwelling IR radiation from the Sun and send upward backradiation to space.
The temperature gradient in the atmosphere is determined by the mass of the atmosphere and the acceleration due to gravity, both close enough to being constants. All the claims about 255K are based on the false assumption that the surface is anything like a blackbody. It’s not, because it’s not insulated from losses by diffusion and evaporation. Less than half the energy exits by radiation. So, not only is that 33 degree figure based on a totally incorrect 255K figure, but it also ignores the fact that there is an adiabatic lapse rate that has nothing to do with backradiation.
This is a very brief summary of my peer-reviewed paper being published next week.
[Please take it elsewhere, Doug. Your ideas are not welcome here, the thread is about something completely different. If you want to discuss your fantasies about the climate, please take it to Tallblokes. It is not welcome here. -w]

Septic Matthew/Matthew R Marler
March 8, 2012 2:43 pm

Willis: You’re missing my point. You seem to think that “the answers are consistent decade after decade” means something other than that they are asking the wrong question. My point is, the answers are just the same now as they were in 1979.
Your statement that “The answers are consistent decade after decade” is merely another way of saying what I said, that there has been very little progress in the field for a third of a century.
w.
PS—Ray Pierrehumbert is one of the most committed of the AGW alarmists, and one of the people behind RealClimate. Believe anything he says at your own peril.

I did not intend to defend the consistency of the thermodynamics-based forecasts, merely to account for why they have remained consistent. Pierrehumbert’s book “Principles of Planetary Climate” should be read, in my opinion, by everyone who wishes to understand the skeptics’ case, because he articulates it so well, but in passing and not intending to support skepticism. Repeatedly he draws attention to omitted details and the inaccuracies of mathematical approximations to real-world relationships. I have pointed out a few of these to readers of Climate Etc. The book is a good example of much rich science which is not quite accurate enough and complete enough to substantiate long term predictions.
A good complementary book is “Dynamic Analysis of Weather and Climate” by Marcel Leroux. It’s full of presentations of energy flows such as were presented in the Georgia Tech paper that you critiqued. Denying the intellectual content of Pierrehumbert’s book because he is a warmer is a mistake, in my opinion. In my days as “Septic Matthew” I had a few good interchanges with him at RealClimate. I respect his work, though I think his conclusion of CO2-induced future warming is inadequately supported by the evidence.
Another example of a complex science full of partial knowledge and mathematical approximations over several time scales and spatial scales is brain science. Like atmospheric science, of which I take climate science to be a subset, I think it is an example of an area of research in need of much more study, but no new “paradigm” in the Kuhnian sense (I say “Kuhnian” sense because according to Kuhn new paradigms are rare; nowadays, even the invention of a new measuring instrument may be called a “paradigm shift”.) I think that there has been much progress in the field of atmospheric science, but the equilibrium thermodynamic arguments are about the same as ever. It is a fundamentally important question whether cloud changes consequent on CO2 or temperature increase will provide net negative or net positive feedback (e.g. net retention of warmth at night, net increased albedo in daytime, in most but not all of the earth), but no new overarching Paradigm or Metatheory will be necessary to understand it.

braddles
March 8, 2012 2:53 pm

Imagine that you wanted to change the world, and climate alarmism offered a way. What prediction of future temperature change would suit best for this purpose? The prediction, with an average and a range, would have to:
– have an average value not too extreme, but high enough to create disaster.
– have a lower limit just barely consistent with observations (with a bit of fudging), but not too low to worry about.
– have an upper limit that is not patently absurd.
I suggest that a prediction of 3 degrees plus or minus 1.5 would be perfect to fit these criteria. Is this why the prediction has not changed in 30 years?

Septic Matthew/Matthew R Marler
March 8, 2012 2:54 pm

Richard S. Courtney: Extreme scepticism would confront anybody who claimed to have constructed a computer model of the human brain that could predict brain behaviour,
I am glad that you mentioned brain science.

tallbloke
March 8, 2012 2:59 pm

The Pompous Git says:
March 8, 2012 at 10:55 am
The measured decrease in evapotranspiration over the last 50 yr is supposed to be due to an increase in cloud cover. The increase in cloud cover was supposedly confirmed by measurement of earthshine from the moon.

What a mine of disinformation you are. Go read Palle et al’s work and report back.

u.k.(us)
March 8, 2012 3:21 pm

So, after 30+ years of research, all that has been accomplished is the creation of a well funded niche that is currently being exploited for political/industrial/bureaucratic gain.
Funded by taxpayers (voters), who must fight to see how their money is being spent.
!*+#%$……., but rant/.

Jurgen
March 8, 2012 3:30 pm

As for ideas “why” I was pondering this.
Any model construction presupposes some mechanisms and relations, and being a “model” (a more or less complex set of interrelations), its testability, and by implication its adaptability and flexibility for improvement, are hampered from the start.
This line of thought goes beyond your theory about the linear surface air temperature forcing assumption as a possible cause, as it states doubt about the choice of starting with a model in the first place.
This may sound like too vague an argument to be of practical value, so I’ll clarify a bit.
In astronomy there is the well-known 3 body problem – I’ll cite what scholarpedia says about it:
http://www.scholarpedia.org/article/Three_body_problem
“While the two-body problem is integrable and its solutions completely understood (see [2],[AKN],[Al],[BP]), solutions of the three-body problem may be of an arbitrary complexity and are very far from being completely understood. ”
Well, there you have it. Here all you have are “just” three bodies with known physical properties, and their initial movements are also known, but still, even at this very basic level and in the ideal theoretical situation of no other influences but their own attraction, solutions “are very far from being completely understood”.
Mind you, there is an intrinsic problem here, and computers have nothing to do with it.
There seems to be a pretty naive assumption that computers by increasing their capacity may cross a boundary beyond which they become capable of solving a problem. Well you may try to square the circle with one or with a thousand million supercomputers – mathematically it is just not possible. It’s an intrinsic problem, and calculation power has nothing to do with it.
Now you can argue that simulation is not “problem solving” and that with simulation capacity matters. Well it matters if the model is a good one and a practical one, like with simulating a jet-engine. Here a model and a simulation are successfully applied. Well, an engineer knows his engine up to the tiniest part in and out. That is what makes this possible.
Compared with a model of a jet-engine, a climate model is a wild gamble, a complete shot in the dark. Climate is an open system with many variables and non-linear relations you just make guesses about.
You can summarize the parallel in both your argument and mine: to start with some assumed knowledge about climate is the very thing that gets in the way of finding answers. You think you know what you are looking for but actually you don’t, so you will never find it.
As for a solution of this paradox I would completely turn around my strategy. Accept that you know little, if anything, about how climate works. Just start with the data you have. There are a lot of data out there about many subjects related to climate. And start what’s called “data-mining”.
It’s a challenge (see e.g. http://www.scribd.com/doc/33728981/Applicability-of-Data-Mining-Techniques-for-Climate-Prediction-%E2%80%93-A-Survey-Approach) – but of course it is – we’re talking climate here. It won’t give spectacular answers to the public soon, but in my opinion it’s the only sensible approach.
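A minimal sketch of the three-body point above, for anyone who wants to see it rather than take it on trust. Everything here (masses, initial positions, velocities, step size) is made up for illustration and is not taken from any paper: two integrations that differ by one part in a billion in a single coordinate drift apart, and extra computing power does not remove that sensitivity.

```python
# Illustrative only: crude leapfrog integration of a planar three-body system,
# run twice with a 1e-9 difference in one starting coordinate.
import numpy as np

G = 1.0
m = np.array([1.0, 1.0, 1.0])           # three equal masses, arbitrary units

def accelerations(pos):
    """Pairwise Newtonian gravitational accelerations for 3 bodies in 2D."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * m[j] * r / np.linalg.norm(r) ** 3
    return acc

def integrate(pos, vel, dt=0.001, steps=20000):
    """Velocity-Verlet (leapfrog) integration; returns final positions."""
    acc = accelerations(pos)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos)
        vel += 0.5 * dt * acc
    return pos

# Arbitrary, non-special starting configuration
pos0 = np.array([[ 1.0, 0.0], [-1.0, 0.0], [0.0, 0.5]])
vel0 = np.array([[ 0.0, 0.4], [ 0.0, -0.4], [0.3, 0.0]])

run_a = integrate(pos0.copy(), vel0.copy())
pos0[2, 0] += 1e-9                       # the only difference between the two runs
run_b = integrate(pos0.copy(), vel0.copy())

print("final separation between the two runs:", np.linalg.norm(run_a - run_b))
```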

March 8, 2012 3:46 pm

tallbloke said @ March 8, 2012 at 2:59 pm

The Pompous Git says:
March 8, 2012 at 10:55 am
The measured decrease in evapotranspiration over the last 50 yr is supposed to be due to an increase in cloud cover. The increase in cloud cover was supposedly confirmed by measurement of earthshine from the moon.

What a mine of disinformation you are. Go read Palle et al’s work and report back.

tallbloke, here’s a link: http://www.mindfully.org/Air/2002/Decreased-Pan-Evaporation1nov02.htm
There is no mention of Palle et al in that paper.

March 8, 2012 3:51 pm

DesertYote said @ March 8, 2012 at 11:41 am

I think your last statement is about as close to the truth as one can get. Political considerations trumping the pursuit of truth.
BTW, my eyesight is very bad. My latest glasses are amazing. Progressive trifocals made from a very high IF material with anti-reflective coating. Every pair of glasses I get is more advanced than the last pair, and this has been the case for 30 years.

My latest pair, acquired on Monday this week, is similar except for the lens coating. They have a stainless steel frame and weigh 21 gm. A pair with glass lenses from 40 years ago, before I became presbyopic, weighs over 50 gm. Gits like technological progress 🙂

Matt G
March 8, 2012 3:53 pm

The Pompous Git says:
March 8, 2012 at 10:55 am
Tallbloke is right based on observed satellite data with trends in global cloud and temperature. Global cloud levels declined since around 1983 until 2000 then remained stable for a period after, until increasing recently. It is a disgrace that this has been ignored by alarmist climate scientists recently. This global cloud trend has nothing to do with CO2 and is so obvious it has caused the recent warming.

Richard G
March 8, 2012 4:20 pm

Willis, your jerkitudinous mentation is verisimilitudinously awesome.
KUTGW.

March 8, 2012 4:36 pm

the guesstimated range of climate sensitivity hasn’t narrowed in any significant fashion. It’s still right around 3 ± 1.5°C per double of CO2, just like it was in 1979.
_________________________________________
Indeed, and they still don’t recognise, even though it’s been pointed out numerous times, that the sensitivity calculation is based on completely fabricated “physics” which assumes, firstly, that the Earth’s surface only loses thermal energy by radiation – hence their 255K figure – and then they say that 33 degrees is due to water vapour and trace gases, when in fact it’s not 33 degrees at all (because the 255K is wrong), and whatever it should be is due to the acceleration due to gravity, which determines the adiabatic lapse rate. Then, to cap it off, they put back evaporation and diffusion (wrongly named convection or thermals) into their energy diagrams, thus admitting their mistake in assuming that the surface only radiates like a perfectly insulated blackbody does.
They also neglect the cooling effect due to absorption of solar radiation in the SW IR range, followed by upwelling “backradiation” to space. This SW IR has more energy per photon than does the LW IR from the surface. And backradiation to space does prevent warming, just like reflection, whereas backradiation downwards cannot transfer thermal energy to the surface – it can only slow the radiative component of surface cooling, not the evaporative or diffusion processes.
Hence there is absolutely no basis whatsoever for any warming sensitivity when, in fact, carbon dioxide almost certainly has a very slight net cooling effect.
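For reference, here is the standard textbook arithmetic behind the 255 K and 33-degree figures that the comment above disputes: the effective radiating temperature from the Stefan–Boltzmann law, using conventional round values for the solar constant and planetary albedo. The sketch only shows where the numbers come from; whether this is the right way to frame surface temperature is exactly what is being argued.

```python
# Standard effective-temperature arithmetic behind the oft-quoted 255 K / 33 C figures.
# All inputs are conventional round values, not measurements made here.
sigma  = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
S      = 1361.0       # solar constant, W m^-2
albedo = 0.30         # conventional planetary albedo
T_surf = 288.0        # rough global mean surface temperature, K

absorbed = S * (1 - albedo) / 4          # globally averaged absorbed flux
T_eff = (absorbed / sigma) ** 0.25       # blackbody temperature emitting that flux

print(f"effective radiating temperature: {T_eff:.0f} K")          # ~255 K
print(f"surface minus effective:         {T_surf - T_eff:.0f} K")  # ~33 K
```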

March 8, 2012 4:53 pm

Matt G said @ March 8, 2012 at 3:53 pm

Tallbloke is right based on observed satellite data with trends in global cloud and temperature. Global cloud levels declined since around 1983 until 2000 then remained stable for a period after, until increasing recently. It is a disgrace that this has been ignored by alarmist climate scientsts recently. This global cloud trend has nothing to do with CO2 and is so obvious it has caused the recent warming.

Pan evaporation rates declined from circa 1950 onwards. Changes in global cloud cover from 1983 onward could have had no effect on pan evaporation in the 60s & 70s. Nor could they be responsible for the recent warming that began in the mid-70s. If you wish to argue with these points you will have to do better I’m afraid. Tallbloke referred to Palle’s work on pan evaporation rates, but I have been unable to find anything by him from an admittedly cursory search.
You might want to refer to http://www.mindfully.org/Air/2002/Decreased-Pan-Evaporation1nov02.htm as part of my “mine of disinformation”.

johanna
March 8, 2012 5:25 pm

Further to my previous post about working from the known to the unknown (which has been reversed in contemporary climate science), Anthony’s Surfacestations project, Willis’ investigation of Argo, Geoff Sherrington’s work on the history of temperature stations and stats in Australia, and many others who deserve mention are surely the foundation of science. It’s not as sexy as playing with models and fancy computers, it doesn’t provide instant answers – but it is actually far more interesting and intellectually challenging.
Modern climate ‘science’ suffers from a top-down model being imposed on real data. Proper science is where we try to develop hypotheses from what is observed, or even from deductions from what has been observed.
As with the quest for the ‘cure’ for ‘cancer’, the question was wrongly framed in the first place.

March 8, 2012 5:30 pm

If —— energy in = energy out——- then wisdom in, — must = wisdom out, so who the heck knows anything?

March 8, 2012 5:49 pm

For me, there is only one answer. The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.
I’ve floated the idea that the forcings model/theory is wrong, because negative feedbacks operate on much shorter timescales than the forcings. Therefore, forcings have no effect on the (net) climate equilibrium.
As these feedbacks are water based, what drives climate change is factors that affect the phase changes of water – aerosols, GCRs, possibly ozone. Affecting cloud formation and type, precipitation and snow/ice melt.
Willis, you are too modest. That was an excellent insightful analysis.

March 8, 2012 6:00 pm

I should have said,
Forcings have no effect on the (net) climate equilibrium, over some timescale. Which doesn’t preclude natural cycles. I am thinking of natural ocean cycles like ENSO.

markx
March 8, 2012 7:10 pm

Willis hits the nail directly and firmly on the head here.
The whole thing started with a simple hypothesis.
And almost everything that has been done since has been based on the premise that the hypothesis is true.

michael hart
March 8, 2012 7:42 pm

Richard Courtney,
I quite agree with your technical analysis, and am glad of it. My original post was more focused on the human elements of learning and performing science. And also not wanting to be (too) harsh on (too) many people I’ve never met. At risk of insulting some, I’ll elaborate.
I think it quite possible for many students to go through a modern scientific education without being confronted with the failures of their results, theories or understandings. I know it is all too easy for a scientist to talk-the-talk and win arguments with their peers. Without external input this can happen collectively, and a whole group can become convinced their arguments are correct. And for long periods of time. So group-think can occur, especially in small incestuous research areas.
But in, for example, chemistry laboratories, the external input of real contradictory data can arrive very quickly and unpleasantly. This makes it much harder to ignore, so the human learning and thinking process may be a very different one. If the discipline is a large one, then there are more likely to be significant dissenters to a dominant paradigm who cannot be silenced. I would further add that we are probably still in the first generation where computer modelling is so widely and freely available to so many. I have experienced it to be a very powerful and seductive tool. Scientifically, it is also a very dangerous one.
Michael Palmer:
No, I’m not bitter about synthetic chemistry. It was tongue-in-cheek. As many unemployed or “under paid” scientists will admit when they are candid with themselves, they do it because they like it.

Jeff D
March 8, 2012 8:27 pm

I think the “team” really does need the new expensive quantum computers. It’s the only thing that will allow them to be right and wrong at the same time……
Hmm, never mind. Why waste the money? We have that already.

March 8, 2012 9:19 pm

The conclusions of modern climatology are based upon a number of misunderstandings. One of the more fundamental of these concerns the significance of the underlying statistical population to a scientific inquiry. In particular, in the absence of a statistical population an inquiry cannot be a “scientific” inquiry, for the claims that are made by the theories/models are not susceptible to being tested. Today, as in the past, no statistical population underlies the IPCC’s claim of CAGW. To fail to identify the underlying statistical population is to ensure disaster for an inquiry, for that inquiry is then not truly “scientific.”

March 8, 2012 9:49 pm

Willis wrote “For me, there is only one answer. The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.”
Willis, the issue is very simple. Computer climate scientists try to solve the climate problem with a methodology that starts from the known first principles of physics. This methodology simply does not work for complex systems.
Complex systems need to be studied by using phenomenological models that look at how the whole system behaves and try to model its dynamics directly. See here
http://en.wikipedia.org/wiki/Phenomenology_(science)
This philosophy is the one adopted in my studies.
In the 1970s the issue was addressed in numerous disciplines, from economics to medicine, and every discipline dealing with complex systems understood the problem and progressed by developing phenomenological approaches together with other, more analytical approaches.
Unfortunately, computer climate scientists never got it, and they got stuck.
And the discipline never developed.
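To make “phenomenological” concrete, here is a minimal illustration of that style of analysis: fit a simple empirical form (a linear trend plus one assumed ~60-year harmonic) to a series by ordinary least squares. The data are synthetic and the 60-year period is simply assumed; this illustrates the general approach, not Scafetta’s actual model.

```python
# Minimal "phenomenological" fit: empirical trend + one harmonic, no physics derived
# from first principles. Synthetic data; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2012)
temp = (0.005 * (years - 1880)                                   # synthetic trend
        + 0.1 * np.sin(2 * np.pi * (years - 1880) / 60.0)        # synthetic 60-yr cycle
        + 0.05 * rng.standard_normal(years.size))                # noise

P = 60.0                                   # assumed cycle period, years
X = np.column_stack([
    np.ones_like(years, dtype=float),      # intercept
    years - years.mean(),                  # linear trend
    np.sin(2 * np.pi * years / P),         # harmonic, sine part
    np.cos(2 * np.pi * years / P),         # harmonic, cosine part
])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

print(f"fitted trend: {coef[1] * 100:.2f} per century")
print(f"fitted 60-yr cycle amplitude: {np.hypot(coef[2], coef[3]):.2f}")
```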

March 8, 2012 9:51 pm

The above issues are addressed extensively in my book
“Disrupted Networks: From Physics to Climate Change”
http://www.amazon.com/Disrupted-Networks-Physics-Nonlinear-Phenomena/dp/9814304301/ref=sr_1_1?ie=UTF8&qid=1331272230&sr=8-1

Len
March 8, 2012 10:19 pm

Great post Willis, again.
An analogy for the lack of progress in a science that starts with the assumption that we know how something works, and then wastes time, money, and lives trying to prove what we assumed, is Lysenkoism. Look what this did to the Soviet Union’s genetics and biology research.
Furthermore, Lysenko pursued the research because Stalin would give him money, power, and glory. Sounds much like the CAGW assumptions today and their lust for power, money, and glory.

March 8, 2012 11:11 pm

This global cloud trend has nothing to do with CO2 and is so obvious it has caused the recent warming.
Indeed,
Climate science has completely ignored the fact that reduced anthropogenic aerosol seeding has reduced cloud reflectivity (as well as reducing cloud amounts), while at the same time promoting schemes to artificially increase cloud reflectivity.
http://en.wikipedia.org/wiki/Cloud_reflectivity_modification
I don’t know whether they are blinded by the dogma or the money.

March 8, 2012 11:40 pm

Reblogged this on OnYourMind and commented:
Nice Article

March 9, 2012 12:25 am

In the world to this day, all the inventors and all the dreamers are looking for positive feedback in everything, so that we could have perpetual motion and unlimited free power. Alas and alack, there is no free lunch anywhere, including the climate.

tallbloke
March 9, 2012 12:27 am

The Pompous Git says:
March 8, 2012 at 4:53 pm
Tallbloke referred to Palle’s work on pan evaporation rates, but I have been unable to find anything by him from an admittedly cursory search.

Incorrect. I referred to Palle et al in relation to your mention of the Earthshine measurements, which they carried out. They show that cloud increased after 1997.5, then levelled out. Their data, where it overlaps, is consistent with the findings of the ISCCP data which shows the decrease in low tropical cloud cover from 1983-1998 Matt referred to.
As Soon et al and Doug Proctor’s post at my site I linked earlier show, the correlation between sunshine hours and surface temperature is far closer than that between CO2 levels and temperature.
So far as I can see, your contention that cloud cover change can’t be responsible for the late C20th warming is simply unsupported argument by assertion. The relationship between evaporation rates and cloud cover is complex and poorly understood. Yet you seem to be implying that your unspecified reference to an uncited study showing a reduction in evapotranspiration means the cloud cover reduction in the tropics measured and reported by the ISCCP didn’t happen.
Where’s the beef?

Watchman
March 9, 2012 2:30 am

I wonder whether the lack of progress here is best explained by economics? In simple terms, there has been no market in climate science worth mentioning – all the money has gone to those who accept the tenets of the case outlined in the 1979 NAS report. There has been no reasonable funding of other viewpoints (not to mention that the adherents of orthodoxy have tried to close down other viewpoints through non-scientific means), and therefore no particular reason for career scientists to pursue these views – or incentive to challenge the orthodoxy.
As in any field, a lack of competition for ideas means they do not develop – as any look at the papers of e.g. Professor Mann would indicate, most of what is published as important is in fact either tinkering with existing models or finding new justifications for them – it is not raw science, as this is not what is required for funding. There is no competition, therefore there is no incentive to actually challenge the science.
In effect, this is what happens when a small field (probably fewer than 100 scientists in 1979 – and no specialised departments) receives loads of money, so there is no shortage. The challenge to the orthodoxy was only conducted by outsiders, by non-careerists who do science for love not money and, now, by an increasing number of new entrants to the field (note how few early-career academics are prominent supporters of the orthodoxy) caused by the increase in funds and competing for their share of the money (it is now a crowded field).
In short, the lack of a free market in ideas stalled any development of ideas. A wonderful case study.

March 9, 2012 2:31 am

Willis, Philip, Tallbloke and others
As I read your posts above it just seems there is so much more that climatologists overlook, and that’s why no progress is made. Maybe some do privately, but don’t dare to speak up. I guess I’m lucky to be able to study these issues for literally thousands of hours in my semi-retirement and not fear the sack or anything relating to my reputation, or whatever it is that holds back progress.
Basically, it’s one thing to pass a degree in physics, but it’s far more complex to really think through the physics and relate it to the atmosphere. In other posts today I’ve given something of an idea of what I’ve written in much more detail, which will be available Monday or Tuesday.
You are looking at feedbacks and the like up in the atmosphere, fine, but you need to come to grips with what has been proven computationally (and, I believe I can say, theoretically and empirically): that radiated energy from a cool atmosphere is not converted to thermal energy in the surface. All it does is slow the radiative transfer of energy from the surface, but not the evaporative transfer, nor the diffusion (conduction, if you like) followed by convection.
But even that’s not the end of the story. The 255K and that 33 degrees are wrong and anyway have absolutely nothing to do with sensitivity. The upward backradiation to space of the solar SW IR (captured by WV and CO2) has a cooling effect, and, after all, there’s more energy in SW IR photons than LW IR photons.
But all these considerations are totally eclipsed by the 1,000 year and 60 year natural cycles (not just ENSO cycles which are a result of climate change, not a cause) and by the thermal inertia of the massive amount of thermal energy in the inner crust, mantle and core which we know is retained well by the crust because the terrestrial flow is so slow. It could take hundreds of thousands of years to change the gradient of the temperature plot all the way from the core to the surface. So there is a huge stabilising effect brought about by very steady temperatures just 1Km underground for example.
I can’t condense 14 pages or so here, but I’ll get back when it’s available.

observa
March 9, 2012 3:54 am

Oh I get it now! When our current scientific Climate Commissioner, Tim Flannery said “Within this century the concept of the strong Gaia will actually become physically manifest. This planet, this Gaia, will have acquired a brain and a nervous system.”( in an ABC broadcast on the first day of 2011) he was completely misquoted. We all thought he was channelling Lovelock when he was actually referring to supercomputer ‘Gaea’.
As you were ladies and gentlemen because all will be revealed in Gaea’s good time, although perhaps it just needs some offering of gingerbread or an ice cream sandwich or some such to help the Android manifest. What about an Apple for the great teacher?

Gail Combs
March 9, 2012 4:07 am

The Pompous Git, Matt G, Tallbloke
For what it is worth here are the earthshine measurements showing an increase in cloud cover and changes in albedo from 1998 to 2008. http://www.bbso.njit.edu/Research/EarthShine/
The albedo has increased over that decade.
Earthshine variations by month: http://science.nasa.gov/science-news/science-at-nasa/2002/12apr_earthshine/
The Earthshine Project: Measuring the earth’s albedo. Latest results
Palle, E.; Montanes Rodriguez, P.; Goode, P. R.; Koonin, S. E.; Qiu, J.
EGS – AGU – EUG Joint Assembly, Abstracts from the meeting held in Nice, France, 6 – 11 April 2003, abstract #7730
ABSTRACT
“….. During the past 4 years, a significant increasing trend in the averaged Earth’s reflectance has been detected in the observational data. More scarce data from 1994 and 1995 allow us to take a longer-term look at the Earth’s albedo variability and the possibility of a response of this parameter to solar activity is discussed. Simultaneously, spectroscopic observations of the earthshine have been carried out at Palomar Observatory. First results and comparison between the spectral and photometric observations are also being presented….” http://adsabs.harvard.edu/abs/2003EAEJA…..7730P

March 9, 2012 4:58 am

Reblogged this on gottadobetterthanthis and commented:
Since Willis wrote it, it is obviously worth reading. Also, it goes with my recent comments on perspective. 33 years is a long time in our world, in our lives. It is amazing to think how far computers have advanced in my lifetime. It is discouraging that software stays ahead of the computing power. (My poor eight-year-old machine can hardly load typical web pages anymore.) So, I post a bit of perspective with regard to human understanding of the atmosphere of our planet. The bottom line is we may not even have started thinking about it properly yet. I do think we underestimate the effects associated with living organisms. Yet our pride makes us overestimate the effects we humans have, and we tend to grossly overestimate how much effect we can determine to have.

Myrrh
March 9, 2012 5:45 am

Matt G says:
March 8, 2012 at 3:53 pm
The Pompous Git says:
March 8, 2012 at 10:55 am
Tallbloke is right based on observed satellite data with trends in global cloud and temperature. Global cloud levels declined since around 1983 until 2000 then remained stable for a period after, until increasing recently. It is a disgrace that this has been ignored by alarmist climate scientsts recently. This global cloud trend has nothing to do with CO2 and is so obvious it has caused the recent warming.
So if they declined between 1983 and 2000, why didn’t temperatures plummet? We kept being told it was always ‘hotter than’ throughout those 20-odd years.
Tallbloke’s cloud cover increase is when temps began falling – so how is this proof that increased cloud cover is the cause of warming?
I do wish y’all would come back to basic trad science..
.. do the models take into consideration why clouds form?
Because warmer temps cause water to evaporate faster, and evaporated fluid liquid water is fluid gas water vapour (water is always evaporating (triple point), and it is lighter than air); warmer water vapour rises and cools in the colder heights, condensing out into fluid liquid water or ice – releasing its heat. Carbon dioxide is fully part of this WATER CYCLE that has been completely excised from the ‘energy budget’ the models work to; all pure clean rain is carbonic acid, formed because real gas molecules of water and carbon dioxide have an irresistible attraction to each other.
Clouds don’t just appear magically, except in the empty-space imaginary ideal-gas gravity-less atmosphere the models inhabit. They form because water vapour takes away heat from the surface – think deserts; without the water cycle the Earth would be 67°C – 52°C hotter.
The meme ‘greenhouse gases warm the Earth’ is sleight of hand, there never was a pea under the thimble.
What was the rainfall like in the period when there was less cloud cover? Were there more storms getting rid of the heat? Is the more cloud cover since then when temperatures have been dropping because it’s not hot enough to give more precipitation?
The reason there has been no progress, as Willis has amusingly shown, is because there is not supposed to be; this is a scientific fraud complete with its own fictional fisics, not real-world science.
What has improved in line with technical progress, is the narrative, to practically world domination.

tallbloke
March 9, 2012 6:11 am

Gail Combs says:
March 9, 2012 at 4:07 am
The Earthshine Project: Measuring the earth’s albedo. Latest results
Palle, E.; Montanes Rodriguez, P.; Goode, P. R.; Koonin, S. E.; Qiu, J.
EGS – AGU – EUG Joint Assembly, Abstracts from the meeting held in Nice, France, 6 – 11 April 2003, abstract #7730
ABSTRACT
“….. During the past 4 years, a significant increasing trend in the averaged Earth’s reflectance has been detected in the observational data. More scarce data from 1994 and 1995 allow us to take a longer-term look at the Earth’s albedo variability and the possibility of a response of this parameter to solar activity is discussed. Simultaneously, spectroscopic observations of the earthshine have been carried out at Palomar Observatory. First results and comparison between the spectral and photometric observations are also being presented….” http://adsabs.harvard.edu/abs/2003EAEJA…..7730P

Thanks for the reference. After I worked out the URL had been munged by wordpress (too many full stops) I got to the page full of hope, only to be confronted with:
Fulltext Article not available
Shame.

tallbloke
March 9, 2012 8:09 am

Myrrh says:
March 9, 2012 at 5:45 am
Tallbloke’s cloud cover increase is when temps began falling – so how is this proof that increased cloud cover is cause of warming?
I do wish y’all would come back to basic trad science..

The ability to miss the point is strong in some…
The overall cloud feedback is negative as Spencer and Lindzen have shown with real world data. The warming from ~1980 to ~1998 was due to less cloud/more sunshine at the surface. The cooling since is due to cloud increase.

March 9, 2012 9:39 am

tallbloke said @ March 9, 2012 at 12:27 am

The Pompous Git says:
March 8, 2012 at 4:53 pm
Tallbloke referred to Palle’s work on pan evaporation rates, but I have been unable to find anything by him from an admittedly cursory search.

Incorrect. I referred to Palle et al in relation to your mention of the Earthshine measurements, which they carried out. They show that cloud increased after 1997.5, then levelled out. Their data, where it overlaps, is consistent with the findings of the ISCCP data which shows the decrease in low tropical cloud cover from 1983-1998 Matt referred to.
As Soon et al and Doug Proctor’s post at my site I linked earlier show, the correlation between sunshine hours and surface temperature is far closer than that between CO2 levels and temperature.
So far as I can see, your contention that cloud cover change can’t be responsible for the late C20th warming is simply unsupported argument by assertion. The relationship between evaporation rates and cloud cover is complex and poorly understood. Yet you seem to be implying that your unspecified reference to an uncited study showing a reduction in evapotranspiration means the cloud cover reduction in the tropics measured and reported by the ISCCP didn’t happen.
Where’s the beef?

My original response was:

The measured decrease in evapotranspiration over the last 50 yr is supposed to be due to an increase in cloud cover. The increase in cloud cover was supposedly confirmed by measurement of earthshine from the moon.

[emphasis added]
Supposed means “believed uncertainly”. I was hoping for an explanation to resolve the apparent contradiction. Your response was pretty unresponsive and definitely rude.
It is not my contention that evapotranspiration rates have decreased over the last half century. I apologise for linking to an “unspecified reference to an uncited study”; here is the full reference where you can pay to read the content I linked to:
The Cause of Decreased Pan Evaporation over the Past 50 Years, Michael L. Roderick and Graham D. Farquhar, Science 15 November 2002: Vol. 298 no. 5597 pp. 1410-1411 DOI: 10.1126/science.1075390-a.
Also note that while your original post to which I responded linked to your website, it timed out as happens quite often when one lives in the nether region of the planet.
I never “[contended] that cloud cover change can’t be responsible for the late C20th warming”, I merely pointed out that this did not appear to be reconcilable with the work of Roderick & Farquhar (not to mention many other agricultural researchers).
Most of the beef are eating the grass I grow, though some of it is in the freezer.

March 9, 2012 9:46 am

Gail Combs said @ March 9, 2012 at 4:07 am

The Pompous Git, Matt G, Tallbloke
For what it is worth here are the earthshine measurements showing an increase in cloud cover and changes in albedo from 1998 to 2008. http://www.bbso.njit.edu/Research/EarthShine/
The albedo has increased over that decade.

Thanks Gail. Obviously I was misinformed: the earthshine project cannot have confirmed the insolation variation invoked to explain the pan evaporation paradox, because its period of coverage is far too short and recent.

Gail Combs
March 9, 2012 10:24 am

Tallbloke, on references to: The Earthshine Project: Measuring the earth’s albedo……
I have this PDF: http://bbso.njit.edu/Research/EarthShine/literature/Palle_etal_2006_EOS.pdf
Can Earth’s Albedo and Surface Temperatures Increase Together?
and this research Article: http://www.hindawi.com/journals/aa/2010/963650/
Automated Observations of the Earthshine
and this PDF: http://bbso.njit.edu/Research/EarthShine/literature/Palle_etal_2008_JGR.pdf
Inter-annual variations in Earth’s reflectance 1999-2007.
Hope that helps.

March 9, 2012 10:26 am

Miscellaneous thought #7,853
The pan evaporation decrease is supposedly the result of decreased insolation in turn caused by increased cloud cover, and reduced windspeed. Pan evaporation decrease is observed in both hemispheres of the planet. Global warming OTOH is confined to the Northern hemisphere. WUWT?

Matt G
March 9, 2012 12:14 pm

The Pompous Git says:
March 8, 2012 at 4:53 pm
Pan evaporation rates declined from circa 1950 onwards. Changes in global cloud cover from 1983 onward could have had no effect on pan evaporation in the 60s & 70s. Nor could they be responsible for the recent warming that began in the mid-70s. If you wish to argue with these points you will have to do better I’m afraid. Tallbloke referred to Palle’s work on pan evaporation rates, but I have been unable to find anything by him from an admittedly cursory search.
REPLY
The recent warming didn’t start properly until 1980; just before that it had warmed slightly from a much cooler short-term period, but was no different from years in the early part of the same decade (the 1970s).
http://www.woodfortrees.org/plot/hadcrut3gl/from:1998/plot/hadsst2gl/from:1998/plot/hadcrut3gl/from:1998/trend/plot/hadsst2gl/from:1998/trend/plot/hadcrut3gl/from:1980/to:1998/plot/hadcrut3gl/from:1980/to:1998/trend/plot/hadcrut3gl/from:1934/to:1980/plot/hadcrut3gl/from:1934/to:1980/trend/plot/esrl-co2/from:1955/normalise
Unfortunately there is generally no satellite data before 1983 representing global cloud changes, so we can only compare from 1983 onwards. Global cloud cover changes before 1983 will have had some effect on pan evaporation rates, because the two relate to each other in some way. We will just never know from observations what the global cloud changes were back then, because we have no data for it. Therefore global cloud cover in the 1960s and 1970s could have had an effect on pan evaporation rates.

Myrrh
March 9, 2012 12:24 pm

tallbloke says:
March 9, 2012 at 8:09 am
Myrrh says:
March 9, 2012 at 5:45 am
Tallbloke’s cloud cover increase is when temps began falling – so how is this proof that increased cloud cover is cause of warming?
I do wish y’all would come back to basic trad science..
The ability to miss the point is strong in some…
Why don’t you try concentrating a little more?
I was addressing this:
=============
Matt G says:
March 8, 2012 at 3:53 pm
The Pompous Git says:
March 8, 2012 at 10:55 am
Tallbloke is right based on observed satellite data with trends in global cloud and temperature. Global cloud levels declined since around 1983 until 2000 then remained stable for a period after, until increasing recently. It is a disgrace that this has been ignored by alarmist climate scientsts recently. This global cloud trend has nothing to do with CO2 and is so obvious it has caused the recent warming.
=============
He was saying you were right, and ended by saying “it has caused the recent warming”, which links to the “increasing recently” of cloud cover. So if you have a gripe, take it up with him. Or is mangling my posts to make them appear something they’re not part of your remit?
The overall cloud feedback is negative as Spencer and Lindzen have shown with real world data. The warming from ~1980 to ~1998 was due to less cloud/more sunshine at the surface. The cooling since is due to cloud increase.
And my point again: less cloud during the warming years could be because the clouds had precipitated out.
Real world physics, not the modelled imaginary atmosphere where clouds magically appear in an ideal gas world without gravity. They do not account for why clouds form.
As I said, they have anyway taken out the Water Cycle altogether – they don’t include the whole process! If they did, it would show that the water cycle brings down the temperature by some 52°C to get to the 15°C we have. Imagining what clouds v non-clouds means without considering the whole process is meaningless, and any conclusion not based on the reality of the actual process is not worth reading.
Anyway, I’m asking, were there more storms during the warmer years? That would seem logical, these clearing the sky of cloud.

Matt G
March 9, 2012 12:36 pm

Gail Combs says:
March 9, 2012 at 4:07 am
For what it is worth here are the earthshine measurements showing an increase in cloud cover and changes in albedo from 1998 to 2008. http://www.bbso.njit.edu/Research/EarthShine/
REPLY
Thank you; there is little difference to this data source.
http://img854.imageshack.us/img854/5246/globaltempvglobalcloudb.png
Both show a stable period early on (first half of 2000’s) before increasing later in the decade and show overall a decadal increase in global cloud cover.

Matt G
March 9, 2012 12:44 pm

Myrrh says:
March 9, 2012 at 5:45 am
I see Tallbloke has answered that question since.

Sparks
March 9, 2012 4:23 pm

255K was the memory of an Amstrad I used to own, before it was sucked into a hypothetical black body when I was trying to understand SB law and pressed the wrong button or something or other.

Gail Combs
March 9, 2012 5:06 pm

The Pompous Git says:
March 8, 2012 at 10:55 am
“……….Anyway, I’m asking, were there more storms during the warmer years? That would seem logical, these clearing the sky of cloud……”
_________________________________________
For what it is worth, in response to Willis’s Thermostat Theory, I took a quick and dirty look at the number of days of rain per month during the summer along the USA eastern seaboard. Florida had about 20 stormy days per month and that decreased to 10 storms per month until you hit about Fayetteville NC. At that point the number of storms became rather sporadic. The average temperature also decreased from normally in the 90F and higher to the 85F to 95F range.
So all things being equal it looks like thunderstorm formation is at least somewhat temperature dependent.
NOAA says it takes three factors:
Moisture
Instability, and
a lifting mechanism.
http://www.srh.noaa.gov/jetstream/tstorms/ingredient.htm

Gail Combs
March 9, 2012 5:12 pm

I should add the Thermostat theory: http://wattsupwiththat.com/2009/06/14/the-thermostat-hypothesis/
(The southern US states along the Atlantic are known for their summer afternoon thunderstorms.)

Myrrh
March 9, 2012 5:50 pm

Matt G says:
March 9, 2012 at 12:44 pm
Myrrh says:
March 9, 2012 at 5:45 am
I see Tallbloke has answered that question since.
What any of you see appears to be whatever you want..
You didn’t say what Tallbloke was saying.
Your: “Tallbloke is right based on observed satellite data with trends in global cloud and temperature. Global cloud levels declined since around 1983 until 2000 then remained stable for a period after, until increasing recently. It is a disgrace that this has been ignored by alarmist climate scientsts recently. This global cloud trend has nothing to do with CO2 and is so obvious it has caused the recent warming.”
does not compute. Tallbloke was saying:
“The overall cloud feedback is negative as Spencer and Lindzen have shown with real world data. The warming from ~1980 to ~1998 was due to less cloud/more sunshine at the surface. The cooling since is due to cloud increase.”
You said the recent warming was from cloud increase.

March 9, 2012 6:48 pm

Gail Combs said @ March 9, 2012 at 5:06 pm

The Pompous Git says:
March 8, 2012 at 10:55 am
“……….Anyway, I’m asking, were there more storms during the warmer years? That would seem logical, these clearing the sky of cloud……”

Actually, I didn’t say that at all; I didn’t even think it! I do think it’s time this “mine of disinformation” called it quits and split some more firewood against the coming winter.

March 9, 2012 8:29 pm

Willis, I acknowledge that I put myself at a disadvantage just giving a brief summary of key points in the paper. It needs reading of the full 6,600 words to gain new insights – if one does not have a closed mind. So I am not offended by comments which are not based on the reading of the document. In contrast, those scientists who have reviewed it are enthusiastic about its content.

March 9, 2012 8:45 pm

Willis – don’t you think it ironic that Anthony Watts makes an article out of a post by Theodore in which he writes “Still, by 1989, the AGW science did not make sense to me in light that it would violate the Second Law of Thermodynamics.” Which, I remind everyone, remains in effect to this very day.
Guess what my paper is titled: “Radiated Energy and the Second Law of Thermodynamics”.

Myrrh
March 9, 2012 9:55 pm

Gail Combs says:
March 9, 2012 at 5:06 pm
The Pompous Git says:
March 8, 2012 at 10:55 am
“……….Anyway, I’m asking, were there more storms during the warmer years? That would seem logical, these clearing the sky of cloud……”
_________________________________________
For what it is worth, in response to Willis’s Thermostat Theory, I took a quick and dirty look at the number of days of rain per month during the summer along the USA eastern seaboard. Florida had about 20 stormy days per month and that decreased to 10 storms per month until you hit about Fayetteville NC. At that point the number of storms became rather sporadic. The average temperature also decreased from normally in the 90F and higher to the 85F to 95F range.
So all things being equal it looks like thunderstorm formation is at least somewhat temperature dependent.
NOAA says it takes three factors:
Moisture
Instability, and
a lifting mechanism.
http://www.srh.noaa.gov/jetstream/tstorms/ingredient.htm
==========
Gail – ’twas something I said. Just making the point, to recap, re the argument about whether clouds cause warming or cooling: that heat and less cloud would have to be considered from a real science base – one would have to know why the clouds formed – which is missing from the models because they’ve taken out the water cycle completely. So logically, the ‘warming phase with less cloud cover’ could be explained by this: hotter, more water vapour, more cloud, storms, and then clearing skies; this could be looked at by comparing storm activity in higher-temp phases with that of lower temps.
The second half of that, which I didn’t go into as it is more complicated, would be to look at why there is more cloud when conditions are cooler. Those arguing that it is the extra clouds making this phase cooler are still arguing without taking into consideration how clouds come to be: clouds come to be because of heat evaporating water, and the hotter it is, the more quickly water vapour (anyway lighter than air) will rise. But what conditions would need to be in place to cause it to be warm enough for cloud to form, yet not shift – to hang around without coming down as precipitation?
Thanks for the NOAA page – I hope that helps those enamoured with the model fisics, and in the Willis link is helpful info on evaporation.

Matt G
March 10, 2012 2:49 am

Myrrh says:
March 9, 2012 at 5:50 pm
You didn’t say what Tallbloke was saying.
Your: “Tallbloke is right based on observed satellite data with trends in global cloud and temperature. Global cloud levels declined since around 1983 until 2000 then remained stable for a period after, until increasing recently. It is a disgrace that this has been ignored by alarmist climate scientsts recently. This global cloud trend has nothing to do with CO2 and is so obvious it has caused the recent warming.”
does not compute. Tallbloke was saying:
“The overall cloud feedback is negative as Spencer and Lindzen have shown with real world data. The warming from ~1980 to ~1998 was due to less cloud/more sunshine at the surface. The cooling since is due to cloud increase.”
You said the recent warming was from cloud increase.
REPLY
I never implied that; the confusion has therefore been over when the recent warming occurred.
I was referring to recent warming from the same period, roughly between 1980 and 1998. There hasn’t been any warming after 1998 (HAD3); that’s why this period (1980-1998) is the recent warming. The pro-CAGW side are the ones claiming that warming should occur from cloud increase, and clearly that has failed.
http://www.woodfortrees.org/plot/hadcrut3gl/from:1998/plot/hadsst2gl/from:1998/plot/hadcrut3gl/from:1998/trend/plot/hadsst2gl/from:1998/trend/plot/hadcrut3gl/from:1980/to:1998/plot/hadcrut3gl/from:1980/to:1998/trend/plot/hadcrut3gl/from:1934/to:1980/plot/hadcrut3gl/from:1934/to:1980/trend/plot/esrl-co2/from:1955/normalise
Therefore the recent warming was caused by a decline in global cloud levels during this period. After 1998, with no warming, cloud albedo became stable for a time while very slightly increasing, until the obvious increase over recent years mentioned in my previous post.

Myrrh
March 10, 2012 5:13 am

Matt G says:
March 10, 2012 at 2:49 am
REPLY
I never implied that; the confusion has therefore been over when the recent warming occurred.
That’s how it read..
Therefore the recent warming was caused by a decline in global cloud levels during this period. After 1998, with no warming, cloud albedo became stable for a time while very slightly increasing, until the obvious increase over recent years mentioned in my previous post.

But thank you for the clarification, that’s all I thought I’d get as a reply in the first place…

Galane
March 10, 2012 5:41 am

“Once you have the answer, the questions get much easier …” Yeah, but as long as ‘science’ is being done that way, it is us who are in Jeopardy.
(That’s a reference to an American TV game show hosted by Alex Trebek.)

IAmDigitap
March 10, 2012 2:49 pm

The fascination with the fiction that the Magic Gas Effect is Tyndall Radiation, and all is well in Magic Gas Hypothesis Land, would be hilarity if it wasn’t evidence of what’s happened to education.

zlop
March 10, 2012 6:05 pm

“evolves to maximize entropy”
Hence the Achilles heel
Not entropy dimension, but a 3D entropy function is needed
“Modified Feynman ratchet with velocity-dependent fluctuations”
Motivation is the ability to extract work from a gravitationally bound gas
Consider well insulated pipe loop containing H2
Heat exchanger at bottom, and downward top of the loop
Air Cp = 1.01 — H2 Cp =14.32 — lapse=-g/Cp
Guess what ? — perpetual motion !
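For what it is worth, the lapse-rate arithmetic the comment cites (dry adiabatic lapse rate = g/Cp) works out as below; the “perpetual motion” punchline is the commenter’s own claim, not something this arithmetic establishes.

```python
# Dry adiabatic lapse rate Gamma = g / Cp, using the Cp values quoted in the comment.
g = 9.81                 # m s^-2
Cp_air = 1.005e3         # J kg^-1 K^-1  (the comment's 1.01 kJ/kg/K)
Cp_H2  = 14.32e3         # J kg^-1 K^-1  (the comment's 14.32 kJ/kg/K)

for name, Cp in [("air", Cp_air), ("H2", Cp_H2)]:
    print(f"dry adiabatic lapse rate, {name}: {g / Cp * 1000:.2f} K/km")
# air ~ 9.8 K/km; hydrogen ~ 0.7 K/km, because of its much larger specific heat
```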

Jurgen
March 11, 2012 9:04 am

Some more on “data-mining”.
The basic ingredient here is “start with the data as they are” – a vital attitude in research, and a grave omission when using mainly models to substantiate CAGW. Many comments on WUWT indicate this. I only learned the term “data-mining” while formulating my first comment in this thread and googling around to update on modern approaches with computers. I am no expert in data-mining at all, but I am fond of basic science, like many visitors here.
As the scientific process is a cyclic one, it is not realistic to give predominance to one of the phases of this cycle over the other phases. So data-collecting, being one of the steps, may very well be – and often is – preceded by some idea, fascination, theory or even a model. The problem with a model is that it is very difficult to test its validity by collecting data. Somehow you have to break down the model into testable elements, but where does this leave your model? For a complicated phenomenon like climate the “start with a model” approach seems way over the top to me.
There wouldn’t be this “climate model hype” if there weren’t computers. Rightly or wrongly, people expect magic from their calculating power. But as science is a cyclic process, you cannot neglect the other phases. In a sense “data-mining” (bottom-up) is the opposite of working with models (top-down). It’s a complementary necessity, I would think.

Reply to  Jurgen
March 11, 2012 9:53 am

We like making pretty charts with our data. A pretty chart is worth so many wonderful words. The problem with pretty charts is we often forget where 0 is. If we are trying to find a planet around a distant star, ignoring zero is quite useful. The question is, what do we do with that planet after we find it. Is it added to the database used to calculate the next shuttle orbit? Does that planet 50 Light years away affect the launch of a rocket?
As another poster has pointed out regularly here, in climate science we seem to not only ignore 0, we ignore the fact that we are using the wrong 0. The great “scientists” are running around making predictions about temperature using Anomalous 0, when we should be looking at 0 in terms of enthalpy. The starting point there IS NOT an anomalous enthalpy though. You have to look at the absolute magnitude of the enthalpies involved. We have started to see some discussion of enthalpies in places like Skeptical Science, but when you look closely they are still playing the anomalous game.
I do expect experts (all of them) to make judgements about when to use anomalies and when not to. If all they are doing is presenting pretty graphs, more power to them. If they are trying to make a prediction about what is going to happen next, they need to use the data to make a prediction. If anomalous analysis allows them to make more accurate predictions, then I am wrong. So far, though, I haven’t seen any of the anomalous analyses do anything more than say “look, the haphazard results match my funky models.”
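A rough illustration of the enthalpy point: with the usual moist-enthalpy approximation (h ≈ cp·T + Lv·q), two air parcels at the same temperature carry very different energy once humidity is counted, so a temperature anomaly is not an energy anomaly. The humidity figures below are just plausible round numbers.

```python
# Moist enthalpy h ~ cp*T + Lv*q for two parcels at the SAME temperature.
# Illustrative round numbers only.
cp = 1005.0        # J kg^-1 K^-1, specific heat of dry air
Lv = 2.5e6         # J kg^-1, latent heat of vaporisation

def moist_enthalpy(T_kelvin, q):
    """q is specific humidity in kg of water vapour per kg of air."""
    return cp * T_kelvin + Lv * q

T = 303.15                              # 30 C for both parcels
humid = moist_enthalpy(T, 0.020)        # tropical air, ~20 g/kg
dry   = moist_enthalpy(T, 0.002)        # desert air,   ~2 g/kg

print(f"humid parcel: {humid / 1000:.1f} kJ/kg")
print(f"dry parcel:   {dry / 1000:.1f} kJ/kg")
print(f"difference:   {(humid - dry) / 1000:.1f} kJ/kg at identical temperature")
```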

Reply to  Brad Tittle
March 11, 2012 2:10 pm

Brad Tittle:
An often overlooked characteristic of the IPCC climate models is that they do not make predictions. They make projections. Unlike predictions, projections are insusceptible to being statistically validated.

Joachim Seifert
Reply to  Terry Oldberg
March 11, 2012 2:55 pm

Why don’t we talk about “forecast” and “hindcast”? Much clearer…….
Projection/prediction is from what language? Does it have an English, Spanish, German or French way of understanding? The same spelling, as every translator knows, does NOT signify that the meaning is identical; you can be embarrassed in English and you can be embarrassed in Spanish… in most cases the meaning of a word is somewhat different in each language… I am not even sure that an English projection is the same as a German “Projektion” or a Spanish “proyeccion”…..
Let’s use forecast, and we all know what it is all about….
JS

Reply to  Joachim Seifert
March 11, 2012 4:18 pm

Thanks for taking the time to respond. In English, “prediction” and “forecast” are synonyms. In the following remarks, I use “prediction.”
It is improper to suggest that any of the IPCC climate models make predictions as none of these models do so. What these models do make are projections. “Projection” is a term from the field of ensemble forecasting but while climatologists often use “prediction” and “projection” as synonyms, the ideas that are referenced by the two words are distinct.
A model that makes predictions is susceptible to being statistically tested with the consequence that it can be either falsified or validated by the evidence, if any. A model that makes only projections is not susceptible to being statistically tested; it cannot be either falsified or validated by the evidence.
In its assessment reports, the IPCC muddies the waters by using the similar-sounding words “prediction” and “projection” as synonyms when they are not. Similarly, it muddies the waters by using the similar-sounding words “validation” and “evaluation” as synonyms when they are not. A consequence is for it to sound to many as though the models have been statistically validated when they are not even susceptible to being statistically tested.
The lack of susceptibility to being statistically tested implies that the methodology of the IPCC’s inquiry into AGW was non-scientific, but many believe the opposite. This misunderstanding is a consequence of ambiguity of reference by terms in the language of climatology to the associated ideas. If the IPCC wished, it could eliminate this misunderstanding by disambiguating the terms in the language of its reports. A year after I published a peer-reviewed article on this topic, I have no evidence that the organization plans to do so.

March 11, 2012 7:30 pm

Reblogged this on Climate Ponderings and commented:
“The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.”

Brian H
March 11, 2012 10:23 pm

Doubting Rich says:
March 8, 2012 at 9:52 am
Great post.
However I have another idea as to what is stultifying the science: mutually-exclusive assumptions.
The whole edifice is based on the assumption that the climate was stable before human intervention. The hockey stick is the most obvious expression of this, but it is the underlying assumption, down to pre-emptive demands in secret emails that they must get rid of the MWP.
However it is at odds with the idea of positive feedback. And that is also one of the foundations of the edifice, as of course without positive feedback AGW can never be catastrophic.
However no system showing positive feedback can be stable. So there is an inherent contradiction between two basic assumptions, and work on one is necessarily pulled down by work on the other. Progress cannot be made until the researchers give up one of these.

Excellent observation. And … abandoning either assumption cuts the ground out from under the CAGW “conclusion”. To be explicit, not stable means the “natural variation” band must be acknowledged to encompass all current (interglacial) variance, and no, weak, or strongly bounded positive feedback means “sensitivity” is low and runaway tipping points are highly implausible.

Brian H
March 11, 2012 10:33 pm

Ross McKitrick says:
March 8, 2012 at 10:25 am

Modern climatologists all work in academic societies that have issued position statements on climate change that effectively make loyalty to a set of conclusions a precondition of being a member in good standing of the society. Economics societies do not issue position statements, leaving it up to individual members to speak for themselves. I think the latter tradition is more conducive to progress.

Very cogent observation! I had, along with others I think, mainly concentrated on the improper collusion between rent-seeking society “management” and rent-seeking “climatologists”. But the threat of ostracism, and the lack of impartial support services, that is implicit (explicit?) in the societies’ endorsements of the consensus is indeed where the rubber meets the road.

Brian H
March 12, 2012 12:30 am

Philip Bradley says:
March 8, 2012 at 5:49 pm
For me, there is only one answer. The lack of progress means that there is some fundamental misunderstanding at the very base of the modern climate edifice. It means that the underlying paradigm that the whole field is built on must contain some basic and far-reaching theoretical error.
I’ve floated the idea that the forcings model/theory is wrong, because negative feedbacks operate on much shorter timescales than the forcings. Therefore, forcings have no effect on the (net) climate equilibrium

That would be consistent with the Singer satellite findings, that OLR varies quickly and smoothly with temperature, not lagged. It “short-circuits” any putative positive feedback mechanism.

Brian H
March 12, 2012 12:41 am

Has anyone with access to any of the large data pools had a go with the Eureqa software? (Searches for patterns and derives simplest equations for them, no hints or suggestions or assumptive inputs allowed.)

Brian H
March 12, 2012 12:42 am

Oops, blew the link above; s/b Eureqa (free download site).

Brian H
March 13, 2012 5:12 pm

Terry;
The “projections” were, IIRC, originally and accurately characterized as extrapolations of various ensembles of assumptions, to see what happened when selected variables and coefficients within the programmed algorithms were “tweaked”. Said ensembles of assumptions and tweaks were poorly characterized and documented, which is a problem, but IAC the initializations were also arbitrary. All in all, said “projections” could function as predictions only to the extent that the variables, algorithms, and initializations were thoroughly and explicitly vetted in advance, preferably (much preferably) by third parties.
Didn’t happen, of course.

Reply to  Brian H
March 13, 2012 6:38 pm

Brian H
Thanks for taking the time to reply. I disagree. A “prediction” has a close relationship to a statistically independent event. In particular, it is an extrapolation from the observed state at the start of this event to the unobserved state at the end of the same event. If you observe that it is cloudy and predict rain in the next 24 hours, you have made a prediction. In my example, “cloudy” is the observed state at the start of the event while “rain in the next 24 hours” is the unobserved state at the end of the same event.
The event that I have described has a duration of 24 hours. By splitting the time line into non-overlapping 24 hour long intervals that collectively cover the time line, one could provide a partial description of a complete set of statistically independent events or “statistical population.” Please note that the independent events are discrete and countable. On the other hand, projections are continuous.
The set of predictions that potentially are made by a model has a one-to-one relationship to the events in a statistical population. The existence of this population is a necessary condition for the associated model to have the potential for making predictions. By the absence of this population, you can assure yourself that none of the models that are referenced by IPCC Working Group 1 in AR4 have the potential for making predictions. On the other hand, all of them have the potential for making projections.
A conditional prediction, that is, one in which the predicted unobserved state at the end of the event depends upon the observed state at the start of the same event, is an example of a predictive inference. A model may make a predictive inference but while an IPCC climate model makes projections it makes no predictive inference. A model that makes no predictive inference conveys no information to a policy maker about the outcomes from his/her policy decisions. Thus, as vehicles for making policy, the IPCC climate models are worthless.
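A toy version of the testing scheme described above: partition the record into non-overlapping daily events, each with an observed starting state and an observed outcome, then compare a model’s claimed relative frequency with the observed one. All the data and the 55% figure are fabricated purely for illustration.

```python
# Testing a conditional prediction ("cloudy start -> rain within 24 hours") against a
# statistical population of non-overlapping daily events. Fabricated data.
import numpy as np

rng = np.random.default_rng(1)
n_days = 1000
cloudy = rng.random(n_days) < 0.4                    # observed state at start of each event
rain = np.where(cloudy, rng.random(n_days) < 0.6,    # observed outcome of each event
                        rng.random(n_days) < 0.1)

predicted_freq = 0.55        # hypothetical model claim: 55% of cloudy days end in rain
n_events = cloudy.sum()
observed_freq = rain[cloudy].mean()
stderr = np.sqrt(observed_freq * (1 - observed_freq) / n_events)   # binomial std. error

print(f"cloudy-day events: {n_events}")
print(f"observed rain frequency: {observed_freq:.3f} +/- {stderr:.3f}")
print(f"model's claimed frequency: {predicted_freq:.2f}")
# With countable, independent events the claim can be checked against observation;
# a projection with no such population behind it offers nothing to compare.
```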

Brian H
March 13, 2012 8:06 pm

Terry;
I was certainly not trying to justify their procedure! In fact, I fully agree with G&T’s characterization of the models as “video games”. I was just giving the rationale, and trying to highlight the acknowledged non-relationship to reality inherent in the term “projections”.
Note the specification: all the coefficients, algorithms, and variables, and the initialization data, would have to be vetted by 3rd party analysts BEFORE any projection could be tested against reality, and THEN given a tentative valuation. None of that has ever occurred. Yet, in practice, the projections are used, treated, and cited as “predictions”. Well; live by the sword …

Reply to  Brian H
March 13, 2012 10:47 pm

Brian H:
I take it that we agree on the inappropriateness of using, treating and citing projections as predictions. I’d be relieved if you would assure me of your additional understanding that the existence of the associated statistical population is a necessity for a prediction to have been made. Climatologists on both sides of the issue of CAGW, including Lord Monckton, exhibit ignorance of this necessity.

Brian H
March 13, 2012 11:40 pm

Definitionally and technically correct; but in the real world any statement with a future date on it, especially when accompanied by such phrases as “likely” and “highly likely”, is a prediction. That it is baseless is merely a characteristic that must be loudly explained to the listeners, lest they take it seriously. If some time has passed since issuance, it may also be helpful and necessary to demonstrate how much at variance the model projection is from observation.
Politically, which is where the decisions affecting Life, Funding, and Everything are made, Big Lie Projections are deadly dangerous, and have to be defeated by all effective means. Explaining loudly to the voting populace that there is no associated statistical population will possibly prove insufficient.

Reply to  Brian H
March 14, 2012 9:01 am

Brian H:
It sounds as though we’re in rough agreement on the definition of a prediction. I’ll point out that in order for the claims made by a predictive model to be testable, there must be a large number of statistically independent events, some of them observed. The complete set of these events is an example of a statistical population, but no such population has been identified by the IPCC.
For testability, phrases like “likely” and “highly likely” would have to be replaced by numbers, for in testing the model, predicted numbers representing the relative frequencies of outcomes must be compared to observed numbers. Also, though climatologists have assumed the outcomes of events of interest to policy makers to be numerical values of the global average surface air temperature (GASAT), there are a couple of hitches in this assumption.
First, by the definition of “climatology,” the GASAT has to be averaged over a specific time period, e.g., three decades, but the IPCC has not identified this period. Second, as each value assigned to the GASAT is a real number, selection of the GASAT results in the existence of outcomes that are of infinite number. Coverage of this space by observed events would require a sample size of infinity but in the real world a sample of infinite size is unobtainable.

tallbloke
March 14, 2012 7:20 am

Looks interesting Brian.

tallbloke
March 14, 2012 7:22 am

Gail Combs says:
March 9, 2012 at 10:24 am
Tallbloke, on references to: The Earthshine Project: Measuring the earth’s albedo……
I have this PDF: http://bbso.njit.edu/Research/EarthShine/literature/Palle_etal_2006_EOS.pdf
Can Earth’s Albedo and Surface Temperatures Increase Together?
and this research Article: http://www.hindawi.com/journals/aa/2010/963650/
Automated Observations of the Earthshine
and this PDF: http://bbso.njit.edu/Research/EarthShine/literature/Palle_etal_2008_JGR.pdf
Inter-annual variations in Earth’s reflectance 1999-2007.
Hope that helps.

It does, many thanks Gail.

Brian H
March 15, 2012 9:23 am

Terry;
Statisticians, modellers, mathematicians and forecasters are among the professionals excluded from the Hokey Team’s collection of Jackasses of All Sciences, Masters of None. The violations of basic standards and quality controls are so many and so egregious that they are immediately inspired to offer some of their personal stocks of C4 to make a proper start. This is generally not taken well.

Firey
April 14, 2012 9:24 pm

The unvalidated climate models need to be put back under the microscope, if they have one!!