Prediction is hard, especially of the future.

Guest Post by Willis Eschenbach

[UPDATE]: I have added a discussion of the size of the model error at the end of this post.

Over at Judith Curry’s climate blog, the NASA climate scientist Dr. Andrew Lacis has been providing some comments.  He was asked:

Please provide 5-10 recent ‘proof points’ which you would draw to our attention as demonstrations that your sophisticated climate models are actually modelling the Earth’s climate accurately.

To this he replied (emphasis mine),

Of note is the paper by Hansen, J., A. Lacis, R. Ruedy, and Mki. Sato, 1992: Potential climate impact of Mount Pinatubo eruption. Geophys. Res. Lett., 19, 215-218, which is downloadable from the GISS webpage.

It contains their model’s prediction of the response to Pinatubo’s eruption, a prediction done only a few months after the eruption occurred in June of 1991:

Figure 1. Predictions by NASA GISS scientists of the effect of Mt. Pinatubo on global temperatures. Scenario “B” was Hansen’s “business as usual” scenario. “El” is the estimated effect of a volcano the size of El Chichón. “2*El” is a volcano twice the size of Chichón. The modelers assumed the volcano would be 1.7 times the size of El Chichón. Photo is of Pinatubo before the eruption.

Excellent, sez’ I, we have an actual testable prediction from the GISS model. And it should be a good one if the model is good, because they weren’t just guessing about inputs. They were using early estimates of aerosol depth that were based on post-eruption observations. But with GISS, you never know …

Here’s Lacis again talking about how the real-world outcome validated the model results. (Does anyone else find this an odd first choice when asked for evidence that climate models work? It is a 20-year-old study by Lacis. Is this the best evidence he has?) But I digress … Lacis says further about the matter:

There we make an actual global climate prediction (global cooling by about 0.5 C 12-18 months following the June 1991 Pinatubo volcanic eruption, followed by a return to the normal rate of global warming after about three years), based on climate model calculations using preliminary estimates of the volcanic aerosol optical depth. These predictions were all confirmed by subsequent measurements of global temperature changes, including the warming of the stratosphere by a couple of degrees due to the volcanic aerosol.

As always, the first step in this procedure is to digitize their data. I use a commercial digitizing program called “GraphClick” on my Mac; there are equivalent programs for the PC. It’s boring, tedious hand work. I have made the digitized data available here as an Excel worksheet.

Being the untrusting fellow that I am, I graphed up the actual temperatures for that time from the GISS website. Figure 2 shows that result, along with the annual averages of their Pinatubo prediction (shown in detail below in Figure 3), at the same scale that they used.

Figure 2. Comparison of annual predictions with annual observations. Upper panel is Figure 2(b) from the GISS prediction paper, lower is my emulation from digitized data. Note that prior to 1977 the modern version of the GISS temperature data diverges from the 1992 version of the temperature data. I have used an anomaly of 1990 = 0.35 for the modern GISS data in order to agree with the old GISS version at the start of the prediction period. All other data is as in the original GISS prediction. Pinatubo prediction (blue line) is an annual average of their Figure 3 monthly results.
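The offset adjustment described in the caption — shifting the modern GISS series so that its 1990 value matches the 1992-vintage series — is simple to do. Here is a minimal sketch; the anomaly values below are hypothetical, for illustration only:

```python
# Sketch: aligning two temperature-anomaly series at a reference year.
# All anomaly values here are hypothetical, for illustration only.
old_giss = {1988: 0.31, 1989: 0.20, 1990: 0.35}   # 1992-vintage anomalies (deg C)
new_giss = {1988: 0.39, 1989: 0.27, 1990: 0.43}   # modern anomalies (deg C)

# Shift the modern series so its 1990 value equals the old series' 1990 value.
offset = new_giss[1990] - old_giss[1990]
aligned = {yr: round(t - offset, 2) for yr, t in new_giss.items()}

print(aligned[1990])  # now equals old_giss[1990], i.e. 0.35
```

The shift changes nothing about the shape of the modern series; it only pins the two versions together at the start of the prediction period so they can be compared on one axis.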

Again from their paper:

Figure 2 shows the effect of El and 2*El aerosols on simulated global mean temperature. Aerosol cooling is too small to prevent 1991 from being one of the warmest years this century, because of the small initial forcing and the thermal inertia of the climate system. However, dramatic cooling occurs by 1992, about 0.5°C in the 2*El case. The latter cooling is about 3 σ [sigma], where σ is the interannual standard deviation of observed global annual-mean temperature. This contrasts with the 1-1/2 σ coolings computed for the Agung (1963) and El Chichon (1982) volcanos

So their model predicted a large event, a “three-sigma” cooling from Pinatubo.

But despite their prediction, it didn’t turn out like that at all. Look at the red line above showing the actual temperature change. If you didn’t know there was a volcano in 1991, that part of the temperature record wouldn’t even catch your eye. Pinatubo did not cause anywhere near the maximum temperature swing predicted by the GISS model. It was not a three-sigma event, just another day in the planetary life.

The paper also gave the monthly predicted reaction to the eruption. Figure 3 shows detailed results, month by month, for their estimate and the observations.

Figure 3. GISS observational temperature dataset, along with model predictions both with and without Pinatubo eruptions. Upper panel is from GISS model paper, lower is my emulation. Scenario B does not contain Pinatubo. Scenario P1 started a bit earlier than P2, to see if the random fluctuations of the model affected the result (it didn’t). Averages are 17-month Gaussian averages. Observational (GISS) temperatures are adjusted so that the 1990 temperature average is equal to the 1990 Scenario B average (pre-eruption conditions). Photo Source
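The heavy lines in Figure 3 are 17-month Gaussian averages. For readers wanting to reproduce that kind of smoothing, here is a minimal sketch of a centered Gaussian-weighted moving average; note that the width of the Gaussian kernel (the `sigma` parameter below) is my assumption, not something stated in the paper:

```python
import math

def gaussian_smooth(series, width=17, sigma=3.0):
    """Centered Gaussian-weighted moving average over `width` points.
    The kernel sigma here is an assumption; the source does not state one.
    Near the ends of the series the kernel is truncated and renormalized."""
    half = width // 2
    # Weights for offsets -half .. +half from the center point.
    weights = [math.exp(-0.5 * (k / sigma) ** 2) for k in range(-half, half + 1)]
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        w = weights[half - (i - lo): half + (hi - i)]
        out.append(sum(wi * x for wi, x in zip(w, series[lo:hi])) / sum(w))
    return out
```

Applied to monthly anomalies, this gives the kind of heavy smoothed curve shown in the figure while preserving the timing of the dip and recovery better than a flat moving average would.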

One possibility for the model prediction being so far off would be if Pinatubo didn’t turn out to be as strong as the modelers expected. Their paper was based on very early information, three months after the event, viz:

The P experiments have the same time dependence of global optical depth as the E1 and 2*El experiments, but with τ 1.7 times larger than in E1 and the aerosol geographical distribution modified as described below. These changes crudely account for information on Pinatubo provided at an interagency meeting in Washington D.C. on September 11 organized by Lou Walter and Miriam Baltuck of NASA, including aerosol optical depths estimated by Larry Stowe from satellite imagery.

However, their estimates seem to have been quite accurate. The aerosols continued unabated at high levels for months, and the average optical depth for the first ten months after the eruption was 1.7 times that observed after El Chichón. I find this (paywalled):

Dutton, E. G., and J. R. Christy, Solar radiative forcing at selected locations and evidence for global lower tropospheric cooling following the eruptions of El Chichon and Pinatubo, Geophys. Res. Lett., 19, 2313-2316, 1992.

As a result of the eruption of Mt. Pinatubo (June 1991), direct solar radiation was observed to decrease by as much as 25-30% at four remote locations widely distributed in latitude. The average total aerosol optical depth for the first 10 months after the Pinatubo eruption at those sites is 1.7 times greater than that observed following the 1982 eruption of El Chichon

and from a 1995 U.S. Geological Survey study:

The Atmospheric Impact of the 1991 Mount Pinatubo Eruption ABSTRACT

The 1991 eruption of Pinatubo produced about 5 cubic kilometers of dacitic magma and may be the second largest volcanic eruption of the century. Eruption columns reached 40 kilometers in altitude and emplaced a giant umbrella cloud in the middle to lower stratosphere that injected about 17 megatons of SO2, slightly more than twice the amount yielded by the 1982 eruption of El Chichón, Mexico. The SO2 formed sulfate aerosols that produced the largest perturbation to the stratospheric aerosol layer since the eruption of Krakatau in 1883. … The large aerosol cloud caused dramatic decreases in the amount of net radiation reaching the Earth’s surface, producing a climate forcing that was two times stronger than the aerosols of El Chichón.

So the modelers were working off of accurate information when they made their predictions. Pinatubo was just as strong as they expected, perhaps stronger.

Finally, after all of that, we come to the bottom line, the real question. What was the total effect of the volcano, both in the model and in reality? What overall difference did it make to the temperature?

Looking at Fig. 3, we can see that the model results and the data differ in more than just the maximum temperature drop. In the model, the temperature dropped earlier than was observed. It also dropped faster than actually occurred. Finally, the temperature stayed below normal for longer in the model than in reality.

To measure the combined effect of these differences, we use the sum of the temperature variations, from before the eruption until the temperature returned to pre-eruption levels. It gives us the total effect of the eruption, in “degree-months”. One degree-month is the result of changing the global temperature one degree for one month. It is the same as lowering the temperature half a degree for two months, and so on.

It is a measure of how much the volcano changed the temperature. It is shown in Fig. 3 as the area enclosed by the horizontal colored lines and their respective average temperature data (heavier lines of the same color). These lines mark the departure from and return to pre-eruption conditions. The area enclosed by each is measured in degree-months (degrees vertically times months horizontally).
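The degree-month calculation itself is just an accumulation of monthly departures from the pre-eruption baseline. A minimal sketch, using hypothetical numbers:

```python
# Sketch: "degree-months" as the accumulated departure from the pre-eruption
# temperature, summed month by month. All values here are hypothetical.
pre_eruption = 0.35  # pre-eruption baseline anomaly, deg C (illustrative)
monthly = [0.35, 0.15, -0.05, -0.15, -0.05, 0.15, 0.35]  # anomalies during the event

# Each month contributes (baseline - observed) degree-months of cooling.
degree_months = sum(pre_eruption - t for t in monthly)
print(degree_months)  # total cooling over the event, in degree-months
```

Because it integrates both the depth and the duration of the dip, this measure catches a model that cools too deeply, too soon, or for too long, even if its peak drop happened to be right.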

The observations showed that Pinatubo caused a total decrease in the global average temperature of eight degree-months. This occurred over a period of 46 months, until temperatures returned to pre-eruption levels.

The model, however, predicted twice that, sixteen degree-months of cooling. And in the model, temperatures did not return to pre-eruption conditions for 63 months. So that’s the bottom line at the end of the story — the model predicted twice the actual total cooling, and predicted recovery would take 63 months rather than 46, about forty percent longer than actually happened … bad model, no cookies.

Now, there may be an explanation for that poor performance that I’m not seeing. If so, I invite Dr. Lacis or anyone else to point it out to me. Absent any explanation to the contrary, I would say that if this is his evidence for the accuracy of the models, it is an absolute … that it is a perfect … well, upon further reflection let me just say that I think the study and prediction are absolutely perfect evidence regarding the accuracy of the models, and I thank Dr. Lacis for bringing it to my attention.

[UPDATE] A number of the commenters have said that the Pinatubo prediction wasn’t all that wrong and that the model didn’t miss the mark by all that much. Here’s why that is not correct.

Hansen predicted what is called a “three sigma” event. What actually occurred was about a two sigma event (2.07 sigma). “Sigma” (the standard deviation) is a measure of how rare an event of a given size is — and rarity is far from linear in sigma.

A two sigma event is pretty common. It occurs about one time in twenty. So in a dataset the size of GISSTEMP (130 years) we would expect to find somewhere around 130/20 = six or seven two sigma interannual temperature changes. These are the biggest of the inter-annual temperature swings. And in fact, there are eight two-sigma temperature swings in the GISSTEMP data.

A three sigma event, on the other hand, is much, much rarer. It is a one in a thousand event. The biggest inter-annual change in the record is 2.7 sigma. There’s not a single three sigma year in the entire dataset. Nor would we expect one in a 130 year record.
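For a rough sense of these odds, the tail probabilities can be computed directly, assuming interannual temperature changes are approximately normal. That normality is an assumption, and the exact figures also depend on whether one counts excursions in one direction or both; the sketch below counts both:

```python
# Sketch: how rare are 2-sigma and 3-sigma excursions under a normal
# distribution, and how many would we expect in ~130 years of data?
import math

def exceed_prob(sigma):
    """Two-sided probability that a normal variate exceeds `sigma`
    standard deviations in magnitude."""
    return math.erfc(sigma / math.sqrt(2))

years = 130
for s in (2.0, 3.0):
    p = exceed_prob(s)
    print(f"{s} sigma: about 1 in {1 / p:.0f}; "
          f"expected count in {years} years: {years * p:.1f}")
```

Under this assumption a two-sided 2-sigma swing is roughly a 1-in-22 event — about six expected in 130 years, in line with the count above — while a 3-sigma swing works out to roughly 1 in 370. The empirical temperature record can of course be heavier- or lighter-tailed than a pure normal distribution.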

So Hansen was not predicting something usual. He was predicting a temperature drop never before seen in the record, a once-in-a-thousand-year drop.

Why is this important? Remember that Lacis is advancing this result as a reason to believe in climate models.

Now, suppose someone went around saying his climate model was predicting a “thousand-year flood”, the huge kind of millennial flood never before seen in people’s lifetimes.  Suppose further that people believed him, and spent lots of money building huge levees to protect their homes and cities and jacking up their houses above predicted flood levels.

And finally, suppose the flood turned out to be the usual kind, the floods that we get every 20 years or so.

After that, do you think the flood guy should go around citing that prediction as evidence that his model can be trusted?

But heck, this is climate science …

169 Comments
December 29, 2010 6:09 am

As Maxwell Smart used to say, holding his index finger and thumb a tiny distance apart, “Missed it by that much.”

latitude
December 29, 2010 6:11 am

Here’s an example of how wonderful our computer games are………
……………fail

December 29, 2010 6:22 am

I begin to believe that everybody is supposed to take on faith what these “prognosticators of doom” say — they just assume no one will actually do the math. Bad assumption.
For sure the world media and, to some extent, supposed real science publications have shown their willingness to just accept what they say, as is … Such is the price of the AGW money scam.

jack morrow
December 29, 2010 6:23 am

Kinda like the Edsel.

Labmunkey
December 29, 2010 6:26 am

Careful Mr Eschenbach, i can see portions of that last paragraph being selectively quoted/paraphrased to actually SUPPORT the models’ effectiveness… think back-of-the-box DVD covers where one word (stupendous!) is taken as a review (where the whole review is “a stupendous waste of time”…. etc etc.
This is interesting and something i REALLY want to see more of; the models are their main reason for forcing these idiotic measures down our throats (at great expense)- i have never been convinced of their accuracy- lets see more open and detailed investigation of this sort of thing.
Would REALLY love to hear back from them.
Suggestion- perhaps instead of asking after the fact, a ‘joint’ article could be prepared? They may be more likely to respond if it’s done that way.

Editor
December 29, 2010 6:30 am

Close enough for government work.

Mike Haseler
December 29, 2010 6:36 am

I think what was meant by “accurate” was that it cooled rather than warmed … and after all when you are dealing with a doomsday religion regarding CO2 isn’t it a perfectly adequate proof to show that particulates cause it to cool!
I shall now go and make a cup of cocoa because I fancy a coffee … or at least that seems to be the way logic works these days!

Alessandro
December 29, 2010 6:41 am

All this tells us is that the model is bad at predicting the effects of volcanoes.
It could very well have been a perfect model in that regard, and still fail at modeling other key factors in our climate.
Of course you can directly observe events like eruptions, that are developing on a yearly time scale, so volcanoes are probably easier to model than long term phenomena, where direct measurements are impossible (without…lots of waiting… that reminds me of that pregnancy test designed to give you an answer in 9 months).
And they still failed.

Joe Lalonde
December 29, 2010 6:41 am

Willis,
Predictions are mathematical formulas on temperatures.
Actual physical evidence is classed as theory with no one looking into this phenomena as you cannot put a mathematical model to it.

Sully
December 29, 2010 6:44 am

It’s no wonder the warmist priesthood is secretive about the rituals going on behind the curtain, when nasty empiricists like you take every opportunity to hoist them on their own censers and beat them over the head with their own kundikas.

Jeff
December 29, 2010 6:45 am

but they got the direction right !!!

Roberto
December 29, 2010 6:46 am

It can be a lot of fun putting together a model. But then comes the all-important step of validating your results. Prove it corresponds to real life. The match doesn’t have to be perfect. Just show it has some clue. This can be a lot of hard work. In the IT world we have people called testers who do nothing else, all day long. But if the testers can’t show it worked, they haven’t done their job. And the programmers haven’t finished doing their job. And the managers haven’t done their job. And anybody who pays good money for the project hasn’t done their job.
This is all called discipline. Undisciplined work can seem to go faster and with more fun and less boredom. But it’s three steps forward, two steps back. Two very expensive steps back. In some programs which shall not be named, it’s three steps forward, three steps back.
If you can’t explain what your part of the program is doing, how do you know what all the other parts are doing? How do you know the effects of everybody else tweaking the program at the same time, if nobody can explain what their part was supposed to do, and show that it did that much and no more?
Large amounts of activity are no substitute for doing it right.

Enneagram
December 29, 2010 6:48 am

Nostradamus to his wife: “Prediction is hard, especially of the future…….if you don’t look at the stars”

Enneagram
December 29, 2010 6:50 am

Models respond to the inner necessity of some people of going back to bed with no remorse whatsoever.

Joe Lalonde
December 29, 2010 6:54 am

Willis,
I do a great deal of following the physical evidence trail that our planet has implanted as clues.
This is a very difficult trail as the actions of this planet are ALL interactive.
You have to have a HUGE knowledge base and disregard what current physics says as LAWS as these generate road blocks to understanding the planets evolution and transformation.
Physics today will not stand to billions of years ago when our planet was rotating faster and the oceans were saltier.
So, unless current science learns and changes, garbage science will continue to be passed down from generation to generation as it already has.

Urederra
December 29, 2010 7:02 am

Computer models are worse than we thought.

December 29, 2010 7:05 am

Yes, I think it very strange Hansen’s Pinatubo model is brought up. I thought this was dispatched to the trash heap years ago. It could be that it was a different Hansen contrivance I’m thinking of; they all bear a commonality of failure, so I easily get them confused. But they haven’t improved upon this work? What the heck have they been doing for the last 20 years? Did they get confused about what models they were working on and buy a bunch of glue and plastic parts?
As usual Willis, nice job.

amicus curiae
December 29, 2010 7:06 am

the title cracked me up…but then they REwrite the past records to support their lies about the present, let alone future events..
maybe they should go with chicken entrails? couldn’t be any iffier really.

Jack
December 29, 2010 7:14 am

Mobs + tar + feathers = what the AGW scare mongers deserve.
Or, loss of funding. Either one would probably improve the accuracy of the climate models.

glacierman
December 29, 2010 7:21 am

You are using the adjusted data, not the re-adjusted, value-added data. When big Jim gets through rewriting the past, the model will be a perfect fit to their alternate reality.

Sully
December 29, 2010 7:24 am

Speaking of Hansen, Figure 6 in his 1981 paper (http://www.edge.org/q2005/q05_8.html) made a projection of temperature. It might be a good idea to plot actuals on that same graph as the basis for a continuing evaluation of his predictive capabilities.

December 29, 2010 7:26 am

In a qualitative sense, the model worked: it said temps would drop and the temps did drop.
In a quantitative sense, it overestimated the temp drop. This modelling behavior could perhaps be useful for detecting anomalies and sending alerts (for further analysis), in the hands of an analyst who is aware of its “hyper-sensitivity”.
But in the hands of analysts who believe its predictions literally, it would undoubtedly lead to the “it’s worse than we thought” kind of analytics, which we have become accustomed to seeing come from the CAGW camp.

Craig Loehle
December 29, 2010 7:28 am

While it is nice to get the sign of the effect right, as Hansen does, the entire global warming debate is about magnitudes. The volcano effect is particularly important because it is supposedly human-emitted sulfate aerosols (also the cause of the volcano effect) that are masking the “true” anthropogenic effects. If they predict twice as strong a sulfate aerosol effect as occurs with Pinatubo, then the masking they assume (even if the forcing data were valid, which I doubt) is twice too strong and the greenhouse warming effect they are modeling is too strong (to overcome their assumed aerosol forcing).

December 29, 2010 7:30 am

“Close enough for government work”? 😉

Houston, we have a problem...
December 29, 2010 7:30 am

I have spent a large part of my 30 years in the oil industry building numerical models of oil fields and “history matching” them – that is, adjusting input data (much of which is either totally unknown or known only with high degree of uncertainty) so that the model “matches” the “history” (of course the model runs are really predictions of the past, and recently I have seen the term “hindcast” being used). This is a tedious process and the results are very non-unique, since there are so many unknowns (more unknowns than equations, to use the linear algebra analogy).
Anyway, I bring this up to say that the Mt. Pinatubo eruption is a perfect opportunity to “tune” a model – to adjust the model’s sensitivity to a specific input (“forcing”). It is obvious that the GISS model was too sensitive to the effect of aerosols, and the aerosol “knob” needs to be turned down a bit. Now doing that would of course ruin the rest of their hindcast, so other knobs would have to be adjusted to compensate, but that is the nature of history matching. The eruption data would basically allow the modelers to set the aerosol knob and then take it out of the set of unknown parameters being adjusted to match the history.
So, rather than an opportunity for patting themselves on the back for “getting the direction right,” they should have taken the opportunity to improve their model.

monroe
December 29, 2010 7:35 am

That was a great examination! Thanks to WUWT I know a bit more about this perplexing issue. I spend more and more time on this website. A donation is in order!

Steeptown
December 29, 2010 7:38 am

When I was validating thermal-hydraulics computer models against data, a result as bad as that would have meant back to the drawing-board.

Stevo
December 29, 2010 7:46 am

Few computer simulations ever match reality exactly. Glibly dismissing a model as “bad” because it did not predict the observations exactly is foolish. At no point have you discussed what kind of accuracy you think a model should have, for you to consider it “good”. If you want to assess a model realistically, you need to understand such factors as the uncertainty in the observations, the underlying assumptions, and any simplifications that have been made, among other things.

Doug Badgero
December 29, 2010 7:46 am

What could possibly account for this error where the forcing is correct but the model output is wrong? Oh ya, if the models have the climate sensitivity wrong.

Marlene Anderson
December 29, 2010 7:53 am

Perhaps warmists are so excited by the Pintaubo modeled vs actual results because it’s the closest they ever came to reality.

Roger Andrews
December 29, 2010 8:02 am

Trying to model the impacts of volcanic eruptions on temperature is futile because there’s no way of segregating volcanic impacts from the “noise” created by El Niños and La Niñas and other short-term influences. There are at least ten short-term cooling episodes in the 20th century temperature record that look just like Pinatubo but which are unrelated to volcanic eruptions. And not all volcanic eruptions caused temperatures to decrease. One of the largest (El Chichón 1982) was in fact followed by a temperature increase.
Volcanic eruptions receive so much attention because the temperature record shows cooling after 1940, and climate models can’t hindcast this cooling without a significant volcanic contribution (although they still do a pretty poor job of it). In other words, the models simulate just one short-term forcing – the one that fits the theory – and ignore all the rest.

Baa Humbug
December 29, 2010 8:06 am

Now, there may be an explanation for that poor performance that I’m not seeing. If so, I invite Dr. Lacis or anyone else to point it out to me.

Dear Willis
I’m afraid your invitation, though not fallen on deaf ears, will not be accepted.
I help horse owners (mostly ladies) deal with their “scared” horses. ( as a hobby)
Horses are both very intelligent and notorious big chickens. As soon as the owner shows up at the paddock with a lead rope and halter, the horse bolts to the other end of the paddock. A game of frustrating cat n mouse ensues.
I teach these people how to avoid all that.
You my friend, often turn up at the paddock, not just with a rope n halter in one hand, but with a saddle over one shoulder (announcing that you intend to ride him) and a whip cracking away in the other hand, announcing the ride will be hard and painful.
The rest of us love the way you crack that whip. It’s a veritable work of art. And you explain how you crack that whip enabling us to learn, rather like the demonstrations at agricultural shows.
But I’m afraid your invite will never be accepted because as they say in the horse training classics..

A horse never forgets, but he forgives.
A donkey never forgets and never forgives.

You are not dealing with horses 😉

Peter Mott
December 29, 2010 8:18 am

The truly amazing thing to me is that there are so few attempts to test the models. Because of the Pinatubo paper, it can be taken as agreed that the way to test a model is to input real data from the past, run the model, and finally compare the run with what was actually observed. Vast amounts of time and money have been spent, massive distortions imposed on the economies (of some countries) because of the CAGW theory. But the tests of the theory that are available have not been performed. Sure, it might take a lot of work to perfect these tests and they will no doubt be subject to criticism like all of science. But that they have not been tried … it staggers the imagination.
The great philosopher Karl Popper demarcated science by the capacity to admit falsification. That massive insight seems to have been lost, and perhaps more recent philosophy of science is culpable there. Possibly now that CAGW is in retreat the requisite testing will start to be done?

Ronaldo
December 29, 2010 8:23 am

Willis says
“Here’s Lacis again talking about how the real-world outcome validated the model results. (Does anyone else find this an odd first choice when asked for evidence that climate models work? It is a 20-year-old study by Lacis. Is this his best evidence he has?)”.
This surely is the key question. If the predictive ability is so poor, over such a short timescale and with such well defined starting parameters, one can only speculate about 50 to 100 years predictive ability.

pat
December 29, 2010 8:28 am

That is most interesting. When astrophysicists are asked about the relationship between solar activity and weather, they often point to 1816, the year without a summer. A year of extraordinary solar quiescence. Warmists counter that solar activity had little to do with it, that it was the eruption of Tambora the year before. Given the shallow atmospheric response as well as the very quick recovery evidenced by the Pinatubo temperatures above, I suspect both are right.
http://en.wikipedia.org/wiki/Year_Without_a_Summer

Sam Hall
December 29, 2010 8:30 am

Marlene Anderson says:
December 29, 2010 at 7:53 am
Perhaps warmists are so excited by the Pintaubo modeled vs actual results because it’s the closest they ever came to reality.
Bingo! At least they got the sign right.

ZT
December 29, 2010 8:30 am

@Houston, we have a problem…
I completely agree. This is/was a good opportunity to improve their parametrization. However, the bind that Lacis et al are in is that they have claimed certainty, and now cannot improve their models, or data analysis methods, for fear of admitting that they were wrong.
Their only recourse at this point has been to ramp up the cut-and-paste publications, and hope that policy changes are enacted, such that they can claim that disaster was averted. (cf. the CFC fiasco). The situation may well still play out in this direction – the UN, politicians, and bankers are powerful allies. But whatever is going on in climatology – science left long ago.

December 29, 2010 8:32 am

“It is obvious that the GISS model was too sensitive to the effect of aerosols, and the aerosol “knob” needs to be turned down a bit.”
If Houston is correct then this puts the “aerosols were the problem” for the cooling of 1940-1970’s at risk as an explanation.

John from CA
December 29, 2010 8:32 am

Interesting post Willis,
Atmospheric mixing, the goofy notion of Global temperature, and assumptions about the SO2 extent are factors that come to mind related to the inaccuracy of the model.
The eruption occurred on June 15 from 1:45 – 10:45 pm from the Manila area dumping ash and aerosols in a South Eastern direction towards Singapore in the South to Bangkok in the North. At the time of the eruption Tropical Storm Yunya was passing 75 km (47 miles) to the northeast of Mount Pinatubo, causing a large amount of rainfall in the region.
It is said to have injected millions of tons of SO2 into the Troposphere which created a “global” Sulfuric Acid haze in the upper atmosphere, but the question seems to be the extent?
Are the estimates wrong because most of the SO2 ended up as acid rain in the Indian Ocean and because most of the “global” temperature readings are from the Northern Hemisphere?
Global impacts take time and, as I understand it, the atmospheric mixing moves from the Northern Hemisphere to the Southern. The greatest temperature impact would have been in the Southern Hemisphere?

johanna
December 29, 2010 8:37 am

Good post.
Another brick in the wall.

son of mulder
December 29, 2010 8:40 am

Either the models are way too sensitive or maybe it was the wrong type of ash.

Wolfman
December 29, 2010 8:48 am

Stevo (7:46 AM),
I think that the point is that the model identified by a key proponent as being a vindication of the rigor of the model predictions overestimates the sensitivity by a factor of two and the duration of the effect by a substantial amount. The net result is to underscore the uncertainties of aerosol forcings used in the models. So, the “best” example of model validation is apparently not rigorous at all.
It would not be surprising if the same sensitivity errors exist for water aerosols (clouds), which is a matter of current debate.
It appears that world economies are being put under severe burdens based on theoretical modeling that doesn’t predict well at all.

J. Knight
December 29, 2010 8:49 am

“If you want to assess a model realistically, you need to understand such factors as the uncertainty in the observations, the underlying assumptions, and any simplifications that have been made, among other things.”
You should tell Hansen, et al exactly the same thing, as these are the very reasons that models and computer simulations on climate don’t work. There is just too much uncertainty and lack of understanding of the interrelated processes that drive climate, not to mention the bad faith adjustments and temperature smoothing practiced by the global warming/climate chaos/climate change gang of junk scientists.
Really, I’ve lost faith in science in the public interest as it has been mostly taken over by a gang of leftist/liberal activists intent on controlling and redistributing the wealth of the Western World, mostly in an attempt to destroy the very institutions that built the wealth they enjoy the fruits of. These people hate the West, hate themselves (if only they were born a minority), and wish to destroy all vestiges of Western Civilization. One only has to look at the groups supporting the green initiatives and global warming nonsense to see the truth, although there are a few well-meaning but useful idiots who support them as well.

NK
December 29, 2010 8:54 am

TO: Houston… and Stevo – personally I agree with your characterization of modelling. I think all honest skeptics would. The point of this sort of derision towards flawed model output – and if Willis is correct, the output was flawed – is that the model output is the source of the “proof” of catastrophic AGW according to the alarmists. The alarmists make extraordinary claims of AGW and of the need for technocrats to take over the pricing and use of the world’s energy sources. Hence they must be held to a standard of extraordinary proof. In response, the alarmists deliver models such as the GISS model. If the model does indeed fail this simple test, what’s left of the alarmists’ case? That’s the point. All models are continuously refined until they WORK. If the output never improves, they are discarded – at least in the REAL world. My personal skeptic case is that the global climate may be either too complex or too chaotically random to ever properly model. The GISS model has done nothing to rebut that skepticism, much less justify turning over energy production to the likes of the UN, Al Gore and Jim Hansen.

John from CA
December 29, 2010 8:58 am

Another interesting factor to consider: if you roll the “way-back machine” to 1991, we appear to have been moving into an El Niño, which would have compensated for a significant portion of the cooling.

December 29, 2010 8:59 am

Stevo says:
December 29, 2010 at 7:46 am
Few computer simulations ever match reality exactly. Glibly dismissing a model as “bad” because it did not predict the observations exactly is foolish. At no point have you discussed what kind of accuracy you think a model should have, for you to consider it “good”.

But the AGW community will glibly pass off “scarier-than-real” results from models like this as ‘reality’, which we foolish folks are supposed to believe.
Time for a “reality check”.

Latimer Alder
December 29, 2010 8:59 am

Cracking good first question to Lacis (he said, not allowing false modesty to overwhelm him)
And it’s worth noting that rather than the 5 or 10 ‘proof points’ he was offered the chance to discuss from the whole palette of climate modelling, he came up with just the one that Willis discusses and one other, also by himself. And made no better a job of it either.
It is quite remarkable how little individual climatologists know of each other’s work, while feeling able to dismiss any ‘outsider’s’ criticisms as ignorant or unqualified.

December 29, 2010 8:59 am

Taking a wider look at GCMs and volcanoes two things stand out:
1. Models underestimate the variation in temperature.
2. There are often large drops in observed temperature which are not related to volcanoes; in the case of models the only drops in temperature are volcano related.
http://www.climatedata.info/Forcing/Forcing/volcanoes.html

Steve Oregon
December 29, 2010 9:01 am

OK, here’s this high school educated, general contractor, layperson’s short summary of what this indicates.
The climate models are hypersensitive and greatly exaggerate the climate’s reaction to atmospheric injection of material.
So I conclude that the models exaggerated the reaction to the volcanic injection from Pinatubo just as they do the reaction to CO2 emissions.
Therefore climate models are not “robust”, and anyone claiming they are needs to face ethics charges or become a contractor.
Am I missing anything?

December 29, 2010 9:04 am

“Stevo says:
December 29, 2010 at 7:46 am (Edit)
Few computer simulations ever match reality exactly. Glibly dismissing a model as “bad” because it did not predict the observations exactly is foolish. At no point have you discussed what kind of accuracy you think a model should have, for you to consider it “good”. If you want to assess a model realistically, you need to understand such factors as the uncertainty in the observations, the underlying assumptions, and any simplifications that have been made, among other things.”
yup.
“So, rather than a opportunity for patting themselves on the back for “getting the direction right,” they should have taken the opportunity to improve their model.”
yup.
If one looks at the SO2 as merely a shield against incoming solar, then the earlier drop in the model could be due to the modelling of the dispersion of the aerosol after the event.
I would expect their model of the release of SO2 from the volcano to not have much fidelity. The depth of the cooling would indicate that the negative forcing is off, and the quick rebound could be due to a poor model of the residency time of the aerosol.
Just the other day I was watching a great video of a statistician who was talking about emulating GCM results, and about which two of the 32 parameters were the MOST sensitive to perturbation:
1. whether the slab ocean was on or coupled ocean was on.
2. whether the sulfur cycle was off or on.
The model Hansen used in this period had a sulfur cycle that is now 20 years old, and I believe it used a slab ocean. It would be neat to see how a current GCM with a fully coupled ocean and a much better sulfur cycle would do.
For some interesting reading on the sulfur cycle in models see this
http://www.google.com/url?sa=t&source=web&cd=2&ved=0CCMQFjAB&url=http%3A%2F%2Facdb-ext.gsfc.nasa.gov%2FPeople%2FChin%2Fchin.jgr.2000a.pdf&ei=JGYbTaHBGY_EsAOKv9DYCg&usg=AFQjCNEg4zW55YP-G_5bN5Fsa5m7ELY6_Q&sig2=lEFbnkSqnhOz3ULuo-tN3Q
some comparisons between various models and a bit of insight into the complexity of getting it exactly right.
That said, one wants to know how well sun spots or magnetic fields or the thunderstorm thermostat do in a similar test. As models of the climate they are silent on the effect of aerosols.
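The residency-time point above can be sketched as a toy one-box model: stratospheric aerosol optical depth decays exponentially after the eruption, and the negative forcing scales with it. The peak optical depth, e-folding time, and forcing coefficient below are illustrative assumptions for the sketch, not values from any GCM:

```python
import math

def aerosol_forcing(t_months, tau_months=12.0, peak_aod=0.15, k=-25.0):
    """Toy one-box volcanic aerosol model.

    Optical depth decays exponentially with e-folding residency time
    tau_months; forcing is assumed linear in optical depth with slope
    k (W/m^2 per unit AOD). All parameter values are illustrative.
    """
    aod = peak_aod * math.exp(-t_months / tau_months)
    return k * aod  # negative value = cooling

# A too-short assumed residency time makes the modeled rebound too quick:
for tau in (8.0, 12.0):
    print(f"tau = {tau:4.1f} months -> forcing at 24 months: "
          f"{aerosol_forcing(24, tau_months=tau):.2f} W/m^2")
```

Getting tau wrong changes the shape of the recovery even if the peak forcing is right, which is the kind of error the comment suggests could explain the model's early dip and quick rebound.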

December 29, 2010 9:13 am

Great article and great comments too. Thank You all. Not to belabor the point or recover old ground for the tenth time, let me just add: expectations and realizations are almost always mismatched, mostly because we humans generally fail to recognize our ignorance.

Matt G
December 29, 2010 9:15 am

Comparing aerosols from the climate impact of the Mount Pinatubo eruption with the background change over recent decades shows only about 0.02 °C per decade. This amount is far too small to explain the cooling between the 1940s and 1970s as being caused by aerosols. The likely explanation is the change in ocean cycles, where the PDO became negative, eventually followed by the AMO. This period also had a decreasing number of El Niños and an increasing number of La Niñas, and during it the AO and NAO also became more negative. This scientific evidence is the inconvenient truth that the alarmists don’t want you to know.

Adam Gallon
December 29, 2010 9:22 am

I have a memory of the boot being applied to this paper previously, noting the advent of a La Niña coinciding with the cooling period attributed to Pinatubo.
So, were NASA’s predictions based upon a “constant temperature” perturbation or a “reduction in temperature” perturbation by the eruption?

Honest ABE
December 29, 2010 9:26 am

Well of course they overestimated the cooling effect – they ramped up the effects of aerosols in their models to explain away 4 decades of cooling in spite of increasing CO2 levels.

Adam Gallon
December 29, 2010 9:26 am

Whoops! Got that arse-about, ’91 an El-Nino year?

Steve Keohane
December 29, 2010 9:33 am

Thanks Willis, great post. For all who label this as nitpicking, does it not seem strange to you yet that every single error shown in every analysis of climate models always goes towards exaggerating the magnitude of the effect of CO2?

NK
December 29, 2010 9:35 am

To Willis at 09:25: I certainly got your point in the post. In fact your response to Stevo and Steve was too polite. The GISS model is used to supposedly prove an extraordinary claim of “catastrophic runaway” AGW, and justify a monumental power grab over the world’s energy supply and use. That’s what the UN/AlGore and Hansen demand. In the real world, failed models are discarded and failed modelers have to get another job. In AGW world, the alarmists demand that we ignore their failed models and give them the power anyway. Skeptics rightly say, no way no how. Prove it first, then we’ll think about what to do.

DirkH
December 29, 2010 9:36 am

The modelers face a dilemma. On one hand, they need to make their models match reality, basically by getting the hindcasting right, which would also have the side-effect of getting events like Pinatubo half way right for a short time into the future.
On the other hand, they need to deliver catastrophic outcomes in the long range, and they can’t do this in a too open manner in the source code, for instance by just increasing the amount of energy in the system over time; it would be too obvious.
When i say they NEED to deliver catastrophic outcomes, i don’t mean that all modelers are dishonest crooks; but there is a natural selection amongst researchers through the grant mechanism that favors those that deliver the catastrophic predictions. This mechanism constantly purges the moderates and prefers the extremists.
So to SURVIVE, a modeler needs to deliver catastrophic predictions while at the same time hindcasting the past correctly. It is NOT important whether a given model proves to be correct after 10 years because grants are not issued according to such evaluations. So it is not important for the survival of the modeler whether what he predicts now comes to pass in 10 years; he can refine his model many times and revise his opinion without punishment.
Nothing in the selection process rewards correct models.

Ted B.
December 29, 2010 9:39 am

In my view, predicting the future, especially many decades in advance, is not science, regardless of whether it is done using tarot cards, astrological charts or “sophisticated computer models”. Such forecasts are not science because they are impossible to refute, at least not without having to wait for many decades, after which time the climate modelers will be long since retired and possibly deceased. Predicting future events in this way is not sound science, but if people believe that it is sound science, then those who would like to make the case that a future catastrophe awaits have a foolproof device for winning any argument; if you challenge their predictions they can simply say, “Prove me wrong”. And of course, you can’t!

Stevo
December 29, 2010 9:41 am

“Because this result does nothing … to establish the fidelity and trustworthiness of models”
It certainly does do something. Only a fool would dismiss it entirely, as you seem to be doing. No model ever predicts the future to 27 decimal places, as you seem to be demanding it should.

John from CA
December 29, 2010 9:45 am

Adam Gallon says:
December 29, 2010 at 9:26 am
Whoops! Got that arse-about, ’91 an El-Nino year?
======
ENSO events:
http://ggweather.com/enso/years.htm

December 29, 2010 9:48 am

Craig Loehle says:
December 29, 2010 at 7:28 am
While it is nice to get the sign of the effect right, as Hansen does, the entire global warming debate is about magnitudes. The volcano effect is particularly important because it is supposedly human-emitted sulfate aerosols (also the cause of the volcano effect) that are masking the “true” anthropogenic effects. If they predict twice as strong a sulfate aerosol effect as occurs with Pinatubo, then the masking they assume (even if the forcing data were valid, which I doubt) is twice too strong and the greenhouse warming effect they are modeling is too strong (to overcome their assumed aerosol forcing).

—————
Craig Loehle,
I think that Hansen’s prediction for the climate effects of the 1991 Pinatubo eruption is not consistent with an assumption of someone like him having a premeditated strategy to overstate the aerosol effect in a model.
I would say that if someone like Hansen wanted to have an effective premeditated strategy in a Pinatubo effect prediction, then he should have purposely and significantly underestimated the aerosol effect in his model prediction. That way one could look at the actual climate effect of Pinatubo being greater than his model predictions, and then AGW supporters could say something like ‘Hey, we need to increase the AGW forcing in models because we underestimated the aerosol forcing as compared to the actuals shown by the Pinatubo example.’
So, to me this means someone like Hansen really thought the aerosol forcing in their models really was going to be that large in reality and so had no premeditated strategy for the model outcome.
John

John from CA
December 29, 2010 9:59 am

LOL, I was sitting here looking at Figure 3 (trying to be objective) and was about to post that the model (as crude as it probably was) did remarkably well (about 50% eyeball-accurate) and matched overall trends, when I realized what it actually shows.
What the GISS temperature is showing is no change or a slight decrease in global temperature from 1990 – 1997. What just happened to the hockey stick?

Houston, we have a problem...
December 29, 2010 10:02 am

@ Stevo
Dude, if the GISS model missed the hindcast of a simple event like Mt Pinatubo by 100%, does that not give you any qualms about accepting the PREDICTIONS (ie, extrapolation) from that model about a very complex event, like what is the global temperature in 2100?

Peter
December 29, 2010 10:10 am

I think you can say without argument that we understand weather better than we understand climate and how each works. Yet most long-range weather forecasts (5 days, say) are generally 50% wrong. If we can’t predict the weather 5 days out, how can anyone presume to be able to tell us what the temperature will be in 100 years, or even 10 years? I think it’s an egotistical attitude to believe we can actually have that kind of effect on climate. Extrapolating that reasoning, I think we should expect an announcement any time now that we can stop tornadoes, even hurricanes. Those are much more localized events and should be much easier to control!
Viva la science!

ge0050
December 29, 2010 10:26 am

Climate modeling is based on an unproven assumption: that while weather is chaotic, the long-term average of a chaotic system is not. This has not been proven, and Lorenz certainly didn’t agree (Trewartha and Horn, 1980, 5th edition, pp. 392-95).
Show the mathematics; show that the long term average of chaos is not chaotic. Until that is done, there is no reason to believe climate is predictable. Infinity divided by N is still infinity, or it is undefined. The average of chaos is chaos, or it is undefined.
Here is what Wikipedia has to say:
“… for chaotic systems, rendering long-term prediction impossible in general. … This behavior is known as deterministic chaos, or simply chaos. … Chaotic behavior can be observed in many natural systems, such as the weather.”
http://en.wikipedia.org/wiki/Chaos_theory
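The commenter asks to “show the mathematics”; short of a proof, one can at least probe the question numerically on a toy system (this is a sketch about chaotic maps in general, not a claim about the climate). For the logistic map in its fully chaotic regime (r = 4), two trajectories from nearby seeds diverge completely, yet their long-run time averages land very close together:

```python
def logistic_mean(x0, r=4.0, n=200_000, burn=1_000):
    """Iterate the logistic map x -> r*x*(1-x) from seed x0 and return
    the time average over n steps after a burn-in. r = 4.0 is the fully
    chaotic regime of this toy model."""
    x = x0
    for _ in range(burn):
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += x
    return total / n

# Nearby seeds: pointwise prediction fails, but both time averages
# settle near the invariant-measure mean of 0.5.
print(logistic_mean(0.2))
print(logistic_mean(0.2000001))
```

Whether this kind of statistical stability carries over from a one-dimensional map to the real climate system is exactly the open question the comment raises.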

Alan S. Blue
December 29, 2010 10:26 am

This comparison has the problem of “Accepting the Premise.”
The “Global Temperature” is a sufficiently fungible number with substantial slack built into it and a limited number of feasible responses anyway. If one is discussing -hindcasting-, it is easy enough to just fail or reparametrize models that refuse to pay homage to known events.
But if you actually intend to test predictive power, you want to be doing it by the gridcell. What did the model predict for the cell that actually contains Pinatubo? One 100 klicks down stream? One on the opposite side of the earth? Compare those results with the actual measurements – and keep track of the squared error instead of just adding positive errors and negative errors together. Which happens to be what everyone is doing when they move on to evaluate the Global Temperature.

R. de Haan
December 29, 2010 10:30 am

The Globe cooled 0.56 Degree Celsius in only four days
Pretty much all of 20th century global warming may have been eradicated within 4 days – the same time that Apollo 11 needed to get to the Moon, as correctly predicted by Jules Verne.
http://motls.blogspot.com/2010/12/globe-cooled-by-056-c-in-four-days.html

Hamish McDougal
December 29, 2010 10:31 am


100% error is hardly “27 decimal places”.
You would be much more credible if you avoided hyperbole.

maz2
December 29, 2010 10:31 am

AGW Progress Report.
…-
“Y2Kyoto: Would You Prefer Your Temperatures Fried Or Boiled?
Daily Bayonet;
New Zealand’s Climate Science Coalition has issued a press release detailing the end of the Kiwi-gate affair.
The outcome is that data published in 2009 by New Zealand’s National Institute of Water and Atmospheric Research (NIWA) entitled ‘Are we feeling warmer yet’ has been abandoned and replaced with real, unadjusted data that shows a picture that warmists don’t want you to see:
NIWA makes the huge admission that New Zealand has experienced hardly any warming during the last half-century. For all their talk about warming, for all their rushed invention of the “Eleven-Station Series” to prove warming, this new series shows that no warming has occurred here since about 1960. Almost all the warming took place from 1940-60, when the IPCC says that the effect of CO2 concentrations was trivial. Indeed, global temperatures were falling during that period.
Well, it’s only New Zealand, right? Well, there’s lots more chewy chart goodness here!”
http://www.smalldeadanimals.com/

December 29, 2010 10:35 am

Loehle says:
December 29, 2010 at 7:28 am
“While it is nice to get the sign of the effect right, as Hansen does…”
I wouldn’t bank on that. The strongest cooling around then came before the volcano erupted, a pattern that repeats itself before all big eruptions.
Andrews says:
December 29, 2010 at 8:02 am
“And not all volcanic eruptions caused temperatures to decrease. One of the largest (El Chichón 1982) was in fact followed by a temperature increase.”
And many more do too. The last nail in the coffin will be showing that the cool summer of 1816 was due to natural variation.

Baa Humbug
December 29, 2010 10:39 am

Stevo says:
December 29, 2010 at 9:41 am

It certainly does do something. Only a fool would dismiss it entirely, as you seem to be doing. No model ever predicts the future to 27 decimal places, as you seem to be demanding it should.

What are you on about Stevo? And calling Willis a fool? I see a smackdown coming your way.

Bill Illis
December 29, 2010 10:50 am

They are even farther off on Krakatoa.
http://img297.imageshack.us/img297/16/krakatoata5.png
Temperatures didn’t actually change much after the eruption (maybe they actually did, but after all the adjustments they have done to the record, they just lost track of the need to have a little dip after the August 1883 eruption).
Furthermore, net solar radiation at ground level is estimated to have declined by 5.3 watts/m2 as a result of Krakatoa, yet the temperature hardly changed at all. (And in Model E they are using an efficacy-of-forcing factor which reduces the impact to 0.1C/watt/m2 – about one-third of the impact of other forcings, but obviously still too high taking into account the temperature impact per forcing change – which also calls into question all the aerosol impacts, as Craig Loehle noted above.)
[I don’t know if this table will show up or if it will only show up for a short period of time – it is the change in net solar radiation at ground level by month from GISS Model E volcanic forcing – Pinatubo peaked at -4.1 watts/m2].
http://data.giss.nasa.gov/work/modelEt/time_series/work/tmp.25_E3SAaeoM20_1_1880_2003_1951_1980-L3AaeoM20A/LTglb.txt
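Taking the numbers quoted in this comment at face value (they are the commenter's figures, not independently verified against Model E documentation), the implied peak coolings follow from a one-line multiplication:

```python
# Back-of-envelope check of the figures quoted in the comment above.
efficacy = 0.1                 # deg C per (W/m^2), quoted forcing factor

pinatubo_peak_forcing = -4.1   # W/m^2, quoted Model E peak for Pinatubo
krakatoa_peak_forcing = -5.3   # W/m^2, quoted decline for Krakatoa

pinatubo_cooling = efficacy * pinatubo_peak_forcing  # about -0.41 deg C
krakatoa_cooling = efficacy * krakatoa_peak_forcing  # about -0.53 deg C
print(pinatubo_cooling, krakatoa_cooling)
```

Krakatoa's implied half-degree dip is what the comment says is hard to find in the adjusted temperature record.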

Gary Pearse
December 29, 2010 10:55 am

Stevo and others,
Please no accolades for getting the sign right! Heck, I predict that the next major eruption will have the same sign and I did this with the common sense model we all are equipped with.

FrankK
December 29, 2010 10:58 am

I may have missed something, but haven’t the GISS temps been “adjusted” anyway, so they don’t represent reality? That is, haven’t the temps been “adjusted” to agree with Scenario B when in reality they are much lower? I don’t trust the GISS temps outright, so this attempted volcanic “fit” is just an irrelevant issue in my opinion.

Brian H
December 29, 2010 11:08 am

Stevo says:
December 29, 2010 at 9:41 am
“Because this result does nothing … to establish the fidelity and trustworthiness of models”
It certainly does do something. Only a fool would dismiss it entirely, as you seem to be doing. No model ever predicts the future to 27 decimal places, as you seem to be demanding it should.

Oooohhkayy! How about to 3 decimal places then? Do I hear 2? 1? How many would you say this bollux by Lacis achieved?

John from CA
December 29, 2010 11:11 am

Willis,
Thanks for all the insight and fun this year.
In return, I thought I’d share an antique family recipe that is best served/dunked in a hot cup of coffee or a good glass of red wine.
Anise Toast
Ingredients:
6 eggs
1 cup sugar
1 cup cake or bread flour
2 tsp Anise seeds
pinch of salt
Instructions:
6 eggs, separated: beat whites stiff and set aside; beat yolks ’til light yellow and then gradually add 1 cup sugar and 2 tsp Anise seeds until thoroughly mixed.
Carefully fold the egg whites into the mixture.
Finally, add 1 cup of either sifted cake or bread flour and a pinch of salt.
Lightly fold into greased pan(s) and bake at 325 degrees F for 25 minutes or until done (use a toothpick to test).
Let stand until cool and cut in the pan into strips (a bread knife works best). Remove from the pan, cover with a dish towel, and let dry overnight on a rack. Toast in the oven the next day (sides slightly brown) and serve.
Happy New Year

tallbloke
December 29, 2010 11:14 am

Ulric Lyons says:
December 29, 2010 at 10:35 am
The strongest cooling around then is before the volcano erupted, a pattern that repeats itself before all big eruptions.
Roger Andrews says:
December 29, 2010 at 8:02 am
“And not all volcanic eruptions caused temperatures to decrease. One of the largest (El Chichón 1982) was in fact followed by a temperature increase.”

I pointed out on Willis’ where did I put that energy thread that the big downswing in incoming energy around 1990 on his graph was long before the eruption.
In August I posted this thread on my blog:
http://tallbloke.wordpress.com/2010/08/05/volcanos-dont-cause-global-cooling/
Which caused a bit of a stir, as Ulric will remember.

Paddy
December 29, 2010 11:14 am

Jack: Tar and feathers are only part of the reward AGWers have earned. You forgot to include the 2×6 rails upon which they get to ride out of town.

johanna
December 29, 2010 11:23 am

Ted B said:
“predicting the future, especially many decades in advance, is not science, regardless of whether it is done using tarot cards, astrological charts or “sophisticated computer models”. Such forecasts are not science because they are impossible to refute ” [at the time or soon after they are made]
—————————————————————
Well said, Ted. You have cut to the chase.

John from CA
December 29, 2010 11:42 am

RE:
John from CA says:
December 29, 2010 at 11:11 am
Note: The pan should be 8″ x 11″ x 2″ or a bit larger but not deeper. Strips are pan width and cut to preference (1/2″, 3/4″, or 1″)

December 29, 2010 11:43 am

If this is an example of how good the models are, I’d really like to see one of the ones that failed.

John F. Hultquist
December 29, 2010 11:51 am

Here’s a prediction; let’s see if they get the sign right.
On Tuesday the 28th of December of 2010 our local temperature was 42° F in the middle of the afternoon. The NWS predicted that two days out — early Friday AM — the temperature will be 6° F (about minus 15° C).
East of here (Spokane area) is to experience a “blizzard” today.
West of here (Seattle area) is to dry out and cool off.
As for Willis’ post I am – as I think he is — most surprised by the choice of the ‘proof point’ chosen by Dr. Lacis. Eruption and disruption sound alike and are alike.
[rant] Is there a prediction they made some years ago that Arctic Ocean ice would be gone, that NYC highways would be drowned, that Lake Chad would be lower or higher, that Australia would be totally dry and need desalinization plants, that Brett Favre would still be playing football? [/rant]
You can look the rest of the above up:
http://chiefio.wordpress.com/2010/12/27/lake-chad-is-rising/

Mark.r
December 29, 2010 11:54 am

They still say it’s going to get warmer and drier.
A CSIRO scientist is warning authorities not to interpret floods in eastern Australia and snowstorms over Europe and North America as signalling the end of global warming.
NASA research shows that 2010 is the hottest year on record.
Barry Hunt, an honorary research fellow at the CSIRO’s Marine and Atmospheric Research unit, says global temperatures will continue to rise even if there is another cold snap.
“Over the last century, the global mean temperature has gone up by 0.8 degrees [Celsius], and that’s the extent of the global warming, but at the same time, we also have natural climatic variation, and you don’t get one or the other, you get them both. They interact,” he said.
“I found that even up to 2040 and 2050, you can still get cold snaps under greenhouse warming.
http://www.weatherchannel.com.au/main-menu/News/Breaking-News/Floods,-freeze-not-the-end-of-global-warming–CSIR.aspx
Shouldn’t each cold snap be warmer than the last one under greenhouse warming?

Douglas
December 29, 2010 11:57 am

Baa Humbug says:
December 29, 2010 at 8:06 am
Willis
I’m afraid your invitation, though not fallen on deaf ears, will not be accepted——
But I’m afraid your invite will never be accepted because as they say in the horse training classics.
A horse never forgets, but he forgives.
A donkey never forgets and never forgives.
You are not dealing with horses 😉
————————————————————–
Baa Humbug. Ha ha! How I enjoyed that comment. Magnificent!
Douglas

Douglas
December 29, 2010 12:14 pm

Over at Judith Curry’s climate blog, the NASA climate scientist Dr. Andrew Lacis has been providing some comments. He was asked:
Please provide 5- 10 recent ‘proof points’ which you would draw to our attention as demonstrations that your sophisticated climate models are actually modelling the Earth’s climate accurately.
——————————————————————–
Willis. What about the rest? Did he provide the other 9 ‘recent’ proof points?
Just wondering.
Douglas

Dr A Burns
December 29, 2010 12:15 pm

It’s hardly rocket science ‘predicting’ that the effect of one volcano of a given size will be the same as the next.
Why don’t they try something really difficult, like forecasting whether it will rain tomorrow ? From what I’ve seen, it would be more accurate to simply say the weather tomorrow will be similar to the weather today.

MaxL
December 29, 2010 12:28 pm

Interesting post on model verification, Mr. Eschenbach. If I may, I’d like to offer some advice that may help if you wish to do further verification studies. I have been involved in model and forecast verification for many years, and I developed one of the first forecast verification systems in our region. Forecasters get verified to death, to the point of sometimes being afraid to put out their own forecast thoughts lest they disagree with the models. From experience, and from the comments here, you can see it is very difficult to actually define what is “good” or “bad” when it comes to a forecast. What may be good to one person may be poor to another. And parts of the forecast may be “good” while other parts are “bad”. This is one of the reasons that so few papers are done on forecast verification. I did a paper a while back which included model verification, and I had to do a lot of explaining as to what exactly I was verifying and why.
So my suggestion is that you be very specific, from the start, as to exactly what you are verifying and what your criteria of good/bad performance are. Then readers know exactly what your results are showing and cannot claim that you have misleading conclusions. They may still argue with your criteria of good/bad, which is always interesting when it comes to verification. I hope this helps somewhat.
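MaxL's advice (declare up front exactly what is being verified and against what standard) can be made concrete with one common choice: a mean-squared-error skill score against a stated reference forecast. The anomaly series below are hypothetical numbers invented for the sketch:

```python
def mse(pred, obs):
    """Mean squared error between a forecast and observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred)

def skill_score(pred, obs, reference):
    """MSE-based skill score: 1 is a perfect forecast, 0 means no
    better than the declared reference, negative means worse."""
    return 1.0 - mse(pred, obs) / mse(reference, obs)

# Hypothetical monthly anomalies (deg C): observations, a model, and a
# naive "persistence" reference that repeats the last observed value.
obs = [0.1, 0.0, -0.2, -0.4, -0.3, -0.1]
model = [0.0, -0.1, -0.5, -0.8, -0.6, -0.2]
persistence = [0.1] * len(obs)

print(skill_score(model, obs, persistence))
```

Declaring the metric and the reference in advance, as MaxL suggests, means readers can argue with the criteria but not with what was measured.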

Jimbo
December 29, 2010 12:37 pm

Talking of models and predictions, here’s another from NASA. Don’t they realise that there are such things as TIME and OBSERVATIONS? Today the warmists are clutching at the “global warming causes NH cooling” straw.
June 4, 1999
“Warm Winters Result From Greenhouse Effect, Columbia Scientists Find, Using NASA Model”
http://www.sciencedaily.com/releases/1999/06/990604081638.htm
More failure:
http://www.c3headlines.com/predictionsforecasts/

December 29, 2010 12:42 pm

@R. de Haan says:
December 29, 2010 at 10:30 am
“The Globe cooled 0.56 Degree Celsius in only four days”
That’s climate disruption for you. If you look between 50-250mb it has gone up by a similar amount. Seems like there was a crack in the magnetosphere on the 28th;
http://www.spaceweather.com/
http://hirweb.nict.go.jp/sedoss/solact3/

December 29, 2010 12:43 pm

@R. de Haan says:
December 29, 2010 at 10:30 am
http://discover.itsc.uah.edu/amsutemps/

Julian in Wales
December 29, 2010 12:58 pm

Almost every day is a damning day for the AGW theorists; this is one in a long line of good articles that further discredit the “science” of anthropogenic climate change. It really is looking like a busted flush. The question is, will we have an Emperor’s New Clothes moment, or are we so far into politics that the establishment will go on and on pretending even after the crowds have all realised the king is naked?

December 29, 2010 1:14 pm

As Fritz RW Dressler said: “Predicting the future is easy. It’s trying to figure out what’s going on now that’s hard.”

Theo Goodwin
December 29, 2010 1:22 pm

Baa Humbug says:
December 29, 2010 at 8:06 am
Now, there may be an explanation for that poor performance that I’m not seeing. If so, I invite Dr. Lacis or anyone else to point it out to me.
“Dear Willis
I’m afraid your invitation, though not fallen on deaf ears, will not be accepted.
The rest of us love the way you crack that whip. It’s a veritable work of art. And you explain how you crack that whip enabling us to learn, rather like the demonstrations at agricultural shows.”
If the sentiment expressed here has become widespread, then science is surely dead and maybe all of Western culture with it. Willis did not crack a whip. He merely pointed out failure where some scientist had claimed success. A genuine scientist expects failure in most everything that he/she or other scientists do. A genuine scientist has among his/her chief virtues Humility. If we expect failure most of the time, as we must, there is no harshness in pointing out another scientist’s failure. By contrast, so-called climate scientists have never experienced failure, as best I can tell. Does anyone have a description from Mann, Hansen, you name it, of one or more failures that he experienced? I believe not. As Hansen revises his temperature data, he takes the attitude that criticism of his work amounts to Redneck Rabble Rousing and probably sedition, and should be punished accordingly.
Let me crack a whip for you. Because we should assume that the scientist who created the graph that Willis discussed understands his own creation, and because that scientist claims for public consumption that his work is a success, yet that work is clearly a failure, as Willis explained, then that scientist is engaging in the chief moral error of all so-called climate science: presenting to the public work that is in some important way deficient as science yet calling it “successful science.” That is moral error. It is a great moral failing in a scientist.

Ben D.
December 29, 2010 1:25 pm

MaxL says:
December 29, 2010 at 12:28 pm
Very interesting, and I took note. Although in the end it is entirely subjective what makes a model successful or not. Oftentimes even the experts cannot be the ones to decide that; it is the people paying for it who decide. In the case of climate models, we obviously have a conflict of interest where good results are not wanted, but mainly skewed results to show support for various environmental agendas… which are of course paid for by politicians who want these agendas to go through… a vicious cycle which leaves science behind and lets the occult in. Hence, models which show global warming to be happening regardless of what reality shows.

thingadonta
December 29, 2010 1:30 pm

Governments aren’t in the business of describing natural variation, they are in the business of imposing order. Never been any different.

Theo Goodwin
December 29, 2010 1:30 pm

MaxL says:
December 29, 2010 at 12:28 pm
“So my suggestion is that you be very specific, from the start, as to exactly what you are verifying and what your criteria of good/bad performance is.”
MaxL, your general comments are appreciated. But don’t you think Willis nailed this one, as he explains in the following:
“So their model predicted a large event, a “three-sigma” cooling from Pinatubo.
But despite their prediction, it didn’t turn out like that at all. Look at the red line above showing the actual temperature change. If you didn’t know there was a volcano in 1991, that part of the temperature record wouldn’t even catch your eye. Pinatubo did not cause anywhere near the maximum temperature swing predicted by the GISS model. It was not a three-sigma event, just another day in the planetary life.”
That degree of divergence from the prediction surely demands explanation. For the scientist to offer this graph to the public as a success suggests that he is not paying attention, for whatever reason.

DocMartyn
December 29, 2010 1:40 pm

Climate is to science what Vista is to operating systems.

mac
December 29, 2010 2:02 pm

I’m surprised they didn’t claim that global warming was worse than they thought since the eruption couldn’t counter the effects of CO2 as they predicted.

Alex
December 29, 2010 2:12 pm

There is no evidence there was a drop in temperature when we are talking about tiny decimal values.
There is no evidence that the drop in temperature – if real – was due to the Mount Pinatubo eruption.
Besides – if we assume that temperature was well measured – why did Mount Pinatubo supposedly make some parts of the world colder and others hotter?
NASA sells us crap, and Willis answers with their crap.

MaxL
December 29, 2010 2:21 pm

Theo Goodwin says:
December 29, 2010 at 1:30 pm
MaxL says:
December 29, 2010 at 12:28 pm
“So my suggestion is that you be very specific, from the start, as to exactly what you are verifying and what your criteria of good/bad performance is.”
MaxL, your general comments are appreciated. But don’t you think Willis nailed this one, as he explains in the following:
“So their model predicted a large event, a “three-sigma” cooling from Pinatubo.
———
Yes, you are absolutely correct: Willis showed that reality did not match the three-sigma cooling the model predicted, and that is what the conclusion should reinforce, if that was the original intent. But it is not up to the author to decide whether this is then a universally bad model…and does not deserve a cookie. Whether this is a good model or a bad model depends on the individual user’s needs. In some cases it may be good enough, where other cases may demand much more accuracy. Maybe this is a bit nit-picky, but that is what science research should be about.

Baa Humbug
December 29, 2010 2:24 pm

Theo Goodwin says:
December 29, 2010 at 1:22 pm
Thnx for taking the time to respond to my post. You say..

Willis did not crack a whip. He merely pointed out failure where some scientist had claimed success.

That’s a matter of perception. I’ve followed Willis’s challenges to scientists for a while now and tried to encourage them to engage with him. They don’t. At best, one may have a go at Willis for posting at “non-science” forums like WUWT, but they just don’t engage him on the substance of his articles.
TO THEM, Willis is cracking a whip (which makes a horse unused to a whip crack bolt).
I used a whip crack as an analogy for direct no BS critique and engagement in the science presented. This is what Willis does, to the pleasure and education of the rest of us. But as you say…

“By contrast, so-called climate scientists have never experienced failure, as best as I can tell. Does anyone have a description from Mann, Hansen, you name it, of one or more failures that he experienced. I believe not.”

In other words, these people, all belonging to the same club, have never critiqued each others work as Willis does. They are not used to the whip cracking.
In fact they often support each others work pig-headedly despite numerous studies pointing out their flaws. The hockey stick being a prime example, and this post by Willis yet another.
So we are in agreement then, no?

DocMartyn
December 29, 2010 2:31 pm

If there was no drop (or only a one-sigma drop) in the temperature following a known increase in aerosols, what are we to make of the models including a huge level of aerosol cooling? Indeed, in many models aerosol dimming balances GHG increases.

DonS
December 29, 2010 2:34 pm

Robert E. Phelan says:
December 29, 2010 at 6:30 am. Government work, indeed. Measure it with a micrometer, mark it with a grease pencil, cut it with an axe.

Theo Goodwin
December 29, 2010 3:06 pm

Baa Humbug says:
December 29, 2010 at 2:24 pm
Do not label what Willis is doing as whip cracking. Willis is doing ordinary science. Pointing out errors in other scientists’ work is part of the duties of a scientist. Presenting your work fully and openly as a means of helping others find errors in it is the fundamental duty of a scientist. Scientific Method demands humility of its practitioners. What the Climategate folks did, especially Jones and Mann, is totally unacceptable. Now you are blaming Willis because he does not refrain from criticizing scientists in order to entice them into a conversation? Conversation about what? If he can never point out their mistakes, there is no scientific conversation.

December 29, 2010 3:07 pm

Willis,
Nice article.
Thanks.
John

KV
December 29, 2010 3:34 pm

Good post again, Willis. I would appreciate your comments, and those of other posters, on something slightly OT but very much related. 1988 was reportedly the hottest summer in the US for 52 years, and James Hansen, without the backing of NASA, had put his reputation and credibility on the line by appearing before a US Committee to propound his alarmist AGW views.
1989, 1990 and early 1991 showed significant cooling in many parts of the world before the June 15, 1991 eruption of Mt. Pinatubo, which no doubt exacerbated the fall in temperatures. Check the clickable map on the GISS Surface Temperature Analysis site to confirm this for yourselves. The surface stations in my home state of Tasmania, Australia show the falls particularly well.
With the cooling that was already underway, Hansen and the other scientists and believers supporting the AGW hypothesis must have been put under extreme pressure by critics within and outside NASA. Luckily for them, the eruption seems to have allowed them to get away with ignoring the fact that cooling started in 1989, and with falsely claiming Pinatubo as the main, or even entire, reason for the fall. Most temperature graphs of the period, even on Dr. Roy Spencer’s site, perpetuate what is IMHO a somewhat misleading position.
Significantly, following this four-year plunge in temperatures, what E.M. Smith on his excellent Musings from Chiefio website aptly termed “The Great Dying of Thermometers” took place. In Tasmania, the number of stations from which data was used fell from the mid-twenties to just two, Hobart and Launceston Airports.
Many may well see this period as the real start of the bastardisation of the raw data.

R. de Haan
December 29, 2010 3:48 pm
R. de Haan
December 29, 2010 3:50 pm

From Joe Bastardi:
WEDNESDAY, DEC. 29
CAN I BOTHER YOU FOR A MINUTE?
First of all, watch closely, boys and girls, how the core of the worst cold the rest of the winter is southeast of where it has been. The thaw you see now in the northwest is not the end of winter, but the end of that part of the winter… more back and forth now for the UK and Ireland, which is fun and certainly not done, but the worst is over relative to averages. However, over the heart of the continent, you’ve seen bad, and you may again see just as bad (I don’t have the heart to say worse).
I want to ask you a question. If you were in a fight and thought your opponent was finished, then all of a sudden he hit you with some thundering shots, wouldn’t you at least think that the fight was not finished? At the least… okay? Common sense? Now even though I BELIEVE this is the start of the cooling over the next 20-30 years in a jagged fashion down, so we are back in the late 1970s according to satellite temps (again, all the adjustment to temps being made by people is in the pre-satellite era, where they are free to do whatever they want with no current measuring crosscheck, which should also make you wonder), I am not willing to say, okay you guys are cooked. You know why? Because even though I think they are, I understand that no fight is done until it’s over and one side is driven from the field. So my personal feeling that it’s over really doesn’t matter; what matters is that you have people that are ignoring major physical realities, either by being deceptive or ignorant of what temperature really is… a measure of energy! And the fact is, the higher the average temp is, the more the variance in temp has an effect on the global energy budget. I have talked with Joe D’Aleo about a work-up of this to drive home the point about the blocking. It takes much less energy to warm a gas 10 degrees from the surface up when the average temp is, let’s say, 0, than it does to cool the atmosphere a few degrees where the normal temp is 40. And when we try to quantify the amount of energy being lost in the tropical Pacific by the cooling there, it BLOWS AWAY the warming in the Arctic. It’s an effective governor on the Earth’s temps and is a precursor to what will be a major switch in the Northern Hemisphere… and once that happens, with the land masses, the temps will really fall. Anyone been watching the Southern Hemisphere, where a lot of the first warning shots started to be fired a few years ago?
You can make all the excuses you want, but if you are going to argue the contraction of Northern Hemisphere sea ice is a sign of warming, since the continents are warmed because of previous ocean cycles, then you can’t walk away from the reality of what has to be going on in the hemisphere with the most ocean, and hence a higher energy consideration where sea ice is increasing! Only in a world of fantasy can you think you can have it both ways!!! And the physics of the situation argues against you trying to use the temperature as a metric to determine whether the climate is actually warming in a permanent fashion, or there is simply a distortion of where temperatures are being measured higher, since the amount of energy DECREASES rapidly with temp loss. It takes next to nothing to raise temps that much in the Arctic; it takes a heck of a lot to drop them in the tropics!!!
But all this being said, you can see the crash already starting as forecast here back in the spring on the temps. So if you want to use temps as the metric, I say the fight is still on, and on big time and these people saying it’s over, or explaining that a fight back is a sign they are winning, are either being deceptive or delusional to the idea that they are absolutely right and what is happening is because of what they say. At the very least, it’s a sign that we should let it play out.
The real thing we should be looking at is if there is an accumulation of energy in the Earth’s atmosphere system. Simple temperatures given equal weight energy wise to low and high values would be laughed out of any classroom if one is trying to quantify the total energy! It’s basic. Why do you think there is weather? Because of the constant fight to even out imbalance. Why is there overrunning? Warmer, more moist air with more energy cannot push out a cold, stable air mass with less energy, so it’s forced up and over. The molecules get more excited when they are warmed… etc., etc.
In a way, the whole thing is a bit amusing, if it wasn’t that it could be enslaving.
Ciao for now.

JJB MKI
December 29, 2010 4:28 pm

I feel sorry for the little electron people in the GISS mainframe. Not only did they have to suffer a long hard winter after the simulated Pinatubo eruption, they had to do so in the face of catastrophic climate change after it was arbitrarily decided by their gods that CO2 was the most significant forcing in their world. To compound their misery, even if they halt their simulated GHG emissions, the warming bias built into their universe will burn them all to a crisp in a few hundred years anyway. Won’t somebody think of their poor simulated children’s children?
Seriously, was the Pinatubo justification really the best they could come up with?

adrian smits
December 29, 2010 4:43 pm

Why don’t we just look at the record for signs of forcing? Since the end of the last mini ice age we have warmed about eight-tenths of a degree. Our carbon dioxide has gone up about 50% in the last 150 years, so we should be seeing half the warming we would get with a doubling of CO2. So I am assuming about an additional eight-tenths of a degree of warming with a full doubling of our CO2. This theory assumes no natural variation in our climate and no tipping points either. Let’s just pretend they cancel each other out for the sake of this discussion. Somehow eight-tenths of a degree isn’t that scary.
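As a hedged aside on the arithmetic above: the comment scales the warming linearly with CO2 concentration, whereas the more standard approximation has CO2 forcing growing with the logarithm of concentration, so a 50% rise is somewhat more than half a doubling. A minimal sketch, assuming the observed warming is the comment’s roughly eight-tenths (0.8 °C) and the CO2 rise is 50% (the commenter’s figures, not sourced measurements):

```python
import math

# Figures from the comment above (not sourced measurements):
observed_warming_c = 0.8  # degrees C since the end of the mini ice age
co2_ratio = 1.5           # a 50% rise in CO2 concentration

# Under the common logarithmic-forcing approximation, a 50% rise delivers
# this fraction of the forcing from a full doubling:
fraction_of_doubling = math.log(co2_ratio) / math.log(2.0)  # ~0.585

# Implied warming for a full doubling, if (per the comment's simplifying
# assumption) all the observed warming were attributed to CO2:
implied_warming_per_doubling = observed_warming_c / fraction_of_doubling

print(f"fraction of a doubling: {fraction_of_doubling:.3f}")
print(f"implied warming per doubling: {implied_warming_per_doubling:.2f} C")
```

On these assumptions the implied per-doubling figure comes out a bit under 1.4 °C rather than 1.6 °C, which does not change the comment’s conclusion.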

kim
December 29, 2010 5:18 pm

Whip is metaphor. Whatever that wicked sound, it’s not in their ken.
==========

Alex Heyworth
December 29, 2010 5:30 pm

Even if Dr Lacis was correct, and the GISS model’s “prediction” (shouldn’t it really be “postdiction”?) of the response to the Pinatubo eruption was accurate, what does it tell us about the accuracy of the current GISS model? Nothing. I’m sure there have been thousands, if not hundreds of thousands, of changes made to the GISS model since 1992. Dr Lacis would no doubt say that they have improved the model. I say this begs the question.
This is the dilemma eternally faced by those who model natural phenomena. The temptation to “improve” the model always takes precedence over the need to keep the model static to verify its accuracy.

MikeA
December 29, 2010 7:17 pm

It is also worth noting that the IPCC projections are based on something like 100% error in the models: those ensemble predictions give warming of around 3°C plus or minus 2°C (or roughly that). So you have close to a 100% error without even trying. Aren’t numbers great!
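Taking the comment’s own round numbers at face value (a 3°C central estimate with a ±2°C spread; these are the commenter’s figures, not sourced values), the implied relative uncertainty can be made explicit:

```python
# Relative uncertainty implied by the figures quoted above: a central
# estimate of 3 C with a spread of +/- 2 C (the commenter's round numbers).
central = 3.0  # degrees C
spread = 2.0   # degrees C

relative_uncertainty = spread / central
print(f"relative uncertainty: {relative_uncertainty:.0%}")

# The stated range spans 1 C to 5 C, so the high end is 5x the low end.
low, high = central - spread, central + spread
print(f"range: {low:.0f} C to {high:.0f} C, high/low ratio {high / low:.0f}x")
```

On those numbers the spread is about two-thirds of the central estimate, and the high end of the range is five times the low end.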

December 29, 2010 7:31 pm

I am not familiar with the details of the NASA model that was used to predict the effects of a volcano. However, even if the temperature had agreed totally with the effects on temperature predicted by the model, as shown in Figure 2 as a blue line, it would not mean that the model predicts anything else with the same degree of accuracy. Validating a climate model designed to make long-term predictions of temperature on the basis of a relatively short-term temperature change from a single atmospheric event is questionable at best. Even beyond that, the weighting factors used to predict the cooling are not the same as those for heating from CO2. If this is the best example of validation of the NASA computer model that Dr. Lacis can suggest, he should start looking for another. This is clearly a bad choice. Or perhaps there aren’t any.

MAK
December 29, 2010 10:10 pm

The GISS model seems to estimate an almost flat response to the strong El Niño that was going on during the Pinatubo eruption. If you model the real El Niño response that is missing (comparing this to the response to the 97/98 El Niño, for example), you can see that the GISS model strongly underestimates the cooling that occurred.
The same applies to the El Chichón eruption, during which the strongest El Niño of the satellite era was occurring.
The real peak cooling from Pinatubo is almost 1 degree.

Baa Humbug
December 29, 2010 10:19 pm

Theo Goodwin says:
December 29, 2010 at 3:06 pm
Willis is doing ordinary science. Pointing out errors in other scientists’ work is part of the duties of a scientist. Presenting your work fully and openly as a means of helping others find errors in your work is the fundamental duty of a scientist. Scientific Method demands humility of its practitioners. What the Climategate folks did, especially Jones and Mann, is totally unacceptable.
Agreed, but I didn’t need a lesson on the scientific method.

Now, you are blaming Willis because he does not avoid criticism of scientists for the purpose of enticing them into a conversation? Converation about what? If he can never point out their mistakes, there is no scientific conversation.

No, re-read my post at 2:24pm where I say “TO THEM Willis is cracking a whip.” That’s their perception IMO. I used that particular analogy. Use whatever you like, in any case we end up with the following…

“I posed some questions to Andrew Lacis over on Judith’s fine blog, but all that got proven was that I do have a mystery power. Not as good as Superman, I can’t see through walls, but my scientific questions have been scientifically proven to make climate scientists disappear. Neat, huh?”

That was Willis in a reply in the “Where Did I Put That Energy” thread:
Willis Eschenbach says:
December 23, 2010 at 4:45 pm
RobB says:
December 23, 2010 at 5:31 am
So, yes, Willis is having a scientific conversation, Theo, BUT THEY ARE NOT JOINING HIM.
I am as big a Willis fan as any. As I say at 2:24pm:
“I’ve followed Willis’s challenges to scientists for a while now and tried to encourage them to engage with Willis. They don’t.”
Maybe you’ve got some bright ideas on how to get these people to engage Willis. I’m all ears.

WA777
December 29, 2010 10:31 pm

Perhaps Dr. Andrew Lacis did not adjust the effects of the SO2 and other aerosol parameters. As a refresher, here are a few references, including how the models were tuned to historical data by adjusting the parameters for SO2.
Lindzen, Richard S. 2007. Taking GreenHouse Warming Seriously. Energy & Environment 18, no. 7 (12): 937-950. doi:10.1260/095830507782616823.
https://www.cfa.harvard.edu/~wsoon/ArmstrongGreenSoon08-Anatomy-d/Lindzen07-EnE-warm-lindz07.pdf

Introduction. “In science, there is an art to simplifying complex problems so that they can be meaningfully analyzed. If one oversimplifies, the analysis is meaningless. If one doesn’t simplify, then one often cannot proceed with the analysis.
When it comes to global warming due to the greenhouse effect, it is clear that many approaches are highly oversimplified. This includes the simple ‘blanket’ picture of the greenhouse effect shown in Figure 1. We will approach the issue more seriously in order to see whether one can reach reasonably rigorous conclusions. It turns out that one can….
(On tuning the models, page 9) “Despite this, it was still necessary to arbitrarily remove half the anthropogenic greenhouse forcing. The need to cling to the high sensitivities is readily explained by Thorpe’s insistence on policy relevance. Without high sensitivity, this would be greatly diminished. Indeed, to maintain the ominous projections, it is necessary to assume that the aerosol cancellation will soon disappear (Wigley and Raper, 2002). However, these arguments are only possible if one chooses to ignore the fact that observations are failing to display the distribution of warming that is associated with greenhouse warming….
(Page 11) “Ultimately, however, one must recognize how small the difference is between the estimation that the anthropogenic contribution to recent surface warming is on the order of 1/3, and the iconic claim that it is likely that the human contribution is more than ½. Alarm, we see, actually demands much more than the iconic statement itself. It requires that greenhouse warming actually be larger than what has been observed, that about half of it be canceled by essentially unknown aerosols, and that the aerosols soon disappear. Alarm does not stem directly from the iconic claim, but rather from the uncertainty in the claim, which lumps together greenhouse gas additions and the canceling aerosol contributions (assuming that they indeed cancel warming), and suggests that the sum is responsible for more than half of the observed surface warming.”
To be sure, current models can simulate the recent trend in surface temperature, but only by invoking largely unknown properties of aerosols and ocean delay in order to cancel most of the greenhouse warming (Schwartz et al, 2007). Finally, we note substantial corroborating work showing low climate sensitivity.

WA777
December 29, 2010 10:36 pm

Wigley, T. M. L., and S. C. B. Raper. 2002. Reasons for Larger Warming Projections in the IPCC Third Assessment Report. Journal of Climate 15, no. 20 (October 15): 2945-2952.
http://journals.ametsoc.org/doi/abs/10.1175/1520-0442%282002%29015%3C2945%3ARFLWPI%3E2.0.CO%3B2

Abstract: Projections of future warming in the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report (TAR) are substantially larger than those in the Second Assessment Report (SAR). The reasons for these differences are documented and quantified. Differences are divided into differences in the emissions scenarios and differences in the science (gas cycle, forcing, and climate models). The main source of emissions-related differences in warming is aerosol forcing, primarily due to large differences in SO2 emissions between the SAR and TAR scenarios.
For any given emissions scenario, concentration projections based on SAR and TAR science are similar, except for methane at high emissions levels where TAR science leads to substantially lower concentrations. The new (TAR) science leads to slightly lower total forcing and slightly larger warming. At the low end of the warming range the effects of the new science and the new emissions scenarios are roughly equal. At the high end, TAR science has a smaller effect and the main reason for larger TAR warming is the use of a different high-end emissions scenario, primarily changes in SO2 emissions.

WA777
December 29, 2010 10:39 pm

IPCC Task Group on Data and Scenario Support for Impact and Climate Assessment (TGICA). 2007. General Guidelines On The Use Of Scenario Data For Climate Impact And Adaptation Assessment. IPCC, June.
http://www.ipcc-data.org/guidelines/TGICA_guidance_sdciaa_v2_final.pdf
Aerosols: AOGCM experiments which account for both the negative forcing associated with historically observed concentrations of aerosols and greenhouse gas forcing over the same period have achieved a close correspondence of global mean temperature changes compared to observations (e.g. Mitchell et al., 2001 – Figure 10). These experiments have also been projected into the future on the basis of the assumed concentrations of sulphate aerosols, usually under the assumption of the IS92a or SRES scenario SO2 emissions profiles. The effect on climate when aerosols are included, compared to experiments forced by greenhouse gases only, is to suppress global warming. However, none of the SRES emissions scenarios shows regional SO2 concentrations as high as for the IS92a scenario, and by the end of the 21st century all scenarios show that the effects of greenhouse gas forcing dominate over the aerosol effect.

randomengineer
December 29, 2010 10:48 pm

Willis —
I don’t get what the problem is. Seems to me that what you want the model to do is know precisely what compounds were emitted and in what volumes. From there as per George E.P. Box (all models are wrong but some are useful) the model shows that it’s useful. Did it get the eruption perfect? No. Would I expect it to? No. Dr Lacis says the signs were right which I take as being on the right track.
Stevo and others —
I really sorta doubt that NASA sat around, pronounced the model “good enough”, and stopped there; surely they set about doing tweaks and hacks and whatnot so as to make it better than what it was. From what I can tell from what I read, models are always in a state of development. This should be no different.
Willis —
I’m all for this sort of examination, but I’m hesitant to pronounce a model as useless. In fact, I’d like to see about 3x more money spent on these things (as per the Judith Curry site threads), properly validated etc., so that we can determine whether or not man is having an adverse effect as claimed. The effort to understand climate is going to require a lot of modeling effort one way or another. This examination, if it really is pointing out weakness as claimed, is helpful to the overall effort. The question is whether or not this has already been superseded by NASA and/or other shops. Any idea?

WA777
December 29, 2010 10:50 pm

Johnston, Jason Scott, and Robert G. Fuller, Jr. 2010. Global Warming Advocacy Science: a Cross Examination. Research Paper. University Of Pennsylvania: University Of Pennsylvania Law School, May.
http://www.probeinternational.org/UPennCross.pdf
Addresses the Global Circulation Model parameters conflict, i.e. tinkering with the parameters (esp. aerosols) to force the models to simulate 20th-century “observations”. Since the tinkering is unique to each model, there is no way to tell which “climate sensitivity parameter” is most accurate.

(Page 29) “As recent work has shown, if the (negative) aerosol forcing turns out to be much smaller than assumed, then the ensemble of GCM’s used by the IPCC would have to have a much larger climate sensitivity (with the mean moved up a full 2 degrees centigrade) in order to remain consistent with observations. On the other hand, if the negative aerosol forcing is even larger (more negative), then the ensemble GCM’s would fail on the other side, simulating too little warming. This “mismatch” between observed and simulated 20th century warming would mean that “current agreement between simulated and observed warming trends would be partly spurious, and indicate that we are missing something in the picture of causes and effects of large scale 20th century surface warming.”
(Page 30) “That the models are essentially using aerosol parameterizations to offset variations in presumed climate sensitivity is far from an innocuous technical detail. As Richard Lindzen has explained, because a high climate sensitivity implies (other things equal) a big CO2-induced warming, in order to have significant policy relevance climate models “cling” to high climate sensitivities. And yet as just discussed here, the sensitivities are so high that the models simulate too much 20th century warming. To get a better reproduction of past temperatures, the models cancel out about half of simulated warming by imposing a compensating assumption about the cooling effect of aerosols. But then apparently to preserve “alarm” about the future, climate models assume that the aerosols will soon disappear. Even if the models are correct that aerosols have had a net cooling effect in the twentieth century, this series of parameter adjustments and assumptions about future changes in aerosols can hardly inspire confidence in climate models.”

(Page 30, Footnote 122) “Recent work suggests that aerosol cooling is so significant that reduction in aerosols due to pollution control and attendant solar brightening was responsible for two thirds of the warming that occurred since the mid-1980’s over Switzerland and Germany. (Philipona, 2009)”

Philipona, Rolf, Klaus Behrens, and Christian Ruckstuhl. “How declining aerosols and rising greenhouse gases forced rapid warming in Europe since the 1980s.” Geophysical Research Letters 36 (January 20, 2009): 5 PP.
http://www.agu.org/pubs/crossref/2009/2008GL036350.shtml
“Mainland Europe’s temperature rise of about 1°C since the 1980s is considerably larger than expected from anthropogenic greenhouse warming. Here we analyse shortwave and longwave surface forcings measured in Switzerland and Northern Germany and relate them to humidity- and temperature increases through the radiation- and energy budget. Shortwave climate forcing from direct aerosol effects is found to be much larger than indirect aerosol cloud forcing, and the total shortwave forcing, that is related to the observed 60% aerosol decline, is two to three times larger than the longwave forcing from rising anthropogenic greenhouse gases. Almost tree [sic, “three”] quarters of all the shortwave and longwave forcing energy goes into the turbulent fluxes, which increases atmospheric humidity and hence the longwave forcing by water vapour feedback. With anthropogenic aerosols now reaching low and stable values in Europe, solar forcing will subside and future temperature will mainly rise due to anthropogenic greenhouse gas warming.”

BlueIce2HotSea
December 29, 2010 11:01 pm

Mount Pinatubo released 20 Million metric tons of SO2 in 1991. That amount is less than China’s annual release of SO2 going back ten years.
China currently has an aggressive desulphurization program. I wonder what Dr. Lacis’s prediction is of the warming impact should the program prove successful.

WA777
December 29, 2010 11:07 pm

Apparently the EPA wishes to assure Global Warming will continue for the indefinite future (together with their jobs and power). On the one hand, they wish to reduce CO2 emissions to forestall AGW. On the other hand, they wish to reduce SO2 emissions which reduce global warming.
EPA. 2009. SO2 Reductions and Allowance Trading under the Acid Rain Program. Governmental. Clean Air Markets. April 14. http://www.epa.gov/airmarkets/progsregs/arp/s02.html
The link above provides an overview of how reductions in SO2 emissions are to be achieved under the Acid Rain Program. Below is a report of the EPA doubling down on Command-And-Control in 2010.

The Clean Air Act Amendments of 1990 set a goal of reducing annual SO2 emissions by 10 million tons below 1980 levels. To achieve these reductions, the law required a two-phase tightening of the restrictions placed on fossil fuel-fired power plants.
On July 6, 2010, the United States Environmental Protection Agency (EPA) proposed new rule, called the “Transport Rule”, which will require significant reductions in sulfur dioxide (SO2) and nitrogen oxide (NOx) emissions that cross state lines into neighboring states. The new Transport Rule will replace a Bush-era regulation called the Clean Air Interstate Rule (CAIR), which a federal court reversed, but allowed to remain in place until EPA prepared a replacement rule. This new rule applies to power plants in 28 states, including Michigan.
EPA developed this new regulation because SO2 and NOx react in the atmosphere to create fine particles and ground-level ozone in states downwind of the sources, causing jurisdictions downwind of the sources to fail to meet compliance with fine particulate matter (PM) and ozone national air quality standards (NAAQS). If a state allows activity that impairs another state to comply with NAAQS, then under the Clean Air Act it must institute plans to prevent it (known as the “good neighbor” provision).
http://www.mlive.com/environment/index.ssf/2010/07/post_31.html
— July 07, 2010, Saulius Mikalonis

Editor
December 29, 2010 11:32 pm

Houston wrote: “the aerosol ‘knob’ needs to be turned down a bit.”
Yes, and the “aerosol knob” is mostly the climate sensitivity knob. They multiply any warming or cooling effect by water vapor feedbacks. Evidently a lot less of that multiplying happens than they assume.
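The “multiplying” by water vapor feedbacks that Alec describes is conventionally written as a feedback gain, dT = lambda0 * F / (1 - f). A minimal sketch of that textbook relation; the numeric values below are illustrative assumptions, not parameters from any GISS model:

```python
# Textbook feedback amplification: dT = lambda0 * F / (1 - f), where
# lambda0 is the no-feedback (Planck) sensitivity and f the net feedback
# factor. All numbers here are illustrative assumptions, not model values.

LAMBDA0 = 0.3       # K per (W/m^2), approximate no-feedback sensitivity
CO2_DOUBLING = 3.7  # W/m^2, canonical forcing for doubled CO2

def equilibrium_response(forcing_wm2: float, feedback_factor: float) -> float:
    """Equilibrium warming (K) after feedback amplification; requires f < 1."""
    return LAMBDA0 * forcing_wm2 / (1.0 - feedback_factor)

print(equilibrium_response(CO2_DOUBLING, 0.0))  # no feedbacks: ~1.1 K
print(equilibrium_response(CO2_DOUBLING, 0.6))  # strong net feedback: ~2.8 K

# The same gain multiplies a negative (aerosol) forcing, so turning the
# aerosol knob down effectively turns the sensitivity knob down with it.
print(equilibrium_response(-1.0, 0.6))          # hypothetical aerosol cooling
```

With a negative forcing the same factor amplifies the cooling, which is why the aerosol and sensitivity knobs are so hard to untangle in a hindcast.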

TomFP
December 30, 2010 12:49 am

Willis, thanks for another great post. Statistics is largely a mystery to me, but is it possible to adjust other parameters (such as CO2 sensitivity) in Lacis’ model, to see if doing so produces a better fit?

Grumbler
December 30, 2010 4:08 am

Alec Rawls says:
December 29, 2010 at 11:32 pm
Houston wrote: “the aerosol ‘knob’ needs to be turned down a bit.”
……..
When discussing models, parameterisation and “knob turning” with my students, one of them commented, “a bit like Etch A Sketch”. How true.

Stevo
December 30, 2010 4:39 am

Willis: please quote the uncertainty on the number that the model predicted, and the uncertainty on the number you determined from the observations.

KD
December 30, 2010 7:33 am

randomengineer says:
December 29, 2010 at 10:48 pm
Willis –
I don’t get what the problem is. Seems to me that what you want the model to do is know precisely what compounds were emitted and in what volumes. From there, as per George E. P. Box (“all models are wrong, but some are useful”), the model shows that it’s useful. Did it get the eruption perfect? No. Would I expect it to? No. Dr Lacis says the signs were right, which I take as being on the right track.
——————–
Sure would like to know where you work random… I want to avoid products and services from your company at all costs!
If I gave a prediction or forecast to my company that was off at its peak by 100% they’d show me the door.
Here’s the crux: if the model was off 100% when modeling a fairly well understood (and relatively rare) climatic event, how far off will it be in modeling far more complicated and nuanced events? Two hundred percent? Three hundred percent? More? Less?
Here’s a simple test of the veracity of your trust in the model: run the model today and have it predict the average global temperature for 2013. Will you bet a year’s pay that the model will be correct?
KD, PhD Engineer and former modeler

KD
December 30, 2010 7:39 am

Stevo says:
December 30, 2010 at 4:39 am
Willis: please quote the uncertainty on the number that the model predicted, and the uncertainty on the number you determined from the observations.
———–
I think you just reinforced Willis’ point.
If the model and observation errors mean the model was NOT statistically different from the observation, then the whole of the AGW worry is within the noise.
If the model and observation ARE statistically different, then the model is, well, crap.
In either case, before the world makes trillion dollar policies I think we need a whole lot more confidence in our understanding of the climate and man’s impact (or lack of impact) on it.

December 30, 2010 8:01 am

I agree – “Prediction is hard, especially of the future.”
Much easier to predict the past.
[grin]

Up North Out West
December 30, 2010 8:12 am

Clearly what’s needed here is a trick to hide the lack of decline.

George E. Smith
December 30, 2010 8:58 am

“”””” JohnWho says:
December 30, 2010 at 8:01 am
I agree – “Prediction is hard, especially of the future.”
Much easier to predict the past. “””””
Well, unfortunately, climate models don’t even predict the past. Surely if the models were any good, they would be able to recalculate the missing climate data that the UEA CRU couldn’t find space to store, so they trashed it.
And since they completely ignore the Nyquist sampling theorem in their data gathering, there is no way they can even reconstruct the past; they can’t even correctly recover the averages.

randomengineer
December 30, 2010 9:27 am

(Dr) KD — Here’s the crux: if the model was off 100% when modeling a fairly well understood (and relatively rare) climatic event, how far off will it be in modeling far more complicated and nuanced events? Two hundred percent? Three hundred percent? More? Less?
Couple of points:
1. I’m not starting with a presumption that Dr Lacis input exact values of the eruption volume, just that there was an eruption. If he/they input exact counts then obviously the model ought to have performed better. How much better? I don’t know. The question is whether or not the eruption causes it to be off 5 years later.
2. If the point of the model is to be able to precisely replicate eruptions, there’s an obvious problem. It seems that the point of this model is to have some sense of how eruptions impact global climate drivers, in which case short term eruption recovery precision may not be a big factor for longer term usability.
I don’t know how you got a PhD assuming that all models are exact for all possible inputs. Heaven knows that we in R&D engineering can sometimes be wildly wrong even when performing mundane tasks like estimating development time, and depending on what you’re building, more often than not. As per Dilbert “it’s logically impossible to schedule the unknown.” Plenty of models that are good at one factor may not be at others. So? The point is whether the model is useful for the factor set it was designed for. Has anyone asked whether or not the point of the model in question was to accurately replicate all possible short term eruptions?

December 30, 2010 9:33 am

willis, OT.
Thought about the issue of bias in the control run.
Simple: the control run would then be subtracted from every projection for bias removal.
have a look at this
http://sms.cam.ac.uk/media/872375?format=flv&quality=high&fetch_type=stream
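Mosher’s suggestion, subtracting the control run from each projection, amounts to a one-line array operation. A minimal sketch, with made-up anomaly values (none of these numbers come from any actual model output):

```python
import numpy as np

# Hypothetical annual temperature anomalies in deg C (illustrative values only)
projection = np.array([0.30, 0.25, 0.05, 0.10, 0.20, 0.28])   # forced run
control_run = np.array([0.05, 0.04, 0.06, 0.05, 0.03, 0.05])  # unforced run

# Bias removal: any drift or bias the model produces with no forcing applied
# is assumed common to both runs, so subtracting the control run cancels it.
bias_corrected = projection - control_run
print(bias_corrected)
```

As Willis notes above, when the control run barely departs from zero this subtraction changes little; the correction matters when the unforced model drifts.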

December 30, 2010 10:42 am

Baa Humbug says:
December 29, 2010 at 8:06 am
Willis
I’m afraid your invitation, though not fallen on deaf ears, will not be accepted——
But I’m afraid your invite will never be accepted because, as they say in the horse training classics:
A horse never forgets, but he forgives.
A donkey never forgets and never forgives.
You are not dealing with horses 😉

Well, at least not the front end of horses.
🙂

KD
December 30, 2010 2:35 pm

Willis Eschenbach says:
December 30, 2010 at 12:38 pm
randomengineer says:
December 30, 2010 at 9:27 am
—————————————
Thanks, you saved me a lot of typing with your (excellent) response to random.
Random, it may surprise you to know that I have been in corporate R&D as well as product engineering and software engineering. I hold seven patents. My work is in products you can buy today, specifically in very high tech medical imaging devices.
I can assure you the requirements for accuracy, in a world where the companies I’ve worked for would contemplate investing millions of dollars, are much higher than a 100% error. Again I ask, where do you work?

Tenuc
December 30, 2010 2:53 pm

Great post Willis, looks like Lacis shot himself in the foot picking a 20yo model which failed to get even close to predicting the effect of the eruption on climate as his only example of the success of climate models. If that’s the best they’ve got it’s a travesty!
However, due to the deterministic chaos inherent in our climate system I would have been very surprised if they had nailed this event. Averaging climate using an arbitrary period is simply a mental construct which has no meaning in our dynamic climate system. Climate behaves like a complex driven pendulum where average behaviour and trends mean nothing – much like our financial markets.

Joel Shore
December 30, 2010 3:18 pm

Willis Eschenbach says:

Thanks, Mosh. If you take a look at Fig. 3 you’ll see that during the time in question, the control run varied little from zero. As a result, there is little difference between what I show, and the (projection minus control run) that you are talking about.

But, as you know, the control runs do not correctly predict the time and size of El Nino and La Nina events, which are extremely sensitive to initial conditions…i.e., essentially chaotic. Given that there was a quite strong El Nino event during that time and that we know what the general effect of such strong El Nino events are on global temperatures, the value that you have obtained for the response to the eruption is almost surely an underestimate…quite possibly a fairly significant underestimate.

That’s what Dr. Lacis said, that the performance of this model on Pinatubo is an exemplar.

Not really. The reason why Mt. Pinatubo is chosen is not that there are hundreds of clean tests to choose from and this one happened to turn out better than the rest, but rather that such clean tests, where the predictions are clearly made beforehand (so that no arguments about tuning, however misguided, can be made), are rare. It is an unfortunate fact that we don’t have an ensemble of Earths to experiment with.

KD noted that the model was off by 100%, and (since you had previously said a 100% error was somehow acceptable)

Well, if you are going to demand very high precision, then you don’t need to do any tests. After all, different models have climate sensitivities varying by over a factor of 2, so you could just say right away, “I demand more accuracy before I am willing to do anything.” Of course, others might have a different point of view and say, “Since the full range of model sensitivities points to the need to get off of fossil fuels, certainly before we use up anything near all the stores of coal that we have (let alone the more exotic sources like tar sands), we think we should do something now.” Large financial decisions are always made under uncertainty, using models that are imperfect. Do you think the economic modeling done to predict the impact of various tax and spending policies is any better?
Essentially, only if one assigns infinite weight to the desire to burn fossil fuels unfettered and as cheaply as possible will one come to the conclusion that one should do nothing to lower our carbon emissions as long as there is any uncertainty about the effects (which means, of course, forever).

Jim D
December 30, 2010 4:22 pm

Interesting post. If I read the paper right, they assumed the volcanic effect immediately covered 0 to 30 N latitude bands at the time of eruption. My view is that this “ambitious” spread rate may have led to their over-prediction.

BlueIce2HotSea
December 30, 2010 4:53 pm

Well, there is something that might have thrown off their calculations. From 1989 to 1992, anthropogenic SO2 emissions dropped by around 15 million metric tons, somewhat negating the Mt. Pinatubo SO2 injection. This happened after a roughly 15-year plateau in emissions dating from the early ’70s.

KD
December 30, 2010 6:13 pm

Joel Shore says:
December 30, 2010 at 3:18 pm
Large financial decisions are always made under uncertainty, using models that are imperfect. Do you think the economic modeling done to predict the impact of various tax and spending policies is any better?
————————————————————-
OK, please name a financial decision as large as is contemplated by the AGW crowd that was made based on a model projection 100 years into the future using a model that had a 100% error when modeling an event over a five year period.
Any example will do.

Joel Shore
December 30, 2010 8:50 pm

KD: I am not saying that climate change is not a monumental and in some ways unique challenge. However, there are potentially large costs either way. The most intelligent thing to do is not to wait until we know exactly what will happen for sure before doing anything but to hedge our bets so that we have maximum flexibility to either speed up or back off on the transformations of our use of energy (and implementation of sequestration). If you think making the transformation will be expensive if we do it gradually over the next several decades, imagine how much more expensive it will be if we continue on our orgy of burning fossil fuels at ever-increasing rates and then have to do a crash-diet once we find out that, yes, the problem is about as serious as most scientists have thought it to be.
Also, the prediction aspect is not as dire as you make it sound. It’s not like we need a weather forecast for a hundred years out. In that case, you would have errors that accumulate in time. Here, we just need (at least as a first understanding of the scope of the problem) to understand roughly how large a perturbation we would be making to the climate system. For that, all we need to understand is if the basic picture of radiative forcings governing things is correct and roughly how large the climate sensitivity is. And, the evidence from Mt. Pinatubo, along with evidence from the glacial – interglacial periods, and so forth tend to confirm this view and to point to at least a moderate climate sensitivity.

December 30, 2010 9:43 pm

Willis,
“Thanks, Mosh. If you take a look at Fig. 3 you’ll see that during the time in question, the control run varied little from zero. As a result, there is little difference between what I show, and the (projection minus control run) that you are talking about.”
I wasnt suggesting that here ( cause you dont have the control run) I’m suggesting that in general. Have a look at the video I linked to. Some interesting stuff in there
WRT this article. If Lacis offers this up as evidence that we should have trust in the models, I’d say its a fairly slim reed, but a reed nonetheless.

Matt G
December 31, 2010 6:53 am

Not sure if anyone has mentioned this, but SO2 emissions affect global temperatures only when in the high atmosphere. SO2 emissions in the troposphere have no noticeable effect on temperature; they are quickly removed from the air by precipitation and, in strong enough concentrations, cause acid rain. Therefore how much a volcano affects global temperatures depends on how many tons reach the stratosphere. Human SO2 emissions never reach the stratosphere and therefore can’t affect global temperatures, though they can affect the local environment.

Joel Shore
December 31, 2010 7:29 am

Matt G: Your statement is not quite right. You are right that for the SO2 from volcanoes, it is that which makes it into the stratosphere that has most of the effect because of the quick removal of that which stays in the troposphere. However, human SO2 (and other aerosol) emissions are emitted continuously and thus at any time there is some that is not yet washed out…and this concentration is enough to have an effect on global temperatures too. What is true is that SO2 concentrations in the troposphere are essentially proportional to current emissions, as opposed to CO2 concentrations that are essentially proportional to the cumulative amount of emissions (i.e., the integral of emissions with respect to time).
It is also worth noting that, while there is considerable uncertainty regarding the radiative forcing due to aerosols in the troposphere, there is less uncertainty for the radiative forcing due to aerosols in the stratosphere. This is primarily because of the so-called “indirect effect” of aerosols in the troposphere, their effect on the nucleation of liquid droplets and hence on clouds. I think another issue is that, to the extent that aerosols absorb rather than reflect solar radiation, this doesn’t really produce a negative forcing when they are in the troposphere but does when they are in the stratosphere. (This is because the heat produced when the energy is absorbed in the troposphere can mix down to the surface.)
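Joel’s distinction, that a short-lived species tracks the current emission rate while a long-lived one tracks cumulative emissions, can be illustrated with a toy two-box integration. The lifetimes, units, and emission numbers below are invented purely for illustration:

```python
import numpy as np

# Toy emissions series: constant rate, doubling halfway through
n = 500
emissions = np.ones(n)
emissions[250:] = 2.0

dt = 1.0
aerosol_life = 2.0     # short atmospheric lifetime (arbitrary units)
aerosol = np.zeros(n)  # short-lived species (tropospheric SO2 analogue)
co2 = np.zeros(n)      # long-lived species: effectively no removal here

for t in range(1, n):
    # relaxes toward a steady state proportional to the current emission RATE
    aerosol[t] = aerosol[t - 1] + dt * (emissions[t] - aerosol[t - 1] / aerosol_life)
    # simply integrates, i.e. tracks cumulative emissions
    co2[t] = co2[t - 1] + dt * emissions[t]
```

The aerosol concentration settles at emissions × lifetime (2.0, then 4.0 after the doubling), while the CO2 analogue keeps growing regardless of whether the emission rate changes, which is why tropospheric aerosol concentration follows current emissions and CO2 concentration follows the integral.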

BlueIce2HotSea
December 31, 2010 9:53 am

Matt G says:
December 31, 2010 at 6:53 am
“Human SO2 emissions never reach the stratosphere and therefore can’t affect global temperatures…”

Are you sure about this?
According to the IPCC Third Assessment, the global mean radiative cooling due to anthropogenic sulphate emissions in 2000 relative to 1750 was 0.5 W/sq. m. By definition, this figure does not include volcanic emissions.

KD
December 31, 2010 11:44 am

Joel Shore says:
December 30, 2010 at 8:50 pm
KD: I am not saying that climate change is not a monumental and in some ways unique challenge. However, there are potentially large costs either way. The most intelligent thing to do is not to wait until we know exactly what will happen for sure before doing anything but to hedge our bets so that we have maximum flexibility to either speed up or back off on the transformations of our use of energy (and implementation of sequestration). If you think making the transformation will be expensive if we do it gradually over the next several decades, imagine how much more expensive it will be if we continue on our orgy of burning fossil fuels at ever-increasing rates and then have to do a crash-diet once we find out that, yes, the problem is about as serious as most scientists have thought it to be.
Also, the prediction aspect is not as dire as you make it sound. It’s not like we need a weather forecast for a hundred years out. In that case, you would have errors that accumulate in time. Here, we just need (at least as a first understanding of the scope of the problem) to understand roughly how large a perturbation we would be making to the climate system. For that, all we need to understand is if the basic picture of radiative forcings governing things is correct and roughly how large the climate sensitivity is. And, the evidence from Mt. Pinatubo, along with evidence from the glacial – interglacial periods, and so forth tend to confirm this view and to point to at least a moderate climate sensitivity.
——————
So no example, then? Falling back on the precautionary principle? Assuming there is no consequence or opportunity cost to the path you propose?
Wish there was enough wealth in the world to throw $s after the POSSIBILITY that there may be a problem, but in the world I know there isn’t. Just a bit of a financial crisis these days.
Love your last paragraph. Don’t need to project out 100 years, just a few. But the model has shown us a 100% error in just a few years. So why should we believe it? Better still, why should we base policy decisions on it?
I find your arguments to be lacking of facts and circular in nature. Given that, I’ll keep my money and my carbon fuels for now, thank you.

Bill Illis
January 1, 2011 6:27 am

I still don’t see how this simulation is representative of the capability of climate model predictions.
The Pinatubo forcing change was a reduction in solar radiation of -4.1 W/m2 (eventually outgoing longwave radiation fell as well, so we ended up with a net forcing change of -2.9 W/m2).
Temperatures declined by 0.4C, i.e. about 0.1 to 0.14 C per W/m2 depending on which forcing figure you use.
The climate models are saying +3.7 W/m2 in GHG forcings eventually results in +3.0C of temperature increase, or 0.81 C per W/m2.
How exactly does a short-term climate response of about 0.1 C/W/m2 confirm 0.81 C/W/m2?
If anything, Hansen had 5 or 6 volcanoes from history which could be used to guess at a 0.4C to 0.6C decline. So when the first GISS model runs were done (in the 1980s) and came back with -2.0C for a Pinatubo-like volcano, they just adjusted the forcing by two-thirds to get closer to the -0.5C (see the GISS “efficacy of forcings” paper – they just changed it).
So, it does not validate the basis of the climate models. It invalidates them.
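Bill Illis’s back-of-envelope numbers are easy to verify (his figures, reproduced as stated; note that comparing a short-term transient response with an equilibrium sensitivity is itself one of the points in dispute in this thread):

```python
# Bill Illis's figures, as stated in his comment
pinatubo_net_forcing = -2.9    # net forcing change, W/m^2
pinatubo_cooling = -0.4        # observed global cooling, deg C
observed_response = pinatubo_cooling / pinatubo_net_forcing
print(round(observed_response, 2))   # 0.14 C per W/m^2

model_2xco2_forcing = 3.7      # canonical doubled-CO2 forcing, W/m^2
model_2xco2_warming = 3.0      # typical model equilibrium warming, deg C
model_response = model_2xco2_warming / model_2xco2_forcing
print(round(model_response, 2))      # 0.81 C per W/m^2
```

The arithmetic checks out as he states it; whether the two ratios are directly comparable is the substantive argument.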

Joel Shore
January 1, 2011 10:04 am

Willis Eschenbach says:

Well, you know, speculation is fun, but I tend to look at the numbers. So I got the El Nino numbers from here, and I regressed the GISSTEMP temperature on them, and subtracted the regression from the GISSTEMP temperature. That left me the temperature without El Nino effects.
The surface air temperature (GISSTEMP) dropped from June of ’91 to September of ’92. Without considering the El Nino the drop was slightly more (by 0.04°C) than when the El Nino effect was removed. So the effect, rather than being “perhaps a significant underestimation”, is actually a slight overestimation.
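The procedure Willis describes, regressing the temperature series on an ENSO index and subtracting the fit, can be sketched as follows. Synthetic data stand in for GISTEMP and the ENSO index here, and a real analysis would typically also lag the index by a few months:

```python
import numpy as np

# Synthetic stand-ins: in the real analysis these would be monthly GISTEMP
# anomalies and an ENSO index (e.g. MEI or Nino3.4) over the same months.
rng = np.random.default_rng(0)
n_months = 120
enso_index = rng.normal(0.0, 1.0, n_months)
temperature = 0.1 * enso_index + rng.normal(0.0, 0.05, n_months)

# Ordinary least-squares fit of temperature against the ENSO index
slope, intercept = np.polyfit(enso_index, temperature, 1)

# Subtract the regression to estimate the ENSO-free temperature
temp_no_enso = temperature - (slope * enso_index + intercept)
```

By construction the residual series is uncorrelated with the ENSO index, which is the sense in which the El Niño effect has been “removed.”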

But, September 1992 is a rather special point…It falls in the one short period of time between mid-1991 and mid-to late-1993 when the ENSO indices were close to neutral. In fact, only 6 months before that, they were STRONGLY positive and 6 months later, they were reasonably strongly positive again. The primary discrepancy between the temperature data and the prediction appears to be in both of these regions of time when the ENSO index was quite strongly positive.

The history of carbon in our fuels is one of a constantly decreasing carbon ratio. We started with wood and coal, lots of carbon. We moved to diesel and gasoline, less carbon. Now the planet is moving to natural gas, less carbon still.

…Which is good and suggests that it is not necessary to have high carbon ratio to have prosperous economies. However, there is also a lot of coal that will be used. The market system at present is blind to the fact that there are other significant potential costs associated with the use of carbon, and with such blindness comes a system in which such sources of energy remain artificially cheap. On the other hand, the costs associated with getting off fossil fuels are going to have to be borne eventually since they are finite resources. The only question is whether we get off fossil fuels before or after we have caused major changes to our environment. (If sequestration is cost-effective enough to become a significant component of the solution, we don’t even have to wean ourselves off fossil fuels as quickly but can go through a stage of using them but sequestering the CO2, giving us more time to develop the alternative energy sources. These, of course, are questions that a free market, given the proper signals to account for externalized costs, can decide.)

Here’s how I’d put the odds and the cost:
Odds of a 2°C temperature rise this century if CO2 is stabilized – one in ten
Odds of a 2°C temperature rise this century if CO2 is not stabilized – one in nine
Cost of a 2°C temperature rise this century – zero ± hundreds of trillions of dollars
Mathematical Expectation – zero ± tens of trillions of dollars
Now, we can argue about the numbers.

Yes, we can. Those numbers are nowhere close to where most of the scientists in the field would put those numbers. Do you really think that the world’s policymakers should base their decisions on Willis Eschenbach’s view? Now, you can try to argue your view in the scientific literature and try to get the scientific consensus to move in your direction, but that’s not where it is now (and I would say for good reason although I recognize that you will disagree).

I advise, as always, the “No Regrets” path. All of the foreordained horrors of the climate Thermageddon are with us now. We have storms and droughts and floods and cyclones wreaking untold human misery today.

This is only a “no regrets” path if one believes that the only thing we would ever want to do is adapt to climate change rather than make any attempts to mitigate it.

As for carbon reduction, the EPA regs are about to kick in. They will cost untold billions of dollars. The EPA itself says that by 2030 the regs will make the world cooler by three hundredths of a degree …

That is because they are…
(1) not aimed at making a significant difference to the temperature in 2030 but rather at getting us on a path that will (in concert with the actions of other countries) help make a significant difference in the latter part of the century. We are pretty much stuck with whatever warming we are going to get over the next 20 years or so.
(2) not aimed at solving the problem without participation from the rest of the world. It is a global problem. It is like voting: why should I ever vote in elections when my one vote makes such an infinitesimal difference to the outcome that it almost never matters? Even in Florida in 2000, it did not come down to one single vote.
Happy New Year’s, Willis!

Dave F
January 1, 2011 11:06 pm

I had a random thought for your thunderstorm hypothesis Willis. Have you considered that changing radiation may change the residence time of water vapor in the atmosphere?
Hotter water vapor will rise faster, release its energy to space, and fall. This would mean that warming can actually shrink the amount of time that water vapor resides in the atmosphere.
Likewise, colder water vapor will rise slower, releasing less of its energy to space, and precipitate less.
Observations bear this idea out, but the idea of changing the residence time of water vapor hasn’t been factored into any discussion I have been a part of. Stephen Wilde’s assertions about speeding up the hydrological cycle seem to be a grasp at this concept, and are part of what made me think of this. I’d like to see what you make of this, given that you have the time/patience I lack to tinker with the models. What happens when you assign a variable to, what I assume to be constant in models, the constant of residence time for H2O that increases the residence time when colder, and decreases the residence time when warmer? This would be a great mechanism for maintaining energy in=energy out. And a new wrinkle for the man behind the curtain.

Matt G
January 2, 2011 12:38 pm

The claim that little continuous human SO2 reaches the stratosphere is based on aerosol data that, at low levels, could not be distinguished between different aerosol types until the 21st century.
There has been a large reduction in human SO2 production from the 1970s to the 2000s in the USA, Europe and other cities around the world, with significant cuts in fuel and industrial sources. Despite this, there has been very little change in stratosphere levels so far, and it is not having any noticeable effect on the stable, tiny rise. A few links below show examples.
http://www.ace.mmu.ac.uk/Resources/Fact_Sheets/Key_Stage_4/Air_Pollution/pdf/Air_Pollution.pdf
http://www.mfe.govt.nz/publications/ser/enz07-dec07/chapter-7.pdf
http://enhealth.nphp.gov.au/council/pubs/pdf/suldiox.pdf
http://www.cleanair.hamilton.ca/default.asp?id=22#Sulphur
Over recent years developing countries have increased SO2 production, though some cities there have still reduced SO2 levels. There are exceptions, especially China, which has had significant rises in SO2 levels.
http://ww2.unhabitat.org/istanbul+5/68.pdf
http://www.ess.co.at/GAIA/CASES/IND/CAL/CALpollution.html
“According to the IPCC Third Assessment, the global mean radiative cooling due to anthropogenic sulphate emissions in 2000 relative to 1750 was 0.5 W/sq. m. By definition, this figure does not include volcanic emissions.”
Satellites have only been detecting aerosols in the stratosphere since the late 1970s/early 1980s, and until 2004 they were not able to distinguish low amounts of SO2 from ozone.
http://denali.gsfc.nasa.gov/research/so2/article.html
That IPCC claim was based on SO2 levels detected near the surface plus all aerosol data in the stratosphere over recent decades (with low levels of SO2 not distinguishable from ozone). Hence, it was a guess with little scientific evidence.
This changed in 2004 with the launch of a new satellite.
http://www.nasa.gov/vision/earth/lookingatearth/aura_update.html
Overall, the rise and fall of human SO2 production is not detected in the stratospheric aerosol levels.

maksimovich
January 2, 2011 10:12 pm

One of the unintended consequences of not understanding the assumptions behind probabilistic forecasts is that they can go very wrong, very fast, at great cost to both business and consumers.
The information is often considered to have predictive qualities, when in reality it is an odds-based wager.
As we see here from NIWA:
December 17, 2010
La Niña conditions are likely to continue through to autumn of 2011 and then to ease. Such La Niña conditions have been indicated by NIWA since mid-2010.
La Niña conditions tend to be associated with below-normal inflows into the main hydro-electricity generating lakes. The graphic below shows the range of total summer inflows (in terms of generation capacity in MW) for non-La Niña and La Niña years. In the summer of 2007/08, La Niña conditions prevailed, much as they do now, and the total inflow was almost exactly the median value indicated for La Niña years in Figure 1.
As outlined in the NIWA seasonal climate outlook, there is a significant risk of below-normal inflows over the summer, especially for the South Island alpine region. The outlook states: seasonal rainfall is likely to be below normal in the western South Island [including the Southern Alps and the headwaters of the main South Island rivers]. River flows and soil moisture levels are very likely to be below normal in the west and south of the South Island.
http://www.niwa.co.nz/news-and-publications/news/all/la-niAa-and-hydro-electricity-supply,-summer-20102011
The consequences were the electricity market (who had both forecast prior to public release) reacted by pricing spikes in December seen here.
http://www.electricityinfo.co.nz/media_releases/261210.pdf
Around the same time (the 19th), heavy rain started falling in the hydro catchments, and inflows reached record levels by the end of the month.
http://www.electricityinfo.co.nz/comitFta/ftaPage.hydrology
And, as we see, pricing fell back again to record lows.