A Modest Proposal—Forget About Tomorrow

Guest Post by Willis Eschenbach

There’s a lovely 2005 paper I hadn’t seen, put out by the Los Alamos National Laboratory, entitled “Our Calibrated Model has No Predictive Value” (PDF).

Figure 1. The Tinkertoy Computer. It also has no predictive value.

The paper’s abstract says it much better than I could:

Abstract: It is often assumed that once a model has been calibrated to measurements then it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability then the assumption is that the model needs to be improved in some way.

Using an example from the petroleum industry, we show that cases can exist where calibrated models have no predictive capability. This occurs even when there is no modelling error present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability.

We have been unable to find ways of identifying which calibrated models will have some predictive capacity and those which will not.

There are three results in there, one expected and two unexpected.

The expected result is that models that are “tuned” or “calibrated” to an existing dataset may very well have no predictive capability. On the face of it this is obvious: if tuning a model were that simple, someone would already be predicting the stock market or next month’s weather with good accuracy.

The next result was totally unexpected. The model may have no predictive capability despite being a perfect model. The model may represent the physics of the situation perfectly and exactly in each and every relevant detail. But if that perfect model is tuned to a dataset, even a perfect dataset, it may have no predictive capability at all.

The third unexpected result was the effect of error. The authors found that if there are even small modeling errors, it may not be possible to find any model with useful predictive capability.

To paraphrase, even if a tuned (“calibrated”) model is perfect about the physics, it may not have predictive capabilities. And if there is even a little error in the model, good luck finding anything useful.

This was a very clean experiment: there were only three tunable parameters. That is one fewer than John von Neumann needed, who famously said that with four parameters he could fit an elephant, and with five make him wiggle his trunk.
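The paper’s central point can be shown with a toy sketch. This is my own illustration, not the paper’s reservoir model: the “truth” function, the window lengths, and the choice of a quadratic as the three-parameter model are all arbitrary choices for the demonstration.

```python
import numpy as np

# Toy sketch only -- not the paper's reservoir model. "Truth" is a slow
# oscillation, but we only calibrate against a short history window.
truth = lambda t: np.sin(0.5 * t)

t_hist = np.linspace(0.0, 3.0, 30)    # 3 "years" of history (calibration data)
t_fore = np.linspace(3.0, 10.0, 70)   # 7 "years" we then try to forecast

# Calibrate a three-parameter model (a quadratic) to the history only.
model = np.poly1d(np.polyfit(t_hist, truth(t_hist), 2))

hist_rmse = np.sqrt(np.mean((model(t_hist) - truth(t_hist)) ** 2))
fore_rmse = np.sqrt(np.mean((model(t_fore) - truth(t_fore)) ** 2))

print(f"hindcast RMSE: {hist_rmse:.3f}")   # small: the tuned model fits the past
print(f"forecast RMSE: {fore_rmse:.3f}")   # much larger: little predictive value
```

The calibrated model reproduces its own tuning period almost perfectly, yet the forecast error is an order of magnitude larger; adding even a little noise to the history makes the fitted parameters, and hence the forecast, less stable still.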

I leave it to the reader to consider what this means for the various climate models’ ability to simulate the future evolution of the climate, as they are definitely tuned (“calibrated”, as the study’s authors put it), and they definitely have far more than three tunable parameters.

In this regard, a modest proposal. Could climate scientists please just stop predicting stuff for, say, one year? In no other field of scientific endeavor is every finding surrounded by predictions that this “could” or “might” or “possibly” or “perhaps” will lead to something catastrophic in ten or thirty or a hundred years. Could I ask that for one short year, climate scientists actually study the various climate phenomena, rather than try to forecast their future changes? We are still a long way from understanding the climate, so could we just study the present and past climate, and leave the future alone for one year?

We have no practical reason to believe that the current crop of climate models has predictive capability. For example, none of them predicted the current 15-year or so hiatus in the warming. And as this paper shows, there is certainly no theoretical reason to think they have predictive capability.

Models, including climate models, can sometimes illustrate or provide useful information about the climate. Could we use them for that for a while? Could we use them to try to understand the climate, rather than to predict it?

And 100- and 500-year forecasts? I don’t care if you do call them “scenarios” or whatever the current politically correct term is. Predicting anything 500 years out is a joke. Those, you could stop forever with no loss at all.

I would think that after the unbroken string of totally incorrect prognostications from Paul Ehrlich and John Holdren and James Hansen and other failed serial doomcasters, the alarmists would welcome such a hiatus from having to dream up the newer, better future catastrophe. I mean, it must get tiring for them, seeing their predictions of Thermageddon™ blown out of the water by ugly reality, time after time, without interruption. I think they’d welcome a year where they could forget about tomorrow.

Regards to all,

w.

November 3, 2011 6:25 pm

Willis Eschenbach says:
November 3, 2011 at 5:57 pm
No, they picked the months of June and July, not when the insolation is highest.
You are saying that the insolation at the two polar areas where there is a lot of ice is not highest in June and July [or December/January for the other hemisphere – also in the code]? – oh, well, how does one react to such a claim? Perhaps Wikipedia? http://en.wikipedia.org/wiki/Insolation

November 3, 2011 6:31 pm

TimTheToolMan says:
November 3, 2011 at 6:18 pm
But meanwhile many aspects of climate models are known to be deficient and many others are no doubt not-yet-known to be deficient but are. This paper tells us that none of those models represent any form of reality with their predictions.
What paper? The one Willis referred to does not demonstrate that climate models are defective. That models have defects is a property of all models. The important thing is whether there is research aiming at improving the models. As I remarked [but nobody responded to], I have not seen any models by skeptics. Perhaps they think that they can’t compete [not enough money, knowledge, or motivation]…

November 3, 2011 6:47 pm

Leif writes “The important thing is whether there is research aiming at improving the models.”
That is apparently a widely held misunderstanding. It is certainly generally held that as we improve the models, they give us better results but that simply is not true. The models can never represent reality when it comes to predicting the future. That is what this paper is showing.
All the models can and are doing is showing that we can represent historic temperatures with some level of accuracy based on an incomplete set of tuned physics based algorithms.
I’m quite certain you believe that a tweak closer to reality in one area of a model leaves that model a little better off. It doesn’t.
Take another look at the paper and think very carefully about the implication of introducing an error of 1% in the model and its subsequent inability to predict anything in the future no matter how much tuning is applied and how well it “predicts” the past.

November 3, 2011 7:48 pm

TimTheToolMan says:
November 3, 2011 at 6:47 pm
That is apparently a widely held misunderstanding. It is certainly generally held that as we improve the models, they give us better results but that simply is not true. The models can never represent reality when it comes to predicting the future.
We have models that predict stellar evolution. We are quite sure what the luminosity of the Sun will be a billion years from now. We are quite sure that the oceans of the Earth will boil away some 2-3 billion years from now, etc. To say that something ‘is simply not true’ is ‘simply’ unfounded. Models can predict the future, even the very distant future.

November 3, 2011 7:50 pm

Leif says “You are saying that the insolation at the two polar areas where there is a lot of ice is not highest in June and July [or December/January for the other hemisphere – also in the code]?”
Well, the Wiki says this: “Insolation is a measure of solar radiation energy received on a given surface area in a given time.”
And that seems pretty reasonable to me. So you are agreeing that, as far as the model goes, cloud cover is irrelevant, and that if cloud cover were to drop (or increase) as a result of all sorts of unexpected (and incidentally wrong) interactions in the rest of the model, then it’s safe to ignore that fact when looking at the ice ponds.

November 3, 2011 7:55 pm

Leif writes “We are quite sure that the oceans of the Earth will boil away some 2-3 billion years from now”
Sorry, was that Tuesday in 2-3 billion years time? Or Wednesday? Of course there are some things we can predict. I can predict I’ll be dead then too…but that is a complete strawman argument and you know it.

November 3, 2011 8:07 pm

TimTheToolMan says:
November 3, 2011 at 7:50 pm
And that seems pretty reasonable to me. So you are agreeing that as far as the model goes, cloud cover is irrelevant
No, I’m not arguing that cloud cover or CO2 or land-use or whatever is irrelevant. I’m saying that if you want to parameterize ice melt then making that higher when insolation is higher is perfectly valid physics. Whether that parameter itself is reasonable was not the issue.
TimTheToolMan says:
November 3, 2011 at 7:55 pm
Of course there are some things we can predict. I can predict I’ll be dead then too…but that is a complete strawman argument and you know it.
Actually, it was not a strawman. I really mean this. We probably can never predict whether it will rain in Petaluma in a given hour 100 years from now, but the climate is easier to predict [I’ll predict that July 2112 will be warmer than January 2112] and the far-future destiny easier still. Your statement that models can never predict anything is [as you say] ‘simply not true’.

November 3, 2011 8:47 pm

Leif predicts “[I’ll predict that July 2112 will be warmer than January 2112]”
I’m assuming you’re doing that from your personal “I live in the Northern hemisphere” perspective and not a global temperature anomaly perspective. Strawman much? The point is that there is no additional energy involved in the CO2 argument, only modelled increased temperature gradients. Increased energy doesn’t require a model to see the answer; we have perfectly good laws of thermodynamics for that…
“and the far future destiny easier still.”
Well this is where we disagree. And you disagree with the paper presented at the start. C’est la vie.
“Your statements that models can never predict anything is [as you say] ‘simply not true’.”
I’m assuming you’re extrapolating my statement to all models and not GCMs. Strawman much?

u.k.(us)
November 3, 2011 8:48 pm

Leif Svalgaard says:
November 3, 2011 at 6:13 pm
“I was just at a conference http://sdo3.lws-sdo-workshops.org/ where we were discussing how to model the solar atmosphere and interior and could only marvel at the enormous progress we have made the last ten years. I fully expect progress in climate modeling too.”
======
With all due respect, Leif.
What will you model, and what data will you include.
The dearth of data would seem to be a problem.
What is the expected output of the model.
Are we not still collecting data to set a “baseline”.
I guess we have to start somewhere, but I’ll bet you wish you had 10,000 years of the kind of data you are collecting now.
Keep us informed, please.

Brian H
November 3, 2011 8:51 pm

Legatus says:
October 31, 2011 at 4:59 pm

[Many] times, the economists are simply telling the government types what they want to hear.

And here’s (another) kicker: What model can incorporate the effects of taking the model’s output seriously? There’s a kind of corrosive feedback loop in there.

November 3, 2011 9:20 pm

TimTheToolMan says:
November 3, 2011 at 8:47 pm
I’m assuming you’re doing that from your personal “I live in the Northern hemisphere” perspective and not a global temperature anomaly perspective
I thought that was clear from mentioning Petaluma [in California]
we have perfectly good laws of thermodynamics for that…
The radiative properties of CO2 and H2O are also perfectly well known.
Well this is where we disagree. And you disagree with the paper presented at the start. C’est la vie.
The starting paper does not do [as far as I can see] what the climate models do: solve the differential equations that describe the problem, so cannot be compared.
I’m assuming you’re extrapolating my statement to all models and not GCMs.
You did not qualify your statement. And you have not demonstrated that GCMs are different from all other models.
u.k.(us) says:
November 3, 2011 at 8:48 pm
Assuming you mean about the Sun.
What will you model, and what data will you include.
We’ll model the behavior of the solar cycle, the sunspots, the coronal mass ejections, solar flares, everything we can observe.
The dearth of data would seem to be a problem.
We collect 1000 gigabytes of data every day, so we have the opposite problem.
What is the expected output of the model.
How the solar cycle will evolve. Will the next cycle be large or small. When will a given sunspot flare and send bad stuff our way. The amount of solar radiation, …
Are we not still collecting data to set a “baseline”.
No, we are observing solar features in unprecedented detail and cadence, and seeing brand new things, ‘wonderful things’ [Carter, 1923: http://en.wikipedia.org/wiki/Howard_Carter ]
I guess we have to start somewhere, but I’ll bet you wish you had 10,000 years of the kind of data you are collecting now.
Of course, but that would not be 10,000 times better.

Richard S Courtney
November 4, 2011 1:10 am

Leif Svalgaard:
I appreciate your posts addressed to me at November 3, 2011 at 9:35 am and November 3, 2011 at 9:49 am.
They prove the reason for your specious debate of Willis in this thread is that you think your opinions are facts and, therefore, you think demonstrable facts presented by others are merely opinions.
And you attempt to justify your opinions by evasion and bombast, which seem convincing in your own mind. But I write to inform you that they discredit you in the eyes of impartial observers.
Richard

Rational Debate
November 4, 2011 2:42 am

re: Leif Svalgaard says: November 3, 2011 at 9:35 am and November 3, 2011 at 4:45 pm

“No model (of any kind) should be assumed to have more predictive skill than it has been demonstrated to possess.”

The issue was that models are supposed to be ‘tuned’ to agree with observations and thus will always agree [eventually with a lag]. The assumption is that we [by working hard on this] can get better and better models and that they have predictive skill until proven otherwise. Every time a prediction fails we learn something new and can improve the models. This is how science [of any stripe] operates.

Talk about putting the cart before the horse. The statement is fine, with the massive exception of “and that they have predictive skill until proven otherwise.” That is quite clearly not how science works, unless perhaps it’s in the world of “post-normal science.” I have to think that this was a typo on Leif’s part. The assumption must be that the model does not have predictive skill until proven otherwise.

Eric Anderson says: November 3, 2011 at 3:29 pm
That’s not science; it is insanity.

The insanity is that people vote for politicians that exploit the ignorance of the people. Which one did you vote for?

Talk about switching the subject to avoid addressing the issue.
As an aside – sure, it’s insane to vote for politicians that exploit the ignorance of the people – and when the only candidates fit into this category, then which do you vote for Leif? Or when the candidate who doesn’t appear to exploit, has other gross failings that make them an even worse choice – then which do you vote for?
Voting and politicians are irrelevant to the point Eric was making – which is that it makes no sense to assume a model has predictive ability before it has been soundly proven to actually make accurate predictions. To assume a model has predictive power without proof when its use is associated with anything important just begs for a huge waste of money at best and serious failures at worst.
Spend a few trillion dollars on AGW. And let’s do away with airplane test flights and just send up new airliners for their maiden flight with several hundred passengers aboard. Or send up astronauts in a rocket without ever having done test launches of that type of rocket. Why bother testing scramjet engines in unmanned drones? Just put ’em in a jet with a live pilot and navigator. Don’t bother doing any tests on that new bridge design, just christen it “Galloping Gertie II.” After all, the models say they’ll all work just fine.

Rational Debate
November 4, 2011 3:03 am

re: Leif Svalgaard says: November 3, 2011 at 5:05 pm

“Willis Eschenbach says: November 3, 2011 at 4:55 pm
It has no physical basis. The months have been selected to match historical observations.”
The assumption that melt accumulates in the summer when the insolation is highest is a sound physical basis and that that makes for a better match… Or perhaps you disagree with that.

If melt accumulation has been recorded during other months, then it’s clearly not a sound physical basis. It may make the model appear to match better by artificially eliminating melt the model would predict without the bounding parameters (e.g., the two month limit) added, but the only way it’s a sound physical basis is if it matches reality. It’s a tweak, not an accurate representation of the physics involved in melt accumulation. Is it a ‘good enough’ tweak? Well, I suppose that all depends on just how often melt occurs during other months, the effect on the final output results, and just how accurate one requires it to be. But it’s still a manually imposed tweak that isn’t an accurate programming representation of the actual physics involved in melt accumulation.

Rational Debate
November 4, 2011 3:10 am

re: Leif Svalgaard says: November 3, 2011 at 7:48 pm

We have models that predict stellar evolution. We are quite sure what the luminosity of the Sun will be a billion years from now. We are quite sure that the oceans of the Earth will boil away some 2-3 billion years from now, etc. To say that something ‘is simply not true’ is ‘simply’ unfounded. Models can predict the future, even the very distant future.

I have no doubt you believe that – but unless you have a time machine to actually go and see the final results, there is simply no way to know what the solar luminosity will be or when the Earth’s oceans will boil off. A very large asteroid impact could certainly completely change the future of the Earth, and I’m sure there are things that could happen that could similarly change the progression of the solar luminosity. Unless your model can account for anything and everything that might occur, they most certainly can’t predict the future – especially the very distant future.

Rational Debate
November 4, 2011 3:19 am

re: Leif Svalgaard says: November 3, 2011 at 8:07 pm

No, I’m not arguing that cloud cover or CO2 or land-use or whatever is irrelevant. I’m saying that if you want to parameterize ice melt then making that higher when insolation is higher is perfectly valid physics. Whether that parameter itself is reasonable was not the issue.

“No, I’m not arguing that cloud cover or CO2 or land-use or whatever is irrelevant. I’m saying that if you want to parameterize ice melt then making that higher when insolation is higher is perfectly valid incomplete physics. Whether that parameter itself is reasonable was not the issue.”
There, fixed that for you.

Gary Swift
November 4, 2011 7:12 am

To Leif:
There is a difference between a simulation and a predictive model. For example, it is possible to create a computer generated person. You can give it a human appearance, give it the ability to respond to questions, even simulate expression of emotions. That is a far cry from being a predictive model though. You could keep improving the quality of your simulation forever, and still not have a predictive tool. You could add complex algorithms for biological chemistry, neurology, physiology, statistics from sociology, etc. but you would still not be able to predict what I will do an hour from now, or how a group of people will act. The reason for this is that the number of variables is large, and a large portion of them are parameterized in the simulation.

The computer generated person might look a lot like the real thing, and you can make it as complex as you want, but it is still not able to predict anything. You could add all sorts of physics and biology equations, but it would still not be a predictive model of a person. You might be able to predict how a body would bounce off of an object with fair accuracy, but you can’t predict whether a person will decide to jump out of a window. Similarly, you can predict the weather tomorrow or the track of a tropical storm with fair accuracy, but despite the ability of a climate model to look a lot like the real climate, it is still unable to predict because there are too many variables that must be parameterized and other variables that are simply not included.

I ask, for example, can any climate model explain the start or end of an ice age? The answer is no. That seems to indicate that there are fundamental deficiencies in our knowledge about how the climate works. Climate models cannot predict because they are not predictive models. They are simulations, and are only able to create a fair representation of something that looks a lot like the real climate, but that doesn’t make them predictive.

November 4, 2011 7:33 am

Richard S Courtney says:
November 4, 2011 at 1:10 am
But I write to inform you that they provide discredit of you by impartial observers.
At least I am civil and provide comments that explain my position rather than just parroting others.
Rational Debate says:
November 4, 2011 at 2:42 am
The assumption must be that the model does not have predictive skill until proven otherwise.
People that build the models make a great effort to do the best job possible. Skill can be measured as a ‘skill score’. One definition involves the mean squared error, MSE = sum([prediction(i) - observation(i)]^2)/N. Then the skill score is SS = 1 - MSE(prediction)/MSE(climatology). A perfect prediction has an SS of 1.0. A prediction that is no better than just averaged climatology has an SS of 0, while a prediction that is worse than climatology has a negative SS. The absolute value of the SS will usually decrease as the time interval covered by the prediction increases. You can learn more about skill scoring here: http://www.mmm.ucar.edu/events/ISP/presentations/Semazzi_endusers.pdf or here: http://www.swpc.noaa.gov/forecast_verification/Assets/Bibliography/i1520-0493-117-03-0572.pdf This [comparing many models] may also be of interest: http://www.arm.gov/science/highlights/R00175/pdf, especially the conclusion that “The mean model does so well mostly because errors in individual models are distributed on both sides of the observations. […] it’s difficult to imagine accurate projections of future change coming from a model that does a poor job in simulating the present climate – and now there’s a way to measure the success of a model at doing the latter job.”
Thus skill can be measured, and the current ensemble of models does have positive skill [“The mean model does so well”]. It is also to be expected that when people solve the equations governing the evolution of the climate, there will be some skill. This expectation is normally fulfilled in all other scientific endeavors. That is what science is: the ability to predict something from ‘laws’ that have been deduced from observations.
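The skill-score definition given above is easy to put in code. A minimal sketch, with the formula as stated; the observation and forecast numbers are made up for illustration:

```python
import numpy as np

# SS = 1 - MSE(forecast)/MSE(climatology): 1.0 is a perfect forecast,
# 0 is no better than the climatological mean, negative is worse.
def skill_score(forecast, observed, climatology):
    forecast, observed = np.asarray(forecast), np.asarray(observed)
    mse_forecast = np.mean((forecast - observed) ** 2)
    mse_clim = np.mean((np.asarray(climatology) - observed) ** 2)
    return 1.0 - mse_forecast / mse_clim

obs  = np.array([14.2, 14.5, 14.9, 15.1, 15.4])   # made-up observations
fcst = np.array([14.0, 14.6, 14.8, 15.3, 15.5])   # a hypothetical model forecast
clim = np.full_like(obs, obs.mean())              # "climatology" = the mean

print(f"SS = {skill_score(fcst, obs, clim):.3f}")  # prints "SS = 0.879"
```

A forecast only has to beat the climatological baseline to score positive skill, which is the sense in which “some skill is better than no skill” above.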
Eric Anderson says: November 3, 2011 at 3:29 pm
Talk about switching the subject to avoid addressing the issue.
I think you brought up insanity…
As an aside – sure, it’s insane to vote for politicians that exploit the ignorance of the people – and when the only candidates fit into this category, then which do you vote for
A people has the government they deserve.
it makes no sense to assume a model has predictive ability before it has been soundly proven to actually make accurate predictions.
See comment to Richard. Some skill is better than no skill. If I can predict the stock market with 51% accuracy, I’ll come out ahead in the long run.
Rational Debate says:
November 4, 2011 at 3:03 am
but the only way it’s a sound physical basis is if it matches reality. […] But it’s still a manually imposed tweak that isn’t an accurate programming representation of the actual physics involved in melt accumulation.
It is an approximation [compared to using the actual insolation] that is based on sound physics. The model builders seem to have concluded that it is good enough for their purpose. Perhaps in a later version they will use a better approximation, especially if it turns out that this parameter is important [which I doubt].
Rational Debate says:
November 4, 2011 at 3:10 am
but unless you have a time machine to actually go and see the final results, there is simply no way to know what the solar luminosity will be
Prediction is not about ‘knowing’, but about having a skill score that is high enough to take seriously, e.g. 0.99999999999999999 or even 0.9. We have great confidence in our prediction of the Sun’s luminosity because we observe millions of stars at all ages and at all phases of their evolution and can directly verify that they behave as predicted.
A very large asteroid impact could certainly completely change the future of the Earth
Again, prediction is about being good enough, not perfect. Your argument is of the kind that it does not make sense to lay up supplies for the coming winter, because we may all be wiped out by an asteroid anyway. Not exactly a ‘Rational Debate’.
Rational Debate says:
November 4, 2011 at 3:19 am
when insolation is higher is perfectly valid incomplete physics.
All models have incomplete physics to some degree. As long as the physics is valid, an approximation, even if incomplete (to be improved in the next version, perhaps), is better than none. You might want to compare your ‘fixed’ version, “perfectly valid incomplete physics”, to Willis’s “have no physical basis”, and note that we have made progress via our discussion. I take that as a positive sign.

November 4, 2011 7:39 am

Gary Swift says:
November 4, 2011 at 7:12 am
Climate models cannot predict because they are not predictive models. They are simulations, and are only able to create a fair representation of something that looks a lot like the real climate, but that doesn’t make them predictive.
If a simulation can show the correct behavior of a system when starting from given initial conditions [this is what we use simulations for: to examine the system under varying conditions], then, when those conditions are taken to be the actual conditions as of now, the simulation [if it is any good at all] becomes a prediction.

Frank
November 4, 2011 7:55 am

Willis, Tim,
I’m still not convinced that the paper proves what you think, i.e. that even if you have the physics completely right, you can’t predict the future because of a bit of noise.
First off, they predict ahead 7 years based on a 3-year history set. That’s just unreasonable and unrealistic extrapolation, especially under noise conditions. There is no information whatsoever about how well the history-tuned model does in predicting, e.g. 2 years into the future (a much more reasonable time span).
The only thing they really do is show that when they tune their model to the future set, they get a different set of optimum parameters than when they tune their model to the history set. They never show what is in fact the error made by the prediction, i.e. how much (in absolute terms) do the future production rates differ from the predicted ones? The values in the Figures are normalized, so it might be quite a small difference, one a reservoir engineer might laugh at.
See, there is just too much weight on looking at peak positions. But the peak positions are produced by tuning to a particular data set, so every time they look at a different future set (5, 6, 7 years into the future), they will have a different set of peaks. The position of the peaks (i.e. the parameter values) doesn’t say much about how far the prediction is really off. Suppose my history-tuned model is off by 1%, while the optimally tuned future model is off by 0.1% (and therefore, has a much higher peak), does that mean that my history-tuned model can’t predict the future? I think not.
I want to see graphs of production rates over time using the history-tuned model and the “truth” case before I’ll say their model can’t predict the future. I’ll probably end up saying something like “Given 3 years of historic data and 1% of noise, the prediction window is X years”.
I’m not saying that models can solve everything if you tune and tweak long enough, but I am just not convinced that this paper proves the opposite.

Eric Anderson
November 4, 2011 8:27 am

Leif: “The insanity is that people vote for politicians that exploit the ignorance of the people. Which one did you vote for?”
I share your concern with the politicization of the issue and exploitation for ulterior motives. However, that does not address the issue at hand, which was your suggestion that the models should be treated as having predictive value until proven otherwise. That is the insanity I was talking about, because it is exactly backwards from rational science.
Or maybe your comment was meant to say that you think, as a personal preference, that the models have predictive value, but that it would be insane for politicians to pay any attention to the models and to take any actions that might be based on the outputs of the models? Kind of strange, but I guess I could live with that approach. 🙂

Ged
November 4, 2011 8:39 am

@Leif,
Actually, almost all those references specifically mention particular parameters which are tuned, but maybe you missed this one which is even more explicit and potentially what you are looking for. Here we see a very specific equation parameter being changed to match observations after the model has already been made:
http://iopscience.iop.org/1748-9326/3/1/014001/fulltext
” A tuning experiment is carried out with the Community Atmosphere Model version 3, where the top-of-the-atmosphere radiative balance is tuned to agree with global satellite estimates from ERBE and CERES, respectively, to investigate if the climate sensitivity of the model is dependent upon which of the datasets is used.”
That’s explicit, that’s a value in the GCM’s equations, and that fits every definition I can see you giving and arguing about. All the other references did too, if you read them, but at least this one doesn’t quite carry as much of the vague language as most scientific papers/publications.
If I am still misunderstanding you, or what tuning is (as everyone is using the term) from my extensive reading, then please elucidate your position more and correct my error.

November 4, 2011 9:18 am

Eric Anderson says:
November 4, 2011 at 8:27 am
which was your suggestion that the models should be treated as having predictive value until proven otherwise. That is the insanity I was talking about, because it is exactly backwards from rational science.
In all science, when we think we know the physics [and there is no ‘new’ physics in climate] and we can write down the equations that describe the physics we fully expect that when solving those equations we get results that agree with observations. This is not ‘backwards’ and not ‘insanity’. If we get results that do not agree, it is a sign that the models must be upgraded or improved and that is ongoing.
Ged says:
November 4, 2011 at 8:39 am
” A tuning experiment is carried out with the Community Atmosphere Model version 3, where the top-of-the-atmosphere radiative balance is tuned to agree with global satellite estimates from ERBE and CERES, respectively, to investigate if the climate sensitivity of the model is dependent upon which of the datasets is used.”
That’s explicit, that’s a value in the GCM’s equations

There is no such ‘value’ in the equations. Show me if you think otherwise. What they mean is that they are trying to figure out which of the two data sets to compare with. The radiative balance is a computed value [output of the model – not a value in the equations]. The experiment is to try and see if the model can be changed such as to agree with one or the other estimates. As far as I know the experiment came out negative. If you know otherwise let me know, i.e. which data set was chosen and which actual changes were made.

November 4, 2011 10:23 am

Ged says:
November 4, 2011 at 8:39 am
” A tuning experiment is carried out with the Community Atmosphere Model […]” That’s explicit, that’s a value in the GCM’s equations
What they actually did was:
“The tuning is done in the atmospheric component of a coupled GCM […] The cloud microphysics of the model, as in any GCM, is highly parameterized and the tuning is carried out through alterations of parameter values in these physics descriptions. There are numerous non-restricted parameters that affect the model cloud properties and thereby the radiative fluxes, and hence there are numerous ways to tune the model to a chosen level of radiative balance. We modify a number of parameters that are commonly used for tuning (Hack et al 2006), including relative humidity thresholds for cloud formation, thresholds for autoconversion of liquid and ice to rain and snow, efficiency of autoconversion in convective and stratiform clouds, efficiency of precipitation evaporation and adjustment timescales associated with convection….
there is presently no way to determine which is more correct. The small magnitude of the difference must not be used as an excuse for not continuing to refine the measurements of the TOA radiative balance or imbalance and definitely not as an excuse for not making further attempts to restrict free parameters like cloud water content in models…”
So, they play around with different combinations [all the time staying within reasonable physical limits] to see if it makes any real difference and find that it does not. This shows that the current calibrations of these parameters are either not too far off or do not have any great impact on the result. Such experiments with those and any other parameterizations must be carried out all the time [it is called research] and should lead to better models with higher skill scores.

November 4, 2011 10:35 am

Ged says:
November 4, 2011 at 8:39 am
” A tuning experiment is carried out with the Community Atmosphere Model […]” That’s explicit, that’s a value in the GCM’s equations
To bring home their conclusion:
“Although this limited study offers no conclusive evidence, it indicates that the CAM is rather robust to tuning changes and that climate sensitivity is not strongly dependent on what level the TOA radiative balance is tuned to.” and “within the realm of reasonable agreement with reality there is no unique way to reach a certain level of TOA radiative balance.”
So ‘tuning’ is not the answer. The way ahead is better empirical determination of the calibration of the various relationships that have been parameterized. That is: don’t play dice with random numbers hoping to find some that fit, but go after the physics and try to understand the processes involved, e.g. try to get a better approximation [actual insolation] of the ice melt function than the simple 2-month step function now employed.
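To make the contrast concrete, here is a hypothetical sketch of the two kinds of melt weighting discussed in this thread: a two-month step function versus a weight that follows a crude annual insolation cycle. The shapes and numbers are purely illustrative and are not taken from any actual GCM code.

```python
import numpy as np

# Hypothetical illustration only -- not code from any real climate model.
months = np.arange(12)  # 0 = January ... 11 = December

# (a) Two-month step function: all melt assigned to June and July (NH).
step_weight = np.where((months == 5) | (months == 6), 0.5, 0.0)

# (b) Smooth weight proportional to a crude annual insolation cycle,
#     peaking between June and July, clipped at zero and normalized.
insolation = np.maximum(0.0, np.cos(2 * np.pi * (months - 5.5) / 12))
smooth_weight = insolation / insolation.sum()

for name, w in [("step", step_weight), ("insolation", smooth_weight)]:
    print(name, np.round(w, 3))
```

Both weightings put the melt where insolation peaks, which is the sense in which the step function is “sound but incomplete” physics: the smooth version simply spreads the same total melt over the shoulder months as well.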