Guest Post by Willis Eschenbach
There’s a lovely 2005 paper I hadn’t seen, put out by the Los Alamos National Laboratory entitled “Our Calibrated Model has No Predictive Value” (PDF).
Figure 1. The Tinkertoy Computer. It also has no predictive value.
The paper’s abstract says it much better than I could:
Abstract: It is often assumed that once a model has been calibrated to measurements then it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability then the assumption is that the model needs to be improved in some way.
Using an example from the petroleum industry, we show that cases can exist where calibrated models have no predictive capability. This occurs even when there is no modelling error present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability.
We have been unable to find ways of identifying which calibrated models will have some predictive capacity and those which will not.
There are three results in there, one expected and two unexpected.
The expected result is that models that are “tuned” or “calibrated” to an existing dataset may very well have no predictive capability. On the face of it this is obvious: if tuning a model to past data were all it took, someone would already be predicting the stock market or next month’s weather with good accuracy.
The next result was totally unexpected. The model may have no predictive capability despite being a perfect model. The model may represent the physics of the situation perfectly and exactly in each and every relevant detail. But if that perfect model is tuned to a dataset, even a perfect dataset, it may have no predictive capability at all.
The third unexpected result was the effect of error. The authors found that if there are even small modeling errors, it may not be possible to find any model with useful predictive capability.
To paraphrase, even if a tuned (“calibrated”) model is perfect about the physics, it may not have predictive capabilities. And if there is even a little error in the model, good luck finding anything useful.
This was a very clean experiment, with only three tunable parameters. John von Neumann famously claimed that with four parameters he could fit an elephant, and with five he could make him wiggle his trunk; here, even three were enough to cause trouble.
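For readers who want to see the flavor of this, here is a toy sketch of my own (not from the paper, and deliberately a plain curve-fitting case): each extra tunable parameter lets the model hug the calibration data a little more closely, while the held-out “forecast” usually gets worse, not better.

```python
# Toy illustration (mine, not from the LANL paper): polynomial "models" with
# more and more tunable parameters, calibrated to the first half of a noisy
# series and then asked to "forecast" the second half.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(40, dtype=float)
data = 0.05 * t + rng.normal(0.0, 0.3, t.size)   # a plain trend plus noise

calib, fcast = slice(0, 20), slice(20, 40)       # tune on the first half only

for n_params in (2, 3, 4, 5):                    # polynomial degree = n_params - 1
    coeffs = np.polyfit(t[calib], data[calib], n_params - 1)
    model = np.polyval(coeffs, t)
    rmse_c = np.sqrt(np.mean((model[calib] - data[calib]) ** 2))
    rmse_f = np.sqrt(np.mean((model[fcast] - data[fcast]) ** 2))
    # Calibration error can only go down as parameters are added; the
    # forecast error typically goes the other way.
    print(f"{n_params} parameters: calibration RMSE {rmse_c:.2f}, forecast RMSE {rmse_f:.2f}")
```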
I leave it to the reader to consider what this means for the various climate models’ ability to simulate the future evolution of the climate, since they are definitely tuned (or, as the study authors put it, “calibrated”) models, and they definitely have more than three tunable parameters.
In this regard, a modest proposal. Could climate scientists please just stop predicting stuff for, say, one year? In no other field of scientific endeavor is every finding surrounded by predictions that this “could” or “might” or “possibly” or “perhaps” will lead to something catastrophic in ten or thirty or a hundred years. Could I ask that for one short year, climate scientists actually study the various climate phenomena, rather than try to forecast their future changes? We are still a long way from understanding the climate, so could we just study the present and past climate, and leave the future alone for one year?
We have no practical reason to believe that the current crop of climate models have predictive capability. For example, none of them predicted the current 15-year or so hiatus in the warming. And as this paper shows, there is certainly no theoretical reason to think they have predictive capability.
Models, including climate models, can sometimes illustrate processes or provide useful information about the climate. Could we use them for that for a while? Could we use them to try to understand the climate, rather than to predict it?
And 100- and 500-year forecasts? I don’t care if you do call them “scenarios” or whatever the current politically correct term is. Predicting anything 500 years out is a joke. Those, you could stop forever with no loss at all.
I would think that after the unbroken string of totally incorrect prognostications from Paul Ehrlich and John Holdren and James Hansen and other failed serial doomcasters, the alarmists would welcome such a hiatus from having to dream up the newer, better future catastrophe. I mean, it must get tiring for them, seeing their predictions of Thermageddon™ blown out of the water by ugly reality, time after time, without interruption. I think they’d welcome a year where they could forget about tomorrow.
Regards to all,
w.
There used to be a saying when I was young: “Only a fool tries to predict the future!” Looks like it still applies.
TimTheToolMan says:
October 31, 2011 at 11:00 pm
Parameterisation is fitting, not “sound physics”.
It is if the parameter is based on the physics involved. An example is the parameterization of surface roughness [see chapter 8 of http://www.stanford.edu/group/efmh/FAMbook/Chap8.pdf ].
Another example is the Charnock relation:
http://www.termwiki.com/EN:Charnock's_relation
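For anyone curious how such a parameterization actually looks in code, here is a minimal sketch of the Charnock relation (the constant and the friction velocities below are illustrative values I picked, not numbers from any particular model):

```python
# Minimal sketch of the Charnock relation: the aerodynamic roughness length
# of the sea surface is parameterized from the friction velocity,
#     z0 = alpha * u_star**2 / g
# The Charnock constant alpha is an empirically tuned number (~0.011-0.018
# over the open ocean); the values used here are purely illustrative.

G = 9.81        # gravitational acceleration, m/s^2
ALPHA = 0.011   # Charnock constant (dimensionless)

def charnock_roughness(u_star, alpha=ALPHA):
    """Roughness length z0 in metres for a given friction velocity u_star in m/s."""
    return alpha * u_star ** 2 / G

for u_star in (0.1, 0.3, 0.6):   # light to strong wind stress
    print(f"u* = {u_star:.1f} m/s  ->  z0 = {charnock_roughness(u_star):.2e} m")
```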
Leif Svalgaard says:
October 31, 2011 at 9:54 pm
Jaye Bass says:
October 31, 2011 at 9:21 pm
Define “sound physics” in terms of a complete, validated and verified, system model for the climate. Get back to me when you are finished.
Updated presentation at: http://www.stanford.edu/group/efmh/FAMbook2dEd/index.html
and ‘come back to me when you are finished’.
Nice reference Leif – Thanks.
Does Dr. Jacobson have a GCM climate code that he’s written (I couldn’t find anything online)? If so, perhaps he has listed for us the specific differential equations and numerical algorithms he is solving in his model and the associated initial/boundary conditions. That would be highly informative. I can wait…
@E.M.Smith
“There are ‘hydraulic computers’ for various uses ”
+++++++++
My father was involved in the early 1960’s in the development of a ‘teaching machine’ that used hydraulic logic circuits that were impressed onto the side of a plastic card the size of a standard business card. These could be removed (as from a countertop PayPoint credit card machine) and swapped for another one, which had the effect of reprogramming the ‘computer’. It had no electronics in it.
Perhaps this was developed from the hydraulic transmission science, or preceded it. Not sure. It had control gates, amplifiers and a controllable ‘current’. The circuits looked like squashed spiders and had common water connection points arranged in a grid so they were interchangeable.
That’s an unassailable point and you will get no argument from me. However, it does not address the issue of whether I should place any confidence in the current predictive value of computer models.
Jeff D says:
October 31, 2011 at 10:47 pm
Cracks me up. We can’t even measure temperature properly. Why would we think that a model could even come close for long-term predictions? Is it worth the effort to try? Yes, but don’t confuse this with reality and try to convince me the end of the world is nigh. Just last Tuesday the models said that it was going to be clear and sunny skies for the next 5 days. It was raining on Thursday. Most of the time they get pretty close, but even short term it is so chaotic as to be almost useless.
Speaking of proper measurements. How accurate are the CO2 readings say for the last 20 years?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
The first question is WHERE.
One of the first “ASSumptions” made is that CO2 is well mixed and uniform. This ASSumption is accepted by most skeptics despite the efforts of a few scientists.
If I have the physics at least sort of correct, the interaction with long wave radiation emitted by the land is much stronger at the bottom of the atmosphere than at the top. This is because the density of air follows the gas laws.
Therefore the “Green House Gas” interaction that is the center of all this is MORE dependent on the CO2 at the bottom of the atmosphere than on the CO2 at the top. I am sure there is some sort of height-above-sea-level versus interaction math, or there should be.
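A rough back-of-the-envelope sketch of that math (my own crude assumptions: an isothermal atmosphere with an 8 km scale height and a uniform 400 ppm mixing ratio) shows how the number of CO2 molecules per cubic metre falls off with height:

```python
# Rough sketch (crude assumptions: isothermal atmosphere, 8 km scale height,
# CO2 mixing ratio fixed at 400 ppm everywhere): the number of CO2 molecules
# per cubic metre falls off with height, so most of the CO2 sits in the
# lowest few kilometres of the atmosphere.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 288.0            # assumed temperature, K
P0 = 101325.0        # surface pressure, Pa
H = 8000.0           # assumed pressure scale height, m
CO2_PPM = 400e-6     # assumed uniform CO2 mixing ratio

def co2_number_density(z):
    """CO2 molecules per cubic metre at height z (m), barometric profile."""
    pressure = P0 * math.exp(-z / H)
    air_molecules = pressure / (K_B * T)   # ideal gas law: n = p / (kB * T)
    return CO2_PPM * air_molecules

for z in (0, 2000, 5000, 10000, 20000):
    print(f"{z:>6} m : {co2_number_density(z):.2e} CO2 molecules per m^3")
```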
So what is the CO2 concentration at the ground level????
WHEAT:
“…The CO2 concentration at 2m (~6 ft) [above] the crop was found to be fairly constant during the daylight hours on single days or from day-to-day throughout the growing season, ranging from about 310 to 320 p.p.m. Nocturnal values were more variable and were between 10 and 200 p.p.m. higher than the daytime values….”
CO2 depletion “…Plant photosynthetic activity can reduce the CO2 within the plant canopy to between 200 and 250 ppm… I observed a ppm drop within a tomato plant canopy just a few minutes after direct sunlight at dawn entered a green house (Harper et al 1979) … photosynthesis can be halted when CO2 concentration approaches 200 ppm… (Morgan 2003) Carbon dioxide is heavier than air and does not easily mix into the greenhouse atmosphere by diffusion… “
From this it is obvious that:
1. Plants can RAPIDLY change the concentration of CO2 from ~400ppm down to as low as 200ppm.
2. The change in CO2 depends on the presence or absence of sunlight.
3. Wind bringing more CO2 to the plants is a bigger factor than diffusion.
4. The amount of sunlight changes every day of the year and throughout each day.
5. Wind changes.
6. Plant activity changes with temperature. (Hard frost can stop some but not all )
So even if there is an “even distribution of CO2” somewhere higher in the atmosphere, at the lowest level of the atmosphere where a lot of the “action” is, the amount of CO2 is chaotic.
(By the way do not forget all the sources like volcanoes or termites emitting CO2)
Seems to me “Modeling” CO2 is a lot more complicated than is ever admitted. Not surprising, since the whole objective from the very start was to show industrialization by man was hurting the environment.
From Mauna Loa’s methodology:
4. In keeping with the requirement that CO2 in background air should be steady, we apply a general “outlier rejection” step…. http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html
…..They also pointed out that the CO2 measurements at current CO2 observatories use a procedure involving a subjective editing (Keeling et al., 1976) of measured data, only representative of a few tens of percent of the total data. … http://www.co2web.info/esef3.htm
The information presented to the public is only a fraction of the data??? This is the “cherry picked” data used in the climate models??? Most “skeptics”, with few exceptions, agree with “the requirement that CO2 in background air should be steady” despite an ever-changing set of sources and sinks in the real CO2 dynamics, with changes mixed only by diffusion and the wind.
Graph of CO2 measurements by different labs: http://img27.imageshack.us/img27/3694/co2manytrends.png
THE POLITICS
Discussion of the politics and science of CO2 measurement : http://www.co2web.info/
For example:
“…The acknowledgement in the paper by Pales & Keeling (1965) describes how the Mauna Loa CO2 monitoring program started:
“The Scripps program to monitor CO2 in the atmosphere and oceans was conceived and initiated by Dr. Roger Revelle who was director of the Scripps Institution of Oceanography while the present work was in progress. Revelle foresaw the geochemical implications of the rise in atmospheric CO2 resulting from fossil fuel combustion, and he sought means to ensure that this ‘large scale geophysical experiment’, as he termed it, would be adequately documented as it occurred. During all stages of the present work Revelle was mentor, consultant, antagonist. He shared with us his broad knowledge of earth science and appreciation for the oceans and atmosphere as they really exist, and he inspired us to keep in sight the objectives which he had originally persuaded us to accept.”…” http://www.co2web.info/ESEF3VO2.pdf
OBJECTIVES???? PERSUADED us to ACCEPT???? What do those words have to do with science?
This is just the CO2; then there is water in all its forms, not to mention the shenanigans with temperature.
In other words the input data is bogus. GIGO
Frank K. says:
November 1, 2011 at 7:20 am
perhaps he has listed for us the specific differential equations and numerical algorithms
He does list all of those that are routinely used. He does not [as far as I know] run a model [expensive] himself.
Leif Svalgaard said:
“If the ‘model’ is plain curve fitting it may indeed have no predictive capability. If the model is based on sound physics it usually will have predictive capability, unless it is too simple.”
Leif, you ought to read a good book on Chaos theory. In a system described EXACTLY by numerous non-linear equations (describing the physics of the problem), with extremely accurately known boundary and initial conditions, the calculated evolution forward still diverges over time. If not all of the equations (physics) are well known, but just approximated (as is true for ocean currents, clouds, cosmic ray effects, aerosols, etc.), and with limited accuracy in initial and boundary conditions, the calculated solution diverges rapidly. That is why the best we can do on weather prediction, even using satellite data, generally falls apart in just a few days. Increasing the time and looking at long time averages (i.e., climate) does not improve the accuracy, since chaotic systems vary at all scales. There is no indication that climate is not chaotic. Your statement is falsified.
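A toy demonstration of the point, for anyone who wants to see it: the Lorenz (1963) system, where the equations are known exactly, run twice from starting points that differ by one part in a million.

```python
# Toy demonstration of sensitive dependence on initial conditions: the Lorenz
# (1963) equations are known exactly, yet two runs whose starting points
# differ by one part in a million end up completely different.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude forward-Euler step of the Lorenz equations (fine for a demo)."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])   # a one-part-in-a-million perturbation

for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}: separation = {np.linalg.norm(a - b):.3e}")
```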
Jeremy says:
November 1, 2011 at 7:42 am
it does not address the issue of whether I should place any confidence in the current predictive value of computer models.
Your confidence [or lack thereof] could come from several sources:
1) intimate and detailed personal knowledge of the models and the physics
2) hearsay [parroting what other people say]
3) your agenda [if any]
4) other [please specify]
Which of these is it?
I really hate this dinosaur of a computer.
The quote, with the missing number restored, was: “I observed a 50 ppm drop within a tomato plant canopy just a few minutes after direct sunlight at dawn entered a green house (Harper et al 1979)“ Source
Leif: “If the model is based on sound physics it usually will have predictive capability, unless it is too simple.”
I understand your point, namely that if *all* the physics were included then we could perhaps expect a decent predictive model. This might be true as an abstract idea, but in the real world, as a practical matter, the point about lack of predictive capability still holds for GCMs, for a couple of reasons.
First, we are not even close to having all the physics included. We don’t know if all the physics included are fully “sound” (though of course the models do include much sound physics); further, the models are almost certainly too simple.
Second, the climate system is essentially made up of particles, many of which interact at the atomic level. Thus, even if *all* the physics principles were understood to the nth degree and included in the model, we would still have to know, for a specific point in time, the location and direction of movement of all of these particles in order to accurately model how the physics would influence them over time. In other words, it isn’t just physics, it’s physical facts: location and movement of particles; interaction with landforms such as mountains, valleys, deserts, oceans; volcanic eruptions; vegetation growth; etc.
I personally have no confidence that there is an ability to model climate 50 or 100 years out when there are a massive number of unknowns, both in physics and in physical facts.
Leif at 8:04 a.m.
You left out a couple of the most obvious (at least as they relate to lack of confidence):
5) A basic realization of the kinds of issues that need to be handled in the models but are admittedly not treated well (clouds, for example).
6) The models’ abysmal failure to accurately predict the last decade plus.
7) A general healthy skepticism about complex, multi-parameter models, which to date have not been demonstrated to have predictive value.
Leif Svalgaard says:
October 31, 2011 at 1:46 pm
Leif, did you actually read the linked paper?
I ask because it expressly points out that a “tuned” or “calibrated” model may have the physics exactly right, perfectly correct, and it may still have absolutely no predictive capability. In addition, if the model is even a little bit wrong, it may not be possible to find any settings that give it predictive capacity.
This idea seems to disturb you greatly, for unknown reasons, and you keep saying it’s wrong. Unfortunately, instead of showing how the linked paper is incorrect, you’ve just repeated your claim over and over.
So did you read the paper, and if so, where is it wrong?
w.
Leonard Weinstein says:
November 1, 2011 at 8:04 am
There is no indication that climate is not chaotic.
There is no indication that it is.
Willis Eschenbach says:
November 1, 2011 at 9:18 am
So did you read the paper, and if so, where is it wrong?
Yes, I did read it, and it is not the paper that is wrong, just you in assuming the conclusion carries over to climate models.
Frank says:
November 1, 2011 at 2:03 am
Since you have not pointed out how or where this is occurring, I’d say you don’t like this result but have no flaws to point out in their reasoning.
Look, it’s an ugly idea, that a tuned model that has the physics perfect may still give wrong predictions. But unless you can show where they are wrong, instead of just asserting that they must be wrong, I’m going with their conclusions.
Why? Because I couldn’t find anything wrong with their experiment, their logic, or their results. I didn’t like the answer either; I have used tuned models in the past, and I too wanted to find something wrong with their work.
But I couldn’t.
w.
Willis Eschenbach says:
November 1, 2011 at 9:18 am
So did you read the paper, and if so, where is it wrong?
To assume that the physics is ‘perfect’ is a stretch as all I see is just fitting [even using genetic algorithms] with no physics.
Leif Svalgaard,
I agree that models are worth some investment. But given all the diagnostic issues that have been reported: correlated surface albedo bias, under-representation of precipitation, under-representation of solar cycle signatures, poor regional performance, questionable cloud feedback, etc., they shouldn’t yet be used for attribution and projection, and that is before the issue of local vs. global optima raised by the paper is considered. There is a good chance that climate might be predictable, given that even a 1960s geography textbook will probably only be off by one to two hundred miles in climate zone locations in the year 2100, even in worst case scenarios.
If you have come this far, you have to agree that the IPCC conclusions and confidence were premature, and reporting model projections without error range estimates due to the diagnostic issues was unscientific.
Leif Svalgaard says:
November 1, 2011 at 9:22 am
Thanks, Leif, for the clarification. In other words, you agree with their summary.
Their results are for a simple tuned model. The climate models are complex tuned models. The paper opens, in its very first paragraph, by specifically mentioning climate models, and it closes by noting that “our models are more complex, have substantive modelling errors, and we have poor quality measurement data.”
… and you say that their conclusions DO NOT “carry over to climate models”?
Really?
Then why did they start by mentioning “climate models” in the very first paragraph of their work? And if not climate models, then which models are they talking about that are “more complex, have substantive modelling errors, and we have poor quality measurement data”? Boeing’s CFD models? I don’t think so. The main models that description brings to mind for me are climate models.
So obviously the authors definitely think that their conclusions “carry over to climate models”.
Yet you don’t think that they are right, you think climate models are different. So … why are they wrong? What about climate models makes them somehow immune to this problem?
Thanks,
w.
Leif Svalgaard says:
November 1, 2011 at 9:39 am
I think you may have misunderstood what the authors have done. They have used the same model in both cases, to remove the physics from the equation entirely.
They used a model with assumed values for three different parameters, and ran it to generate a dataset.
Then, using the same model, they tried to back-calculate the assumed parameter values by “fitting” or “calibrating” the model to that dataset. They expected this would be a trivial test of their model.
Instead, they found something unexpected. They found that using the same model (so that the physics cancel out of the equation), they were generally unable to reproduce the assumed parameters that they had begun with. Additionally, they found that even that ability, to occasionally get the right answers, was greatly degraded by even the smallest errors in the model.
Given the simplicity of their model, this was a surprising outcome, and one that clearly applies to more complex models.
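For anyone who wants to see the flavor of that experiment, here is a toy sketch of my own (a made-up three-parameter model, not the authors’ reservoir model): data are generated from known parameter values, the same model is calibrated back to those data, and a whole range of different parameter sets fit the calibration window essentially equally well while disagreeing about the forecast far more than they disagree about the calibration data.

```python
# Toy sketch of the experiment described above (my own three-parameter model,
# not the authors' reservoir model). Data are generated from known parameters,
# then the SAME model is calibrated to those data. Many different parameter
# sets fit the short calibration window about equally well, yet they disagree
# about the forecast far more than about the calibration data.
import numpy as np

rng = np.random.default_rng(1)

def model(t, A, tau, B):
    """Three-parameter toy model: exponential decline toward a baseline."""
    return A * np.exp(-t / tau) + B

# "Truth": known parameters, a short calibration window, tiny measurement noise
A_true, tau_true, B_true = 1.0, 10.0, 0.5
t_calib = np.linspace(0.0, 3.0, 31)
data = model(t_calib, A_true, tau_true, B_true) + rng.normal(0.0, 0.005, t_calib.size)

t_forecast = 30.0
results = []
for tau in np.linspace(2.0, 40.0, 2000):          # scan the nonlinear parameter
    # For a fixed tau the model is linear in A and B, so calibrate by least squares
    X = np.column_stack([np.exp(-t_calib / tau), np.ones_like(t_calib)])
    (A, B), *_ = np.linalg.lstsq(X, data, rcond=None)
    rmse = np.sqrt(np.mean((X @ np.array([A, B]) - data) ** 2))
    results.append((rmse, tau, model(t_forecast, A, tau, B)))

best = min(r[0] for r in results)
# keep every calibration whose misfit is within 25% of the best one found
good = [r for r in results if r[0] < 1.25 * best]

print(f"true parameters: A = {A_true}, tau = {tau_true}, B = {B_true}")
print(f"'well calibrated' tau values range over {min(r[1] for r in good):.1f} .. {max(r[1] for r in good):.1f}")
print(f"their forecasts at t = 30 range over {min(r[2] for r in good):.2f} .. {max(r[2] for r in good):.2f}")
print(f"the true value at t = 30 is {model(t_forecast, A_true, tau_true, B_true):.2f}")
```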
w.
Willis Eschenbach says:
November 1, 2011 at 9:43 am
Then why did they start by mentioning “climate models” in the very first paragraph of their work?
They and you and most others go wrong by assuming that climate models are curve fitting, as in the paper. They are not; they are solving differential equations forwards with a small time step [every ~5 minutes]. That is why the models do not compare. This is a qualitative difference.
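To illustrate what “solving forwards with a small time step” means, here is a toy sketch (a zero-dimensional energy-balance model of my own choosing, nothing remotely like a full GCM, but the same marching-in-time idea rather than curve fitting):

```python
# Toy illustration of "solving forward with a small time step": a
# zero-dimensional energy-balance model (nothing like a full GCM, but the
# same idea of marching the equations forward in time).
S0 = 1361.0          # solar constant, W/m^2
ALBEDO = 0.30        # planetary albedo (a tuned/parameterized quantity)
EPSILON = 0.61       # effective emissivity (ditto)
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W/m^2/K^4
HEAT_CAP = 4.0e8     # effective heat capacity, J/m^2/K (roughly a 100 m ocean mixed layer)

dt = 300.0           # time step: 5 minutes, in seconds
T = 255.0            # initial temperature, K
steps_per_year = int(365.25 * 24 * 3600 / dt)

for year in range(1, 51):
    for _ in range(steps_per_year):
        absorbed = S0 / 4.0 * (1.0 - ALBEDO)           # incoming shortwave
        emitted = EPSILON * SIGMA * T ** 4             # outgoing longwave
        T += dt * (absorbed - emitted) / HEAT_CAP      # explicit (Euler) step
    if year % 10 == 0:
        print(f"year {year:3d}: T = {T:.2f} K")
```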
Leif Svalgaard,
The fluid and mass balance equations are just a small part of the climate models. Most of the rest of the physics is parameterized. BTW, I believe the time steps are much longer than 5 minutes, which is why they use “white sky albedo”, another parameterization. The time steps are in the range of two to four hours, at least for the AR4 models. Maybe they are doing shorter time steps now.
Martin Lewitt says:
November 1, 2011 at 10:24 am
The fluid and mass balance equations are just a small part of the climate models. Most of the rest of the physics is parameterized. BTW, I believe the time steps are much longer than 5 minutes,
Check Jacobson’s book I referred you to, in order to calibrate your ‘belief’ against reality.
For the time step, check slide 7 of http://uscid.us/08gcc/Brekke%201.PDF
The problem is that even if we perfectly understood the physics, the computer to perfectly calculate the climate would have to be the size of the known universe. 😉
Surely you would agree that cosmic rays influence the climate. That means that, not only do we have to predict the sun’s activity (because that modulates the cosmic rays), we have to predict the activity of the heavenly bodies supplying the cosmic rays in the first place.
What we need is not a knowledge of the underlying physical principles, it is a knowledge of how the climate works in general.
An example would be Ohm’s law. It is empirical. You don’t have to know anything about physics to use Ohm’s law and get a useful result. With a knowledge of Ohm’s law, you can model a linear circuit with great accuracy.
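In code, that empirical approach is about as simple as it gets (the “measurements” below are numbers I made up for illustration):

```python
# Ohm's law as a purely empirical model: infer R from made-up measurements,
# then use V = I * R to predict voltages at currents we never measured.
import numpy as np

# Hypothetical measurements of a resistor (current in A, voltage in V)
current = np.array([0.10, 0.25, 0.50, 0.75, 1.00])
voltage = np.array([0.52, 1.24, 2.49, 3.77, 4.98])

# Least-squares estimate of R for a line through the origin: R = sum(I*V) / sum(I^2)
R = np.dot(current, voltage) / np.dot(current, current)
print(f"fitted resistance: {R:.2f} ohms")

# Predict at currents outside the measured set
for i_new in (0.05, 2.0):
    print(f"predicted voltage at {i_new:.2f} A: {R * i_new:.2f} V")
```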
A similar case is Piers Corbyn. He seems to get accurate (more than average for weather forecasters) climate predictions without resorting to complicated physical models. His main thing seems to be a correlation between solar activity and the climate. As far as I can tell, it is empirical.
It seems likely to me that it is impossible to accurately model a complex chaotic system based on physical properties alone unless the computer is as complex as the system itself. For the climate, that would seem to imply that each element in the model would have to be quite small in order to be strictly deterministic rather than chaotic. A wild-ass guess would be each cubic meter of the atmosphere, ocean and land. OTOH, perhaps we are dealing with butterfly-sized elements.