A Modest Proposal—Forget About Tomorrow

Guest Post by Willis Eschenbach

There’s a lovely 2005 paper I hadn’t seen, put out by the Los Alamos National Laboratory entitled “Our Calibrated Model has No Predictive Value” (PDF).

Figure 1. The Tinkertoy Computer. It also has no predictive value.

The paper’s abstract says it much better than I could:

Abstract: It is often assumed that once a model has been calibrated to measurements then it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability then the assumption is that the model needs to be improved in some way.

Using an example from the petroleum industry, we show that cases can exist where calibrated models have no predictive capability. This occurs even when there is no modelling error present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability.

We have been unable to find ways of identifying which calibrated models will have some predictive capacity and those which will not.

There are three results in there, one expected and two unexpected.

The expected result is that models that are “tuned” or “calibrated” to an existing dataset may very well have no predictive capability. On the face of it this is obvious—if tuning a model were that simple, then someone would be predicting the stock market or next month’s weather with good accuracy.

The next result was totally unexpected. The model may have no predictive capability despite being a perfect model. The model may represent the physics of the situation perfectly and exactly in each and every relevant detail. But if that perfect model is tuned to a dataset, even a perfect dataset, it may have no predictive capability at all.

The third unexpected result was the effect of error. The authors found that if there are even small modeling errors, it may not be possible to find any model with useful predictive capability.

To paraphrase, even if a tuned (“calibrated”) model is perfect about the physics, it may not have predictive capabilities. And if there is even a little error in the model, good luck finding anything useful.

This was a very clean experiment. There were only three tunable parameters. So it looks like John von Neumann was right: with four parameters you can fit an elephant, and with five, make him wiggle his trunk.
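The paper’s first result, that a calibrated fit can be skilful in-sample and worthless out of it, is easy to demonstrate. A minimal sketch (my toy example, not the paper’s reservoir model): calibrate a three-parameter model, a plain quadratic, against a window of noisy observations of a bounded oscillation, then ask it to forecast beyond the window.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Truth": a bounded oscillation, observed with a little noise.
t_cal = np.linspace(0, 2, 40)            # calibration window
obs = np.sin(t_cal) + rng.normal(0, 0.02, t_cal.size)

# Calibrate a three-parameter model (a quadratic) to the observations.
coeffs = np.polyfit(t_cal, obs, deg=2)
model = np.poly1d(coeffs)

# In-sample, the calibrated model looks excellent...
rms_cal = np.sqrt(np.mean((model(t_cal) - np.sin(t_cal)) ** 2))

# ...but its "forecast" beyond the calibration window is useless:
t_fut = np.linspace(6, 8, 40)
rms_fut = np.sqrt(np.mean((model(t_fut) - np.sin(t_fut)) ** 2))

print(f"calibration RMS error: {rms_cal:.3f}")   # small
print(f"forecast RMS error:    {rms_fut:.3f}")   # large
```

Inside the window the fit is excellent; outside it the quadratic runs off to large values while the truth stays bounded, which is the “no predictive value” failure in miniature.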

I leave it to the reader to consider what this means for the various climate models’ ability to simulate the future evolution of the climate, since they definitely are tuned (or, as the study authors say, “calibrated”) models, and they definitely have more than three tunable parameters.

In this regard, a modest proposal. Could climate scientists please just stop predicting stuff for, say, one year? In no other field of scientific endeavor is every finding surrounded by predictions that this “could” or “might” or “possibly” or “perhaps” will lead to something catastrophic in ten or thirty or a hundred years. Could I ask that for one short year, climate scientists actually study the various climate phenomena, rather than try to forecast their future changes? We are still a long way from understanding the climate, so could we just study the present and past climate, and leave the future alone for one year?

We have no practical reason to believe that the current crop of climate models has predictive capability. For example, none of them predicted the current hiatus of 15 years or so in the warming. And as this paper shows, there is certainly no theoretical reason to think they have predictive capability.

Models, including climate models, can sometimes illustrate or provide useful information about the climate. Could we use them for that for a while? Could we use them to try to understand the climate, rather than to predict it?

And 100- and 500-year forecasts? I don’t care if you do call them “scenarios” or whatever the current politically correct term is. Predicting anything 500 years out is a joke. Those, you could stop forever with no loss at all.

I would think that after the unbroken string of totally incorrect prognostications from Paul Ehrlich and John Holdren and James Hansen and other failed serial doomcasters, the alarmists would welcome such a hiatus from having to dream up the newer, better future catastrophe. I mean, it must get tiring for them, seeing their predictions of Thermageddon™ blown out of the water by ugly reality, time after time, without interruption. I think they’d welcome a year where they could forget about tomorrow.

Regards to all,

w.

October 31, 2011 2:41 pm

Willis Eschenbach says:
October 31, 2011 at 2:08 pm
Leif, the paper is about “tuned”, also called “calibrated” models like the current GCMs.
The paper does not say anything about GCMs. And most models are ‘calibrated’ in any case.

Kev-in-Uk
October 31, 2011 2:45 pm

Surely the term ‘calibrated model’ is a bit of an oxymoron, insofar as it is limited to the data used for the calibration, and the extension of said data via the output/prediction is only dependent on the available data? So it’s a bit of a chicken-and-egg situation in respect of output reliability – of course, when one adds in the chaotic non-linear relationships of something like, say, ‘the climate’ – well, the world is your funding oyster…..
I agree with Leif – a sound physics based model is the only likely useable/useful one. (in terms of predictive capabilities).

Kev-in-Uk
October 31, 2011 2:47 pm

Zac says:
October 31, 2011 at 2:37 pm
Ah but Zac – that RN data would not perhaps fit the models! LOL (of course, even if someone like Jones had it – it’s probably been lost by now!)

Jeremy
October 31, 2011 2:56 pm

Leif Svalgaard says:
October 31, 2011 at 2:37 pm
All physics must be included. For the spin-stabilized Pioneers, thermal emission from the spacecraft is likely the cause. For the attitude-stabilized Voyagers there is no anomaly.

So, to apply this to the previous topic: are we sure all physics, in its first- and second-order influences, is included in any model of Earth’s climate? It took how many decades to arrive at RTG radiation imbalance (still not universally accepted as the explanation, btw), and with fewer decades of work on climate models (a much more complicated mathematical problem) we’re expected to believe there is predictive value.
Newton’s laws are simple. Einstein’s improvements on them can be taught to high-schoolers. Yet still the best and the brightest failed to predict what happened with something they built. How much confidence can anyone place in predicting the direction of that which was built by entropy?

Septic Matthew
October 31, 2011 3:02 pm

Leif Svalgaard: If the model is based on sound physics it usually will have predictive capability, unless it is too simple.
“Usually” is not that good a criterion.
“Unless it is too simple” — it’s the failure to make accurate predictions that will reveal that the model is too simple. This applies to calculations of the “climate sensitivity” — whether the omission of cloud dynamics results in “too simple” is yet to be revealed, but will be revealed if model predictions turn out to have been bad enough.

October 31, 2011 3:04 pm

Willis – don’t you forget: they’re still giving prizes out to Ehrlich. Being right was never their business; being gloomy was.

October 31, 2011 3:05 pm

Jeremy says:
October 31, 2011 at 2:56 pm
So, to apply this to the previous topic: are we sure all physics, in its first- and second-order influences, is included in any model of Earth’s climate?
We are not sure, of course, but the remedy is to keep trying to improve them.

Leon Brozyna
October 31, 2011 3:05 pm

Tinkertoy … now that’s something I hadn’t seen in a long time. Used to have so much fun building things with ’em. Let the climate scientists build their models out of tinkertoys … that’ll keep ’em busy for awhile.

Another Gareth
October 31, 2011 3:10 pm

Apologies for the long quote but…
As Kevin Trenberth said in 2007:
“I have often seen references to predictions of future climate by the Intergovernmental Panel on Climate Change (IPCC), presumably through the IPCC assessments (the various chapters in the recently completed Working Group I Fourth Assessment report can be accessed through this listing). In fact, since the last report it is also often stated that the science is settled or done and now is the time for action.
In fact there are no predictions by IPCC at all. And there never have been. The IPCC instead proffers “what if” projections of future climate that correspond to certain emissions scenarios. There are a number of assumptions that go into these emissions scenarios. They are intended to cover a range of possible self consistent “story lines” that then provide decision makers with information about which paths might be more desirable. But they do not consider many things like the recovery of the ozone layer, for instance, or observed trends in forcing agents. There is no estimate, even probabilistically, as to the likelihood of any emissions scenario and no best guess.
Even if there were, the projections are based on model results that provide differences of the future climate relative to that today. None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice, and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models.”
In 2007 the problem with the climate models wasn’t that they were accurate but somehow unreliable even when calibrated – it was that they were nothing like Earth. They were idealised caricatures of what might be: linear extrapolations, fudges and guesswork that had at their heart the SRES family of projections.
Have they improved since then? The paper Willis has highlighted, and the Scientific American article linked to by timg56, suggest that even if things have improved in terms of calibration and replicating the climate properly, the model output may not be any more predictive. Even so, prediction was not, at least as late as 2007, a capability provided by the models. All they could do was provide a whiff of credibility for policy makers.

DocMartyn
October 31, 2011 3:13 pm

“Leif Svalgaard
If the ‘model’ is plain curve fitting it may indeed have no predictive capability. If the model is based on sound physics it usually will have predictive capability, unless it is too simple.”
This is what the enzymologists believed; then they found out that even if you knew the rate constants of even simple enzymatic pathways, classical descriptions didn’t work. This led to metabolic control theory, then control theory. The biochemists were of course re-inventing the wheel to a large extent; economists had already made the leap some 50 years earlier: the ideas of control coefficients and elasticity, along with a switch to steady-state kinetic analysis rather than using near-equilibrium thermodynamics to explain non-equilibrium states.
Hard as you may find it to believe, Leif, complex models that are trained are in fact fits, and are about as useful as a one-legged man at an arse-kicking party.
This has been shown time and again.

Baa Humbug
October 31, 2011 3:40 pm

Willis, Forgetaboutit
http://www.youtube.com/watch?v=Zf0ZyoUn7Vk

kwik
October 31, 2011 3:42 pm

From the IPCC TAR:
“In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible”
http://www.realclimategate.org/2010/11/are-computer-models-reliable-can-they-predict-climate/

Richard S Courtney
October 31, 2011 3:43 pm

Leif Svalgaard:
Several people have rebutted your assertion at October 31, 2011 at 1:46 pm which said;
“If the ‘model’ is plain curve fitting it may indeed have no predictive capability. If the model is based on sound physics it usually will have predictive capability, unless it is too simple.”
And you have attempted to refute the rebuttals. However, you miss the important point; viz.
No model (of any kind) should be assumed to have more predictive skill than it has been demonstrated to possess.
Assuming undemonstrated model predictive skill is pseudoscience of precisely the same type as astrology. But no climate model has existed for 30, 50 or 100 years and, therefore, it is not possible for any of them to have demonstrated predictive skill over such times.
In other words, the predictions of future climate by
(a) the climate models
and
(b) examination of chicken entrails
have equal demonstrated forecast skill (i.e. none) and, therefore, they are deserving of equal respect (i.e. none).
Another Gareth:
At October 31, 2011 at 3:10 pm you quote something Kevin Trenberth said in 2007: i.e.
“I have often seen references to predictions of future climate by the Intergovernmental Panel on Climate Change (IPCC), presumably through the IPCC assessments (the various chapters in the recently completed Working Group I Fourth Assessment report can be accessed through this listing). In fact, since the last report it is also often stated that the science is settled or done and now is the time for action.
In fact there are no predictions by IPCC at all. And there never have been. The IPCC instead proffers “what if” projections of future climate that correspond to certain emissions scenarios. etc.”
A decade ago, the same assertion of “there are no predictions by IPCC at all” was put to me by a questioner at a Meeting in the US Congress, Washington, DC. I replied saying;
“Sir, you say the IPCC does not make predictions. The IPCC says the world is going to warm. I call that a prediction.”
The questioner studied his shoes.
Richard

October 31, 2011 4:00 pm

E.M.Smith says:
BTW, love the tinker toy computer. Folks forget that if you can make an AND and OR gate (or even just a NAND gate) you can make a computer out of anything. Even ropes and pulleys. There are ‘hydraulic computers’ for various uses (even in old non-electronic automatic transmissions – of a sort) …

The newer ones are all electronic, but the fuel control units on older jet engines are powered by fuel pressure.
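E.M.Smith’s point about gate universality is easy to make concrete: once you have NAND, every other gate, and hence arithmetic, follows, regardless of whether the NAND is built from transistors, Tinkertoys, or ropes and pulleys. A minimal sketch:

```python
def nand(a, b):
    """The one primitive; could as easily be Tinkertoys or ropes and pulleys."""
    return 0 if (a and b) else 1

# Every other gate wired from NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Adds two bits: returns (sum, carry), built entirely from NAND."""
    return xor_(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))
```

Chain enough of these half-adders together (via a full adder) and you have the arithmetic core of a computer, in whatever substrate you care to build the NAND from.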

DocMartyn
October 31, 2011 4:17 pm

This is a computer; The MONIAC (Monetary National Income Analogue Computer) also known as the Phillips Hydraulic Computer and the Financephalograph, was created in 1949 by the New Zealand economist Bill Phillips. It is a water based analogue computer which used fluidic logic to model the workings of an economy.
http://en.wikipedia.org/wiki/MONIAC_Computer
It actually worked as well as economic forecasts normally work.

Stephen Goldstein
October 31, 2011 4:31 pm

I don’t object to the effort to build, tune and validate the models . . . modelers should be free to model to their heart’s content.
What I object to is taking action on the basis of predictions from predictive models the accuracy of which has not been demonstrated over even short-for-climate periods. Ooops! That’s wrong!
What I object to is taking action on the basis of predictions from predictive models the inaccuracy of which has been amply demonstrated.
Thus, with apologies to Robert Goodloe Harper, I propose, millions for research but not one cent for remediation.

Paddy
October 31, 2011 4:32 pm

Willis, I am the guy that suggested in a personal e-mail that you are the Eric Hoffer of climate science. Every post or comment by you that I have read reinforces my opinion. As for this post, all that I can say is “Wow, you sure know how to stir the pot.”
Please continue. The climate science pot needs constant attention.

Legatus
October 31, 2011 4:46 pm

Leif Svalgaard says:
If the ‘model’ is plain curve fitting it may indeed have no predictive capability. If the model is based on sound physics it usually will have predictive capability, unless it is too simple.

Considering what we now know about the climate, such things as the variability of solar UV radiation, the effects of biology on climate, variability of cosmic rays and their possible effect on clouds, and especially the fact that clouds appear to have much more complex effects than the models show, how can current models not be too simple? Some of these factors appear to dwarf the effects of CO2. There is also the fact that we have large data holes and areas where the data is very suspect. And those are just some of the known unknowns…
In addition, I can make a model with perfect physics, but if you allow me to hide my data and my code, I can get you any prediction you like, highest bidder gets it.
Finally, when it comes to CO2 and its effects, which are what we are primarily concerned with, the increase of downward longwave radiation due to increasing CO2 over time has never actually been directly measured. While the physics seems perfect for the models, until we measure it out there, in the real world, and see whether it is actually increasing or whether some other effects (such as evaporation and clouds, for instance) are overwhelming it, we really don’t have any way to tell if the models’ output is correct. After all, this idea, increasing downward radiation, is the very central idea of AGW. If we could somehow directly observe this, the central idea of AGW, and see if it is in line with the model predictions, then at least we would have something to verify or falsify the central idea behind the models. Right now, we are merely imperfectly and partially measuring what we believe are secondary effects from that. We are merely making grand assumptions that it will work out there the way it works in our models.
Note that one of the big problems is that, indeed, the models were specifically made to show that increasing CO2 has a catastrophic effect on climate. What we need are models that exist simply to understand the climate, not to deliberately try to show the catastrophic effect of just one variable among a great many that affect the climate. We need to stop deliberately trying to prove something and start to do the spadework of understanding the climate enough to be able to prove anything.
In short, the models are, in fact, too simple.

kuhnkat
October 31, 2011 4:51 pm

“The model may have no predictive capability despite being a perfect model. The model may represent the physics of the situation perfectly and exactly in each and every relevant detail. But if that perfect model is tuned to a dataset, even a perfect dataset, it may have no predictive capability at all.”
DANG!!! I missed that one. Of course, since the data set will be limited, and possibly not perfect either, it will not be able to represent the full range of the possible data and activity/cycles. It will impart a limited range to the model and not an accurate representation of the interactions or functions.

Legatus
October 31, 2011 4:59 pm

It actually worked as well as economic forecasts normally work.

I have seen this headline many times: “economists were surprised today when…”. Predicting the economy is similar to predicting future climate, especially when the climate predictors are not trying to predict what will happen, but what they wish to happen. Basically, when you try to predict the economy, you are trying to predict the actions of a very great many unpredictable people, and one of the chief unpredictabilities is your own biases. There are simply too many variables, and you are one of them, and your variables (biases and wishes) can overwhelm all the others.
Many times, the economists are simply telling the government types what they want to hear. The universities, paid for by the government, are teaching them how to.

Konrad
October 31, 2011 5:03 pm

Models can be very useful tools. I use computer modelling daily in engineering and design. These are models of mechanical structures and they do have useful predictive capability. How useful or potentially misleading they are depends in part on the user. A good user knows many of the model’s limitations. As someone who has worked on the shop floor, I am well aware of the difference between reality and a simplified model prepared for FEA.
Understanding the limitations of resolution and inaccuracies in initial conditions has led users of weather models to use a Monte Carlo approach to weather prediction with computer models. Multiple runs with intentionally perturbed initial conditions can show if a prediction is robust or if minor differences in the initial conditions will cause widely differing results, and thereby an unreliable prediction.
Climate modelling seems to be a comprehensive collection of all known problems with computer modelling. First, the basic physics engine is wrong: the effect of backscattered LWIR is set over 200% too high for 71% of the Earth’s surface. Then water vapour feedback is set to strongly positive and aerosols are used as a tuning knob. Additionally, the resolution is poor. Worse, data for initial conditions is limited, and some of it is itself the output of previous models. Further, the users seem unwilling to accept the limitations of the models. Meteorologists would have called the resulting spaghetti graphs a non-robust result with no predictive value. Climatologists average the mess and call it a “scenario”. By using computer models as a political tool, climate modellers seem determined to destroy all trust in computer modelling.
On the other hand, a cynical person could claim that climate models work perfectly. They have exploited the previous successes in other areas of computer modelling. They look like science and produce authoritative looking graphs. They have successfully generated over 80 billion dollars in funding and could defeat democracy where force of arms has failed. As economic and political tools climate models have been very effective.
Willis’ request for a prediction hiatus may seem reasonable; however, it has long been said: “Never try to explain something to someone whose job depends on them not understanding it.”
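The perturbed-ensemble approach described above can be sketched with any chaotic system. Here the logistic map stands in for a weather model (purely an illustration, not a climate code): start fifty ensemble members from almost identical initial conditions and watch the spread grow until the forecast is no better than climatology.

```python
import numpy as np

def logistic_step(x, r=4.0):
    """One step of the chaotic logistic map, standing in for a 'weather model'."""
    return r * x * (1.0 - x)

# Ensemble: the same model run from slightly perturbed initial conditions,
# the perturbations playing the role of observation error.
rng = np.random.default_rng(1)
x = 0.4 + rng.normal(0, 1e-6, size=50)

for step in range(1, 31):
    x = logistic_step(x)
    if step in (5, 15, 30):
        print(f"step {step:2d}: ensemble spread = {x.std():.4f}")
# Early on the spread stays tiny (a robust forecast); by the later steps the
# members have decorrelated and the "prediction" is just the climatology of
# the map: no predictive value at that lead time.
```

The same Monte Carlo logic is what tells a forecaster when a spaghetti plot has become a non-robust result.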

peter_dtm
October 31, 2011 5:04 pm

Zac says:
October 31, 2011 at 2:37 pm
Ref the use of RN deck log books and ‘met obs’
I asked a long time ago why the hundreds of years of Merchant Navy deck log books weren’t being used as a climate data source, along with ‘Met Obs’ (which I think started not long after radio).
I was trampled on – no quality control – bad readings – unknown errors – the anemometers were wrong; the chronometers were wrong (so the synoptic hours were wrong); the barometers were wrong and not calibrated (ha bloody ha !); the water temperature was wrong ‘cos the bucket wasn’t deep enough (SURFACE temp); temperature was wrong ‘cos of the engine cooling water…. .
I think the responses I got from the AGW advocates that I asked (initially in all innocence) were the final nail in the personal journey of becoming an AGW denier.
I spent too many hours as a ship’s Radio Officer (helping to prepare and) sending OBS to accept being told so many people had spent so much time and effort for nothing – especially as what they claimed was patently wrong. (I remember being complained at by a 3rd mate about ‘bloody sparkies’ always wanting the OBS done properly, not fudged… MET OBS took about 10 minutes to tabulate – 3 minutes to code and anything up to 30 minutes to send – 3 minutes on the key to actually send after 25 minutes of calling & queuing.)
And of course; following Anthony’s surface station expose; I would bet that ship’s OBS are a hell of a lot more accurate and reliable than many land based weather stations – even after some of the lousy morse operators had managed to mangle it all !

Chris B
October 31, 2011 5:11 pm

Zac says:
He also went on to say what I posted only a few days ago, that every RN ship has recorded the weather (temp, pressure, wind speed and direction, cloud type etc ), seawater temp, sea state, ship’s postion, speed and course on every watch without fail for a very long time and no doubt other navies have too.
—————
Dammit Zac, you just gave Muller another project to inflate his ego. BErkeley Aggregate Naval Observations (BEANO). For non UK readers the BEANO was a kid’s comic in the 1950s. Muller’s BEST project is about as useful but less entertaining.

October 31, 2011 5:12 pm

DocMartyn says:
October 31, 2011 at 3:13 pm
Hard as you may find it to believe, Leif, complex models that are trained are in fact fits, and are about as useful as a one-legged man at an arse-kicking party.
This has been shown time and again.

I think that is what I have been saying: that mere fits or training won’t work.

October 31, 2011 5:39 pm

The final line in the paper:
“Our concern is that if we cannot successfully calibrate and make predictions with a model as simple as this, where does this leave us when our models are more complex, have substantive modelling errors, and we have poor quality measurement data.”
To say climate models are “more complex” than this little reservoir model would be the understatement of all time; the climate would be tens of orders of magnitude more complex. Throw in substantial modeling errors and poor data quality and you see where we are today.
I think another key statement is that calibrated models do have some utility over some period of time. We have all seen this – the GFS (or choose your favorite medium-range model) is usually pretty good for days 1-3, but get out to day 10 and beyond and its predictive capability is severely diminished. So the question is: what is the utility period for the current set of climate models? It would be interesting to do a similar study with various subsets of past data and see how long the utility period is – I am guessing it is substantially less than the 100 years commonly seen on IPCC graphs.
Another interesting statement is that the authors clearly see the connection to climate models, with a direct reference to them at the start of the paper.
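The proposed experiment, finding the utility period from subsets of past data, can be sketched on synthetic data. This toy example (an AR(1) series standing in for a climate record, my assumption) measures the lead time at which a simple persistence forecast stops beating climatology:

```python
import numpy as np

rng = np.random.default_rng(2)

# A synthetic "climate" record: a persistent AR(1) process, purely illustrative.
n, phi = 20000, 0.9
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

# Climatology benchmark: the error of always forecasting the long-term mean.
clim_rmse = x.std()

# Utility period: the lead time at which a persistence forecast
# ("step t+k looks like step t") stops beating climatology.
for k in range(1, 60):
    rmse = np.sqrt(np.mean((x[k:] - x[:-k]) ** 2))
    if rmse >= clim_rmse:
        print(f"utility period: about {k} steps")
        break
```

Hindcasting a real model against held-out past data in the same way, rather than against the data it was calibrated on, would give an honest estimate of its utility period.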