A Modest Proposal—Forget About Tomorrow

Guest Post by Willis Eschenbach

There’s a lovely 2005 paper I hadn’t seen, put out by the Los Alamos National Laboratory entitled “Our Calibrated Model has No Predictive Value” (PDF).

Figure 1. The Tinkertoy Computer. It also has no predictive value.

The paper’s abstract says it much better than I could:

Abstract: It is often assumed that once a model has been calibrated to measurements then it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability then the assumption is that the model needs to be improved in some way.

Using an example from the petroleum industry, we show that cases can exist where calibrated models have no predictive capability. This occurs even when there is no modelling error present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability.

We have been unable to find ways of identifying which calibrated models will have some predictive capacity and those which will not.

There are three results in there, one expected and two unexpected.

The expected result is that models that are “tuned” or “calibrated” to an existing dataset may very well have no predictive capability. On the face of it this is obvious—if tuning a model were that simple, someone would already be predicting the stock market or next month’s weather with good accuracy.

The next result was totally unexpected. The model may have no predictive capability despite being a perfect model. The model may represent the physics of the situation perfectly and exactly in each and every relevant detail. But if that perfect model is tuned to a dataset, even a perfect dataset, it may have no predictive capability at all.

The third unexpected result was the effect of error. The authors found that if there are even small modeling errors, it may not be possible to find any model with useful predictive capability.

To paraphrase, even if a tuned (“calibrated”) model is perfect about the physics, it may not have predictive capabilities. And if there is even a little error in the model, good luck finding anything useful.
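To see how this can happen, here is a minimal sketch in Python (emphatically not the paper's reservoir model; the functional form, noise level, and misfit tolerance are all invented for illustration). Many different parameter sets "calibrate" to a short record about equally well, then disagree wildly about the future:

```python
# Toy illustration of non-unique calibration (invented model, not the
# paper's): many parameter sets fit the calibration window, yet their
# forecasts diverge.
import numpy as np

rng = np.random.default_rng(42)

def model(t, a, b, c):
    """A 3-parameter toy 'physics': linear trend plus an oscillation."""
    return a * t + b * np.sin(c * t)

# The "truth" and a short, slightly noisy calibration record
true = (0.5, 2.0, 0.9)
t_cal = np.linspace(0, 5, 30)
obs = model(t_cal, *true) + rng.normal(0, 0.1, t_cal.size)

# Brute-force calibration: keep every parameter set that fits the record
# about as well as the data's own noise level allows.
calibrated = [
    (a, b, c)
    for a in np.linspace(0, 1, 21)
    for b in np.linspace(0, 4, 21)
    for c in np.linspace(0.1, 3, 30)
    if np.sqrt(np.mean((model(t_cal, a, b, c) - obs) ** 2)) < 0.2
]

# Ask each "calibrated" model about a time five times beyond the record
forecasts = [model(25.0, *p) for p in calibrated]
print(f"{len(calibrated)} calibrated models; forecasts at t=25 span "
      f"{min(forecasts):.1f} to {max(forecasts):.1f} "
      f"(truth: {model(25.0, *true):.1f})")
```

Every one of those parameter sets passes calibration by any reasonable misfit standard; the spread of their forecasts is the point.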

This was a very clean experiment. There were only three tunable parameters. So it looks like John Von Neumann was right, you can fit an elephant with three parameters, and with four parameters, make him wiggle his trunk.

I leave it to the reader to consider what this means for the various climate models’ ability to simulate the future evolution of the climate, as they are definitely tuned (or, as the study authors put it, “calibrated”) models, and they definitely have more than three tunable parameters.

In this regard, a modest proposal. Could climate scientists please just stop predicting stuff for, say, one year? In no other field of scientific endeavor is every finding surrounded by predictions that this “could” or “might” or “possibly” or “perhaps” will lead to something catastrophic in ten or thirty or a hundred years. Could I ask that for one short year, climate scientists actually study the various climate phenomena, rather than try to forecast their future changes? We are still a long way from understanding the climate, so could we just study the present and past climate, and leave the future alone for one year?

We have no practical reason to believe that the current crop of climate models has predictive capability. For example, none of them predicted the current 15-year or so hiatus in the warming. And as this paper shows, there is certainly no theoretical reason to think they have predictive capability.

Models, including climate models, can sometimes illustrate processes or provide useful information about the climate. Could we use them for that for a while? Could we use them to try to understand the climate, rather than to predict the climate?

And 100- and 500-year forecasts? I don’t care if you do call them “scenarios” or whatever the current politically correct term is. Predicting anything 500 years out is a joke. Those, you could stop forever with no loss at all.

I would think that after the unbroken string of totally incorrect prognostications from Paul Ehrlich and John Holdren and James Hansen and other failed serial doomcasters, the alarmists would welcome such a hiatus from having to dream up the newer, better future catastrophe. I mean, it must get tiring for them, seeing their predictions of Thermageddon™ blown out of the water by ugly reality, time after time, without interruption. I think they’d welcome a year where they could forget about tomorrow.

Regards to all,

w.

DirkH
October 31, 2011 1:33 pm

The AGW modelers are just a money sink. Money sinks are crucial for Keynesian economics to work. Without money sinks, the money wouldn’t know where to go.

E.M.Smith
Editor
October 31, 2011 1:35 pm

Wonderfully done. Concise and clean. Bravo.
BTW, love the tinker toy computer. Folks forget that if you can make an AND and OR gate (or even just a NAND gate) you can make a computer out of anything. Even ropes and pulleys. There are ‘hydraulic computers’ for various uses (even in old non-electronic automatic transmissions – of a sort) and you could make one out of jelly beans and Rube Goldberg buckets et al. if you liked. Jevons made a “logic piano” back in the 1800s.
http://www.rutherfordjournal.org/article010103.html
Nice to see folks keeping up the ‘vision’ in a modern form 😉

John Robertson
October 31, 2011 1:36 pm

Dear Mr. Eschenbach
A slight typo:
“So it looks like John Von Neumann was right, you can fit an elephant with three parameters, and with four parameters, make him wiggle his trunk.”
however the PDF states:
“you can fit an elephant with four parameters and with five, make him wiggle his trunk.”.
Appreciate the article and the PDF!
John :-#)#

Martin Clauss
October 31, 2011 1:37 pm

Willis,
Great article. As far as your proposal goes, rather than no predictions for just ONE year, I would like to see none for at least THREE years . . . with the latest in the solar cycle and ocean cycles, one year to me isn’t long enough to hold off on the predictions . . .

Mark ro
October 31, 2011 1:38 pm

“I would think that after the unbroken string of totally incorrect prognostications from Paul Ehrlich and John Holdren and James Hansen and other failed serial doomcasters, the alarmists would welcome such a hiatus from having to dream up the newer, better future catastrophe.”
Nope, Mann is hard at it, but this time Mann’s appearance in Minneapolis gets occupied by protesters!
http://www.msnbc.msn.com/id/45107032/ns/us_news-environment/#.Tq8DE_SP5LI

Shevva
October 31, 2011 1:39 pm

Good luck with that Willis. It’s already known there are no clothes; if there are no models either, this fashion show of the evil ways of man would be for nothing.
Although you would hope that at some point these model experts would realise that their life’s work really doesn’t cut the mustard, and simply put their feet up and milk the cow.
Oh wait…..never mind.

Septic Matthew
October 31, 2011 1:39 pm

That is a great find! Many thanks.

John T
October 31, 2011 1:40 pm

“Could climate scientists please just stop predicting stuff for maybe say one year?”
My finely tuned skeptic model predicts that won’t happen.

Leif Svalgaard
October 31, 2011 1:46 pm

If the ‘model’ is plain curve fitting it may indeed have no predictive capability. If the model is based on sound physics it usually will have predictive capability, unless it is too simple.

Septic Matthew
October 31, 2011 1:47 pm

Willis: I would think that after the unbroken string of totally incorrect prognostications from Paul Ehrlich and John Holdren and James Hansen and other failed serial doomcasters, the alarmists would welcome such a hiatus from having to dream up the newer, better future catastrophe.
That has been said many times, many ways (That is not a criticism of you or of your post.) After having been wrong now for 4+ decades, shouldn’t Holdren and Ehrlich at least be modest? Except for the possibility that they exemplify the effects of the same neurological mechanism by which other people think they understand The Revelation of Saint Paul, it’s mystifying.

Pete in Cumbria UK
October 31, 2011 1:48 pm

Even the inventor of the computer ‘had problems’….
“On two occasions, I have been asked [by members of Parliament],
‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’
I am not able to rightly apprehend the kind of confusion of ideas that could provoke such a question.”

– Charles Babbage (1791-1871)

October 31, 2011 2:00 pm

Nice post, Willis. Unfortunately all you’ll get from the peanut gallery is lots of fingers in ears and off key renditions of “La, la, la, I can’t hear you!”

Latitude
October 31, 2011 2:02 pm

When their sound physics models start doing something useful, like winning the lotto…
…I’ll start paying attention
..or even predicting tomorrow’s weather….or hurricanes

DirkH
October 31, 2011 2:04 pm

From what I read it looks like their model has a grid for the simulation, each cell in the grid represents a block of “good sand” or “poor sand”, and the permeability of each cell is randomly perturbed. So one simulation run corresponds to a run of a nontrivial 2-D cellular automaton. (At least I think they do only 2 dimensions, somebody correct me if I’m wrong)
Wolfram has in “A New Kind Of Science” demonstrated that even one-dimensional cellular automata with only one bit of state information per cell can show irreducibly complex behaviour; i.e. behaviour that cannot be predicted without performing the entire computation of the cellular automaton; there is no possible shortcut. He actually has a classification of his automata, a lot show trivial behaviour but then there are some that fall into this complex class. (One could say chaotic if the automaton also had an infinite number of cells; but that’s not necessary for the complex behaviour)
I won’t even go into this calibration of parameters… I think it’s safe to say that their model is, compared to what Wolfram did, already hopelessly complex; climate models even more so.
Maybe the people who still believe in predictive power of climate models would be well advised to get themselves a copy of Wolfram’s ANKOS.
Here’s a lecture by Wolfram about it.
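For readers who have not seen it, here is a minimal sketch of the kind of automaton DirkH describes (Wolfram's Rule 30, one bit of state per cell), written in Python for this thread rather than taken from Wolfram:

```python
# Rule 30: one-dimensional, one bit per cell, yet its central column is
# complex enough that Mathematica once used it as a random generator.
import numpy as np

def rule30_step(cells):
    """One update: new cell = left XOR (center OR right), wrapped edges."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    return left ^ (cells | right)

cells = np.zeros(63, dtype=int)
cells[31] = 1                       # a single seed cell
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```

So far as anyone knows, there is no shortcut formula that jumps ahead to row 10,000 of the central column without computing the 9,999 rows before it; that is the computational irreducibility DirkH mentions.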

More Soylent Green@
October 31, 2011 2:05 pm

Willis:
The models are tuned to match past climate, without a strong understanding of the factors involved and the complex relationships between those factors. There is quite a bit of fudge factoring and outright guessing in the models. For example, Hansen recently talked about the failures of the models and their overestimation of ocean heating. The problem was, they just made up that ocean heating based upon their theories, not any facts. It was the only way they could think of to make the models work.
It’s sort of like how GISTEMP works — the models assume the poles will warm faster, and since there aren’t any surface stations at the poles, they just project the temps based upon the models. Voila! Global warming!
Climate models should best be considered as hypotheses. As we’ve seen time and time again, they fail to properly predict the real climate. This should take the modelers back to the drawing board, but there is too much time, money, prestige and ego invested, and too much at stake, to admit they are wrong and start over.

Editor
October 31, 2011 2:15 pm

Pete in Cumbria
I can see the church where Babbage was married from my living room window
http://en.wikipedia.org/wiki/Charles_Babbage
tonyb

timg56
October 31, 2011 2:15 pm

Willis,
This looks like it may be the paper referenced in the recent Scientific American article.
http://www.scientificamerican.com/article.cfm?id=finance-why-economic-models-are-always-wrong

Jeremy
October 31, 2011 2:17 pm

Leif Svalgaard says:
October 31, 2011 at 1:46 pm
If the ‘model’ is plain curve fitting it may indeed have no predictive capability. If the model is based on sound physics it usually will have predictive capability, unless it is too simple.

And then there’s Pioneer 10… off course with no explanation. Sound physics, prediction wrong.

pwl
October 31, 2011 2:20 pm

As Stephen Wolfram has shown in Chapter Two of A New Kind of Science, some simple systems can generate complexity as complex as any complex system can… and this is the result of the simple system generating internal randomness. What this means is that you can not predict the next state of the system let alone states equivalent to a year or fifty years or five hundred years into the future. The only way to know the future states of these systems is to observe them as they unfold one state change at a time.
Many if not all natural systems, such as those we see in climate, are this kind of simple system that generates internal randomness and maximal complexity, which means that the only way to know the future states of the climate system is to watch them unfold in real time. This renders predicting the future of climate impossible from first principles.
Note that this internal randomness is not the same as randomness from outside of the system ( which can still be occurring in climate systems for sure) and doesn’t derive from initial conditions.
http://pathstoknowledge.net/2009/05/01/a-new-kind-of-science-by-stephen-wolfram

Larry Fields
October 31, 2011 2:20 pm

I’d like to add another Larry’s Law. (I’ve lost count of what the number should be.)
No clothes does not necessarily imply no emperors.

D. Cohen
October 31, 2011 2:22 pm

Two points from someone who has worked inside “Big Science”:
1) Asking climate scientists to stop making people worried by predicting possibly bad outcomes is equivalent to asking them to stop looking for new grants to replace the ones that are running out.
2) When a group of scientists and engineers are sitting around a table trying to reach a decision, those with a model (however bad) to back up their suggestions will always have a tremendous advantage over those who have nothing but skepticism and general principles. The whole point of a decision by committee is to lessen the blame for bad decisions, and when committee members can in turn shift blame to a model — by saying they were just following its suggestions — well, let’s just say that this is almost always too tempting to resist. A corollary to this point is that models with more bells and whistles are more influential than those with fewer.

Eric Dailey
October 31, 2011 2:34 pm

Willis, all these doom casters are good for media sales of newspapers and TV shows so they get plenty of encouragement for what they do. If you don’t know how deeply corrupt the media is then you’re not paying attention.

Zac
October 31, 2011 2:37 pm

Interesting. Only this evening did a chap on BBC Radio 4’s Material World suggest that climate models should be used to predict the past and that way they can be checked for accuracy.
He also went on to say what I posted only a few days ago, that every RN ship has recorded the weather (temp, pressure, wind speed and direction, cloud type etc.), seawater temp, sea state, ship’s position, speed and course on every watch without fail for a very long time, and no doubt other navies have too.

Leif Svalgaard
October 31, 2011 2:37 pm

Jeremy says:
October 31, 2011 at 2:17 pm
And then there’s Pioneer 10… off course with no explanation. Sound physics, prediction wrong.
All physics must be included. For the spin-stabilized Pioneers, thermal emission from the spacecraft is likely the cause. For the attitude-stabilized Voyagers there is no anomaly.

Leif Svalgaard
October 31, 2011 2:41 pm

Willis Eschenbach says:
October 31, 2011 at 2:08 pm
Leif, the paper is about “tuned”, also called “calibrated” models like the current GCMs.
The paper does not say anything about GCMs. And most models are ‘calibrated’ in any case.

Kev-in-Uk
October 31, 2011 2:45 pm

Surely the term ‘calibrated model’ is a bit of an oxymoron, insofar as the model is limited to the data used for the calibration, and the extension of said data via the output/prediction is only as good as the available data? So it’s a bit of a chicken-and-egg situation in respect of output reliability – of course, when one adds in the chaotic non-linear relationships of something like, say, ‘the climate’ – well, the world is your funding oyster…..
I agree with Leif – a sound physics-based model is the only likely usable/useful one (in terms of predictive capabilities).

Kev-in-Uk
October 31, 2011 2:47 pm

Zac says:
October 31, 2011 at 2:37 pm
Ah but Zac – that RN data would not perhaps fit the models! LOL (of course, even if someone like Jones had it – it’s probably been lost by now!)

Jeremy
October 31, 2011 2:56 pm

Leif Svalgaard says:
October 31, 2011 at 2:37 pm
All physics must be included. For the spin-stabilized Pioneers, thermal emission from the spacecraft is likely the cause. For the attitude-stabilized Voyagers there is no anomaly.

So, to apply this to the previous topic then, are we sure all physics in their first and second order influences are included in any model of Earth’s climate? It took how many decades to arrive at the RTG radiation imbalance (still not universally accepted as the explanation, btw), and with fewer decades spent on climate models (a much more complicated mathematical problem) we’re expected to believe there is predictive value.
Newton’s laws are simple. Einstein’s improvements on them can be taught to high-schoolers. Yet still the best and the brightest failed to predict what happened with something they built. How much confidence can anyone place in predicting the direction of that which was built by entropy?

Septic Matthew
October 31, 2011 3:02 pm

Leif Svalgaard: If the model is based on sound physics it usually will have predictive capability, unless it is too simple.
“Usually” is not that good a criterion.
“Unless it is too simple” — It’s the failure to make accurate predictions that will reveal that the model is too simple. This applies to calculations of the “climate sensitivity” — whether the omission of cloud dynamics results in “too simple” is yet to be revealed, but will be revealed if model predictions turn out to have been bad enough.

October 31, 2011 3:04 pm

Willis – don’t you forget: they’re still giving prizes out to Ehrlich. Being right was never their business; being gloomy was.

Leif Svalgaard
October 31, 2011 3:05 pm

Jeremy says:
October 31, 2011 at 2:56 pm
So, to apply this to the previous topic then, are we sure all physics in their first and second order influences are included in any model of earths climate?
We are not sure, of course, but the remedy is to keep trying to improve them.

Leon Brozyna
October 31, 2011 3:05 pm

Tinkertoy … now that’s something I hadn’t seen in a long time. Used to have so much fun building things with ’em. Let the climate scientists build their models out of tinkertoys … that’ll keep ’em busy for awhile.

Another Gareth
October 31, 2011 3:10 pm

Apologies for the long quote but…
As Kevin Trenberth said in 2007:
“I have often seen references to predictions of future climate by the Intergovernmental Panel on Climate Change (IPCC), presumably through the IPCC assessments (the various chapters in the recently completed Working Group I Fourth Assessment Report can be accessed through this listing). In fact, since the last report it is also often stated that the science is settled or done and now is the time for action.
In fact there are no predictions by IPCC at all. And there never have been. The IPCC instead proffers “what if” projections of future climate that correspond to certain emissions scenarios. There are a number of assumptions that go into these emissions scenarios. They are intended to cover a range of possible self consistent “story lines” that then provide decision makers with information about which paths might be more desirable. But they do not consider many things like the recovery of the ozone layer, for instance, or observed trends in forcing agents. There is no estimate, even probabilistically, as to the likelihood of any emissions scenario and no best guess.
Even if there were, the projections are based on model results that provide differences of the future climate relative to that today. None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice, and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models.”
In 2007 the problem with the climate models wasn’t that the models were accurate but somehow unreliable even when calibrated – it was that they were nothing like Earth. They were idealised caricatures of what might be: linear extrapolations, fudges and guesswork that had at their heart the SRES family of projections.
Have they improved since then? The paper Willis has highlighted and the Scientific American article linked to by timg56 suggest that even if things have improved in terms of calibrations and replicating the climate properly the model output may not be any more predictive. Even so this was not, at least as late as 2007, a capability provided by the models. All they could do was provide a whiff of credibility for policy makers.

DocMartyn
October 31, 2011 3:13 pm

“Leif Svalgaard
If the ‘model’ is plain curve fitting it may indeed have no predictive capability. If the model is based on sound physics it usually will have predictive capability, unless it is too simple.”
This is what the enzymologists believed; then they found out that even if you knew the rate constants of even simple enzymatic pathways, classical descriptions didn’t work. This led to metabolic control theory, then control theory. The biochemists were of course re-inventing the wheel to a large extent; economists had already made the leap some 50 years earlier. The ideas of control coefficients and elasticity came along with a switch to steady-state kinetic analysis, rather than using near-equilibrium thermodynamics to explain non-equilibrium states.
Hard as you may find it to believe, Leif, complex models that are trained are in fact fits, and are about as useful as a one-legged man at an arse-kicking party.
This has been shown time and again.

Baa Humbug
October 31, 2011 3:40 pm

Willis, Forgetaboutit
http://www.youtube.com/watch?v=Zf0ZyoUn7Vk

kwik
October 31, 2011 3:42 pm

IPCC TAR (Third Assessment Report):
“In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible”
http://www.realclimategate.org/2010/11/are-computer-models-reliable-can-they-predict-climate/

Richard S Courtney
October 31, 2011 3:43 pm

Leif Svalgaard:
Several people have rebutted your assertion at October 31, 2011 at 1:46 pm which said;
“If the ‘model’ is plain curve fitting it may indeed have no predictive capability. If the model is based on sound physics it usually will have predictive capability, unless it is too simple.”
And you have attempted to refute the rebuttals. However, you miss the important point; viz.
No model (of any kind) should be assumed to have more predictive skill than it has been demonstrated to possess.
Assuming undemonstrated model predictive skill is pseudoscience of precisely the same type as astrology. But no climate model has existed for 30, 50 or 100 years and, therefore, it is not possible for any of them to have demonstrated predictive skill over such times.
In other words, the predictions of future climate by
(a) the climate models
and
(b) examination of chicken entrails
have equal demonstrated forecast skill (i.e. none) and, therefore, they are deserving of equal respect (i.e. none).
Another Gareth:
At October 31, 2011 at 3:10 pm you quote something Kevin Trenberth said in 2007: i.e.
“I have often seen references to predictions of future climate by the Intergovernmental Panel on Climate Change (IPCC), presumably through the IPCC assessments (the various chapters in the recently completed Working Group I Fourth Assessment Report can be accessed through this listing). In fact, since the last report it is also often stated that the science is settled or done and now is the time for action.
In fact there are no predictions by IPCC at all. And there never have been. The IPCC instead proffers “what if” projections of future climate that correspond to certain emissions scenarios. etc.”
A decade ago, the same assertion of “there are no predictions by IPCC at all” was put to me by a questioner at a Meeting in the US Congress, Washington, DC. I replied saying;
“Sir, you say the IPCC does not make predictions. The IPCC says the world is going to warm. I call that a prediction.”
The questioner studied his shoes.
Richard

October 31, 2011 4:00 pm

E.M.Smith says:
BTW, love the tinker toy computer. Folks forget that if you can make an AND and OR gate (or even just a NAND gate) you can make a computer out of anything. Even ropes and pulleys. There are ‘hydraulic computers’ for various uses (even in old non-electronic automatic transmissions – of a sort) …

The newer ones are all electronic, but the fuel control units on older jet engines are powered by fuel pressure.

DocMartyn
October 31, 2011 4:17 pm

This is a computer; The MONIAC (Monetary National Income Analogue Computer) also known as the Phillips Hydraulic Computer and the Financephalograph, was created in 1949 by the New Zealand economist Bill Phillips. It is a water based analogue computer which used fluidic logic to model the workings of an economy.
http://en.wikipedia.org/wiki/MONIAC_Computer
It actually worked as well as economic forecasts normally work.

Stephen Goldstein
October 31, 2011 4:31 pm

I don’t object to the effort to build, tune and validate the models . . . modelers should be free to model to their heart’s content.
What I object to is taking action on the basis of predictions from predictive models the accuracy of which has not been demonstrated over even short-for-climate periods. Ooops! That’s wrong!
What I object to is taking action on the basis of predictions from predictive models the inaccuracy of which has been amply demonstrated.
Thus, with apologies to Robert Goodloe Harper, I propose: millions for research, but not one cent for remediation.

Paddy
October 31, 2011 4:32 pm

Willis, I am the guy that suggested in a personal e-mail that you are the Eric Hoffer of climate science. Every post or comment by you that I have read reinforces my opinion. As for this post, all that I can say is “Wow, you sure know how to stir the pot.”
Please continue. The climate science pot needs constant attention.

Legatus
October 31, 2011 4:46 pm

Leif Svalgaard says:
If the ‘model’ is plain curve fitting it may indeed have no predictive capability. If the model is based on sound physics it usually will have predictive capability, unless it is too simple.

Considering what we now know about the climate, such things as the variability of solar UV radiation, the effects of biology on climate, variability of cosmic rays and their possible effect on clouds, and especially the fact that clouds appear to have much more complex effects than the models show, how can current models not be too simple? Some of these factors appear to dwarf the effects of CO2. There is also the fact that we have large data holes and areas where the data is very suspect. And those are just some of the known unknowns…
In addition, I can make a model with perfect physics, but if you allow me to hide my data and my code, I can get you any prediction you like, highest bidder gets it.
Finally, when it comes to CO2 and its effects, what we are primarily concerned with, the increase of downward longwave radiation due to increasing CO2 over time has never actually been directly measured. While the physics seems perfect for the models, until we measure it out there, in the real world, and see whether it is actually increasing or whether some other effects (such as evaporation and clouds, for instance) are overwhelming it, we really don’t have any way to tell if the models’ output is correct. After all, this idea, increasing downward radiation, is the very central idea of AGW. If we could somehow directly observe this, the central idea of AGW, and see if it is in line with the model predictions, then at least we would have something to verify or falsify the central idea behind the models. Right now, we are merely imperfectly and partially measuring what we believe are secondary effects from that. We are merely making grand assumptions that it will work out there the way it works in our models.
Note that one of the big problems is that, indeed, the models were specifically made to show that increasing CO2 has a catastrophic effect on climate. What we need are models that are there simply to understand the climate, not to deliberately try to show the catastrophic effect of just one variable among a great many that affect the climate. We need to stop deliberately trying to prove something and start to do the spadework of understanding the climate enough to be able to prove anything.
In short, the models are, in fact, too simple.

kuhnkat
October 31, 2011 4:51 pm

“The model may have no predictive capability despite being a perfect model. The model may represent the physics of the situation perfectly and exactly in each and every relevant detail. But if that perfect model is tuned to a dataset, even a perfect dataset, it may have no predictive capability at all.”
DANG!!! I missed that one. Of course, since the data set will be limited, and possibly imperfect as well, it will not be able to represent the full range of the possible data and activity/cycles. It will impart a limited range to the model and not an accurate representation of the interactions or functions.

Legatus
October 31, 2011 4:59 pm

It actually worked as well as economic forecasts normally work.

I have seen this headline many times: “economists were surprised today when…”. Predicting the economy is similar to predicting future climate, especially when the climate predictors are not trying to predict what will happen, but what they wish to happen. Basically, when you try to predict the economy, you are trying to predict the actions of a very great many unpredictable people, and one of the chief unpredictabilities is your own biases. There are simply too many variables, and you are one of them, and your variables (biases and wishes) can overwhelm all the others.
Many times, the economists are simply telling the government types what they want to hear. The universities, paid for by the government, are teaching them how to.

Konrad
October 31, 2011 5:03 pm

Models can be very useful tools. I use computer modelling daily in engineering and design. These are models of mechanical structures and they do have useful predictive capability. How useful or potentially misleading they are in part depends on the user. A good user knows many of the models limitations. As someone who has worked on the shop floor, I am well aware of the difference between reality and a simplified model prepared for FEA.
Understanding the limitations of resolution and inaccuracies in initial conditions has led users of weather models to use a Monte Carlo approach to weather prediction with computer models. Multiple runs with intentionally perturbed initial conditions can show if a prediction is robust or if minor differences in the initial conditions will cause widely differing results, and thereby an unreliable prediction.
Climate modelling seems to be a comprehensive collection of all known problems with computer modelling. First the basic physics engine is wrong. The effect of backscattered LWIR is set over 200% too high for 71% of the Earth’s surface. Then water vapour feedback is set to strongly positive and aerosols are used as a tuning knob. Additionally the resolution is poor. Worse, data for initial conditions is limited, and for some climate modelling is itself the output of previous models. Further, the users seem unwilling to accept the limitations of the models. Meteorologists would have called the resulting spaghetti graphs a non-robust result with no predictive value. Climatologists average the mess and call it a “scenario”. By using computer models as a political tool, climate modellers seem determined to destroy all trust in computer modelling.
On the other hand, a cynical person could claim that climate models work perfectly. They have exploited the previous successes in other areas of computer modelling. They look like science and produce authoritative looking graphs. They have successfully generated over 80 billion dollars in funding and could defeat democracy where force of arms has failed. As economic and political tools climate models have been very effective.
Willis’ request for a prediction hiatus may seem reasonable; however, it has long been said: “Never try and explain something to someone whose job depends on them not understanding it.”
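Konrad's Monte Carlo point is easy to demonstrate. Here is a minimal sketch that uses the chaotic logistic map as a stand-in for a weather model (the map, the perturbation size, and the horizons are all invented for illustration):

```python
# Ensemble forecasting sketch: run many copies of a chaotic model from
# slightly perturbed initial conditions and watch the spread grow.
import numpy as np

rng = np.random.default_rng(0)

def run(x0, steps, r=3.9):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

members = 0.3 + rng.normal(0, 1e-6, 50)   # 50 runs, tiny perturbations
for horizon in (5, 15, 40):
    ensemble = np.array([run(m, horizon) for m in members])
    print(f"step {horizon:2d}: ensemble spread = {ensemble.std():.5f}")
# Small spread = robust forecast; spread as wide as the map's own
# climatology = no predictive value at that horizon, as Konrad says.
```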

peter_dtm
October 31, 2011 5:04 pm

Zac says:
October 31, 2011 at 2:37 pm
Ref the use of RN deck log books and ‘met obs’
I asked a long time ago why the 100s of years of Merchant Navy deck log books weren’t being used as a climate data source, along with ‘Met Obs’ (which I think started not long after Radio).
I was trampled on – no quality control – bad readings – unknown errors – the anemometers were wrong; the chronometers were wrong (so the synoptic hours were wrong); the barometers were wrong and not calibrated (ha bloody ha !); the water temperature was wrong ‘cos the bucket wasn’t deep enough (SURFACE temp); temperature was wrong ‘cos of the engine cooling water…. .
I think the responses I got from the AGW advocates that I asked (initially in all innocence) were the final nail in the personal journey of becoming an AGW denier.
I spent too many hours as a ship’s Radio Officer (helping to prepare and) sending OBS to accept being told so many people had spent so much time and effort for nothing – especially as what they claimed was patently wrong. (I remember being complained at by a 3rd mate about ‘bloody sparkies’ always wanting the OBS done properly, not fudged … MET OBS took about 10 minutes to tabulate – 3 minutes to code and anything up to 30 minutes to send – 3 minutes on the key to actually send after 25 minutes of calling & queuing…)
And of course, following Anthony’s surface station exposé, I would bet that ship’s OBS are a hell of a lot more accurate and reliable than many land-based weather stations – even after some of the lousy morse operators had managed to mangle it all!

Chris B
October 31, 2011 5:11 pm

Zac says:
He also went on to say what I posted only a few days ago, that every RN ship has recorded the weather (temp, pressure, wind speed and direction, cloud type etc.), seawater temp, sea state, ship’s position, speed and course on every watch without fail for a very long time, and no doubt other navies have too.
—————
Dammit Zac, you just gave Muller another project to inflate his ego: Berkeley Aggregate Naval Observations (BEANO). For non-UK readers, the BEANO was a kids’ comic in the 1950s. Muller’s BEST project is about as useful but less entertaining.

Leif Svalgaard
October 31, 2011 5:12 pm

DocMartyn says:
October 31, 2011 at 3:13 pm
Hard as you may find it to believe, Leif, complex models that are trained are in fact fits, and are about as useful as a one-legged man at an arse-kicking party.
This has been shown time and again.

I think that is what I have been saying: mere fits or training won’t work.

October 31, 2011 5:39 pm

Final line in the paper:
“Our concern is that if we cannot successfully calibrate and make predictions with a model as simple as this, where does this leave us when our models are more complex, have substantive modelling errors, and we have poor quality measurement data.”
To say climate models are “more complex” than this little reservoir model would be the understatement of all time. The climate would be tens of orders of magnitude more complex. Throw in substantial modeling errors & poor data quality & you see where we are today.
I think another key statement is that calibrated models do have some utility over some period of time. We have all seen this – GFS (or choose your favorite medium-range model) usually is pretty good days 1-3, but get out to day 10 & beyond and its predictive capability has been severely diminished. So the question is: what is the utility period for the current set of climate models? It would be interesting to do a similar study to this with various subsets of past data & see how long the utility period is – I am guessing it is substantially less than the 100 yrs commonly seen on IPCC graphs.
Another interesting statement is that the authors clearly see the connection to climate models here with a direct reference to them at the start of the paper.

October 31, 2011 6:13 pm

What a great picture choice.
People have to keep in mind that the outcome of a computer program is in fact a mechanical outcome. In other words, the outcome is ALWAYS going to be the same with the same given set of inputs. This is no different than math and how the math outcome is fixed for a given set of numbers.
The fixed outcome includes the “random” number generator code that runs when you hit the random song button on your MP3 player or CD player. The computer code that runs on that music device uses the time as a seed. If you hit the random button such that the internal clock value used for seeding the code is exactly the same, then the random sequence will ALWAYS be the same EVERY time. In other words, the outcome is fixed based on math.
In the physical world (nature) there is no such thing as a random event, but only events in which we don’t have quality instruments to know the inputs. A particle moving through space does not have a staff meeting and decide to change its direction without a CAUSE.
And quantum mechanics does not change this issue one bit. Simply put, objects cannot move themselves. In other words, if we know the weight, velocity (vectors) and all inputs of a baseball when hit, then we can determine the outcome using math. The SAME math will ALWAYS give the same answer for a given set of input numbers. The baseball will do the same thing for the same given set of inputs.
However, because we don’t have the input numbers at the quantum level, or even in the simple case of a baseball, we simply revert to a statistical math answer. We thus state that we have “X” amount of certainty that the baseball will land inside of the ball stadium 90% of the time. Such estimates are fine and legitimate math for these cases, as that’s all we have to go on.
However this does not mean the particle or baseball is not subject to a given set of laws and a pre-determined math outcome.
There is no demonstrable experiment or observation that shows or proves ANY kind of random event in which objects change their course without a cause.
So it is fitting to keep in mind that nature, electronic computers or that mechanical tinker toy computer are subject to a fixed set of laws and math and the outcome is pre-determined in a mechanical way for a given set of math inputs.
Albert D. Kallal
Edmonton, Alberta Canada
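The shuffle-button point can be shown in a few lines of Python (a generic pseudo-random generator illustration, not any particular player's firmware):

```python
# Same seed in, same "random" sequence out, every time: a PRNG is pure
# arithmetic, so the outcome is fixed by its inputs.
import random

for attempt in (1, 2):
    random.seed(1234)            # pretend we hit shuffle at the same clock tick
    order = list(range(10))      # ten "songs"
    random.shuffle(order)
    print(f"run {attempt}: {order}")
# Both runs print the identical shuffle order.
```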

JJB MKI
October 31, 2011 6:27 pm

You say projections,
I say predictions,
Let’s call the whole thing off..

RDCII
October 31, 2011 6:36 pm

Willis, given the astonishing number of things you have done in life, it wouldn’t surprise me one bit to hear that you’d been a Steamboat navigator. 🙂

Rattus Norvegicus
October 31, 2011 6:39 pm

Willis, in case you haven’t noticed, most model studies are aimed at gaining understanding. But part of that understanding comes from comparing model predictions of what we should be seeing with data collected in the real world.

Roger
October 31, 2011 6:41 pm

See http://www.oldweather.org for an on-going project using volunteer efforts to compile a computerized database of weather data found in old ship logs (UK).

Leif Svalgaard
October 31, 2011 6:51 pm

Rattus Norvegicus says:
October 31, 2011 at 6:39 pm
Willis, in case you haven’t noticed, most model studies are aimed at gaining understanding. But part of that understanding comes from comparing model predictions of what we should be seeing with data collected in the real world.
And if the model predictions match reality, we have not learned anything new. Only when there are discrepancies can we learn something.

JJB MKI
October 31, 2011 6:56 pm

@Rattus Norvegicus on October 31, 2011 at 6:39 pm:
Speak up, you appear to be mumbling.

Ian W
October 31, 2011 6:57 pm

If models were so very good, there would be no role for test pilots.

Sun Spot
October 31, 2011 7:01 pm

If I build a model of the Titanic I would have to make it sink (tuned to known events). If the Titanic had not hit the iceberg and not sunk, I would have had to build my model not to sink (and would I model it to survive WW1?).
Is my ‘Titanic model’ curve fitting or is my the model is based on sound physics and would my Titanic models really have had any predictive value ?

ferd berple
October 31, 2011 7:15 pm

Leif Svalgaard says:
October 31, 2011 at 1:46 pm
If the model is based on sound physics it usually will have predictive capability, unless it is too simple.
That is the illusion of Victorian era physics. Physics has limited predictive skills for the future even when dealing with very simple problems. For example, Gravity can’t predict the orbits of 3 or more objects, except in very special cases.
There is a fundamental reason for the unexpected results that Willis outlined. Most of us assume the world follows the Victorian era view of physics: that tomorrow is just like today, but one day removed, and thus our laws of physics that work today should apply to tomorrow as well.
However, quantum mechanics tells us that this view of the world is fatally flawed. Tomorrow does not exist as a “place”. Tomorrow exists as a probability only, which introduces uncertainty into all attempts to predict the future.

Sun Spot
October 31, 2011 7:22 pm

That above should read.
Is my ‘Titanic model’ curve fitting or is my model based on sound physics and would my Titanic model(s) really have had any predictive value ?

Cherry Pick
October 31, 2011 7:23 pm

People don’t expect that: “The model may have no predictive capability despite being a perfect model. The model may represent the physics of the situation perfectly and exactly in each and every relevant detail. But if that perfect model is tuned to a dataset, even a perfect dataset, it may have no predictive capability at all.”
Let’s have an example: the lottery machine. The physics and data are known but it is very difficult to forecast the lottery numbers. The runs are not repeatable – you get different results. That is what chaos theory is talking about.
Is this what the paper is referring to?

ferd berple
October 31, 2011 7:34 pm

http://www.scientificamerican.com/article.cfm?id=finance-why-economic-models-are-always-wrong
At a minimum, climate models should be subject to sensitivity analysis as outlined in this article in Scientific American. If the models truly are predictive, they should show very low sensitivity to small input errors. However, if they show high sensitivity to small input errors (chaotic behaviour), then they can never hope to be predictive, because there are always small errors in the source data (e.g. temperature records) that drive the models. (Willis’ third unexpected result.)
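Here is a minimal sketch of the sensitivity test ferd proposes, using a toy iterated map rather than a GCM (the model, the horizon, and the size of the input error are all invented for illustration): perturb one input by a tiny amount and see whether the output moves by a comparably tiny amount.

```python
# One-at-a-time sensitivity test: a well-behaved model damps a small
# input error; a chaotic one amplifies it to order one.
def toy_model(x0, steps, r):
    """Logistic map: settles to a fixed point for r=2.5, chaotic for r=4.0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

eps = 1e-9  # a "small input error", e.g. in a temperature record
for r, label in ((2.5, "well-behaved"), (4.0, "chaotic")):
    base = toy_model(0.2, 60, r)
    perturbed = toy_model(0.2 + eps, 60, r)
    print(f"{label:12s}: input error {eps:.0e} -> "
          f"output error {abs(perturbed - base):.2e}")
# The chaotic case blows the 1e-9 input error up to order one; no amount
# of tuning can make such a model predictive at that horizon.
```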

DocMartyn
October 31, 2011 7:38 pm

Leif do you believe the various climate models to be reasonably physics based mathematical simulations or are they highly trained fits that contain little physical information?
Pick one or the other Leif.

Rattus Norvegicus
October 31, 2011 7:40 pm

Leif, you are correct in this, to an extent. Negative results sometimes result in improvements to the measurement systems, and sometimes they result in improvements to the models.
The most interesting negative result (which we might be seeing at the LHC) is the lack of a Higgs boson. That would be really, really interesting.

steven mosher
October 31, 2011 7:42 pm

“For example, none of them predicted the current 15-year or so hiatus in the warming.”
Actually several did predict this and some predicted less warming.
the MEAN of all the models is another matter.
A model is a set of equations used to describe reality. F=MA is a model.
All physical laws are ‘models’ of reality and not reality themselves.

Theo Goodwin
October 31, 2011 7:57 pm

Leif Svalgaard says:
October 31, 2011 at 1:46 pm
“If the ‘model’ is plain curve fitting it may indeed have no predictive capability. If the model is based on sound physics it usually will have predictive capability, unless it is too simple.”
Leif, you cannot get away with this hand-waving forever. What do you mean when you say that the model is based on sound physics? My guess is that you mean that the people who constructed the model understand the relevant physics. But that claim only brings up the additional question: How do they get the physics into the model? That question does not have a hand-waving answer. One answer might be that they have actually programmed the statements that make up a physical theory into a deductive engine. That would put a physical theory into the model. But you cannot have done that, because if you had, then you could exhibit the set of statements that make up the physical theory; after all, you just programmed them into the model. But you cannot exhibit those statements, Leif, because you do not have them. No climate scientist has them. In fact, the reason that you are using a model is that you have not been able to formulate your piece of climate science as a physical theory. Having no physical theory by which to judge your model, you have no idea whether your model amounts to a physical theory at all, and no clue whether it has predictive ability.
Now, tell me where I am wrong.

Leif Svalgaard
October 31, 2011 7:57 pm

ferd berple says:
October 31, 2011 at 7:15 pm
For example, Gravity can’t predict the orbits of 3 or more objects, except in very special cases.
Of course we can, see: http://ssd.jpl.nasa.gov/?horizons which predicts with high accuracy the orbits of a system consisting of 568670 asteroids, 3113 comets, 171 planetary satellites, 8 planets, and the Sun.
DocMartyn says:
October 31, 2011 at 7:38 pm
Leif do you believe the various climate models to be reasonably physics based mathematical simulations or are they highly trained fits that contain little physical information?
Pick one or the other Leif.

Since I know what goes on, I must pick the first choice. They are certainly not highly trained fits. That the models don’t perform well is another matter that people are working very hard on.
Whether or not the models will ever work well enough to be useful is not known at this time.

Theo Goodwin
October 31, 2011 8:20 pm

steven mosher says:
October 31, 2011 at 7:42 pm
On this topic, you are hopeless. “F=MA” is an abbreviation for a law of physics. The statement is either true or false. In the form presented here, it is not rigorously stated and so is just an abbreviation.
You can create a model of our solar system. The model can be created out of physical objects, created on a computer, or created in some other fashion. If it is a good model it will have exactly the same behavior as our solar system. Note that the model is not true or false and, in fact, says nothing about our solar system. It need not be a set of statements at all.
How do we know that a model is a good model? We have a physical theory that enables us to predict the behavior of our solar system. If that physical theory predicts the behavior of our model with no conflicts then we know that our model of our solar system is as good as a model can get, given the existing physical theory. What is the moral of this story? The only way to judge the quality of a model is by reference to a physical theory that can predict all the behavior of the model without conflicts.
The very idea of a model in the physical sciences makes no sense whatsoever except by reference to some physical theory. No judgments about the quality of a model can be made except by reference to some physical theory. There is another moral.
If our model of our solar system turns out to be a perfect model by reference to our existing physical theory, roughly the Big Bang Theory, the model still cannot be used for prediction. The model is not a set of statements and declares nothing about our solar system. A prediction of a future event is one or more statements that will prove to be true or false at some future time. A statement that formulates a prediction can be obtained only from other statements that imply it, namely, our physical theory plus some statements of initial conditions that we have specified. Models are not statements and imply no statements and, for that reason, cannot be used for prediction.
Could I make it any clearer, any starker, any more black and white? To claim that a model can be used for prediction is what upperclass Englishmen call a “category mistake.” Such a mistake is like reporting that your tour of the campus at Berkeley revealed some beautiful buildings but you saw nothing that could be called a university. (If you don’t get the punch line, the category mistake is thinking that a university is something like a building. Can’t see the forest for the trees.)

Leif Svalgaard
October 31, 2011 8:22 pm

Theo Goodwin says:
October 31, 2011 at 7:57 pm
How do they get the physics into the model? That question does not have a hand-waving answer. One answer might be that they have actually programmed the statements that make up a physical theory into a deductive engine. […] Now, tell me where I am wrong.
Physical systems are controlled by a [sometimes large] set of coupled differential equations. Given the equations and an initial set of values, the equations can be integrated forward in time. A good example is [the simple] physical system consisting of 568670 asteroids, 3113 comets, 171 planetary satellites, 8 planets and a star that make up our solar system. This is where you are wrong.
If not all of the boundary conditions are known or the processes are poorly understood, the model will not perform as well as JPL’s calculation of orbits in the solar system. This is a condition for the climate system that might [and probably will] change as time goes on. The principle is clear, though, and must be understood.
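For readers wondering what integrating the equations forward in time looks like in practice, here is a minimal sketch: one planet around one star in scaled units, stepped forward with a leapfrog integrator. This is an illustration only, not JPL's Horizons code.

```python
# Physics in, trajectory out: integrate Newtonian gravity (GM = 1)
# forward from initial position and velocity.
import numpy as np

def accel(pos):
    """Inverse-square acceleration toward a star at the origin."""
    r = np.linalg.norm(pos)
    return -pos / r**3

pos = np.array([1.0, 0.0])          # initial values: circular orbit,
vel = np.array([0.0, 1.0])          # radius 1, period 2*pi
dt = 0.001

for _ in range(int(2 * np.pi / dt)):    # leapfrog (kick-drift-kick)
    vel += 0.5 * dt * accel(pos)
    pos += dt * vel
    vel += 0.5 * dt * accel(pos)

print(f"after one period: position = {pos}")   # back near (1, 0)
```

With the physics and the initial values specified, the future state follows; the difficulty Leif describes for climate is that neither the boundary conditions nor the processes are known anywhere near this well.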

October 31, 2011 8:26 pm

>quantum mechanics tells us that this view of the world is fatally flawed.
No, that is not what it tells you. Because you don’t know where the baseball is going to land is NOT an excuse to tell me the laws of physics don’t apply to that baseball.
As I pointed out, there is nothing wrong with using statistics to tell me that the baseball is going to land 80% of the time in the baseball park. However, we are doing this because we DO NOT have the ability to gather the needed information to make a precise math outcome. So this is not a proof that such a perfect math and rule-based outcome does not exist; it simply means we don’t have that information (a very big difference here).
In other words because you don’t know where the ball is going to land is not a logical argument that the ball is not going to land in a particular spot based on a set of rules.
There is NO experiment of ANY kind that shows or proves that such outcomes are not based on cause and effect. Given a correct set of inputs, the outcome WILL ALWAYS be the same. This is no different than telling me that one day you add up some numbers, and another day the same numbers will add up differently. They are always going to be the same.
Objects do not move or change their behavior without a cause or a force acting upon them. Objects do not move themselves.
So simply not having the ability to measure the outcome is not an excuse or logical reason to toss out causality, and quantum mechanics does not change or alter this fact one bit.
>The physics and data are known but it is very difficult to forecast the lottery numbers. The runs are not repeatable – you get different results. That is what chaos theory is talking about.
If you repeat the experiment with the same input values, the outcome will be the same EVERY TIME. There is not some chaos here; there is simply not the means to gather the input values to plug into the laws of physics. If you have all of the correct input values when the “lottery button” is pressed, then you can most certainly determine the outcome, and it will be the SAME every single time based on the same input values.
The idea that we don’t know where the baseball is going to land is NOT an excuse or proof that we throw out the laws of physics and that the lottery balls don’t follow a set of math and rules here.
The baseball or lottery balls don’t think as they fly through the air, but are subject to a set of rules based on math. The outcome of math and numbers are the same every time for a given set of input numbers.
The balls or particles in quantum mechanics ALSO adhere to this rule, and there is no demonstrable experiment that shows otherwise. Not being able to gather the correct inputs is not an excuse or proof that these things don’t follow a given set of rules. Using “stats” as we do in quantum mechanics is simply done because we don’t have the ability to get the input values we need.
Albert D. Kallal
Edmonton, Alberta Canada

anticlimactic
October 31, 2011 8:41 pm

The second result is pretty much the definition of chaos: you can know everything about a system and still be unable to predict the future. Chaos theory was developed to describe the impossibility of accurate long-term weather forecasting [longer than 3 days!].
With the climate change models it is equivalent to measuring a sine wave from the trough to the steepest slope and then projecting that as a trend! As the sine wave flattens out you just ignore it, or claim it is just temporary, or claim there is a counter effect, etc. What you don’t do is admit you are wrong, ever!
Normally when people favour fantasy over reality they are regarded as insane, but when you get huge amounts of money for this behaviour, the accusation of insanity moves to the funders: governments. But then you realise rich friends of the governments can make huge amounts of money ‘fighting climate change’. So there is no insanity – just a gullible public.
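anticlimactic's sine-wave point in a minimal sketch (all numbers invented for illustration): fit a straight line to the segment running from the trough up to the steepest slope, then project the trend forward.

```python
# Fit a linear trend to one rising quarter-cycle of a sine wave, then
# extrapolate it and compare with what the wave actually does.
import numpy as np

t_fit = np.linspace(-0.25, 0.0, 50)       # trough up to steepest slope
slope, intercept = np.polyfit(t_fit, np.sin(2 * np.pi * t_fit), 1)

for t in (0.25, 0.5, 0.75):
    projected = slope * t + intercept
    actual = np.sin(2 * np.pi * t)
    print(f"t={t}: projected trend {projected:+.2f}, actual {actual:+.2f}")
# The projection climbs forever; the real signal turns over and comes
# back down. A trend fitted to part of a cycle says nothing about the cycle.
```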

Frank K.
October 31, 2011 8:45 pm

One way to come to some conclusion about the accuracy of GCMs is to write down all of the differential equations, and their boundary/initial conditions used in these models so we can make an assessment. I’ll wait for someone to do this for the purposes of our discussion in this thread. …Take your time…
(PS – It would also be useful to write down all of the numerical algorithms used to solve the coupled, non-linear, partial differential equations …again, take as much time as you need … I’ll wait).

Rational Debate
October 31, 2011 8:52 pm

Actually Leif, according to NASA, at least the last I can find and recall reading, while they don’t discount the possibility of some emission from Pioneer causing the slowdown, they suspect it’s more likely that Voyager is experiencing the same effect as Pioneer – only the many attitude adjustment thrusts are likely covering it up.
http://www.space.com/448-problem-gravity-mission-probe-strange-puzzle.html
http://science.nasa.gov/science-news/science-at-nasa/2006/21sep_voyager/
http://www.guardian.co.uk/science/2002/feb/28/physicalsciences.research

Rational Debate
October 31, 2011 8:58 pm

re: Leif Svalgaard & Kev-in-Uk says: October 31, 2011 at 2:45 pm
I can’t help but think of the Heisenberg Uncertainty Principle.
Besides, what does a sound physics based model have to do with AGW, Climate Models, etc? What are the sound physics that allow modelling of biota’s interaction with climate, especially on a global scale? Same re: clouds. Cosmic and solar radiation? ENSO, AMO, PDO, NAO, etc? Or any number of other major variables involved in forming climate trends?
While I’m sure that there are sound physics involved in all of these things, I’m also quite certain that humans don’t even begin to know enough to put it all together such that a “sound physics model” can be generated to model climate with any meaningful accuracy. Which is what this article is about, after all.

NW
October 31, 2011 9:01 pm

From p. 197 of the article: “The very large spike, with h roughly 10, corresponds to the truth case. We can also see notable local optima with 0 < h < 8, 30 < h < 38 and 40 < h < 45. The global optimum has a small basin of attraction around it and has proved difficult to identify in previous work[2]; the easiest optimum to find has been the one with 30 < h < 38. The rather noisy structure of the objective surface is largely an artifact of the way that kg is sampled."
Now I'm not a physicist. But I am an experimenter and statistician. When I read this, it sounds to me like the problem here is an artifact of: (1) the estimation method (a genetic algorithm); (2) the data sampling interval; and/or (3) the nature of the model itself, which gives rise to an irregular objective function in the parameters.
If I were leading a group of grad students, I would be asking them questions like these. Suppose we sample this at a much higher rate; would that help? Suppose we use a different estimation algorithm; would that help? Suppose we don't worry about estimating one of these parameters, but instead use some outside estimate (or prior information); would that help?
Or, put differently. I frequently tell students that 90% of the statistical thought should have happened before you ever started collecting data (or running an experiment if you are lucky enough to be able to). You should already have simulated your model and asked: What is a good sampling plan? How many observations do I need, and at what interval? If I have a lot of confidence in some prior information, should I incorporate that, and how?
I don't see much of this kind of thinking in the article Willis cites. Perhaps that is irrelevant to the evaluation of climate-science business-as-usual. But I would hate for people to take away the wrong message from an example like this one. In this example, the experimental design is given, and the poor message is perhaps as much a result of that as of the underlying physics (or whatever). This is, to my mind, a good example of what can happen when you don't think carefully about the design of experiments… at least as much as an example of the fundamental uselessness of models.
Just sayin.
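For what it’s worth, NW’s point about estimation landing in local optima can be sketched in a few lines of Python. The objective below is a made-up stand-in for the paper’s noisy surface in the throw parameter h, not the actual model:

```python
import numpy as np
from scipy.optimize import minimize

# A toy objective: one broad quadratic trend plus cosine ripples, so
# local optimization from different starting guesses gets trapped in
# different basins of attraction.
def objective(h):
    h = float(np.atleast_1d(h)[0])
    return (h - 10.0) ** 2 / 50.0 + 2.0 * np.cos(h)

for start in (0.0, 10.0, 34.0, 42.0):
    res = minimize(objective, x0=[start], method="Nelder-Mead")
    print(f"start={start:5.1f} -> h={res.x[0]:6.2f}, objective={res.fun:6.3f}")
```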

Rational Debate
October 31, 2011 9:05 pm

re: Leif Svalgaard says: October 31, 2011 at 3:05 pm

Jeremy says: October 31, 2011 at 2:56 pm
So, to apply this to the previous topic then, are we sure all physics in their first and second order influences are included in any model of earths climate?

We are not sure, of course, but the remedy is to keep trying to improve them.

I sure agree there – they ought to keep trying to improve them. But in the meantime, it seems pretty clear that the models aren’t anywhere close to accurate, and that there are reams of interactions occurring that they don’t understand yet or know well enough to model. So long as the uncertainties are so massive, it sure seems to me that repeated proclamations of how the world will be in dire straits in 100 years if we don’t massively change our behavior, standard of living, etc., all based on those highly questionable and clearly flawed model predictions, are beyond absurd and far into the realm of doing serious harm to many people.

Jaye Bass
October 31, 2011 9:21 pm

RE: Leif Svalgaard
Define “sound physics” in terms of a complete, validated and verified, system model for the climate. Get back to me when you are finished.

October 31, 2011 9:22 pm

steven mosher says:
“… F=MA is a model”.
More to the point, it is a Law.
Climate models can’t accurately predict. They are BEST at generating taxpayer loot. But as for accurate predictions… nope. Sorry. But that’s the truth.

Jaye Bass
October 31, 2011 9:26 pm

I’m betting that F=MA is not, strictly speaking, perfectly correct.

October 31, 2011 9:35 pm

Rational Debate says:
October 31, 2011 at 8:52 pm
Actually Leif, according to NASA, at least last I can find and recall reading
Those were old news. Here is more up-to-date stuff http://planetary.org/programs/projects/pioneer_anomaly/
http://arxiv.org/PS_cache/arxiv/pdf/1107/1107.2886v1.pdf
Rational Debate says:
October 31, 2011 at 8:58 pm
I can’t help but think of the Heisenburg Uncertainty Principle.
Does not apply to macroscopic systems.
While I’m sure that there are sound physics involved in all of these things, I’m also quite certain that humans don’t even begin to know enough to put it all together such that a “sound physics model” can be generated to model climate with any meaningful accuracy
The people who do this for a living think otherwise and I will agree with them in principle. That the models may not perform well yet, should not stop us from trying very hard to improve them.

John Blake
October 31, 2011 9:37 pm

Why is there never a mention of Edward Lorenz’s seminal Chaos Theory (1960) allied to Benoit Mandelbrot’s Fractal Geometry (c. 1974)? Good lord, from Newton’s “three-body problem” on down, physical science has known that any and all “complex dynamic systems” –those with three or more mutually interacting variables– are in principle non-random but indeterminate, self-similar on every scale.
Despite cycling in context of over-broad parameters, non-linear processes are by nature effectively random-recursive/stochastic, meaning that chance-and-necessity in combination are forever beyond forecasting ken. “No-one is expert on the future,” nor indeed could so-called experts ever agree on their prognostications if they were, for nothing is ever fixed or given: “No world is beyond surprise.”
In AGW contexts, no credentialed practitioner of integrity could possibly deny these elemental constraints on even short-term, never mind centuries-long, projections. To knowingly pretend otherwise means the Green Gang comprises not fools but rather charlatans or knaves; in all too many cases, both.

October 31, 2011 9:40 pm

Jaye Bass says:
October 31, 2011 at 9:21 pm
Define “sound physics” in terms of a complete, validated and verified, system model for the climate. Get back to me when you are finished.
You wouldn’t understand the physics anyway, nor the computer code, so what would be the point?

October 31, 2011 9:47 pm

Jaye Bass says:
October 31, 2011 at 9:21 pm
Define “sound physics” in terms of a complete, validated and verified, system model for the climate. Get back to me when you are finished.
You can prove me wrong by reading and understanding Jacobson’s textbook describing the physics: http://www.stanford.edu/group/efmh/FAMbook/FAMbook.html

October 31, 2011 9:54 pm

Jaye Bass says:
October 31, 2011 at 9:21 pm
Define “sound physics” in terms of a complete, validated and verified, system model for the climate. Get back to me when you are finished.
Updated presentation at: http://www.stanford.edu/group/efmh/FAMbook2dEd/index.html
and ‘come back to me when you are finished’.

Rational Debate
October 31, 2011 10:25 pm

re: steven mosher says: October 31, 2011 at 7:42 pm

“For example, none of them predicted the current 15-year or so hiatus in the warming.”
Actually several did predict this and some predicted less warming.

Would you please show me where that is the case, because I don’t see it in the IPCC 2001 synthesis report figures: http://www.ipcc.ch/ipccreports/tar/wg1/figspm-5.htm

RACookPE1978
Editor
October 31, 2011 10:25 pm

Thank you for the link.
OK. That book has been ordered from Amazon. (Stanford.edu apparently doesn’t let you order dead-tree versions of the whole volume?)
I need a reference for the climate circulation models – Who has written the best? (Yes, I saw Trenberth’s edition in the Amazon page – but I don’t trust his judgement nor his accuracy.)

Cherry Pick
October 31, 2011 10:28 pm

Albert D. Kallal says:
October 31, 2011 at 8:26 pm
“The idea that we don’t know where the baseball is going to land is NOT an excuse or proof that we throw out the laws of physics and that the lottery balls don’t follow a set of math and rules here. ”
No it is not, but that is not the issue.
Both physics and computer programs are deterministic: same inputs give same outputs. But models are simplifications of reality, so the output of reality and the output of the model do not match. Real computer programs have limited precision in data storage and calculations. Real-life physics can’t be expressed by simple equations. You can’t have indefinitely precise measurements of the input parameters either. When you run your model for thousands of steps, the errors accumulate.
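Cherry Pick’s point about finite precision compounding over many steps can be shown with a toy iteration; the logistic map below is a stand-in, not a climate model:

```python
import numpy as np

# The same deterministic rule iterated in two floating-point precisions;
# the tiny rounding difference per step compounds until the answers
# disagree completely.
x32, x64 = np.float32(0.4), np.float64(0.4)
r = 3.9  # chaotic regime of the logistic map

for step in range(1, 61):
    x32 = np.float32(r) * x32 * (np.float32(1) - x32)
    x64 = r * x64 * (1.0 - x64)
    if step % 15 == 0:
        print(f"step {step:2d}: float32={x32:.6f}  float64={x64:.6f}")
```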

Jeff D
October 31, 2011 10:47 pm

Cracks me up. We can’t even measure temperature properly. Why would we think that a model could even come close for long-term predictions? Is it worth the effort to try? Yes, but don’t confuse this with reality and try to convince me the end of the world is nigh. Just last Tuesday the models said that it was going to be clear and sunny skies for the next 5 days. It was raining on Thursday. Most of the time they get pretty close, but even short term it is so chaotic as to be almost useless.
Speaking of proper measurements. How accurate are the CO2 readings say for the last 20 years?

RACookPE1978
Editor
October 31, 2011 10:54 pm

Willis Eschenbach says:
October 31, 2011 at 8:56 pm
(Replying to)
steven mosher says:
October 31, 2011 at 7:42 pm

… F=MA is a model.
Indeed it is. However, it is not an iterative model, as climate models are. More to the point, it is also not a “tuned” model.
As a result, the situations are far from the same, far enough that an iterative tuned model has very different limitations from a “model” of the “F=MA” type.

I’m going to disagree with both of you in this way: F=MA is a valid model. FEA models of stresses and strains, and engineering “models” of movement and distortion, are used thousands of times a day – and they are accurate representations (simplifications) of the items they are trying to model.
BUT it is a correct application only with the approximations and simplifications that we (everybody!) need to use to make the finite element analysis (FEA) “work” in the simulated worlds of perfect solids and exact-to-the-last-degree computer models used in engineering FEA work.
You can’t make an exact FEA model of the “real world” flaws and stress risers and lattice holes and stresses and crystal structure inside an actual casting. You HAVE to make a simplified “perfect” material with a “perfect” geometry as an FEA starting point. THEN you apply the stresses and strains as the metal (or plastic or composite or liquid) moves and reacts to the stresses being studied.
But you have to begin with the assumptions of the geometry. To be accurate, your F=MA model must be simulated with near-identical, symmetric small equal-sided lattices and nodes.
Your boundary conditions need to be accurate and match the real world you are trying to simulate.
Your transfer equations from node to node (across the “planes” of each cube to the next cube) MUST be accurate in every detail. Assumptions and simplifications must be understood and “tested” (or validated) back against the real world.
The model “edges” (sharp points, corners, radii, chamfers, fillets, holes, dividing lines or planes) must be as accurate as possible: The model will respond at these sharp edges (changes) and will exaggerate the problems actually found there.
Every FEA model running every different FEA software running on every different computer worldwide is REQUIRED to come out with the same results from the same input conditions.
The F=MA model is applied worldwide, and we get back the same results every time it is run worldwide.
THEN we get the same (within experimental accuracy!) results when every F=MA “test” is made using real materials and real crystals and real mechanical and thermal and fluid-flow tests.
But none of these are applied to the CAGW favored “models” of the climate.
They start from 0,0,0,0,0,0,0 conditions (x,y,z,Temp,time,pressure,humidity,etc) then let them run for years to see if the result approximates the world’s climate and temperatures.
They don’t “rotate” the sun to simulate night and day and the changing distance to the sun.
They don’t “rotate” the simulated earth to generate Coriolis effects on winds and ocean currents and jet streams.
They don’t model coasts and islands and mountains and local surface conditions.
Their “cells” are huge – but don’t change to match the poles as the curvature closes in.
Their “cells” are huge – but they are very, very thin with respect to height against width and depth.
Their “cells” are huge – but even these slices of atmosphere are too coarse in height (altitude) to approximate the actual changes in pressure, temperature, air flow and clouds w/r to altitude.
Their “cells” are huge – but still too large to even simulate or approximate one hurricane per cell.
There are some 23 different models, and every model is “averaged” after thousands of runs – and “bad” results get thrown out based on the prejudices of the climate “team”. Even so, every result is known to be different, with each model starting from different assumptions and slightly different parameters and using slightly different .. So, 22 of the 23 models are wrong, even if they come out with the same result. (But they don’t come out with the same result even when run twice in a row with the same input.)
But we don’t know which of the 23 is closest to being “not too wrong.”

Rational Debate
October 31, 2011 10:54 pm

re: Leif Svalgaard says: October 31, 2011 at 9:35 pm

Those were old news. Here is more up-to-date stuff http://planetary.org/programs/projects/pioneer_anomaly/ http://arxiv.org/PS_cache/arxiv/pdf/1107/1107.2886v1.pdf

Thanks for the updated info Leif, interesting reading.

Rational Debate says: October 31, 2011 at 8:58 pm
“I can’t help but think of the Heisenburg Uncertainty Principle.”
Does not apply to macroscopic systems.

Of course not – but your statement was: “If the model is based on sound physics it usually will have predictive capability, unless it is too simple.” In other words, you didn’t specify macro vs. micro; you simply said models based on sound physics. Who couldn’t help but think of the Heisenberg UP with a statement like that?

Rational Debate said: “While I’m sure that there are sound physics involved in all of these things, I’m also quite certain that humans don’t even begin to know enough to put it all together such that a “sound physics model” can be generated to model climate with any meaningful accuracy”
The people who do this for a living think otherwise and I will agree with them in principle. That the models may not perform well yet, should not stop us from trying very hard to improve them.

That they don’t perform well simply proves my statement: we’re not there yet. The multiple papers published frequently, finding new unexpected major effects, are further evidence. What is the accurate physics on soot? On natural variability from ENSO, AMO, PDO, NAO, etc.? On cosmic ray effects? On clouds wrt positive, negative or neutral forcing? Aerosols? It wasn’t that many years ago when they discovered that biota produce aerosols under certain conditions, aerosols that affect cloud cover… the list goes on and on. All of these things show that currently we don’t begin to know enough to create a sound physics model of climate. That said, again, I totally agree with you that scientists ought to keep trying to improve what we do know with regard to these issues.
Frankly, I believe the funds expended would serve far more purpose (including producing usefully accurate climate models sooner) if put towards understanding these underlying variables, especially naturally occurring variability and the basic physics behind known issues such as soot, rather than towards generating climate models that rest on so many gross assumptions and are intended primarily to predict, er, project future climate. Particularly considering that the temperature changes we’ve seen over the last 60 years or so don’t even begin to break out of the null hypothesis, e.g., what we know to be natural variability that has occurred during this interglacial.

October 31, 2011 11:00 pm

Leif writes : “If the model is based on sound physics it usually will have predictive capability”
Parameterisation is fitting, not “sound physics”. You might argue that the frequency at which the parameterisation changes adequately represents sound physics, but where is your evidence?

Gary Hladik
November 1, 2011 1:25 am

timg56 (October 31, 2011 at 2:15 pm), thanks for the Scientific American reference. I found that easier to understand than the original article.

londo
November 1, 2011 1:27 am

This post reminds me of high-school math (or was it early college) about interpolation, extrapolation and polynomials. Through any set of n points one can draw an (n−1)-order polynomial, and there’s even an explicit expression for it, the Lagrange interpolation formula (if I remember correctly). It will have no error in this context but NO predictive value, i.e. it will be totally useless for extrapolation. However, if one knows something about the physics behind the data, one can allow interpolation errors, lower the degree of the interpolation polynomial, and retain some predictive value in the model. Better yet, one can fit the data to a function that has something to do with the physics and have even more predictive power (but that’s another story).
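londo’s interpolation point, as a quick numpy sketch; sin(x) stands in for the unknown physics behind the data:

```python
import numpy as np

# Exact interpolation versus extrapolation: a degree-(n-1) polynomial
# through n samples of sin(x) has zero error at the sample points...
x = np.linspace(0.0, 10.0, 8)
coeffs = np.polyfit(x, np.sin(x), len(x) - 1)

# ...interpolates tolerably between them, and fails badly outside them.
print("inside [0,10], at 5.3:", np.polyval(coeffs, 5.3), "vs true", np.sin(5.3))
print("outside, at 12.0:     ", np.polyval(coeffs, 12.0), "vs true", np.sin(12.0))
```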

Steve C
November 1, 2011 1:39 am

Point as nicely made as ever, Willis – and thanks indeed for introducing me to the Tinkertoy computer, which looks beautifully eccentric. May I also recommend, for those who appreciate such baroque machinery, Tim Robinson’s Meccano Computing Machinery website at http://www.meccano.us/ … the videos of his things working are among the few things which are, beyond all argument, awesome. Dunno that he’s done a lot of useful computing on them, though!
[Meccano is/was a constructional toy, rather more popular in the 50s and 60s than in an age when kids grow up knowing they won’t be doing any real engineering, ever.]

Frank
November 1, 2011 2:03 am

Willis,
I’d say this only shows that their objective functions are not suitable.
I find it rather weird that a calibration on a seven-month period (Fig. 2b, Equation 2) gives a better-defined optimum calibration result than a calibration on a 36-month period (Fig. 2a, Equation 1). You’d expect 5 times more data to result in a less noisy objective-function landscape, not the other way around.
This point is only made clearer when they add modeling error and the best tuning of their parameters (Fig. 4a) actually gives a completely wrong throw (h).
I don’t think this article says anything about predictive value of models in general.

wsbriggs
November 1, 2011 4:11 am

What the “Climate Scientists” are trying to do is to piggyback on the successes of physics where first principles are well understood. The difference is huge.
Several years ago LLNL had developed a computer model of the impact of multiple laser beams on a small, aluminum covered, spherical volume of deuterium. The model showed a shockwave racing around the ball. Did it really exist? Searching, they found that a group of physicists in Italy had used an ultra high-speed camera to record laser beams striking an aluminum ball. The shock wave was there. The model can be said to be capable of doing physics from first principles and correctly predicted a physical result.
The difference between the understanding of physics that predicted the shockwave on the surface of the aluminum sphere and the models predicting the climate should be obvious to a school child. Scientists working to predict the climate are not remotely able to use physics at a fine enough level to predict lightning in their models, let alone the weather.

Kelvin Vaughan
November 1, 2011 4:23 am

There used to be a saying when I was young “Only a fool tries to predict the future”! Looks like it was accurate then.

November 1, 2011 5:56 am

TimTheToolMan says:
October 31, 2011 at 11:00 pm
Parameterisation is fitting, not “sound physics”.
It is if the parameter is based on the physics involved. An example is the parameterization of the ‘Surface roughness’ [see chapter 8 of http://www.stanford.edu/group/efmh/FAMbook/Chap8.pdf ].

November 1, 2011 5:59 am

TimTheToolMan says:
October 31, 2011 at 11:00 pm
Parameterisation is fitting, not “sound physics”.
It is if the parameter is based on the physics involved.
An example is the Charnock relation:
http://www.termwiki.com/EN:Charnock's_relation
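For readers unfamiliar with the Charnock relation Leif cites, here is a minimal sketch: the roughness length comes from a physical quantity (friction velocity) rather than from tuning against model output. The Charnock constant used below is one commonly quoted value; individual models adopt somewhat different numbers.

```python
G = 9.81  # gravitational acceleration, m/s^2

def charnock_roughness(u_star, alpha=0.014):
    """Sea-surface roughness length z0 (m) from friction velocity u* (m/s).

    alpha is the Charnock constant; 0.014 is a commonly quoted value,
    but model choices vary.
    """
    return alpha * u_star ** 2 / G

for u_star in (0.1, 0.3, 0.6):
    print(f"u* = {u_star:.1f} m/s -> z0 = {charnock_roughness(u_star):.2e} m")
```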

Frank K.
November 1, 2011 7:20 am

Leif Svalgaard says:
October 31, 2011 at 9:54 pm
Jaye Bass says:
October 31, 2011 at 9:21 pm
Define “sound physics” in terms of a complete, validated and verified, system model for the climate. Get back to me when you are finished.
Updated presentation at: http://www.stanford.edu/group/efmh/FAMbook2dEd/index.html
and ‘come back to me when you are finished’.
Nice reference Leif – Thanks.
Does Dr. Jacobson have a GCM climate code that he’s written (I couldn’t find anything online)? If so, perhaps he has listed for us the specific differential equations and numerical algorithms he is solving in his model and the associated initial/boundary conditions. That would be highly informative. I can wait…

Crispin in Waterloo
November 1, 2011 7:22 am

@E.M.Smith
“There are ‘hydraulic computers’ for various uses ”
+++++++++
My father was involved in the early 1960’s in the development of a ‘teaching machine’ that used hydraulic logic circuits that were impressed onto the side of a plastic card the size of a standard business card. These could be removed (as from a countertop PayPoint credit card machine) and swapped for another one, which had the effect of reprogramming the ‘computer’. It had no electronics in it.
Perhaps this was developed from the hydraulic transmission science, or preceded it. Not sure. It had control gates, amplifiers and controllable ‘current’. The circuits looked like squashed spiders and had common water connection points arranged in a grid so they were interchangeable.

Jeremy
November 1, 2011 7:42 am

Leif Svalgaard says:
October 31, 2011 at 3:05 pm

Jeremy says:
October 31, 2011 at 2:56 pm
So, to apply this to the previous topic then, are we sure all physics in their first and second order influences are included in any model of earths climate?

We are not sure, of course, but the remedy is to keep trying to improve them.

That’s an unassailable point and you will get no argument from me. However, it does not address the issue of whether I should place any confidence in the current predictive value of computer models.

Gail Combs
November 1, 2011 7:43 am

Jeff D says:
October 31, 2011 at 10:47 pm
Cracks me up. We can’t even measure temperature properly. Why would we think that a model could even come close for long-term predictions? Is it worth the effort to try? Yes, but don’t confuse this with reality and try to convince me the end of the world is nigh. Just last Tuesday the models said that it was going to be clear and sunny skies for the next 5 days. It was raining on Thursday. Most of the time they get pretty close, but even short term it is so chaotic as to be almost useless.
Speaking of proper measurements. How accurate are the CO2 readings say for the last 20 years?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
The first question is WHERE.
One of the first “ASSumptions” made is that CO2 is well mixed and uniform. This ASSumption is accepted by most skeptics despite the efforts of a few scientists.
If I get the physics at least sort of correct, the interaction with Long Wave Radiation emitted by land is much stronger at the bottom of the atmosphere than at the top. This is because the density of air follows the Gas Laws.
Therefore the “Green House Gas” interaction that is the center of all this is MORE dependent on the CO2 at the bottom of the atmosphere than at the top. I am sure there is some sort of height-above-sea-level vs. interaction math, or there should be.
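A rough isothermal barometric sketch of that density point follows; it assumes a single fixed temperature, which real profiles do not have:

```python
import numpy as np

# Isothermal barometric estimate of how air density (and with it the
# number of CO2 molecules per unit volume) thins out with altitude.
M = 0.0289644   # molar mass of dry air, kg/mol
g = 9.80665     # gravitational acceleration, m/s^2
R = 8.31446     # gas constant, J/(mol K)
T = 288.0       # an assumed uniform temperature, K

scale_height = R * T / (M * g)   # roughly 8.4 km
for h in (0, 2000, 5000, 10000):
    print(f"{h:6d} m: {np.exp(-h / scale_height):.2f} of sea-level density")
```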
So what is the CO2 concentration at the ground level????
WHEAT:
“…The CO2 concentration at 2m (~6 ft) the crop was found to be fairly constant during the daylight hours on single days or from day-to-day throughout the growing season ranging from about 310 to 320 p.p.m. Nocturnal values were more variable and were between 10 and 200 p.p.m. higher than the daytime values….”
CO2 depletion “…Plant photosynthetic activity can reduce the CO2 within the plant canopy to between 200 and 250 ppm… I observed a ppm drop within a tomato plant canopy just a few minutes after direct sunlight at dawn entered a green house (Harper et al 1979) … photosynthesis can be halted when CO2 concentration approaches 200 ppm… (Morgan 2003) Carbon dioxide is heavier than air and does not easily mix into the greenhouse atmosphere by diffusion… “
From this it is obvious that:
1. Plants can RAPIDLY change the concentration of CO2 from ~400ppm down to as low as 200ppm.
2. The change in CO2 depends on the presence or absence of sunlight.
3. Wind bringing more CO2 to the plants is a bigger factor than diffusion.
4. The amount of sunlight changes every day of the year and throughout each day.
5. Wind changes
6. Plant activity changes with temperature. (A hard frost can stop some, but not all.)
So even if there is an “even distribution of CO2” somewhere higher in the atmosphere, at the lowest level of the atmosphere where a lot of the “action” is, the amount of CO2 is chaotic.
(By the way do not forget all the sources like volcanoes or termites emitting CO2)
Seems to me “Modeling” CO2 is a lot more complicated than is ever admitted. Not surprising, since the whole objective from the very start was to show that industrialization by man was hurting the environment.
From Mauna Loa’s methodology:
4. In keeping with the requirement that CO2 in background air should be steady, we apply a general “outlier rejection” step…. http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html
…..They also pointed out that the CO2 measurements at current CO2 observatories use a procedure involving a subjective editing (Keeling et al., 1976) of measured data, only representative of a few tens of percent of the total data. http://www.co2web.info/esef3.htm
The information presented to the public is less than one percent of the data??? This is the “cherry picked” data used in the climate models??? Most “Skeptics”, with few exceptions, agree with “the requirement that CO2 in background air should be steady” despite an ever-changing set of sources and sinks in the real CO2 dynamics – changes mixed only by diffusion and the wind.
Graph of CO2 measurements by different labs: http://img27.imageshack.us/img27/3694/co2manytrends.png
THE POLITICS
Discussion of the politics and science of CO2 measurement : http://www.co2web.info/
For example:
“…The acknowledgement in the paper by Pales & Keeling (1965) describes how the Mauna Loa CO2 monitoring program started:
“The Scripps program to monitor CO2 in the atmosphere and oceans was conceived and initiated by Dr. Roger Revelle who was director of the Scripps Institution of Oceanography while the present work was in progress. Revelle foresaw the geochemical implications of the rise in atmospheric CO2 resulting from fossil fuel combustion, and he sought means to ensure that this ‘large scale geophysical experiment’, as he termed it, would be adequately documented as it occurred. During all stages of the present work Revelle was mentor, consultant, antagonist. He shared with us his broad knowledge of earth science and appreciation for the oceans and atmosphere as they really exist, and he inspired us to keep in sight the objectives which he had originally persuaded us to accept.”…”
http://www.co2web.info/ESEF3VO2.pdf
OBJECTIVES???? PERSUADED us to ACCEPT???? What do those words have to do with science?
This is just the CO2; then there is water in all its forms, not to mention the shenanigans with temperature.
In other words the input data is bogus. GIGO

November 1, 2011 8:00 am

Frank K. says:
November 1, 2011 at 7:20 am
perhaps he has listed for us the specific differential equations and numerical algorithms
He does list all of those that are routinely used. He does not [as far as I know] run a model [expensive] himself.

Leonard Weinstein
November 1, 2011 8:04 am

Leif Svalgaard said:
“If the ‘model’ is plain curve fitting it may indeed have no predictive capability. If the model is based on sound physics it usually will have predictive capability, unless it is too simple.”
Leif, you ought to read a good book on chaos theory. In a system described EXACTLY by numerous non-linear equations (describing the physics of the problem), with extremely accurately known boundary and initial conditions, the calculated evolution forward still diverges over time. If not all of the equations (physics) are well known, but just approximated (as is true for ocean currents, clouds, cosmic ray effects, aerosols, etc.), and with limited accuracy in initial and boundary conditions, the calculated solution diverges rapidly. That is why the best we can do on weather prediction, even using satellite data, generally falls apart in just a few days. Increasing the time and looking at long time averages (i.e., climate) does not improve the accuracy, since chaotic systems vary at all scales. There is no indication that climate is not chaotic. Your statement is falsified.
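Leonard’s divergence claim is easy to demonstrate. The sketch below integrates the Lorenz system (a toy, not a climate model) from two states differing by one part in a million, using a crude explicit Euler step:

```python
import numpy as np

# Two Lorenz-system trajectories started 1e-6 apart.  The equations
# are exact; the forecasts still part company.
def step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-6, 0.0, 0.0])
for n in range(1, 6001):
    a, b = step(a), step(b)
    if n % 1500 == 0:
        print(f"t = {n * 0.005:4.1f}: separation = {np.linalg.norm(a - b):.3e}")
```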

November 1, 2011 8:04 am

Jeremy says:
November 1, 2011 at 7:42 am
it does not address the issue of whether I should place any confidence in the current predictive value of computer models.
Your confidence [or lack thereof] could come from several sources:
1) intimate and detailed personal knowledge of the models and the physics
2) hearsay [parroting what other people say]
3) your agenda [if any]
4) other [please specify]
Which of these is it?

Gail Combs
November 1, 2011 8:05 am

I really hate this dinosaur of a computer.
The quote, with the dropped word restored, was: “I observed a 50 ppm drop within a tomato plant canopy just a few minutes after direct sunlight at dawn entered a green house (Harper et al 1979) “ Source

Eric Anderson
November 1, 2011 8:50 am

Leif: “If the model is based on sound physics it usually will have predictive capability, unless it is too simple.”
I understand your point, namely that if *all* the physics were included then we could perhaps expect a decent predictive model. This might be true, as an abstract idea, but in the real world as a practical matter the point about lack of predictive capability still holds for GCM’s for a couple of reasons. First, we are not even close to having all the physics included. We don’t know if all the physics included are fully “sound” (though of course the models do include much sound physics); further, the models are almost certainly too simple. Second, the climate system is essentially made up of particles, many of which interact at the atomic level. Thus, even if *all* the physics principles were understood to the nth degree and included in the model, we would still have to know, for a specific point in time, the location and direction of movement of all of these particles in order to accurately model how the physics would influence these particles over time. In other words, it isn’t just physics, it’s physical facts: location and movement of particles; interaction with landforms such as mountains, valleys, deserts, oceans; volcanic eruptions; vegetation growth; etc. I personally have no confidence that there is an ability to model climate 50 or 100 years out when there are a massive number of unknowns, both in physics and in physical facts.

Eric Anderson
November 1, 2011 9:00 am

Leif at 8:04 a.m.
You left out a couple of the most obvious (at least as they relate to lack of confidence):
5) A basic realization of the kinds of issues that need to be treated in the models, which are admitted to not be fully treated well (clouds, for example).
6) The models’ abysmal failure to accurately predict the last decade plus.
7) A general healthy skepticism about complex, multi-parameter models, which to date have not been demonstrated to have predictive value.

November 1, 2011 9:19 am

Leonard Weinstein says:
November 1, 2011 at 8:04 am
There is no indication that climate is not chaotic.
There is no indication that it is.

November 1, 2011 9:22 am

Willis Eschenbach says:
November 1, 2011 at 9:18 am
So did you read the paper, and if so, where is it wrong?
Yes, I did read it, and it is not the paper that is wrong, just you in assuming the conclusion carries over to climate models.

November 1, 2011 9:39 am

Willis Eschenbach says:
November 1, 2011 at 9:18 am
So did you read the paper, and if so, where is it wrong?
To assume that the physics is ‘perfect’ is a stretch as all I see is just fitting [even using genetic algorithms] with no physics.

Martin Lewitt
November 1, 2011 9:43 am

Leif Svalgaard,
I agree that models are worth some investment. But given all the diagnostic issues that have been reported – correlated surface albedo bias, under-representation of precipitation, under-representation of solar cycle signatures, poor regional performance, questionable cloud feedback, etc. – they shouldn’t be used for attribution and projection yet, and that is before the issue of local vs. global optima raised by the paper is considered. There is a good chance that climate might be predictable, given that even a 1960s geography textbook will probably be off by only one to two hundred miles in climate-zone locations in the year 2100, even in worst-case scenarios.
If you have come this far, you have to agree that the IPCC conclusions and confidence were premature, and reporting model projections without error range estimates due to the diagnostic issues was unscientific.

November 1, 2011 10:01 am

Willis Eschenbach says:
November 1, 2011 at 9:43 am
Then why did they start by mentioning “climate models” in the very first paragraph of their work?
They and you and most others go wrong by assuming that climate models are curve fitting as in the paper. They are not; they are solving differential equations forward with a small time step [every ~5 minutes]. That is why the models do not compare. This is a qualitative difference.
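What “solving differential equations forward with a small time step” means, in toy form: a zero-dimensional energy-balance equation stepped every simulated 5 minutes. The numbers are illustrative only; a GCM steps millions of coupled equations this way.

```python
# Zero-dimensional "energy balance" toy: C dT/dt = S - sigma T^4,
# integrated forward with an explicit 5-minute step.
C = 1.0e8        # assumed heat capacity per unit area, J/(m^2 K)
S = 240.0        # absorbed solar flux, W/m^2
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
dt = 300.0       # the 5-minute time step, in seconds

T = 250.0  # initial temperature, K
for _ in range(300_000):             # about 2.9 simulated years
    T += dt * (S - SIGMA * T ** 4) / C
print("temperature after spin-up:", round(T, 2), "K")   # approaches ~255 K
```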

Martin Lewitt
November 1, 2011 10:24 am

Leif Svalgaard,
The fluid and mass balance equations are just a small part of the climate models. Most of the rest of the physics is parameterized. BTW, I believe the time steps are much longer than 5 minutes, which is why they use “white sky albedo”, another parameterization. The time steps are in the range of two to four hours, at least for the AR4 models. Maybe they are doing shorter time steps now.

November 1, 2011 10:34 am

Martin Lewitt says:
November 1, 2011 at 10:24 am
The fluid and mass balance equations are just a small part of the climate models. Most of the rest of the physics is parameterized. BTW, I believe the times steps are much longer than 5 minutes,
Check Jacobson’s book I referred you to, in order to calibrate your ‘belief’ against reality.
For time step check slide 7 of http://uscid.us/08gcc/Brekke%201.PDF

commieBob
November 1, 2011 10:44 am

Leif Svalgaard says:
October 31, 2011 at 1:46 pm
If the ‘model’ is plain curve fitting it may indeed have no predictive capability. If the model is based on sound physics it usually will have predictive capability, unless it is too simple.

The problem is that even if we perfectly understood the physics, the computer to perfectly calculate the climate would have to be the size of the known universe. 😉
Surely you would agree that cosmic rays influence the climate. That means that, not only do we have to predict the sun’s activity (because that modulates the cosmic rays), we have to predict the activity of the heavenly bodies supplying the cosmic rays in the first place.
What we need is not a knowledge of the underlying physical principles, it is a knowledge of how the climate works in general.
An example would be Ohm’s law. It is empirical. You don’t have to know anything about physics to use Ohm’s law and get a useful result. With a knowledge of Ohm’s law, you can model a linear circuit with great accuracy.
A similar case is Piers Corbyn. He seems to get accurate (more than average for weather forecasters) climate predictions without resorting to complicated physical models. His main thing seems to be a correlation between solar activity and the climate. As far as I can tell, it is empirical.
It seems likely to me that it is impossible to accurately model a complex chaotic system based on physical properties alone unless the computer is as complex as the system itself. For the climate, that would seem to imply that each element in the model would have to be quite small in order to be strictly deterministic rather than acting chaotically. A wild-ass guess would be each cubic meter of the atmosphere, ocean and land. OTOH, perhaps we are dealing with butterfly-sized elements.
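commieBob’s Ohm’s-law point in code: an empirical model fitted from data, with no microphysics, predicts fine inside its regime. The “measurements” below are invented for illustration.

```python
import numpy as np

# Fit the one empirical constant in Ohm's law from noisy readings,
# then predict a voltage at a new current.
rng = np.random.default_rng(1)
current = np.linspace(0.1, 2.0, 20)                    # amperes
voltage = 4.7 * current + rng.normal(0, 0.05, 20)      # a 4.7-ohm "resistor"

R = np.polyfit(current, voltage, 1)[0]                 # slope ~ resistance
print(f"fitted R = {R:.2f} ohm; predicted V at 1.5 A = {R * 1.5:.2f} V")
```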

Richard S Courtney
November 1, 2011 11:04 am

Leif:
I see you have still not addressed the fundamental point made to you in my post at October 31, 2011 at 3:43 pm. I would be grateful if you were to answer it.
Richard

November 1, 2011 11:12 am

Richard S Courtney says:
November 1, 2011 at 11:04 am
I see you have still not addressed the fundamental point made to you in my post at October 31, 2011 at 3:43 pm. I would be grateful if you were to answer it.
I didn’t see an explicit question to answer, just your [somewhat muddled] opinion.

Steve In S.C.
November 1, 2011 11:12 am

I must remark that the discussion of physics versus curve fitting is interesting, as is the discussion of chicken entrails. The sniping back and forth has some levity but gets tiresome after a while.
The thing that everyone must realize is that, despite your considerable skill and knowledge, there are days that you just ain’t going to hit that curve ball.

Martin Lewitt
November 1, 2011 11:12 am

Leif Svalgaard,
The time step depends on the numerical method; implicit methods that allow longer time steps are being sought, and different physics is stepped at different intervals. Even at the AR4, parameterizations, averaging and smoothing must be tuned not just for physics but for numerical requirements over different time scales. However great our faith in physics is, simulating physics on a global scale is far from that ideal:
“Coupling frequency is an important issue, because fluxes are averaged during a coupling interval. Typically, most AOGCMs evaluated here pass fluxes and other variables between the component parts once per day. The K-Profile Parametrization ocean vertical scheme (Large et al., 1994), used in several models, is very sensitive to the wind energy available for mixing. If the models are coupled at a frequency lower than once per ocean time step, nonlinear quantities such as wind mixing power (which depends on the cube of the wind speed) must be accumulated over every time step before passing to the ocean. Improper averaging therefore could lead to too little mixing energy and hence shallower mixed-layer depths, assuming the parametrization is not re-tuned. However, high coupling frequency can bring new technical issues. In the MIROC model, the coupling interval is three hours, and in this case, a poorly resolved internal gravity wave is excited in the ocean so some smoothing is necessary to damp this numerical problem. It should also be noted that the AOGCMs used here have relatively thick top oceanic grid boxes (typically 10 m or more), limiting the sea surface temperature (SST) response to frequent coupling (Bernie et al., 2005).”
http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter8.pdf
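The averaging pitfall in that quoted passage (the cube of an averaged wind understates the average of the cubed winds) checks out numerically; the wind record below is synthetic, with made-up statistics:

```python
import numpy as np

# Wind mixing power goes as u^3.  Averaging the wind first and cubing
# once per coupling interval understates the energy relative to
# accumulating u^3 every time step.
rng = np.random.default_rng(2)
u = np.abs(rng.normal(7.0, 3.0, 288))   # one day of 5-minute wind speeds, m/s

print(f"mean(u^3)  = {np.mean(u ** 3):7.1f}  (accumulate, then average)")
print(f"(mean u)^3 = {np.mean(u) ** 3:7.1f}  (average, then cube)")
```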

November 1, 2011 11:14 am

commieBob says:
November 1, 2011 at 10:44 am
Surely you would agree that cosmic rays influence the climate.
Actually not. There is, at best, conflicting evidence, and overall, no compelling evidence for any such observable influence.

November 1, 2011 11:21 am

Martin Lewitt says:
November 1, 2011 at 11:12 am
The time step depends on the numerical method; implicit methods that allow longer time steps are being sought, and different physics is stepped at different intervals.
True, but irrelevant, as the time step is very small compared to the length of the simulation. The smallest time step [5 minutes] is used for the most important dynamical processes.

November 1, 2011 11:23 am

Steve In S.C. says:
November 1, 2011 at 11:12 am
I must remark that the discussion of physics or curve fitting is interesting as is the discussion of chicken entrails.
It illustrates the level of ignorance by participants.

Frank K.
November 1, 2011 11:43 am

Martin Lewitt says:
November 1, 2011 at 11:12 am
The time step depends on the numerical method; implicit methods that allow longer time steps are being sought, and different physics is stepped at different intervals. Even at the AR4, parameterizations, averaging and smoothing must be tuned not just for physics but for numerical requirements over different time scales. However great our faith in physics is, simulating physics on a global scale is far from that ideal:”
Your comment goes to the stability of the numerical algorithm used in the climate model. Unfortunately, it is difficult or impossible to prove anything about the stability of systems of partial differential equations unless they are linearized model equations. The “optimal” time step is therefore usually found by trial and error (i.e. the largest value that doesn’t cause the solution to “blow up”).
This also gets us into the fuzzy area of why the numerical algorithms for climate-model atmospheric dynamics are any different from numerical weather prediction models…
“For time step check slide 7 of http://uscid.us/08gcc/Brekke%201.PDF
You’ve got to be kidding?! Where is the stability analysis to prove the values asserted? There aren’t even any equations in the entire presentation!

November 1, 2011 11:51 am

Frank K. says:
November 1, 2011 at 11:43 am
“For time step check slide 7 of http://uscid.us/08gcc/Brekke%201.PDF”
You’ve got to be kidding?! Where is the stability analysis to prove the values asserted? There aren’t even any equations in the entire presentation!

It was not the purpose to prove anything, just to substantiate with a quote what is actually used.

TomB
November 1, 2011 12:08 pm

I especially like the title of this piece. As, indeed, this is much like Jonathan Swift’s original proposal: so seemingly simple in concept but impossible to execute.

Frank K.
November 1, 2011 12:09 pm

Leif Svalgaard says:
November 1, 2011 at 11:51 am
“It was not the purpose to prove anything, just to substantiate wit a quote what is actually used.”
OK. Sounds like a trial and error time step to me… ;^)

Septic Matthew
November 1, 2011 12:10 pm

RACookPE1978 says:
October 31, 2011 at 10:54 pm
That’s a good contribution. You provide lots of examples of how climate models are, in Leif Svalgaard’s words, “too simple”.

November 1, 2011 12:15 pm

Frank K. says:
November 1, 2011 at 12:09 pm
OK. Sounds like a trial and error time step to me… ;^)
A more important issue is the amount of computer time spent, and also the relation between time step and spatial resolution. Last time I asked Gavin Schmidt about this, he told me that the 5 minutes is a compromise between all these factors [including numerical stability].

Myrrh
November 1, 2011 12:15 pm

“We have been unable to find ways of identifying which calibrated models will have some predictive capacity and those which will not.”
As noted here: GLOBAL WARMING: FORECASTS BY SCIENTISTS VERSUS SCIENTIFIC FORECASTS, by Kesten C. Green and J. Scott Armstrong:

“The IPCC WG1 Report was regarded as providing the most credible long-term forecasts of global average temperatures by 31 of the 51 scientists and others involved in forecasting climate change who responded to our survey. We found no references in the 1056-page Report to the primary sources of information on forecasting methods despite the fact these are conveniently available in books, articles, and websites. We audited the forecasting processes described in Chapter 8 of the IPCC’s WG1 Report to assess the extent to which they complied with forecasting principles. We found enough information to make judgments on 89 out of a total of 140 forecasting principles. The forecasting procedures that were described violated 72 principles. Many of the violations were, by themselves, critical. The forecasts in the Report were not the outcome of scientific procedures. In effect, they were the opinions of scientists transformed by mathematics and obscured by complex writing. Research on forecasting has shown that experts’ predictions are not useful in situations involving uncertainty and complexity. We have been unable to identify any scientific forecasts of global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder.”

Isn’t that the problem here? No knowledge of the subject.

commieBob
November 1, 2011 12:17 pm

Leif Svalgaard says:
November 1, 2011 at 11:14 am

commieBob says:
November 1, 2011 at 10:44 am
Surely you would agree that cosmic rays influence the climate.

Actually not. There is, at best, conflicting evidence, and overall, no compelling evidence for any such observable influence.

OK so the computer has merely to be the size of the solar system.

November 1, 2011 12:22 pm

commieBob says:
November 1, 2011 at 12:17 pm
OK so the computer has merely to be the size of the solar system.
We actually have such a computer, it is called ‘our solar system’ and shows us what is happening.

Septic Matthew
November 1, 2011 12:33 pm

Leif Svalgaard wrote: And if the model predictions match reality we have not learned anything new. Only when there is discrepancies can we learn something.
That is not absolutely or universally true. We will have learned that the model makes accurate predictions. Enough of that and we might be able to base policy (including the design of the next experiment, or the design and construction of a device) on the model predictions. If the model contains a novel conjecture, then the accurate prediction can increase confidence that the conjecture is not false. Learning need not be all-or-none; it can be an increase or decrease in confidence of reliability. “We” denotes a diverse group: “we” learned a great deal when Eddington’s expedition confirmed the predictions of Einstein’s General Theory of Relativity, and when Meitner wrote out the model that was accurate for the latest in the series of experiments by Meitner, Hahn and Strassmann. That last ignited, so to speak, a rush of confirmatory experiments and a tidal wave of effort to construct new devices. Perhaps “we” could say that Einstein learned nothing new from those model confirmations, but everyone else learned a great deal.

DirkH
November 1, 2011 12:34 pm

Leif Svalgaard says:
November 1, 2011 at 10:01 am
“Willis Eschenbach says:
November 1, 2011 at 9:43 am
Then why did they start by mentioning “climate models” in the very first paragraph of their work?
They and you and most others go wrong by assuming that climate models are curve fitting as in the paper. They are not, they are solving differential equations forwards with a small time step [every ~5 minutes]. That is why the models do not compare. This is a qualitative difference.”
Leif, don’t confuse the genetic algorithm that does the curve fitting by selecting a model variation with the individual model runs. They say their genetic algorithm runs a total of 7,000 individual model runs. It is these model runs that they compare to GCM runs.
There is no qualitative difference from a Gavin Schmidt running a GCM, tuning its parameters, and running it again to see whether it matches historical data better now.
And we KNOW that they do this – they constantly publish papers in which they argue that they now have a better estimate for climate sensitivity to a doubling of CO2 because they tried out some values…
So the human climate modelers in their entirety correspond to a huge curve-fitting algorithm that runs GCMs over and over again and modifies them.

November 1, 2011 12:46 pm

Willis Eschenbach says:
November 1, 2011 at 12:39 pm
They have a number of parameters which are tuned by comparison of the model outputs to the historical data. This is not a qualitative difference at all.
I don’t think so. Perhaps you have some references.
DirkH says:
November 1, 2011 at 12:34 pm
Leif, don’t confuse the genetic algorithm that does the curve fitting by selecting a model variation with the individual model runs.
The climate models are numerical solutions to differential equations. I see no such in the paper under discussion. That is the qualitative difference.

Rational Debate
November 1, 2011 12:53 pm

re: Eric Anderson says: November 1, 2011 at 9:00 am

re: Leif at 8:04 a.m.
You left out a couple of the most obvious (at least as they relate to lack of confidence):
5) A basic realization of the kinds of issues that need to be treated in the models, which are admitted to not be fully treated well (clouds, for example).
6) The models’ abysmal failure to accurately predict the last decade plus.
7) A general healthy skepticism about complex, multi-parameter models, which to date have not been demonstrated to have predictive value.

Exactly.
Add :
8) A basic understanding of the massive amount of uncertainty with regard to how much various factors influence climate (e.g., soot, underwater volcanoes, natural & man made aerosols, atmospheric residence time of CO2, clouds, cosmic rays, etc).
9) A good concept of the degree of unknowns that still exists – e.g., how many “new” but highly relevant factors have been and are being recently discovered, how little understanding there is of naturally existing climate cycles, etc.
10) A basic understanding that the model results are being used, by their producers and operators, to claim severe damage will occur to the Earth, when current conditions and at least some of the model results don’t deviate from natural cycles seen in the past – in terms of rate of temp increase, amount of temp increase, and maximum temps – and in those past cycles, the warmest times were the best times for man and other species, as best we can tell.

John Garrett
November 1, 2011 1:04 pm

w.-
Asking professional crystal-ball readers to cease prognosticating is akin to asking a politician to stop talking. Whether they admit it or not, both classes of men get a thrill out of seeing their name in print or hearing their voice broadcast. It’s an addictive drug and, as is the case for most addicts, asking them to go “cold turkey” is asking the impossible.

November 1, 2011 1:22 pm

Rational Debate says:
November 1, 2011 at 12:53 pm
10) A basic understanding that the model results are being used,
That last one is completely irrelevant as to the veracity or lack thereof of the models. Many of the other factors cited sound more like ignorance on part of the doubter or [worse] are agenda-driven.

Frank K.
November 1, 2011 1:39 pm

Leif Svalgaard says:
November 1, 2011 at 12:15 pm
“A more important issue is the amount of computer time spent and also the relation between time step and spacial resolution. Last time I asked Gavin Schmidt about this, he told me that the 5 minutes is a compromise between all these factors [including numerical stability].”
Actually, numerical stability is the ONLY thing that matters. The basic relationship between spatial resolution and time step is given by the CFL (or Courant) number: C = u*dt/dx. For most explicit schemes, C must generally be less than about 1. You can push this higher for implicit schemes (and hence increase the time step), but temporal accuracy will begin to suffer if the time step is too large. Again, stability proofs can only be made for simple model equations. Once you begin to add equation coupling, non-linearity, spatially-varying source terms and properties, Lagrangian advection of scalars, etc., the proofs go out the window and you rely on trial and error (which I suppose is the “compromise” Gavin was referring to).
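Frank K.’s CFL constraint, applied to rough GCM-like numbers; the grid spacing and wind speed below are assumptions, not any particular model’s settings:

```python
# Explicit-scheme stability bound: C = u * dt / dx <= ~1, so the largest
# usable time step falls out of the grid spacing and the fastest wind.
u_max = 100.0     # assumed fastest resolved wind, m/s (jet-stream-ish)
dx = 200_000.0    # assumed grid spacing, m (~2 degrees at mid-latitudes)

dt_max = 1.0 * dx / u_max   # Courant number limit of 1
print(f"largest stable step ~ {dt_max:.0f} s = {dt_max / 60:.0f} minutes")
```

On those assumed numbers the bound works out to roughly half an hour, so a 5-minute step, as mentioned upthread, would sit comfortably inside it once other processes and accuracy requirements are considered.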

Rational Debate
November 1, 2011 1:40 pm

re: Leif Svalgaard says: November 1, 2011 at 1:22 pm

Rational Debate says: November 1, 2011 at 12:53 pm
10) A basic understanding that the model results are being used,

That last one is completely irrelevant as to the veracity or lack thereof of the models. Many of the other factors cited sound more like ignorance on part of the doubter or [worse] are agenda-driven.

Oh really, Leif? The null hypothesis no longer has meaning in the scientific method? When did that get overturned and tossed? Researcher bias and agenda also no longer merit concern? Both were included in 10), factors you amazingly claim are irrelevant.
As to the other factors sounding more like ignorance or agenda-driven (your claim ironically coming immediately after you claim that researcher bias is completely irrelevant wrt models) – try reading the research from the past decade, Leif. Then tell me: just how much does “black carbon”/soot affect climate? How much agreement is there over when clouds act as a positive feedback vs. a negative one – and just how much do they affect overall albedo? What is the residence time of CO2 in the atmosphere? Research tells us anywhere from 5 years to hundreds of years. How well dispersed is CO2 in the atmosphere? To what degree, in which direction, and under what conditions do the aerosols that biota produce affect the climate? Have plankton levels radically decreased in the past 40 or so years, or increased?
Then tell me how these and so many other clearly uncertain factors have the underlying physics involved accurately captured in the climate models.

Vince Causey
November 1, 2011 1:53 pm

Willis Eschenbach says:
November 1, 2011 at 12:39 pm
“They have a number of parameters which are tuned by comparison of the model outputs to the historical data. This is not a qualitative difference at all.”
Leif says:
“I don’t think so. Perhaps you have some references.”
Wiki says:
“Climate models are systems of differential equations based on the basic laws of physics, fluid motion, and chemistry.”
BUT. . . it then adds the following:
“Parametrizations are used to include the effects of various processes. All modern AGCMs include parameterizations for:
* convection
* land surface processes, albedo and hydrology
* cloud cover”
So, it appears that although based on differential equations, there are parameterizations. The paper is therefore correct in referring to the GCM’s as tuned models, unless you assume that changing the parameters won’t alter their predictive power.

November 1, 2011 2:02 pm

Frank K. says:
November 1, 2011 at 1:39 pm
you rely on trial and error (which I suppose is the “compromise” Gavin was referring to).
I’m sure that that was part of the equation as well. Although it is hard to know when the models ‘blow up’ because of numerical instability.
Rational Debate says:
November 1, 2011 at 1:40 pm
Oh really Leif? The null hypothesis no longer has meaning in the scientific method. When did that get overturned and tossed? Researcher bias and agenda also no longer merits concern?
The null hypothesis says that researcher bias and agenda are not relevant.
Then tell me how these and so many other clearly uncertain factors have the underlying physics involved accurately captured in the climate models.
I didn’t say that they were accurately captured. Each is an item of ongoing research and the results will be incorporated at their appropriate time. My view is that this will make the models better with time. What I complained about is the notion that this cannot be done, in principle.

November 1, 2011 2:08 pm

Vince Causey says:
November 1, 2011 at 1:53 pm
So, it appears that although based on differential equations, there are parameterizations. The paper is therefore correct in referring to the GCM’s as tuned models, unless you assume that changing the parameters won’t alter their predictive power.
As far as I know, the parameterizations are not driven by comparison with observed data and hence are not ‘tuned’. Other [physical] concerns and empirical data determine the parameters.

November 1, 2011 2:10 pm

Leif Svalgaard says:
November 1, 2011 at 2:08 pm
As far as I know, the parameterizations are not driven by comparison with observed data and hence are not ‘tuned’.
To make that clear: not driven by comparing model output with observed behavior to change the parameters. You might prove me wrong if you can show some examples of the opposite.

November 1, 2011 2:36 pm

Regarding Parameterisation, Leif writes : “It is if the parameter is based on the physics involved. ”
At the end of the day, they’re approximations for large areas over large time steps, and two items that are parameterised can’t properly interact within a time step. It’s not physics, Leif. Whether it’s useful is another matter, but the point is you can’t wave away the uncertainty that arises from the computational efficiencies made through parameterisation.

DirkH
November 1, 2011 2:48 pm

Leif Svalgaard says:
November 1, 2011 at 2:02 pm
“I didn’t say that they were accurately captured. Each is an item of ongoing research and the results will be incorporated at their appropriate time. My view is that this will make the models better with time. What I complained about is the notion that this cannot be done, in principle.”
Their conclusion is that you cannot determine IN ADVANCE whether your model has predictive skill. You can, of course, wait and see and improve your model when you see it failed.
That’s exactly the situation the climate modelers are in. Skepticalscience has defended Hansen’s wrong forecasts, saying “if he had assumed a climate sensitivity of 3.4 deg C per doubling of CO2 instead of 4.5 he would have been right” – and that’s what they say NOW; they might have to revise it AGAIN in a few years (and again… and again… and again. And during all those years climate models will slowly get better.)

Rational Debate
November 1, 2011 2:51 pm

re: Leif Svalgaard says: November 1, 2011 at 2:02 pm

Rational Debate says: November 1, 2011 at 1:40 pm
Oh really Leif? The null hypothesis no longer has meaning in the scientific method. When did that get overturned and tossed? Researcher bias and agenda also no longer merits concern?

The null hypothesis says that researcher bias and agenda are not relevant.

Leif, clearly they are two separate issues. The null hypothesis says nothing about researcher bias – researcher bias, however, can lead to claiming the null hypothesis has been overturned when it hasn’t. The scientific method only works when researcher bias has been eliminated, or at least mostly eliminated – and on that score many of those heavily involved in AGW research, the IPCC, and climate models are quite justifiably suspect, based on their own statements, their discounting of the null hypothesis, their discounting of natural variability, etc.

Rational Debate said: Then tell me how the underlying physics of these and so many other clearly uncertain factors is accurately captured in the climate models.

I didn’t say that they were accurately captured. Each is an item of ongoing research and the results will be incorporated at their appropriate time. My view is that this will make the models better with time. What I complained about is the notion that this cannot be done, in principle.

And I never said this cannot be done, in principle, and in time. I simply said it’s impossible to actually do NOW, in reality, and isn’t being done in the current models primarily because of insufficient knowledge of multiple factors. Yet you keep arguing against this, as if it can and is currently done in existing climate models, going so far as to imply anyone who doesn’t agree is simply ignorant or agenda driven. Perhaps the question of agenda is better asked of yourself.
I went further to note that the claims, by those creating and running the climate models, of reasonably accurate long-term projections (accurate enough to be used to affect the lives of billions of people), considering that modern climate hasn’t breached the null hypothesis of natural variability and given the uncertainty wrt the major variables involved, lead to quite reasonable questioning of the agenda and bias of the scientists involved. You claim that any bias or agenda on their part is meaningless. WUWT??

November 1, 2011 3:19 pm

Willis Eschenbach says:
November 1, 2011 at 3:04 pm
The model is tuned (using the threshold relative humidity U00 for the initiation of ice and water clouds) to be in global radiative balance (i.e., net radiation at …
The model is tuned to be in global radiative balance which is a physical constraint, not driven by comparing model output with observations, so, no, you did not respond with a relevant reference.

Theo Goodwin
November 1, 2011 3:22 pm

Leif Svalgaard says:
October 31, 2011 at 8:22 pm
“Physical systems are controlled by a [sometimes large] set of coupled differential equations. Given the equations and an initial set of values, the equations can be integrated forward in time. A good example is [the simple] physical system consisting of 568670 asteroids, 3113 comets, 171 planetary satellites, 8 planets and a star that make up our solar system. This is where you are wrong.”
You are referencing our existing physical theory, are you not? In that case, I agree with you completely, but your example is no counterexample to my argument. I heartily embrace your differential equations. You are allowed to calculate. But if those differential equations do not introduce additional primitive predicates then they cannot be treated as statements that are true or false. Do you really want to dispense with truth? Many do.
“If not all of the boundary conditions are known or the processes are poorly understood, the model will not perform as well as JPL’s calculation of orbits in the solar system. This is a condition for the climate system that might [and probably will] change as time goes on. The principle is clear, though, and must be understood.”
The only principle that is clear in your statement is that climate science, so-called, could someday become a genuine science. I certainly agree with that principle. My criticism is that climate science is not a science at this time and no one should be asserting that it is. Paraphrasing your words, “because not all of the boundary conditions are known and the processes are poorly understood, the model will not perform as well as JPL’s calculations of orbits in the solar system.” Well, right, that is exactly what I am saying. At this time you cannot claim to have a decent model of the solar system. So, why are you, in the analogous case of so-called climate science, claiming that you have a decent model there? You do not and you should lead the way among your colleagues in explaining that you do not. I wish you the greatest and quickest success but at this time you have neither model nor science.

Theo Goodwin
November 1, 2011 3:36 pm

I take my hat off to several who have replied to Leif. (Of course, I take my hat off to Leif also for being not only an eminent scientist but a good egg who engages in rational argument.) I do not have time to mention all but the following deserves special mention:
[Leif says:]
“I didn’t say that they were accurately captured. Each is an item of ongoing research and the results will be incorporated at their appropriate time. My view is that this will make the models better with time. What I complained about is the notion that this cannot be done, in principle.”
[Rational Debate says:]
“And I never said this cannot be done, in principle, and in time. I simply said it’s impossible to actually do NOW, in reality, and isn’t being done in the current models primarily because of insufficient knowledge of multiple factors. Yet you keep arguing against this, as if it can and is currently done in existing climate models, going so far as to imply anyone who doesn’t agree is simply ignorant or agenda driven. Perhaps the question of agenda is better asked of yourself.”
Yes. No one is arguing that climate science is doomed to incompleteness in principle or something similar. What we argue is that at this time, NOW, there is nothing in climate science that can qualify as either a genuine physical theory that goes beyond Arrhenius or as a validated model that captures temperature changes since 1850. Someday there will be. But all the claims of doom following from people such as Hansen and Trenberth should be withdrawn because the science is simply not there NOW.

Richard S Courtney
November 1, 2011 4:05 pm

Leif Svalgaard:
I admit that I had anticipated much better from you than your post at November 1, 2011 at 11:12 am which says;
“Richard S Courtney says:
November 1, 2011 at 11:04 am
“I see you have still not addressed the fundamental point made to you in my post at October 31, 2011 at 3:43 pm. I would be grateful if you were to answer it.”
I didn’t see an explicit question to answer, just your [somewhat muddled] opinion.”
To save others finding it, I copy my point here (below) and perhaps you can
(a) try to address it
(b) explain what you think is “muddled” in it
and
(c) explain what you think is not factual but is merely “opinion” in it .
Or is your response that I copy here (above) your admission that you know my post (copied below) shows you are wrong?
Richard
Leif Svalgaard:
Several people have rebutted your assertion at October 31, 2011 at 1:46 pm which said;
“If the ‘model’ is plain curve fitting it may indeed have no predictive capability. If the model is based on sound physics it usually will have predictive capability, unless it is too simple.”
And you have attempted to refute the rebuttals. However, you miss the important point; viz.
No model (of any kind) should be assumed to have more predictive skill than it has been demonstrated to possess.
Assuming undemonstrated model predictive skill is pseudoscience of precisely the same type as astrology. But no climate model has existed for 30, 50 or 100 years and, therefore, it is not possible for any of them to have demonstrated predictive skill over such times.
In other words, the predictions of future climate by
(a) the climate models
and
(b) examination of chicken entrails
have equal demonstrated forecast skill (i.e. none) and, therefore, they are deserving of equal respect (i.e. none).

Theo Goodwin
November 1, 2011 4:24 pm

Leif Svalgaard says:
October 31, 2011 at 9:47 pm
Jaye Bass says:
October 31, 2011 at 9:21 pm
Define “sound physics” in terms of a complete, validated and verified, system model for the climate. Get back to me when you are finished.
You can prove me wrong by reading and understanding Jacobson’s textbook describing the physics: http://www.stanford.edu/group/efmh/FAMbook/FAMbook.html
A textbook does not a physical theory make. Back around 1940, Hans Reichenbach published a really wonderful book with the title “Axiomatization of the Theory of Relativity.” I have one here somewhere and you can buy it on Amazon. It is several hundred pages long. It sets out Einstein’s physical theory and does all kinds of wonderful things such as identifying Einstein’s primitive predicates. Those predicates tell you what the theory is about.
As soon as climate science has one of these babies ready to go, I will be the first to buy it and review it. However, no such book is on the horizon for climate science because they have no rigorous formulation of their (part of) physical theory, axiomatized or not. They cannot so much as guess what their primitive predicates will be in ten years. Climate scientists should accept this reality and address the world accordingly.

November 1, 2011 4:27 pm

Theo Goodwin says:
November 1, 2011 at 3:36 pm
because the science is simply not there NOW.
But it will be eventually, and that is my point. On parameters: if the models were constantly tuned to match observations they would always be correct ‘up to yesterday’. The parameters are constrained by other considerations and the models are thus not tuned to the data, but to reasonable physics, such as radiative balance [what goes in must come out].

Frank K.
November 1, 2011 4:29 pm

Leif Svalgaard says:
November 1, 2011 at 2:02 pm
“I’m sure that that was part of the equation as well. Although it is hard to know when the models ‘blow up’ because of numerical instability.”
Well, unless there is a bug somewhere (always a possibility), a “blow up” (which refers to solutions oscillating wildly until a divide by zero or a NaN occurs) is usually due to a numerical instability being amplified, hence the importance of stability analysis of the numerical algorithms. This is not hard to know…
By the way, let me know what other considerations are required for determining the appropriate solution time step outside of numerical stability and accuracy…

November 1, 2011 4:30 pm

Willis Eschenbach says:
November 1, 2011 at 4:10 pm
I see … and the fact that the albedo is tuned, not to a “physical constraint” but to be about what the historical data says? How did they do that, without reference to historical data?
I don’t think you see. Some parameters depend on measured quantities and those change with time as instruments and data get better. It makes sense to use the best data available, no?

Theo Goodwin
November 1, 2011 4:42 pm

Leif Svalgaard says:
November 1, 2011 at 12:22 pm
commieBob says:
November 1, 2011 at 12:17 pm
OK so the computer has merely to be the size of the solar system.
“We actually have such a computer, it is called ‘our solar system’ and shows us what is happening.”
Our solar system is a model of the physical theory that describes it. This is a clear and definite use of the word “model” in this scientific context. You are getting clear on the relationship between model and theory.

November 1, 2011 4:44 pm

Frank K. says:
November 1, 2011 at 4:29 pm
hence the importance of stability analysis of the numerical algorithms. This is not hard to know…
There are more subtle ‘blowups’ where the variations just get too large without catastrophic failures.
By the way, let me know what other considerations are required for determining the appropriate solution time step outside of numerical stability and accuracy…
Computing time. If we had 1000 layers and a spatial resolution of a few meters and 4096-bit floating point numbers, there is little doubt that we can go to a much smaller time step as we can go to, say, a 500th-order Runge-Kutta method, instead of the 4th order used now. There is nothing magical about the 5 minutes.
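The tradeoff is easy to see on a toy problem (illustrative Python; a scalar ODE stands in for the model, and nothing here is GCM code): with classical 4th-order Runge-Kutta, halving the timestep doubles the work but cuts the error roughly 16-fold, so the chosen step is a cost/accuracy compromise rather than a magic number.

# Illustrative only: classical RK4 on dy/dt = -y (exact answer exp(-t)),
# showing the error ~ dt**4 scaling behind the cost/accuracy tradeoff.
import math

def rk4_integrate(f, y0, t_end, dt):
    t, y = 0.0, y0
    while t < t_end - 1e-12:
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt * k1 / 2)
        k3 = f(t + dt / 2, y + dt * k2 / 2)
        k4 = f(t + dt, y + dt * k3)
        y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return y

f = lambda t, y: -y
for dt in (0.1, 0.05, 0.025):            # halving dt doubles the cost...
    err = abs(rk4_integrate(f, 1.0, 1.0, dt) - math.exp(-1.0))
    print(f"dt={dt}  error={err:.2e}")   # ...but shrinks the error ~16x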

November 1, 2011 4:55 pm

Theo Goodwin says:
November 1, 2011 at 4:42 pm
You are getting clear on the relationship between model and theory.
Good to know that you agree.

November 1, 2011 5:13 pm

Willis Eschenbach says:
November 1, 2011 at 5:03 pm
So I invite the interested reader to peruse this piece about Kiehl’s paper, and then y’all can tell me if you think Leif is right that the models are not tuned to the historical record.
Kiehl said: “In many models aerosol forcing is not applied as an external forcing, but is calculated as an integral component of the system. Many current models predict aerosol concentrations interactively within the climate model and this concentration is then used to predict the direct and indirect forcing effects on the climate system.”

Frank K.
November 1, 2011 5:15 pm

Leif Svalgaard says:
November 1, 2011 at 4:44 pm
“Computing time. If we had 1000 layers and a spatial resolution of a few meters and 4096-bit floating point numbers, there is little doubt that we can go to a much smaller time step as we can go to, say, a 500th-order Runge-Kutta method, instead of the 4th order used now. There is nothing magical about the 5 minutes.”
Well, only partly true. Sure, you can’t use a time step of 10^-20 seconds. But, as you probably know, one should do a mesh independence study (by running the code on a series of meshes with different resolutions) to determine how small the mesh and time step should be to produce a “mesh independent” solution (where you would choose a suitable metric to determine the sensitivity). I’m sure GCM modelers do this all the time.
(Your comment about “500th order Runge-Kutta” makes no sense to me. What are you trying to say? By the way, don’t the GCM modelers all use leap frog time marching? Of course, using an explicit Runge-Kutta approach would make a lot of sense given its wide use in conventional CFD applications).
Also, if you are required by stability to take a really small time step, thereby making the numerical solution intractable, then you have a stiff or ill-posed problem on your hands. We haven’t talked about mathematical ill-posedness, which is yet another unknown in the climate modeling world…
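For the record, a minimal sketch of such an independence study (illustrative Python on a stand-in problem, not a GCM): refine the timestep until the chosen metric stops changing to within a tolerance, and accept that resolution.

# Illustrative only: a timestep-independence study. run_model is a
# stand-in (forward Euler on dy/dt = -y + sin(t)); the 'metric' is the
# final state. A real study would refine the spatial mesh as well.
import math

def run_model(dt, t_end=10.0, y0=1.0):
    t, y = 0.0, y0
    while t < t_end - 1e-12:
        y += dt * (-y + math.sin(t))
        t += dt
    return y

dt, prev = 1.0, run_model(1.0)
while True:
    dt /= 2
    cur = run_model(dt)
    print(f"dt={dt:.6f}  metric={cur:.6f}  change={abs(cur - prev):.2e}")
    if abs(cur - prev) < 1e-4:   # metric has stopped changing: accept this dt
        break
    prev = cur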

Tom in South Jersey
November 1, 2011 5:18 pm

I’m late to the party, but I shall have my say anyway. You often see such predictions in the healthcare industry. For that I profoundly miss Junkfoodscience.com

Theo Goodwin
November 1, 2011 5:25 pm

Leif Svalgaard says:
November 1, 2011 at 4:27 pm
Theo Goodwin says:
November 1, 2011 at 3:36 pm
because the science is simply not there NOW.
“But it will be eventually, and that is my point. On parameters: if the models were constantly tuned to match observations they would always be correct ‘up to yesterday’. The parameters are constrained by other considerations and the models are thus not tuned to the data, but to reasonable physics, such as radiative balance [what goes in must come out].”
We have returned to a confusion between theory and model. Watch closely:
Our solar system is a model of the physical theory that describes it.
As a model of (part of) our (total) physical theory, our solar system is a set of objects (whose behavior) renders true all the universal generalizations in (that part of) our (total) physical theory.
Obviously, then, our physical theory is true (not just useful) because all of its universal generalizations are true of that special model which is our actual solar system.
So, Leif, using your words, the question is: how does “reasonable physics” constrain the parameters and, thereby, the models?
The first of two problems for your account is that you cannot tell us what that reasonable physics is; that is, you cannot tell us without citing a textbook. But texts are not helpful here. The only thing that will serve here is a rigorously formulated set of hypotheses (part of a physical theory) that actually constrains these particular parameters. (My diagnosis is that when you take action to constrain the parameters you tell yourself that it is done on the basis of reasonable physics, but you have never demanded of yourself that you write down a rigorous formulation of it. This is no put-down. Einstein did the same thing on occasion.)
The second problem for your account is that you cannot specify some chain of reasoning that leads from the reasonable physics to the constraints on the parameters. If you had a physical theory, the relationship would be that the theory plus initial conditions implies the constraints. But what you have is your brilliant, highly educated gut. And you will not have anything more until you demand of yourself (and your colleagues) that you produce rigorously formulated chains of reasoning that lead from the reasonable physics to the parameters that they constrain.
The solution to your first problem is to continue doing what you are doing, and maybe some more things, until you have the necessary rigorously formulated physical hypotheses. Now, for the good of humanity, please announce that you do not have them now but even sceptics believe that you or your students will have them someday. For the solution to the second problem, see the solution to the first.

Steve in SC
November 1, 2011 5:26 pm

Wow! Willis both you and Leif are overly touchy today.
I must comment for the both of you.
There is no model that is 100% all the time.
There are models that are based on sound principles and are quite useful, particularly in reducing the cost of something or estimating a result. For instance, you can run your simulation software and fine tune your design based on those runs. But, in the end you really need to test that nuclear bomb to make sure it goes KaBoom instead of just Boom.
Chill out, boys. Don’t be so crabby.

November 1, 2011 5:29 pm

Willis Eschenbach says:
November 1, 2011 at 5:13 pm
Pielke Sr. on how models are tuned …
I think he just proves my point:
“Some parameters (such as the von Karman “constant”) are assumed to be universal, but most are just values that provide the best fit of a parametrization with the observed data used in its construction. The second type of parametrization is the same as the first (their division into two types is artificial), except there is no observational data to make the tuning. “

November 1, 2011 5:35 pm

Theo Goodwin says:
November 1, 2011 at 5:25 pm
The first of two problems for your account is that you cannot tell us what that reasonable physics is; that is, you cannot tell us without citing a textbook.
It takes a thick textbook to tell you.

Theo Goodwin
November 1, 2011 5:53 pm

From the article about models of financial markets:
“he assumed, reasonably, that the process would simply produce the same parameters that had been used to produce the data in the first place. But it didn’t. It turned out that there were many different sets of parameters that seemed to fit the historical data. And that made sense, he realized–given a mathematical expression with many terms and parameters in it, and thus many different ways to add up to the same single result, you’d expect there to be different ways to tweak the parameters so that they can produce similar sets of data over some limited time period.”
OMG! Of course there are an indefinitely large number of ways to tweak a model that is not a model of a specific physical theory (or whatever kind of theory). All tweaks are based on common sense or enlightened common sense or enlightened genius about the objects that make up the model. But no tweak has any meaning unless the model being tweaked is a model of a physical theory. It is the physical theory and only the physical theory that gives a context to the model and that gives a specific meaning to each object in the model. Once there is a physical theory, each object is necessary to render true all the sentences in the physical theory. Prior to the existence of such a physical theory, the so-called model is just a collection of objects with no particular meaning.

Theo Goodwin
November 1, 2011 5:57 pm

Leif Svalgaard says:
November 1, 2011 at 5:35 pm
Theo Goodwin says:
November 1, 2011 at 5:25 pm
“The first of two problems for your account is that you cannot tell us what that reasonable physics is; that is, you cannot tell us without citing a textbook.
It takes a thick textbook to tell you.”
So, there is an “Axiomatization of the Theory of Climate Change?” That would not be a text book. Each of the axioms would be shown in its unique context in the overall theory that is the unique true account of climate change. In textbooks, you get principles with some explanation of how they are applied. Big difference.

November 1, 2011 7:04 pm

Leif writes “I think he just proves my point:”
WTF? Did you not read the first part?
“but most are just values that provide the best fit of a parametrization with the observed data used in its construction.”
So because some parameters might be “ok” then you’re right? Wow. What about all the other parameters that are coarse reflections of reality?

Theo Goodwin
November 1, 2011 7:35 pm

Willis Eschenbach says:
November 1, 2011 at 5:11 pm
From Environmental Research Letters:
At present, climate models are tuned to achieve agreement with observations. This means that parameter values that are weakly restricted by observations are adjusted to generate good agreement with observations for those parameters that are better restricted, with the TOA radiative balance belonging to the latter category.
“Yep. Sure ’nuff …”
Yep, unlimited rejiggering with nothing to supply context or give meaning to the individual rejiggers.

Eric Anderson
November 1, 2011 7:42 pm

In addition to whether all the physics are understood and properly included in the models (which is very possibly asking for more than can ever be achieved — but let’s set that aside now), we also have to include in the models precise information about the starting points. That has very little to do with physics calculations, and everything to do with physical facts, such as current temperatures, current aerosols, current vegetation and on and on, nearly ad infinitum. Unless we get the current state input properly, we can’t expect to run our physics calculations to generate a predictive view of a future state. Further, there are plenty of known unknowns, such as future volcanic eruptions, future changes in aerosols, vegetation, solar activity, just to name a few. By definition, it is impossible to include these in a model with precision.
The idea that the climate models will have good long-term predictive value if they can but manage to “get the physics right” is simply wishful thinking.

November 1, 2011 9:17 pm

Willis Eschenbach says:
November 1, 2011 at 7:39 pm
Leif, models are tuned to observations. Apparently, everyone but you got the memo. Note that better-tuned climate models (give results closer to observations) are claimed (without substantiation) by the IPCC to give better predictions.
Perhaps it is time to make explicit what a tuned model is. I offer the following definition: A model M is tuned if some observable output X from the model depends on some parameter P, and if the actual observed value X’ being different from X causes P to be adjusted to P’ so that the difference D = |X - X’| is smaller than without the tuning of P to P’. For this kind of tuning to be useful it must be done continuously. I grant that I have not carefully examined the current top-of-the-line models, but I have studied in great detail the physical basis for atmospheric modeling [e.g. as given by Jacobson] and to my knowledge those models are not tuned as per the definition above. There are many parameters that encapsulate physics that can be expressed by empirical relationships [thus obviating the need for calculating them from the microphysics – which often cannot even be done], but those do not qualify. If you know specifically which parameters in current climate models are adjusted to minimize the measure D, I would be glad to be educated.
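In code, the definition reads as follows (a minimal sketch; the model, the parameter and the observation are all stand-ins, not quantities from any real GCM):

# Illustrative only: the definition of 'tuning' above, as code. model(P)
# yields an observable output X; if X disagrees with the observed X_obs,
# P is adjusted to P' so that D = |X - X'| shrinks.

def model(P):
    """Stand-in model: observable output X depends on parameter P."""
    return 2.0 * P + 1.0

def tune(P, X_obs, lr=0.05, steps=100):
    """Repeatedly nudge P to reduce D = |model(P) - X_obs|."""
    for _ in range(steps):
        D = model(P) - X_obs                        # signed mismatch
        dXdP = (model(P + 1e-6) - model(P)) / 1e-6  # sensitivity of X to P
        P -= lr * D * dXdP                          # gradient step on D**2/2
    return P

P0 = 0.0
P1 = tune(P0, X_obs=7.0)
print(abs(model(P0) - 7.0), abs(model(P1) - 7.0))   # D before vs. after

On this definition, the question is factual: are parameters in current climate models adjusted inside a loop like tune() against observed climate, or fixed beforehand from measurements that never see the model’s output?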

November 1, 2011 9:31 pm

Eric Anderson says:
November 1, 2011 at 7:42 pm
Further, there are plenty of known unknowns, such as future volcanic eruptions, future changes in aerosols, vegetation, solar activity, just to name a few. By definition, it is impossible to include these in a model with precision.
Those you deal with by running ‘scenarios’, e.g. by assuming a large volcanic eruption at a given time and seeing what its effect will be. That is where models actually can shine.

November 1, 2011 10:08 pm

Leif writes : “A model M is tuned if some observable output X from the model depends on some parameter P, and if the actual observed value X’ being different from X causes P to be adjusted to P’ so that the difference D = |X - X’| is smaller than without the tuning of P to P’.”
I’m thinking you really ought to define what a curve fit is too …because this describes curve fitting applied to an imperfect process compared to imperfect measurements.
Your underlying assumption appears to be that P-P’ (necessarily moving away from anything that may have started as a “pure” physical representation) is so small that even over many iterations the accumulated error is either negligible or cancels out.

Eric Anderson
November 1, 2011 11:35 pm

Leif: “Those [known unknowns] you deal with by running ‘scenarios’, e.g. by assuming a large volcanic eruption at a given time and seeing what its effect will be. That is where models actually can shine.”
Well, I don’t know about “shine.” The ‘scenarios’ may be interesting, perhaps even educational, but that they have any predictive value does not follow. Further, we have seen that in practice (not in the pristine idealized world of how “science” should operate) these ‘scenarios’ get pushed by those with an agenda as predictions or likely outcomes and then used to demand particular political/economic actions.
And we’re still not dealing with the formidable challenge of inputting the original parameters in a comprehensive enough way to generate valuable predictions.
Look, the models may have some value as tools to help us think about how the climate works and may even help us think about climate possibilities. But right now (1) we don’t have all the physics included, (2) we don’t have all the initial physical parameters included, and (3) we don’t know whether the ‘scenarios’ run with unknowns will bear any relation to future reality. I’m happy to let folks keep working on computer scenarios, but let’s acknowledge the very real limitations. They can keep improving things and then get back to us once they have a model that has actually been successful at forecasting a decade’s worth of climate (or was it 17 years that was needed to see an actual trend . . . or perhaps 30 :)). Until then, the onus is squarely and properly on those who market their scenarios/predictions to demonstrate why anyone else should take any of the scenarios/predictions seriously.

Richard S Courtney
November 2, 2011 1:27 am

Leif:
You have made several posts since my post at November 1, 2011 at 4:05 pm which was a response to your bluster at November 1, 2011 at 11:12 am but you have not replied to my post.
Therefore, in accordance with my post to which you have not replied, I assume your bluster is a clear admission by you that you know you are wrong.
Richard

Frank
November 2, 2011 3:34 am

Frank said:

Willis,
I’d say this only shows that their objective functions are not suitable.

Willis said:

Unless you can show where they are wrong instead of just asserting that they must be wrong, I’m going with their conclusions.

Okay, here’s what I understand of it:
For generating the “measurements” / “truth cases”, they take 3 parameters h, kp and kg, plus a grid of porosity and permeability values, and feed them into the ECLIPSE simulator, getting a monthly set of 3 production rates during 3 years (36 sets in total, the “history set”) and a yearly set of the 3 production rates during 7 years (7 sets in total, the “future set”). For the “with error” measurements, they only modify the grid porosity and permeability values to within 1% error and run the same simulator.
Then, they use a genetic algorithm to try and find the optimum values for h, kp and kg that match the production rate sets by running the simulation over and over with different trial parameters. Figs 2 and 4 show more or less a likelihood that the tried parameters are correct. The objective functions are used to calculate this likelihood. For Figs. 2a and 4a, they use the history set, for Figs. 2b and 4b, they use the future set.
So far so good. But now comes the part where I don’t agree.
They say the model has no predictive power, but they don’t actually compare any predictions. In this case, what you would want to predict are the production rates. So to see if the model has predictive power, you would expect a comparison of the predicted and the “truth case” production rates. But this is not done! Instead, they argue that, because tuning to the future set gives different parameter values than tuning to the history set, the predictive power must be weak. However, it may well be that a bunch of parameters that’s not entirely correct (like those in Section 3.2), does in fact give a relatively good production rates prediction.
What I see in comparing Fig. 2a and 2b is that the monthly production rates in the history set don’t have enough variability to unequivocally find the best parameters (multiple spikes), whilst in the future set, the production rates do have enough variability (single clean spike). You can see the difference as trying to match the amplitude and frequency of a sine curve to values near x=0 or to values along its entire up-and-down domain. In other words, the measurement data doesn’t “stretch” the parameter space enough. Similarly, in Fig. 4, you only see that the history set with error definitely doesn’t have enough data, and the future set only marginally has.
So the logic of saying that a model has predictive powers only if you get the same matched parameters when you tune to past and future sets, is basically wrong. Take the extreme case of having very little past data and lots and lots of future data — the matched parameters will almost always be different.
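To make the sine analogy concrete (illustrative Python; the numbers are made up): near x = 0, a*sin(f*x) is approximately a*f*x, so every (amplitude, frequency) pair with the same product fits the “history” about equally well, and only data spanning a full oscillation separates them.

# Illustrative only: several parameter sets match the 'history' data
# near x=0 almost equally well; only the truth case also matches the
# 'future' data spanning full oscillations.
import math

def model(a, f, xs):
    return [a * math.sin(f * x) for x in xs]

def misfit(a, f, xs, data):
    return sum((m - d) ** 2 for m, d in zip(model(a, f, xs), data))

truth_a, truth_f = 2.0, 1.0
history_xs = [i * 0.01 for i in range(10)]   # 'history': data near x=0 only
future_xs = [i * 0.5 for i in range(20)]     # 'future': data over full cycles
history = model(truth_a, truth_f, history_xs)
future = model(truth_a, truth_f, future_xs)

for a, f in [(2.0, 1.0), (1.0, 2.0), (4.0, 0.5)]:   # same product a*f
    print(a, f, misfit(a, f, history_xs, history),
          misfit(a, f, future_xs, future))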

November 2, 2011 6:03 am

Frank writes : “So the logic of saying that a model has predictive powers only if you get the same matched parameters when you tune to past and future sets, is basically wrong.”
I believe Willis is right. Carter et al. say: “We conclude that for this model you can only obtain a good prediction from the truth case, and that good matches from the history matching phase have no predictive value.”
If I’m understanding their paper correctly, they are saying that there is only one model that has predictive power, and that is the truth model (i.e. the one that has the correct parameters, which by definition won’t be changing), and that all the other models that have different parameters, even though they match the history, fail to have predictive power.
It’s true that they aren’t specific about what they mean by saying they have no predictive power. I guess you’d like to argue that when they say “no” they really mean varying degrees of “some”. Taken at their word, though, no means no.

November 2, 2011 6:05 am

Richard S Courtney says:
November 2, 2011 at 1:27 am
Therefore, in accordance with my post to which you have not replied, I assume your bluster is a clear admission by you that you know you are wrong.
One more time: your post did not contain a question or anything for me to reasonably respond to. It just stated your opinion. Perhaps you could be specific and say again what is troubling you?

Frank K.
November 2, 2011 7:09 am

Leif – Given all of the questions/comments this thread has elicited, do you think your colleague at Stanford, Mark Jacobson, would be up for writing a short article for WUWT on the state-of-the-art in GCM modeling? I for one would welcome his expertise and insight into this topic area. As someone who has worked in modern industrial CFD for over 20 years, I would love to learn more about GCMs beyond the material in the textbooks. As you can tell, I am particularly interested in the entire topic of numerical stability and its relationship to the discretization of the underlying differential equations, which is extremely important for any time-dependent numerical model (you can’t just say a 5 minute time step seems to work OK without some rigorous justification).

Frank
November 2, 2011 8:05 am

TimTheToolMan said:

I believe Willis is right. When Carter et al say “We conclude that for this model you can only obtain a good prediction from the truth case, and that good matches from the history matching phase have no predictive value.”

I get the idea of what they were getting at. Basically, a lack of peak in Figs. 2b and 4b at parameter values where there *is* a peak in Figs. 2a and 4a means that that parameter set (tuned by the history set) actually has a large objective function delta_f, and with that, a large difference between the predicted and “measured” values — or bad predictive capacity.
However, they don’t specifically calculate the future objective function at *precisely* the predicted parameter values. They just generate another objective function map which *interpolates* values near the predicted parameter values that happened to be visited by the genetic algorithm, possibly missing a sharp peak at precisely the predicted parameter values.
Moreover, the fact that there may be larger peaks in the future objective function map actually also clouds the issue. So yes, there may be parameter values which happen to give a higher peak than the peak at the predicted values, but that doesn’t mean the prediction is bad. It just means that with *that particular* future data set (a mere 21 data points in our case), there are other solutions for the parameters that happen to give a higher peak.
Coming back to the size of the future set; when they say that in the perturbed case, “the spike at h ~ 10 has the wrong values for kp and kg” while in the unperturbed case, they do find the correct values, doesn’t that mean that they just don’t have enough data in the future set rather than that the model lacks predictive power?

Vince Causey
November 2, 2011 8:19 am

Leif,
If I understand you correctly, you are saying that these parameterizations amount to no more than making observations about things like albedo, then plugging those values into the models.
You are saying that this is not the same as what the article means. If I could give an example of what that kind of parameterization is, it would be as if they put the model through thousands of runs, each time varying the albedo until the model output more and more closely resembles observed temperature data. A bit like training a neural network. Is that about right?

November 2, 2011 9:49 am

Frank K. says:
November 2, 2011 at 7:09 am
do you think your colleague at Stanford, Mark Jacobson, would be up for writing a short article for WUWT on the state-of-the-art in GCM modeling?
I’ll try…
Vince Causey says:
November 2, 2011 at 8:19 am
A bit like training a neural network. Is that about right?
As far as I know, no, that is not about right. It would be right if the models were truly ‘tuned’, but they are not.

Vince Causey
November 2, 2011 1:31 pm

Leif,
Ok, I understand it now.

November 2, 2011 2:27 pm

Frank : “It just means that with *that particular* future data set (a mere 21 data points in our case), there are other solutions for the parameters that happen to give a higher peak.”
But that’s the point. With incorrect parameters you just don’t know what you’re predicting. Suggesting that with a different future data set, the results would be different and somehow “better” misses the point.
And to top it all off, we’re only considering the no-error versions of the model. There are no correct predictions to be made when there are slight errors introduced (i.e. +/- 1% in one aspect).
So bringing this back to GCMs, they are certainly flawed and therefore can’t predict the future no matter how well they are tuned. This result is especially important when considering interpolating results to regional climates. It’s little wonder it’s not (often) done.

Richard S Courtney
November 3, 2011 5:10 am

Leif Svalgaard:
It is clear that you are being deliberately obtuse when (at November 2, 2011 at 6:05 am ) you write:
“Richard S Courtney says:
November 2, 2011 at 1:27 am
“Therefore, in accordance with my post to which you have not replied, I assume your bluster is a clear admission by you that you know you are wrong.”
One more time: your post did not contain a question or anything for me to reasonably respond to. It just stated your opinion. Perhaps you could be specific and say again what is troubling you?”
But I wrote at November 1, 2011 at 4:05 pm:
“To save others finding it, I copy my point here (below) and perhaps you can
(a) try to address it
(b) explain what you think is “muddled” in it
and
(c) explain what you think is not factual but is merely “opinion” in it .”
And my fundamental point was;
“No model (of any kind) should be assumed to have more predictive skill than it has been demonstrated to possess.”
That point is not “opinion”: it is fact.
Or do you want to explain what degree of trust can be justified – and how it can be justified – for the predictions of a model that has no demonstrated predictive skill?
And I said;
“Assuming undemonstrated model predictive skill is pseudoscience of precisely the same type as astrology. But no climate model has existed for 30, 50 or 100 years and, therefore, it is not possible for any of them to have demonstrated predictive skill over such times.”
Those are two more facts. Or do you want to try to dispute them?
Your bluster does not hide your evasion of this issue.
Richard

November 3, 2011 9:35 am

Richard S Courtney says:
November 3, 2011 at 5:10 am
It is clear that you are being deliberately obtuse
I’ll disregard this silly accusation and ascribe it to your lack of emotional stability.
“No model (of any kind) should be assumed to have more predictive skill than it has been demonstrated to possess.”
The issue was that models are supposed to be ‘tuned’ to agree with observations and thus will always agree [eventually with a lag]. The assumption is that we [by working hard on this] can get better and better models and that they have predictive skill until proven otherwise. Every time a prediction fails we learn something new and can improve the models. This is how science [of any stripe] operates.

November 3, 2011 9:49 am

Richard S Courtney says:
November 3, 2011 at 5:10 am
Or do you want to try to dispute them.
It is perhaps somewhat presumptuous to think that your opinion deserves a response…

Gary Swift
November 3, 2011 10:38 am

Willis said:
“In no other field of scientific endeavor is every finding surrounded by predictions that this “could” or “might” or “possibly” or “perhaps” will lead to something”
You got that right. I’ve been saying something similar for years. Even when they aren’t making predictions, they are still guilty of drawing conclusions that are not justified by the strength of the evidence. Every finding leads to a conclusion. The problem lies with interpretations of the past as well as the future. For instance, when you read about someone analyzing a single core sample, and then making broad statements about global conditions.
In my favorite field, cosmology, the experts aren’t afraid to say things like “the evidence is inconclusive” or “there are several popular theories”. Another difference between a field like that and climate study is that it seems like astronomers prefer to have peers check their work and validate their findings before they release any statements or publish anything.

November 3, 2011 10:52 am

Gary Swift says:
November 3, 2011 at 10:38 am
the experts aren’t afraid to say things like “the evidence is inconclusive” or “there are several popular theories”
It is interesting that there are no global climate models [that I know of] written by, run by, or otherwise pushed by skeptics.

Ged
November 3, 2011 11:55 am

@Leif Svalgaard,
It seems to me that the practices you are describing, and claiming are different from tuning, are defined by you the same way tuning has already been defined; ergo, what you say isn’t different. Maybe things have changed considerably in the most recent years such that tuning isn’t used; but here are some references about GCMs and tuning – that is, matching parameters to observations over successive model runs to minimize differences (exactly what this paper in the head post is all about).
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-1-3.html
“8.1.3 How Are Models Constructed? ”
“Some of these parameters can be measured, at least in principle, while others cannot. It is therefore common to adjust parameter values (possibly chosen from some prior distribution) in order to optimise model simulation of particular variables or to improve global heat balance. This process is often known as ‘tuning’.”
http://iopscience.iop.org/1748-9326/3/1/014001/fulltext
” A tuning experiment is carried out with the Community Atmosphere Model version 3, where the top-of-the-atmosphere radiative balance is tuned to agree with global satellite estimates from ERBE and CERES, respectively, to investigate if the climate sensitivity of the model is dependent upon which of the datasets is used.”
http://www.nature.com/nature/journal/v430/n7001/full/430737a.html
“Although it is often true that (in John von Neumann’s words) “the justification of such a mathematical construct is solely and precisely that it is expected to work”, practical data from instruments must be used to set limits on the physically realistic range for these parameters.”
http://webcache.googleusercontent.com/search?q=cache:0zkMM0VjhIMJ:www.mcs.anl.gov/uploads/cels/papers/P819.ps.Z+&cd=10&hl=en&ct=clnk&gl=us Argonne National Laboratory
“Sensitivity Analysis and Parameter Tuning of a Sea-Ice Model”
“We also illustrate the effectiveness of using these sensitivity derivatives with an optimization algorithm to tune the parameters to maximize the agreement between simulated results and observational data.”
http://www.jamstec.go.jp/frsgc/research/d3/jules/philtrans_final.pdf
“The seasonally-averaged differences between the control run and the target of NCEP seasonal surface air temperature fields is shown at the top. Below, the ensemble mean shows a marked improvement after 5 parameters were tuned. Precipitation fields were also used as a target in this experiment, but tuning was unable to significantly improve the model fields (not shown), thus demonstrating that there were substantial structural problems and motivating further model development.”
http://www.iac.ethz.ch/groups/schaer/research/reg_modeling_and_scenarios/clim_model_calibration
“The tuning of climate models in order to match observed climatologies is a common but often concealed technique. Even in physically based global and regional climate models, some degree of model tuning is usually necessary as model parameters are often poorly confined.”
http://www.ipcc.ch/ipccreports/tar/wg1/371.htm models being tuned to other, more complex, models’ “observations”.
“Working Group I: The Scientific Basis”
“Having selected the value of F2x and T2x appropriate to a specific AOGCM, the simple model tuning process consists of matching the AOGCM net heat flux across the ocean surface by adjusting the simple model ocean parameters following Raper et al. (2001a), using the CMIP2 results analysed in Raper et al. (2001b).”
http://www.climatescience.gov/Library/sap/sap3-2/final-report/sap3-2-final-report-appB.pdf More on models being tuned to emulate other models; aka models as “observational data”.
“For the TAR, these parameters were tuned so that MAGICC was able to emulate results from a range of Atmosphere-Ocean General Circulation Models (AOGCMs) (Cubasch and Meehl, 2001; Raper et al., 2001).”
– From the book “Climate change: an integrated perspective” by Willem Jozef Meine Martens and Jan Rotmans, pg 79-80
“In an effort to ensure that GCMs are as faithful as possible to the observations of the ocean and atmosphere, a process of calibration is required. As the models evolve, they are continually scrutinised in the light of observations to determine if significant physical processes are missing, and to refine those that are present. The latter may involve a reworking of the existing physical parameterisations or simply a new choice for the adjustable parameters that inevitably are part of these parameterisations. Changing the latter is also called ‘tuning’.”
http://unfccc.int/resource/brazil/climate.html
“Parameters for tuning a simple climate model (plus aerosol forcing)”
http://climateaudit.org/2007/12/01/tuning-gcms/ Analysis of Kiehl et al.
——————————————
And there are many more!
So Leif, I will need you to explain to me how the tuning presented in all these sources differs from the tuning under discussion in the paper here, or even as defined by you. I frankly see no descriptive difference whatsoever; it’s all the exact same discussion and methodology as far as I am reading -everywhere-.

November 3, 2011 12:10 pm

Ged says:
November 3, 2011 at 11:55 am
It seems to me that the claims you are making and saying are different from tuning, are defined by you the same as what’s already been defined, ergo what you say isn’t different.
All your examples are general babble, but I have yet to see a single explicit example of a single parameter being changed to rectify a difference between model output and observations. There is a difference between a better empirical determination of a parameter and the parameter being changed because of the model being wrong. Please provide one.

Gary Swift
November 3, 2011 1:39 pm

“Leif Svalgaard says:
November 3, 2011 at 10:52 am
Gary Swift says:
November 3, 2011 at 10:38 am
the experts aren’t afraid to say things like “the evidence is inconclusive” or “there are several popular theories”
It is interesting that there are no global climate models [that I know of] written by, run by, or otherwise pushed by skeptics.”
That is a straw man argument. Nice try, but FAIL. I am not going to bother chasing my own tail by responding to your illogical proposal that only someone who has their own climate model is allowed to have an informed opinion.
What I said about climate science is true, and it has nothing to do with models. I did not mention them.
As for my background, how do you know that I do not have adequate knowledge for an informed opinion? Climate scientists aren’t the only professionals who deal with temperature, pressure, humidity, etc. on a daily basis. Yes, believe it or not, industry does have to deal with physics once in a while. Who woulda thunk it? OMG!

Eric Anderson
November 3, 2011 3:29 pm

Leif: “The assumption is that we [by working hard on this] can get better and better models and that they have predictive skill until proven otherwise. Every time a prediction fails we learn something new and can improve the models. This is how science [of any stripe] operates.”
The models should be treated as having “predictive skill until proven otherwise”? Let’s see . . . By this logic any Tom, Dick or Harry can put together a model and we must treat it as though it has predictive value until proven otherwise. Then when it fails after — oh, let’s see, how long do they claim we must wait, was it 17 years, or 30 years? — then Tom, Dick and Harry can make a few tweaks to the model and we must again *assume* that the “improved” model has predictive skill until another decade or two have passed, at which point they tweak it again . . .
Sorry, but you are not describing objective science [of any stripe]. You are describing an unhealthy blind belief in something that has never been demonstrated to have meaningful predictive skill. Maybe you run your science that way, but I think most of us will stick to the perfectly logical approach that unless, and until, Tom, Dick and Harry can demonstrate that their model has predictive value, there is no reason to think that it does, and absolutely no obligation to pretend that it does in obeisance to some broad notion of the advancement of science. And there is definitely no valid reason to pass laws and spend money based on a model that has not demonstrated meaningful predictive skill. That’s not science; it is insanity.

November 3, 2011 3:53 pm

Leif : “There is a difference between a better empirical determination of a parameter and the parameter being changed because of the model being wrong. Please provide one.”
Is there? What is the difference Leif?

November 3, 2011 4:10 pm

Willis Eschenbach says:
November 3, 2011 at 12:37 pm
In general the “particular variables” start with the historical 20th century global mean surface air temperature. Parameter values are adjusted to optimize the simulation of historical values. This is called “tuning”, Leif.
Provide a cite of a specific parameter that was tuned in that way.

November 3, 2011 4:42 pm

TimTheToolMan says:
November 3, 2011 at 3:53 pm
Leif : “There is a difference between a better empirical determination of a parameter and the parameter being changed because of the model being wrong. Please provide one.”
Is there? What is the difference Leif?

An example: the Charnock relation. Here is how it is measured and parameterized [calibrated]:
http://powwow.risoe.dk/publ/Lange-POW'WOW-WorkshopPorto2007-WindWaveInteraction.pdf
The value used in atmospheric modeling for the parameter varies from 0.00001 [smooth sea] to 5 [tall building ] depending on the surface. Now it could be that the value also depends on the grid size of the model [or the time step or other model characteristic], so that a different set of values would work better. This might be explored by tuning the model using different values for the parameters [rather than the non-model empirical determination] to get a better fit to the observations. That is the difference between calibration and tuning.
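To put that in concrete terms (a sketch only; the measurements below are invented and the fit deliberately simplistic): calibration fits the parameter to direct measurements of the process itself, with no climate model in the loop.

# Illustrative only: 'calibration' in the sense above. The Charnock relation
# z0 = alpha * u_star**2 / g links sea-surface roughness z0 to friction
# velocity u_star; alpha is fitted to direct field measurements.
g = 9.81  # gravitational acceleration, m/s^2

def fit_charnock_alpha(measurements):
    """Least-squares fit of alpha to (u_star, z0) field measurements."""
    num = sum((u ** 2 / g) * z0 for u, z0 in measurements)
    den = sum((u ** 2 / g) ** 2 for u, _ in measurements)
    return num / den

# hypothetical open-sea measurements: (u_star in m/s, z0 in m)
data = [(0.2, 5.0e-5), (0.3, 1.1e-4), (0.4, 1.9e-4), (0.5, 3.1e-4)]
print(fit_charnock_alpha(data))   # ~0.012 for these invented numbers

# 'Tuning', by contrast, would adjust alpha again inside the full climate
# model to improve the fit of the model's output to observed climate.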

November 3, 2011 4:45 pm

Eric Anderson says:
November 3, 2011 at 3:29 pm
That’s not science; it is insanity.
The insanity is that people vote for politicians that exploit the ignorance of the people. Which one did you vote for?

November 3, 2011 5:05 pm

Willis Eschenbach says:
November 3, 2011 at 4:55 pm
It has no physical basis. The months have been selected to match historical observations.
The assumption that melt accumulates in the summer, when the insolation is highest, is a sound physical basis, and it makes for a better match… Or perhaps you disagree with that.
Their model doesn’t balance regarding radiation. Two choices. Fix the underlying problem, or just tweak a parameter. They chose to tweak a parameter.
Which parameter?
I think you consistently confuse calibration with tuning.

November 3, 2011 5:32 pm

Leif writes : “Now it could be that the value also depends on the grid size of the model [or the time step or other model characteristic], so that a different set of values would work better.”
Or it could be that the underlying physics doesn’t actually represent the reality sufficiently well, and that the parameter which has the role of fudging for averaging changes in surface structure over a large area also fudges for a slightly misunderstood physical process, because who would have thought that algae would have had a large effect on ocean roughness.

November 3, 2011 5:34 pm

Leif, in my previous reply my “Fictitious” tag was lost surrounding the comment on algae… in case you think I’m being factual with that statement.

November 3, 2011 5:48 pm

TimTheToolMan says:
November 3, 2011 at 5:32 pm
Or it could be that the underlying physics doesn’t actually represent the reality sufficiently well and that the parameter which has the role of fudging for averaging changes
Part of building a good model is to get the physics as correct as possible. This is hard and several tries or experimentation is needed [perhaps ongoing] to get a better representation. Sometimes you discover that you are missing a process, which you then have to add. There is a learning process here. Getting the physics right is not ‘fudging’, nor is it ‘tuning’, it is called ‘research’.

November 3, 2011 6:04 pm

Willis Eschenbach says:
November 3, 2011 at 4:55 pm
the pond melt can only accumulate in June and July … this is a tuned routine, Leif. It has no physical basis. The months have been selected to match historical observations.
Here is the physical basis for the routine:
http://en.wikipedia.org/wiki/File:InsolationTopOfAtmosphere.png
The code is a crude approximation to the function given in the Figure. I wonder if you would also have said that the routine had no physical basis if the code was actually calculating the function and using that as a coefficient [suitably scaled].
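A minimal sketch of that alternative (illustrative Python using the standard daily-insolation formula with a circular-orbit approximation; this is not the Model E code):

# Illustrative only: instead of hard-coding June/July as the melt-pond
# months, compute daily-mean top-of-atmosphere insolation and use it,
# suitably scaled, as the melt coefficient.
import math

S0 = 1361.0  # solar constant, W/m^2

def toa_daily_insolation(lat_deg, day_of_year):
    """Daily-mean TOA insolation [W/m^2]; circular orbit, no atmosphere."""
    phi = math.radians(lat_deg)
    # solar declination, simple sinusoidal approximation
    dec = math.radians(23.44) * math.sin(2 * math.pi * (day_of_year - 80) / 365)
    x = -math.tan(phi) * math.tan(dec)
    if x >= 1.0:                  # polar night: sun never rises
        return 0.0
    h0 = math.pi if x <= -1.0 else math.acos(x)   # sunrise hour angle
    return (S0 / math.pi) * (h0 * math.sin(phi) * math.sin(dec)
                             + math.cos(phi) * math.cos(dec) * math.sin(h0))

# Melt coefficient at 75N, scaled so it peaks at 1: it is largest around
# the June solstice and zero in polar night, so 'June and July' emerge
# from the physics instead of being written in by hand.
peak = max(toa_daily_insolation(75.0, d) for d in range(365))
melt_coeff = [toa_daily_insolation(75.0, d) / peak for d in range(365)]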

November 3, 2011 6:13 pm

Willis Eschenbach says:
November 3, 2011 at 5:57 pm
The new parameter they just added, the “only at certain times” parameter for the ice melt ponds.
See my comment on that.
Leif, we’ve provided a stack of references to how climate models are calibrated.
As long as they are calibrated based on physics [like the melt periods] I have no problem. The calibrations can be good or bad. It is research to make them better.
if the model physics are so good, why do different models use very different forcings, and different physical representations of the ocean and atmosphere
Because this is a hard problem. Where we [might] differ is that I think the problem can be solved, eventually. I was just at a conference http://sdo3.lws-sdo-workshops.org/ where we were discussing how to model the solar atmosphere and interior, and could only marvel at the enormous progress we have made in the last ten years. I fully expect progress in climate modeling too.

November 3, 2011 6:18 pm

Leif writes “Sometimes you discover that you are missing a process, which you then have to add. There is a learning process here. Getting the physics right is not ‘fudging’, nor is it ‘tuning’, it is called ‘research’.”
But meanwhile many aspects of climate models are known to be deficient and many others are no doubt not-yet-known to be deficient but are. This paper tells us that none of those models represent any form of reality with their predictions.
The upshot of trying to use models to validate the “CO2 dunnit” theory of GW is FAIL based on guaranteed incorrect results and circular reasoning.

November 3, 2011 6:25 pm

Willis Eschenbach says:
November 3, 2011 at 5:57 pm
No, they picked the months of June and July, not when the insolation is highest.
You are saying that the insolation at the two polar areas where there is a lot of ice is not highest in June and July [or December/January for the other hemisphere – also in the code]? – oh, well, how does one react to such a claim? Perhaps Wikipedia? http://en.wikipedia.org/wiki/Insolation

November 3, 2011 6:31 pm

TimTheToolMan says:
November 3, 2011 at 6:18 pm
But meanwhile many aspects of climate models are known to be deficient and many others are no doubt not-yet-known to be deficient but are. This paper tells us that none of those models represent any form of reality with their predictions.
What paper? The one Willis referred to does not demonstrate that climate models are defective. That models have defects is a property of all models. The important thing is whether there is research aiming at improving the models. As I remarked [but nobody responded to], I have not seen any models by skeptics. Perhaps they think that they can’t compete [not enough money, knowledge, or motivation]…

November 3, 2011 6:47 pm

Leif writes “The important thing is whether there is research aiming at improving the models.”
That is apparently a widely held misunderstanding. It is certainly generally held that as we improve the models they give us better results, but that simply is not true. The models can never represent reality when it comes to predicting the future. That is what this paper is showing.
All the models can and are doing is showing that we can represent historic temperatures with some level of accuracy based on an incomplete set of tuned physics based algorithms.
I’m quite certain you believe that a tweak closer to reality in one area of a model leaves that model a little better off. It doesn’t.
Take another look at the paper and think very carefully about the implication of introducing an error of 1% in the model and its subsequent inability to predict anything in the future no matter how much tuning is applied and how well it “predicts” the past.

November 3, 2011 7:48 pm

TimTheToolMan says:
November 3, 2011 at 6:47 pm
That is apparently a widely held misunderstanding. It is certainly generally held that as we improve the models, they give us better results but that simply is not true. The models can never represent reality when it comes to predicting the future.
We have models that predict stellar evolution. We are quite sure what the luminosity of the Sun will be a billion years from now. We are quite sure that the oceans of the Earth will boil away some 2-3 billion years from now, etc. To say that something ‘is simply not true’ is ‘simply’ unfounded. Models can predict the future, even the very distant future.

November 3, 2011 7:50 pm

Leif says “You are saying that the insolation at the two polar areas where there is a lot of ice is not highest in June and July [or December/January for the other hemisphere – also in the code]?”
Well, the Wiki says this: “Insolation is a measure of solar radiation energy received on a given surface area in a given time.”
And that seems pretty reasonable to me. So you are agreeing that, as far as the model goes, cloud cover is irrelevant, and that if cloud cover were to drop (or increase) as a result of all sorts of unexpected (and incidentally wrong) interactions in the rest of the model, then it’s safe to ignore that fact when looking at the ice ponds.

November 3, 2011 7:55 pm

Leif writes “We are quite sure that the oceans of the Earth will boil away some 2-3 billion years from now”
Sorry, was that a Tuesday, 2-3 billion years from now? Or a Wednesday? Of course there are some things we can predict. I can predict I’ll be dead then too… but that is a complete strawman argument and you know it.

November 3, 2011 8:07 pm

TimTheToolMan says:
November 3, 2011 at 7:50 pm
And that seems pretty reasonable to me. So you are agreeing that, as far as the model goes, cloud cover is irrelevant
No, I’m not arguing that cloud cover or CO2 or land-use or whatever is irrelevant. I’m saying that if you want to parameterize ice melt then making that higher when insolation is higher is perfectly valid physics. Whether that parameter itself is reasonable was not the issue.
TimTheToolMan says:
November 3, 2011 at 7:55 pm
Of course there are some things we can predict. I can predict I’ll be dead then too…but that is a complete strawman argument and you know it.
Actually, it was not a strawman. I really mean this. We probably can never predict whether it will rain in Petaluma on a given hour 100 years from now, but the climate is easier to predict [I’ll predict that July 2112 will be warmer than January 2112] and the far future destiny easier still. Your statement that models can never predict anything is [as you say] ‘simply not true’.

November 3, 2011 8:47 pm

Leif predicts “[I’ll predict that July 2112 will be warmer than January 2112]”
I’m assuming you’re doing that from your personal “I live in the Northern hemisphere” perspective and not a global temperature anomaly perspective. Strawman much? The point is that there is no additional energy involved in the CO2 argument, only modelled increased temperature gradients. Increased energy doesn’t require a model to see the answer; we have perfectly good laws of thermodynamics for that…
“and the far future destiny easier still.”
Well this is where we disagree. And you disagree with the paper presented at the start. C’est la vie.
“Your statement that models can never predict anything is [as you say] ‘simply not true’.”
I’m assuming you’re extrapolating my statement to all models and not GCMs. Strawman much?

u.k.(us)
November 3, 2011 8:48 pm

Leif Svalgaard says:
November 3, 2011 at 6:13 pm
“I was just at a conference http://sdo3.lws-sdo-workshops.org/ where we were discussing how to model the solar atmosphere and interior and could only marvel at the enormous progress we have made the last ten years. I fully expect progress in climate modeling too.”
======
With all due respect, Leif.
What will you model, and what data will you include?
The dearth of data would seem to be a problem.
What is the expected output of the model?
Are we not still collecting data to set a “baseline”?
I guess we have to start somewhere, but I’ll bet you wish you had 10,000 years of the kind of data you are collecting now.
Keep us informed, please.

Brian H
November 3, 2011 8:51 pm

Legatus says:
October 31, 2011 at 4:59 pm

[Many] times, the economists are simply telling the government types what they want to hear.

And here’s (another) kicker: What model can incorporate the effects of taking the model’s output seriously? There’s a kind of corrosive feedback loop in there.

November 3, 2011 9:20 pm

TimTheToolMan says:
November 3, 2011 at 8:47 pm
I’m assuming you’re doing that from your personal “I live in the Northern hemisphere” perspective and not a global temperature anomaly perspective
I thought that was clear from mentioning Petaluma [in California]
we have perfectly good laws of thermodynamics for that…
The radiative properties of CO2 and H2O are also perfectly well known.
Well this is where we disagree. And you disagree with the paper presented at the start. C’est la vie.
The starting paper does not do [as far as I can see] what the climate models do: solve the differential equations that describe the problem, and so the two cannot be compared.
I’m assuming you’re extrapolating my statement to all models and not GCMs.
You did not qualify your statement. And you have not demonstrated that GCMs are different from all other models.
u.k.(us) says:
November 3, 2011 at 8:48 pm
Assuming you mean about the Sun.
What will you model, and what data will you include?
We’ll model the behavior of the solar cycle, the sunspots, the coronal mass ejections, solar flares, everything we can observe.
The dearth of data would seem to be a problem.
We collect 1000 gigabytes of data every day, so we have the opposite problem.
What is the expected output of the model?
How the solar cycle will evolve. Will the next cycle be large or small? When will a given sunspot flare and send bad stuff our way? The amount of solar radiation, …
Are we not still collecting data to set a “baseline”?
No, we are observing solar features in unprecedented detail and cadence, and seeing brand new things, ‘wonderful things’ [Carter, 1923: http://en.wikipedia.org/wiki/Howard_Carter ]
I guess we have to start somewhere, but I’ll bet you wish you had 10,000 years of the kind of data you are collecting now.
Of course, but that would not be 10,000 times better.

Richard S Courtney
November 4, 2011 1:10 am

Leif Svalgaard:
I appreciate your posts addressed to me at November 3, 2011 at 9:35 am and November 3, 2011 at 9:49 am.
They prove that the reason for your specious debate with Willis in this thread is that you think your opinions are facts and, therefore, you think demonstrable facts presented by others are merely opinions.
And you attempt to justify your opinions by evasion and bombast, which seem convincing in your own mind. But I write to inform you that they discredit you in the eyes of impartial observers.
Richard

Rational Debate
November 4, 2011 2:42 am

re: Leif Svalgaard says: November 3, 2011 at 9:35 am and November 3, 2011 at 4:45 pm

“No model (of any kind) should be assumed to have more predictive skill than it has been demonstrated to possess.”

The issue was that models are supposed to be ‘tuned’ to agree with observations and thus will always agree [eventually with a lag]. The assumption is that we [by working hard on this] can get better and better models and that they have predictive skill until proven otherwise. Every time a prediction fails we learn something new and can improve the models. This is how science [of any stripe] operates.

Talk about putting the cart before the horse. The statement is fine, with the massive exception of “and that they have predictive skill until proven otherwise.” That is quite clearly not how science works, unless perhaps it’s in the world of “post-normal science.” I have to think that this was a typo on Leif’s part. The assumption must be that the model does not have predictive skill until proven otherwise.

Eric Anderson says: November 3, 2011 at 3:29 pm
That’s not science; it is insanity.

The insanity is that people vote for politicians that exploit the ignorance of the people. Which one did you vote for?

Talk about switching the subject to avoid addressing the issue.
As an aside – sure, it’s insane to vote for politicians that exploit the ignorance of the people – and when the only candidates fit into this category, then which do you vote for, Leif? Or when the candidate who doesn’t appear to exploit has other gross failings that make them an even worse choice – then which do you vote for?
Voting and politicians are irrelevant to the point Eric was making – which is that it makes no sense to assume a model has predictive ability before it has been soundly proven to actually make accurate predictions. To assume a model has predictive power without proof when its use is associated with anything important just begs for a huge waste of money at best and serious failures at worst.
Spend a few trillion dollars on AGW. And let’s do away with airplane test flights and just send up new airliners for their maiden flight with several hundred passengers aboard. Or send up astronauts in a rocket without ever having done test launches of that type of rocket. Why bother testing scramjet engines in unmanned drones? Just put ’em in a jet with a live pilot and navigator. Don’t bother doing any tests on that new bridge design, just christen it “Galloping Gertie II.” After all, the models say they’ll all work just fine.

Rational Debate
November 4, 2011 3:03 am

re: Leif Svalgaard says: November 3, 2011 at 5:05 pm

“Willis Eschenbach says: November 3, 2011 at 4:55 pm
It has no physical basis. The months have been selected to match historical observations.”
The assumption that melt accumulates in the summer when the insolation is highest is a sound physical basis, and that makes for a better match… Or perhaps you disagree with that.

If melt accumulation has been recorded during other months, then it’s clearly not a sound physical basis. It may make the model appear to match better by artificially eliminating melt the model would predict without the bounding parameters (e.g., the two month limit) added, but the only way it’s a sound physical basis is if it matches reality. It’s a tweak, not an accurate representation of the physics involved in melt accumulation. Is it a ‘good enough’ tweak? Well, I suppose that all depends on just how often melt occurs during other months, the effect on the final output results, and just how accurate one requires it to be. But it’s still a manually imposed tweak that isn’t an accurate programming representation of the actual physics involved in melt accumulation.

Rational Debate
November 4, 2011 3:10 am

re: Leif Svalgaard says: November 3, 2011 at 7:48 pm

We have models that predict stellar evolution. We are quite sure what the luminosity of the Sun will be a billion years from now. We are quite sure that the oceans of the Earth will boil away some 2-3 billion years from now, etc. To say that something ‘is simply not true’ is ‘simply’ unfounded. Models can predict the future, even the very distant future.

I have no doubt you believe that – but unless you have a time machine to actually go and see the final results, there is simply no way to know what the solar luminosity will be or when the Earth’s oceans will boil off. A very large asteroid impact could certainly completely change the future of the Earth, and I’m sure there are things that could happen that could similarly change the progression of the solar luminosity. Unless your model can account for anything and everything that might occur, they most certainly can’t predict the future – especially the very distant future.

Rational Debate
November 4, 2011 3:19 am

re: Leif Svalgaard says: November 3, 2011 at 8:07 pm

No, I’m not arguing that cloud cover or CO2 or land-use or whatever is irrelevant. I’m saying that if you want to parameterize ice melt then making that higher when insolation is higher is perfectly valid physics. Whether that parameter itself is reasonable was not the issue.

“No, I’m not arguing that cloud cover or CO2 or land-use or whatever is irrelevant. I’m saying that if you want to parameterize ice melt then making that higher when insolation is higher is perfectly valid incomplete physics. Whether that parameter itself is reasonable was not the issue.”
There, fixed that for you.

Gary Swift
November 4, 2011 7:12 am

To Leif:
There is a difference between a simulation and a predictive model. For example, it is possible to create a computer-generated person. You can give it a human appearance, give it the ability to respond to questions, even simulate expression of emotions. That is a far cry from being a predictive model, though. You could keep improving the quality of your simulation forever and still not have a predictive tool.

You could add complex algorithms for biological chemistry, neurology, physiology, statistics from sociology, etc., but you would still not be able to predict what I will do an hour from now, or how a group of people will act. The reason for this is that the number of variables is large, and a large portion of them are parameterized in the simulation. The computer-generated person might look a lot like the real thing, and you can make it as complex as you want, but it is still not able to predict anything. You might be able to predict how a body would bounce off of an object with fair accuracy, but you can’t predict whether a person will decide to jump out of a window.

Similarly, you can predict the weather tomorrow or the track of a tropical storm with fair accuracy, but despite the ability of a climate model to look a lot like the real climate, it is still unable to predict, because there are too many variables that must be parameterized and other variables that are simply not included. I ask, for example: can any climate model explain the start or end of an ice age? The answer is no. That seems to indicate that there are fundamental deficiencies in our knowledge about how the climate works. Climate models cannot predict because they are not predictive models. They are simulations, and are only able to create a fair representation of something that looks a lot like the real climate, but that doesn’t make them predictive.

November 4, 2011 7:33 am

Richard S Courtney says:
November 4, 2011 at 1:10 am
But I write to inform you that they discredit you in the eyes of impartial observers.
At least I am civil and provide comments that explain my position rather than just parroting others.
Rational Debate says:
November 4, 2011 at 2:42 am
The assumption must be that the model does not have predictive skill until proven otherwise.
People that build the models make a great effort to do the best job possible. Skill can be measured as a ‘skill score’. One definition involves the mean squared error, MSE = sum([prediction(i)-observation(i)]^2)/N. Then the skill score is SS = 1 – MSE(prediction)/MSE(climatology). A perfect prediction has an SS of 1.0. A prediction that is no better than just averaged climatology has an SS of 0, while a prediction that is worse than climatology has a negative SS. The absolute value of the SS will usually decrease as the time interval covered by the prediction increases.

You can learn more about skill scoring here: http://www.mmm.ucar.edu/events/ISP/presentations/Semazzi_endusers.pdf or here: http://www.swpc.noaa.gov/forecast_verification/Assets/Bibliography/i1520-0493-117-03-0572.pdf This [comparing many models] may also be of interest: http://www.arm.gov/science/highlights/R00175/pdf especially the conclusion that “The mean model does so well mostly because errors in individual models are distributed on both sides of the observations. […] it’s difficult to imagine accurate projections of future change coming from a model that does a poor job in simulating the present climate – and now there’s a way to measure the success of a model at doing the latter job.”
Thus skill can be measured, and the current ensemble of models does have positive skill [“The mean model does so well”]. It is also to be expected that when people solve the equations governing the evolution of the climate there will be some skill. This expectation is normally fulfilled in all other scientific endeavors. That is what science is: the ability to predict something from ‘laws’ that have been deduced from observations.
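[A minimal sketch of the skill-score arithmetic just described; the forecast, climatology, and observation numbers in the example are invented purely for illustration.]

```python
def mse(pred, obs):
    """Mean squared error: sum([prediction(i) - observation(i)]^2) / N."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(pred, clim, obs):
    """SS = 1 - MSE(prediction)/MSE(climatology).
    1.0 is a perfect prediction; 0 is no better than climatology;
    negative means worse than climatology."""
    return 1.0 - mse(pred, obs) / mse(clim, obs)

# Invented numbers: a forecast that tracks the observations somewhat
# better than a flat climatological average scores between 0 and 1.
obs  = [0.2, 0.5, 0.9, 1.4]
clim = [0.7, 0.7, 0.7, 0.7]
pred = [0.3, 0.45, 1.0, 1.2]
print(skill_score(pred, clim, obs))  # about 0.92 for these numbers
```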
Eric Anderson says: November 3, 2011 at 3:29 pm
Talk about switching the subject to avoid addressing the issue.
I think you brought up insanity…
As an aside – sure, it’s insane to vote for politicians that exploit the ignorance of the people – and when the only candidates fit into this category, then which do you vote for
A people gets the government it deserves.
it makes no sense to assume a model has predictive ability before it has been soundly proven to actually make accurate predictions.
See comment to Richard. Some skill is better than no skill. If I can predict the stock market with 51% accuracy, I’ll come out ahead in the long run.
Rational Debate says:
November 4, 2011 at 3:03 am
but the only way it’s a sound physical basis is if it matches reality. […] But it’s still a manually imposed tweak that isn’t an accurate programming representation of the actual physics involved in melt accumulation.
It is an approximation [compared to using the actual insolation] that is based on sound physics. The model builders seem to have concluded that it is good enough for their purpose. Perhaps in a later version they will use a better approximation, especially if it turns out that this parameter is important [which I doubt].
Rational Debate says:
November 4, 2011 at 3:10 am
but unless you have a time machine to actually go and see the final results, there is simply no way to know what the solar luminosity will be
Prediction is not about ‘knowing’, but about having a skill score that is high enough to take seriously, e.g. 0.99999999999999999 or even 0.9. We have great confidence in our prediction of the Sun’s luminosity because we observe millions of stars at all ages and at all phases of their evolution and can directly verify that they behave as predicted.
A very large asteroid impact could certainly completely change the future of the Earth
Again, prediction is about being good enough, not perfect. Your argument is of the kind that it does not make sense to lay up supplies for the coming winter, because we may all be wiped out by an asteroid anyway. Not exactly a ‘Rational Debate’.
Rational Debate says:
November 4, 2011 at 3:19 am
when insolation is higher is perfectly valid incomplete physics.
All models have incomplete physics to some degree. As long as the physics is valid, an approximation – even if incomplete (to be improved in the next version, perhaps) – is better than none. You might want to compare your ‘fixed’ version “perfectly valid incomplete physics” to Willis’s “no physical basis”, and note that we have made progress via our discussion. I take that as a positive sign.

November 4, 2011 7:39 am

Gary Swift says:
November 4, 2011 at 7:12 am
Climate models cannot predict because they are not predictive models. They are simulations, and are only able to create a fair representation of something that looks a lot like the real climate, but that doesn’t make them predictive.
If a simulation can show the correct behavior of a system when starting from given initial conditions [this is what we use simulations for: to examine the system under varying conditions], then when those conditions are taken as the actual conditions as of now, the simulation [if it is any good at all] becomes a prediction.

Frank
November 4, 2011 7:55 am

Willis, Tim,
I’m still not convinced that the paper proves what you think, i.e. that even if you have the physics completely right, you can’t predict the future because of a bit of noise.
First off, they predict ahead 7 years based on a 3-year history set. That’s just unreasonable and unrealistic extrapolation, especially under noise conditions. There is no information whatsoever about how well the history-tuned model does in predicting, e.g. 2 years into the future (a much more reasonable time span).
The only thing they really do is show that when they tune their model to the future set, they get a different set of optimum parameters than when they tune their model to the history set. They never show what is in fact the error made by the prediction, i.e. how much (in absolute terms) do the future production rates differ from the predicted ones? The values in the Figures are normalized, so it might be quite a small difference, one a reservoir engineer might laugh at.
See, there is just too much weight on looking at peak positions. But the peak positions are produced by tuning to a particular data set, so every time they look at a different future set (5, 6, 7 years into the future), they will have a different set of peaks. The position of the peaks (i.e. the parameter values) doesn’t say much about how far the prediction is really off. If my history-tuned model is off by 1% while the optimally tuned future model is off by 0.1% (and therefore has a much higher peak), does that mean that my history-tuned model can’t predict the future? I think not.
I want to see graphs of production rates over time using the history-tuned model and the “truth” case before I’ll say their model can’t predict the future. I’ll probably end up saying something like “Given 3 years of historic data and 1% of noise, the prediction window is X years”.
I’m not saying that models can solve everything if you tune and tweak long enough, but I am just not convinced that this paper proves the opposite.
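[Frank’s proposed diagnostic (“given 3 years of historic data and 1% of noise, the prediction window is X years”) can be illustrated with a toy experiment. Everything below is hypothetical: a two-compartment ‘truth’ curve and a deliberately-too-simple one-compartment model stand in for the paper’s reservoir setup.]

```python
import math

def truth(t):
    """Hypothetical two-compartment decline curve (a stand-in, not the paper's model)."""
    return 70.0 * math.exp(-0.2 * t) + 30.0 * math.exp(-0.8 * t)

def model(t, rate0, decay):
    """The slightly 'wrong' one-compartment model we calibrate to history."""
    return rate0 * math.exp(-decay * t)

# Three years of monthly 'history' data (noise omitted for clarity)
history = [(m / 12.0, truth(m / 12.0)) for m in range(36)]

# Brute-force least-squares calibration over a parameter grid
best = min(
    (sum((model(t, r, d) - y) ** 2 for t, y in history), r, d)
    for r in [80.0 + i for i in range(41)]         # rate0 from 80 to 120
    for d in [0.10 + 0.01 * j for j in range(41)]  # decay from 0.10 to 0.50
)
_, r0, dec = best

# The calibrated model matches history closely, yet the forecast error
# grows with lead time because the model structure is slightly wrong.
for lead in (1, 2, 5, 7):
    t = 3.0 + lead
    rel = abs(model(t, r0, dec) - truth(t)) / truth(t)
    print(f"{lead} yr ahead: relative error {rel:.1%}")
```

[The history fit alone does not reveal the absolute size of the forecast error, or how fast it grows past the calibration window; that is exactly the information Frank says is missing from the paper’s normalized figures.]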

Eric Anderson
November 4, 2011 8:27 am

Leif: “The insanity is that people vote for politicians that exploit the ignorance of the people. Which one did you vote for?”
I share your concern with the politicization of the issue and exploitation for ulterior motives. However, that does not address the issue at hand, which was your suggestion that the models should be treated as having predictive value until proven otherwise. That is the insanity I was talking about, because it is exactly backwards from rational science.
Or maybe your comment was meant to say that you think, as a personal preference, that the models have predictive value, but that it would be insane for politicians to pay any attention to the models and to take any actions that might be based on the outputs of the models? Kind of strange, but I guess I could live with that approach. 🙂

Ged
November 4, 2011 8:39 am

@Leif,
Actually, almost all those references specifically mention particular parameters which are tuned, but maybe you missed this one which is even more explicit and potentially what you are looking for. Here we see a very specific equation parameter being changed to match observations after the model has already been made:
http://iopscience.iop.org/1748-9326/3/1/014001/fulltext
” A tuning experiment is carried out with the Community Atmosphere Model version 3, where the top-of-the-atmosphere radiative balance is tuned to agree with global satellite estimates from ERBE and CERES, respectively, to investigate if the climate sensitivity of the model is dependent upon which of the datasets is used.”
That’s explicit, that’s a value in the GCM’s equations, and that fits every definition I can see you giving and arguing about. All the other references did too, if you read them, but at least this one doesn’t quite carry as much of the vague language as most scientific papers/publications.
If I am still misunderstanding you, or what tuning is (as everyone is using the term) from my extensive reading, then please elucidate your position more and correct my error.

November 4, 2011 9:18 am

Eric Anderson says:
November 4, 2011 at 8:27 am
which was your suggestion that the models should be treated as having predictive value until proven otherwise. That is the insanity I was talking about, because it is exactly backwards from rational science.
In all science, when we think we know the physics [and there is no ‘new’ physics in climate] and we can write down the equations that describe it, we fully expect that when solving those equations we get results that agree with observations. This is not ‘backwards’ and not ‘insanity’. If we get results that do not agree, it is a sign that the models must be upgraded or improved, and that is ongoing.
Ged says:
November 4, 2011 at 8:39 am
” A tuning experiment is carried out with the Community Atmosphere Model version 3, where the top-of-the-atmosphere radiative balance is tuned to agree with global satellite estimates from ERBE and CERES, respectively, to investigate if the climate sensitivity of the model is dependent upon which of the datasets is used.”
That’s explicit, that’s a value in the GCM’s equations

There is no such ‘value’ in the equations. Show me if you think otherwise. What they mean is that they are trying to figure out which of the two data sets to compare with. The radiative balance is a computed value [output of the model – not a value in the equations]. The experiment is to try and see if the model can be changed such as to agree with one or the other estimates. As far as I know the experiment came out negative. If you know otherwise let me know, i.e. which data set was chosen and which actual changes were made.

November 4, 2011 10:23 am

Ged says:
November 4, 2011 at 8:39 am
” A tuning experiment is carried out with the Community Atmosphere Model […]” That’s explicit, that’s a value in the GCM’s equations
What they actually did was:
“The tuning is done in the atmospheric component of a coupled GCM […] The cloud microphysics of the model, as in any GCM, is highly parameterized and the tuning is carried out through alterations of parameter values in these physics descriptions. There are numerous non-restricted parameters that affect the model cloud properties and thereby the radiative fluxes, and hence there are numerous ways to tune the model to a chosen level of radiative balance. We modify a number of parameters that are commonly used for tuning (Hack et al 2006), including relative humidity thresholds for cloud formation, thresholds for autoconversion of liquid and ice to rain and snow, efficiency of autoconversion in convective and stratiform clouds, efficiency of precipitation evaporation and adjustment timescales associated with convection….
there is presently no way to determine which is more correct. The small magnitude of the difference must not be used as an excuse for not continuing to refine the measurements of the TOA radiative balance or imbalance and definitely not as an excuse for not making further attempts to restrict free parameters like cloud water content in models…”
So, they play around with different combinations [all the time staying within reasonable physical limits] to see if it makes any real difference and find that it does not. This shows that the current calibrations of these parameters are either not too far off or do not have any great impact on the result. Such experiments with those and any other parameterizations must be carried out all the time [it is called research] and should lead to better models with higher skill scores.

November 4, 2011 10:35 am

Ged says:
November 4, 2011 at 8:39 am
” A tuning experiment is carried out with the Community Atmosphere Model […]” That’s explicit, that’s a value in the GCM’s equations
To bring home their conclusion:
“Although this limited study offers no conclusive evidence, it indicates that the CAM is rather robust to tuning changes and that climate sensitivity is not strongly dependent on what level the TOA radiative balance is tuned to.” and “within the realm of reasonable agreement with reality there is no unique way to reach a certain level of TOA radiative balance.”
So ‘tuning’ is not the answer. The way ahead is better empirical determination of the calibration of the various relationships that have been parameterized. That is: don’t play dice with random numbers hoping to find some that fit, but go after the physics and try to understand the processes involved, e.g. try to get a better approximation [actual insolation] of the ice melt function than the simple 2-month step function now employed.

Ged
November 4, 2011 12:23 pm

@Leif,
You make great points, but I still do not see how any of this is different from the definition of tuning as put forth in the head article, and as used in each and every one of those sources. Such as “tuning is carried out through alterations of parameter values in these physics descriptions” and “…and hence there are numerous ways to tune the model to a chosen level of radiative balance.”. That’s the definition of tuning I’m reading here and everywhere, yet somehow you are claiming that isn’t tuning, or isn’t what’s being discussed? What is tuning then by your definition? This is the disconnect I don’t understand in your reasoning, nor does Willis from what I’m inferring from his responses.
About using random values to find a good fit, again we see from just that one paper (as is also used in all the others): “We modify a number of parameters that are commonly used for tuning (Hack et al 2006), including relative humidity thresholds for cloud formation, thresholds for autoconversion of liquid and ice to rain and snow, efficiency of autoconversion in convective and stratiform clouds, efficiency of precipitation evaporation and adjustment timescales associated with convection… there is presently no way to determine which is more correct.”
So, they played with the values, “tuning” them to see if the model would best fit the observational data between either data set, and which tuned parameters would be most effective, yet it came out as a wash, with no way to determine which was more correct. This is in agreement with the head post’s paper about such tuning/calibration not being effective for models. And yet, we have from this paper another reference to a previous paper about tuning. GCMs are tuned, as all my references stated, including state-of-the-art ones, and so fall under the topic of discussion and are not exempt as you wish to make them. The word tuning is used all the same, in the same scientific context, etc.
This is why I cannot fathom your reasoning: it flies in the face of what’s clearly written and described.
The whole idea is that calibrating a model with physics or observation does not give it predictive power; that’s what the original paper seems to be saying. And here I’ve shown you many times that calibration/tuning is used on models, with particular parameters listed. And in the end, the conclusion of that particular paper we’re discussing, when looking at the various tuning parameters, was that “there is presently no way to determine which is more correct.”
So, unless completely new methods and data I’m not familiar with has come up, all my reading comes to the same conclusion:
1. GCMs are tuned, as they themselves report, as the IPCC reports, as the methodology reports. Using that exact word, “tuned” or “tuning” or “calibration”. All uses are seen describing the same method: changing parameters iteratively to best match the behavior of the nascent model to observations. The absolute range for the parameters being tuned comes from physics, but their values are subject to revision against observation.
2. This tuning is the tuning under discussion by this post and the paper in the head post.
3. Tuning is not seen to increase the predictive value of models, in agreement with the head post’s paper, which similarly stated that calibrated, tuned models had no predictive power for their investigation.
This does not mean there is no predictive power to weather models, as there is at least a “best guess” effect. But none the less, the conclusions of the head post paper ARE applicable to the GCMs, and you have shown no evidence to the contrary, and I and others have provided ample references using and describing ‘tuning’, which stand contrary to your arguments.
Again, how is tuning of GCMs listed in the references different from tuning as listed by the head post paper? I see absolutely no scientific difference in the definitions or applications or conclusions. Again, maybe I’m missing it, or maybe I’m outdated, but so far you still have not shown the difference between tuning listed in my references and others, and tuning listed in the head post, nor the difference in the conclusions or the exclusivity of GCMs from the conclusion.

November 4, 2011 12:59 pm

Ged says:
November 4, 2011 at 12:23 pm
You make great points, but I still do not see how any of this is different from the definition of tuning as put forth in the head article, and as used in each and every one of those sources.
The crucial difference is one of physical understanding. Let us take the case of melt ponds. Suppose that altering the rule to ‘if month = October or month = January or month = July then …’ actually did improve the fit for historical data; then that would be ‘tuning’ to match the model to the observations, but it would IMO not presage improved predictability for the future, because the tuning did not have a reasonable physical basis. If, on the other hand, the actual insolation was computed from astronomical data and that improved the fit, then I would say that a better physical calibration improved the model. I hope that this example clarifies the difference. Of course, for rabid contrarians, nothing I say will clarify anything. I trust that you do not belong to that category.
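[The contrast Leif draws can be put directly in code. This is an illustrative sketch with invented names and numbers; the October/January/July set mirrors his deliberately unphysical example, and the insolation proxy and 520 scale are crude stand-ins for a real astronomical computation.]

```python
import math

def insolation_proxy(lat_deg, day_of_year):
    """Very crude stand-in for computed TOA insolation during polar day
    (cf. the fuller daily-mean formula sketched earlier in the thread)."""
    dec = math.radians(23.44) * math.sin(2 * math.pi * (day_of_year - 80) / 365.0)
    return max(0.0, 1361.0 * math.sin(math.radians(lat_deg)) * math.sin(dec))

def pond_growth_tuned(month):
    """'Tuning': the months are whatever happened to fit the record;
    no physical reason attaches to the winning set."""
    return 1.0 if month in (10, 1, 7) else 0.0

def pond_growth_calibrated(lat_deg, day_of_year, scale=520.0):
    """'Calibration': the coefficient follows the computed insolation, so any
    improvement in fit carries a physical reason with it (scale is illustrative)."""
    return insolation_proxy(lat_deg, day_of_year) / scale
```

[On past data the two rules might score equally well; the difference is whether the rule carries a physical reason that can be expected to keep holding outside the calibration period.]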

Ged
November 4, 2011 1:15 pm

@Leif,
Please also notice I’m not making any claims about the veracity of the results from the head paper, nor trying to interpret their meaning.
I should also make sure we’re on the same page about what “predictive power” means, which from my understanding is that “as a value changes in the future in a way we expect it will, we predict other values will change in an x or y way in response”. So while “tuning” changes values to match observations of the past, “prediction” changes expected values and reads out what the “observations” should look like in the future.
There is no doubt tuning is done for GCMs, as far as I see and read; again, I make this point so we can get off that hang-up and onto the real discussion about tuning versus predictive power.
Can we make models which do have some level of predictive power, without using tuning/calibration (which come from physics and observation in the first place)? Maybe; a complete understanding of the physics would mean no tuning would ever be required – the model would work for dependent values the same forwards and backwards, as long as the independent values changed as expected. We don’t have a complete understanding of the physics, so how do we gauge “predictive power” and assign a quantitative value to a model’s ability to predict the future? I can think of ways, but it’s all a crapshoot.
If tuning/calibration does not increase the predictive power of a model, what does that imply, and how do we move forward to make useful, predictive models? And what degree of “predictive power” do we shoot for? What degree do we say is enough? What level do we need before we decide we can place enough confidence in a predicted series to justify modifying, if we so choose, the expected change in the independent value, in order to avoid the changes in the corresponding dependent values that the model predicts will follow if we do nothing?
I don’t have the answers to those questions, and have yet to see convincing answers to them. The paper in the head post challenges the usefulness of models, or at least of tuning/calibration based on physics as a method for making “predictive” models. But quantifying this issue is beyond me at the moment. And this affects all fields, not just GCMs.
Look at the semiconductor industry: models there are never right. Though they are used as a starting place, they never predict a chip’s performance accurately; and surely we know the physics there better than the climate’s, and surely businesses have more invested in the matter, and a need for more accuracy, than anyone with GCMs. Many months are spent revising chips before they are brought to market, and some still fail spectacularly despite model predictions (e.g. AMD’s Bulldozer chip). In all fields, I have never seen a model with much predictive power, only ones that could guide experimentation as a hypothesis that “might be”, and thus is testable.
So, I can’t tell you the ultimate conclusions to all this, other than what we can plainly observe everywhere, and what this head paper points out clearly. You can make arguments about its results and the interpretations to take from them, and how those might apply to GCMs; and I would be very interested in that.

Ged
November 4, 2011 1:23 pm

@Leif,
“If, on the other hand, the actual insolation was computed from astronomical data and that improved the fit, then I would say that a better physical calibration improved the model.”
I think here lies the heart of what you’re trying to get at, and differentiate? From what I’m reading, of the head post and all the sources, what you say there -is tuning-. That is what tuning is, right there. You took observed astronomical data, calculated new features to improve the fit to other observed historical melt pond data. Now, if astronomical data was never included in the first place, then it wouldn’t be tuning, I would think, as you added a new feature/constraint. But if astronomical data was already in there, and you just updated the calculated value to increase fit with the melt data, that would clearly be tuning by every definition I’m seeing. That is my understanding.
If the model correctly had the physics and equations for melt ponds, it would need -no- astronomical data or calculations from it; -no- outside observations of any sort would be needed: the pure math itself would deterministically decide the melt pond values, and that would work when going backwards (independent values fed in from historical data where melt pond data is dependent), and then would work going forwards. Having to modify “insolation” from astronomical data is tuning, and the melt pond data is no longer a self-contained constraint.
And indeed, as put forth in my references and elsewhere, that is what GCMs do, clearly, and thus it is fair to say they are tuned in the same regard as “tuning” is discussed and used in the head post paper.
That is my logic and understanding at the moment, and I -could always be wrong/off- from what you’re getting at. I do not yet see the distinction you are attempting to draw, it’s all the same as far as I am seeing.

November 4, 2011 6:21 pm

Ged says:
November 4, 2011 at 1:23 pm
That is what tuning is, right there. You took observed astronomical data, calculated new features to improve the fit to other observed historical melt pond data.
No, that is not tuning. That is calibration, because I use a physical process and reason for the change. If I just varied the parameter at random until I found the best fit, that would be tuning [of – in fact – the parameter]. For the calibration I expect improved predictive capability; for the tuning I cannot have such expectation [one can always hope, but that is not the same]. Put differently: curve fitting [tuning] does not improve predictions. Calibration with better physics does.

November 4, 2011 9:20 pm

Ged says:
November 4, 2011 at 1:23 pm
That is what tuning is, right there.
To put it as succinctly as possible:
Mere tuning does not improve the predictive power. Improving the parameters [or adding more] based on better measurements and physics-based calibrations will improve the predictive power. Same with improvements in both spatial and temporal resolution.

Richard S Courtney
November 5, 2011 2:30 am

At November 4, 2011 at 7:33 am you say:
“Richard S Courtney says:
November 4, 2011 at 1:10 am
“But I write to inform you that they discredit you in the eyes of impartial observers.”
“At least I am civil and provide comments that explain my position rather than just parroting others.”
Say what!?
Even in this case you imply that I do not “explain my position” and that I “parrot others”, which are both falsehoods. That is not “civil”, and your attempts to “explain” your “position” have been a total failure in this thread.
Richard

November 5, 2011 4:34 am

Leif writes “Improving the parameters [or adding more] based on better measurements and physics-based calibrations will improve the predictive power.”
In a classic three-body problem, if you know precisely the velocities and masses of the bodies then you can predict their positions in the future. This is a perfect model with perfect data. However, if you don’t know two of the bodies’ velocities precisely then you can no longer predict their positions in the future at all (except perhaps in the very near future, where you have an approximation).
Now imagine you learn precisely one of those unknown bodies’ velocities, leaving only one unknown. Are you any better off? No. You still can’t predict the future for the bodies.
Such is it with GCMs. Improvements are meaningless as far as their ability to predict the future goes. They may be able to predict it better in the very short term, but essentially they’re as useless for prediction as they ever were.
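[Tim’s three-body point is the classic sensitivity-to-initial-conditions result, and it is easy to demonstrate numerically. The sketch below uses arbitrary toy masses, positions, and units; how fast the two runs diverge depends on the configuration chosen.]

```python
import math

def accelerations(pos, masses, G=1.0, eps=1e-9):
    """Pairwise Newtonian gravity in 2-D (eps softens close encounters)."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy + eps) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def integrate(pos, vel, masses, dt=1e-3, steps=20000):
    """Velocity-Verlet integration; returns the final positions."""
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    acc = accelerations(pos, masses)
    for _ in range(steps):
        for i in range(len(pos)):
            pos[i][0] += vel[i][0] * dt + 0.5 * acc[i][0] * dt * dt
            pos[i][1] += vel[i][1] * dt + 0.5 * acc[i][1] * dt * dt
        new_acc = accelerations(pos, masses)
        for i in range(len(pos)):
            vel[i][0] += 0.5 * (acc[i][0] + new_acc[i][0]) * dt
            vel[i][1] += 0.5 * (acc[i][1] + new_acc[i][1]) * dt
        acc = new_acc
    return pos

masses = [1.0, 1.0, 1.0]
p0 = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]]
v0 = [[0.0, -0.5], [0.0, 0.5], [0.3, 0.0]]

a = integrate(p0, v0, masses)
v1 = [v[:] for v in v0]
v1[2][0] += 1e-6  # one velocity component off by a part in a million
b = integrate(p0, v1, masses)

sep = math.hypot(a[2][0] - b[2][0], a[2][1] - b[2][1])
print(f"Body 3 final positions differ by {sep:.3g}")  # often far above 1e-6
```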

November 5, 2011 5:23 am

Richard S Courtney says:
November 5, 2011 at 2:30 am
Even in this case you imply that I do not “explain my position” and “parrot others” which are both falsehoods. That is not “civil” and you attempts to “explain” your “position” have been a total failure in this thread.
I’ll grant that there are people who simply don’t get it.

November 5, 2011 6:01 am

Leif writes “I’ll grant that there are people who simply don’t get it.”
You believe that despite the imperfect processes within the models, because they are constrained by physics they are still modelling what happens when a perturbation occurs.
I believe that those imperfectly modelled processes invalidate the modelled effect of that perturbation, and so the models cannot predict what its effect is. The failure of the models to capture those processes happens on many levels. The model in this article demonstrates that very well.
Some of us understand this a lot better than you think, Leif. We don’t have dogs in this race, though, so we can perhaps be a little more objective about it.
Incidentally, you appear to be rejecting this notion on irrational grounds. Do you believe the model in this article isn’t based on well understood physics? I mean, it appears to be based on a commercial oil exploration and production model, after all. I expect it’s actually considerably better than most if not all of the GCMs.

November 5, 2011 6:26 am

Actually, I quite like my three-body analogy and can push it a little further.
You believe that the forcing of CO2 is akin to giving the 3-body system a bit of a push in one direction as a whole. You (may) accept you can’t model where any of the 3 bodies are precisely if you don’t know the facts about them precisely, but you can model where they have moved laterally in a general sense.
So something like… body “a” is constrained to be within such and such a diameter and x kms north of the starting point, because we knew how hard we pushed the system as a whole.
This is wrong, however, because CO2 is not a true forcing. If the sun had increased its output by a few watts then yes, the above would be a valid analogy, but the change is “internal”, and therefore it’s not that the system gets to move laterally; rather, its bodies are pushed onto different courses.
This, I believe, is a fundamental misunderstanding by people who believe we may not be able to measure weather but climate change from CO2 is perfectly achievable.
There is scope for atmospheric/oceanic processes to change such that they transport energy more quickly upwards to decrease the temperature gradient. There are lots of possibilities. But there is no guaranteed effect. Not even due to the relatively well known radiative properties of the GHGs because there are so many other important processes that will respond to the changes and will naturally try to minimise the temperature gradients.

November 5, 2011 6:29 am

Oh, and above I said “measure the weather”, but what I meant was “model/predict the weather”.

November 5, 2011 6:32 am

TimTheToolMan says:
November 5, 2011 at 6:01 am
Do you believe the model in this article isn’t based on well understood physics? I mean, it appears to be based on a commercial oil exploration and production model, after all. I expect it’s actually considerably better than most if not all of the GCMs.
‘appears’, ‘expect’…
The model in the paper is just curve fitting, and as such does not have the predictive power that solution of the governing differential equations does. This should not be difficult to understand, and most people here seem to have grasped it [with a few notable exceptions]. If you expect that curve fitting is considerably better than solving the equations then there is little hope.

November 5, 2011 6:38 am

TimTheToolMan says:
November 5, 2011 at 6:26 am
This is wrong however because CO2 is not a true forcing.
The models do not assume that. They simply calculate the effects of the system as a whole. You give yourself away when you offer “due to the relatively well known radiative properties of the GHGs”. Those properties are not ‘relatively well known’. They are ‘extremely well known’ as they are measured precisely in the laboratory.

November 5, 2011 6:52 am

Leif writes “The model in the paper is just curve fitting, and as such does not have the predictive power…”
Well, maybe. They don’t mention what model they’ve used specifically, or whether it is, as you suggest, a very simple application of the three parameters mentioned. However, it’s clear it’s more complicated than a simple equation, because they describe the geological feature modelled and it’s non-trivial.
“The models do not assume that.”
They don’t need to, and I never said they did. That’s missing or ignoring the point entirely.
I guess we’re going to disagree here too. The radiative properties in our atmosphere are only relatively well known. CO2 isn’t evenly distributed throughout the atmosphere and there are attempts to measure its distribution. It varies by season too. Water vapour likewise varies considerably. There is a world of difference between what is measured in the lab and what happens in the real world.

November 5, 2011 7:42 am

TimTheToolMan says:
November 5, 2011 at 6:52 am
They don’t mention what model they’ve used specifically
And you don’t consider that an important issue? There are many models in use in the petroleum industry [e.g. TDAS] and they work very well. In general, there are two problems: 1) to simulate the flow [and that is done numerically by standard simulation tools and is not the issue] and 2) [the harder of the two] to specify the shape and properties of the reservoirs. The paper in question is concerned with this second problem and the errors introduced by errors in the properties of the reservoir. This is very different from the climate models.
The radiative properties in our atmosphere are only relatively well known. CO2 isn’t evenly distributed throughout the atmosphere and there are attempts to measure its distribution. It varies by season too. Water vapour likewise varies considerably.
The climate models are directly concerned with the variability of the distribution and the seasons [and those are not ‘radiative’ properties]. You might find it illuminating to actually read about what the atmospheric models do: http://www.leif.org/EOS/CAM3-Climate-Model.pdf especially sections 4.8 and 4.9
There is a world of difference between what is measured in the lab and what happens in the real world.
And you assume that climate modelers are absolute morons that do not know this. The models are concerned with applying the lab data to the real world in the best way possible.

Rational Debate
November 5, 2011 1:56 pm

Leif, back to the melt accumulation for a moment. The modelers introduced the two-month limit in order to force the model to reflect what is actually seen more closely than it did before. If they were adjusting based on physics, why didn’t they adjust the algorithms, differential equations, etc. associated with calculating insolation instead, i.e., the actual model representations of the physics involved?

November 5, 2011 3:00 pm

Rational Debate says:
November 5, 2011 at 1:56 pm
why didn’t they adjust the algorithms, differential equations etc., associated with calculating insolation instead? e.g., the actual model representations of the physics involved?
Because you parameterize to the extent you think it is important. The melt ponds are not a vital element in the model so an approximation is [or was judged to be] good enough. I do not see any evidence that that better approximation [than the previous 6-month function] was introduced after a specific tuning run was made just to test the fit for the melt ponds. If you can find such evidence I would be glad to look at it. The dynamic parts of the model [the differential equations] would not be affected anyway by a better approximation of the melt pond function.

Rational Debate
November 5, 2011 4:19 pm

re: Leif Svalgaard says: November 4, 2011 at 7:33 am
Working thru the various pieces:

Rational Debate says: November 4, 2011 at 2:42 am
The assumption must be that the model does not have predictive skill until proven otherwise.

People that build the models make a great effort to do the best job possible.

Yes, I believe that to be true generally and in the majority of cases. I also believe that they are humans, and so subject to all the foibles of human nature, including those made all the time by the most well-meaning, hard-working, good people. Things such as accidentally overlooking or incorporating errors, confirmation bias, and all of the unintentional problems that can arise even with the most diligent and well-intentioned person.
I have to think that the very complexity of climate models works against the modelers in this regard – the more complicated and longer the code, the more incredibly difficult it becomes to root out errors, make any major modifications (rather than just trying to tweak what you’ve got), and so on.
Then there are those who are just not as diligent, or not as dedicated, or just downright lazy. Worse, human nature, unfortunately, also includes what I believe to be a far smaller number who are either openly or under cover working to suit an agenda (other than that of conducting good science), or for whatever reasons actively attempt to delude others or even themselves in the pursuit of money, power, prestige, promotions, tenure, etc., etc. Heck, even just to cover up some previous error they made because their ego can’t tolerate it coming to light.

Skill can be measured as a ‘skill score’. One definition involves the mean squared error, MSE = sum([prediction(i)-observation(i)]^2)/N. Then the skill score is SS = 1 – MSE(prediction)/MSE(climatology). A perfect prediction has an SS of 1.0. A prediction that is no better than just averaged climatology has an SS of 0, while a prediction that is worse than climatology has a negative SS. The absolute value of the SS will usually decrease as the time interval covered by the prediction increases…

Leif, thank you very much for the description and the links. It will probably take me a little while to get to and work thru them, but I am quite interested. In the meantime, if you don’t mind, I’m wondering what “just averaged climatology” means? That if you’re projecting 5 years, the model output is within the bounds, high and low, actually observed during that time period, where a perfect prediction would be to exactly match observations throughout the entire time period? Or?
You had also said that the current models have a positive predictive ability – I’m assuming this is all done by using some chosen historical starting point for initial conditions, and then seeing how well the model predicts climate from there, such that the predictions can be compared to actual data, and that this is what is used to determine the model’s skill score, right? Also, the current model positive predictive ability – that’s over what time frame?
I’m thrown on this whole subject over the last IPCC model projections of temperature increases anywhere between, what, something like 2 to 6 degrees? When we’ve only seen approx a degree in the 20th century… e.g., how can a positive skill score be found when the model range output is 1) so very wide and 2) two to six times higher than the dataset available to calculate the skill score?

… It is also to be expected that when people solve the equations governing the evolution of the climate there will be some skill. This expectation is normally fulfilled in all other scientific endeavors. That is what science is: the ability to predict something from ‘laws’ that have been deduced from observations.

The majority of experiments aren’t positive – the null hypothesis remains. Generally the null hypothesis is there for very good reason – loads of past research and evidence suggesting that it’s almost certainly correct. The axiom of science is, if at first you don’t succeed, try, try, again. In other words, the expectation isn’t success, it’s failure. Followed by revision, and repeated attempts – or if appropriate (and it often is) based on the experimental results, abandoning the hypothesis as entirely incorrect and unable to be reformulated in any way worth pursuing, and moving on to something else.
The researcher who expects positive results is begging to run afoul of confirmation bias. Hoping your results are positive is unavoidable – we’re humans, and there’s nothing wrong with that. Expecting that your experimental design is sound enough to be conclusive one way or the other wrt the validity of your hypothesis, now that’s reasonable and typical – so long as, when obtaining positive results, you also expect to completely open your entire process up for other people to try their darnedest to poke holes in your experimental design, methods, results, and conclusions. Assuming or expecting positive results is not scientific, and is dangerous in that it radically increases the chances of producing bad results, particularly over time.

As an aside – sure, it’s insane to vote for politicians that exploit the ignorance of the people – and when the only candidates fit into this category, then which do you vote for

A people gets the government it deserves.

Do they? Easy to say, but life is a bit more complicated than that. Plus, it’s a reply which doesn’t answer my question – when the only candidate options you have are both equally badly flawed, which do you vote for? Not to mention the issue of whether each of us individually deserves the government we wind up with.
As to the comment you did make, let’s imagine that space aliens descend on our planet with vastly superior technology and/or firepower. It becomes clear that resistance means three-quarters or more of us will be killed in short order. The result: they take us over and implement a government that we all abhor. Do we then deserve that government?
Closer to home – a warlord with money, automatic weapons, artillery, planes, bombs, maybe even chemical or biological weapons, etc., moves into an absolutely destitute area where the population is narrowly avoiding starvation and has no weapons beyond sticks, self-made bows & arrows & spears. Do they deserve the government they wind up with?

it makes no sense to assume a model has predictive ability before it has been soundly proven to actually make accurate predictions.

See comment to Richard. Some skill is better than no skill. If I can predict the stock market with 51% accuracy, I’ll come out ahead in the long run.

Only if there is an investment vehicle that accurately matches the overall market that’s included in your model. And in following those predictions, you don’t wind up with more than the 1% being eaten up in the commissions involved in purchasing the stocks, and the tax losses each time you sell in order to readjust and follow your predictions. And you can afford to sink that much capital into the market and let it remain there for the duration necessary to recoup that 1%. And there aren’t other investments available that average a greater return in a shorter time frame. And that 51% accuracy is over a short enough time frame to accumulate “enough” for you to consider the time, effort, etc., all worth it such that you’re “ahead” before you die.
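To put rough numbers on that objection, here is a toy sketch (all assumptions mine – the 1% move size, 0.05% per-trade cost, and trade count are arbitrary, not market data):

```python
import random

def final_wealth(accuracy=0.51, trades=10_000, move=0.01, cost=0.0005, seed=1):
    """Each trade the market moves +/-1% and we call the direction correctly
    with probability `accuracy`; a fixed round-trip cost is charged on every
    trade. Returns the final wealth multiple."""
    rng = random.Random(seed)
    wealth = 1.0
    for _ in range(trades):
        win = rng.random() < accuracy
        wealth *= (1 + move) if win else (1 - move)
        wealth *= (1 - cost)  # commissions/taxes on each round trip
    return wealth

print(final_wealth(cost=0.0))     # pure 51% edge: grows, slowly
print(final_wealth(cost=0.0005))  # small frictions swamp the edge
```

The expected edge at 51% accuracy is only about 0.02% of capital per trade, so even tiny frictions flip the sign.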
As to some skill being better than none when it comes to global climate models – that all depends on what is done with the results. Certainly it’s better from the perspective of any scientist or modeler wanting to improve on previous efforts. Also from the standpoint of being that much closer to a model that might be accurate enough to actually use more widely than just in the lab. Some skill, however, is not at all necessarily better than none if it is presented as being more meaningful than it really is, or if it is used by governments to re-order society or implement “remedies” where the cure is worse than the possible disease. 51% accuracy, if used to spend trillions of dollars and lower the standard of living for millions or even billions because it’ll likely warm 2 degrees in 100 years, would do unspeakably grievous damage if during those 100 years there were a period of a few decades with temperatures far lower than present – even if by year 100 it was 2 degrees warmer than the present.
Not to mention that what may appear to be a skill score of 0.5 today, say, could easily turn out to have been a negative score if naturally occurring cycles longer than those included in the model exist (they almost certainly do; we don’t have a long enough or robust enough historical instrumental dataset to know), or if the calculated skill score happened to coincide with the observed data in part by chance rather than because of model accuracy, etc.
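That “longer cycle” worry is easy to demonstrate with a toy example (everything below is made up for illustration): fit a linear trend to a 30-year calibration window that is really the rising phase of a 200-year cycle, and the in-sample skill looks excellent while the out-of-sample skill is strongly negative:

```python
import math

def series(year):
    # a 200-"year" natural cycle with no true trend
    return math.sin(2 * math.pi * year / 200.0)

train = list(range(0, 30))    # calibration window on the rising phase
test = list(range(30, 120))   # the "future"

# ordinary least-squares line fit over the calibration window
xbar = sum(train) / len(train)
ybar = sum(series(t) for t in train) / len(train)
slope = (sum((t - xbar) * (series(t) - ybar) for t in train)
         / sum((t - xbar) ** 2 for t in train))

def skill(window):
    # skill relative to the calibration-window average (the 'climatology')
    sse_fit = sum((ybar + slope * (t - xbar) - series(t)) ** 2 for t in window)
    sse_climo = sum((ybar - series(t)) ** 2 for t in window)
    return 1.0 - sse_fit / sse_climo

print("in-sample skill:    ", round(skill(train), 2))  # close to 1
print("out-of-sample skill:", round(skill(test), 2))   # deeply negative
```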

Rational Debate says: November 4, 2011 at 3:03 am …but the only way it’s a sound physical basis is if it matches reality. […] But it’s still a manually imposed tweak that isn’t an accurate programming representation of the actual physics involved in melt accumulation.

It is an approximation [compared to using the actual insolation] that is based on sound physics. The model builders seem to have concluded that it is good enough for their purpose. Perhaps in a later version they will use a better approximation, especially if it turns out that this parameter is important [which I doubt].

Yes, but it isn’t only about melt, is it? I mean, insolation affects more than just melt – and if its calculation is wrong based on the melt results it produces, then that physics representation is likely propagating errors through the rest of the model in other ways also. Or perhaps insolation isn’t the problem, but other unaccounted-for factors such as soot are – again, which would mean that there are errors propagating through the model. The entire point wasn’t about how significant melt is or isn’t, but rather about how well and why the models – and adjustment methods used on those models – wind up representing reality and if there is any reason to have confidence in their predictive abilities.

Rational Debate says: November 4, 2011 at 3:10 am
but unless you have a time machine to actually go and see the final results, there is simply no way to know what the solar luminosity will be

Prediction is not about ‘knowing’, but about having a skill score that is high enough to take seriously, e.g. 0.99999999999999999 or even 0.9. We have great confidence in our prediction of the Sun’s luminosity because we observe millions of stars at all ages and at all phases of their evolution and can directly verify that they behave as predicted.

At best you can directly verify that over the past x years (25? 50? 100? – I’m sure both the number of stars and the quality of observations have exploded recently), the stars that have been observed closely enough have behaved as you currently expect them to have behaved. That’s nothing relative to billions of years. There’s always that black swan.

A very large asteroid impact could certainly completely change the future of the Earth

Again, prediction is about being good enough, not perfect. Your argument is of the kind that it does not make sense to lay up supplies for the coming winter, because we may all be wiped out by an asteroid anyway. Not exactly a ‘Rational Debate’.

No, that wouldn’t be a ‘rational debate.’ That’s also not a rational argument on your part. We’ve essentially got reams of data every year going back centuries, and generational knowledge throughout the history of the existence of man, that every year winter comes and that to survive it’s necessary to have access to supplies. Significant asteroid strikes that affect many people, on the other hand, are extremely rare (thank gawd Tunguska didn’t happen over Moscow or NYC instead!). Meanwhile, our actual instrumental data & experience with the behavior of Earth’s sun boiling off our atmosphere? Our instrumental data & experience with any significant variation of solar output over billions of years? There is no comparison – it’s not a rational analogy to have put forward as if it in any way represented what I actually said.

Rational Debate
November 5, 2011 6:12 pm

re: Ged says: November 4, 2011 at 1:15 pm

….Look at the semiconductor industry: models there are never right; though they are used as a starting place, they never predict a chip’s performance accurately – and surely we know the physics there better than the climate’s, and surely businesses have more invested in the matter, and a need for more accuracy, than anyone with GCMs. Many months are spent revising chips before they are brought to market, and some still fail spectacularly despite model predictions (e.g., AMD’s Bulldozer chip). In all fields, I have never seen a model with much predictive power, only as something that could guide experimentation as a hypothesis that “might be”, and thus is testable.

As best I know, that’s pretty much true in all fields. Aerospace is another prime example. The physics is far less complex than climate, and we’ve been at it far longer – yet they start with models, then test, revise, test, revise, etc., and even so initial flight tests quite often surprise, with actual performance significantly different from the predicted and wind-tunnel-tested versions, let alone from what was predicted by the computer model alone.
The Higgs boson is having fun with physics models. :0)

Rational Debate
November 5, 2011 6:13 pm

Apologies for the length of my November 5, 2011 at 4:19 pm post – I should have broken it into pieces I guess.

Rational Debate
November 5, 2011 6:21 pm

re: Leif Svalgaard says: November 4, 2011 at 7:33 am

….You might want to compare your ‘fixed’ version ‘perfectly valid incomplete physics’ to Willis’s ‘have no physical basis’, and note that we have made progress via our discussion. I take that as a positive sign.

It’s all relative. A model can easily contain a number of perfectly valid physics components, or even incomplete ones :0) , and still have no physical basis overall. Or what one person is willing to shade grey as ‘perfectly valid incomplete physics’, another, more inclined to black and white, might find sufficiently incomplete to categorize as having “no physical basis.”
Ah, the joys of language and communication (especially with the added difficulty of time constraints and the written word only, sans vocal tone, expressions, and body language). 😉

Leif Svalgaard
November 5, 2011 7:18 pm

Rational Debate says:
November 5, 2011 at 4:19 pm
so subject to all the foibles of human nature
Science is very competitive and errors are ferreted out with glee.
the more complicated and longer the code, the more incredibly difficult it becomes to root out errors, make any major modifications (rather than just trying to tweak what you’ve got), and so on.
We have learned how to do this. It is called being modular and separating the various contributions. If you look at this model, you’ll see how well it is done: http://www.leif.org/EOS/CAM3-Climate-Model.pdf
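As a sketch of what “modular” buys – each contribution lives in its own separately testable function. The component names and toy formulas below are hypothetical illustrations of the structure, not CAM3’s actual code:

```python
# Hypothetical, drastically simplified component structure: each physical
# contribution is its own function, so each can be unit-tested in isolation.

def radiation(state):
    # toy energy-input term (not real radiative physics)
    return {"T": state["T"] + 0.1 * (1.0 - state["albedo"])}

def convection(state):
    # toy mixing term relaxing T toward a reference temperature
    return {"T": state["T"] - 0.05 * (state["T"] - 288.0)}

def step(state, components):
    # apply each contribution in turn; swapping or stubbing one out is trivial
    for component in components:
        state = {**state, **component(state)}
    return state

state = {"T": 288.0, "albedo": 0.3}
for _ in range(10):
    state = step(state, [radiation, convection])
print(round(state["T"], 2))
```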
under cover working to suit an agenda
I know many of the people involved and they just don’t do that. Also, the code is public and comes with a User’s Manual so you can try it out yourself. It would be too damaging to try to cheat with this.
“just averaged climatology” means?
Just the average values over a reference period [say 30 years]. The skill score will always be defined no matter what the climatology is and you just sum up the squares of the deviations from average. The skill score will always be less than 1.
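To make that definition concrete, a minimal sketch assuming the standard mean-squared-error form of the skill score (the numbers are made up for illustration):

```python
def skill_score(forecast, observed, climatology):
    """1 minus (sum of squared forecast errors) over (sum of squared
    deviations of the observations from the climatological average).
    1 = perfect, 0 = no better than climatology, negative = worse."""
    sse_forecast = sum((f - o) ** 2 for f, o in zip(forecast, observed))
    sse_climo = sum((climatology - o) ** 2 for o in observed)
    return 1.0 - sse_forecast / sse_climo

obs = [14.2, 14.5, 14.1, 14.8, 15.0]       # hypothetical temperatures
forecast = [14.3, 14.4, 14.3, 14.6, 14.9]
climo = sum(obs) / len(obs)                 # reference-period average
print(round(skill_score(forecast, obs, climo), 2))  # ~0.81: some skill
```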
what is used to determine the model’s skill score, right? Also, the current models’ positive predictive ability – that’s over what time frame?
There are many models and the errors in some tend to be cancelled by opposite errors in others. There is a literature on that. You can google it as well as I can.
I’m thrown on this whole subject by the last IPCC model projections of temperature increases of anywhere between, what, something like 2 to 6 degrees?
I think the deviations from the record are much smaller. Here is a paper on that: http://www.leif.org/EOS/2007_Hansen_climate.pdf [Hansen is not the only author 🙂 ]
The axiom of science is: if at first you don’t succeed, try, try again. In other words, the expectation isn’t success, it’s failure.
This may be so for discovering something new, but is certainly not the case when applying the laws we already know. For those we expect success, every time. It is really more like engineering: from the known properties of materials and mechanical laws we calculate how thick a beam must be to carry a given load, and we certainly expect that calculation to be correct.
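As a concrete version of that beam calculation – a minimal sketch assuming a simply supported rectangular beam with a central point load; the numbers are illustrative, not taken from any design code:

```python
import math

P = 5_000.0         # central point load, N
L = 4.0             # simply supported span, m
b = 0.10            # beam width, m
sigma_allow = 10e6  # allowable bending stress, Pa (rough order for timber)

M = P * L / 4.0     # maximum bending moment for a central point load
# bending stress sigma = M / S, with section modulus S = b*d^2/6,
# so the required depth is d = sqrt(6*M / (b*sigma_allow))
d = math.sqrt(6.0 * M / (b * sigma_allow))
print(f"required depth ≈ {d * 100:.1f} cm")  # about 17.3 cm
```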
you also expect to completely open your entire process up for other people to try their darnedest to poke holes in your experimental design, methods, results, and conclusions.
Every scientist worth her salt does that.
when the only candidate options you have are both equally badly flawed, which do you vote for?
You take to the streets. Occupy GISS! :-[)
Do we then deserve that government?
Might is right, especially ‘people power’.
moves into an absolutely destitute area where the population is narrowly avoiding starvation and has no weapons beyond sticks, self-made bows & arrows & spears. Do they deserve the government they wind up with?
That is the way of the world.
And that 51% accuracy is over a short enough time frame to accumulate “enough” for you to consider the time, effort, etc., all worth it such that you’re “ahead” before you die.
If you don’t like 51% make it 60%.
because it’ll likely warm 2 degrees in 100 years would do unspeakably grievous damage
I don’t think there will be any damage, as people simply won’t go for it in the long run. And if they did, they deserve what they get. You seem to assume that governments are competent in carrying out grandiose plans over many years. They are not.
what may appear to be a skill score of 0.5 today, say, could easily turn out to have been a negative score
Again, you are assuming that the Governments can pull off those damaging plans. I’m much less sanguine about that.
if its calculation is wrong based on the melt results it produces, then that physics representation is likely propagating errors through the rest of the model in other ways also.
Calculations are not ‘wrong’ [computers can add]. Part of building the model is to get the code right for carrying forward the stuff calculated at earlier steps.
why the models – and adjustment methods used on those models – wind up representing reality and if there is any reason to have confidence in their predictive abilities.
We have some confidence in the models based on how well they represent the past.
At best you can directly verify that over the past x years (25? 50? 100? – I’m sure both the number of stars and the quality of observations have exploded recently), the stars that have been observed closely enough have behaved as you currently expect them to have behaved.
No, there are stars with the same mass and composition as the Sun, but billions of years older. We observe those to behave as predicted.
Our instrumental data & experience with any significant variation of solar output over billions of years?
See just above.


Leif Svalgaard
November 5, 2011 8:35 pm

Rational Debate says:
November 5, 2011 at 6:21 pm
A model can easily contain a number of perfectly valid physics components, or even incomplete ones :0) , and still have no physical basis overall.
Works the other way too: a model with good physical basis overall could still have corners where the physics is shaky or even wrong, but those may not be fatal to the performance.
Rational Debate says:
November 5, 2011 at 6:12 pm
The Higgs boson is having fun with physics models. :0)
Actually the LHC is run by models. The detectors are modeled in great detail so we can interpret the data. Without models that accurately describe the reality of the equipment, the raw data would make no sense at all.
The Higgs is predicted by models of the interactions. Those are fully expected to have great predictive power; if not, the whole thing makes no sense. If the boson is not found, we know that the models [and more importantly the underlying physics] were wrong, and that is important too, so we can look elsewhere. I don’t think anybody would claim that, if the boson is found just as predicted, the discovery is pure coincidence because the models have no predictive power. Actually, from the discussions here I should modify that: there are probably people who would say just that :-[)

TimTheToolMan
November 6, 2011 5:53 am

Leif writes “And you don’t consider that an important issue? There are many models in use in the petroleum industry [e.g. TDAS] and they work very well.”
Of course I consider it important. I’d never actually considered that they might be using a tiny 3-parameter curve-fitting model when they have real models to choose from. If they’re using a real model behind the paper then that supports my point of view. If they’re simply curve fitting then that supports your stated point of view.
That cuts both ways as I’m sure you’d appreciate. If they’re using a real model behind their paper then you’re going to have to change your opinion surely.

Leif Svalgaard
November 6, 2011 7:52 am

TimTheToolMan says:
November 6, 2011 at 5:53 am
That cuts both ways as I’m sure you’d appreciate. If they’re using a real model behind their paper then you’re going to have to change your opinion surely.
The model had two parts: one that was calculating the flow using a standard procedure [that was not tuned] and the other was curve fitting of the shape of the reservoir. The paper was solely concerned with errors in the latter. And the result was that mere curve fitting [random variation of the parameter] does not have predictive power. This is clear without any elaborate considerations. Now, if the parameters were actually measured or otherwise physically determined, the model would have significant power. This is why such models are used [and are useful] in the petroleum industry. The situation carries over to climate the same way: if a parameter is played with and chosen just because it gives a better fit [tuning] I would not expect the model to gain improved predictive power, but if a parameter is calibrated better from physical measurements or theory, I would certainly expect the model to improve. I’m at a loss why that is not obvious, but perhaps I should not be presumptuous about what is obvious.
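The history-matching trap is easy to sketch. The toy response function below is my own construction, not the paper’s reservoir simulator – only the parameter ranges are the ones quoted from the paper – but it shows the same behaviour: many parameter combinations match a short history essentially perfectly, yet their long-range forecasts disagree wildly:

```python
import math
from itertools import product

def q(t, h, kg, kp):
    # hypothetical three-parameter production curve (my construction)
    return h * math.exp(-t / kg) + kp

true_params = (40.0, 150.0, 20.0)
history = [(t, q(t, *true_params)) for t in range(11)]  # 11 "years" of data

tol, matches = 0.05, []
# grid over the paper's quoted ranges: h in (0,60), kg in (100,200), kp in (0,50)
for h, kg, kp in product(range(0, 61, 2), range(100, 201, 5), range(0, 51, 2)):
    sse = sum((q(t, h, kg, kp) - obs) ** 2 for t, obs in history)
    if sse < tol:
        matches.append((h, kg, kp))

forecasts = [q(500, *m) for m in matches]
print(len(matches), "parameter sets match the history")
print("their forecasts at t=500 span",
      round(min(forecasts), 1), "to", round(max(forecasts), 1))
```

Every one of those sets “hindcasts” the calibration data to within the tolerance; nothing in the calibration itself tells you which forecast to believe.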

TimTheToolMan
November 6, 2011 2:36 pm

Leif : “one that was calculating the flow using a standard procedure [that was not tuned]”
Well, immediately I can see a problem with your expectation: one of the parameters was permeability, which is clearly to do with flow rates, and this one was tuned.

Leif Svalgaard
November 6, 2011 3:17 pm

TimTheToolMan says:
November 6, 2011 at 2:36 pm
Well, immediately I can see a problem with your expectation: one of the parameters was permeability, which is clearly to do with flow rates, and this one was tuned.
Of course, that was an input to the calculation and when input parameters are tuned to obtain the best fit, the model loses predictability.

TimTheToolMan
November 6, 2011 3:34 pm

Leif “Of course, that was an input to the calculation and when input parameters are tuned to obtain the best fit, the model loses predictability.”
That was a parameter that was tuned to attempt to find the best value to apply to “average” permeability and hence make flow predictions. This is precisely analogous to the Charnock parameter you quoted earlier that described an average surface roughness to be used.

Spector
November 6, 2011 4:40 pm

I am somehow reminded of a circa 1972 science fiction story, “When HARLIE was One,” by David Gerrold, which was about an artificial human intelligence, who, as I recall from almost 40 years ago, was tasked with building a super computer called G.O.D. which was supposed to find the answer to all the world’s problems. David Gerrold is also famous as the author of the “Trouble with Tribbles” Star Trek episode.

Leif Svalgaard
November 6, 2011 8:03 pm

TimTheToolMan says:
November 6, 2011 at 3:34 pm
That was a parameter that was tuned to attempt to find the best value to apply to “average” permeability and hence make flow predictions. This is precisely analogous to the Charnock parameter you quoted earlier that described an average surface roughness to be used.
The difference is that the Charnock parameter is measured and not willy-nilly fitted [i.e. tuned] to have the overall fit be better. Try to read some of my discussion with Richard to see if you can grok the difference.

TimTheToolMan
November 6, 2011 8:31 pm

Leif “The difference is that the Charnock parameter is measured and not willy-nilly fitted [i.e. tuned] to have the overall fit be better.”
Do you think permeability isn’t a well-known, “measured from rock and sand samples” quantity? The application of that parameter is almost certainly a “last mile” adjustment to bring the “expected from physics” permeability – calculated from the sand and rock types, moisture, and whatever else is used in permeability calculations for the basin being modelled – into line with the reality that is measured. From what I can see, the Charnock parameter is no different and is an approximation based on an imperfect knowledge of features in the area.
I couldn’t see any discussion with Richard on this. I saw some with Rational Debate that might have been relevant. Do you want to give me a posting time or something?

Leif Svalgaard
November 6, 2011 9:38 pm

TimTheToolMan says:
November 6, 2011 at 8:31 pm
From what I can see, the Charnock parameter is no different and is an approximation based on an imperfect knowledge of features in the area.
Yes, there is no difference, except that the Charnock parameter is measured or at least estimated and not determined by tuning, which is varying the parameter at random until the fit with the past is optimal, and then hoping that that [unphysical] fit also predicts the future [which it might not do].
I couldn’t see any discussion with Richard on this. I saw some with Rational Debate that might have been relevant. Do you want to give me a posting time or something?
I meant Rational Debate, of course.

TimTheToolMan
November 6, 2011 10:54 pm

Leif writes “Yes, there is no difference, except that the Charnock parameter is measured or at least estimated and not determined by tuning, which is varying the parameter at random until the fit with the past is optimal”
Who says it’s totally random? They’re certainly not saying there was an unlimited range on the values used. So if their tuning was constrained to be within what might be expected as possible values given the expected theoretical permeability from drill cores and whatever, then you’d agree that this paper does damn your argument?

TimTheToolMan
November 6, 2011 11:29 pm

Having looked again at the paper, they say:
“The ranges that the model parameters were allowed to take are: h ∈ (0, 60), kg ∈ (100, 200) and kp ∈ (0, 50).”
And it looks like they’ve explored all the possibilities, so none but the true values gave them predictive power. And then, once small errors were introduced into the model (by perturbing the permeability), no values were predictive at all.
So whilst it may be arguable that they allowed values far away from reality (I don’t know what realistic values may have been), it also appears to be true that they tested for values close to reality.

Leif Svalgaard
November 7, 2011 6:47 am

TimTheToolMan says:
November 6, 2011 at 10:54 pm
So if their tuning was constrained to be within what might be expected as possible values given the expected theoretical permeability from drill cores and whatever, then you’d agree that this paper does damn your argument?
So whilst it may be arguable that they allowed values far away from reality (I don’t know what realistic values may have been), it also appears to be true that they tested for values close to reality.

To repeat the melt pond example: Allowed values for when melting occurs are the 12 months. If tuning shows that this test ‘if month = October or month = July or month = February then …’ gives the best fit then that does not improve predictability, but if calculation of the insolation is used or even an approximation [if month = June or month = July] then since that is based on sound physics, one could expect improved predictability.
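A minimal sketch of that contrast – the cosine insolation proxy, the 0.9 threshold, and the ‘tuned’ month list below are all illustrative stand-ins:

```python
import math

def insolation_proxy(month):
    # crude annual cycle peaking in June/July (northern hemisphere)
    return max(0.0, math.cos(2 * math.pi * (month - 6.5) / 12))

# physics-based rule: melt when the insolation proxy is near its peak
physical_melt = [m for m in range(1, 13) if insolation_proxy(m) > 0.9]
# tuned rule: whatever month list happened to give the best fit
tuned_melt = sorted([10, 7, 2])

print("physics-based melt months:", physical_melt)  # [6, 7]
print("tuned melt months:        ", tuned_melt)     # [2, 7, 10]
```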
We assume that if all the parameters were set to their true values, then their model should work and give useful predictions. The problem is that we do not know which one of the 7000 ‘predictions’ given is the right one.

TimTheToolMan
November 7, 2011 5:40 pm

Leif writes “We assume that if all the parameters were set to their true values, then their model should work and give useful predictions.”
And we see by experiment that if the parameters aren’t exactly correct, or even if they are exactly correct but there are small errors in the model, then there is no predictive power in the model. This contradicts your fundamental assumption. And it’s not “intuition”, it’s an actual demonstrable result.

Leif Svalgaard
November 7, 2011 6:29 pm

TimTheToolMan says:
November 7, 2011 at 5:40 pm
And we see by experiment that if the parameters aren’t exactly correct, or even if they are exactly correct but there are small errors in the model, then there is no predictive power in the model. This contradicts your fundamental assumption. And it’s not “intuition”, it’s an actual demonstrable result.
I think you are not getting the fundamental point: if the model is tuned by just varying the parameters to produce the best fit, then one cannot expect predictive power; but if the parameters are measured or constrained by physics independent of the model, then the model works. Otherwise the widely used model in question would be useless [and it is not – the petroleum industry makes a lot of money from well-performing models], because we can never get the parameters exactly right. But we can measure them [even with some errors], and with the measured values the model performs great. Same thing with the climate [or any other model for that matter].

TimTheToolMan
November 7, 2011 7:00 pm

Leif writes : “Otherwise the widely used model in question would be useless”
I’m not saying models are useless, they are very useful in their ability to do “what if” scenarios where parameters are tweaked in various ways to give an insight into possible futures. However, they don’t forecast the future, and they’re often simply wrong. They give a good starting point for further real investigations but aren’t the end result in themselves.
In other words, if the model says 3C per doubling of CO2, then that’s an indication that CO2 could warm the atmosphere – and nothing else. It’s certainly not an indication that the atmosphere will warm 3C with a doubling of CO2.
There have been a number of real-life examples in this thread where the model predicts something and reality is different. The semiconductor and aeronautical industries were mentioned.

Leif Svalgaard
November 7, 2011 7:49 pm

TimTheToolMan says:
November 7, 2011 at 7:00 pm
I’m not saying models are useless, they are very useful in their ability to do “what if” scenarios where parameters are tweaked in various ways to give an insight into possible futures.
You are contradicting yourself. The climate models are “what if” scenarios. E.g. “what if” CO2 increases at current rate, “what if” solar radiation changes, “what if” land-use changes. Their predictions are ‘possible futures’. To project a possible future, the model must have predictive power.
And you have still not grasped the fundamental point: if we tweak parameters to get the best fit to the past, predictive powers do not follow. If we update the parameters because we have learned some physics and can represent the physics better, the expectation is improved prediction.

TimTheToolMan
November 8, 2011 4:31 am

Leif writes “Their predictions are ‘possible futures’. To project a possible future, the model must have predictive power.”
No, it doesn’t have to have predictive power at all, because the model results will almost certainly be total rubbish for future prediction. Models can be run and tweaked with known values too; they’re not all about future prediction in the sense GCMs are.
For example, I expect the model that tests a new CPU design will be all about trying to make each transistor run within its spec and working out where propagation delays adversely affect its performance. The future prediction in this case is represented by the time to perform, say, a register store; the model will actually achieve that register store but will model rubbish about the performance.
This is analogous to modelling the climate. You may think all the bits are working correctly (sure, those small errors don’t matter) but the end result is rubbish because, hey, they do.
I can see that you’re going to want to argue the case for “so many” GCMs all getting about the same results, but as Willis has mentioned before, those models have fundamentally different sensitivities. They can’t all be right. And because they all get about the same results anyway, it’s a very strong indication they’re all plagued with confirmation bias.
Leif writes “And you have still not grasped the fundamental point: if we tweak parameters to get the best fit to the past, predictive powers do not follow.”
I have grasped that perfectly well. My point is even more fundamental than that. The models are not producing correct predictions, period. No manner of tweaking or new physics or even physics improvements will improve them, because, like the model in this article, the models are in error (much more than the 1% used in the paper in this article, and always will be in my lifetime anyway) and cannot give a correct result.

Leif Svalgaard
November 8, 2011 4:54 am

TimTheToolMan says:
November 8, 2011 at 4:31 am
I have grasped that perfectly well. My point is even more fundamental than that. The models are not producing correct predictions, period.
So you are maintaining that no model can ever produce correct predictions. Do you know of any model that does? Or are all models always wrong? How about the models of stellar evolution?

RACookPE1978
Editor
November 8, 2011 6:30 am

Leif Svalgaard says:
November 8, 2011 at 4:54 am
TimTheToolMan says:
November 8, 2011 at 4:31 am
I have grasped that perfectly well. My point is even more fundamental than that. The models are not producing correct predictions, period.
So you are maintaining that no model can ever produce correct predictions. Do you know of any model that does? Or are all models always wrong? How about the models of stellar evolution?

To continue the original intent of his question – and perhaps to challenge your replies above:
A finite element model – which is what these glorified circulation models are – is only an approximation of the real world. FEA models are used reliably tens of thousands of times every day in engineering. And they do produce predictable results that are close to what the real world of crystals and metals and thermal transfer and motions and stress and strains and mold relaxation pressures actually is.
But these FEA models only work reliably to produce near-real-world results – that is, they only predict real-world solutions accurately – as pointed out above as well, when the FEA “cubes” are near-uniform in shape,
when the total “nest” of all of the cubes most closely approximates the actual thing to be modeled,
when the “information” transferred across each and EVERY boundary is completely and accurately defined by the partial differential equations and boundary value equations of each and every cell,
and when the “information” of a flow approximation actually “approximates” the actual flow – and even then, simple pipe turbulence and laminar flow in a simple round pipe going through an elbow fails!
Now, look at the circulation models. They don’t have uniform cubes: they use 100×100 km grids that are too thin in height to model the atmosphere, they don’t change areas as they approach the poles, and they don’t model “flow” accurately enough to even approximate the jet streams or cold fronts or storms or even hurricanes in the atmosphere. They don’t model flow through those cubes accurately: you cannot see even “artificial” AMO and PDO changes, El Niños and La Niñas, or even routine tropical monsoons or changes in the atmosphere or deserts or the Arctic or the savannahs or heavy rainfalls over forests and jungles. They don’t contain “all” of the information that must be exchanged: only what the NCAR physicists “think” they need to model.
If you think the models are accurate, list exactly what “information” is exchanged at every boundary. Consider evaporation, for example. Do they model sea conditions for each cube? Land? How accurately? Do sea conditions change each night and day? Are storms approximated? Can you see the effects of the doldrums? Trade winds? Do winds in each land area accurately follow the real world in every cube? Do CO2 levels change in every cube as we know they actually do? How are cubes changed as the winds cross the Cascades? The AU desert? India’s lowlands and their mountains? Can you show me snowfall in the Alps and in Sweden and on Mt Kilimanjaro at its elevation? Does the snow fall accurately – so albedos change correctly every month?
The real world is NOT uniform averages of solar light in, clouds above, surfaces below, mountains, seas, lakes, and coastlines. They don’t model sunlight at every latitude. They don’t model clouds differently at every cube, at every half-cube, or even every 10×10 km area. They don’t change solar radiation over the year. They don’t rotate the “earth”. They don’t model the gravity of the water-salt-temperature changes and Coriolis-created currents. They don’t start from known boundary conditions – they let the “model” run for thousands of “days” and assume that that result is an “average” earth worldwide.
Your claim for parameters seems particularly strained: the only way for the global models to “work” in hindcasts is for each model to differently “assume” (uniquely change (er, model) its parameters for aerosols and reflectivity for each year between 1950 and 2010) so the resulting temperatures do match what actually happened in those years.
That very change in aerosol levels alone means the models have no predictive ability.
(2) If you disagree with the above characterizations, please direct me to a graduate level text that does define all the parameters and equations for your favorite models. What text is best? What paper does list every cube and every parameter used?
(3) Let us not get into solar models in this thread: There are 10^54 reasons to wonder about many standard solar theories. 8<)
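One way to see the grid-resolution point in miniature is a one-dimensional heat-diffusion toy (all parameters arbitrary): a warm anomaly narrower than a coarse grid cell never even makes it into the model’s world:

```python
def diffuse(n_cells, steps=200, alpha=1.0, length=100.0):
    dx = length / n_cells
    dt = 0.4 * dx * dx / alpha  # keeps the explicit scheme stable
    # a 2-unit-wide warm anomaly around x = 50, point-sampled at cell centres
    T = [10.0 if 49.0 <= (i + 0.5) * dx <= 51.0 else 0.0
         for i in range(n_cells)]
    for _ in range(steps):
        T = [T[i] + alpha * dt / dx ** 2 *
             (T[max(i - 1, 0)] - 2 * T[i] + T[min(i + 1, n_cells - 1)])
             for i in range(n_cells)]
    return max(T)

print("fine grid   (200 cells) peak:", round(diffuse(200), 2))  # ~1.3
print("coarse grid (10 cells)  peak:", round(diffuse(10), 2))   # 0.0
```

On the 10-cell grid no cell centre falls inside the anomaly, so the feature never exists in the model at all – the 1-D analogue of a 100×100 km cell that cannot contain a thunderstorm.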

Leif Svalgaard
November 8, 2011 10:12 am

RACookPE1978 says:
November 8, 2011 at 6:30 am
(2) If you disagree with the above characterizations, please direct me to a graduate level text that does define all the parameters and equations for your favorite models. What text is best? What paper does list every cube and every parameter used?
Here is a model: http://www.leif.org/EOS/CAM3-Climate-Model.pdf and here is a textbook: http://www.stanford.edu/group/efmh/FAMbook/FAMbook.html

Leif Svalgaard
November 8, 2011 12:03 pm

RACookPE1978 says:
November 8, 2011 at 6:30 am
(3) Let us not get into solar models in this thread: There are 10^54 reasons to wonder about many standard solar theories.
That is a cop-out. The stellar evolution models successfully predict the ‘climate’ on stars 10+ billion years ahead. A stellar example of successful prediction.

climatereason
Editor
November 8, 2011 1:02 pm

Leif
I have been carrying out research into old climate records at the Met Office archives. I was most struck by the frequent references to sightings of the aurora borealis in southern England between around 1550 and 1620.
Is there any scientific reason for this?
tonyb

RACookPE1978
Editor
November 8, 2011 5:15 pm

Leif Svalgaard says:
November 8, 2011 at 10:12 am
1. PDF saved, thank you.
2. Ordered.

Leif Svalgaard
November 8, 2011 9:54 pm

climatereason says:
November 8, 2011 at 1:02 pm
frequent references to sightings of the aurora borealis in southern England between around 1550 and 1620. Is there any scientific reason for this?
Yes, solar activity was high then.
RACookPE1978 says:
November 8, 2011 at 5:15 pm
2. Ordered.
You’ll not be disappointed. I found it a very good read [and learning experience].