New study narrows the gap between climate models and reality

From the University of York:


A new study led by a University of York scientist addresses an important question in climate science: how accurate are climate model projections?

Climate models are used to estimate future global warming, and their accuracy can be checked against the actual global warming observed so far. Most comparisons suggest that the world is warming a little more slowly than the model projections indicate. Scientists have wondered whether this difference is meaningful, or just a chance fluctuation.

Dr Kevin Cowtan, of the Department of Chemistry at York, led an international study into this question and its findings are published in Geophysical Research Letters. The research team found that the way global temperatures were calculated in the models failed to reflect real-world measurements. The climate models use air temperature for the whole globe, whereas the real-world data used by scientists are a combination of air and sea surface temperature readings.

Dr Cowtan said: “When comparing models with observations, you need to compare apples with apples.”

The team determined the effect of this mismatch in 36 different climate models. They calculated the temperature of each model earth in the same way as in the real world. A third of the difference between the models and reality disappeared, along with all of the difference before the last decade. Any remaining differences may be explained by the recent temporary fluctuation in the rate of global warming.

Dr Cowtan added: “Recent studies suggest that the so-called ‘hiatus’ in warming is in part due to challenges in assembling the data. I think that the divergence between models and observations may turn out to be equally fragile.”

Dr Cowtan’s primary field of research is X-ray crystallography and he is based in the York Structural Biology Laboratory in the University’s Department of Chemistry. His interest in climate science has developed from an interest in science communication. This is his second major climate science paper. For this project, he led a diverse team of international researchers, including some of the world’s top climate scientists.

###


374 Comments
August 2, 2015 5:23 am

Well. I was wondering. Why do the models need training? No one trains F=ma.

Reply to  M Simon
August 2, 2015 8:49 am

Newton already trained it.

August 3, 2015 11:40 pm

My view of the debate thus far:
Frank and Oldberg continue to attempt disambiguation of the language of the debate, thus heading off applications of the equivocation fallacy. Svalgaard and Courtney continue to attempt ambiguation of this language, thus enabling applications of this fallacy. Svalgaard and Courtney exhibit aversion to addressing the issue of why one would wish to enable applications of the equivocation fallacy. Frank and Oldberg exhibit eagerness to address the same issue.

kim
Reply to  Terry Oldberg
August 4, 2015 3:54 am

Every time I run across this issue it is a diversion from the point that models are not mature enough for policy action. What amuses me is that the diversion is rarely deliberate; oh, the banality of the innocent.
================

Reply to  kim
August 4, 2015 9:39 am

Kim, I’ve been on your point for years. I’ve been trying to publish a paper on exactly this, the unreliability of models, for more than 2 years now, in the face of the ‘agreement review’ process that controls climate science. Manuscripts are acceptable when they agree with the consensus, and rejected when they do not. In my experience, even if you get past peer review, an editor will find another reason to reject.
But it’s not a diversion to speak of other things. Other topics are legitimate, even though you’re right that the central issue is the adamantine policy adherence to completely unreliable climate models.

Reply to  kim
August 4, 2015 10:08 am

kim
It is true that models are not mature enough for policy action. However, most arguments for this proposition draw conclusions from equivocations and are thus logically improper. Rather than being a diversion, then, ridding arguments of applications of this fallacy is essential.

kim
Reply to  kim
August 4, 2015 11:09 am

It seems we all agree,
It’s a noisome way to be.
====================

Matt G
August 6, 2015 5:13 pm

Predict
“say or estimate that (a specified thing) will happen in the future or will be a consequence of something”
Projection
“an estimate or forecast of a future situation based on a study of present trends”
Climate models definitely make predictions, as they have never been based on a study of present trends. The present trends have always been ignored by the alarmists; the focus has always been on the future consequences.

Reply to  Matt G
August 6, 2015 11:33 pm

Matt G:
Your definition of “predict” is not completely accurate. To say that a specified thing will happen in the future is a “prediction.” To say that a specified thing will be a consequence of something is a “predictive inference.”
Example:
Prediction: Rain in the next 24 hours.
Predictive inference: Given cloudy, rain in the next 24 hours.
In the absence of a predictive inference, “rain in the next 24 hours” is not an example of a “prediction” but rather an example of a “projection.”
If there is a predictive inference, the observation of “cloudy” conveys the information to us that it will rain in the next 24 hours. If there is not, the observation of “cloudy” conveys no such information. Thus, it is not the prediction that conveys information to us but rather the associated predictive inference. That this is so can be proved with the help of information theory.
To generalize from this example, one should not conflate “prediction” with “projection” as it is the predictive inference that is associated with a prediction but not a projection that conveys information to us. To treat “prediction” and “projection” as synonyms is to make the error of conflating a situation that conveys information to us with one that does not.
Global warming climatologists have made this error. The projections of their models convey no information to a policy maker but seem to do so because global warming climatologists, plus true believers in their utterances, use “prediction” and “projection” as synonyms. To control the climate, a policy maker needs information about the outcomes of events; though no such information exists, it seems to policy makers and true believers as though it is available. Thus, though the climate is uncontrollable, policy makers persist in attempts at controlling it.
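Oldberg’s information-theoretic claim can be checked numerically: the mutual information between an observed condition (“cloudy”) and an outcome (“rain”) is positive only when the outcome probabilities are conditional on the observation. A minimal sketch, using purely hypothetical probabilities (none of these numbers come from the thread):

```python
from math import log2

# Hypothetical joint probabilities for the cloudy/rain example above;
# the numbers are illustrative assumptions only.
p_joint = {
    ("cloudy", "rain"): 0.30,
    ("cloudy", "dry"):  0.20,
    ("clear",  "rain"): 0.05,
    ("clear",  "dry"):  0.45,
}

def mutual_information(joint):
    """I(X;Y) = sum over (x,y) of p(x,y) * log2(p(x,y) / (p(x) * p(y)))."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Conditional outcome probabilities (rain is more likely given cloudy):
# the observation carries information about the outcome.
print(mutual_information(p_joint))   # positive (about 0.21 bits here)

# Independent case, p(x,y) = p(x)*p(y): observing "cloudy" tells us
# nothing about rain, and the mutual information is zero.
p_indep = {(x, y): px * py
           for x, px in [("cloudy", 0.5), ("clear", 0.5)]
           for y, py in [("rain", 0.35), ("dry", 0.65)]}
print(mutual_information(p_indep))   # ~0 (up to floating-point error)
```

On this toy model, an unconditional forecast (“rain with probability 0.35, whatever you observe”) corresponds to the independent case: zero mutual information, which is the sense in which Oldberg says such a statement “conveys no information.”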

Reply to  Terry Oldberg
August 6, 2015 11:43 pm

Again, you are completely wrong.
Let this be a teachable moment for you:
A projection is a forecast based on the current observed state and the current observed trend, i.e. no input from physics, just statistics. It is thus an inference based on extrapolation.
A prediction is a forecast computed from the physics of the phenomenon, using the current [or the past for that matter] state as input [but not the trend] and solving for the time evolution of the governing equations [possibly calibrated using observations]. It is thus not an inference but a real-world expectation.
This closes the debate.

Reply to  lsvalgaard
August 7, 2015 8:31 am

lsvalgaard:
Your high regard for the cogency of your own argument is misplaced. My example is chosen for simplicity. Though it incorporates no natural laws there is no barrier to incorporation of them into a model that makes a predictive inference, contrary to your assumption. Colleagues of mine and I have successfully built models having this characteristic.
As a model that makes no predictive inference conveys no information to a policy maker, it is a disastrous error to build such a model and use it in making policy. Thus, a semantic distinction should be made between the output of a model that makes a predictive inference and the output of a model that does not. The specifics of the terminology that makes this distinction are unimportant.
The terminology suggested by Dr. Trenberth is available for use for this purpose and in widespread use. The “projections” described by Trenberth as the outputs from currently available climate models are not products of a predictive inference. Thus, they convey no information to a policy maker and are unsuitable for use in making policy. If we reserve the term “prediction” for use in reference to the outputs from a predictive inference it then becomes impossible for it to be falsely concluded that modern climate models are suitable for use in making policy by application of the equivocation fallacy.

Reply to  Terry Oldberg
August 7, 2015 8:57 am

If we reserve the term “prediction” for use in reference to the outputs from a predictive inference it then becomes impossible for it to be falsely concluded that modern climate models are suitable for use in making policy by application of the equivocation fallacy
That is just meaningless gobbledygook. ‘Inference’ means ‘an educated guess’ and is in no way applicable to the result of physics-based models. The reason modern climate models are not suitable for policy is simply that they don’t work, not that they are ‘projections’.

Reply to  lsvalgaard
August 7, 2015 9:57 pm

lsvalgaard:
“That is just meaningless gobbledygook” is the conclusion of an argument. What are the major and minor premises to this argument and why are these premises true?

Reply to  Terry Oldberg
August 7, 2015 10:31 pm

This is not an argument, but simply a statement of fact. Take it to heart.

Reply to  lsvalgaard
August 7, 2015 11:12 pm

lsvalgaard:
You claim that “that is just meaningless gobbledygook” is a “fact,” but you are unable to make an argument in support of this conclusion, let alone prove it. Do you take the audience for this debate to be made up of fools?

Reply to  Terry Oldberg
August 7, 2015 11:14 pm

not generally, but there are certainly a couple around…

Reply to  lsvalgaard
August 7, 2015 11:24 pm

lsvalgaard:
You are unwilling or unable to defend your conclusion. I rest my case.

Reply to  Terry Oldberg
August 7, 2015 11:30 pm

We all wish that you would indeed do that. RIP.

Matt G
Reply to  Terry Oldberg
August 7, 2015 2:37 am

Terry Oldberg August 6, 2015 at 11:33 pm
lsvalgaard is exactly right. You state “Given cloudy, rain in the next 24 hours.” This is a projection, not a prediction, as confirmed in most English dictionaries. It is only a projection forecasting the future based on the present or recent past; sorry, you are wrong.
Weather models are based on projections to give accurate forecasts; if they were based on predictions they would be wrong all the time, like climate models. Weather models increasingly become useless farther from the point of observation because they become predictions and can no longer rely on current or very recent observations.
The predictions of climate models only give information about the future and ignore information based on recent and current knowledge.

Reply to  Terry Oldberg
August 7, 2015 9:31 am

Good grief, is this still going on?
Hi Doc
This is a big one, it is facing us; If it pops-up it could be interesting.
http://sdo.gsfc.nasa.gov/assets/img/latest/latest_1024_HMIIC.jpg

Reply to  vukcevic
August 7, 2015 9:34 am

And it sits on a Hale boundary, so is a good candidate for flaring activity.

Reply to  Terry Oldberg
August 8, 2015 10:22 am

Leif, how are model expectation values “predictions” when they have no knowable physical meaning?
It’s very clear that model outputs that conspicuously lack physically valid error bars are misrepresentations. It is quite obvious that models are being extended well beyond their resolution limit. What you call “predictions” are knowledge claims where no knowledge exists. Such claims cannot ipso facto be predictions.
I see now that you have admitted that models are “[possibly calibrated using observations].” Thank you for conceding my point that models are tuned, a fact that you previously denied.
But in doing so you go on to misrepresent models fitted to observations by utilizing offsetting errors as being “calibrated.”
From Kiehl, 2007, again: “All of these [climate model] simulations show very good agreement between the simulated anomaly in global mean surface temperature and the observational record. … Note that the range in total anthropogenic forcing is slightly over a factor of 2 [in climate models], which is the same order as the uncertainty in climate sensitivity. These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.”
Models: tuned to observables by offsetting parameter errors. That’s what you call “calibrated,” Leif. Application of inverted errors is not calibration. Your argument engages a tendentious abuse of language.
Under the actual circumstances of physically insupportable knowledge claims and indulgence of model false precision, there are no grounds whatever to call their expectation values “predictions” in any scientific sense of that word.
Also, I note that you have avoided addressing your erroneous thinking regarding that ±15 °C projection uncertainty. The content of your error was pretty clear in the way you expressed yourself, Leif, i.e., “no climate model asserts that.” I just hoped you’d show the intellectual courage to correct it yourself.

Reply to  Pat Frank
August 8, 2015 10:42 am

There are so many things wrong with your comment that it is hard to know where to begin.
Let me repeat the clear definitions:
A projection is a forecast based on the current observed state and the current observed trend, i.e. no input from physics, just statistics. It is thus an inference based on extrapolation.
A prediction is a forecast computed from the physics of the phenomenon, using the current [or the past for that matter] state as input [but not the trend] and solving for the time evolution of the governing equations [possibly calibrated using observations].

Now, a prediction can be wrong as climate models apparently are. That does not affect the definitions.
Your notion of ‘tuning’ is nonsense. If the models were tuned to match the observations the models would always be correct. Calibrating responses is normal practice in science.
Your +/-15 degrees is absurd on its face. For one, it lacks a time horizon: is the error that large after the first time-step of five minutes?

Reply to  Terry Oldberg
August 10, 2015 5:57 pm

Leif, your definition of projection is merely you being insistently self-serving.
Standard IPCC usage is that modeled future climate states are “projections,” your insistence notwithstanding.
Likewise, the IPCC AR5 WG1:
Chapter 11, “Near-term Climate Change: Projections and Predictability”
Chapter 12, “Long-term Climate Change: Projections, Commitments and Irreversibility”
It seems, Leif, you’re wrong again. Fortunately for you, I knew more to deal with your wrongness than you with my rightness.
To qualify for prediction in science, a deduced state must include the threat of theory falsification. That means the prediction is constrained to be within observational bounds. Any output of a physical model that is so vague as to be unconstrained by any possible observation does not qualify to be called a prediction.
Climate models fall under that latter condition. They do not produce deductions that are constrained within observational bounds. Their outputs are not predictions.
Your comment that, “Your notion of ‘tuning’ is nonsense. If the models were tuned to match the observations the models would always be correct.” is so wrong you must have written it with your mind turned off.
There is no reason to think that a physical model tuned to past observations will invariably produce correct predictions of future states. A good example of this problem can be found in the quasi-thermodynamic linear free energy relationships (LFERs) used in physical organic chemistry to understand reactivity or solute behavior in non-aqueous solvents. LFERs are tuned using observables, but have only very limited success in extrapolation beyond their verification bounds. The main problem is that solvent behavior is too complex for current physical theory.
I discussed the problem of climate models tuned to observations here, where it’s explained why a climate model that accurately reproduces observations is not “correct.” See also here, where the meaning of uncertainty from propagated error, as applied to climate model projections, is discussed.
The reason they do not predict, Leif, is that climate models are not capable of producing unique solutions to the problem of the climate energy state. The falsifiability criterion, remember? Because of that, there is no way to know that the underlying physics is correct, even if the observables trend is reproduced.
That, of course, and the fact that propagated error makes the projection uncertainty grow so much faster than the magnitude of the model expectation value that no possible observation could ever falsify the model.
Finally, your comment that, “Your +/-15 degrees is absurd on its face. For one, it lacks a time horizon: is the error that large after the first time-step of five minutes?” merely shows that you’ve never bothered to read any of my analyses, here for example, and especially here (2.9 MB pdf) before posting your negative comments.
I assume here (hope, really) that you actually do understand the meaning both of propagated error and of the resulting uncertainty bars as an ignorance width.
Your entire discussion in this thread lacks care, Leif. Substantively, it’s been no deeper than Joel Jackson’s empty contrarianism.
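The disputed growth of propagated uncertainty can be sketched with toy numbers (illustrative assumptions only, not values from any climate model or from Frank’s analyses): if each model step contributes an uncertainty σ, independent step errors compound in root-sum-square as √n·σ, while a systematic per-step error compounds linearly as n·σ.

```python
import math

# Toy illustration of uncertainty propagation through an iterated model.
# The per-step value below is hypothetical, chosen only to show the scaling.
sigma_step = 0.1   # uncertainty contributed by each step (arbitrary units)

def rss_uncertainty(n, sigma):
    """Independent step errors compound in root-sum-square: sqrt(n) * sigma."""
    return math.sqrt(n) * sigma

def systematic_uncertainty(n, sigma):
    """A systematic per-step error compounds linearly: n * sigma."""
    return n * sigma

for n in (1, 10, 100, 1000):
    print(f"steps={n:5d}  rss=±{rss_uncertainty(n, sigma_step):6.2f}  "
          f"systematic=±{systematic_uncertainty(n, sigma_step):7.1f}")
```

Which compounding rule applies, and how the resulting envelope compares with the size of the projected signal, is exactly what Frank and Svalgaard dispute; the sketch shows only that the answer depends strongly on the time horizon, which is the point behind Svalgaard’s “first time-step of five minutes” question.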

Reply to  Pat Frank
August 10, 2015 8:52 pm

Pat Frank:
I’ll add that the lack of the threat of theory falsification is associated with the absence of identification of the statistical population underlying this theory. Though this threat is eliminated, the theorist can create the illusion of falsifiability through applications of the equivocation fallacy wherein polysemic terms that include “predict,” “model” and “science” are used in making arguments. Though these arguments appear to believers in the scrupulousness of their “scientists” to be syllogisms they are examples of equivocations. While the conclusion from a syllogism is true, the conclusion from an equivocation is false or unproved. Thus, a logical conclusion cannot be drawn from an equivocation. The deception is complete when the theorist draws a conclusion from an equivocation. Those who have believed in the scrupulousness of their “scientists” have been screwed!

Reply to  Pat Frank
August 10, 2015 11:12 pm

By your comment you admit that you didn’t learn anything.
The standard definitions in science are:
A projection is a forecast based on the current observed state and the current observed trend, i.e. no input from physics, just statistics. It is thus an inference based on extrapolation.
A prediction is a forecast computed from the physics of the phenomenon, using the current [or the past for that matter] state as input [but not the trend] and solving for the time evolution of the governing equations [possibly calibrated using observations].

A model can use a projected variable as input. That is called a ‘scenario’. Based on the projected values, the model now predicts the future evolution. That prediction may fail [often does], but that does not alter the meaning of the words.

Reply to  Terry Oldberg
August 10, 2015 7:24 pm

I noticed, by the way, that in the WG1 Report, “The Physical Science Basis” of the IPCC 5AR, neither Chapter 9, “Evaluation of Climate Models,” nor Chapter 11, “Near-term Climate Change: Projections and Predictability,” discusses the failed predictability shown by perfect model tests, except for the North Atlantic (out to 10 years).
That is, the very chapters that purport to evaluate climate models are silent about their predictive failure. The same analytical lacuna is found in the AR4 as well; discussed here.

Reply to  Terry Oldberg
August 11, 2015 9:57 am

Terry Oldberg, agreed. The equivocation fallacy you describe so well is unfortunately very common in social debate. The real tragedy here is that it’s become both deliberately used and mindlessly allowed among otherwise science professionals.
Leif, I see by your doctrinaire insistence that you’re unwilling to grapple with the substance of prediction in science.
Your definition of prediction is not how climate modelers themselves describe what they do. It further does not describe what climate models do, because within the models much physical description is replaced by parametrizations. This replacement is particularly relevant as regards the climatological effect of GHGs, because the magnitude of the GHG effect falls below the energy flux magnitudes of the parametrized sub-systems.
The extent of parametrization in climate models also negates your argument that models make predictions, based upon your own criterion that prediction is “computation from the physics of the phenomenon.” There it is, Leif. Your own definition of prediction destroys your own argument about climate models.
Your description of prediction further does not include the necessity of falsification. It’s not a prediction if it cannot be falsified, Leif. Climate models cannot be falsified because the propagated uncertainty is always far larger than their expectation values. No conceivable observation can falsify model outputs when they are accompanied by propagated uncertainties that extend beyond physical bounds.
You can turn your blind eye to that as much as you like, but you can’t avoid being wrong. One might observe, in your continued avoidance of this point, that you tacitly “admit that you didn’t learn anything.” Or maybe can’t. Or maybe won’t. None of that saves you from being wrong.

Reply to  Pat Frank
August 11, 2015 10:11 am

substance of prediction in science
Your desperate demonstrations of your ignorance about science are so typical of bias-driven people.
So one more time:
The standard definitions in science are:
A projection is a forecast based on the current observed state and the current observed trend, i.e. no input from physics, just statistics. It is thus an inference based on extrapolation.
A prediction is a forecast computed from the physics of the phenomenon, using the current [or the past for that matter] state as input [but not the trend] and solving for the time evolution of the governing equations [possibly calibrated using observations].

Even NOAA agrees:
https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/climate-prediction

Reply to  lsvalgaard
August 11, 2015 1:35 pm

lsvalgaard:
By binding definitions to the words “predict” and “project” that differ from the definitions that were bound to them by Dr. Frank and me, you have altered the terms of our debate. To get this debate back on track toward reaching a logically valid conclusion, let’s employ the made-up words “prediction-a” and “prediction-b.” That they are made-up will allow us to bind whatever meanings to the two words are necessary for the purpose of drawing a logically valid conclusion from our argument.
I’ll stipulate that prediction-a and prediction-b are both monosemic. Thus, use of them will free our arguments from the danger of drawing logically illicit conclusions from equivocations. I’ll further stipulate that prediction-a’s are the products of modern day climate models.
It is easy to prove that prediction-a’s have the properties of: a) lacking falsifiability and b) conveying no information to a policy maker about the outcomes from his/her policy decision. I’ll stipulate that prediction-b’s have the opposing properties of: a) falsifiability and b) conveying information to a policy maker about the outcomes from his/her policy decision.
We have discovered that prediction-b’s have the properties that are needed for regulation of the climate, that prediction-a’s have none of these properties and that prediction-a’s are the product of modern day climate models.
Let “prediction-c” be a made-up word that is polysemic and takes on the meanings of both prediction-a and prediction-b. If the word “prediction-c” is used in making an argument and changes meaning in the midst of the argument this argument is an “equivocation” by the definition of this term. As an equivocation is not a syllogism it is logically illicit to draw a conclusion from it. By drawing such a conclusion one can prove a falsehood, for example that prediction-a’s have the properties that are needed for regulation of the climate.
Thus, a scrupulous person would be attracted to using the monosemic terms prediction-a and prediction-b in preference to the polysemic term prediction-c in making an argument. An unscrupulous person would be attracted to using prediction-c.

Reply to  Terry Oldberg
August 11, 2015 12:36 pm

Leif, nothing on your reference site supports your definition of prediction.
In fact, its self-description of climate models as “Special numerical models,” in contrast to physical models, flies right in the face of your insistence. That is, you’re contradicted by your own source.
There is no “desperation” in my comments, which have been clear, consistent, and explicit throughout. The content of the thread does make it obvious, however, that you’re either completely unwilling or completely unable to thoughtfully engage the subject of prediction in science. You’ve offered no considered view. You’ve shown no recognition of the impact of propagated error, or of its resulting uncertainty, on predictive status. Instead, you just rote-repeat the same inadequate conception.

Reply to  Pat Frank
August 12, 2015 11:36 am

one last time:
The standard definitions in science are:
A projection is a forecast based on the current observed state and the current observed trend, i.e. no input from physics, just statistics. It is thus an inference based on extrapolation.
A prediction is a forecast computed from the physics of the phenomenon, using the current [or the past for that matter] state as input [but not the trend] and solving for the time evolution of the governing equations [possibly calibrated using observations].

Even NOAA agrees:
https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/climate-prediction

Reply to  lsvalgaard
August 12, 2015 2:31 pm

lsvalgaard:
Do you mean to say that we should change the meanings of “prediction” and “projection” in the midst of our argument?

Reply to  Terry Oldberg
August 12, 2015 2:37 pm

You should simply use generally accepted meanings employed by scientists:
A projection is a forecast based on the current observed state and the current observed trend, i.e. no input from physics, just statistics. It is thus an inference based on extrapolation.
A prediction is a forecast computed from the physics of the phenomenon, using the current [or the past for that matter] state as input [but not the trend] and solving for the time evolution of the governing equations [possibly calibrated using observations].

Now, in a certain sense it doesn’t matter which words you use. You could also define as follows:
A drageef is a forecast based on the current observed state and the current observed trend, i.e. no input from physics, just statistics. It is thus an inference based on extrapolation.
A putlihoot is a forecast computed from the physics of the phenomenon, using the current [or the past for that matter] state as input [but not the trend] and solving for the time evolution of the governing equations [possibly calibrated using observations].
If those were the words in general use, you’d do fine. If not, you are a bit in trouble [as with your current usage]. What is important is not what word you use, but what the concepts behind the words are.

Reply to  lsvalgaard
August 12, 2015 4:05 pm

lsvalgaard:
I gather that under your definition of “prediction” each GCM makes predictions. However, no GCM makes a conditional prediction aka predictive inference. That a prediction is “conditional” implies that each of its outcome probabilities is conditional.
Information theory establishes that a model conveys no information to us in advance of observing the outcomes of events unless its outcome probabilities are conditional. This information is called the “mutual information.”
A non-nil level of mutual information is required for regulation of the climate. Currently, the EPA cannot regulate the climate because the mutual information from each of its models is nil.
Thus, these models are useless for the purpose of making policy. Nonetheless they are being used in making policy. This regulatory absurdity is a consequence from defining “prediction” as you’d like us to define it.

Reply to  Terry Oldberg
August 12, 2015 4:19 pm

Information theory establishes that a model conveys no information to us in advance of observing the outcomes of events unless its outcome probabilities are conditional.
Nonsense. The result of predicting the position of Mars from the physical theory of gravity is not ‘conditional’ and is not an ‘inference’.

Reply to  lsvalgaard
August 12, 2015 6:36 pm

lsvalgaard:
You are correct in stating that “The result of predicting the position of Mars from the physical theory of gravity is not ‘conditional’ and is not an ‘inference’.” However the correctness of this statement does not support your conclusion that I wrote “nonsense.”
An “inference” is an extrapolation from an observed state of a system to an unobserved state of the same system. Conventionally the observed state is called the “condition” while the unobserved state is called the “outcome.” An inference is “predictive” when the condition precedes the outcome.
When the outcome probabilities are conditional a kind of inference is made. It is called a “predictive” inference. It is through the use of a predictive inference that one can estimate the position of Mars on Jan. 1, 2026.
Though there is a predictive inference by which one can estimate the position of Mars, there is not a predictive inference by which one can predict the outcomes of events for Earth’s climate. The basis for my claim is an eight-year search for the statistical population underlying each GCM, in which I’ve found nothing resembling a statistical population. If present, a statistical population would provide the means for assigning values to the conditional probabilities of the outcomes of the events. In place of a statistical population I’ve found applications of the equivocation fallacy that create the illusion of one.

Reply to  Terry Oldberg
August 12, 2015 8:47 pm

Though there is a predictive inference by which one can estimate the position of Mars
No, the prediction is not an ‘inference’, but the result of applying physical laws. Similarly, any other application of physical laws [e.g. climate models] are also not inferences.

Reply to  Terry Oldberg
August 12, 2015 8:07 pm

Leif, your mode of argument is no more than oracular declamation.
You have repeatedly avoided discussing anything. That includes your notable avoidance of the subject of physical error propagation and predictive uncertainty, indicating either that you know nothing of them or that you do not wish to admit your mistake.
You denied the falsification criterion of prediction, ludicrously dismissing 350 years of scientific practice. And thereby implying no important distinction between scientific deduction and the loopy theorizing of the liberal arts.
The impression given is that you’d rather be insistently fatuous than ever admit a mistake. Which in your case have been plenty and obvious, and include self-contradiction. Which never seems to bother you.
You’re a scientist to be emulated, Leif, no doubt about it.
Terry, there appears no point in continuing the debate here. Evidently, Leif’s tactic is to stubbornly insist so as to avoid admitting his mistakes, and to endure with methodological vacuity until everyone leaves. Victory!

Reply to  Pat Frank
August 12, 2015 8:39 pm

Pat Frank:
I agree with your assessment of the merits of continuing the debate. It was a rare treat to have as an ally a person versed in general systems theory, information theory, probability theory, statistics, the philosophy of science, and related topics!

Reply to  Pat Frank
August 12, 2015 8:53 pm

to endure with methodological vacuity until everyone leaves. Victory!
On the contrary, it is about educating you. To the extent that you don’t seem to learn, my effort here is a failure.

Reply to  Terry Oldberg
August 13, 2015 9:03 pm

On the contrary, it is about educating you.
The sad part of it, Leif, is that you evidently believe that.
Terry, you were right about any scientific prediction being both an inference and conditional.
Scientific inferences as deduced from valid theories are logically coherent, quantitative, and tightly bounded. These traits separate scientific inferences from all other sorts.
The tightly bounded criterion enforces the unique solution necessary to impose the threat of observational falsification.
Scientific inferences, for all that, are conditioned by the known uncertainties. Known uncertainties produce the bounds around a quantitative prediction. They condition the prediction in terms of our state of knowledge.
The accuracy of Leif’s Mars orbital prediction, for example, is conditioned by the level of systematic uncertainty in the gravitational constant.
The inference of Mars orbital position from Newtonian mechanics would also be conditioned by the uncertainties in the mass and orbital parameters of Mars. Not to mention the small relativistic error.
Even though these uncertainties are small, they necessarily condition any predicted position of Mars, and may become sources of significant uncertainty over very long prediction time-lines.
Orbital prediction, especially over long times, will also be conditioned by fluctuations in the general gravitational background around Mars, exerted by the rest of the solar system. These are again small and often unpredictable, though one might estimate some average time-wise deviation due to them. Nevertheless, they too condition any statement about the future orbital position of Mars.
These uncertainties might be called the known unknowns, and such things necessarily condition our scientific knowledge statements. They will put uncertainty bounds, albeit tight bounds, around any prediction from theory.
So, once again, you were right and Leif wrong. There was no need for you to later qualify your statement.
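The propagation arithmetic behind the claim above can be sketched with Kepler's third law, T = 2π√(a³/μ) with μ = GM. To first order, the fractional uncertainty in the period is half the fractional uncertainty in μ, and the resulting timing error accumulates linearly over successive orbits. A hedged illustration only: the constants below are approximate, and whether the laboratory uncertainty in G actually enters a Mars ephemeris (rather than the much better-determined solar μ measured directly from orbits) is exactly what is disputed elsewhere in this thread.

```python
import math

# Kepler's third law: T = 2*pi*sqrt(a**3 / mu), with mu = G*M_sun.
# Approximate values; for illustration only.
mu = 1.32712440018e20      # solar gravitational parameter, m^3/s^2
a_mars = 2.2794e11         # Mars semi-major axis, m (approximate)

def orbital_period(a, mu):
    """Orbital period in seconds from Kepler's third law."""
    return 2.0 * math.pi * math.sqrt(a**3 / mu)

def period_fractional_uncertainty(frac_mu):
    """First-order propagation: sigma_T / T = 0.5 * sigma_mu / mu."""
    return 0.5 * frac_mu

T = orbital_period(a_mars, mu)   # roughly 687 days
frac_G = 2.2e-5                  # CODATA-scale relative uncertainty in G
sigma_T = period_fractional_uncertainty(frac_G) * T

# The timing error accumulates with elapsed time: after n orbits the
# along-track phase error is roughly n * sigma_T.
```

Even a tiny fractional uncertainty therefore conditions the predicted position ever more strongly as the prediction horizon lengthens, which is the "long prediction time-lines" point made above.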

Reply to  Pat Frank
August 13, 2015 11:34 pm

The accuracy of Leif’s Mars orbital prediction, for example, is conditioned by the level of systematic uncertainty in the gravitational constant.
Not at all, that uncertainty does not enter at all. End-of-education.

Reply to  Pat Frank
August 14, 2015 9:02 am

Pat Frank:
My understanding of these issues is the result of a background in information-theoretically optimal model building. By building a model in this way, the builder can ensure that the maximum possible information about the outcomes of unobserved events flows to the users of the model. Maximizing this information (the "mutual information") ensures the best possible result if the model is used in an attempt at controlling a system.
When a model is built in this way, the many inferences made by the model are selected by optimization. The alternative is to select them by intuitive rules of thumb, aka heuristics. There are many possible rules of thumb, and they select many different inferences. Thus the method of rules of thumb violates the law of non-contradiction, creating David Hume's "problem of induction." Optimization satisfies the law of non-contradiction, solves the problem of induction, and yields the best possible result from attempts at control.
Though the technology for building a model by optimization is 52 years old, it has caught on among relatively few scientists. Consequently, virtually all model builders select the inferences that will be made by their models by rules of thumb. One result is variability in the quality of the models that are constructed.
One of the more disastrous possibilities is for a model to be constructed that conveys no information to its users but seems to them to convey information. Thus, they believe they can control a system when in fact it is uncontrollable. This has been the fate thus far for global warming climatology. That this has happened has been obscured by frequent applications of the equivocation fallacy on the part of global warming climatologists. These applications exploit the fact that in the climatological literature many words and word-pairs that are descriptive of methodology are polysemic. Among the words, as you are well aware, is “predict.”
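The "mutual information" invoked above is a standard quantity: it can be computed from the joint distribution of model prediction and observed outcome. A toy sketch — the joint tables below are hypothetical, chosen only to show the two extremes: a model statistically independent of the outcome conveys zero bits (the "conveys no information" case), while a perfectly informative model conveys the full entropy of the outcome.

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a joint probability
    table joint[x][y], where x is the model's prediction and y the
    observed outcome."""
    # Marginal distributions of prediction (px) and outcome (py).
    px = {x: sum(row.values()) for x, row in joint.items()}
    py = {}
    for row in joint.values():
        for y, p in row.items():
            py[y] = py.get(y, 0.0) + p
    mi = 0.0
    for x, row in joint.items():
        for y, pxy in row.items():
            if pxy > 0:
                mi += pxy * math.log2(pxy / (px[x] * py[y]))
    return mi

# Prediction X vs outcome Y (hypothetical probabilities).
independent = {"warm": {"warm": 0.25, "cool": 0.25},
               "cool": {"warm": 0.25, "cool": 0.25}}
perfect = {"warm": {"warm": 0.5, "cool": 0.0},
           "cool": {"warm": 0.0, "cool": 0.5}}
```

An information-theoretically optimized model builder would, on this account, select the model structure that maximizes this quantity rather than relying on heuristics.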

RD
August 10, 2015 11:53 pm

Thanks for a great discussion all.