Model-land, Butterflies and Hawkmoths

Guest Essay by Kip Hansen

 

Welcome to Model-Land, ladies and gentlemen, boys and girls!  “Model-land is a hypothetical world in which our simulations are perfect, an attractive fairy-tale state of mind in which optimizing a simulation invariably reflects desirable pathways in the real world.”  Here in Model-Land you’ll see fabulous “computational simulations and associated graphical visualizations [that] have become much more sophisticated in recent decades due to the availability of ever-greater computational resources!” Where “…[t]he qualitative visual appeal of these simulations has led to an explosion of simulation-based, often probabilistic forecasting in support of decision-making in everything from weather forecasting and American Football, to nuclear stewardship and climate adaptation.”

If you come and play, you’ll want to stay! ™

[the foregoing is a Paid Advertisement from the fictional makers of Model-Land ®]

* * * * *

Model-land (in the leading image) looks to have all the requirements for an ecological study: hills, valleys, grass, trees, bushes, a little river, sky and clouds. Yet any attempt to transfer the implications of changes in model-land to the real world is doomed to fail.  Why?

Because “Model-land is a hypothetical world in which our simulations are perfect, an attractive fairy-tale state of mind” say Erica L. Thompson and Leonard A. Smith in a new paper that appears in the e-journal Economics titled “Escape from model-land”.

“Both mathematical modelling and simulation methods in general have contributed greatly to understanding, insight and forecasting in many fields including macroeconomics. Nevertheless, we must remain careful to distinguish model-land and model-land quantities from the real world. Decisions taken in the real world are more robust when informed by our best estimate of real-world quantities, than when “optimal” model-land quantities obtained from imperfect simulations are employed.”

“Computational simulations and associated graphical visualisations have become much more sophisticated in recent decades due to the availability of ever-greater computational resources. The qualitative visual appeal of these simulations has led to an explosion of simulation-based, often probabilistic forecasting in support of decision-making in everything from weather forecasting and American Football, to nuclear stewardship and climate adaptation. We argue that the utility and decision-relevance of these model simulations must be judged based on consistency with the past, and out-of-sample predictive performance and expert judgement, never based solely on the plausibility of their underlying principles or on the visual “realism” of outputs.”

 “Model-land is a hypothetical world in which our simulations are perfect, an attractive fairy-tale state of mind in which optimising a simulation invariably reflects desirable pathways in the real world. Decision-support in model-land implies taking the output of model simulations at face value (perhaps using some form of statistical post-processing to account for blatant inconsistencies), and then interpreting frequencies in model-land to represent probabilities in the real-world. Elegant though these systems may be, something is lost in the move back to reality; very low probability events and model-inconceivable “Big Surprises” are much too frequent in applied meteorology, geology, and economics. We have found remarkably similar challenges to good model-based decision support in energy demand, fluid dynamics, hurricane formation, life boat operations, nuclear stewardship, weather forecasting, climate calculators, and sustainable governance of reindeer hunting.”

This paper is a Must Read for anyone whose interests intersect with the output of computational models or computer simulations of any type and for any purpose.

WARNING:  Model-haters should not get their hopes up — this essay is not a justification for the “all models are wrong” viewpoint.  What the highlighted paper attempts to do (and succeeds in doing) is to illuminate the dangers of misunderstanding what models are capable of doing, under what circumstances and for what purposes, and to suggest approaches for an escape from model-land into the real world.

Right out of the box it warns that tuned models are intrinsically bad at projecting effects in the long tails of the probability distribution: events which have a very low probability, or “Big Surprises” which are, inside the model world, inconceivable.

There are many different types of models covering many classes of physical and social processes, but the authors usefully classify them into two general types:

“In “weather-like” tasks, where there are many opportunities to test the outcome of our model against a real observed outcome, we can see when/how our models become silly.

“In “climate-like” tasks, where the forecasts are made truly out-of-sample, there is no such opportunity and we rely on judgements about the quality of the model given the degree to which it performs well under different conditions and expert judgement on the shortcomings of the model.”

If I have a model running that makes projections of movements of the Dow Jones Industrial Average, which changes by the minute, I can easily validate the accuracy of my model output by comparing it to the real-world market index. This DJIA model would be a “weather-like” model — easily checked for reliability — I could test-run it for a few days or weeks before actually putting my money at risk following its projections.  Even then, I must be aware that exceptional circumstances could, in the real world, cause changes in the DJIA that my model could not even conceive of; thus I would be wise not to bet the bank on any one trade.
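
To make the “weather-like” idea concrete, here is a minimal sketch of out-of-sample checking (my own toy illustration, not anything from the paper): a naive persistence forecaster is scored only against index values it has never seen. The synthetic price series and the toy forecaster are hypothetical stand-ins for a real DJIA model and real market data.

# Toy sketch of "weather-like" validation: score a forecaster only on data it has never seen.
import numpy as np

rng = np.random.default_rng(42)
prices = 26000 + np.cumsum(rng.normal(0, 50, size=500))    # a fake daily index series

def persistence_forecast(history):
    """Naive toy model: tomorrow's value equals today's value."""
    return history[-1]

errors = []
for day in range(250, len(prices) - 1):                     # test only on unseen days
    forecast = persistence_forecast(prices[: day + 1])
    errors.append(abs(forecast - prices[day + 1]))

print(f"mean absolute out-of-sample error: {np.mean(errors):.1f} points")
# A "climate-like" model offers no comparable stream of verifying observations,
# so this kind of routine check is simply unavailable for it.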

On the other hand, if I have a model of the real estate market for multi-bedroom apartments at the high-income end of the market, in which returns are to be measured on a multi-decadal scale, this might be considered a “climate-like” model — in which the past might not be a good predictor of the future, with no ready opportunities to test the model against the real world.  Thus, the authors posit,  I would need to take into consideration expert judgement rather than depend on the quantitative output of the model alone.

An example:  A model of the real estate market in a nearby town, initiated 30 years ago, might have shown that there would be a strong and continuing market for up-scale, high-end apartments for young professionals and their families — this market niche had been growing steadily over the previous twenty years.  However, real estate markets can be complex.  Had a company relied on the model output and built a series of expensive multi-story, multi-bedroom apartments 30 years ago, the project would have been a financial disaster.  Why?  The model would not have been able to foresee or even conceive of the sudden departure (25 years ago) of the primary employer of professionals — which abruptly closed its offices, research center, and manufacturing plant, leading to a mass emigration of highly paid professionals and their families out of the area.

It is comfortable for researchers to remain in model-land as far as possible, since within model-land everything is well-defined, our statistical methods are all valid, and we can prove and utilise theorems. Exploring the furthest reaches of model-land in fact is a very productive career strategy, since it is limited only by the available computational resource. While pure mathematicians can, of course, thrive in model-land, applied mathematicians have a harder row to hoe, inasmuch as, for large classes of problems, the pure mathematicians have proven that no solution to the problem will hold in the real world.

[Image: reasons for staying in model-land]

Thompson and Smith go on to explore the implications of imperfect models (every model is imperfect outside of pure mathematics).  Of course, in non-linear numerical models, any change in initial conditions, even a minute one, can lead to vastly different projections, an effect which has been called The Butterfly Effect. The effect is shown clearly by UCAR’s Large Ensemble Community Project, which I have previously covered in my essay Lorenz Validated at Judith Curry’s excellent blog, Climate Etc.  If you are not fully aware of what the Butterfly Effect means for Climate Models, you should read the Lorenz Validated essay now, then continue with this piece. [An interesting video example – opens in a new tab or window.]
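
For readers who want to see the Butterfly Effect on their own machine, here is a minimal sketch using the classic Lorenz-63 system (a toy illustration of sensitive dependence on initial conditions, not one of the climate models discussed here). Two runs differ by one part in ten billion in a single starting value, and that is enough to put them on completely different trajectories.

# Butterfly Effect sketch: the Lorenz-63 system, two runs whose initial
# conditions differ by 1e-10 in a single coordinate.
import numpy as np

def lorenz_step(state, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv                     # simple Euler step

run_a = np.array([1.0, 1.0, 1.0])
run_b = np.array([1.0 + 1e-10, 1.0, 1.0])         # the "butterfly"

for step in range(15001):
    if step % 5000 == 0:
        print(f"t = {step * 0.002:4.0f}   separation = {np.linalg.norm(run_a - run_b):.2e}")
    run_a = lorenz_step(run_a)
    run_b = lorenz_step(run_b)
# The separation grows from 1e-10 to the full size of the attractor:
# initial-condition error, not model error, ruins the long-range forecast.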

They describe another problem, developed over a period of years by a group at the London School of Economics (LSE) that includes the present authors, as The Hawkmoth Effect; a poster on it has been shown around various conferences, including at the AGU (Dec 2013) and LSE (2014).  Primarily, the Hawkmoth Effect says that “in a chaotic system if our model is only slightly mathematically mis-specified then a very large difference in outcome will evolve over time even with a ‘perfect’ initial condition[s].” Paraphrased, nonlinear numerical models of complex systems are at high risk of exhibiting Structural Instability, in which small changes to the structure of the model can produce large changes in the outcomes of the models:

[Image: the Hawkmoth Effect poster]
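
And here is the companion sketch for the Hawkmoth idea: this time the two runs start from identical initial conditions, but one of them uses a very slightly altered model, with a tiny change to a single parameter standing in for a mis-specified equation. It is only an illustration of structural sensitivity in a chaotic toy system, not a proof of the LSE team’s claim about climate models.

# Hawkmoth-style sketch: identical initial conditions, but one run uses a
# model whose structure (here, one parameter) is perturbed by one part in a million.
import numpy as np

def lorenz_step(state, rho, dt=0.002, sigma=10.0, beta=8.0 / 3.0):
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

reference = np.array([1.0, 1.0, 1.0])
mis_specified = reference.copy()                  # a "perfect" initial condition

for step in range(15001):
    if step % 5000 == 0:
        print(f"t = {step * 0.002:4.0f}   separation = {np.linalg.norm(reference - mis_specified):.2e}")
    reference = lorenz_step(reference, rho=28.0)
    mis_specified = lorenz_step(mis_specified, rho=28.0 + 1e-6)   # the structural tweak
# A perfect starting point does not save the forecast: the tiny structural
# difference still drives the two model-lands apart over time.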

The Hawkmoth hypothesis is a scientific controversy (mathematical and philosophical), with a series of papers supporting the idea and another series attempting to refute it.  Various efforts have been made to denigrate the Hawkmoth Effect as it applies to climate models (and here), and in the deep-maths world there is pushback on whether the effect is truly ubiquitous.

In the Climate Model field, the approach to handling the Butterfly Effect has been to use “ensemble means”:

“If we have (somehow) perfectly specified our initial condition uncertainty, but have a structurally imperfect model, then the probability distribution that we arrive at by using multiple initial conditions will grow more and more misleading – misleadingly precise, misleadingly diverse, or just plain wrong in general. The natural response to this is then, by analogy with the solution to the Butterfly problem, to make an ensemble of multiple model structures, perhaps derived by systematic perturbations of the original model. Unfortunately, the strategy is no longer adequate. In initial condition space (a vector space), there are a finite number of variables and a finite space of perturbations in which there are ensemble members consistent with both the observations and the model’s dynamics. Models lie in a function space where, by contrast, there are uncountably many possible structures. It is not clear why multi-model ensembles are taken to represent a probability distribution at all; the distributions from each imperfect model in the ensemble will differ from the desired perfect model probability distribution (if such a thing exists); it is not clear how combining them might lead to a relevant, much less precise, distribution for the real-world target of interest.”

[Graphic: multiple climate model projections vs. observations]

It is understandable why there is concern in the Climate Modelling world regarding this continuing Hawkmoth Effect effort at LSE — an effort which started as a PhD thesis (Erica L. Thompson) in 2013 — and is still going strong in this latest paper in 2019. There is no question that the Butterfly Effect is real and operates in climate models (repeating the link to Lorenz Validated).  If the Hawkmoth Effect is real and is shown to be operationally effective in the current collection of climate models, which the image of multiple model outputs above seems to imply, then confidence in long-term climate projections will be seriously shaken.  Different models, initialized differently, produce projections that grow in divergence with time — and none of the models, or scenarios, mirrors real-world observations.  [The graphic includes a Meaningless Mean added by minds happily living in Model-Land.]
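
A toy version of that “Meaningless Mean” complaint can also be run at home: average a handful of diverged chaotic trajectories and the result is a smooth curve that resembles none of the individual runs and is not a possible state of the system. This is only an analogy for the multi-model graphic above, built on the same Lorenz toy as the earlier sketches.

# "Meaningless Mean" sketch: once chaotic runs decorrelate, their ensemble
# mean shows far less variability than any single run; it is an averaging
# artefact, not a trajectory the system could ever follow.
import numpy as np

def lorenz_x_series(x0, n=15000, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    state = np.array([x0, 1.0, 1.0])
    xs = np.empty(n)
    for i in range(n):
        x, y, z = state
        state = state + dt * np.array(
            [sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        xs[i] = state[0]
    return xs

ensemble = np.array([lorenz_x_series(1.0 + 0.05 * k) for k in range(20)])
late = slice(7500, None)                  # look only after the members have decorrelated
print(f"typical single-run variability : {ensemble[:, late].std(axis=1).mean():.1f}")
print(f"ensemble-mean variability      : {ensemble[:, late].mean(axis=0).std():.1f}")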

Thompson and Smith offer suggestions on how to execute an escape from Model-land and thereby avoid some of its pitfalls.   I plan a follow-up essay which will cover Thompson and Smith’s exit-from-model-land strategy and some recent real world examples of what happens when an attempt is made to apply the output of climate models in real world planning.

Escape from model-land [pdf] is an easy read at 8 pages, open access, and well worth your time if you are interested in models, modelling, and how model results are used.

# # # # #

 

Author’s Comment policy:

Judith Curry recently listed the original link to the Model-land paper in one of her “Week in review – science” editions.  Judith’s efforts help to expand the breadth of my exposure to interesting ideas and the latest science – and this is not restricted to the climate field.  Thank you, Judith.

There is a rising movement in other scientific fields to rein in the seeming over-confidence in models.  Reasonable minds are beginning to shake their heads in perplexity as to how we got here — where, in some fields, model projections are demanded by organizations giving research or project grants, despite the known problems and the inapplicability of model outputs to conditions on the ground.  More on that in the next part of the Model-land series.

I do read every comment left here by readers.  I try to answer your questions, supply further links, and discuss points of view reasonably close to being “on topic”.

# # # # #


109 thoughts on “Model-land, Butterflies and Hawkmoths”

    • Bruce ==> It is models that bring us trustworthy weather reports and projections every day. The real issue is to know what a particular model can and cannot do, and the limitations of its quantitative output when attempts are made to apply it to the real world.

      The Butterfly Effect has a huge impact on Climate Models — making long-term, maybe even decadal, forecasts unreliable. CliSci has turned to “ensemble means” to pretend that they can get valid projections — averaging a number of results produced by the chaos found in numerical models of climate and weather [profound sensitivity to initial conditions] has always been nonsensical.

      The LSE team has determined that non-linear models also have another problem — Structural Instability — in which tiny changes in the structure of the model — changing a formula for some small aspect — can also lead to wildly different results.

      This is a serious issue.

      • Kip,
        The paper you link to by Nabergall et al. shows quite clearly that the Hawkmoth effect does not exist, since it cannot be precisely defined in any sensible way. In addition, the authors show that even if the underlying equations are structurally unstable (however you want to define that), it does not follow that the model’s results diverge faster than the divergence caused by lack of knowledge of the initial conditions. Nor did the LSE group show that, for sensible definitions of “small changes” to the model, widely different results follow. Again, this is explicit in the paper by Nabergall.

        • Izaak ==> Good for you! You actually read the paper at the other end of a link!

          Nabergall et al. are certainly of the opinion that the Hawkmoth Effect is not a mathematical entity. These are the deep-math types and their response to the papers being produced by the LSE people. I am afraid their opinion on the matter is not the final word. Like many mathematicians, they want only “pure math” definitions and declare that whatever cannot be stated in an unassailable higher-maths definition does not exist. Gee, why hasn’t the team at the London School of Economics simply folded up their tents and gone home? The fact is, the LSE team doesn’t agree with them — nor do a lot of other scientists and mathematicians in other fields.

          I can’t say I agree with them either — but the area of study is controversial — that’s why I linked to that paper as an illustration of some of the contrary ideas.

          • Kip,
            The LSE team nowhere define precisely what they mean by the Hawkmoth effect, and the examples they do give are very contrived. Mathematicians talk about topological stability of equations, and there are lots of examples of equations that display topological instability. The LSE team also appear not to realise that the real numbers in any finite interval are as uncountable as functions in a function space. Both sets have the same cardinality. If the LSE want to show that the Hawkmoth effect is real and important then they need to define precisely what they are talking about and show it exists in real examples.

          • “In neither route to escape from model-land do we discard models completely: rather, we aim to use them more effectively. ”

            I believe that when the object of the model is as complicated as the system of the earth and you are using it to forecast climate, the only useful thing you can get out of the model is to run the simulations and develop hypotheses based on certain peculiar results of the simulations. These surprising results have to be tested against real-world observations for validation. If that is not possible then they are useless. The fault of climate scientists is to trick themselves into believing that the simulation found some real-world process that then becomes established physics. This happens in other fields. Dark energy and dark matter come to mind.

            friendsofscience.org/assets/documents/Gilbert-Thermodyn%20surf%20temp%20&%20water%20vapour.pdf …

            “The physics embedded in the GCM models predict a constant relative humidity throughout the troposphere as surface temp increases.” THE MODELS ARE WRONG. This mistake has existed since the first paper on it, in 1967, by Manabe and Wetherald. Gilbert’s paper goes on to say, “The physics of PV work energy in the atmosphere results from the release of latent heat under the influence of gravity.”

            Forecasting weather is different, as the paper pointed out. In that field, being right more than 50% of the time over a limited forecast window has some economic and social benefit.

    • I don’t quite understand why climate modelers haven’t taken a practical approach to developing models. Taking multiple different models based on the same physical approach (1-degree rectangular grids with limited boundary conditions), then making multiple runs on different models and averaging the results, really doesn’t mean anything, since the model results aren’t statistically suited to averaging.

      A more logical method would be to make a 1-degree hexagonal grid. Select the model variables that can be modeled, and then make multiple runs of the same model, starting with values from, say, 1900, with the starting values differing in the last significant figure at the machine limit – the smallest change that can be made. Use factorial design methods to choose the most significant variables and optimize those for stability. (They will still blow up, but hopefully the stability can be improved to last at least a decade.)
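
      For what it is worth, the smallest change that can be made to a double-precision starting value is one unit in the last place, which numpy exposes directly; the starting temperature below is a made-up number used purely for illustration.

# Smallest representable perturbation of a double-precision starting value.
import numpy as np

t0 = np.float64(288.15)                  # a nominal surface temperature in kelvin (hypothetical)
t0_up = np.nextafter(t0, np.inf)         # the very next representable double
print(f"smallest possible perturbation: {t0_up - t0:.3e} K")   # about 5.7e-14 K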

      The entire approach taken by the modelling community is founded on totally opaque principles, with no real goals.

      • Philo ==> Let’s see if a modeller is reading today, and see what they have to say about that.

  1. One would think the best approach to chaotic systems would be to look for something like attractors. Run many many simulations with slightly altered initial conditions and see if some outputs are more likely than others.

    • To be honest for a chaotic system yes, you want to start in every possible place and see if your series converges around an attractor, and/or several attractors.

    • commie & Leo ==> The climate has already presented us with “the attractor” (or maybe “the attractors”) of the climate system — it is the past. That simple — the past is the true output of the living climate model — Earth’s climate past is the attractor(s).

      But, because the climate system is a coupled non-linear system — with at least two major non-linear systems interfaced to one another, the atmosphere and the oceans — being able to discover the attractor(s) is very complex and difficult. CliSci is working on exactly that problem, even if they don’t know it. The discovery of the major ocean cycles, the atmospheric cycles, the Stadium Wave cycle, w.’s tropical thunderstorm mechanism — and the few hundred things we haven’t discovered yet — are all included inside the system that produces the climate as an attractor for the total system.

      • Kip – which ‘past’?

        Ice age past, climate optimum past or any of the many other past conditions in between?

          • John in Oz ==> This planet’s past… and yes, the whole thing.

          I’m not a fan of any of the multi-verse ideologies.

  2. ” our simulations are perfect, an attractive fairy-tale state of mind….” EEeeeewwwww!

    Someone please tell me what chemicals these people are ingesting. Is it the red pill or the blue pill?

    Whatever it is, I don’t want them or anything they touch anywhere near me.

    • It’s an elixir brewed from a conflation of logical domains with special and peculiar secular seasoning.

    • Sara ==> They are speaking sarcastically, mockingly, of those who have been beguiled by computer models. They want to undo what that mindset has caused and lead science back into the real world with a better understanding of what models can do and what they can’t do, and what we should take away from model output.

      • Oh! That is a relief, Kip Hansen!!!! I thought it was a sort of “Wild, Wild West” game online that would pull in the naive and innocent and turn them into zombies. (I have heard of stranger things.)

        Thank you for relieving my mind. 🙂

  3. The Formula One team that could not afford a wind tunnel and used CFD (computational fluid dynamics) to model the airflow over the car instead was spectacularly last in the championship.

    Compared with climate, F1 car aero is about a million times easier to model.

    • Leo ==> Now you’re getting the right idea. CFD models are good at one or two things — the main one is establishing where the non-linear fluid flow system breaks down into its chaotic realm (which usually results in airplanes and things shaking themselves to pieces).

      Airplane designers have learned their lessons and keep their designs well away from the transition to chaos.

      • I would challenge anyone confident of Climate Models to fly in a plane tested solely in models. When they refuse, I would ask why.

      • High performance fighter aircraft tend to be built right at the edge of chaos. That is why they can be so danged hard to fly. When you want the craft to be able to do crazy maneuvers, you build ever closer to the edge. Most modern fighters can’t be flown without the flight computer keeping them stable.

        • ALL modern fighters have been designed with negative stability, kept under control by computerized flight systems. The design philosophy started with the F-106 center-of-gravity system for supersonic flight. The pilot essentially gives the flight system “requests” and it responds with what the airplane can do to fly as requested. The last major plane that still had hydraulic flight controls was the DC-9. The Boeing 757 and 767 both still had direct flight controls with plenty of computer augmentation, but they could be flown solely by the mechanical controls. Every airliner since, and every fighter starting with the F-16, has only computer control. They simply cannot be flown by a human pilot because the pilot cannot react with millisecond-or-less adjustments of multiple control surfaces.

          • Philo ==> Well, that is interesting….with the news this week about the Boeing jet liner.

  4. Where models seem to often fail is where the underlying understanding of what affects the phenomenon one is trying to model is lacking. If one is missing something that is a significant influence, model making does not seem the way to discover what that factor is, other than that it exists.
    Throw in chaotic relationships, and the models get even worse.

    • Chaos, incompletely, and, in fact, insufficiently characterized and unwieldy non-linear processes. Also, assumptions/assertions (e.g. independence, uniformity, invariance) that are only valid in a limited frame of reference.

    • Tom ==> Things aren’t so bad as long as we acknowledge the limitations of models and don’t pretend that their quantitative output represents the real world. Models, even toy models, can help well-grounded scientists discover new things about the climate by posing problems to the model and analyzing the results — but they mustn’t fool themselves into thinking that the model output represents real climate states in the real-world future.

      It is possible to get “good enough” results from a model over short time periods — weather, hurricane track forecasts, maybe even rainfall forecasts for regions. Running them into their chaotic realms produces chaotic output which relates only to the breaking point of the model. (See above about CFDs.)

      • Certainly, for a limited range of conditions, models work fairly well. Until the conditions go outside the simple relationship range and fall into a chaotic condition, as with some airflow models.
        Climate does look a wee bit more complex, so modeling something one does not quite understand with factors that might go chaotic is rather difficult.

        • Tom ==> Yes — this paper puts forth the concept of the Hawkmoth Effect — alluding to the Butterfly Effect, of course. Hawkmoth is the concept that some models — including climate models — are inherently Structurally Unstable — that small changes to the structure of the models (like one of the underlying linearized non-linear formulas) can produce wildly different outputs from the model.

          CliSci has tried to circumvent the Butterfly Effect by finding the mean of chaotic model output and calling it “Macaroni” — actually pretending that the mean of chaotic outputs represents the most likely projected future with the wild range of possible projections labelled as “natural variation.” There is no reason to believe that this is the case.

          To circumvent the Hawkmoth Effect, CliSci seems to be trying ensembles of models then finding the mean of the ensemble outputs — as if that is a real world projection — even sillier.

          • It is like airflow models, which work well until the conditions become turbulent. In that limited range, the model works well, but lumping in where the model goes chaotic looks indefensible.

      • ” … but they mustn’t fool themselves into thinking that the model output represents real climate states in the real-world future. … ”

        As per natural variability’s endless quirks, there seems to be a natural group of humans who can’t seem to do that too well. Thanks for the article Kip, an interesting paper.

        • WXcycles ==> Thank you — I wish more readers would understand that CliSci is not some sort of ideological battle (“This is not the 1960s!”); there are a lot of interesting things to be learned — some of them are True, some true, and some just interesting.

          The LSE team is solidly in the Climate Team’s camp — they attend their conferences, present at their meetings, etc etc. But the Hawkmoth team there has not thrown in the towel on this bit of Truth just because there are a few detractors out there. The Hawkmoth Effect + the Butterfly Effect = some serious limitations for Climate Models.

          Of course, the limitations are seen in the results, here and elsewhere, quite regularly. Eventually some honest Climate Modeler will break ranks and admit to the problems, and we will see far less of the “disastrous futures” nonsense that “our models show that….”

  5. I’d be interested in your comments about the ‘Fluid Catastrophe’ – reference here at http://blackjay.net/?p=588

    My take on it (I am a maths dolt so cannot begin to follow the thing very far inside) is that, because Fluid Dynamics hasn’t made it into the Quantum Physics world, any model using its present state of existence is by definition deterministic, not stochastic, and is therefore incapable of modelling turbulence in any form. And as Climate is nothing if not Turbulent, it follows that the modelling is inherently faulty.

    Newton’s ideas were successful when applied to the dynamics of planetary motions because friction and turbulence are negligibly small at planetary scales. Planetary motions are, to all intents and purposes, deterministic and amenable to a description in terms of differential calculus. Machinery is intentionally designed to minimise friction and turbulence and to be amenable to a deterministic description. This even applies to semi-conductor design where so called “race conditions” are eliminated in order to preclude any possibility of stochastic behaviour in electronic components.

    But it is not true of fluids. Stochastic behaviour in the form of turbulence clearly plays a major role in the dynamics of fluids. The Navier–Stokes equations, which describe fluids in purely Newtonian terms, fail in high Reynolds Number regimes where turbulence is generated. This is the “Fluid Catastrophe”. In effect the Quantum Revolution bypassed Fluid Dynamics, whose practitioners still cling to the 19th century idea of the continuum.

    The belief that any real fluid can only be dealt with as a deterministic, Newtonian continuum has had a stifling effect on development. Fluid Dynamics has become the province of Applied Mathematicians who are skilled in the manipulation of partial differential equations but in very little else. They are not trained to perform experiments. They do not have an empirical, “Popperian” outlook. They are mathematical Rationalists who only pay lip service to the scientific method. They are not really scientists at all, but they think they are.

      • I’m clearly very late to this party:-)

        I’ve read all of the references now. Fully explained, great piece of work.

        Thanks for your gentle reminder to RTFM. And keep up the work: it needs endless repetition to break through the otherwise fixed convictions, established and horrendously expensive proposals for e.g. Managed Retreat from oceans which may rise meters in a few years’ time, and the ’12 years till we All Fry’ meme.

        • Wayne ==> The Dedicated Alarmists have retreated in recent years from the fireball Earth projections and have established their beachhead at SEA LEVEL RISE — unfortunately, the actual scientific support for the Alarmists on sea level rise is very weak — but at least they can show that a rising sea can be problematic — not because the seas are rising in any alarming way, but because humans are dumb and have a hard time learning from the past. The Midwest is flooding — but it is flooding on the flood plains, which are now cities and farms.

    • Chemical engineers, for one, use the Moody chart to deal with turbulent flow in pipes. The real trick is to avoid the transition region where flow changes from laminar to turbulent. The fluid behavior can then depend on which way you approach the transition, and if you stick around the transition region, bad things can happen in heat transport and chemical reactions.
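
      In code form, that split shows up as formulas that are only trusted on either side of the transition: the laminar 64/Re result and the Swamee–Jain fit to the Colebrook equation for turbulent flow. The sketch below is a rough illustration of that practice, not a design calculation.

# Darcy friction factor from the Moody chart's two trusted regimes; the
# laminar-to-turbulent transition region is deliberately refused.
import math

def darcy_friction_factor(reynolds, rel_roughness=1e-4):
    if reynolds < 2300:                              # laminar: f = 64/Re
        return 64.0 / reynolds
    if reynolds > 4000:                              # turbulent: Swamee-Jain approximation
        return 0.25 / math.log10(rel_roughness / 3.7 + 5.74 / reynolds ** 0.9) ** 2
    raise ValueError("transition region: flow behaviour is not well defined")

print(darcy_friction_factor(1_500))      # laminar pipe flow
print(darcy_friction_factor(100_000))    # fully turbulent pipe flow
# darcy_friction_factor(3_000) raises an error: the model declines to answer there.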

  6. Well! I disagree!! The video labeled “An interesting video example” above is entirely predictable: after three minutes and thirty-nine seconds, due to friction (think CO2), all six thingamajigs came to a complete stop!

    • Steve ==> Even really neat demos have to end sometime…sad but true. Just hope the climate doesn’t just stop….

  7. The butterfly effect and the hawkmoth effect do not help the skeptic cause.

    Policy makers will have to make decisions under uncertainty. To do this they seek information from decision-support experts.

    1. You present your model results. they are uncertain
    2. You present the structural unknowns ( hawkmoth types)
    3. You present the best understanding from observations
    4. They ask for your expert opinion.

    Given all this they make a decision.

    Here is what they dont do

    They dont ask Kip what he thinks
    They dont read blogs.
    They dont ask dudes who havent published in years what they think.

    • No, they don’t.

      They program models to show warming based on CO2. They adjust everything else to match supposed historical records as near as possible.

      When models fail to match current values, they happily adjust historical values to match the models, and/or adjust the models to fit and call it a good match.

      Of course that must be wrong because all adjustments to historical records are apparently reducing the rate of warming (according to some…).

    • Mosher ==> If only that were true in the real world……

      What I think they don’t do is say “Well, our models don’t predict the present very well, they predict a very wide range of results constrained only by the limits written into the code (or they’d be even more all over the place), different models produce wildly different and even conflicting results so we really can’t tell you very much except that it’s complicated and predictions of the future are hard.”

    • “To do this they seek information from decision support experts.”

      What happens when these “decision support experts” have no way of demonstrating any level of expertise on the question at issue – when their expertise is just a facade for the terminally gullible?

      In reality the “expert” is usually just a professor or other academic, who has nary a single real-world accomplishment to their name, opining on something for which it’s not possible to know with any degree of certainty at all. The politician (or journalist) simply calls them an “expert” despite the absence of any demonstrable expertise, because their “expert judgment” by some freak coincidence just happens to align with a policy that the politician (or journalist) wants enacted.

        • Sy ==> The Inc. article is quite good, and their “new model for sustainable economic growth” contains items that are win-win and no-regrets.

          Thanks for the link.

          Note that whenever a political movement can create uncertainty and unrest, then there is money to be made by those who have the nerve to step up. In this case, selecting the right economic zones can actually improve the world a bit.

  8. Ah yes, we have climate modelers, often called climate scientists for some odd reason, who are experts on their models. Unfortunately, that appears to be where their expertise ends. As such, any relationship between climate models and the climate of planet Earth is largely a coincidence.

    I’ve quit calling these people scientists. They don’t have any idea how real science works. Until they can understand the differences between their models and the real world, their models aren’t even useful.

    • Richard ==> I try not to conflate serious scientists who have been beguiled by models and their really cool visualizations — those living and working exclusively in Model-land — with those scientists who know better yet use model outputs [which I believe they know are not real] to promote panic and fear about the future of Earth’s climate.

      Models are interesting and useful in limited contexts — and can be used responsibly. For some, they are like a drug and lead to irresponsible use and interpretation.

  9. Models are simply articulations, so to speak, of understanding. It’s impossible to model something that is not thoroughly understood, because, after all, a model is just a computer program, and anyone who has programmed computers knows that you have to get every detail right, and a computer is not going to do something it wasn’t instructed to do. It’s not like a Google search where you can misspell a word and Google software can usually process your intended meaning. Programming (modeling) doesn’t work that way.

    As I see it, the problem with climate models is that they are highly reductionist – the vast complexities governing climate are ignored to focus only on basic radiative physics; most everything else is parametrized (i.e., fudged) to get the answers climate scientists want. I think models are hugely seductive to career-oriented scientists (as opposed to truth-loving scientists, whether professional or amateur) because they allow the limitations of rigorous observation-based science to be bypassed in order to produce results that justify professional privilege. Unfortunately, many times those results are faux/pseudo knowledge that merely serve selfish career interests rather than the advancement of true knowledge for the common good.

    • icisil ==> If you read the original paper, Escape from Model-land, you’ll see you are in good company. This paper is not the first on this topic — a team at the London School of Economics started on this in 2013.

      The Battle of the Hawkmoth is being fought in the journals and on some blogs. It looks a lot like the Battle of the P-value — in which serious statisticians are attempting to change the meaning of “statistically significant” and what determines a real effect.

      • Reading through it now. My first impression is that it reminds me of books I read years ago – Have Fun At Work and Friends In High Places by William Livingston

  10. Kip– What will happen when these types of models are combined with the self-learning artificial intelligence systems like those we’ve seen succeed spectacularly in Go or Chess (like AlphaZero)? Perhaps those systems might find unexpected ways to deal with the chaos and create more successful models of climate systems?

    • It’s impossible to model something accurately that’s not clearly understood. Man has no ability to create intelligence that understands something he doesn’t. The phenomena governing climate are so vast and complex that climate scientists don’t even know what they don’t know.

    • TDBraun ==> My 14-year-old son wrote a self-learning program that was able to learn the coin game “15” over ten games or so, to the point that it was impossible to win against the program. Self-learning programs might actually be able to predict weather over a few days, and I suspect that many meteorologists use weather-predicting software (think of the weather radar window on a national weather site that projects the radar a few hours into the future).

      Numerical climate models will never be reliable in projections of the distant future until modellers find a real way to keep models from hitting chaotic states, overcoming the Butterfly, circumventing the Hawkmoth, and gaining a great deal better understanding of the Earth’s climate. In my opinion, this is very unlikely to happen, for reasons you can find in reading the links in the essay.

      • For weather applications the model does not have to be perfect or precise, it only has to be accurate, and in some regions at some times they’re shockingly accurate. If the aim is to make them more accurate, that’s all we need (or can hope for) because a forecast is always an indicator of a possible need to plan to counter a trend, or even to leave an area. An accurate trend prediction is incredibly useful for significant weather events.

        Models I used ten years ago that were then usefully accurate are now passe, while newer models that were more comprehensively researched, tested and developed are proving to be amazingly accurate, in detail. What this has shown is that chaos may control trend forecast accuracy, yes, but the chaos is being steadily tamed and the trends are becoming remarkably accurate for longer.

        10 years ago models were providing surprisingly useful and accurate trend indications out to 4 to 5 days. Now you can get useful, accurate forecasts for 7 to 8 days, with some heads-up indications out to 10 to 12 days.

        In other words the “butterfly-effect” concept has been significantly overstated with respect to useful weather forecast model accuracy. It is not the legendary show-stopper meme it’s been painted to be. There are a lot of things wrong with models, but they’re mostly in the way they’re being used or abused by humans or organizations rather than a fundamental incapacity for them to be useful and highly beneficial.

        Garbage in = Garbage out

        Sure, however, it’s also true that:

        High-quality data + High-fidelity WX forecast model = High-quality weather trend forecasts

        That’s what we also need to be equally clear about. No need to escape model-land if you feed, develop and use models properly and recognize the omissions and limits. They’re a situational awareness tool, and a very good one. Only a dope is going to presume reality will perfectly mirror a model, but you can certainly get a useful reflection out of a model that reality will more or less agree with most of the time. When you can do that globally, and they keep getting better, I see no reason to discount models.

        I do discount climate models though, because their forward predictions of trends are untestable and existing ones clearly don’t work in any useful way. It’ll be 1,000 years before they can be taken in any way seriously.

        i.e. not useful for humans.

        • WXcycles ==> “That’s what we also need to be equally clear about. No need to escape model-land if you feed, develop and use models properly and recognize the omissions and limits.”

          This is very, very close to the message of the LSE Hawkmoth Team. They point out that some models are “weather-like models” and some “climate-like models.”

          Weather forecasting falls into the linear range of weather models. Out too far and the Butterfly comes to roost.

          The posited Hawkmoth Effect may explain the wide range of projections from different models — which are not truly independent [except possibly the Russian model] — the slight structural differences between them may invoke the Hawkmoth.

          The Hawkmoth Team (it is really part of the CATS — Centre for the Analysis of Time Series) is a Climate Team unit — they have videos in support of mainline climate themes.

  11. I have been doing weather forecasting for more years than I care to remember. I have grown up with the weather models, back from the days of the barotropic model. I have done programming in atmospheric models, done lots of verification. I still do contract forecasting. The models are an integral part of forecasting. I dare say we could not forecast weather much without them. However, meteorologists recognize their strengths and weaknesses. They are a tool, a very valuable tool.

    In addition to looking at one or two models, I do also use ensemble forecasts, be it different runs of the same model or a number of different model runs. I generally use the ensemble forecasts as a way to gauge confidence in the forecast. If the various ensemble runs diverge greatly then I ascribe less confidence to my forecast and explain that to the clients. Conversely, ensemble runs that show little divergence give me greater confidence in my forecast.

    What I have found through experience is that taking the ensemble average seldom gives the best results. The actual outcome tends to be closer to one of the runs, which may well be one of the outliers. Different weather patterns will give different spreads in the ensembles. Some weather pattern are quite stable and easier to forecast, while some are much more chaotic.
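
    That use of spread as a confidence signal, rather than the mean as the answer, is easy to sketch; the thresholds and member forecasts below are made-up numbers, not output from any real ensemble.

# Ensemble spread as a rough confidence signal for a forecast.
import numpy as np

def spread_confidence(member_forecasts, tight=1.0, loose=3.0):
    """Map the ensemble standard deviation onto an arbitrary confidence label."""
    spread = float(np.std(member_forecasts))
    if spread <= tight:
        return spread, "higher confidence"
    if spread >= loose:
        return spread, "lower confidence"
    return spread, "moderate confidence"

# Ten members' forecasts of tomorrow's high temperature, in degrees C.
print(spread_confidence([21.8, 22.1, 21.9, 22.4, 22.0, 21.7, 22.2, 22.0, 21.9, 22.3]))
print(spread_confidence([18.0, 24.5, 20.1, 27.3, 16.8, 23.9, 29.0, 19.5, 25.2, 21.0]))
# Tight agreement earns more trust; a wide spread is reported to the client
# as lower confidence. The ensemble mean itself is not taken as the forecast.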

    • Max ==> Yes, weather forecasting with reliable models is possible — as mentioned in both the essay above and in the paper the essay discusses. Weather models run into the further future hit the speed bumps of chaos — extreme sensitivity to initial conditions and the necessary use of linearized approximations of non-linear equations — and become unreliable, with results that are not transferable to the real world.

      Thanks for sharing your professional viewpoint.

    • “However, meteorologists recognize their strengths and weaknesses. THEY ARE A TOOL, a very valuable tool.” (Caps mine)

      Yes indeed Max.
      Something that WUWT denizens don’t get, as regards GCMs.
      They are not the science but rather tools for the furthering of it, and they are under constant improvement (which will come as much with computational power advances as anything else).

      BTW: I agree entirely with your post.
      (I was a meteorologist with the UKMO for 32 years).

      Kip: would you like to provide confidence intervals for the observational data on the Christy/Spencer graph?

      Mr Christy and Mr Spencer failed to do so.
      That wouldn’t be acceptable if on the “other foot”, now would it?

      Could you also bring it up to date by another 7 years please?
      To see where we are now that the “hiatus” is long over.

      • Anthony Banton ==> The Spencer graph (model projections vs observations) is used, as explained in the text, as an illustration of the wide range of outputs from climate models. Climate models have not yet shown any tendency to coalesce on narrow future states. Spencer does provide other up-to-date comparisons of Models vs. Observations on his own blog.

  12. “On the other hand, if I have a model of the real estate market for multi-bedroom apartments at the high-income end of the market, in which returns are to be measured on a multi-decadal scale, this might be considered a “climate-like” model — in which the past might not be a good predictor of the future, with no ready opportunities to test the model against the real world. Thus, the authors posit, I would need to take into consideration expert judgement rather than depend on the quantitative output of the model alone.”

    This makes no sense. If there are insufficient real-world opportunities to test the efficacy of the model, why would you presume that there were sufficient real-world opportunities to develop the skill of the “experts” in the first instance? Models have to prove their skill, but experts don’t? A person gets deference as an “expert” because of the letters after their name or the number of hypothetical research papers they have written?

    If a model can’t be validated, neither can the people who presume to be experts.

    • Kurt ==> To be absolutely honest, my real estate model may not be the best choice, but it is a real-life example. What I think you have not grasped is that the “experts” from whom we might seek expert opinions are surely NOT the same model-land crew that built and informed the model. In my particular example, the past supports more apartment housing, but if they had sought true expert opinion from a number of experts, they may have been able to supply the caution needed, which would have required knowledge of the ongoing changes in the attitudes of corporate America towards its employees; those attitudes were paternal in the 1960s and in the 1980s began to shift away from that care-for-employees position. Corporate attitudes towards the workforce would not likely be something that would have been included in the real estate model, but can be (and turned out to be) a major influencing factor.

      Your point is well taken, though: if one misidentifies the sources of expert opinion as those who built the models, then you can really get in trouble. Note that the composite “opinion” of the modellers is the model itself, which is already in evidence. You need outside, broader expert opinions to inform you and ground-truth the model output.

      • All I’m saying is that to even qualify as an expert, that person needs to have first demonstrated that expertise in the real world. Certainly experts do exist across a wide range of fields. True handwriting experts can actually demonstrate their ability to accurately attribute handwriting samples to individuals. Expert marksmen can hit targets at distances few other people can. There are many such examples of actual expertise, but in all of those examples, the real world provides the individual with the opportunity to both gain expertise through practice and feedback, and to demonstrate the expertise once gained.

        But the Earth’s climate system is not a subject in which any person can ever gain or demonstrate expertise. Life’s too short. Climate scientists don’t hold conventions on star-filled nights in a Nevada desert and call down thunderstorms. Climate scientists don’t show up with a stack of references from alien civilizations testifying to how well they were able to engineer climates on other planets orbiting distant stars. The only means at a climate scientist’s disposal to demonstrate expertise with respect to the Earth’s climate system is to predict future changes in our climate, and since we can only measure changes in climate by averaging data over many, many decades, it is simply not possible for a person to gain or show expertise in climate in such a pathetically small period of time as a human lifespan.

        Climate scientists try to use mathematical models for this purpose, but the models either explicitly fail or are indeterminate because many decades or even centuries would have to pass to verify the predictions. Since the models, programmed with all the knowledge of the climate scientists, still can’t predict climate changes, why would any intelligent person presume that a person’s “expert judgment” can do what the climate models can’t, and that we shouldn’t even ask that this “expert” first prove they are good at predicting climate changes before we even bestow them with the title of “expert?”

        My issue with your real estate hypothetical is a strictly logical one – if a real estate model for a subset of a market can’t be validated, because the real world doesn’t provide sufficient opportunities to reliably test the model, as your hypothetical states, how can the real world have provided sufficient opportunities for an individual to both gain and demonstrate expertise in that same subset of the market, and to pass judgment on some given output of the model? Weather models, for example, can be tested in short time frames and this provides the opportunity to test both the model and the weather forecaster’s judgment on what circumstances the model can be relied upon. But in your example, this circumstance does not exist.

        You seem to be taking for granted the existence of expert individuals who can know when to rely on models, and when not to rely on models, whereas I see expertise as something that has to be proved at the outset, and for the specific issue the “expert” is exercising judgment on.

        • Kurt ==> You have correctly identified one of the problems with expertise — how to select those with expertise in a specific field, when that field is operating under deep uncertainty.

          Do not make the mistake of identifying modellers with experts, or denigrating real-world experts because they are not omniscient.

          There are experts in the climate field, but usually in its many sub-branches. Nils-Axel Mörner is an expert on sea level rise — not all-knowing, but he knows what he doesn’t know, for the most part. So he could be used as an expert to ground-truth sea level predictions.

          Most climate modellers themselves are not coding geniuses — they use these geniuses to do the coding. And the climate scientists that drive that effort are themselves not experts across all fields. But you see, there are true experts in sub-fields that can add their expertise to model output to improve the situation.

          • I don’t think I’ve ever denigrated people who are experts. In fact, I’ve acknowledged that experts do exist across many areas of expertise (handwriting, marksmanship, weather forecasting and many others). Nor have I implied that they must be omniscient – I only indicated that to be considered an expert, a person must be very much more skilled at doing something in their area of expertise than is an ordinary person – a standard far short of omniscience.

            And it’s in that careful definition of expertise where I think we differ. You had mentioned “real world” experts and wrote of “true experts” who “add their expertise . . . to improve the situation.” In the passage that started our discussion, you had stated a “need to take into consideration expert judgement rather than depend on the quantitative output of the model alone.” So I assume that you implicitly acknowledge that expertise is something distinct from mere academic knowledge about a subject, and instead relates to a skill or ability (cf. “judgment”) at doing something. But then you retreat from the idea that experts should first have to demonstrate that skill before we call them experts and accordingly rely on their judgment.

            So when you say that Nils Morner is an expert at “sea level rise,” I don’t know what that means – you haven’t indicated any expert ability of his, but just stated a subject and asserted that he is an expert in it. Tell me what he can do with respect to sea level rise. I assume that you don’t mean that he can control it. Can he predict it, and if so can he predict it more accurately than me just saying that the historical rate is going to continue indefinitely, which is the easy call? Or do you just mean that he’s carefully documented it? If the latter, I think that’s being knowledgeable, but I don’t see that as strictly being expertise, because in and of itself there is no demonstrated use to that knowledge.

          • Kurt ==> I didn’t mean to criticize your view, just to add to the discussion. One can’t use the modellers as the givers of expert opinion as they wrote the models (for the same reason that you can’t test a model with the same data that was used to tune it).

            You may web search on Nils-Axel Mörner to see why I consider him an expert.

            Who might be an expert in what, and on what basis, is far too philosophical for discussion here.

  13. Slightly off topic but there is a YouTube creator, ‘The Spiffing Brit’ (or words to that effect) that makes content based on the idea of taking a computer game and exploiting a flaw in its modelling for comic results.

    Most of the games he destroys are the ‘world building’ types. He has done things like playing Prison Architect and turning his ‘prison’ into a giant forestry plantation (sans actual prisoners) to make a fortune, and Civ VI, where if you have the correct combination of buildings and political edicts the game allows you to raise units for free (coupled with a trading bug where the AI nations will sell you resources before happily buying them back off you at a massive loss, or to word it another way, Free Money).

    So if teams of professional code monkeys programming relatively simple models can screw that up – and remember computer gaming is a billion-dollar industry despite what your personal view of it may be – then how does anyone honestly expect someone to correctly model ‘climate’?

    • Craig from Oz ==> Climate Models are certainly not perfect creations — and even if they were perfect-ish, they would still have to contend with the Butterfly and the Hawkmoth.

      That said, there are some weather models that are quite good, and we are the recipients of their services every time we log on to “check the weather”.

      Complex computer programs, like modern games, are very hard to test to the level where the game really covers all contingencies. I have a close friend who has spent his career as a Crasher — working for a very large international computer business — paid to find all the weaknesses of their major programming products. He never runs out of things to do and no program has ever survived his toolbox of crashing tools. Once he has them crashed, he sends them back for repair.

  14. Climate Change and the Death of Science

    climate change models are a form of “seduction”…advocates of the models…recruit possible supporters, and then keep them on board when the inadequacy of the models becomes apparent. This is what is understood as “seduction”; but it should be observed that the process may well be directed even more to the modelers themselves, to maintain their own sense of worth in the face of disillusioning experience.
    …but if they are not predictors, then what on earth are they? The models can be rescued only by being explained as having a metaphorical function, designed to teach us about ourselves and our perspectives under the guise of describing or predicting the future states of the planet…A general recognition of models as metaphors will not come easily. As metaphors, computer models are too subtle…for easy detection. And those who created them may well have been prevented…from being aware of their essential character.
    https://buythetruth.wordpress.com/2009/10/31/climate-change-and-the-death-of-science/

  15. In 2013, back when he was a skeptic, Robert Brown of Duke University wrote an interesting paper about the statistical folly of averaging model outputs to obtain a model mean. See https://wattsupwiththat.com/2013/06/18/the-ensemble-of-models-is-completely-meaningless-statistically/

    Also, a paper was published on WUWT some time ago showing the output of a single climate model with starting temperature (I believe) varied a very small amount, on the order of 0.1 or 0.01 degrees. I cannot find it, but it showed very large variations in model output.

  16. A perfect description of what is inside the heads of folks who engage in Magical Thinking.

    Where chronically & chemically depressed minds very effectively brain-wash themselves.
    They fixate on ‘something’ then:
    they cherry pick
    they seek out like-minded souls
    they relentlessly seek out & confirm biases.
    The chemical depressor reduces inhibition (by definition) and thence promotes violence – verbal as well as physical.
    Hence the ‘D’ word = little hesitation in the summary shooting of messengers carrying adverse information

    Do your research – ask any police-person (UK-wise, best to go look for them inside fast cars on busy main roads; they seem to be invisible everywhere else) and they will say that the vast majority of violent acts take place under the influence of a very potent chemical depressor.
    Something they drank.

    It is ‘something we eat’
    It is something we all eat
    Something we no longer have much choice about eating because there is little else *to eat*
    We are all trapped in a land of Magical Thinking = Model Land
    Even skeptix

    Thanks, Ancel – you have very effectively poisoned an entire generation, and your work is raging through the next one also.

  17. Doesn’t this all boil down to the old saw of “the map is not the territory”? You can make the map as complex as you want, but it will never actually be the territory. There will *always* be some difference between the map and the territory. The question is just how big that difference is. The map can indicate the change in elevation on a hill, but it can’t tell you whether the hill is covered in pasture you can walk through or in brush so thick it is impenetrable.

    • Tim Gorman ==> One can only wish it were that simple. Much of what passes for commerce, business, and public policy is based on computer models of the matter at hand, some of them better than others.

      The LSE’s “weather-like models” are models that can be tested over a brief period and their strengths and weaknesses noted so that they can be used properly.

      “Climate-like models”, on the other hand, can only be tested against the past or present — on which they are already based. You cannot test a model with the data used to create it; that’s not a test. Models have to correctly predict things that were not used to make them — such as the true future. It boils down to this: climate-like models are very hard to test, because one has to wait for the future to take place. So, if you can wait 50 years, we can see whether a given climate-like model is any good.
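
      A minimal sketch of the in-sample versus out-of-sample distinction, using an invented toy series and a straight-line fit (nothing below comes from any real climate model or dataset):

      import numpy as np

      rng = np.random.default_rng(0)
      t = np.arange(70)                    # "years" 0..69 of a made-up record
      truth = np.sin(2 * np.pi * t / 240)  # a slow, flattening rise (pure invention)
      obs = truth + rng.normal(0, 0.02, t.size)

      # Tune on the first 50 "years"; hold the last 20 back as the unseen "future".
      fit_t, test_t = t[:50], t[50:]
      fit_obs, test_obs = obs[:50], obs[50:]

      slope, intercept = np.polyfit(fit_t, fit_obs, 1)  # the "model": a straight line

      def predict(x):
          return slope * x + intercept

      in_sample = np.sqrt(np.mean((predict(fit_t) - fit_obs) ** 2))
      out_sample = np.sqrt(np.mean((predict(test_t) - test_obs) ** 2))
      print(f"RMSE against the data the line was tuned on: {in_sample:.3f}")
      print(f"RMSE against the 20 'years' it never saw:    {out_sample:.3f}")

      Against its own tuning data the line looks fine; against the held-back “future” it does not, and only that second number says anything about predictive skill.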

      IT IS NOT JUST CLIMATE MODELS — any model that has that similarity is in the same boat.

      The Butterfly Effect is well known and well supported. The Hawkmoth Effect is still in the stage of being a scientific controversy.

      • Kip – “The map is not the territory” really has little to do with either the butterfly effect or the hawkmoth effect.

        I understand that weather models have become more complex and better over time. Living here in the central US, however, we get proof every day that the weather models still have a long way to go. Rain/no rain at the county level is still not very accurate 24 hours into the future. Same for snow, wind, etc. The territory – major river valleys, interstate highways (which many times follow the geography), and so on – certainly affects the path of weather, but it’s not obvious the weather models do a good job of integrating these features of the territory. I grew up with farmers in Kansas who could forecast tomorrow’s weather just as accurately using their own models based on things like animal activity and their arthritis.

        Models will never be the territory. You can test for how well the model predicts things but you’ll never get to 100%. If you can get to 75% it’s probably a good enough model for most people.

        If a model can’t predict tomorrow then how can it predict 50 years from now? If you can’t get the near future right then how do you get the far future correct? Pure chance?

        • Tim Gorman ==> “The map is not the territory really has little to do with either the butterfly effect or the hawkmoth effect.” Really, that has nothing to do with the Butterfly or the Hawkmoth. If something I wrote gave that impression, let me know; it shouldn’t have.

          The Butterfly makes long-term numerical forecasting impossible — probably. The IPCC says: “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.” They are right.

          The Hawkmoth Effect may explain why the different climate models have such divergent outputs.

  18. Models are an excellent tool for assessing things …. the problem arises when people become so attached to their pet theories that they begin to believe their model is correct and reality is wrong. This is what has happened in Climate Science. …. we see it all the time …… if the model doesn’t fit the observations, then we must adjust the observations.

  19. Kip,

    Thank you for mentioning the Hawkmoth Effect. As a long-time computer programmer I suspected that something like that might exist but I did not know that there was serious work being done to quantify the effect. I look forward to reading both the pro and con papers about the effect.

    • RicDre ==> I have personally experienced “Hawkmoth-like” problems with programs I wrote or used back in the day… It happens when a single line of code or a single function is tweaked to improve the program but instead blue-screens some versions of Windows or sends error messages into a loop. I learned never, never, never to alter code without a doubly redundant backup on a different machine and an end-of-day backup off-site.

      A Google search for “Hawkmoth Effect” will give you a good starting point for exploring the literature on it.

      • Kip:

        One of the first things I noticed when reading about the Hawkmoth Effect was this statement about the Butterfly Effect: “This problem [the Butterfly Effect] is easily solved using probabilistic forecasts.” I have not heard of anyone demonstrating that that statement is true, have you?

        • How do you determine the applicable probabilities? Doesn’t that require knowledge the models are supposed to provide? If you already know ahead of time, then what good are the models?

        • RicDre ==> Probabilistic forecasts do not overcome the problems presented by the Butterfly effect. That is wishful thinking.

          Making forecasts that give large ranges of possible outcomes is the best that could be done.

          There are serious mathematicians working on the idea that predictions might be formed from truly chaotic data sets — but not only am I not convinced, I don’t think it is possible, based on the first principles of Chaos.
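
          A minimal sketch of why the ensemble approach widens rather than solves the problem; the logistic map below is just a stand-in chaotic system of my own choosing, started 100 times from initial conditions a trillionth or so apart:

          import numpy as np

          r = 3.9            # logistic-map parameter in the chaotic regime
          members = 100      # ensemble size
          rng = np.random.default_rng(1)
          x = 0.5 + 1e-12 * rng.standard_normal(members)  # tiny initial spread

          for step in range(1, 61):
              x = r * x * (1.0 - x)  # advance every ensemble member one step
              if step % 10 == 0:
                  lo, hi = np.percentile(x, [5, 95])
                  print(f"step {step:2d}: 5th-95th percentile width = {hi - lo:.1e}")

          The “probability” band starts razor-thin and, within a few dozen steps, spans most of the possible range: still a forecast, just not a useful one.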

  20. The Curry site is the one. Question – if one runs a single climate model with identical starting points for 30 times, does it always give the same result? If it does, then is not the model deterministic? And if deterministic, how can it represent climate? And if it is chaotic, how can it predict anything?

    • DHR ==> “If one runs a single climate model with identical starting points for 30 times, does it always give the same result?” YES, if and only if exactly the same (to the last decimal point) initial values are used. Climate models are entirely deterministic.

      However, change even just one initial condition, by one one-trillionth, and you get 30 different winters.

      Current, misguided CliSci thinks it can take the mean of 30 chaotic outputs and call it a predictive projection of the future.
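
      A minimal sketch of all three points, using the Lorenz-63 equations as a stand-in for a climate model (a toy of my own choosing, not any actual GCM experiment):

      import numpy as np

      def lorenz_x(x0, steps=10000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          """Integrate Lorenz-63 from (x0, 1.0, 1.05) with simple Euler steps; return x(t)."""
          x, y, z = x0, 1.0, 1.05
          out = np.empty(steps)
          for i in range(steps):
              dx = sigma * (y - x)
              dy = x * (rho - z) - y
              dz = x * y - beta * z
              x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
              out[i] = x
          return out

      # (a) Determinism: the same starting point, run twice, matches bit for bit.
      print("identical reruns identical:", np.array_equal(lorenz_x(1.0), lorenz_x(1.0)))

      # (b) Sensitivity: a one-part-in-a-trillion nudge ends up somewhere else entirely.
      a, b = lorenz_x(1.0), lorenz_x(1.0 + 1e-12)
      print("final x value, unperturbed vs nudged:", round(a[-1], 3), round(b[-1], 3))

      # (c) The mean of 30 nudged runs is far smoother than any single run.
      runs = np.array([lorenz_x(1.0 + 1e-12 * k) for k in range(30)])
      print("late-time variability of one member: ", runs[0][-1000:].std())
      print("late-time variability of the 30-mean:", runs.mean(axis=0)[-1000:].std())

      The identical reruns match exactly, the trillionth-of-a-unit nudge produces a different “winter”, and the 30-member mean is far smoother than any member; averaging chaotic runs does not recover the true trajectory, it only washes out the variability.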

  21. Kip, thanks for this post and links, which I will be looking at. The discussion of financial models reminded me of a paper by Paul Pfleiderer on The Misuse of Theoretical Models in Finance and Economics.

    “In this essay I discuss how theoretical models in finance and economics are used in ways that make them “chameleons” and how chameleons devalue the intellectual currency and muddy policy debates. A model becomes a chameleon when it is built on assumptions with dubious connections to the real world but nevertheless has conclusions that are uncritically (or not critically enough) applied to understanding our economy.”
    His essay is at https://www.gsb.stanford.edu/sites/default/files/research/documents/Chameleons%20-The%20Misuse%20of%20Theoretical%20Models%20032614.pdf
    My synopsis is https://rclutz.wordpress.com/2016/05/28/cameleon-climate-models/

    https://mathbabe.files.wordpress.com/2014/09/screen-shot-2014-09-29-at-8-46-46-am.png?w=595&h=528

    • Ron ==> Thanks for the links on economic models! I will follow up on them. LSE CATS is not just about climate models — it is the London School of Economics, after all.

    • We have a whole Metrics system for supposed Wealth, “Economics”, with an increasingly tenuous relationship to the underlying physical system for which it purports to be a Proxy.
      It is in an attempt to address this serious and fundamental deficiency that people have been working on EROI (as an example).
      Conventional Economics takes no account of and is not in accord with the 2nd Law of Thermodynamics, as pointed out in Charles Hall’s lecture below.

      Charles A. S. Hall discusses faults of the “Dismal Science”

      “Economics is not a science because it doesn’t use the scientific method”

      “ Don’t tell me dollars. Tell me energy. Because Dollars are only a lien on energy. That’s all they are”

      “Encourage us not to teach fairytales in economics classes. We teach a million young people fairytales in our Economics classes”

      “I had a wonderful talk at our biophysical economics meeting last week. And the speaker was an historian. He said the discovery of the 2nd law of thermodynamics absolutely transformed chemistry first, then physics, then all of the.. geology.. all of the sciences.. ecology.. Except one.. Economics. ”
      http://tinyurl.com/mbqowln

      • “Dollars are only a lien on energy.” An interesting analysis concludes that world petroleum supply and prices are driven less by demand and more by the value of the US dollar.

        “So what’s the biggest factor when it comes to oil? The answer came up early in this piece. It’s the U.S. dollar. It’s very simple, really. When the dollar is strong and stable as it was during the Reagan and Clinton years, oil is cheap. When the greenback is declining as it was during the Nixon/Carter ‘70s, and the Bush/Obama ‘00s, the price of oil (and other commodities) is soaring. Considering fracking, it’s only economic insofar as the dollar is cheap. Considering Saudi power, it’s most evident when the dollar is cheap.”

        https://rclutz.wordpress.com/2019/03/21/behind-the-oil-price-curtain/

        • WRT Saudi sensitivity to criticism over Khashoggi, there were some pretty severe/extreme worst-case consequences aired. (In this case implying potentially constant physical demand for oil, at least initially, before second-order effects, but a weaker dollar.)

          Oil priced $400 in yuan, Russian military base – Saudi insider says kingdom mulls 30 anti-US moves
          https://www.rt.com/news/441270-saudi-retaliation-us-sanctions/

    • The Dismal Science Remains Dismal, Say Scientists
      The paper inhales more than 6,700 individual pieces of research, all meta-analyses that themselves encompass 64,076 estimates of economic outcomes. That’s right: It’s a meta-meta-analysis. And in this case, Doucouliagos never meta-analyzed something he didn’t dislike. Of the fields covered in this corpus, half were statistically underpowered—the studies couldn’t show the effect they said they did. And most of the ones that were powerful enough overestimated the size of the effect they purported to show. Economics has a profound effect on policymaking and understanding human behavior. For a science, this is, frankly, dismal.
      One of the authors of the paper is John Ioannidis, head of the Meta Research Innovation Center at Stanford. As the author of a 2005 paper with the shocking title “Why Most Published Research Findings Are False,” Ioannidis is arguably the replication crisis’ chief inquisitor. Sure, economics has had its outspoken critics. But now the sheriff has come to town.
      https://www.wired.com/story/econ-statbias-study/

      • Brent ==> Thanks for the link to the stuff on the reproducibility crisis in Econometrics. I have written several essays following Ioannidis’ work across the landscape of Science. LSE is doing a lot of work along the same lines.

        • Kip. Thanks for all your efforts. I may be becoming sadly jaded, but I really feel we are at a point where outside audits of various fields are required. I don’t trust LSE to fix Economics. But if they do, they can start by revolutionizing the field by recognizing the 2nd Law of Thermodynamics!!
          There are of course outstanding individuals, such as Ioannidis working (principally) within their fields but inertia is a huge obstacle.
          cheers
          brent

          • Brent ==> Welcome aboard the effort to try to bring the sciences back to Earth. There are a lot of worthy efforts — the P-value fight, the pre-registration of studies (starting with pre-registering and peer reviewing of study designs), open-access code and data, …

  22. The results of any computer model degrade over simulated time. The successful models mentioned in some comments (like weather and aircraft control) are successful because they are constantly brought into conformance with reality.

    Weather prediction is very poor much more than a week in advance but the models are reset daily or more frequently.

    Aircraft controls are of course constantly kept updated with numerous inputs about the aircraft state and its environment. They would be totally incapable if they worked entirely on internally generated predictions.
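
    A minimal sketch of that difference, using a toy chaotic map with a slightly wrong parameter as the “model” (invented numbers, not any real forecast system): run it free from day zero and it drifts into uselessness; restart it from the latest observation each “day” and the one-day-ahead forecast stays close.

    import numpy as np

    r_true, r_model = 3.9, 3.89   # the model's "physics" are slightly off
    days = 60
    rng = np.random.default_rng(3)

    # Generate "reality" and noisy daily observations of it.
    truth = np.empty(days)
    truth[0] = 0.4
    for d in range(1, days):
        truth[d] = r_true * truth[d - 1] * (1 - truth[d - 1])
    obs = np.clip(truth + rng.normal(0, 0.001, days), 0, 1)

    # Free-running forecast: start from the first observation and never look back.
    free = np.empty(days)
    free[0] = obs[0]
    for d in range(1, days):
        free[d] = r_model * free[d - 1] * (1 - free[d - 1])

    # Re-anchored forecast: restart from the latest observation every "day".
    anchored = np.empty(days)
    anchored[0] = obs[0]
    for d in range(1, days):
        anchored[d] = r_model * obs[d - 1] * (1 - obs[d - 1])

    print("free-running RMSE over 60 days:", np.sqrt(np.mean((free - truth) ** 2)))
    print("re-anchored RMSE over 60 days: ", np.sqrt(np.mean((anchored - truth) ** 2)))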

    • Jim ==> Quite right — weather-like models are updated with actual conditions as they occur (almost in real time) — and airplane computers keep themselves updated “instantaneously”.

      When the weatherman is wrong, it rains on our parade. When the airplane computer glitches, the plane falls out of the air.

      With climate-like models, when they try to project the far future, we don’t know that they’ve bungled it until 2050 arrives, which is too late to take back all the hardship that acting on their projections has caused.

  23. Thanks for the interesting post. Food for thought.

    BTW, is there no update on the mid-troposphere graph you included since 2012? It would be interesting to see how the balloons and satellites are doing now.

  24. Kip,

    Thank you for drawing attention to recent controversies regarding the Butterfly and Hawkmoth Effects in mathematical modelling. Whilst this is an area of interest that is gaining prominence, I suspect that the underlying issues have been the source of vague misgivings for many years already. Take, for example, the following quote I came across within a paper published back in 1998 by Professor Van Der Sluijs. It relates to what a modeller said back in 1990 regarding model predictions for ECS:

    “What they were very keen for us to do at IPCC [1990], and modellers refused and we didn’t do it, was to say we’ve got this range 1.5 – 4.5°C, what are the probability limits of that? You can’t do it. It’s not the same as experimental error. The range is nothing to do with probability – it is not a normal distribution or a skewed distribution. Who knows what it is?”

    The mystery is not why there is so much confidence in the probabilistic basis for climate model predictions; it is why there should now be confidence when, hitherto, there was none, since nothing has happened in the intervening period to explain where the new-found confidence could possibly have come from. Back in 1990, climatologists understood that there was no stochastic basis upon which to sample the function space of model structural uncertainty. What works for the Butterfly Effect does not work for the Hawkmoth Effect because the former deals with aleatory uncertainty and the latter with epistemic uncertainty. Intrinsic, real-world variability constrains the former, yet only the communal regulation of expert certitude constrains the latter. If only for the sake of respecting the likely lack of ergodicity, one should not apply aleatory analytical methodologies to model epistemic uncertainty. This was a lesson I learned many years ago when I first witnessed colleagues trying to model software development project risk by soliciting expert opinion on likely task durations and then running Monte Carlo Simulation to produce a probabilistic outturn curve for project end dates. You might think that anybody who has actually experienced delays in software development projects wouldn’t afford such risk models any credibility. And yet they did! The discord between model predictions and the real world never did hit home.
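
    For what it is worth, the mechanics of that practice are trivially easy to reproduce, which may be part of its seductiveness. A minimal sketch of the Monte Carlo end-date exercise described above; the task names and three-point duration guesses are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(42)

    # Expert-elicited (optimistic, most likely, pessimistic) durations in days.
    tasks = {
        "design":      (5, 10, 20),
        "build":       (15, 30, 60),
        "integration": (5, 15, 40),
        "testing":     (10, 20, 50),
    }

    trials = 100_000
    total = np.zeros(trials)
    for low, mode, high in tasks.values():
        # Triangular distributions sampled independently: both are modelling
        # choices, and the independence assumption is usually the shakiest one.
        total += rng.triangular(low, mode, high, size=trials)

    for p in (50, 80, 95):
        print(f"P{p} project duration: {np.percentile(total, p):.0f} days")

    The percentile table looks authoritative, but it describes the elicited guesses and the independence assumption, not the project; the epistemic surprises that actually sink schedules are nowhere in the distribution.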

    • John ==> My personal take on it is that climate modellers (living in model-land) happily churn out “projections” on a basis that is totally wonky to statisticians and modellers from other fields.

      The “40 Earths: NCAR’s Large Ensemble reveals staggering climate variability” project clearly revealed that climate models are hopelessly compromised by “extreme sensitivity to initial conditions” — by running the exact same model 40 times, each with a one-trillionth-of-a-degree difference in starting GAST, they got 40 different future climates — not just magnitudes changed, but signs changed, locations changed; results were contradictory. They labelled all this as “natural variation” when they MUST KNOW that it is just what Lorenz tried to explain to them decades ago: the Butterfly Effect. It is mathematical and numerical, not Nature.

      Instead of accepting the facts, they have pretended to themselves that the “true prediction” would lie close to the MEAN of the chaotic output of the model. There is no scientific reason to believe that that approach is true in any sense.

      When they discovered that multi-model ensembles (running many models many times) gave similarly wildly divergent results, they decided to pretend that finding the MEAN of the MODELS would produce a true projection, and hedged their bets with probability statements. There is no scientific reason to believe that that approach is true in any sense, either.

      Don’t get me started…..

  25. “(every model is imperfect outside of pure mathematics)”

    ’cause mathematics is just another agreed-upon human mental language,

    and every agreed-upon human mental language is able to convey real-world nonsense like “were you, too, on the sun last holidays?”
